exploration/protein/Transformation_and_normalization.ipynb | ###Markdown
Proteins: transformation & normalization
###Code
from pandas import read_csv
protein_levels = read_csv(protein_levels_path, index_col=[0, 1, 2, 3])
###Output
_____no_output_____
###Markdown
Choosing a single, unique protein index As demonstrated in [Exploration_and_quality_control.ipynb](Exploration_and_quality_control.ipynb), `entrez_gene_symbol` is unique. For simplicity - and to enable high interpretability - we will only use the `target` column to index proteins henceforth:
###Code
protein_levels = protein_levels.reset_index(level=[
'target_full_name', 'entrez_gene_symbol', 'soma_id'
], drop=True)
protein_levels.head(2)
protein_levels.to_csv(indexed_by_target_path)
###Output
_____no_output_____
###Markdown
What is the distribution of the measurements?
###Code
from statsmodels.graphics.gofplots import qqplot_2samples
from helpers.data_frame import select_columns
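# `select_columns` is a project-specific helper; it is assumed to keep the columns
# whose names match the `match` regex and to drop those matching `exclude`
# (an assumption - the actual implementation lives in helpers/data_frame.py)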
###Output
_____no_output_____
###Markdown
A quantile-quantile plot comparing the healthy controls against all the other samples:
###Code
qqplot_2samples(
select_columns(protein_levels, match='.*HC').mean(axis=1),
select_columns(protein_levels, exclude='.*HC').mean(axis=1)
);
###Output
_____no_output_____
###Markdown
No striking outliers. Are the average protein levels normally distributed?
###Code
average_protein_level = select_columns(protein_levels, '.*HC').mean(axis=1)
average_protein_level.head()
%%R -i average_protein_level -w 400 -h 400 -u px
qqnorm(average_protein_level)
###Output
_____no_output_____
###Markdown
Nope. This may be expected given the high dynamic range of the platform; it also tells us that there are many values close to zero:
###Code
average_protein_level.hist();
###Output
_____no_output_____
###Markdown
Does it follow a log-normal distribution?
###Code
from pandas import DataFrame
df = DataFrame(dict(average_protein_level=average_protein_level))
%%R -i df -w 400 -h 400 -u px
(
ggplot(df, aes(sample=average_protein_level))
+ qqplotr::stat_qq_point(distribution='lnorm')
)
###Output
_____no_output_____
###Markdown
Not great, though better.
###Code
from numpy import log10
average_protein_level.apply(log10).hist();
###Output
_____no_output_____
###Markdown
Were there any useful notes in the methods sections of previous studies utilizing SOMAscan?

- "Protein levels were natural log transformed prior to batch effects adjustment to improve the normality of protein level distributions" - https://www.nature.com/articles/s41598-018-26640-w
- "All protein values were log transformed because of their nonnormal distributions as determined by the Kolmogorov-Smirnov and Shapiro-Wilk normality tests" - [Aptamer-Based Proteomic Profiling Reveals Novel Candidate Biomarkers and Pathways in Cardiovascular Disease](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4963294/) (I don't quite get the reasoning of this sentence - non-normality does not imply a log-normal distribution, regardless of the tests that you use)
- "all CSF and plasma protein values measured in untargeted and targeted proteomic experiments were log10 transformed" - [The Alzheimer study](https://alzres.biomedcentral.com/articles/10.1186/s13195-017-0258-6)
- "Prior to analysis, NMR lipoprotein and plasma proteome data were transformed to Z-scores (by subtracting the mean and dividing by the SD) for ease of comparison. Plasma proteome data were log-transformed prior to Z-score transformation." - [(Harbaum, et al., 2019)](https://thorax.bmj.com/content/74/4/380) **this is a fresh study from Imperial College London**, and two of the authors are affiliated with the Department of Surgery and Cancer
- "Data from all samples were log2 transformed, normalized and calibrated using standard hybridization and calibration procedures." - [(Scribe et al, 2017)](https://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1006687) - some authors affiliated with SomaLogic; NB this is also a Cape Town study.
- "All data were log-transformed to stabilize the variance. [...] Student's t tests were used to identify differentially expressed SOMAmer reagents" - [(Groote, et al. 2017)](https://jcm.asm.org/content/55/2/391.long) - again in collaboration with SomaLogic.

There is a strong case for log-transformation, as it was frequently used in previous research. The base, however, varies.

I was specifically interested to see if anyone had used the Van der Waerden transformation before, as it could correct the skew (as the log transformation partially does). Here are two more articles:

- "All proteomics data were transformed using the natural logarithm and transformed to zero mean and unit s.d. In addition, protein values >2.5 s.d. from the mean were excluded as outliers." - this sounds like an exclusion of a lot of signal; in the supplementary information: "This [modeling] was performed on the SOMAscan data, both untransformed, and transformed using the Van der Waerden transformation" - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4490288/
- "We log10 transformed the protein data as the protein concentrations were not normally distributed. Additionally, protein values ± 6 SDs were excluded as outliers." - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4469006/ (same authors, previous article)

At least two studies first log-transformed and then scaled to z-scores.

Alternatives to the simple log-transform include:
- Box-Cox
- quantile normalization / Van der Waerden / rank-based inverse normal transformation; possibly used in [this study](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5271178/). There are articles discussing practical benefits and shortcomings of its application to GWAS studies (I haven't found a discussion relevant to SOMAscan though):
  - for: [(Pain et al. 2018)](https://europepmc.org/articles/pmc6057994)
  - against: [(Beasley et al. 2009)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2921808/) - I am convinced by some of the arguments, though I can't say that I understand it fully (yet).

There seems to be a strong preference for the simplest log transformation in the previous research (though I do not fully understand this choice).

NB: the relative abundances of proteins in cells are known to vary greatly; our samples are not cells from a single tissue but a mixture of different cells and (potentially) organisms. This may influence the distribution of protein levels, and it appears justifiable to suspect that the measured distribution is complex and skewed, as it is a sum of multiple distributions (which may or may not be normal). Log-transformation with base 10 From now on, the log10-transformed data will be used throughout the subsequent analyses. I chose base 10 due to the high dynamic range of the SOMAscan measurements.
###Code
from numpy import log10
log_matrix = protein_levels.applymap(log10)
log_matrix.to_csv(log_matrix_path)
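# For reference, a sketch of two of the alternatives discussed above (an illustration
# only, not used downstream; assumes scipy is available):
# - a rank-based inverse normal (Van der Waerden-style) transformation
# - Box-Cox (requires strictly positive values)
from scipy.stats import boxcox, norm, rankdata


def rank_inverse_normal(values):
    """Map values to quantiles of the standard normal distribution, based on their ranks."""
    ranks = rankdata(values)
    return norm.ppf(ranks / (len(values) + 1))


rank_transformed_example = rank_inverse_normal(average_protein_level)
boxcox_example, fitted_lambda = boxcox(average_protein_level)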
###Output
_____no_output_____
###Markdown
How to normalize the values for use with PLS? Concern: the high dynamic range of the values - z-score? Further attempts to normalize/transform Some thoughts on transformations:

- we may suspect that there will be fewer proteins in the healthy controls,
  - we could control for that if the goal is to elucidate differences in the immune system proxies or look for specific biomarkers (i.e. which immune-response related proteins are more often active in the CSF when compared against the background)
  - but not controlling is a real-life scenario: the mere fact of detecting many more proteins than expected might be used to help diagnose the patient
- log transformation reduces the problem of high dynamic range. However, it also over-emphasizes the proteins with very low levels [(Berg et al, 2006)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1534033/) / common knowledge.
- z-score appears to be well suited for distributions closer to the normal family - as it uses the mean and standard deviation
- it might be better to use the more robust median rather than the mean, as it is less prone to outliers [(Berg et al, 2006)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1534033/) / common knowledge. However, log-transformation partially alleviates this problem (and there is little precedent for such an approach)
- I would be tempted to use an advanced transformation followed by scaling, though there is little precedent in the field. Also, this would reduce the ease of interpretation of the results (everyone understands log-transform, but not necessarily Box-Cox)

A [review of HCA for proteomic data](https://pubs.acs.org/doi/full/10.1021/pr060343h) mentions an alternative being division by the maximum value of each sample - I would not do that, as this procedure may be susceptible to outliers, though they demonstrate that it gives better results than z-score (though not on SOMAscan data, which has a greater dynamic range).

Double z-score transformation for unsupervised analyses I originally proposed to follow the log-transformation with:

- z-score on samples (patients) - to address the issue of some samples having more proteins than others (which could be either technical or biological) - see 024.TMR (NB: the above discussed issue of the relative levels being potentially diagnostic is not necessarily important: confirming that more proteins in CSF may imply a greater chance of a disease is neither necessarily novel nor ambitious - I could just do a separate analysis for that); I could use a modified z-score with the median instead of the mean (Iglewicz-Hoaglin) - though it does not seem to be necessary
- z-score normalization of each feature (protein) - to give each protein an equal weight in the unsupervised analyses

Unfortunately, in this procedure the second step will reduce the impact of the first step. Thus the order matters and different results would be achieved depending on which normalization is performed first.
###Code
from helpers import z_score
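# NOTE: `z_score` is a project-specific helper; a minimal equivalent
# (an assumption about its behaviour, not the actual source) would be:
#     def z_score(values):
#         return (values - values.mean()) / values.std()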
zz_log_matrix = log_matrix.apply(z_score).apply(z_score, axis=1)
zz_log_matrix.to_csv(zz_log_path)
###Output
_____no_output_____
###Markdown
We have unit variance on proteins:
###Code
zz_log_matrix.var(axis=1)
###Output
_____no_output_____
###Markdown
Single z-score transformation to account for different protein abundances among patients The transformations proposed above result in very good unsupervised clustering; however, they pose a challenge for interpretation. Therefore I also use a simpler transformation with the single purpose of making the variance equal across all patients:
###Code
# Center patients and scale to unit variance
patients_variance_at_one = log_matrix.apply(z_score)
patients_variance_at_one.to_csv(patients_variance_at_one_path)
patients_variance_at_one.var()
from numpy import isclose
from functools import partial
assert all(map(partial(isclose, 1), patients_variance_at_one.var()))
assert all(map(partial(isclose, 0), patients_variance_at_one.mean()))
###Output
_____no_output_____ |
docs/auto_examples/plot_digits.ipynb | ###Markdown
Visualizing the digits dataset This example loads in some data from the scikit-learn digits dataset and plots it.
###Code
# Code source: Andrew Heusser
# License: MIT
# import
from sklearn import datasets
import hypertools as hyp
# load example data
digits = datasets.load_digits(n_class=6)
data = digits.data
hue = digits.target
# plot
hyp.plot(data, '.', hue=hue)
###Output
_____no_output_____ |
tutorials/create_advanced.ipynb | ###Markdown
Create Networks - Advanced This tutorial shows how to create a more complex pandapower network step by step. The network includes every element that is available in the pandapower framework. The final network looks like this: The structural information about this network is stored in csv tables in the example_advanced folder. For a better overview, the creation of the individual components is divided into three steps. Each step handles one of the three voltage levels: high, medium and low voltage. We start by initializing an empty pandapower network:
###Code
#import the pandapower module
import pandapower as pp
import pandas as pd
#create an empty network
net = pp.create_empty_network()
###Output
_____no_output_____
###Markdown
High voltage level Buses There are two 380 kV and five 110 kV busbars (type="b"). The 380/110 kV substation is modeled in detail with all nodes and switches, which is why we need additional nodes (type="b") to connect the switches.
###Code
# Double busbar
pp.create_bus(net, name='Double Busbar 1', vn_kv=380, type='b')
pp.create_bus(net, name='Double Busbar 2', vn_kv=380, type='b')
for i in range(10):
pp.create_bus(net, name='Bus DB T%s' % i, vn_kv=380, type='n')
for i in range(1, 5):
pp.create_bus(net, name='Bus DB %s' % i, vn_kv=380, type='n')
# Single busbar
pp.create_bus(net, name='Single Busbar', vn_kv=110, type='b')
for i in range(1, 6):
pp.create_bus(net, name='Bus SB %s' % i, vn_kv=110, type='n')
for i in range(1, 6):
for j in [1, 2]:
pp.create_bus(net, name='Bus SB T%s.%s' % (i, j), vn_kv=110, type='n')
# Remaining buses
for i in range(1, 5):
pp.create_bus(net, name='Bus HV%s' % i, vn_kv=110, type='n')
# show bus table
net.bus
###Output
_____no_output_____
###Markdown
Lines The information about the 6 HV lines is stored in a csv file that we load from the hard drive:
###Code
hv_lines = pd.read_csv('example_advanced/hv_lines.csv', sep=';', header=0, decimal=',')
hv_lines
###Output
_____no_output_____
###Markdown
and use to create all lines:
###Code
# create lines
for _, hv_line in hv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", hv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", hv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=hv_line.length,std_type=hv_line.std_type, name=hv_line.line_name, parallel=hv_line.parallel)
# show line table
net.line
###Output
_____no_output_____
###Markdown
Transformer The 380/110 kV transformer connects the buses "Bus DB 2" and "Bus SB 1". We use the get_element_index function from the pandapower toolbox to find the bus indices of the buses with these names and create a transformer by directly specifying the parameters:
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus DB 2")
lv_bus = pp.get_element_index(net, "bus", "Bus SB 1")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_mva=300, vn_hv_kv=380, vn_lv_kv=110, vkr_percent=0.06,
vk_percent=8, pfe_kw=0, i0_percent=0, tap_pos=0, shift_degree=0, name='EHV-HV-Trafo')
net.trafo # show trafo table
###Output
_____no_output_____
###Markdown
Switches Now we create the switches to connect the buses in the transformer station. The switch configuration is stored in the following csv table:
###Code
hv_bus_sw = pd.read_csv('example_advanced/hv_bus_sw.csv', sep=';', header=0, decimal=',')
hv_bus_sw
# Bus-bus switches
for _, switch in hv_bus_sw.iterrows():
from_bus = pp.get_element_index(net, "bus", switch.from_bus)
to_bus = pp.get_element_index(net, "bus", switch.to_bus)
pp.create_switch(net, from_bus, to_bus, et=switch.et, closed=switch.closed, type=switch.type, name=switch.bus_name)
# Bus-line switches
hv_buses = net.bus[(net.bus.vn_kv == 380) | (net.bus.vn_kv == 110)].index
hv_ls = net.line[(net.line.from_bus.isin(hv_buses)) & (net.line.to_bus.isin(hv_buses))]
for _, line in hv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus DB 2'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch DB2 - EHV-HV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus SB 1'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch SB1 - EHV-HV-Trafo')
# show switch table
net.switch
###Output
_____no_output_____
###Markdown
External Grid We equip the high voltage side of the transformer with an external grid connection:
###Code
pp.create_ext_grid(net, pp.get_element_index(net, "bus", 'Double Busbar 1'), vm_pu=1.03, va_degree=0, name='External grid',
s_sc_max_mva=10000, rx_max=0.1, rx_min=0.1)
net.ext_grid # show external grid table
###Output
_____no_output_____
###Markdown
Loads The five loads in the HV network are defined in the following csv file:
###Code
hv_loads = pd.read_csv('example_advanced/hv_loads.csv', sep=';', header=0, decimal=',')
hv_loads
for _, load in hv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show load table
net.load
###Output
_____no_output_____
###Markdown
Generator The voltage-controlled generator is created with an active power of 100 MW and a voltage set point of 1.03 per unit:
###Code
pp.create_gen(net, pp.get_element_index(net, "bus", 'Bus HV4'), vm_pu=1.03, p_mw=100, name='Gas turbine')
# show generator table
net.gen
###Output
_____no_output_____
###Markdown
Static generators We create this wind park with an active power of 20 MW and a reactive power of 4 Mvar. To classify the generation as a wind park, we set type to "WP":
###Code
pp.create_sgen(net, pp.get_element_index(net, "bus", 'Bus SB 5'), p_mw=20, q_mvar=4, sn_mva=45,
type='WP', name='Wind Park')
# show static generator table
net.sgen
###Output
_____no_output_____
###Markdown
Shunt
###Code
pp.create_shunt(net, pp.get_element_index(net, "bus", 'Bus HV1'), p_mw=0, q_mvar=0.960, name='Shunt')
# show shunt table
net.shunt
###Output
_____no_output_____
###Markdown
External network equivalents The two remaining elements are impedances and extended ward equivalents:
###Code
# Impedance
pp.create_impedance(net, pp.get_element_index(net, "bus", 'Bus HV3'), pp.get_element_index(net, "bus", 'Bus HV1'),
rft_pu=0.074873, xft_pu=0.198872, sn_mva=100, name='Impedance')
# show impedance table
net.impedance
# xwards
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV3'), ps_kw=23942, qs_kvar=-12241.87, pz_kw=2814.571,
qz_kvar=0, r_ohm=0, x_ohm=12.18951, vm_pu=1.02616, name='XWard 1')
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV1'), ps_kw=3776, qs_kvar=-7769.979, pz_kw=9174.917,
qz_kvar=0, r_ohm=0, x_ohm=50.56217, vm_pu=1.024001, name='XWard 2')
# show xward table
net.xward
###Output
_____no_output_____
###Markdown
Medium voltage level Buses
###Code
pp.create_bus(net, name='Bus MV0 20kV', vn_kv=20, type='n')
for i in range(8):
pp.create_bus(net, name='Bus MV%s' % i, vn_kv=10, type='n')
#show only medium voltage bus table
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)]
mv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
mv_lines = pd.read_csv('example_advanced/mv_lines.csv', sep=';', header=0, decimal=',')
for _, mv_line in mv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", mv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", mv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=mv_line.length, std_type=mv_line.std_type, name=mv_line.line_name)
# show only medium voltage lines
net.line[net.line.from_bus.isin(mv_buses.index)]
###Output
_____no_output_____
###Markdown
3 Winding Transformer The three winding transformer transforms its high voltage level to two different lower voltage levels, in this case from 110 kV to 20 kV and 10 kV.
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus HV2")
mv_bus = pp.get_element_index(net, "bus", "Bus MV0 20kV")
lv_bus = pp.get_element_index(net, "bus", "Bus MV0")
pp.create_transformer3w_from_parameters(net, hv_bus, mv_bus, lv_bus, vn_hv_kv=110, vn_mv_kv=20, vn_lv_kv=10,
sn_hv_mva=40, sn_mv_mva=15, sn_lv_mva=25, vk_hv_percent=10.1,
vk_mv_percent=10.1, vk_lv_percent=10.1, vkr_hv_percent=0.266667,
vkr_mv_percent=0.033333, vkr_lv_percent=0.04, pfe_kw=0, i0_percent=0,
shift_mv_degree=30, shift_lv_degree=30, tap_side="hv", tap_neutral=0, tap_min=-8,
tap_max=8, tap_step_percent=1.25, tap_pos=0, name='HV-MV-MV-Trafo')
# show transformer3w table
net.trafo3w
###Output
_____no_output_____
###Markdown
Switches
###Code
# Bus-line switches
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)].index
mv_ls = net.line[(net.line.from_bus.isin(mv_buses)) & (net.line.to_bus.isin(mv_buses))]
for _, line in mv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# open switch
open_switch_id = net.switch[(net.switch.name == 'Switch Bus MV5 - MV Line5')].index
net.switch.closed.loc[open_switch_id] = False
#show only medium voltage switch table
net.switch[net.switch.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Loads
###Code
mv_loads = pd.read_csv('example_advanced/mv_loads.csv', sep=';', header=0, decimal=',')
for _, load in mv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show only medium voltage loads
net.load[net.load.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
mv_sgens = pd.read_csv('example_advanced/mv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in mv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_mw=sgen.p, q_mvar=sgen.q, sn_mva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only medium voltage static generators
net.sgen[net.sgen.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Low voltage level Buses
###Code
pp.create_bus(net, name='Bus LV0', vn_kv=0.4, type='n')
for i in range(1, 6):
pp.create_bus(net, name='Bus LV1.%s' % i, vn_kv=0.4, type='m')
for i in range(1, 5):
pp.create_bus(net, name='Bus LV2.%s' % i, vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.1', vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.2', vn_kv=0.4, type='m')
# show only low voltage buses
lv_buses = net.bus[net.bus.vn_kv == 0.4]
lv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
# create lines
lv_lines = pd.read_csv('example_advanced/lv_lines.csv', sep=';', header=0, decimal=',')
for _, lv_line in lv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", lv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", lv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=lv_line.length, std_type=lv_line.std_type, name=lv_line.line_name)
# show only low voltage lines
net.line[net.line.from_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Transformer
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus MV4")
lv_bus = pp.get_element_index(net, "bus","Bus LV0")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_mva=0.4, vn_hv_kv=10, vn_lv_kv=0.4, vkr_percent=1.325, vk_percent=4, pfe_kw=0.95, i0_percent=0.2375, tap_side="hv", tap_neutral=0, tap_min=-2, tap_max=2, tap_step_percent=2.5, tap_pos=0, shift_degree=150, name='MV-LV-Trafo')
#show only low voltage transformer
net.trafo[net.trafo.lv_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Switches
###Code
# Bus-line switches
lv_ls = net.line[(net.line.from_bus.isin(lv_buses.index)) & (net.line.to_bus.isin(lv_buses.index))]
for _, line in lv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus MV4'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch MV4 - MV-LV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus LV0'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch LV0 - MV-LV-Trafo')
# show only low voltage switches
net.switch[net.switch.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Loads
###Code
lv_loads = pd.read_csv('example_advanced/lv_loads.csv', sep=';', header=0, decimal=',')
for _, load in lv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show only low voltage loads
net.load[net.load.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
lv_sgens = pd.read_csv('example_advanced/lv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in lv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_mw=sgen.p, q_mvar=sgen.q, sn_mva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only low voltage static generators
net.sgen[net.sgen.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Run a Power Flow
###Code
pp.runpp(net, calculate_voltage_angles=True, init="dc")
net
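# A quick sketch of how the results could be inspected after the power flow
# (res_bus and res_line are standard pandapower result tables):
net.res_bus.head()                   # bus voltage magnitudes and angles
net.res_line.loading_percent.max()   # highest line loading in percent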
###Output
_____no_output_____
###Markdown
Create Networks - Advanced This tutorial shows how to create a more complex pandapower network step by step. The network includes every element that is available in the pandapower framework. The final network looks like this: The structural information about this network is stored in csv tables in the example_advanced folder. For a better overview, the creation of the individual components is divided into three steps. Each step handles one of the three voltage levels: high, medium and low voltage. We start by initializing an empty pandapower network:
###Code
#import the pandapower module
import pandapower as pp
import pandas as pd
#create an empty network
net = pp.create_empty_network()
###Output
_____no_output_____
###Markdown
High voltage level Buses There are two 380 kV and five 110 kV busbars (type="b"). The 380/110 kV substation is modeled in detail with all nodes and switches, which is why we need additional nodes (type="b") to connect the switches.
###Code
# Double busbar
pp.create_bus(net, name='Double Busbar 1', vn_kv=380, type='b')
pp.create_bus(net, name='Double Busbar 2', vn_kv=380, type='b')
for i in range(10):
pp.create_bus(net, name='Bus DB T%s' % i, vn_kv=380, type='n')
for i in range(1, 5):
pp.create_bus(net, name='Bus DB %s' % i, vn_kv=380, type='n')
# Single busbar
pp.create_bus(net, name='Single Busbar', vn_kv=110, type='b')
for i in range(1, 6):
pp.create_bus(net, name='Bus SB %s' % i, vn_kv=110, type='n')
for i in range(1, 6):
for j in [1, 2]:
pp.create_bus(net, name='Bus SB T%s.%s' % (i, j), vn_kv=110, type='n')
# Remaining buses
for i in range(1, 5):
pp.create_bus(net, name='Bus HV%s' % i, vn_kv=110, type='n')
# show bus table
net.bus
###Output
_____no_output_____
###Markdown
Lines The information about the 6 HV lines is stored in a csv file that we load from the hard drive:
###Code
hv_lines = pd.read_csv('example_advanced/hv_lines.csv', sep=';', header=0, decimal=',')
hv_lines
###Output
_____no_output_____
###Markdown
and use to create all lines:
###Code
# create lines
for _, hv_line in hv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", hv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", hv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=hv_line.length,std_type=hv_line.std_type, name=hv_line.line_name, parallel=hv_line.parallel)
# show line table
net.line
###Output
_____no_output_____
###Markdown
Transformer The 380/110 kV transformer connects the buses "Bus DB 2" and "Bus SB 1". We use the get_element_index function from the pandapower toolbox to find the bus indices of the buses with these names and create a transformer by directly specifying the parameters:
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus DB 2")
lv_bus = pp.get_element_index(net, "bus", "Bus SB 1")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_kva=300000, vn_hv_kv=380, vn_lv_kv=110, vscr_percent=0.06,
vsc_percent=8, pfe_kw=0, i0_percent=0, tp_pos=0, shift_degree=0, name='EHV-HV-Trafo')
net.trafo # show trafo table
###Output
_____no_output_____
###Markdown
Switches Now we create the switches to connect the buses in the transformer station. The switch configuration is stored in the following csv table:
###Code
hv_bus_sw = pd.read_csv('example_advanced/hv_bus_sw.csv', sep=';', header=0, decimal=',')
hv_bus_sw
# Bus-bus switches
for _, switch in hv_bus_sw.iterrows():
from_bus = pp.get_element_index(net, "bus", switch.from_bus)
to_bus = pp.get_element_index(net, "bus", switch.to_bus)
pp.create_switch(net, from_bus, to_bus, et=switch.et, closed=switch.closed, type=switch.type, name=switch.bus_name)
# Bus-line switches
hv_buses = net.bus[(net.bus.vn_kv == 380) | (net.bus.vn_kv == 110)].index
hv_ls = net.line[(net.line.from_bus.isin(hv_buses)) & (net.line.to_bus.isin(hv_buses))]
for _, line in hv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus DB 2'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch DB2 - EHV-HV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus SB 1'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch SB1 - EHV-HV-Trafo')
# show switch table
net.switch
###Output
_____no_output_____
###Markdown
External Grid We equip the high voltage side of the transformer with an external grid connection:
###Code
pp.create_ext_grid(net, pp.get_element_index(net, "bus", 'Double Busbar 1'), vm_pu=1.03, va_degree=0, name='External grid',
s_sc_max_mva=10000, rx_max=0.1, rx_min=0.1)
net.ext_grid # show external grid table
###Output
_____no_output_____
###Markdown
Loads The five loads in the HV network are defined in the following csv file:
###Code
hv_loads = pd.read_csv('example_advanced/hv_loads.csv', sep=';', header=0, decimal=',')
hv_loads
for _, load in hv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_kw=load.p, q_kvar=load.q, name=load.load_name)
# show load table
net.load
###Output
_____no_output_____
###Markdown
Generator The voltage controlled generator is created with an active power of 100 MW (negative for generation) and a voltage set point of 1.03 per unit:
###Code
pp.create_gen(net, pp.get_element_index(net, "bus", 'Bus HV4'), vm_pu=1.03, p_kw=-100e3, name='Gas turbine')
# show generator table
net.gen
###Output
_____no_output_____
###Markdown
Static generators We create this wind park with an active power of 20 MW (negative for generation) and a reactive power of -4 Mvar. To classify the generation as a wind park, we set type to "WP":
###Code
pp.create_sgen(net, pp.get_element_index(net, "bus", 'Bus SB 5'), p_kw=-20e3, q_kvar=-4e3, sn_kva=45e3,
type='WP', name='Wind Park')
# show static generator table
net.sgen
###Output
_____no_output_____
###Markdown
Shunt
###Code
pp.create_shunt(net, pp.get_element_index(net, "bus", 'Bus HV1'), p_kw=0, q_kvar=-960, name='Shunt')
# show shunt table
net.shunt
###Output
_____no_output_____
###Markdown
External network equivalents The two remaining elements are impedances and extended ward equivalents:
###Code
# Impedance
pp.create_impedance(net, pp.get_element_index(net, "bus", 'Bus HV3'), pp.get_element_index(net, "bus", 'Bus HV1'),
rft_pu=0.074873, xft_pu=0.198872, sn_kva=100000, name='Impedance')
# show impedance table
net.impedance
# xwards
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV3'), ps_kw=23942, qs_kvar=-12241.87, pz_kw=2814.571,
qz_kvar=0, r_ohm=0, x_ohm=12.18951, vm_pu=1.02616, name='XWard 1')
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV1'), ps_kw=3776, qs_kvar=-7769.979, pz_kw=9174.917,
qz_kvar=0, r_ohm=0, x_ohm=50.56217, vm_pu=1.024001, name='XWard 2')
# show xward table
net.xward
###Output
_____no_output_____
###Markdown
Medium voltage level Buses
###Code
pp.create_bus(net, name='Bus MV0 20kV', vn_kv=20, type='n')
for i in range(8):
pp.create_bus(net, name='Bus MV%s' % i, vn_kv=10, type='n')
#show only medium voltage bus table
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)]
mv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
mv_lines = pd.read_csv('example_advanced/mv_lines.csv', sep=';', header=0, decimal=',')
for _, mv_line in mv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", mv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", mv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=mv_line.length, std_type=mv_line.std_type, name=mv_line.line_name)
# show only medium voltage lines
net.line[net.line.from_bus.isin(mv_buses.index)]
###Output
_____no_output_____
###Markdown
3 Winding Transformer The three winding transformer transforms its high voltage level to two different lower voltage levels, in this case from 110 kV to 20 kV and 10 kV.
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus HV2")
mv_bus = pp.get_element_index(net, "bus", "Bus MV0 20kV")
lv_bus = pp.get_element_index(net, "bus", "Bus MV0")
pp.create_transformer3w_from_parameters(net, hv_bus, mv_bus, lv_bus, vn_hv_kv=110, vn_mv_kv=20, vn_lv_kv=10,
sn_hv_kva=40000, sn_mv_kva=15000, sn_lv_kva=25000, vsc_hv_percent=10.1,
vsc_mv_percent=10.1, vsc_lv_percent=10.1, vscr_hv_percent=0.266667,
vscr_mv_percent=0.033333, vscr_lv_percent=0.04, pfe_kw=0, i0_percent=0,
shift_mv_degree=30, shift_lv_degree=30, tp_side="hv", tp_mid=0, tp_min=-8,
tp_max=8, tp_st_percent=1.25, tp_pos=0, name='HV-MV-MV-Trafo')
# show transformer3w table
net.trafo3w
###Output
_____no_output_____
###Markdown
Switches
###Code
# Bus-line switches
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)].index
mv_ls = net.line[(net.line.from_bus.isin(mv_buses)) & (net.line.to_bus.isin(mv_buses))]
for _, line in mv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# open switch
open_switch_id = net.switch[(net.switch.name == 'Switch Bus MV5 - MV Line5')].index
net.switch.closed.loc[open_switch_id] = False
#show only medium voltage switch table
net.switch[net.switch.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Loads
###Code
mv_loads = pd.read_csv('example_advanced/mv_loads.csv', sep=';', header=0, decimal=',')
for _, load in mv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_kw=load.p, q_kvar=load.q, name=load.load_name)
# show only medium voltage loads
net.load[net.load.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
mv_sgens = pd.read_csv('example_advanced/mv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in mv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_kw=sgen.p, q_kvar=sgen.q, sn_kva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only medium voltage static generators
net.sgen[net.sgen.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Low voltage level Buses
###Code
pp.create_bus(net, name='Bus LV0', vn_kv=0.4, type='n')
for i in range(1, 6):
pp.create_bus(net, name='Bus LV1.%s' % i, vn_kv=0.4, type='m')
for i in range(1, 5):
pp.create_bus(net, name='Bus LV2.%s' % i, vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.1', vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.2', vn_kv=0.4, type='m')
# show only low voltage buses
lv_buses = net.bus[net.bus.vn_kv == 0.4]
lv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
# create lines
lv_lines = pd.read_csv('example_advanced/lv_lines.csv', sep=';', header=0, decimal=',')
for _, lv_line in lv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", lv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", lv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=lv_line.length, std_type=lv_line.std_type, name=lv_line.line_name)
# show only low voltage lines
net.line[net.line.from_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Transformer
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus MV4")
lv_bus = pp.get_element_index(net, "bus","Bus LV0")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_kva=400, vn_hv_kv=10, vn_lv_kv=0.4, vscr_percent=1.325, vsc_percent=4, pfe_kw=0.95, i0_percent=0.2375, tp_side="hv", tp_mid=0, tp_min=-2, tp_max=2, tp_st_percent=2.5, tp_pos=0, shift_degree=150, name='MV-LV-Trafo')
#show only low voltage transformer
net.trafo[net.trafo.lv_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Switches
###Code
# Bus-line switches
lv_ls = net.line[(net.line.from_bus.isin(lv_buses.index)) & (net.line.to_bus.isin(lv_buses.index))]
for _, line in lv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus MV4'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch MV4 - MV-LV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus LV0'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch LV0 - MV-LV-Trafo')
# show only low voltage switches
net.switch[net.switch.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Loads
###Code
lv_loads = pd.read_csv('example_advanced/lv_loads.csv', sep=';', header=0, decimal=',')
for _, load in lv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_kw=load.p, q_kvar=load.q, name=load.load_name)
# show only low voltage loads
net.load[net.load.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
lv_sgens = pd.read_csv('example_advanced/lv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in lv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_kw=sgen.p, q_kvar=sgen.q, sn_kva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only low voltage static generators
net.sgen[net.sgen.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Run a Power Flow
###Code
pp.runpp(net, calculate_voltage_angles=True, init="dc")
net
###Output
_____no_output_____
###Markdown
Create Networks - Advanced This tutorial shows how to create a more complex pandapower network step by step. The network includes every element that is available in the pandapower framework. The final network looks like this: The structural information about this network is stored in csv tables in the example_advanced folder. For a better overview, the creation of the individual components is divided into three steps. Each step handles one of the three voltage levels: high, medium and low voltage. We start by initializing an empty pandapower network:
###Code
#import the pandapower module
import pandapower as pp
import pandas as pd
#create an empty network
net = pp.create_empty_network()
###Output
_____no_output_____
###Markdown
High voltage level Buses There are two 380 kV and five 110 kV busbars (type="b"). The 380/110 kV substation is modeled in detail with all nodes and switches, which is why we need additional nodes (type="b") to connect the switches.
###Code
# Double busbar
pp.create_bus(net, name='Double Busbar 1', vn_kv=380, type='b')
pp.create_bus(net, name='Double Busbar 2', vn_kv=380, type='b')
for i in range(10):
pp.create_bus(net, name='Bus DB T%s' % i, vn_kv=380, type='n')
for i in range(1, 5):
pp.create_bus(net, name='Bus DB %s' % i, vn_kv=380, type='n')
# Single busbar
pp.create_bus(net, name='Single Busbar', vn_kv=110, type='b')
for i in range(1, 6):
pp.create_bus(net, name='Bus SB %s' % i, vn_kv=110, type='n')
for i in range(1, 6):
for j in [1, 2]:
pp.create_bus(net, name='Bus SB T%s.%s' % (i, j), vn_kv=110, type='n')
# Remaining buses
for i in range(1, 5):
pp.create_bus(net, name='Bus HV%s' % i, vn_kv=110, type='n')
# show bus table
net.bus
###Output
_____no_output_____
###Markdown
Lines The information about the 6 HV lines is stored in a csv file that we load from the hard drive:
###Code
hv_lines = pd.read_csv('example_advanced/hv_lines.csv', sep=';', header=0, decimal=',')
hv_lines
###Output
_____no_output_____
###Markdown
and use to create all lines:
###Code
# create lines
for _, hv_line in hv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", hv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", hv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=hv_line.length,std_type=hv_line.std_type, name=hv_line.line_name, parallel=hv_line.parallel)
# show line table
net.line
###Output
_____no_output_____
###Markdown
Transformer The 380/110 kV transformer connects the buses "Bus DB 2" and "Bus SB 1". We use the get_element_index function from the pandapower toolbox to find the bus indices of the buses with these names and create a transformer by directly specifying the parameters:
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus DB 2")
lv_bus = pp.get_element_index(net, "bus", "Bus SB 1")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_mva=300, vn_hv_kv=380, vn_lv_kv=110, vkr_percent=0.06,
vk_percent=8, pfe_kw=0, i0_percent=0, tap_pos=0, shift_degree=0, name='EHV-HV-Trafo')
net.trafo # show trafo table
###Output
_____no_output_____
###Markdown
Switches Now we create the switches to connect the buses in the transformer station. The switch configuration is stored in the following csv table:
###Code
hv_bus_sw = pd.read_csv('example_advanced/hv_bus_sw.csv', sep=';', header=0, decimal=',')
hv_bus_sw
# Bus-bus switches
for _, switch in hv_bus_sw.iterrows():
from_bus = pp.get_element_index(net, "bus", switch.from_bus)
to_bus = pp.get_element_index(net, "bus", switch.to_bus)
pp.create_switch(net, from_bus, to_bus, et=switch.et, closed=switch.closed, type=switch.type, name=switch.bus_name)
# Bus-line switches
hv_buses = net.bus[(net.bus.vn_kv == 380) | (net.bus.vn_kv == 110)].index
hv_ls = net.line[(net.line.from_bus.isin(hv_buses)) & (net.line.to_bus.isin(hv_buses))]
for _, line in hv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus DB 2'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch DB2 - EHV-HV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus SB 1'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch SB1 - EHV-HV-Trafo')
# show switch table
net.switch
###Output
_____no_output_____
###Markdown
External Grid We equip the high voltage side of the transformer with an external grid connection:
###Code
pp.create_ext_grid(net, pp.get_element_index(net, "bus", 'Double Busbar 1'), vm_pu=1.03, va_degree=0, name='External grid',
s_sc_max_mva=10000, rx_max=0.1, rx_min=0.1)
net.ext_grid # show external grid table
###Output
_____no_output_____
###Markdown
Loads The five loads in the HV network are defined in the following csv file:
###Code
hv_loads = pd.read_csv('example_advanced/hv_loads.csv', sep=';', header=0, decimal=',')
hv_loads
for _, load in hv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show load table
net.load
###Output
_____no_output_____
###Markdown
Generator The voltage-controlled generator is created with an active power of 100 MW and a voltage set point of 1.03 per unit:
###Code
pp.create_gen(net, pp.get_element_index(net, "bus", 'Bus HV4'), vm_pu=1.03, p_mw=100, name='Gas turbine')
# show generator table
net.gen
###Output
_____no_output_____
###Markdown
Static generators We create this wind park with an active power of 20 MW and a reactive power of 4 Mvar. To classify the generation as a wind park, we set type to "WP":
###Code
pp.create_sgen(net, pp.get_element_index(net, "bus", 'Bus SB 5'), p_mw=20, q_mvar=4, sn_mva=45,
type='WP', name='Wind Park')
# show static generator table
net.sgen
###Output
_____no_output_____
###Markdown
Shunt
###Code
pp.create_shunt(net, pp.get_element_index(net, "bus", 'Bus HV1'), p_mw=0, q_mvar=0.960, name='Shunt')
# show shunt table
net.shunt
###Output
_____no_output_____
###Markdown
External network equivalents The two remaining elements are impedances and extended ward equivalents:
###Code
# Impedance
pp.create_impedance(net, pp.get_element_index(net, "bus", 'Bus HV3'), pp.get_element_index(net, "bus", 'Bus HV1'),
rft_pu=0.074873, xft_pu=0.198872, sn_mva=100, name='Impedance')
# show impedance table
net.impedance
# xwards
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV3'), ps_mw=23.942, qs_mvar=-12.24187, pz_mw=2.814571,
qz_mvar=0, r_ohm=0, x_ohm=12.18951, vm_pu=1.02616, name='XWard 1')
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV1'), ps_mw=3.776, qs_mvar=-7.769979, pz_mw=9.174917,
qz_mvar=0, r_ohm=0, x_ohm=50.56217, vm_pu=1.024001, name='XWard 2')
# show xward table
net.xward
###Output
_____no_output_____
###Markdown
Medium voltage level Buses
###Code
pp.create_bus(net, name='Bus MV0 20kV', vn_kv=20, type='n')
for i in range(8):
pp.create_bus(net, name='Bus MV%s' % i, vn_kv=10, type='n')
#show only medium voltage bus table
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)]
mv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
mv_lines = pd.read_csv('example_advanced/mv_lines.csv', sep=';', header=0, decimal=',')
for _, mv_line in mv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", mv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", mv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=mv_line.length, std_type=mv_line.std_type, name=mv_line.line_name)
# show only medium voltage lines
net.line[net.line.from_bus.isin(mv_buses.index)]
###Output
_____no_output_____
###Markdown
3 Winding Transformer The three winding transformer transforms its high voltage level to two different lower voltage levels, in this case from 110 kV to 20 kV and 10 kV.
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus HV2")
mv_bus = pp.get_element_index(net, "bus", "Bus MV0 20kV")
lv_bus = pp.get_element_index(net, "bus", "Bus MV0")
pp.create_transformer3w_from_parameters(net, hv_bus, mv_bus, lv_bus, vn_hv_kv=110, vn_mv_kv=20, vn_lv_kv=10,
sn_hv_mva=40, sn_mv_mva=15, sn_lv_mva=25, vk_hv_percent=10.1,
vk_mv_percent=10.1, vk_lv_percent=10.1, vkr_hv_percent=0.266667,
vkr_mv_percent=0.033333, vkr_lv_percent=0.04, pfe_kw=0, i0_percent=0,
shift_mv_degree=30, shift_lv_degree=30, tap_side="hv", tap_neutral=0, tap_min=-8,
tap_max=8, tap_step_percent=1.25, tap_pos=0, name='HV-MV-MV-Trafo')
# show transformer3w table
net.trafo3w
###Output
_____no_output_____
###Markdown
Switches
###Code
# Bus-line switches
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)].index
mv_ls = net.line[(net.line.from_bus.isin(mv_buses)) & (net.line.to_bus.isin(mv_buses))]
for _, line in mv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# open switch
open_switch_id = net.switch[(net.switch.name == 'Switch Bus MV5 - MV Line5')].index
net.switch.closed.loc[open_switch_id] = False
#show only medium voltage switch table
net.switch[net.switch.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Loads
###Code
mv_loads = pd.read_csv('example_advanced/mv_loads.csv', sep=';', header=0, decimal=',')
for _, load in mv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show only medium voltage loads
net.load[net.load.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
mv_sgens = pd.read_csv('example_advanced/mv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in mv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_mw=sgen.p, q_mvar=sgen.q, sn_mva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only medium voltage static generators
net.sgen[net.sgen.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Low voltage level Buses
###Code
pp.create_bus(net, name='Bus LV0', vn_kv=0.4, type='n')
for i in range(1, 6):
pp.create_bus(net, name='Bus LV1.%s' % i, vn_kv=0.4, type='m')
for i in range(1, 5):
pp.create_bus(net, name='Bus LV2.%s' % i, vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.1', vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.2', vn_kv=0.4, type='m')
# show only low voltage buses
lv_buses = net.bus[net.bus.vn_kv == 0.4]
lv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
# create lines
lv_lines = pd.read_csv('example_advanced/lv_lines.csv', sep=';', header=0, decimal=',')
for _, lv_line in lv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", lv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", lv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=lv_line.length, std_type=lv_line.std_type, name=lv_line.line_name)
# show only low voltage lines
net.line[net.line.from_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Transformer
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus MV4")
lv_bus = pp.get_element_index(net, "bus","Bus LV0")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_mva=.4, vn_hv_kv=10, vn_lv_kv=0.4, vkr_percent=1.325, vk_percent=4, pfe_kw=0.95, i0_percent=0.2375, tap_side="hv", tap_neutral=0, tap_min=-2, tap_max=2, tap_step_percent=2.5, tap_pos=0, shift_degree=150, name='MV-LV-Trafo')
#show only low voltage transformer
net.trafo[net.trafo.lv_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Switches
###Code
lv_buses
# Bus-line switches
lv_ls = net.line[(net.line.from_bus.isin(lv_buses.index)) & (net.line.to_bus.isin(lv_buses.index))]
for _, line in lv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus MV4'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch MV4 - MV-LV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus LV0'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch LV0 - MV-LV-Trafo')
# show only low voltage switches
net.switch[net.switch.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Loads
###Code
lv_loads = pd.read_csv('example_advanced/lv_loads.csv', sep=';', header=0, decimal=',')
for _, load in lv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show only low voltage loads
net.load[net.load.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
lv_sgens = pd.read_csv('example_advanced/lv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in lv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_mw=sgen.p, q_mvar=sgen.q, sn_mva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only low voltage static generators
net.sgen[net.sgen.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Run a Power Flow
###Code
pp.runpp(net, calculate_voltage_angles=True, init="dc")
net
###Output
_____no_output_____
###Markdown
Create Networks - Advanced This tutorial shows how to create a more complex pandapower network step by step. The network includes every element that is available in the pandapower framework. The final network looks like this: The structural information about this network is stored in csv tables in the example_advanced folder. For a better overview, the creation of the individual components is divided into three steps. Each step handles one of the three voltage levels: high, medium and low voltage. We start by initializing an empty pandapower network:
###Code
#import the pandapower module
import pandapower as pp
import pandas as pd
#create an empty network
net = pp.create_empty_network()
###Output
_____no_output_____
###Markdown
High voltage level Buses There are two 380 kV and five 110 kV busbars (type="b"). The 380/110 kV substation is modeled in detail with all nodes and switches, which is why we need additional nodes (type="b") to connect the switches.
###Code
# Double busbar
pp.create_bus(net, name='Double Busbar 1', vn_kv=380, type='b')
pp.create_bus(net, name='Double Busbar 2', vn_kv=380, type='b')
for i in range(10):
pp.create_bus(net, name='Bus DB T%s' % i, vn_kv=380, type='n')
for i in range(1, 5):
pp.create_bus(net, name='Bus DB %s' % i, vn_kv=380, type='n')
# Single busbar
pp.create_bus(net, name='Single Busbar', vn_kv=110, type='b')
for i in range(1, 6):
pp.create_bus(net, name='Bus SB %s' % i, vn_kv=110, type='n')
for i in range(1, 6):
for j in [1, 2]:
pp.create_bus(net, name='Bus SB T%s.%s' % (i, j), vn_kv=110, type='n')
# Remaining buses
for i in range(1, 5):
pp.create_bus(net, name='Bus HV%s' % i, vn_kv=110, type='n')
# show bus table
net.bus
###Output
_____no_output_____
###Markdown
Lines The information about the 6 HV lines is stored in a csv file that we load from the hard drive:
###Code
hv_lines = pd.read_csv('example_advanced/hv_lines.csv', sep=';', header=0, decimal=',')
hv_lines
###Output
_____no_output_____
###Markdown
and use to create all lines:
###Code
# create lines
for _, hv_line in hv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", hv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", hv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=hv_line.length,std_type=hv_line.std_type, name=hv_line.line_name, parallel=hv_line.parallel)
# show line table
net.line
###Output
_____no_output_____
###Markdown
Transformer The 380/110 kV transformer connects the buses "Bus DB 2" and "Bus SB 1". We use the get_element_index function from the pandapower toolbox to find the bus indices of the buses with these names and create a transformer by directly specifying the parameters:
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus DB 2")
lv_bus = pp.get_element_index(net, "bus", "Bus SB 1")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_mva=300, vn_hv_kv=380, vn_lv_kv=110, vkr_percent=0.06,
vk_percent=8, pfe_kw=0, i0_percent=0, tap_pos=0, shift_degree=0, name='EHV-HV-Trafo')
net.trafo # show trafo table
###Output
_____no_output_____
###Markdown
Switches Now we create the switches to connect the buses in the transformer station. The switch configuration is stored in the following csv table:
###Code
hv_bus_sw = pd.read_csv('example_advanced/hv_bus_sw.csv', sep=';', header=0, decimal=',')
hv_bus_sw
# Bus-bus switches
for _, switch in hv_bus_sw.iterrows():
from_bus = pp.get_element_index(net, "bus", switch.from_bus)
to_bus = pp.get_element_index(net, "bus", switch.to_bus)
pp.create_switch(net, from_bus, to_bus, et=switch.et, closed=switch.closed, type=switch.type, name=switch.bus_name)
# Bus-line switches
hv_buses = net.bus[(net.bus.vn_kv == 380) | (net.bus.vn_kv == 110)].index
hv_ls = net.line[(net.line.from_bus.isin(hv_buses)) & (net.line.to_bus.isin(hv_buses))]
for _, line in hv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus DB 2'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch DB2 - EHV-HV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus SB 1'), pp.get_element_index(net, "trafo", 'EHV-HV-Trafo'), et='t', closed=True, type='LBS', name='Switch SB1 - EHV-HV-Trafo')
# show switch table
net.switch
###Output
_____no_output_____
###Markdown
External Grid We equip the high voltage side of the transformer with an external grid connection:
###Code
pp.create_ext_grid(net, pp.get_element_index(net, "bus", 'Double Busbar 1'), vm_pu=1.03, va_degree=0, name='External grid',
s_sc_max_mva=10000, rx_max=0.1, rx_min=0.1)
net.ext_grid # show external grid table
###Output
_____no_output_____
###Markdown
Loads The five loads in the HV network are defined in the following csv file:
###Code
hv_loads = pd.read_csv('example_advanced/hv_loads.csv', sep=';', header=0, decimal=',')
hv_loads
for _, load in hv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show load table
net.load
###Output
_____no_output_____
###Markdown
Generator The voltage controlled generator is created with an active power of 100 MW and a voltage set point of 1.03 per unit:
###Code
pp.create_gen(net, pp.get_element_index(net, "bus", 'Bus HV4'), vm_pu=1.03, p_mw=100, name='Gas turbine')
# show generator table
net.gen
###Output
_____no_output_____
###Markdown
Static generators We create this wind park with an active power of 20 MW and a reactive power of 4 Mvar. To classify the generation as a wind park, we set type to "WP":
###Code
pp.create_sgen(net, pp.get_element_index(net, "bus", 'Bus SB 5'), p_mw=20, q_mvar=4, sn_mva=45,
type='WP', name='Wind Park')
# show static generator table
net.sgen
###Output
_____no_output_____
###Markdown
Shunt
###Code
pp.create_shunt(net, pp.get_element_index(net, "bus", 'Bus HV1'), p_mw=0, q_mvar=0.960, name='Shunt')
# show shunt table
net.shunt
###Output
_____no_output_____
###Markdown
External network equivalents The two remaining elements are impedances and extended ward equivalents:
###Code
# Impedance
pp.create_impedance(net, pp.get_element_index(net, "bus", 'Bus HV3'), pp.get_element_index(net, "bus", 'Bus HV1'),
rft_pu=0.074873, xft_pu=0.198872, sn_mva=100, name='Impedance')
# show impedance table
net.impedance
# xwards
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV3'), ps_mw=23.942, qs_mvar=-12.24187, pz_mw=2.814571,
qz_mvar=0, r_ohm=0, x_ohm=12.18951, vm_pu=1.02616, name='XWard 1')
pp.create_xward(net, pp.get_element_index(net, "bus", 'Bus HV1'), ps_mw=3.776, qs_mvar=-7.769979, pz_mw=9.174917,
qz_mvar=0, r_ohm=0, x_ohm=50.56217, vm_pu=1.024001, name='XWard 2')
# show xward table
net.xward
###Output
_____no_output_____
###Markdown
Medium voltage level Buses
###Code
pp.create_bus(net, name='Bus MV0 20kV', vn_kv=20, type='n')
for i in range(8):
pp.create_bus(net, name='Bus MV%s' % i, vn_kv=10, type='n')
#show only medium voltage bus table
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)]
mv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
mv_lines = pd.read_csv('example_advanced/mv_lines.csv', sep=';', header=0, decimal=',')
for _, mv_line in mv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", mv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", mv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=mv_line.length, std_type=mv_line.std_type, name=mv_line.line_name)
# show only medium voltage lines
net.line[net.line.from_bus.isin(mv_buses.index)]
###Output
_____no_output_____
###Markdown
3 Winding Transformer The three winding transformer transforms its high voltage level to two different lower voltage levels, in this case from 110 kV to 20 kV and 10 kV.
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus HV2")
mv_bus = pp.get_element_index(net, "bus", "Bus MV0 20kV")
lv_bus = pp.get_element_index(net, "bus", "Bus MV0")
pp.create_transformer3w_from_parameters(net, hv_bus, mv_bus, lv_bus, vn_hv_kv=110, vn_mv_kv=20, vn_lv_kv=10,
sn_hv_mva=40, sn_mv_mva=15, sn_lv_mva=25, vk_hv_percent=10.1,
vk_mv_percent=10.1, vk_lv_percent=10.1, vkr_hv_percent=0.266667,
vkr_mv_percent=0.033333, vkr_lv_percent=0.04, pfe_kw=0, i0_percent=0,
shift_mv_degree=30, shift_lv_degree=30, tap_side="hv", tap_neutral=0, tap_min=-8,
tap_max=8, tap_step_percent=1.25, tap_pos=0, name='HV-MV-MV-Trafo')
# show transformer3w table
net.trafo3w
###Output
_____no_output_____
###Markdown
Switches
###Code
# Bus-line switches
mv_buses = net.bus[(net.bus.vn_kv == 10) | (net.bus.vn_kv == 20)].index
mv_ls = net.line[(net.line.from_bus.isin(mv_buses)) & (net.line.to_bus.isin(mv_buses))]
for _, line in mv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# open switch
open_switch_id = net.switch[(net.switch.name == 'Switch Bus MV5 - MV Line5')].index
net.switch.loc[open_switch_id, 'closed'] = False
#show only medium voltage switch table
net.switch[net.switch.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Loads
###Code
mv_loads = pd.read_csv('example_advanced/mv_loads.csv', sep=';', header=0, decimal=',')
for _, load in mv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show only medium voltage loads
net.load[net.load.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
mv_sgens = pd.read_csv('example_advanced/mv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in mv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_mw=sgen.p, q_mvar=sgen.q, sn_mva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only medium voltage static generators
net.sgen[net.sgen.bus.isin(mv_buses)]
###Output
_____no_output_____
###Markdown
Low voltage level Buses
###Code
pp.create_bus(net, name='Bus LV0', vn_kv=0.4, type='n')
for i in range(1, 6):
pp.create_bus(net, name='Bus LV1.%s' % i, vn_kv=0.4, type='m')
for i in range(1, 5):
pp.create_bus(net, name='Bus LV2.%s' % i, vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.1', vn_kv=0.4, type='m')
pp.create_bus(net, name='Bus LV2.2.2', vn_kv=0.4, type='m')
# show only low voltage buses
lv_buses = net.bus[net.bus.vn_kv == 0.4]
lv_buses
###Output
_____no_output_____
###Markdown
Lines
###Code
# create lines
lv_lines = pd.read_csv('example_advanced/lv_lines.csv', sep=';', header=0, decimal=',')
for _, lv_line in lv_lines.iterrows():
from_bus = pp.get_element_index(net, "bus", lv_line.from_bus)
to_bus = pp.get_element_index(net, "bus", lv_line.to_bus)
pp.create_line(net, from_bus, to_bus, length_km=lv_line.length, std_type=lv_line.std_type, name=lv_line.line_name)
# show only low voltage lines
net.line[net.line.from_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Transformer
###Code
hv_bus = pp.get_element_index(net, "bus", "Bus MV4")
lv_bus = pp.get_element_index(net, "bus","Bus LV0")
pp.create_transformer_from_parameters(net, hv_bus, lv_bus, sn_mva=.4, vn_hv_kv=10, vn_lv_kv=0.4, vkr_percent=1.325, vk_percent=4, pfe_kw=0.95, i0_percent=0.2375, tap_side="hv", tap_neutral=0, tap_min=-2, tap_max=2, tap_step_percent=2.5, tap_pos=0, shift_degree=150, name='MV-LV-Trafo')
#show only low voltage transformer
net.trafo[net.trafo.lv_bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Switches
###Code
lv_buses
# Bus-line switches
lv_ls = net.line[(net.line.from_bus.isin(lv_buses.index)) & (net.line.to_bus.isin(lv_buses.index))]
for _, line in lv_ls.iterrows():
pp.create_switch(net, line.from_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.from_bus], line['name']))
pp.create_switch(net, line.to_bus, line.name, et='l', closed=True, type='LBS', name='Switch %s - %s' % (net.bus.name.at[line.to_bus], line['name']))
# Trafo-line switches
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus MV4'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch MV4 - MV-LV-Trafo')
pp.create_switch(net, pp.get_element_index(net, "bus", 'Bus LV0'), pp.get_element_index(net, "trafo", 'MV-LV-Trafo'), et='t', closed=True, type='LBS', name='Switch LV0 - MV-LV-Trafo')
# show only low voltage switches
net.switch[net.switch.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Loads
###Code
lv_loads = pd.read_csv('example_advanced/lv_loads.csv', sep=';', header=0, decimal=',')
for _, load in lv_loads.iterrows():
bus_idx = pp.get_element_index(net, "bus", load.bus)
pp.create_load(net, bus_idx, p_mw=load.p, q_mvar=load.q, name=load.load_name)
# show only low voltage loads
net.load[net.load.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Static generators
###Code
lv_sgens = pd.read_csv('example_advanced/lv_sgens.csv', sep=';', header=0, decimal=',')
for _, sgen in lv_sgens.iterrows():
bus_idx = pp.get_element_index(net, "bus", sgen.bus)
pp.create_sgen(net, bus_idx, p_mw=sgen.p, q_mvar=sgen.q, sn_mva=sgen.sn, type=sgen.type, name=sgen.sgen_name)
# show only low voltage static generators
net.sgen[net.sgen.bus.isin(lv_buses.index)]
###Output
_____no_output_____
###Markdown
Run a Power Flow
###Code
pp.runpp(net, calculate_voltage_angles=True, init="dc")
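# After runpp, the res_* tables hold the power flow results; a few quick sanity checks
# (res_bus, res_line and res_trafo are the standard pandapower result tables):
print(net.res_bus.vm_pu.describe())        # bus voltage magnitudes in per unit
print(net.res_line.loading_percent.max())  # highest line loading in percent
print(net.res_trafo.loading_percent.max()) # highest 2-winding transformer loading in percent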
net
###Output
_____no_output_____ |
ArbolDeDecision/RandomForest/FeatureImportance.ipynb | ###Markdown
Preprocessing
###Code
# Imports needed by the cells in this notebook
import multiprocessing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, ParameterGrid, GridSearchCV, RepeatedKFold
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report

df = pd.read_csv( "/home/bautista/Datos/Machine-Learning-Datos/Training.csv" )
df
df.loc[df['Total_Amount_Currency'] == 'JPY', 'Total_Amount'] = df['Total_Amount']*0.0096
df.loc[df['Total_Amount_Currency'] == 'JPY', 'Total_Amount_Currency'] = 'USD'
df.loc[df['Total_Amount_Currency'] == 'EUR', 'Total_Amount'] = df['Total_Amount']*1.17
df.loc[df['Total_Amount_Currency'] == 'EUR', 'Total_Amount_Currency'] = 'USD'
df.loc[df['Total_Amount_Currency'] == 'AUD', 'Total_Amount'] = df['Total_Amount']*0.70
df.loc[df['Total_Amount_Currency'] == 'AUD', 'Total_Amount_Currency'] = 'USD'
df.loc[df['Total_Amount_Currency'] == 'GBP', 'Total_Amount'] = df['Total_Amount']*1.29
df.loc[df['Total_Amount_Currency'] == 'GBP', 'Total_Amount_Currency'] = 'USD'
df.loc[df['Total_Taxable_Amount_Currency'] == 'JPY', 'Total_Taxable_Amount'] = df['Total_Taxable_Amount']*0.0096
df.loc[df['Total_Taxable_Amount_Currency'] == 'JPY', 'Total_Taxable_Amount_Currency'] = 'USD'
df.loc[df['Total_Taxable_Amount_Currency'] == 'EUR', 'Total_Taxable_Amount'] = df['Total_Taxable_Amount']*1.17
df.loc[df['Total_Taxable_Amount_Currency'] == 'EUR', 'Total_Taxable_Amount_Currency'] = 'USD'
df.loc[df['Total_Taxable_Amount_Currency'] == 'AUD', 'Total_Taxable_Amount'] = df['Total_Taxable_Amount']*0.70
df.loc[df['Total_Taxable_Amount_Currency'] == 'AUD', 'Total_Taxable_Amount_Currency'] = 'USD'
df.loc[df['Total_Taxable_Amount_Currency'] == 'GBP', 'Total_Taxable_Amount'] = df['Total_Taxable_Amount']*1.29
df.loc[df['Total_Taxable_Amount_Currency'] == 'GBP', 'Total_Taxable_Amount_Currency'] = 'USD'
#short_df = df[['Region','Total_Amount','TRF','Pricing, Delivery_Terms_Approved','Pricing, Delivery_Terms_Quote_Appr','Stage' ]].rename(columns={'Stage': 'Decision'})
short_df = df.drop(columns = {'Sales_Contract_No', 'Total_Taxable_Amount_Currency', 'Total_Amount_Currency', 'ASP','ASP_Currency', 'ASP_(converted)_Currency'}).rename(columns={'Stage': 'Decision'})
short_df = short_df[ (short_df['Decision'] == 'Closed Won') | (short_df['Decision'] == 'Closed Lost') ]
short_df['Decision'] = np.where(short_df['Decision'] == 'Closed Won',1,0)
short_df
###Output
_____no_output_____
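###Markdown
The currency conversion above repeats the same two-line pattern for every currency, and the same block appears again for the test set later in this notebook. A small helper like the one below expresses the identical logic once; this is only a sketch using the same fixed rates as above, and the name `convert_to_usd` is ours, not part of the original code.
###Code
# Reusable version of the currency-conversion block above (same rates, same effect)
RATES_TO_USD = {'JPY': 0.0096, 'EUR': 1.17, 'AUD': 0.70, 'GBP': 1.29}
def convert_to_usd(frame, amount_col, currency_col):
    for currency, rate in RATES_TO_USD.items():
        mask = frame[currency_col] == currency
        frame.loc[mask, amount_col] = frame.loc[mask, amount_col] * rate
        frame.loc[mask, currency_col] = 'USD'
    return frame
# Example: convert_to_usd(df, 'Total_Amount', 'Total_Amount_Currency')
###Output
_____no_output_____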
###Markdown
Feature transformation
###Code
short_df = short_df[short_df['Total_Amount'] > 0]
short_df.describe()
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(10, 10))
sns.distplot(
short_df.Total_Amount,
hist = False,
rug = True,
color = "blue",
kde_kws = {'shade': True, 'linewidth': 1},
ax = axes[0]
)
axes[0].set_title("Original distribution", fontsize = 'medium')
axes[0].set_xlabel('Total_Amount', fontsize='small')
axes[0].tick_params(labelsize = 6)
sns.distplot(
np.sqrt(short_df.Total_Amount),
hist = False,
rug = True,
color = "blue",
kde_kws = {'shade': True, 'linewidth': 1},
ax = axes[1]
)
axes[1].set_title("Square root transformation", fontsize = 'medium')
axes[1].set_xlabel('sqrt(Total_Amount)', fontsize='small')
axes[1].tick_params(labelsize = 6)
sns.distplot(
np.log(short_df.Total_Amount),
hist = False,
rug = True,
color = "blue",
kde_kws = {'shade': True, 'linewidth': 1},
ax = axes[2]
)
axes[2].set_title("Log transformation", fontsize = 'medium')
axes[2].set_xlabel('log(Total_Amount)', fontsize='small')
axes[2].tick_params(labelsize = 6)
fig.tight_layout()
short_df.Total_Amount = np.log(short_df.Total_Amount)
short_df.shape
# used to inspect the feature importances
#vector_binario = np.zeros(short_df.shape[0])
#for i in range(short_df.shape[0]):
# if (i%2):
# vector_binario[i] = 1
#short_df['feature_binario'] = vector_binario
#short_df
#short_df['ASP'] = short_df['ASP'].fillna('NaN')
#short_df['ASP_(converted)'] = short_df['ASP_(converted)'].fillna('NaN')
short_df.dtypes
short_df['ASP_(converted)'] = short_df['ASP_(converted)'].fillna(0)
short_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Train and Test
###Code
# Split the data into train and test sets
# ==============================================================================
X_train, X_test, y_train, y_test = train_test_split(
short_df.drop(columns = 'Decision'),
short_df['Decision'],
random_state = 123
)
# One-hot encoding of the categorical variables
# ==============================================================================
# Identify the names of the numeric and categorical columns
cat_cols = X_train.select_dtypes(include=['object', 'category']).columns.to_list()
numeric_cols = X_train.select_dtypes(include=['float64', 'int']).columns.to_list()
# Apply one-hot encoding only to the categorical columns
preprocessor = ColumnTransformer(
[('onehot', OneHotEncoder(handle_unknown='ignore'), cat_cols)],
remainder='passthrough'
)
# Once the ColumnTransformer object has been defined, the transformations are learned
# from the training data with the fit() method and applied to both sets with
# transform(). Both operations can be done at once with fit_transform().
X_train_prep = preprocessor.fit_transform(X_train)
X_test_prep = preprocessor.transform(X_test)
#The result returned by ColumnTransformer is a numpy array, so the column names are lost. It is useful to be able to inspect how the dataset looks after preprocessing as a dataframe. By default, OneHotEncoder orders the new columns from left to right in alphabetical order.
# Convert the ColumnTransformer output into a dataframe and add the column names
# ==============================================================================
# Names of all the columns
encoded_cat = preprocessor.named_transformers_['onehot'].get_feature_names(cat_cols)
labels = np.concatenate([encoded_cat,numeric_cols])
# Convert to dataframe
X_train_prep = pd.DataFrame(X_train_prep, columns=labels)
X_test_prep = pd.DataFrame(X_test_prep, columns=labels)
X_train_prep.info()
X_train
X_train_prep['Total_Amount'].value_counts()
###Output
_____no_output_____
###Markdown
Hyperparameter Grid Search
###Code
# Grid of hyperparameters to evaluate
# ==============================================================================
param_grid = ParameterGrid(
{'n_estimators': [150],
'max_features': [5, 7, 9],
'max_depth' : [None, 3, 10, 20],
'criterion' : ['gini', 'entropy']
}
)
# Loop to fit one model for each combination of hyperparameters
# ==============================================================================
resultados = {'params': [], 'oob_accuracy': []}
for params in param_grid:
modelo = RandomForestClassifier(
oob_score = True,
n_jobs = -1,
random_state = 123,
** params
)
modelo.fit(X_train_prep, y_train)
resultados['params'].append(params)
resultados['oob_accuracy'].append(modelo.oob_score_)
print(f"Modelo: {params} \u2713")
# Resultados
# ==============================================================================
resultados = pd.DataFrame(resultados)
resultados = pd.concat([resultados, resultados['params'].apply(pd.Series)], axis=1)
resultados = resultados.sort_values('oob_accuracy', ascending=False)
resultados = resultados.drop(columns = 'params')
resultados.head(4)
# PARALLELIZED VERSION
# ==============================================================================
# Loop to fit one model for each combination of hyperparameters
# ==============================================================================
param_grid = ParameterGrid(
{'n_estimators': [150],
'max_features': [5, 7, 9],
'max_depth' : [None, 3, 10, 20],
'criterion' : ['gini', 'entropy']
}
)
# Parallelized loop to fit one model for each combination of hyperparameters
# ==============================================================================
def eval_oob_error(X, y, modelo, params, verbose=True):
    """
    Train a model with the given parameters and return its out-of-bag accuracy.
    """
modelo.set_params(
oob_score = True,
n_jobs = -1,
random_state = 123,
** params
)
modelo.fit(X, y)
if verbose:
print(f"Modelo: {params} \u2713")
return{'params': params, 'oob_accuracy': modelo.oob_score_}
n_jobs = multiprocessing.cpu_count() -1
pool = multiprocessing.Pool(processes=n_jobs)
resultados = pool.starmap(
eval_oob_error,
[(X_train_prep, y_train, RandomForestClassifier(), params) for params in param_grid]
)
# Results
# ==============================================================================
resultados = pd.DataFrame(resultados)
resultados = pd.concat([resultados, resultados['params'].apply(pd.Series)], axis=1)
resultados = resultados.drop(columns = 'params')
resultados = resultados.sort_values('oob_accuracy', ascending=False)
resultados.head(4)
# Best hyperparameters by out-of-bag error
# ==============================================================================
print("--------------------------------------------------")
print("Best hyperparameters found (oob-accuracy)")
print("--------------------------------------------------")
print(resultados.iloc[0,0], ":", resultados.iloc[0,:]['oob_accuracy'], "accuracy")
# Grid of hyperparameters to evaluate
# ==============================================================================
param_grid ={'n_estimators': [150],
'max_features': [5, 7, 9],
'max_depth' : [None, 3, 10, 20],
'criterion' : ['gini', 'entropy']
}
# Grid search with cross-validation
# ==============================================================================
grid = GridSearchCV(
estimator = RandomForestClassifier(random_state = 123),
param_grid = param_grid,
scoring = 'accuracy',
n_jobs = multiprocessing.cpu_count() - 1,
cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=123),
refit = True,
verbose = 0,
return_train_score = True
)
grid.fit(X = X_train_prep, y = y_train)
# Results
# ==============================================================================
resultados = pd.DataFrame(grid.cv_results_)
resultados.filter(regex = '(param*|mean_t|std_t)') \
.drop(columns = 'params') \
.sort_values('mean_test_score', ascending = False) \
.head(4)
# Best hyperparameters by cross-validation
# ==============================================================================
print("----------------------------------------")
print("Best hyperparameters found (cv)")
print("----------------------------------------")
print(grid.best_params_, ":", grid.best_score_, grid.scoring)
###Output
_____no_output_____
###Markdown
Prediction
###Code
modelo_final = grid.best_estimator_
# Test error of the final model
# ==============================================================================
predicciones = modelo_final.predict(X = X_test_prep)
predicciones[:10]
mat_confusion = confusion_matrix(
y_true = y_test,
y_pred = predicciones
)
accuracy = accuracy_score(
y_true = y_test,
y_pred = predicciones,
normalize = True
)
print("Matriz de confusión")
print("-------------------")
print(mat_confusion)
print("")
print(f"El accuracy de test es: {100 * accuracy} %")
print(
classification_report(
y_true = y_test,
y_pred = predicciones
)
)
###Output
_____no_output_____
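###Markdown
A heatmap makes the confusion matrix above easier to read than the raw array. This is a minimal sketch that assumes `mat_confusion` from the previous cell and the seaborn/matplotlib imports from the first cell; the 0/1 tick labels follow the encoding of `Decision` (1 = Closed Won, 0 = Closed Lost).
###Code
# Visualize the confusion matrix computed above
sns.heatmap(mat_confusion, annot=True, fmt='d', cmap='Blues',
            xticklabels=['Closed Lost (0)', 'Closed Won (1)'],
            yticklabels=['Closed Lost (0)', 'Closed Won (1)'])
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
###Output
_____no_output_____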
###Markdown
Feature importance
###Code
importancia_predictores = pd.DataFrame(
{'predictor': X_train_prep.columns,
'importancia': modelo_final.feature_importances_}
)
print("Importancia de los predictores en el modelo")
print("-------------------------------------------")
importancia_predictores.sort_values('importancia', ascending=False)
###Output
_____no_output_____
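###Markdown
A sorted bar chart of the importances is often easier to scan than the full table. This is a minimal sketch that assumes `importancia_predictores` from the cell above and the matplotlib import from the first cell; it shows only the top 15 predictors.
###Code
# Plot the 15 most important predictors of the final model
top = importancia_predictores.sort_values('importancia', ascending=True).tail(15)
plt.barh(top['predictor'], top['importancia'])
plt.xlabel('Importance (impurity-based)')
plt.title('Top 15 predictors')
plt.tight_layout()
plt.show()
###Output
_____no_output_____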
###Markdown
Kaggle
###Code
DataFrame_test = pd.read_csv( "/home/bautista/Datos/Machine-Learning-Datos/Test/Test.csv" )
DataFrame_test
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'JPY', 'Total_Amount'] = DataFrame_test['Total_Amount']*0.0096
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'JPY', 'Total_Amount_Currency'] = 'USD'
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'EUR', 'Total_Amount'] = DataFrame_test['Total_Amount']*1.17
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'EUR', 'Total_Amount_Currency'] = 'USD'
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'AUD', 'Total_Amount'] = DataFrame_test['Total_Amount']*0.70
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'AUD', 'Total_Amount_Currency'] = 'USD'
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'GBP', 'Total_Amount'] = DataFrame_test['Total_Amount']*1.29
DataFrame_test.loc[DataFrame_test['Total_Amount_Currency'] == 'GBP', 'Total_Amount_Currency'] = 'USD'
DataFrame_test = DataFrame_test[['Opportunity_ID','Region','Total_Amount','TRF','Pricing, Delivery_Terms_Approved','Pricing, Delivery_Terms_Quote_Appr' ]]
DataFrame_test = DataFrame_test.drop_duplicates('Opportunity_ID',keep = 'last')
subir = pd.DataFrame()
subir['Opportunity_ID'] = DataFrame_test['Opportunity_ID']
DataFrame_test = DataFrame_test.drop(columns = ['Opportunity_ID'])
DataFrame_test
DataFrame_test.Total_Amount = np.log(DataFrame_test.Total_Amount)
DataFrame_test['Total_Amount'].describe()
###Output
_____no_output_____
###Markdown
Encoding
###Code
# One-hot encoding of the categorical variables
# ==============================================================================
# Identify the names of the numeric and categorical columns
cat_cols = DataFrame_test.select_dtypes(include=['object', 'category']).columns.to_list()
numeric_cols = DataFrame_test.select_dtypes(include=['float64', 'int']).columns.to_list()
# Apply one-hot encoding only to the categorical columns
preprocessor = ColumnTransformer(
[('onehot', OneHotEncoder(handle_unknown='ignore'), cat_cols)],
remainder='passthrough'
)
# Once the ColumnTransformer object has been defined, the transformations are learned
# from the training data with the fit() method and applied to both sets with
# transform(). Both operations can be done at once with fit_transform().
DataFrame_test_prep = preprocessor.fit_transform(DataFrame_test)
#The result returned by ColumnTransformer is a numpy array, so the column names are lost. It is useful to be able to inspect how the dataset looks after preprocessing as a dataframe. By default, OneHotEncoder orders the new columns from left to right in alphabetical order.
# Convert the ColumnTransformer output into a dataframe and add the column names
# ==============================================================================
# Names of all the columns
encoded_cat = preprocessor.named_transformers_['onehot'].get_feature_names(cat_cols)
labels = np.concatenate([encoded_cat, numeric_cols])
# Convert to dataframe
DataFrame_test_prep = pd.DataFrame(DataFrame_test_prep, columns=labels)
DataFrame_test_prep.info()
###Output
_____no_output_____
###Markdown
Prediction
###Code
pred_posta = modelo_final.predict(X = DataFrame_test_prep)
prueba = subir['Opportunity_ID']  # Opportunity_ID was dropped from DataFrame_test above; read it from subir instead
prueba
subir['target'] = pred_posta
subir.set_index('Opportunity_ID', inplace = True)
subir
subir['target'].value_counts()
subir.to_csv('RandomForest.csv')
###Output
_____no_output_____ |
dataproject2.ipynb | ###Markdown
Tara's Open Data Project This project aims to study the correlation between one's gender and one's health.
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import seaborn as sns
from datetime import datetime
###Output
_____no_output_____
###Markdown
Some magic that tells Jupyter to display graphs and other output inline in the notebook, instead of the default behaviour of saving them to a file.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setting the size of the plots that will come out (numbers are in inches).
###Code
plt.rcParams['figure.figsize'] = (10, 5)
#saved_style_state = matplotlib.rcParams.copy() #a style state to go back to
###Output
_____no_output_____
###Markdown
Downloading the dataset
###Code
if os.path.isfile("Gender_StatsData.csv"):
filepath = "Gender_StatsData.csv"
print("loading from file")
else:
filepath = "https://databank.worldbank.org/data/download/Gender_Stats_csv.zip"
print("loading from the internet")
gender_data = pd.read_csv(filepath)
print("done")
gender_data.head()
###Output
_____no_output_____
###Markdown
A list of the columns in the dataset
###Code
gender_data.columns
###Output
_____no_output_____
###Markdown
Using the iloc property to index a row as a series
###Code
row_zero = gender_data.iloc[0]
row_zero
###Output
_____no_output_____
###Markdown
Below is a list of health-related indicators which were selected from the entire list of indicators.
###Code
health_ind = ["Cause of death, by injury (% of total)", "Cause of death, by communicable diseases and maternal, prenatal and nutrition conditions (% of total)", "Incidence of HIV, ages 15-24, female (per 1,000 uninfected female population ages 15-24)", "Incidence of HIV, ages 15-24, male (per 1,000 uninfected male population ages 15-24)", "Life expectancy at birth, female (years)", "Life expectancy at birth, male (years)", "Mortality from CVD, cancer, diabetes or CRD between exact ages 30 and 70, female (%)", "Mortality from CVD, cancer, diabetes or CRD between exact ages 30 and 70, male (%)", "Mortality rate, infant, female (per 1,000 live births)", "Mortality rate, infant, male (per 1,000 live births)", "Prevalence of HIV, female (% ages 15-24)", "Prevalence of HIV, male (% ages 15-24)", "Prevalence of obesity, female (% of female population ages 18+)", "Prevalence of obesity, male (% of male population ages 18+)", "Prevalence of underweight, weight for age, female (% of children under 5)", "Prevalence of underweight, weight for age, male (% of children under 5)", "Total alcohol consumption per capita, female (liters of pure alcohol, projected estimates, female 15+ years of age)", "Total alcohol consumption per capita, male (liters of pure alcohol, projected estimates, male 15+ years of age)", "Women participating in own health care decisions (% of women age 15-49)", "Access to anti-retroviral drugs, female (%)", "Access to anti-retroviral drugs, male (%)", "Human Capital Index (HCI), Female (scale 0-1)", "Human Capital Index (HCI), Male (scale 0-1)"]
print(health_ind)
import re
for indicator in health_ind:
male = re.findall(' male', indicator)
female = re.findall('female', indicator)
if male:
male_ind = gender_data.loc[gender_data["Indicator Name"] == indicator]
if female:
female_ind = gender_data.loc[gender_data["Indicator Name"] == indicator]
print(male_ind.loc[(gender_data['Country Name'] == "Africa Eastern and Southern") &
(gender_data["Indicator Name"] == 'Access to anti-retroviral drugs, male (%)')])
def extract_data(df, indicator_name):
test_row = df.loc[(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, 'Access to anti-retroviral drugs, male (%)')
my_data_female = extract_data(gender_data, 'Access to anti-retroviral drugs, female (%)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Access to anti-retroviral drugs(%)')
ax.set(xlabel='Year', ylabel='Access to anti-retroviral drugs (%)')
plt.show()
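# The extract -> concat -> relplot pattern above is repeated for every indicator below,
# redefining extract_data each time. A small helper like this could express the pattern
# once; plot_indicator_pair is a hypothetical name, and it assumes the
# extract_data(df, country_name, indicator_name) version defined in the cells below.
def plot_indicator_pair(df, country, male_indicator, female_indicator, title, ylabel):
    frames = [extract_data(df, country, male_indicator),
              extract_data(df, country, female_indicator)]
    combined = pd.concat(frames)
    ax = sns.relplot(x='Year', y='Data', data=combined, kind='scatter', style='Gender').set(title=title)
    ax.set(xlabel='Year', ylabel=ylabel)
    plt.show()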
def extract_data(df, country_name, indicator_name):
test_row = df.loc[(gender_data['Country Name'] == country_name) &
(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
country_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
country_list.append(country_name)
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Country Name': country_list, 'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, "Low income", 'Access to anti-retroviral drugs, male (%)')
my_data_female = extract_data(gender_data, "Low income", 'Access to anti-retroviral drugs, female (%)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
result.head()
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Access to anti-retroviral drugs(%) in low income countries')
ax.set(xlabel='Year', ylabel='Access to anti-retroviral drugs (%)')
plt.show()
def extract_data(df, country_name, indicator_name):
test_row = df.loc[(gender_data['Country Name'] == country_name) &
(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
country_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
country_list.append(country_name)
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Country Name': country_list, 'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, "World", 'Prevalence of overweight, male (% of male adults)')
my_data_female = extract_data(gender_data, "World", 'Prevalence of overweight, female (% of female adults)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Prevalence of overweight adults in the world')
ax.set(xlabel='Year', ylabel='Prevalence of Overweight(%)')
plt.show()
def extract_data(df, country_name, indicator_name):
test_row = df.loc[(gender_data['Country Name'] == country_name) &
(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
country_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
country_list.append(country_name)
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Country Name': country_list, 'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, "Low income", 'Prevalence of overweight, male (% of male adults)')
my_data_female = extract_data(gender_data, "Low income", 'Prevalence of overweight, female (% of female adults)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Prevalence of overweight in low income countries (Ages 18+)')
ax.set(xlabel='Year', ylabel='Prevalence of Overweight(%)')
plt.show()
def extract_data(df, country_name, indicator_name):
test_row = df.loc[(gender_data['Country Name'] == country_name) &
(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
country_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
country_list.append(country_name)
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Country Name': country_list, 'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, "World", 'Total alcohol consumption per capita, male (liters of pure alcohol, projected estimates, male 15+ years of age)')
my_data_female = extract_data(gender_data, "World", 'Total alcohol consumption per capita, female (liters of pure alcohol, projected estimates, female 15+ years of age)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Total alcohol consumption in the world')
ax.set(xlabel='Year', ylabel='Litres of pure alcohol')
plt.show()
def extract_data(df, country_name, indicator_name):
test_row = df.loc[(gender_data['Country Name'] == country_name) &
(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
country_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
country_list.append(country_name)
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Country Name': country_list, 'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, "Low income", 'Total alcohol consumption per capita, male (liters of pure alcohol, projected estimates, male 15+ years of age)')
my_data_female = extract_data(gender_data, "Low income", 'Total alcohol consumption per capita, female (liters of pure alcohol, projected estimates, female 15+ years of age)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Total alcohol consumption in low income countries')
ax.set(xlabel='Year', ylabel='Litres of pure alcohol')
plt.show()
def extract_data(df, country_name, indicator_name):
test_row = df.loc[(gender_data['Country Name'] == country_name) &
(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
country_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
country_list.append(country_name)
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Country Name': country_list, 'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, "World", 'Mortality rate, infant, male (per 1,000 live births)')
my_data_female = extract_data(gender_data, "World", 'Mortality rate, infant, female (per 1,000 live births)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Mortality rate of infants in the world')
ax.set(xlabel='Year', ylabel=' Amount per 1,000 live births')
plt.show()
def extract_data(df, country_name, indicator_name):
test_row = df.loc[(gender_data['Country Name'] == country_name) &
(gender_data["Indicator Name"] == indicator_name)]
year_list = []
data_list = []
country_list = []
indicator_list = []
gender_list = []
male = re.findall(' male', indicator_name)
female = re.findall('female', indicator_name)
for i in range(1960, 2021):
year_list.append(i)
data_list.append(test_row[str(i)].values[0])
country_list.append(country_name)
indicator_list.append(indicator_name)
if male:
gender_list.append('M')
if female:
gender_list.append('F')
new_df = pd.DataFrame({'Country Name': country_list, 'Indicator Name': indicator_list, 'Year': year_list, 'Data': data_list, 'Gender': gender_list})
return new_df
my_data_male = extract_data(gender_data, "Low income", 'Mortality rate, infant, male (per 1,000 live births)')
my_data_female = extract_data(gender_data, "Low income", 'Mortality rate, infant, female (per 1,000 live births)')
frames = [my_data_male, my_data_female]
result = pd.concat(frames)
ax = sns.relplot(x='Year', y='Data', data=result, kind='scatter', style='Gender').set(title= 'Mortality rate of infants in low income countries')
ax.set(xlabel='Year', ylabel=' Amount per 1,000 live births')
plt.show()
pip install RISE
###Output
Requirement already satisfied: RISE in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (5.7.1)
Requirement already satisfied: notebook>=6.0 in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (from RISE) (6.3.0)
Requirement already satisfied: jupyter-client>=5.3.4 in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (from notebook>=6.0->RISE) (6.1.12)
Requirement already satisfied: ipykernel in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (from notebook>=6.0->RISE) (5.3.4)
Requirement already satisfied: prometheus-client in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (from notebook>=6.0->RISE) (0.10.1)
Requirement already satisfied: nbformat in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (from notebook>=6.0->RISE) (5.1.3)
Requirement already satisfied: jinja2 in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (from notebook>=6.0->RISE) (2.11.3)
Requirement already satisfied: argon2-cffi in /Users/tararavieshwar/opt/anaconda3/lib/python3.8/site-packages (from notebook>=6.0->RISE) (20.1.0)
Note: you may need to restart the kernel to use updated packages.
|
ML0101EN-RecSys-Content-Based-movies-py-v1.ipynb | ###Markdown
CONTENT-BASED FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous, and can be commonly seen in online stores, movie databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/). Let's download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 TB of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unzipping ...')
!unzip -o -j moviedataset.zip
###Output
--2020-01-11 17:53:55-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 19.0MB/s in 7.8s
2020-01-11 17:54:04 (19.5 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unzipping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also pull the year out of the __title__ column using pandas' string functions (extract and replace), store it in a new __year__ column, and then remove it from the title.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parentheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
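###Markdown
As a quick sanity check of the regular expressions above, here is a small standalone sketch (plain `re` on one sample string, independent of the dataframe) showing what the extract and replace steps do to a single value:
###Code
import re

#A sample title in the dataset's "Title (year)" format
sample = 'Toy Story (1995)'
#Step 1: grab the year together with its parentheses
with_parens = re.search(r'\(\d\d\d\d\)', sample).group()   #'(1995)'
#Step 2: strip the parentheses from the captured year
year = re.search(r'\d\d\d\d', with_parens).group()         #'1995'
#Step 3: remove the year from the title and strip the trailing whitespace
title = re.sub(r'\(\d\d\d\d\)', '', sample).strip()        #'Toy Story'
print(title, '|', year)
###Output
_____no_output_____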
###Markdown
With that, let's also split the values in the __Genres__ column into a __list of Genres__ to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
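###Markdown
The loop above is easy to follow but works row by row; as a hedged alternative (a sketch that assumes pandas >= 0.25 for `Series.explode` and reuses the `movies_df` built above; the dummy values come out as integers rather than the floats produced by `fillna(0)`), the same one-hot table can be built with vectorised operations:
###Code
#Explode the genre lists into one row per (movie, genre), build dummy columns, then collapse back to one row per movie
genre_dummies = pd.get_dummies(movies_df['genres'].explode()).groupby(level=0).max()
moviesWithGenres_alt = movies_df.join(genre_dummies)
moviesWithGenres_alt.head()
###Output
_____no_output_____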
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement __Content-Based__ or __Item-Item recommendation systems__. This technique attempts to figure out what a user's favourite aspects of an item are, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favourite genres from the movies and ratings given. Let's begin by creating an input user to recommend movies to: Notice: To add more movies, simply increase the number of elements in __userInput__. Feel free to add more in! Just be sure to use the correct capitalisation, and if a movie title starts with "The", like "The Matrix", write it as 'Matrix, The'.
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
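###Markdown
If the 'Matrix, The' convention is awkward to type, the small hypothetical helper below (`to_dataset_title` is not part of the original lab, just an illustration) rewrites a leading article into the dataset's trailing form before you build __userInput__:
###Code
#Hypothetical helper: rewrite a leading article so 'The Matrix' becomes 'Matrix, The'
def to_dataset_title(title):
    first, _, rest = title.partition(' ')
    if first in ('The', 'A', 'An') and rest:
        return rest + ', ' + first
    return title

print(to_dataset_title('The Matrix'))   #Matrix, The
print(to_dataset_title('Akira'))        #Akira (unchanged)
###Output
_____no_output_____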
###Markdown
Add movieId to input user With the input complete, let's extract the input movies' IDs from the movies dataframe and add them to it. We can achieve this by first filtering out the rows that contain the input movies' titles and then merging this subset with the input dataframe. We also drop unnecessary columns from the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might be spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
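###Markdown
For readers who prefer the join key spelled out, an equivalent call is sketched below (it reuses `inputId` and the raw `userInput` list from above; the optional `validate='one_to_one'` argument raises a MergeError if a title appears more than once on either side, which is a handy way to catch duplicate titles early):
###Code
#Same merge with the key made explicit, rebuilt from the raw userInput list
inputMovies_explicit = pd.merge(inputId, pd.DataFrame(userInput), on='title', validate='one_to_one')
inputMovies_explicit.drop(columns=['genres', 'year'])
###Output
_____no_output_____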
###Markdown
We're going to start by learning the input's preferences, so let's take the dataframe with the binary genre columns and get the subset of movies that the input user has rated.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary columns to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences! To do this, we're going to turn each genre into a weight. We can do this by taking the input's ratings, multiplying them into the input's genre table, and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can accomplish it simply by calling Pandas's "dot" function.
###Code
inputMovies['rating']
#Dot product to get the genre weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
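###Markdown
To make the dot product concrete, here is a tiny standalone illustration with made-up numbers (three movies, two genres), completely independent of the dataframes above:
###Code
import numpy as np

#Made-up example: rows are movies, columns are two genres (Adventure, Comedy)
toy_genres = np.array([[1, 0],    #movie 1: Adventure only
                       [1, 1],    #movie 2: Adventure and Comedy
                       [0, 1]])   #movie 3: Comedy only
toy_ratings = np.array([5.0, 3.5, 2.0])

#Transpose-then-dot, the same operation as userGenreTable.transpose().dot(inputMovies['rating'])
toy_profile = toy_genres.T.dot(toy_ratings)
print(toy_profile)   #[8.5 5.5]: Adventure weighs 5 + 3.5, Comedy weighs 3.5 + 2
###Output
_____no_output_____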
###Markdown
Now we have the weights for each of the user's preferences. This is known as the User Profile. Using this, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
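###Markdown
In other words, each movie's score is the profile-weighted share of its genres. Continuing the made-up two-genre profile from the sketch above, a pure Comedy scores 5.5 / (8.5 + 5.5) ≈ 0.39, while an Adventure-Comedy scores (8.5 + 5.5) / 14 = 1.0; the standalone arithmetic:
###Code
import numpy as np

#Made-up profile from the earlier toy example: [Adventure, Comedy]
toy_profile = np.array([8.5, 5.5])
candidates = {'comedy only': np.array([0, 1]),
              'adventure + comedy': np.array([1, 1])}

#Weighted average, the same formula as (genreTable*userProfile).sum(axis=1) / userProfile.sum()
for name, genre_vector in candidates.items():
    score = (genre_vector * toy_profile).sum() / toy_profile.sum()
    print(name, round(score, 3))
###Output
_____no_output_____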
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
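###Markdown
As a closing note, the steps above can be packaged into one reusable function. The sketch below is not part of the original lab: `recommend_movies` is a hypothetical refactor that assumes the `movies_df` and `moviesWithGenres_df` frames built earlier in this notebook and simply mirrors the cells above.
###Code
def recommend_movies(user_ratings, n=20):
    """Content-based recommendations from a list of {'title': ..., 'rating': ...} dicts."""
    user_df = pd.DataFrame(user_ratings)
    #Attach movieIds by title, as in the merge cell above
    matched = pd.merge(movies_df[movies_df['title'].isin(user_df['title'])], user_df, on='title')
    #Genre matrix restricted to the rated movies (row order follows moviesWithGenres_df, like matched)
    rated = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(matched['movieId'])]
    rated = rated.reset_index(drop=True).drop(['movieId', 'title', 'genres', 'year'], axis=1)
    #Build the user profile and score every movie with the weighted average
    profile = rated.transpose().dot(matched['rating'].reset_index(drop=True))
    genre_table = moviesWithGenres_df.set_index('movieId').drop(['title', 'genres', 'year'], axis=1)
    scores = (genre_table * profile).sum(axis=1) / profile.sum()
    top = scores.sort_values(ascending=False).head(n)
    return movies_df[movies_df['movieId'].isin(top.index)]

recommend_movies(userInput).head()
###Output
_____no_output_____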
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement __Content-Based__ or __Item-Item recommendation systems__. This technique attempts to figure out what a user's favourite aspects of an item is, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given.Let's begin by creating an input user to recommend movies to:Notice: To add more movies, simply increase the amount of elements in the __userInput__. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input userWith the input complete, let's extract the input movie's ID's from the movies dataframe and add them into it.We can achieve this by first filtering out the rows that contain the input movie's title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary issues due to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences!To do this, we're going to turn each genre into weights. We can do this by using the input's reviews and multiplying them into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can simply accomplish by calling Pandas's "dot" function.
###Code
inputMovies['rating']
#Dot produt to get weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
###Markdown
Now, we have the weights for every of the user's preferences. This is known as the User Profile. Using this, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
###Markdown
Content Based FilteringEstimated time needed: **25** minutes ObjectivesAfter completing this lab you will be able to:- Create a recommendation system using collaborative filtering Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous, and can be commonly seen in online stores, movies databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). Lets download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%205/data/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2021-01-30 19:53:45-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%205/data/moviedataset.zip
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.63.118.104
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.63.118.104|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 24.0MB/s in 7.0s
2021-01-30 19:53:53 (21.8 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
Bad key "text.kerning_factor" on line 4 in
/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test_patch.mplstyle.
You probably need to get an updated matplotlibrc file from
http://github.com/matplotlib/matplotlib/blob/master/matplotlibrc.template
or from the matplotlib source distribution
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the **title** column by using pandas' replace function and store in a new **year** column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parantheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the **Genres** column into a **list of Genres** to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement **Content-Based** or **Item-Item recommendation systems**. This technique attempts to figure out what a user's favourite aspects of an item is, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given.Let's begin by creating an input user to recommend movies to:Notice: To add more movies, simply increase the amount of elements in the **userInput**. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input userWith the input complete, let's extract the input movie's ID's from the movies dataframe and add them into it.We can achieve this by first filtering out the rows that contain the input movie's title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary issues due to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences!To do this, we're going to turn each genre into weights. We can do this by using the input's reviews and multiplying them into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can simply accomplish by calling Pandas's "dot" function.
###Code
inputMovies['rating']
#Dot produt to get weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
###Markdown
Now, we have the weights for every of the user's preferences. This is known as the User Profile. Using this, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
###Markdown
CONTENT-BASED FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous can be commonly seen in online stores, movies databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents- Acquiring the Data- Preprocessing- Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/). Lets download the dataset. To download the data, we will use **`!wget`**. To download the data, we will use `!wget` to download it from IBM Object Storage. __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2019-03-29 07:10:39-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[=====================>] 152.88M 34.2MB/s in 4.5s
2019-03-29 07:10:44 (34.4 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the __title__ column by using pandas' replace function and store in a new __year__ column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parantheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the __Genres__ column into a __list of Genres__ to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement __Content-Based__ or __Item-Item recommendation systems__. This technique attempts to figure out what a user's favourite aspects of an item is, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given.Let's begin by creating an input user to recommend movies to:Notice: To add more movies, simply increase the amount of elements in the __userInput__. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input userWith the input complete, let's extract the input movies's ID's from the movies dataframe and add them into it.We can achieve this by first filtering out the rows that contain the input movies' title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary issues due to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences!To do this, we're going to turn each genre into weights. We can do this by using the input's reviews and multiplying them into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can simply accomplish by calling Pandas's "dot" function.
###Code
inputMovies['rating']
#Dot produt to get weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
###Markdown
Now, we have the weights for every of the user's preferences. This is known as the User Profile. Using this, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
###Markdown
Content Based FilteringEstimated time needed: **25** minutes ObjectivesAfter completing this lab you will be able to:- Create a recommendation system using collaborative filtering Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous, and can be commonly seen in online stores, movies databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ). Lets download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%205/data/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2020-12-05 10:40:19-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%205/data/moviedataset.zip
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.63.118.104
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.63.118.104|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 17.9MB/s in 9.7s
2020-12-05 10:40:29 (15.8 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the **title** column by using pandas' replace function and store in a new **year** column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parantheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the **Genres** column into a **list of Genres** to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement **Content-Based** or **Item-Item recommendation systems**. This technique attempts to figure out what a user's favourite aspects of an item is, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given.Let's begin by creating an input user to recommend movies to:Notice: To add more movies, simply increase the amount of elements in the **userInput**. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input userWith the input complete, let's extract the input movie's ID's from the movies dataframe and add them into it.We can achieve this by first filtering out the rows that contain the input movie's title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping the unnecessary columns to save memory and to avoid issues
userGenreTable = userMovies.drop(columns=['movieId', 'title', 'genres', 'year'])
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences! To do this, we're going to turn each genre into a weight. We can do this by multiplying the input's ratings into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can accomplish it simply by calling pandas' "dot" function.
###Code
inputMovies['rating']
#Dot product to get the weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
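###Markdown
As a quick sanity check (just a sketch; it assumes the rows of userGenreTable line up with the rows of inputMovies, which is the case here), the same weights can be computed as an explicit matrix-vector product with NumPy:
###Code
import numpy as np  #harmless even if numpy was already imported earlier in the lab
#Transpose the genre matrix (genres x movies) and multiply it by the rating vector
weights_np = np.dot(userGenreTable.values.T, inputMovies['rating'].values)
#Wrap the result in a Series so it can be compared with userProfile above
pd.Series(weights_np, index=userGenreTable.columns)
###Output
_____no_output_____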
###Markdown
Now we have the weights for each of the user's preferences. This is known as the User Profile. Using this, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop(columns=['movieId', 'title', 'genres', 'year'])
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
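In other words, each movie $m$ gets the score
$$\text{score}(m) = \frac{\sum_{g} G_{m,g}\, w_g}{\sum_{g} w_g},$$
where $G_{m,g}$ is 1 if movie $m$ has genre $g$ (and 0 otherwise) and $w_g$ is that genre's weight in the user profile. This is exactly what the cell below computes.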
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
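###Markdown
One detail worth noting: the .loc/isin lookup above returns the recommended movies in the order they appear in movies_df, not in ranked order. As an optional sketch (not part of the original lab), the scores can be attached to the titles and the final table sorted by them:
###Code
#Top-20 scores, indexed by movieId
topScores = recommendationTable_df.head(20)
#Attach each movie's score and sort the final table by it
topMovies = movies_df[movies_df['movieId'].isin(topScores.index)].copy()
topMovies['score'] = topMovies['movieId'].map(topScores)
topMovies.sort_values('score', ascending=False)
###Output
_____no_output_____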
###Markdown
Content Based FilteringEstimated time needed: **25** minutes ObjectivesAfter completing this lab you will be able to:* Create a recommendation system using collaborative filtering Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous, and can be commonly seen in online stores, movies databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts:\Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01). Let's download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage.\**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%205/data/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2021-09-10 11:16:37-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%205/data/moviedataset.zip
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.63.118.104
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.63.118.104|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 29.8MB/s in 5.1s
2021-09-10 11:16:42 (29.8 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the **title** column by using pandas' replace function and store in a new **year** column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parantheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the **Genres** column into a **list of Genres** to simplify for future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement **Content-Based** or **Item-Item recommendation systems**. This technique attempts to figure out what a users favourite aspects of an item is, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given.Let's begin by creating an input user to recommend movies to:Notice: To add more movies, simply increase the amount of elements in the **userInput**. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input userWith the input complete, let's extract the input movie's ID's from the movies dataframe and add them into it.We can achieve this by first filtering out the rows that contain the input movie's title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary issues due to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences!To do this, we're going to turn each genre into weights. We can do this by using the input's reviews and multiplying them into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can simply accomplish by calling the Pandas "dot" function.
###Code
inputMovies['rating']
#Dot product to get the weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
###Markdown
Now we have the weights for each of the user's preferences. This is known as the User Profile. Using it, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
###Markdown
CONTENT-BASED FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous and can be commonly seen in online stores, movie databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents- Acquiring the Data- Preprocessing- Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash commands: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/). Let's download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 TB of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
unziping ...
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('ml-latest/movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ml-latest/ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the __title__ column by using pandas' replace function and store in a new __year__ column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parantheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the __Genres__ column into a __list of Genres__ to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
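The loop in the next cell works row by row; as an aside (a sketch added for illustration, not used later in the lab), pandas can build the same 0/1 genre matrix in a vectorized way, although it yields integer 0/1 columns rather than the floats produced by the `fillna(0)` approach:
```python
# Sketch of a vectorized alternative (assumes pandas >= 0.25 for explode,
# and that movies_df['genres'] already holds lists of genre strings).
genre_dummies = (
    movies_df['genres']
    .explode()                  # one row per (movie, genre) pair
    .str.get_dummies()          # one-hot encode each single genre
    .groupby(level=0).max()     # collapse back to one row per movie
)
moviesWithGenres_alt = movies_df.join(genre_dummies)
```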
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement __Content-Based__ or __Item-Item recommendation systems__. This technique attempts to figure out what a user's favourite aspects of an item are, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given. Let's begin by creating an input user to recommend movies to. Notice: To add more movies, simply increase the number of elements in __userInput__. Feel free to add more! Just be sure to write the titles with capital letters, and if a movie starts with "The", like "The Matrix", then write it like this: 'Matrix, The'.
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input user With the input complete, let's extract the input movies' IDs from the movies dataframe and add them to it. We can achieve this by first filtering out the rows that contain the input movies' titles and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe, or it might be spelled differently; please check the capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary columns to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences! To do this, we're going to turn each genre into a weight. We can do this by using the input's reviews, multiplying them into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can accomplish it simply by calling the Pandas "dot" function.
###Code
inputMovies['rating']
#Dot product to get the weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
###Markdown
Now we have the weights for each of the user's preferences. This is known as the User Profile. Using it, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____
###Markdown
CONTENT-BASED FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous, and can be commonly seen in online stores, movie databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash commands: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens). Let's download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 TB of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2021-04-01 09:15:00-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 23.2MB/s in 6.8s
2021-04-01 09:15:07 (22.6 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the **title** column by using pandas' replace function and store in a new **year** column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parantheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))',expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)',expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also split the values in the **Genres** column into a **list of Genres** to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement **Content-Based** or **Item-Item recommendation systems**. This technique attempts to figure out what a user's favourite aspects of an item are, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given. Let's begin by creating an input user to recommend movies to. Notice: To add more movies, simply increase the number of elements in **userInput**. Feel free to add more! Just be sure to write the titles with capital letters, and if a movie starts with "The", like "The Matrix", then write it like this: 'Matrix, The'.
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input user With the input complete, let's extract the input movies' IDs from the movies dataframe and add them to it. We can achieve this by first filtering out the rows that contain the input movies' titles and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe, or it might be spelled differently; please check the capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary columns to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences! To do this, we're going to turn each genre into a weight. We can do this by using the input's reviews, multiplying them into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can accomplish it simply by calling the Pandas "dot" function.
###Code
inputMovies['rating']
#Dot product to get the weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
###Markdown
Now we have the weights for each of the user's preferences. This is known as the User Profile. Using it, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____ |
notebooks/ML-GridSearchCV-Pipeline.ipynb | ###Markdown
Machine Learning GridSearch Pipeline
###Code
# Import libraries
import os
import sys
# cpu_count returns the number of CPUs in the system.
from multiprocessing import cpu_count
import numpy as np
import pandas as pd
# Import metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Import preprocessing methods from sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import MinMaxScaler
# Import PCA
from sklearn.decomposition import PCA
# Import feature_selection tools
from sklearn.feature_selection import VarianceThreshold
# Import models from sklearn
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
# Import XGBClassifier
from xgboost.sklearn import XGBClassifier
# Import from sklearn
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
# joblib for model persistence; sklearn.externals.joblib was removed in
# newer scikit-learn, so use the standalone joblib package instead
import joblib
from sklearn.base import TransformerMixin
from sklearn.base import BaseEstimator
# Import plotting libraries
import matplotlib.pyplot as plt
# Modify notebook settings
pd.options.display.max_columns = 150
pd.options.display.max_rows = 150
%matplotlib inline
plt.style.use('ggplot')
###Output
/anaconda/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
###Markdown
Create paths to data file, append `src` directory to sys.path
###Code
# Create a variable for the project root directory
proj_root = os.path.join(os.pardir)
# Save path to the processed data file
# "dataset_processed.csv"
processed_data_file = os.path.join(proj_root,
"data",
"processed",
"dataset_processed.csv")
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(proj_root, "src")
sys.path.append(src_dir)
###Output
_____no_output_____
###Markdown
Create paths to data file, append `src` directory to sys.path
###Code
# Save the path to the folder that will contain
# the figures for the final report:
# /reports/figures
figures_dir = os.path.join(proj_root,
"reports",
"figures")
###Output
_____no_output_____
###Markdown
Read in the processed data
###Code
# Read in the processed credit card client default data set.
df = pd.read_csv(processed_data_file,
index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Train test split
###Code
# Extract X and y from df
X = df.drop('y', axis=1).values
#y = df[['y']].values
y = df['y'].values
# Train test split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=42)
# Define a function`namestr` to access the name of a variable
def namestr(obj, namespace):
return [name for name in namespace if namespace[name] is obj][0]
# Print the shape of X, y, X_train, X_test, y_train, and y_test
for var in [X, y, X_train, X_test, y_train, y_test]:
print(namestr(var, globals()),
'shape:\t',
var.shape)
###Output
X shape: (30000, 91)
y shape: (30000,)
X_train shape: (15000, 91)
X_test shape: (15000, 91)
y_train shape: (15000,)
y_test shape: (15000,)
###Markdown
Make pipeline
###Code
df_X = df.drop('y', axis=1)
def create_binary_feature_list(df=df_X,
return_binary_features=True):
"""
Docstring ...
"""
# Create boolean maskDrop the column with the target values
binary_mask = df.isin([0, 1]).all()
# If return_binary_features=True,
# create a list of the binary features.
# If return_binary_features=False,
# create a list of the nonbinary features.
features_list = list(binary_mask[binary_mask == \
return_binary_features].index)
return features_list
def binary_feature_index_list(df=df_X,
features_list=None):
"""
Docstring ...
"""
feature_index_list = [df.columns.get_loc(c) for c \
in df.columns if c in features_list]
return feature_index_list
binary_features = create_binary_feature_list(df=df_X,
return_binary_features=True)
non_binary_features = create_binary_feature_list(df=df_X,
return_binary_features=False)
binary_index_list = \
binary_feature_index_list(df=df_X,
features_list=binary_features)
non_binary_index_list = \
binary_feature_index_list(df=df_X,
features_list=non_binary_features)
print('Binary features:\n')
print(''.join('{:2s}: {:40s}'.format(str(i), col) \
for i, col in zip(binary_index_list,
binary_features)))
print('\n')
print('Non-binary features:\n')
print(''.join('{:2s}: {:40s}'.format(str(i), col) \
for i, col in zip(non_binary_index_list,
non_binary_features)))
###Output
Binary features:
26: sex_2 27: edu_2 28: edu_3 29: edu_4 30: marriage_1 31: marriage_2 32: marriage_3 33: pay_1_-2 34: pay_1_0 35: pay_1_1 36: pay_1_2 37: pay_1_3 38: pay_1_4 39: pay_1_5 40: pay_1_6 41: pay_1_7 42: pay_1_8 43: pay_2_-2 44: pay_2_0 45: pay_2_1 46: pay_2_2 47: pay_2_3 48: pay_2_4 49: pay_2_5 50: pay_2_6 51: pay_2_7 52: pay_2_8 53: pay_3_-2 54: pay_3_0 55: pay_3_1 56: pay_3_2 57: pay_3_3 58: pay_3_4 59: pay_3_5 60: pay_3_6 61: pay_3_7 62: pay_3_8 63: pay_4_-2 64: pay_4_0 65: pay_4_1 66: pay_4_2 67: pay_4_3 68: pay_4_4 69: pay_4_5 70: pay_4_6 71: pay_4_7 72: pay_4_8 73: pay_5_-2 74: pay_5_0 75: pay_5_2 76: pay_5_3 77: pay_5_4 78: pay_5_5 79: pay_5_6 80: pay_5_7 81: pay_5_8 82: pay_6_-2 83: pay_6_0 84: pay_6_2 85: pay_6_3 86: pay_6_4 87: pay_6_5 88: pay_6_6 89: pay_6_7 90: pay_6_8
Non-binary features:
0 : limit_bal 1 : age 2 : bill_amt1 3 : bill_amt2 4 : bill_amt3 5 : bill_amt4 6 : bill_amt5 7 : bill_amt6 8 : pay_amt1 9 : pay_amt2 10: pay_amt3 11: pay_amt4 12: pay_amt5 13: pay_amt6 14: bl_ratio_1 15: bl_ratio_2 16: bl_ratio_3 17: bl_ratio_4 18: bl_ratio_5 19: bl_ratio_6 20: blpl_ratio_1 21: blpl_ratio_2 22: blpl_ratio_3 23: blpl_ratio_4 24: blpl_ratio_5 25: blpl_ratio_6
###Markdown
User defined preprocessors
###Code
class NonBinary_PCA(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = PCA(n_components=None, random_state=42)
# Fit PCA only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with PCA
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
class NonBinary_RobustScaler(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = RobustScaler()
# Fit RobustScaler only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with RobustScaler
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
class NonBinary_StandardScaler(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = StandardScaler()
# Fit StandardScaler only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with StandardScaler
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
class NonBinary_MinMaxScaler(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = MinMaxScaler()
# Fit MinMaxScaler only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with MinMaxScaler
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
###Output
_____no_output_____
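###Markdown
The four wrapper classes above differ only in the scaler they hold; each fits its scaler on the non-binary columns and re-inserts the untouched binary columns afterwards. As a design note (a sketch only; it assumes scikit-learn >= 0.20, newer than the version whose deprecation warning appears earlier), `ColumnTransformer` expresses the same idea more compactly, although it places the scaled columns before the passed-through binary ones instead of preserving the original column order:
```python
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import RobustScaler

# Scale only the non-binary columns; pass the binary indicator columns through unchanged.
nonbinary_robust = ColumnTransformer(
    transformers=[('scale', RobustScaler(), non_binary_index_list)],
    remainder='passthrough'
)
```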
###Markdown
Define the pipeline
###Code
# Set a high threshold for removing near-zero variance features
#thresh_prob = 0.999
thresh_prob = 0.99
threshold = (thresh_prob * (1 - thresh_prob))
# Create pipeline
pipe = Pipeline([('preprocessing_1', VarianceThreshold(threshold)),
('preprocessing_2', None),
('preprocessing_3', None),
('classifier', DummyClassifier(strategy='most_frequent',
random_state=42))])
# Create parameter grid
param_grid = [
{'classifier': [LogisticRegression(random_state=42)],
'preprocessing_1': [None, NonBinary_RobustScaler()],
'preprocessing_2': [None, NonBinary_PCA()],
'preprocessing_3': [None, VarianceThreshold(threshold)],
'classifier__C': [0.01, 0.1],
'classifier__penalty': ['l1','l2']},
{'classifier': [XGBClassifier(objective='binary:logistic', n_estimators=1000)],
'preprocessing_1': [None, VarianceThreshold(threshold)],
'preprocessing_2': [None],
'preprocessing_3': [None],
'classifier__n_estimators': [1000],
'classifier__learning_rate': [0.01, 0.1],
'classifier__gamma': [0.01, 0.1],
'classifier__max_depth': [3, 4],
'classifier__min_child_weight': [1, 3],
'classifier__subsample': [0.8],
# 'classifier__colsample_bytree': [0.8, 1.0],
'classifier__reg_lambda': [0.1, 1.0],
'classifier__reg_alpha': [0, 0.1]}]
# Set the number of cores to be used
cores_used = cpu_count() - 1
cores_used
cores_used = 1
# Set verbosity
verbosity = 1
# Execute Grid search
grid = GridSearchCV(pipe, param_grid, cv=5, scoring='roc_auc',
verbose=verbosity, n_jobs=cores_used)
grid.fit(X_train, y_train)
print("Best params:\n{}\n".format(grid.best_params_))
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
###Output
Fitting 5 folds for each of 72 candidates, totalling 360 fits
###Markdown
Save the grid search object as a pickle file
###Code
# Save path to the `models` folder
models_folder = os.path.join(proj_root,
"models")
# full_gridsearch_file_name = 'gridsearch_pickle_20171029.pkl'
full_gridsearch_file_name = 'gridsearch_pickle.pkl'
full_gridsearch_path = os.path.join(models_folder,
full_gridsearch_file_name)
joblib.dump(grid, full_gridsearch_path)
# best_pipeline_file_name = 'pipeline_pickle_20171029.pkl'
best_pipeline_file_name = 'pipeline_pickle.pkl'
best_pipeline_path = os.path.join(models_folder,
best_pipeline_file_name)
joblib.dump(grid.best_estimator_, best_pipeline_path)
###Output
_____no_output_____
###Markdown
Grid search for best *logistic regression* model
###Code
# Create parameter grid
param_grid = [
{'classifier': [LogisticRegression(random_state=42)],
'preprocessing_1': [None], # [VarianceThreshold(threshold)],
'preprocessing_2': [NonBinary_RobustScaler()],
'preprocessing_3': [None, NonBinary_PCA(), VarianceThreshold(threshold)],
'classifier__C': [0.001, 0.01, 0.1, 1, 10, 100],
'classifier__penalty': ['l1','l2']}]
# Set the number of cores to be used
cores_used = cpu_count() - 1
cores_used
cores_used = 1
# Set verbosity
verbosity = 1
# Execute Grid search
logreg_grid = GridSearchCV(pipe, param_grid, cv=5, scoring='roc_auc',
verbose=verbosity, n_jobs=cores_used)
logreg_grid.fit(X_train, y_train)
print("Best logistic regression params:\n{}\n".format(logreg_grid.best_params_))
print("Best cross-validated logistic regression score: {:.2f}".format(logreg_grid.best_score_))
# Save the grid search object as a pickle file
models_folder = os.path.join(proj_root,
"models")
logreg_gridsearch_file_name = 'logreg_gridsearch_pickle.pkl'
logreg_gridsearch_path = os.path.join(models_folder,logreg_gridsearch_file_name)
joblib.dump(logreg_grid, logreg_gridsearch_path)
best_logreg_pipeline_file_name = 'best_logreg_pipeline_pickle.pkl'
best_logreg_pipeline_path = os.path.join(models_folder,
best_logreg_pipeline_file_name)
joblib.dump(logreg_grid.best_estimator_, best_logreg_pipeline_path)
###Output
Fitting 5 folds for each of 36 candidates, totalling 180 fits
###Markdown
Read in the best pipeline
###Code
# best_pipeline_file_name = 'pipeline_pickle_20171029.pkl'
best_pipeline_file_name = 'pipeline_pickle.pkl'
best_pipeline_path = os.path.join(models_folder,
best_pipeline_file_name)
clf = joblib.load(best_pipeline_path)
###Output
_____no_output_____
###Markdown
Read in the best logistic regression pipeline
###Code
best_logreg_pipeline_file_name = 'best_logreg_pipeline_pickle.pkl'
best_logreg_pipeline_path = os.path.join(models_folder,
best_logreg_pipeline_file_name)
logreg_clf = joblib.load(best_logreg_pipeline_path)
###Output
_____no_output_____
###Markdown
Check AUC scores
###Code
cross_val_results = cross_val_score(clf,
X_train,
y_train,
scoring="roc_auc",
cv=5,
n_jobs=1)
results_mean = np.mean(cross_val_results)
print("Best pipeline:")
print("Mean Cross validation AUC:\n{:.3f}\n".format(results_mean))
cross_val_results_logreg = cross_val_score(logreg_clf,
X_train,
y_train,
scoring="roc_auc",
cv=5,
n_jobs=1)
results_mean_logreg = np.mean(cross_val_results_logreg)
print("Best logistic regression pipeline:")
print("Mean Cross validation AUC:\n{:.3f}\n".format(results_mean_logreg))
###Output
Best logistic regression pipeline:
Mean Cross validation AUC:
0.767
###Markdown
Best logistic regression pipeline: Mean Cross validation AUC: 0.771
###Code
clf.fit(X_train, y_train)
auc_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:,1])
print("Train AUC:\n{:.3f}\n".format(auc_train))
auc_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:,1])
print("Test AUC:\n{:.3f}\n".format(auc_test))
dummy_clf = DummyClassifier(strategy='most_frequent',
random_state=42)
dummy_clf.fit(X_train, y_train)
dummy_auc_train = roc_auc_score(y_train,
dummy_clf.predict_proba(X_train)[:,1])
print("Dummy Train AUC:\n{:.3f}\n".format(dummy_auc_train))
dummy_auc_test = roc_auc_score(y_test,
dummy_clf.predict_proba(X_test)[:,1])
print("Dummy Test AUC:\n{:.3f}\n".format(dummy_auc_test))
###Output
Dummy Test AUC:
0.500
###Markdown
Plot the Receiver Operating Characteristic Curves
###Code
probs = clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'b', label = 'XGBoost Test AUC = %0.3f' % roc_auc)
probs = logreg_clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'g', label = 'Logistic Regression\nTest AUC = %0.3f' % roc_auc)
#plt.plot([0, 1], [0, 1],'k', label = 'Baseline AUC = 0.500' )
probs = dummy_clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'r--', label = 'Dummy Model AUC = %0.3f' % roc_auc)
plt.title('Receiver Operating Characteristic')
plt.legend(loc = 'lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
# figure file_name
fig_file_name = 'roc_curve'
# figure file_path
fig_path = os.path.join(figures_dir,
fig_file_name)
# Save the figure
plt.savefig(fig_path, dpi = 300)
plt.plot([0, 1], [0, 1],'k', label = 'Baseline AUC = 0.50' )
probs = clf.predict_proba(X_train)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_train, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_train, preds)
plt.plot(fpr, tpr, 'b', label = 'Train AUC = %0.2f' % roc_auc)
probs = clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'g', label = 'Test AUC = %0.2f' % roc_auc)
probs = dummy_clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'r--', label = 'Dummy Model AUC = %0.2f' % roc_auc)
plt.title('Receiver Operating Characteristic')
plt.legend(loc = 'lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.savefig('roc_curve.png', dpi = 300)
###Output
_____no_output_____
###Markdown
Check accuracy scores
###Code
cross_val_accuracy = cross_val_score(clf,
X_train,
y_train,
scoring="accuracy",
cv=5,
n_jobs=1,
verbose=1)
accuracy_mean = np.mean(cross_val_accuracy)
print("Mean Cross validation accuracy:\n{:.3f}\n".format(accuracy_mean))
dummy_cross_val_accuracy = cross_val_score(dummy_clf,
X_train,
y_train,
scoring="accuracy",
cv=5,
n_jobs=1)
dummy_accuracy_mean = np.mean(dummy_cross_val_accuracy)
print("Baseline accuracy:\n{:.3f}\n".format(dummy_accuracy_mean))
accuracy_train = accuracy_score(y_train,
clf.predict(X_train))
print("Train Accuracy:\n{:.3f}\n".format(accuracy_train))
print("Train Error Rate:\n{:.3f}\n".format(1 - accuracy_train))
accuracy_test = accuracy_score(y_test,
clf.predict(X_test))
print("Test Accuracy:\n{:.3f}\n".format(accuracy_test))
print("Test Error Rate:\n{:.3f}\n".format(1 - accuracy_test))
###Output
Test Accuracy:
0.821
Test Error Rate:
0.179
###Markdown
Save the trained model object as a pickle file
###Code
clf.fit(X_train, y_train)
# trained_model = 'trained_model_20171029.pkl'
trained_model = 'trained_model.pkl'
trained_model_path = os.path.join(models_folder,
trained_model)
joblib.dump(clf, trained_model_path)
###Output
_____no_output_____
###Markdown
Load trained model
###Code
# Save path to the `models` folder
models_folder = os.path.join(proj_root,
"models")
trained_model = 'trained_model.pkl'
trained_model_path = os.path.join(models_folder,
trained_model)
clf = joblib.load(trained_model_path)
clf
###Output
_____no_output_____
###Markdown
Lift Charts
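The functions in the next cell build a lift chart by sweeping the classification threshold and summarize it with an area ratio. Writing $A_{model}$, $A_{best}$ and $A_{baseline}$ for the areas under the model curve, the theoretically best curve and the diagonal baseline, the code computes
$$\text{area ratio} = \frac{A_{model} - A_{baseline}}{A_{best} - A_{baseline}},$$
so 0 corresponds to a model that orders examples no better than chance and 1 to a perfect ordering.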
###Code
def lift_chart_area_ratio(clf, X, y):
"""
"""
# Create an array of classification thresholds
# ranging from 0 to 1.
thresholds = np.arange(0.0, 1.0001, 0.0001)[np.newaxis, :]
true_actual = (y == 1)[:, np.newaxis]
false_actual = (y != 1)[:, np.newaxis]
predicted_probabilities = clf.predict_proba(X)[:,1][:, np.newaxis]
predicted_true = np.greater(predicted_probabilities, thresholds)
tp = true_actual * predicted_true
fp = false_actual * predicted_true
true_positive_count = np.sum(tp, axis=0)
false_positive_count = np.sum(fp, axis=0)
total = true_positive_count + false_positive_count
# Theoretically best curve
tp_best = np.clip(total, 0, np.max(true_positive_count))
#Calculate area ratio
area_best = np.abs(np.trapz(tp_best, total))
area_model = np.abs(np.trapz(true_positive_count, total))
area_baseline = np.max(total) * np.max(true_positive_count) / 2
area_ratio = (area_model - area_baseline) / \
(area_best - area_baseline)
return area_ratio, true_positive_count, total, tp_best
def plot_lift_chart(total,
true_positive_count,
tp_best,
title,
fname):
"""
"""
plt.plot(total,
true_positive_count,
'r',
label = 'Model Curve')
plt.plot(total,
tp_best,
'b',
label = 'Theoretically Best Curve')
plt.plot([0, np.max(total)],
[0, np.max(true_positive_count)],
'k',
label = 'Baseline Curve' )
plt.title(title)
plt.legend(loc = 'lower right')
plt.xlim(xmin=0)
plt.ylim(ymin=0)
plt.ylabel('True Positives')
plt.xlabel('True Positives + False Positives')
plt.savefig(fname, dpi = 300)
###Output
_____no_output_____
###Markdown
Train Set Lift Chart Area Ratio
###Code
area_ratio_train, true_positive_count_train, \
total_train, tp_best_train = \
lift_chart_area_ratio(clf, X_train, y_train)
title = 'Lift Chart - Training Set\n' + \
'(Area Ratio = {:.3f})'.format(area_ratio_train)
# figure file_name
fig_file_name = 'lift_chart_train'
# figure file_path
fig_path = os.path.join(figures_dir,
fig_file_name)
plot_lift_chart(total_train,
true_positive_count_train,
tp_best_train,
title,
fig_path)
print("Area ratio:\t",
"{:.3f}".format(area_ratio_train))
area_ratio_test, true_positive_count_test, \
total_test, tp_best_test = \
lift_chart_area_ratio(clf, X_test, y_test)
title = 'Lift Chart - Test Set\n' + \
'(Area Ratio = {:.3f})'.format(area_ratio_test)
# figure file_name
fig_file_name = 'lift_chart_test'
# figure file_path
fig_path = os.path.join(figures_dir,
fig_file_name)
plot_lift_chart(total_test,
true_positive_count_test,
tp_best_test,
title,
fig_path)
print("Area ratio:\t",
"{:.3f}".format(area_ratio_test))
###Output
Area ratio: 0.616
|
examples/convert.ipynb | ###Markdown
Example notebook for the functions contained in cry_convert.py CRYSTAL pymatgen cry_out2pmg function
###Code
from crystal_functions.file_readwrite import Crystal_output
from crystal_functions.convert import cry_out2pmg
cry_output = Crystal_output()
cry_output.read_cry_output('data/mgo_optgeom.out')
pmg_structure = cry_out2pmg(cry_output,initial=False)
pmg_structure.lattice.matrix
###Output
_____no_output_____
###Markdown
cry_gui2pmg function
###Code
from crystal_functions.file_readwrite import Crystal_output, Crystal_gui
from crystal_functions.convert import cry_gui2pmg
gui_object = Crystal_gui()
gui_object.read_cry_gui('data/mgo.gui')
mgo_pmg = cry_gui2pmg(gui_object)
mgo_pmg.cart_coords
###Output
_____no_output_____
###Markdown
cry_pmg2gui function
###Code
from pymatgen.core.surface import Structure, Lattice
from crystal_functions.convert import cry_pmg2gui
from crystal_functions.file_readwrite import write_crystal_gui
lattice = Lattice.cubic(3.)
mgo_pmg_obj = Structure(lattice, ["Mg", "O"],
[[0,0,0], [.5,.5,.5]])
mgo_gui = cry_pmg2gui(mgo_pmg_obj, symmetry=True)
write_crystal_gui('data/mgo_gui_from_pmg.gui',mgo_gui)
###Output
_____no_output_____
###Markdown
cry_bands2pmg function
###Code
from crystal_functions.file_readwrite import Crystal_output, Properties_output
from crystal_functions.convert import cry_bands2pmg
###Output
_____no_output_____
###Markdown
Read the band file and convert to a pymatgen object
###Code
cry_output = Crystal_output()
cry_output.read_cry_output('data/mgo_optgeom.out')
cry_bands = Properties_output().read_cry_bands('data/mgo_BAND_dat.BAND')
bs = cry_bands2pmg(cry_output,cry_bands,labels=['\\Gamma','B','C','\\Gamma','E'])
###Output
_____no_output_____
###Markdown
Plot the bands
###Code
%matplotlib inline
from pymatgen.electronic_structure.plotter import BSPlotter
bsplot = BSPlotter(bs)
bsplot.get_plot(ylim=(-10, 10), zero_to_efermi=True)
###Output
_____no_output_____
###Markdown
CRYSTAL ASE cry_gui2ase function
###Code
from crystal_functions.file_readwrite import Crystal_gui
from crystal_functions.convert import cry_gui2ase
mgo_gui = Crystal_gui()
mgo_gui.read_cry_gui('data/mgo_optgeom.gui')
mgo_ase = cry_gui2ase(mgo_gui)
mgo_ase
###Output
_____no_output_____
###Markdown
cry_ase2gui function
###Code
from crystal_functions.file_readwrite import write_crystal_gui
from crystal_functions.convert import cry_ase2gui
from ase.build import bulk
copper_ase = bulk('Cu', 'fcc', a=3.6)
copper_gui = cry_ase2gui(copper_ase, symmetry=True)
write_crystal_gui('data/copper_from_ase.gui',copper_gui)
###Output
_____no_output_____
###Markdown
cry_out2ase function
###Code
from crystal_functions.file_readwrite import Crystal_output
from crystal_functions.convert import cry_out2ase
mgo_out = Crystal_output()
mgo_out.read_cry_output('data/mgo_optgeom.out')
mgo_ase = cry_out2ase(mgo_out)
mgo_ase
###Output
_____no_output_____
###Markdown
Saving structure files (.cif and .xyz) cry_gui2cif function
###Code
from crystal_functions.file_readwrite import Crystal_gui
from crystal_functions.convert import cry_gui2cif
mgo_gui = Crystal_gui()
mgo_gui.read_cry_gui('data/mgo_optgeom.gui')
cif_file_name = 'data/mgo_optgeom.cif'
cry_gui2cif(cif_file_name,mgo_gui)
! cat data/mgo_optgeom.cif
###Output
# generated using pymatgen
data_MgO
_symmetry_space_group_name_H-M 'P 1'
_cell_length_a 2.99828833
_cell_length_b 2.99828833
_cell_length_c 2.99828833
_cell_angle_alpha 60.00000000
_cell_angle_beta 60.00000000
_cell_angle_gamma 60.00000000
_symmetry_Int_Tables_number 1
_chemical_formula_structural MgO
_chemical_formula_sum 'Mg1 O1'
_cell_volume 19.05922268
_cell_formula_units_Z 1
loop_
_symmetry_equiv_pos_site_id
_symmetry_equiv_pos_as_xyz
1 'x, y, z'
loop_
_atom_site_type_symbol
_atom_site_label
_atom_site_symmetry_multiplicity
_atom_site_fract_x
_atom_site_fract_y
_atom_site_fract_z
_atom_site_occupancy
Mg Mg0 1 0.00000000 0.00000000 0.00000000 1
O O1 1 -0.50000000 -0.50000000 -0.50000000 1
###Markdown
cry_out2cif function
###Code
from crystal_functions.file_readwrite import Crystal_output
from crystal_functions.convert import cry_gui2cif
mgo_out = Crystal_output()
mgo_out.read_cry_output('data/mgo_optgeom.out')
cif_file_name = 'data/mgo_optgeom.cif'
# Note: despite the section title, this cell writes the .cif from the mgo_gui
# object built in the cry_gui2cif example above rather than from mgo_out.
cry_gui2cif(cif_file_name, mgo_gui)
###Output
_____no_output_____
###Markdown
Converting a simple NetCDF file to a TileDB array Import packages
###Code
import netCDF4
import numpy as np
import tiledb
from tiledb.cf import Group, GroupSchema
from tiledb.cf.engines.netcdf4_engine import NetCDF4ConverterEngine
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Create an example NetCDF file Example datasetCreate two 100x100 numpy arrays:
###Code
x_data = np.linspace(-5.0, 5.0, 100)
y_data = np.linspace(-5.0, 5.0, 100)
xv, yv = np.meshgrid(x_data, y_data, sparse=True)
A1_data = xv + yv
A2_data = np.sin((xv / 2.0) ** 2 + yv ** 2)
###Output
_____no_output_____
###Markdown
If the file does not exist yet, write the example data to a netcdf file:
###Code
netcdf_file = "output/simple1.nc"
vfs = tiledb.VFS()
if not vfs.is_file(netcdf_file):
with netCDF4.Dataset(netcdf_file, mode="w") as dataset:
dataset.setncatts({"title": "Simple dataset for examples"})
dataset.createDimension("x", 100)
dataset.createDimension("y", 100)
A1 = dataset.createVariable("A1", np.float64, ("x", "y"))
A1.setncattr("full_name", "Example matrix A1")
A1.setncattr("description", "x + y")
A1[:, :] = A1_data
A2 = dataset.createVariable("A2", np.float64, ("x", "y"))
A2[:, :] = A2_data
A2.setncattr("full_name", "Example matrix A2")
A2.setncattr("description", "sin((x/2)^2 + y^2")
x1 = dataset.createVariable("x_data", np.float64, ("x",))
x1[:] = x_data
y = dataset.createVariable("y_data", np.float64, ("y",))
y[:] = y_data
print(f"Created example NetCDF file `{netcdf_file}`.")
else:
print(f"Example NetCDF file `{netcdf_file}` already exists.")
###Output
_____no_output_____
###Markdown
Examine the variables in the netcdf file:
###Code
netcdf_data = netCDF4.Dataset(netcdf_file)
print(netcdf_data.variables)
###Output
_____no_output_____
###Markdown
Convert the NetCDF file to a TileDB arrayBefore converting the file create a converter that contains the parameters for the conversion. Optionally the following parameters can be added:* `unlimited_dim_size`: The size of the domain for TileDB dimensions created from unlimited NetCDF dimensions.* `dim_dtype`: The numpy dtype for TileDB dimensions.* `tiles_by_var`: A map from the name of a NetCDF variable to the tiles of the dimensions of the variable in the generated NetCDF array.* `tiles_by_dims`: A map from the name of NetCDF dimensions defining a variable to the tiles of those dimensions in the generated NetCDF array.* `collect_attrs`: If True, store all attributes with the same dimensions in the same array. Otherwise, store each attribute in a scalar array.* `collect_scalar_attrs`: If True, store all attributes with no dimensions in the same array. This is always done if collect_attributes=True.For example, the below converter will create a separate array for each of the attributes in the NetCDf file with `collect_attrs=False`:```converter = NetCDF4ConverterEngine.from_file( netcdf_file, collect_attrs = False)```
###Code
converter = NetCDF4ConverterEngine.from_file(
netcdf_file,
coords_to_dims=False,
collect_attrs=True,
dim_dtype=np.uint32,
tiles_by_dims={("x", "y"): (20,20), ("x",): (20,), ("y",): (20,)},
)
converter
###Output
_____no_output_____
###Markdown
Rename the array names to be more descriptive:
###Code
converter.rename_array('array0', 'x')
converter.rename_array('array1', 'matrices')
converter.rename_array('array2', 'y')
###Output
_____no_output_____
###Markdown
Run the conversions to create two dense TileDB arrays:
###Code
group_uri = "output/tiledb_simple1"
converter.convert_to_group(group_uri)
###Output
_____no_output_____
###Markdown
Examine the TileDB group schema
###Code
group_schema = GroupSchema.load(group_uri)
group_schema
###Output
_____no_output_____
###Markdown
Examine the data in the arraysOpen the attributes from the generated TileDB group:
###Code
with Group(group_uri, attr="x.data") as group:
x = group.array[:]
with Group(group_uri, attr="y.data") as group:
y = group.array[:]
with Group(group_uri, array="matrices") as group:
data = group.array[...]
A1 = data["A1"]
A2 = data["A2"]
a1_description = group.get_attr_metadata("A1")["description"]
a2_description = group.get_attr_metadata("A2")["description"]
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].contourf(x, y, A1);
axes[0].set_title(a1_description);
axes[1].contourf(x, y, A2);
axes[1].set_title(a2_description);
###Output
_____no_output_____
###Markdown
Binary classification Load dataset
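This fragment begins mid-notebook, so its setup cell is not shown here; a minimal sketch of the imports the cells below appear to rely on (the aliases are assumptions inferred from how the names are used):
```python
import tempfile

import onnx
import onnxruntime as rt
import pandas as pd

import ebm2onnx
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
```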
###Code
df = pd.read_csv('titanic_train.csv')
df = df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
Train model
###Code
feature_columns = ['Age', 'Fare', 'Pclass', 'Embarked']
label_column = "Survived"
y = df[[label_column]]
le = LabelEncoder()
y_enc = le.fit_transform(y)
x = df[feature_columns]
x_train, x_test, y_train, y_test = train_test_split(x, y_enc)
ebm = ExplainableBoostingClassifier(
interactions=2,
feature_types=['continuous', 'continuous', 'continuous','categorical']
)
ebm.fit(x_train, y_train)
# A look at the generated model
ebm_global = ebm.explain_global()
show(ebm_global)
###Output
_____no_output_____
###Markdown
Convert model
###Code
onnx_model = ebm2onnx.to_onnx(
model=ebm,
dtype=ebm2onnx.get_dtype_from_pandas(x_train),
name="ebm",
)
###Output
_____no_output_____
###Markdown
Predict with EBM implementation
###Code
ebm_pred = ebm.predict(x_test)
pd.DataFrame(precision_recall_fscore_support(y_test, ebm_pred, average=None), index=['Precision', 'Recall', 'FScore', 'Support'])
###Output
_____no_output_____
###Markdown
Predict with ONNX Runtime
###Code
_, filename = tempfile.mkstemp()
onnx.save_model(onnx_model, filename)
sess = rt.InferenceSession(filename)
onnx_pred = sess.run(None, {
'Age': x_test['Age'].values,
'Fare': x_test['Fare'].values,
'Pclass': x_test['Pclass'].values,
'Embarked': x_test['Embarked'].values,
})
print("metrics of output {}:".format(sess.get_outputs()[0].name))
pd.DataFrame(precision_recall_fscore_support(y_test, onnx_pred[0], average=None), index=['Precision', 'Recall', 'FScore', 'Support'])
###Output
metrics of output predict_0:
###Markdown
Example notebook for the functions contained in cry_convert.py cry_out2pmg function
###Code
from crystal_functions.file_readwrite import Crystal_output
from crystal_functions.convert import cry_out2pmg
cry_output = Crystal_output('data/mgo_optgeom.out')
pmg_structure = cry_out2pmg(cry_output,initial=False)
pmg_structure.lattice.matrix
###Output
_____no_output_____
###Markdown
cry_gui2pmg function
###Code
from crystal_functions.file_readwrite import Crystal_output
from crystal_functions.convert import cry_gui2pmg
pmg_structure = cry_gui2pmg('data/mgo_optgeom.gui')
pmg_structure.cart_coords
###Output
_____no_output_____
###Markdown
cry_bands2pmg function
###Code
from crystal_functions.file_readwrite import Crystal_output, Properties_output
from crystal_functions.convert import cry_bands2pmg
###Output
_____no_output_____
###Markdown
Read the band file and convert to a pymatgen object
###Code
cry_output = Crystal_output('data/mgo_optgeom.out')
cry_bands = Properties_output('data/mgo_BAND_dat.BAND').read_cry_bands()
bs = cry_bands2pmg(cry_output,cry_bands,labels=['\\Gamma','B','C','\\Gamma','E'])
###Output
_____no_output_____
###Markdown
Plot the bands
###Code
%matplotlib inline
from pymatgen.electronic_structure.plotter import BSPlotter
bsplot = BSPlotter(bs)
bsplot.get_plot(ylim=(-10, 10), zero_to_efermi=True)
###Output
<frozen importlib._bootstrap>:228: RuntimeWarning: scipy._lib.messagestream.MessageStream size changed, may indicate binary incompatibility. Expected 56 from C header, got 64 from PyObject
|
code/chap20-MINE.ipynb | ###Markdown
Modeling and Simulation in PythonChapter 20Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Dropping penniesI'll start by getting the units we need from Pint.
###Code
m = UNITS.meter
s = UNITS.second
###Output
_____no_output_____
###Markdown
And defining the initial state.
###Code
init = State(y=381 * m,
v=0 * m/s)
###Output
_____no_output_____
###Markdown
Acceleration due to gravity is about 9.8 m / s$^2$.
###Code
g = 9.8 * m/s**2
###Output
_____no_output_____
###Markdown
When we call `odeint`, we need an array of timestamps where we want to compute the solution.I'll start with a duration of 10 seconds.
###Code
t_end = 10 * s
###Output
_____no_output_____
###Markdown
Now we make a `System` object.
###Code
system = System(init=init, g=g, t_end=t_end)
###Output
_____no_output_____
###Markdown
And define the slope function.
###Code
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
unpack(system)
dydt = v
dvdt = -g
return dydt, dvdt
###Output
_____no_output_____
###Markdown
It's always a good idea to test the slope function with the initial conditions.
###Code
dydt, dvdt = slope_func(init, 0, system)
print(dydt)
print(dvdt)
###Output
0.0 meter / second
-9.8 meter / second ** 2
###Markdown
Now we're ready to call `run_ode_solver`
###Code
results, details = run_ode_solver(system, slope_func, max_step=0.5*s)
details.message
###Output
_____no_output_____
###Markdown
Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
And here's position as a function of time:
###Code
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
###Output
Saving figure to file figs/chap09-fig01.pdf
###Markdown
Onto the sidewalk. To figure out when the penny hit the sidewalk, we can use `crossings`, which finds the times where a `Series` passes through a given value.
###Code
t_crossings = crossings(results.y, 0)
###Output
_____no_output_____
###Markdown
For this example there should be just one crossing, the time when the penny hits the sidewalk.
###Code
t_sidewalk = t_crossings[0] * s
###Output
_____no_output_____
###Markdown
We can compare that to the exact result. Without air resistance, we have $v = -g t$ and $y = 381 - g t^2 / 2$. Setting $y=0$ and solving for $t$ yields $t = \sqrt{\frac{2 y_{init}}{g}}$
###Code
sqrt(2 * init.y / g)
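# A quick sanity check (sketch): the crossing time found earlier should agree
# closely with this analytic value, since the model has no air resistance.
abs(t_crossings[0] * s - sqrt(2 * init.y / g))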
###Output
_____no_output_____
###Markdown
The estimate is accurate to about 10 decimal places. Events. Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. `run_ode_solver` provides exactly the tool we need, **event functions**. Here's an event function that returns the height of the penny above the sidewalk:
###Code
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
###Output
_____no_output_____
###Markdown
And here's how we pass it to `run_ode_solver`. The solver should run until the event function returns 0, and then terminate.
###Code
results, details = run_ode_solver(system, slope_func, events=event_func)
details
###Output
_____no_output_____
###Markdown
The message from the solver indicates the solver stopped because the event we wanted to detect happened. Here are the results:
###Code
results
###Output
_____no_output_____
###Markdown
With the `events` option, the solver returns the actual time steps it computed, which are not necessarily equally spaced. The last time step is when the event occurred:
###Code
t_sidewalk = get_last_label(results) * s
###Output
_____no_output_____
###Markdown
Unfortunately, `run_ode_solver` does not carry the units through the computation, so we have to put them back at the end. We could also get the time of the event from `details`, but it's a minor nuisance because it comes packed in an array:
###Code
details.t_events[0][0] * s
###Output
_____no_output_____
###Markdown
The result is accurate to about 15 decimal places. We can also check the velocity of the penny when it hits the sidewalk:
###Code
v_sidewalk = get_last_value(results.v) * m / s
###Output
_____no_output_____
###Markdown
And convert to kilometers per hour.
###Code
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
###Output
_____no_output_____
###Markdown
If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h. So it's a good thing there is air resistance. Under the hood. Here is the source code for `crossings` so you can see what's happening under the hood:
###Code
%psource crossings
###Output
_____no_output_____
###Markdown
The [documentation of InterpolatedUnivariateSpline is here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.InterpolatedUnivariateSpline.html).And you can read the [documentation of `scipy.integrate.solve_ivp`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html) to learn more about how `run_ode_solver` works. Exercises**Exercise:** Here's a question from the web site [Ask an Astronomer](http://curious.astro.cornell.edu/about-us/39-our-solar-system/the-earth/other-catastrophes/57-how-long-would-it-take-the-earth-to-fall-into-the-sun-intermediate):"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."Use `run_ode_solver` to answer this question.Here are some suggestions about how to proceed:1. Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.2. When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.3. Express your answer in days, and plot the results as millions of kilometers versus days.If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.You might also be interested to know that [it's actually not that easy to get to the Sun](https://www.theatlantic.com/science/archive/2018/08/parker-solar-probe-launch-nasa/567197/).
###Code
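# The slope function below uses Newton's law of universal gravitation,
# F = G * m1 * m2 / r**2, with G given in N m**2 / kg**2; the force is
# attractive, so the acceleration enters dv/dt with a negative sign.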
m = UNITS.meters
kg = UNITS.kilograms
N = UNITS.newtons
s = UNITS.seconds
AU = UNITS.astronomical_unit
init = State(r=1.496 * 10 ** 11 * m,
v=0 * m/s)
r_0 = (1 * AU).to_base_units()
v_0 = 0 * m / s
init = State(r=r_0,
v=v_0)
r_earth = 6.371e6 * m
r_sun = 695.508e6 * m
def universal_gravitation(state, system):
"""Computes gravitational force.
state: State object with position and velocity
system: System object with m1, m2, and G
"""
r, v = state
unpack(system)
force = G * m1 * m2 / r**2
return force
system = System(init=init,
G=6.674e-11 * N / kg**2 * m**2,
m1=1.989e30 * kg,
r_final=r_sun + r_earth,
m2=5.972e24 * kg,
t_end=1e7 * s)
def slope_func(state, t, system):
    """Compute derivatives of the state: distance from the Sun and velocity."""
    r, v = state
    unpack(system)
    force = universal_gravitation(state, system)
    drdt = v
    dvdt = -force / m2
    return drdt, dvdt
drdt, dvdt = slope_func(init, 0, system)
print(drdt)
print(dvdt)
def event_func(state, t, system):
    """Return the distance between the surface of the Earth and the surface of the Sun.
    """
    r, v = state
    return r - system.r_final
results, details = run_ode_solver(system, slope_func,events=event_func)
results
plot(results.r)
results.index /= 60 * 60 * 24
results.r /= 1e9
plot(results.r, label='r')
decorate(xlabel='Time (day)',
ylabel='Distance from sun (million km)')
def event_func(state, t, system):
r, v = state
return r - system.r_final
event_func(init, 0, system)
results, details = run_ode_solver(system, slope_func,events=event_func)
results
plot(results.r)
###Output
_____no_output_____ |
examples/futures.ipynb | ###Markdown
Import the library
###Code
import efinance as ef
###Output
_____no_output_____
###Markdown
Usage examples. Get futures information for the four major exchanges
###Code
ef.futures.get_futures_base_info()
###Output
_____no_output_____
###Markdown
Get K-line data for a single futures contract
###Code
secid = '115.ZCM'
ef.futures.get_quote_history(secid)
###Output
_____no_output_____
###Markdown
Get historical K-line data for multiple futures contracts
###Code
secids = ['115.ZCM','115.ZC109']
ef.futures.get_quote_history(secids)
###Output
Processing: 115.ZCM: 100%|██████████| 2/2 [00:00<00:00, 3.69it/s]
###Markdown
Import the library
###Code
import efinance as ef
###Output
_____no_output_____
###Markdown
Usage examples. Get futures information for the exchanges
###Code
ef.futures.get_futures_base_info()
###Output
_____no_output_____
###Markdown
Get real-time futures quotes
###Code
ef.futures.get_realtime_quotes()
###Output
_____no_output_____
###Markdown
Get K-line data for a single futures contract
###Code
quote_id = '115.ZCM'
ef.futures.get_quote_history(quote_id)
###Output
_____no_output_____
###Markdown
5-minute K-line data for a single futures contract
###Code
quote_id = '115.ZCM'
freq = 5
ef.futures.get_quote_history(quote_id,klt=freq)
###Output
_____no_output_____
###Markdown
Get historical K-line data for multiple futures contracts
###Code
# A list of several quote IDs
quote_ids = ['115.ZCM','115.ZC109']
# Fetch daily K-line data for several futures contracts in one call
futures_df = ef.futures.get_quote_history(quote_ids)
# Inspect the data for quote ID '115.ZCM'
futures_df['115.ZCM']
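# The result for a list of quote IDs can be indexed by ID (as shown above);
# a sketch for inspecting the other contract in the same way:
futures_df['115.ZC109']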
###Output
Processing => 115.ZCM: 50%|█████ | 1/2 [00:00<00:00, 3.23it/s]
###Markdown
Import the library
###Code
import efinance as ef
###Output
_____no_output_____
###Markdown
Usage examples. Get futures information for the four major exchanges
###Code
ef.futures.get_futures_base_info()
###Output
_____no_output_____
###Markdown
Get K-line data for a single futures contract
###Code
secid = '115.ZCM'
ef.futures.get_quote_history(secid)
###Output
_____no_output_____
###Markdown
Get historical K-line data for multiple futures contracts
###Code
secids = ['115.ZCM','115.ZC109']
ef.futures.get_quote_history(secids)
###Output
_____no_output_____ |
audrey_demafo_DataDrivenOptimization (1).ipynb | ###Markdown
$$\underline{\textbf{Solving Optimization Problem using Python Programming}}$$ $\textbf{Covid-19 dataset WHO-COVID-19-global-data.csv}$ Import Package $\textbf{Reading the data}$
###Code
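# The import cell is not shown in this excerpt; the set below is an assumed,
# minimal set covering the pandas, scikit-learn and plotting calls used later.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso, LogisticRegression
from sklearn.metrics import mean_squared_error, accuracy_score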
pd.read_csv('WHO-COVID-19-global-data.csv')
Covid = pd.read_csv('WHO-COVID-19-global-data.csv',parse_dates=[0])
Covid
###Output
_____no_output_____
###Markdown
1. $\textbf{Prediction of the number of New_Cases with Linear_Regression}$ $\textbf{Extracting the columns of dates from the initial dataset}$
###Code
col=Covid.iloc[:,0]
col
###Output
_____no_output_____
###Markdown
$\textbf{Building a new dataset with year, month and day}$
###Code
A=pd.DataFrame({'Year': col.dt.year,'Month':col.dt.month,'Day':col.dt.day,'Country':Covid['Country'], 'New_cases':Covid['New_cases'],'New_deaths':Covid['New_deaths'], 'Country_code':Covid['Country_code'], 'Cumulative_cases':Covid['Cumulative_cases'], 'Cumulatives_deaths':Covid['Cumulative_deaths'],'WHO_region':Covid['WHO_region']})
A
###Output
_____no_output_____
###Markdown
$\textbf{Building a dataset for only Angola}$
###Code
D = A[(A['Year']==2021) & (A['Month']==11) & (A['Country']=='Angola')]
D
###Output
_____no_output_____
###Markdown
$\textbf{Training the data}$
###Code
x = D[['Day']].values
Y = D[['New_cases']].values
x_train,x_test,Y_train,Y_test = train_test_split(x,Y,test_size=0.5,random_state=0)
reg = LinearRegression()
reg.fit(x_train,Y_train)
###Output
_____no_output_____
###Markdown
$\textbf{Prediction}$
###Code
y_pred = reg.predict(x_test)
###Output
_____no_output_____
###Markdown
$\textbf{Prediction for November 20, 2021}$
###Code
n_20 = reg.predict([[20]]).sum()
print('The number of new cases on November 20, 2021 is:', n_20)
###Output
The number of new cases on November 20, 2021 is: 14.809203142536475
###Markdown
$\textbf{Prediction for November 21, 2021}$
###Code
n_21 = reg.predict([[21]]).sum()
print('The number of new cases on November 21, 2021 is:', n_21)
###Output
The number of new cases on November 21, 2021 is: 12.770482603815942
###Markdown
Plot
###Code
plt.scatter(y_pred,Y_test,color='b')
plt.plot(Y_test,Y_test,color='r',linewidth=5)
###Output
_____no_output_____
###Markdown
$\textbf{Mean Square Error for linear regression}$
###Code
err = mean_squared_error(Y_test, y_pred)
print('The mean squared error is:', err)
###Output
The mean squared error is: 294.06763828583894
###Markdown
$\textbf{Lasso Regression}$ $\textbf{Training the dataset}$
###Code
reg1=Lasso(alpha = 0.3)
reg1.fit(x_train,Y_train)
y1_pred=reg1.predict(x_test)
###Output
_____no_output_____
###Markdown
$\textbf{Plot}$
###Code
plt.scatter(y1_pred,Y_test,color='b')
plt.plot(Y_test,Y_test,color='r',linewidth=3)
###Output
_____no_output_____
###Markdown
$\textbf{The mean Square Error for Lasso Regression}$
###Code
err1 = mean_squared_error(Y_test, y1_pred)
print('The mean squared error is:',err1)
###Output
The mean squared error is: 215.54175287920984
###Markdown
$\textbf{Prediction for November 20, 2021}$
###Code
reg1.predict([[20]]).sum()
###Output
_____no_output_____
###Markdown
$\textbf{Prediction for November 21, 2021}$
###Code
reg1.predict([[21]]).sum()
###Output
_____no_output_____
###Markdown
The Lasso model is more accurate than the LinearRegression model because its mean squared error is lower. However, it still does not give a good approximation. In conclusion, linear regression models are not well suited to this prediction problem. 2. $\textbf{Prediction of the average number of new deaths for the whole of Africa}$
###Code
DD = A[(A['Year']==2021) & (A['Month']==11) & (A['WHO_region']=='AFRO')]
D1 = DD.groupby(['Year','Day','Month','Country']).mean()
D2 = D1.reset_index()
D1
xx = D2[['Day']].values
YY = D2[['New_deaths']].values
xx_train,xx_test,YY_train,YY_test = train_test_split(xx,YY,test_size=0.2,random_state=0)
reg2 = LinearRegression()
reg2.fit(xx_train,YY_train)
yy_pred = reg2.predict(xx_test)
###Output
_____no_output_____
###Markdown
$\textbf{Prediction for November 20, 2021}$
###Code
reg2.predict(np.array([[20]]))
###Output
_____no_output_____
###Markdown
$\textbf{Prediction for November 21, 2021}$
###Code
reg2.predict(np.array([[21]]))
###Output
_____no_output_____
###Markdown
$$\textbf{SAHeart.csv}$$ 1. (a). $\textbf{Uploading the dataset}$
###Code
H = pd.read_csv('SAheart.data')
H
###Output
_____no_output_____
###Markdown
(b). $\textbf{Replacing non-numeric data with a reasonable numerical representation}$
###Code
H_dummies = pd.get_dummies(H,columns=['famhist'])
H_dummies
X_d = H_dummies[['sbp','tobacco','ldl','adiposity','typea','obesity','alcohol','age','famhist_Absent','famhist_Present']].values
###Output
_____no_output_____
###Markdown
2. $\textbf{Training the LogisticRegression Model}$
###Code
y_d = H_dummies.iloc[:,[9]].values
X_d_train,X_d_test,y_d_train,y_d_test = train_test_split(X_d,y_d,test_size=0.25,random_state = 0)
clas = LogisticRegression()
clas.fit(X_d_train,y_d_train)
Y_pred = clas.predict(X_d_test)
Y1 = Y_pred.reshape(-1,1)
Y1
error = mean_squared_error(y_d_test, Y1)
print('The mean squared error is:', error)
###Output
The mean squared error is: 0.2672413793103448
###Markdown
3. $\textbf{Identify if a patient with the following data is at high risk or not}$
###Code
Y_pred = clas.predict([[133, 3.3, 4.6, 34.5,0,1, 52, 30, 32, 44]])
if Y_pred == True:
print ('There is a high probability for the person to get ill.')
else:
print('The probability for the person to have heart disease is low')
###Output
There is a high probability for the person to get ill.
###Markdown
4. $\textbf{The most important factors for heart disease}$
###Code
H_dummies.corr()
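# A compact way to read off the factors most correlated with the target
# (assumption: the target column, index 9 above, is named 'chd'):
H_dummies.corr()['chd'].sort_values(ascending=False)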
###Output
_____no_output_____
###Markdown
The most determinant factors are: - $\textbf{age:}$ around 37.29% - $\textbf{tobacco:}$ around 29.97% - $\textbf{famhist_Present:}$ around 27.23% - $\textbf{ldl:}$ around 26.30% - $\textbf{adiposity:}$ around 25.41% 5. $\textbf{Does having a family history of coronary heart disease affect a patient's chance of having coronary heart disease?}$ According to the correlation between the factors and the target, which is $\textit{coronary heart disease}$, the answer is $\textbf{YES}$: if your family has a history of coronary heart disease, the correlation with your own risk is more than 27%.
###Code
acc = accuracy_score(y_d_test,Y1)
print('The accuracy of the model is:', acc)
###Output
The accuracy of the model is: 0.7327586206896551
|
tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction Run in Google Colab View source on GitHub In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`.[Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
!pip install -q tensorflow-probability
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfe = tf.contrib.eager
try:
tfe.enable_eager_execution()
except ValueError:
pass
import matplotlib.pyplot as plt
from __future__ import print_function
###Output
_____no_output_____
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space.If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob`, (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
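# One probability per coin: P(coin 0 = 1), P(coin 1 = 1), P(coin 2 = 0).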
###Output
_____no_output_____
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode, in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`).Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
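# Equivalent computation with log probabilities (the more common pattern),
# summing instead of multiplying:
tf.reduce_sum(b3.log_prob([1, 1, 0]))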
###Output
_____no_output_____
###Markdown
`Independent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilities, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distributions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
nd_batch.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has an batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ().`.* For the two-dimensional multivariate normals `nd`. `BE = (2).`* For both `b3` and `b3_joint`, `BE = (3).`* For the batch of multivariate normals `ndb`, `BE = (3, 2).`The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar n returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
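# The (2, 2) input is preserved as batch dimensions, so the result is a
# (2, 2) tensor of log probabilities.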
###Output
_____no_output_____
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
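# The (2, 1) input broadcasts against batch shape (3,), giving a (2, 3) result:
# row 0 is P(coin = 0) for each coin, row 1 is P(coin = 1) for each coin.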
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction Run in Google Colab View source on GitHub In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`.[Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfe = tf.contrib.eager
try:
tfe.enable_eager_execution()
except ValueError:
pass
import matplotlib.pyplot as plt
from __future__ import print_function
###Output
_____no_output_____
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space.If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob`, (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode, in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`).Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
###Output
_____no_output_____
###Markdown
`Independent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilities, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distributions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
nd_batch.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has an batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ().`.* For the two-dimensional multivariate normals `nd`. `BE = (2).`* For both `b3` and `b3_joint`, `BE = (3).`* For the batch of multivariate normals `ndb`, `BE = (3, 2).`The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar n returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
###Output
_____no_output_____
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction >[TensorFlow Distributions: A Gentle Introduction](scrollTo=DcriL2xPrG3_)>>[Basic Univariate Distributions](scrollTo=QD5lzFZerG4H)>>[Multivariate Distributions](scrollTo=ztM2d-N9nNX2)>>[Multiple Distributions](scrollTo=57lLzC7MQV-9)>>[Using Independent To Aggregate Batches to Events](scrollTo=t52ptQXvUO07)>>[Batches of Multivariate Distirbutions](scrollTo=INu1viAVXz93)>>[Broadcasting, aka Why Is This So Confusing?](scrollTo=72uiME85SmEH)>>[Going Farther](scrollTo=JpjjIGThrj8Q) In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`.[Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfe = tf.contrib.eager
tfe.enable_eager_execution()
import matplotlib.pyplot as plt
from __future__ import print_function
###Output
_____no_output_____
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space.If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob`, (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode, in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`).Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
###Output
_____no_output_____
###Markdown
`Independent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilities, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distributions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
ndb = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
ndb
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
ndb.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
ndb.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has an batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ().`.* For the two-dimensional multivariate normals `nd`. `BE = (2).`* For both `b3` and `b3_joint`, `BE = (3).`* For the batch of multivariate normals `ndb`, `BE = (3, 2).`The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar n returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
###Output
_____no_output_____
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction Run in Google Colab View source on GitHub In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`.[Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
!pip install -q tensorflow-probability
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfe = tf.contrib.eager
try:
tfe.enable_eager_execution()
except ValueError:
pass
import matplotlib.pyplot as plt
from __future__ import print_function
###Output
_____no_output_____
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space.If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob`, (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode, in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`).Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
###Output
_____no_output_____
###Markdown
`Indpendent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilites, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distirbutions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
nd_batch.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has an batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ().`.* For the two-dimensional multivariate normals `nd`. `BE = (2).`* For both `b3` and `b3_joint`, `BE = (3).`* For the batch of multivariate normals `ndb`, `BE = (3, 2).`The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar n returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
###Output
_____no_output_____
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`. [Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
try:
  tf.compat.v1.enable_eager_execution()
except ValueError:
  pass
import matplotlib.pyplot as plt
###Output
_____no_output_____
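###Markdown
As a quick sanity check (an added sketch), we can confirm that eager execution is actually on before proceeding:
###Code
# Should print True; if False, the cells below would need to run inside a session.
print(tf.executing_eagerly())
###Output
_____no_output_____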
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
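###Markdown
To connect `log_prob` to the familiar density formula, here is a small sketch (an illustrative addition, not part of the original flow) that recomputes the standard normal log-density by hand and compares it with what the distribution returns:
###Code
import numpy as np

# Standard normal log-density: log p(x) = -0.5 * x**2 - 0.5 * log(2 * pi).
x = np.array([0., 2., 4.], dtype=np.float32)
manual_log_prob = -0.5 * x**2 - 0.5 * np.log(2. * np.pi)
print(manual_log_prob)
print(n.log_prob(x))
###Output
_____no_output_____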
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
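###Markdown
As an aside (a sketch added for illustration), `prob` is just the exponential of `log_prob`, so for this Bernoulli both routes recover the success probability of 0.7:
###Code
# exp(log_prob(1)) and prob(1) should both be (numerically close to) 0.7.
print(b.prob(1))
print(tf.exp(b.log_prob(1)))
###Output
_____no_output_____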
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
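###Markdown
Because this multivariate normal has a diagonal covariance, its log-density factorizes into a sum of two independent scalar normal log-densities. A quick check (an added sketch, reusing the `nd` defined above):
###Code
# log p([0., 10.]) = log N(0.; loc=0., scale=1.) + log N(10.; loc=10., scale=4.)
manual = (tfd.Normal(loc=0., scale=1.).log_prob(0.)
          + tfd.Normal(loc=10., scale=4.).log_prob(10.))
print(manual)
print(nd.log_prob([0., 10.]))
###Output
_____no_output_____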
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
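###Markdown
Another common construction (shown here only as a sketch; it is not used in the rest of this notebook) parameterizes the same distribution by a lower-triangular Cholesky factor of the covariance via `MultivariateNormalTriL`:
###Code
# Equivalent distribution, parameterized by the Cholesky factor of the covariance.
cov = [[1., .7], [.7, 1.]]
nd_tril = tfd.MultivariateNormalTriL(
    loc=[0., 5.], scale_tril=tf.linalg.cholesky(cov))
print(nd_tril.log_prob([0., 5.]))
print(nd.log_prob([0., 5.]))  # same value from the full-covariance version
###Output
_____no_output_____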
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space. If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob` (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications), we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
###Output
_____no_output_____
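###Markdown
To make the per-coin semantics concrete, here is the same result computed by hand (an added sketch): the heads probabilities are `[.3, .5, .7]`, so observing `[1, 1, 0]` has per-coin probabilities `[.3, .5, .3]`.
###Code
# P(coin_0 = 1) = 0.3, P(coin_1 = 1) = 0.5, P(coin_2 = 0) = 1 - 0.7 = 0.3.
print([0.3, 0.5, 1. - 0.7])   # hand-computed
print(b3.prob([1, 1, 0]))     # matches (up to float precision)
###Output
_____no_output_____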
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode; in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$th entry was the probability that the $i$th coin takes value $v[i]$. Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$. How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`). Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
###Output
_____no_output_____
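###Markdown
In log space, which is what you would normally use in practice, the same reduction is a sum rather than a product. A sketch (added here, reusing `b3` and `b3_joint` from above):
###Code
# The joint log-probability is the sum of the per-coin log-probabilities.
print(b3_joint.log_prob([1, 1, 0]))
print(tf.reduce_sum(b3.log_prob([1, 1, 0])))
###Output
_____no_output_____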
###Markdown
`Independent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilities, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distributions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
nd_batch.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has a batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the batch and event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ()`.* For the two-dimensional multivariate normal `nd`, `BE = (2)`.* For both `b3` and `b3_joint`, `BE = (3)`.* For the batch of multivariate normals `nd_batch`, `BE = (3, 2)`.The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar `n` returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
###Output
_____no_output_____
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
###Output
_____no_output_____
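###Markdown
The result has shape `(2, 3)`: the first row gives each coin's probability of tails, the second row each coin's probability of heads. A sketch of the equivalent pair of broadcasted calls:
###Code
b3.prob([0])
b3.prob([1])
###Output
_____no_output_____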
###Markdown
Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction Run in Google Colab View source on GitHub In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`.[Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
try:
    tf.compat.v1.enable_eager_execution()
except ValueError:
    pass
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space.If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob`, (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode, in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`). Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
###Output
_____no_output_____
###Markdown
`Independent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilities, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be; a sketch appears after the batched `log_prob` example below). Batches of Multivariate Distributions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
nd_batch.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
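###Markdown
As an aside, here is the earlier fun fact in code: a sketch (not how TFP actually implements it) showing that `Independent` wrapped around a batch of scalar `Normal`s matches `MultivariateNormalDiag`. The names `mvn_diag` and `indep_normals` are just illustrative:
###Code
mvn_diag = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
indep_normals = tfd.Independent(tfd.Normal(loc=[0., 10.], scale=[1., 4.]),
                                reinterpreted_batch_ndims=1)
# The two log probabilities should be equal
mvn_diag.log_prob([1., 8.])
indep_normals.log_prob([1., 8.])
###Output
_____no_output_____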
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has a batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the batch and event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ()`.* For the two-dimensional multivariate normal `nd`, `BE = (2)`.* For both `b3` and `b3_joint`, `BE = (3)`.* For the batch of multivariate normals `nd_batch`, `BE = (3, 2)`.The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar `n` returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
###Output
_____no_output_____
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction Run in Google Colab View source on GitHub In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`.[Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
!pip install -q tensorflow-probability
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfe = tf.contrib.eager
try:
    tfe.enable_eager_execution()
except ValueError:
    pass
import matplotlib.pyplot as plt
from __future__ import print_function
###Output
_____no_output_____
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space.If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob`, (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
###Output
_____no_output_____
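###Markdown
For comparison, the same three probabilities could be computed without batching, using a Python list of separate distributions and a loop; as the next paragraph explains, the batched form is preferred because it vectorizes. A small sketch:
###Code
separate_coins = [tfd.Bernoulli(probs=p) for p in [.3, .5, .7]]
[coin.prob(v) for coin, v in zip(separate_coins, [1, 1, 0])]
###Output
_____no_output_____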
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode, in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`). Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
###Output
_____no_output_____
###Markdown
`Independent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilities, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distributions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
nd_batch.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has a batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the batch and event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ()`.* For the two-dimensional multivariate normal `nd`, `BE = (2)`.* For both `b3` and `b3_joint`, `BE = (3)`.* For the batch of multivariate normals `nd_batch`, `BE = (3, 2)`.The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar `n` returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
###Output
_____no_output_____
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Probability Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Distributions: A Gentle Introduction View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out [Understanding TensorFlow Distributions Shapes](https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Understanding_TensorFlow_Distributions_Shapes.ipynb). If you have any questions about the material here, don't hesitate to contact (or join) [the TensorFlow Probability mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/tfprobability). We're happy to help. Before we start, we need to import the appropriate libraries. Our overall library is `tensorflow_probability`. By convention, we generally refer to the distributions library as `tfd`.[Tensorflow Eager](https://www.tensorflow.org/guide/eager) is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
###Code
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
try:
    tf.compat.v1.enable_eager_execution()
except ValueError:
    pass
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Basic Univariate Distributions Let's dive right in and create a normal distribution:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
###Output
_____no_output_____
###Markdown
We can draw a sample from it:
###Code
n.sample()
###Output
_____no_output_____
###Markdown
We can draw multiple samples:
###Code
n.sample(3)
###Output
_____no_output_____
###Markdown
We can evaluate a log prob:
###Code
n.log_prob(0.)
###Output
_____no_output_____
###Markdown
We can evaluate multiple log probabilities:
###Code
n.log_prob([0., 2., 4.])
###Output
_____no_output_____
###Markdown
We have a wide range of distributions. Let's try a Bernoulli:
###Code
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Multivariate Distributions We'll create a multivariate normal with a diagonal covariance:
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
###Output
_____no_output_____
###Markdown
Comparing this to the univariate normal we created earlier, what's different?
###Code
tfd.Normal(loc=0., scale=1.)
###Output
_____no_output_____
###Markdown
We see that the univariate normal has an `event_shape` of `()`, indicating it's a scalar distribution. The multivariate normal has an `event_shape` of `2`, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&41;) of this distribution is two-dimensional. Sampling works just as before:
###Code
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
###Output
_____no_output_____
###Markdown
Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
###Code
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
###Output
_____no_output_____
###Markdown
Multiple Distributions Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single `Distribution` object:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
###Output
_____no_output_____
###Markdown
It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python `Distribution` object. The three distributions cannot be manipulated individually. Note how the `batch_shape` is `(3,)`, indicating a batch of three distributions, and the `event_shape` is `()`, indicating the individual distributions have a univariate event space.If we call `sample`, we get a sample from all three:
###Code
b3.sample()
b3.sample(6)
###Output
_____no_output_____
###Markdown
If we call `prob`, (this has the same shape semantics as `log_prob`; we use `prob` with these small Bernoulli examples for clarity, although `log_prob` is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
###Code
b3.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a `for` loop (at least in Eager mode, in TF graph mode you'd need a `tf.while` loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators. Using Independent To Aggregate Batches to Events In the previous section, we created `b3`, a single `Distribution` object that represented three coin flips. If we called `b3.prob` on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, `prob` on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.How do we accomplish this? We use a "higher-order" distribution called `Independent`, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
###Code
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
###Output
_____no_output_____
###Markdown
Compare the shape to that of the original `b3`:
###Code
b3
###Output
_____no_output_____
###Markdown
As promised, we see that `Independent` has moved the batch shape into the event shape: `b3_joint` is a single distribution (`batch_shape = ()`) over a three-dimensional event space (`event_shape = (3,)`). Let's check the semantics:
###Code
b3_joint.prob([1, 1, 0])
###Output
_____no_output_____
###Markdown
An alternate way to get the same result would be to compute probabilities using `b3` and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
###Code
tf.reduce_prod(b3.prob([1, 1, 0]))
###Output
_____no_output_____
###Markdown
`Independent` allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts:* `b3.sample` and `b3_joint.sample` have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using `Independent` shows up when computing probabilities, not when sampling.* `MultivariateNormalDiag` could be trivially implemented using the scalar `Normal` and `Independent` distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distributions Let's create a batch of three full-covariance two-dimensional multivariate normals:
###Code
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
###Output
_____no_output_____
###Markdown
We see `batch_shape = (3,)`, so there are three independent multivariate normals, and `event_shape = (2,)`, so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.Sampling works:
###Code
nd_batch.sample(4)
###Output
_____no_output_____
###Markdown
Since `batch_shape = (3,)` and `event_shape = (2,)`, we pass a tensor of shape `(3, 2)` to `log_prob`:
###Code
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
###Output
_____no_output_____
###Markdown
Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has a batch shape `B` and an event shape `E`. Let `BE` be the concatenation of the batch and event shapes:* For the univariate scalar distributions `n` and `b`, `BE = ()`.* For the two-dimensional multivariate normal `nd`, `BE = (2)`.* For both `b3` and `b3_joint`, `BE = (3)`.* For the batch of multivariate normals `nd_batch`, `BE = (3, 2)`.The "evaluation rules" we've been using so far are:* Sample with no argument returns a tensor with shape `BE`; sampling with a scalar `n` returns an "n by `BE`" tensor.* `prob` and `log_prob` take a tensor of shape `BE` and return a result of shape `B`.The actual "evaluation rule" for `prob` and `log_prob` is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that **the argument to `log_prob` *must* be [broadcastable](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) against `BE`; any "extra" dimensions are preserved in the output.** Let's explore the implications. For the univariate normal `n`, `BE = ()`, so `log_prob` expects a scalar. If we pass `log_prob` a tensor with non-empty shape, those show up as batch dimensions in the output:
###Code
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
###Output
_____no_output_____
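###Markdown
The same rule applies to the batched multivariate normal `nd_batch` defined above, where `BE = (3, 2)`. A sketch passing two stacked sets of three events, i.e. a tensor of shape `(2, 3, 2)`; the result should have shape `(2, 3)`:
###Code
nd_batch.log_prob([[[0., 0.], [1., 1.], [2., 2.]],
                   [[1., 0.], [2., 1.], [3., 2.]]])
###Output
_____no_output_____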
###Markdown
Let's turn to the two-dimensional multivariate normal `nd` (parameters changed for illustrative purposes):
###Code
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
###Output
_____no_output_____
###Markdown
`log_prob` "expects" an argument with shape `(2,)`, but it will accept any argument that broadcasts against this shape:
###Code
nd.log_prob([0., 0.])
###Output
_____no_output_____
###Markdown
But we can pass in "more" examples, and evaluate all their `log_prob`'s at once:
###Code
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
###Output
_____no_output_____
###Markdown
Perhaps less appealingly, we can broadcast over the event dimensions:
###Code
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
###Output
_____no_output_____
###Markdown
Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.Now let's look at the three coins example again:
###Code
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
###Output
_____no_output_____
###Markdown
Here, using broadcasting to represent the probability that *each* coin comes up heads is quite intuitive:
###Code
b3.prob([1])
###Output
_____no_output_____
###Markdown
(Compare this to `b3.prob([1., 1., 1.])`, which we would have used back where `b3` was introduced.)Now suppose we want to know, for each coin, the probability the coin comes up heads *and* the probability it comes up tails. We could imagine trying:`b3.log_prob([0, 1])`Unfortunately, this produces an error with a long and not-very-readable stack trace. `b3` has `BE = (3)`, so we must pass `b3.prob` something broadcastable against `(3,)`. `[0, 1]` has shape `(2)`, so it doesn't broadcast and creates an error. Instead, we have to say:
###Code
b3.prob([[0], [1]])
###Output
_____no_output_____ |
notebooks/040_dataflows_automation.ipynb | ###Markdown
Data Workflows and Automation
###Code
# Author: Martin Callaghan
# Date: 2021-05-17
# Lesson link: https://arctraining.github.io/python-2021-04/06-loops-and-functions/index.html
# Connect my Google Drive to Google Colab
from google.colab import drive
drive.mount ('/content/gdrive')
# Load the python packages we need
import pandas as pd
# Remember that we need to link back to the file and folder we permanently stored in our Google Drive
# But having to include this long path every time is a pain so
filepath = "/content/gdrive/MyDrive/Colab Notebooks/intro-python-2021-04/data/"
###Output
_____no_output_____
###Markdown
For loopsLoops allow us to repeat a workflow (or series of actions) a given number of times or while some condition is true. We could use a loop to automatically process data that’s stored in multiple files (daily values with one file per year, for example).
###Code
# Let's visit the zoo...
animals = ['lion', 'tiger', 'crocodile', 'vulture', 'hippo']
print(animals)
# Let's iterate across this list
for creature in animals:
    print(creature)

# After the loop, `creature` still holds the last item from the list
print(creature)
###Output
hippo
###Markdown
Automate data processing
###Code
import os
os.mkdir (filepath + "yearly_files")
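# Note: os.mkdir raises FileExistsError if the folder already exists when the cell is re-run;
# os.makedirs(filepath + "yearly_files", exist_ok=True) is a safer alternative.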
os.listdir(filepath)
# Load in the data
surveys_df = pd.read_csv (filepath + "surveys.csv")
# Only need data from 2002
surveys2002 = surveys_df [surveys_df.year == 2002]
# Write out the new df
surveys2002.to_csv (filepath + "yearly_files/surveys2002.csv")
# We need the years
surveys_df['year'].unique()
# Use these in the loop to get the filenames
for year in surveys_df['year'].unique():
    filename = filepath + "yearly_files/surveys" + str(year) + ".csv"
    print(filename)
# Full code
surveys_df = pd.read_csv (filepath + "surveys.csv")
for year in surveys_df['year'].unique():
    # Select data for the year
    surveys_year = surveys_df[surveys_df.year == year]
    # Write out the new data
    filename = filepath + "yearly_files/surveys" + str(year) + ".csv"
    surveys_year.to_csv(filename)
# We can turn this into a reusable function
def one_year_csv_writer(a_year, all_data):
    """
    Writes a csv file for data from a given year.
    a_year -- year for the data to be extracted
    all_data -- dataframe containing the multi-year data
    """
    # Select data for the year
    surveys_year = all_data[all_data.year == a_year]
    # Write dataframe to csv
    filename = filepath + "yearly_files/function_surveys" + str(a_year) + ".csv"
    surveys_year.to_csv(filename)
one_year_csv_writer?
# To call the function
one_year_csv_writer(2002, surveys_df)
###Output
_____no_output_____ |
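###Markdown
We could then call the function for every year in the data, mirroring the earlier loop. A small sketch, assuming `surveys_df` and `one_year_csv_writer` from the cells above:
###Code
for year in surveys_df['year'].unique():
    one_year_csv_writer(year, surveys_df)
###Output
_____no_output_____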
8- How to solve Problem/A Data Science Framework for Elo/A Data Science Framework for Elo.ipynb | ###Markdown
A Data Science Framework for Elo Quite Practical and Far from any Theoretical Conceptslast update: 11/28/2018You can Fork and Run this kernel on **Github**:> [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist) 1- Introduction**[Elo](https://www.cartaoelo.com.br/)** has defined a competition in **Kaggle**. A realistic and attractive data set for data scientists.on this notebook, I will provide a **comprehensive** approach to solve Elo Recommendation problem.I am open to getting your feedback for improving this **kernel**. Notebook Content1. [Introduction](1)1. [Data Science Workflow for Elo](2)1. [Problem Definition](3) 1. [Business View](4) 1. [Real world Application Vs Competitions](31)1. [Problem feature](7) 1. [Aim](8) 1. [Variables](9) 1. [ Inputs & Outputs](10) 1. [Evaluation](10)1. [Select Framework](11) 1. [Import](12) 1. [Version](13) 1. [Setup](14)1. [Exploratory data analysis](15) 1. [Data Collection](16) 1. [data_dictionary Analysis](17) 1. [Explorer Dataset](18) 1. [Data Cleaning](19) 1. [Data Preprocessing](20) 1. [Data Visualization](23) 1. [countplot](61) 1. [pie plot](62) 1. [Histogram](63) 1. [violin plot](64) 1. [kdeplot](65)1. [Apply Learning](24)1. [Conclusion](25)1. [References](26) ------------------------------------------------------------------------------------------------------------- **I hope you find this kernel helpful and some UPVOTES would be very much appreciated** ----------- 2- A Data Science Workflow for EloOf course, the same solution can not be provided for all problems, so the best way is to create a **general framework** and adapt it to new problem.**You can see my workflow in the below image** : **You should feel free to adjust this checklist to your needs** [Go to top](top) 3- Problem DefinitionI think one of the important things when you start a new machine learning project is Defining your problem. that means you should understand business problem.( **Problem Formalization**)> We are predicting a **loyalty score** for each card_id represented in test.csv and sample_submission.csv. 3-1 About Elo [Elo](https://www.cartaoelo.com.br/) is one of the largest **payment brands** in Brazil, has built partnerships with merchants in order to offer promotions or discounts to cardholders. But 1. do these promotions work for either the consumer or the merchant?1. Do customers enjoy their experience? 1. Do merchants see repeat business? **Personalization is key**. 3-2 Business View **Elo** has built machine learning models to understand the most important aspects and preferences in their customers’ lifecycle, from food to shopping. But so far none of them is specifically tailored for an individual or profile. This is where you come in. 3-2-1 Real world Application Vs CompetitionsJust a simple comparison between real-world apps with competitions: [Go to top](top) 4- Problem FeatureProblem Definition has four steps that have illustrated in the picture below:1. Aim1. Variable1. Inputs & Outputs1. Evaluation 4-1 AimDevelop algorithms to identify and serve the most relevant opportunities to individuals, by uncovering signal in customer loyalty.We are predicting a **loyalty score** for each card_id represented in test.csv and sample_submission.csv. 4-2 VariablesThe data is formatted as follows:train.csv and test.csv contain card_ids and information about the card itself - the first month the card was active, etc. 
train.csv also contains the target.historical_transactions.csv and new_merchant_transactions.csv are designed to be joined with train.csv, test.csv, and merchants.csv. They contain information about transactions for each card, as described above.merchants can be joined with the transaction sets to provide additional merchant-level information. 4-3 Inputs & Outputswe use train.csv and test.csv as Input and we should upload a submission.csv as Output 4-4 EvaluationSubmissions are scored on the root mean squared error. RMSE(Root Mean Squared Error) is defined as:where y^ is the predicted loyalty score for each card_id, and y is the actual loyalty score assigned to a card_id.**>**> You must answer the following question:How does your company expect to use and benefit from **your model**. [Go to top](top) 5- Select FrameworkAfter problem definition and problem feature, we should select our **framework** to solve the **problem**.What we mean by the framework is that the programming languages you use and by what modules the problem will be solved. [Go to top](top) 5-2 Import
###Code
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import matplotlib.pylab as pylab
import matplotlib.pyplot as plt
from pandas import get_dummies
import matplotlib as mpl
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib
import warnings
import sklearn
import scipy
import numpy
import json
import sys
import csv
import os
###Output
_____no_output_____
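###Markdown
Since submissions are scored on root mean squared error, it is handy to have the metric as code early on. A minimal sketch; the argument names are placeholders rather than columns from the competition files:
###Code
def rmse(y_true, y_pred):
    # Root mean squared error between actual and predicted loyalty scores
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))
###Output
_____no_output_____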
###Markdown
5-3 version
###Code
print('matplotlib: {}'.format(matplotlib.__version__))
print('sklearn: {}'.format(sklearn.__version__))
print('scipy: {}'.format(scipy.__version__))
print('seaborn: {}'.format(sns.__version__))
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
###Output
matplotlib: 2.2.3
sklearn: 0.20.1
scipy: 1.1.0
seaborn: 0.9.0
pandas: 0.23.4
numpy: 1.15.4
Python: 3.6.6 |Anaconda, Inc.| (default, Oct 9 2018, 12:34:16)
[GCC 7.3.0]
###Markdown
5-4 SetupA few tiny adjustments for better **code readability**
###Code
sns.set(style='white', context='notebook', palette='deep')
warnings.filterwarnings('ignore')
sns.set_style('white')
%matplotlib inline
###Output
_____no_output_____
###Markdown
6- EDA In this section, you'll learn how to use graphical and numerical techniques to begin uncovering the structure of your data. * Which variables suggest interesting relationships?* Which observations are unusual?* Analysis of the features!By the end of the section, you'll be able to answer these questions and more, while generating graphics that are both insightful and beautiful. Then we will review analytical and statistical operations:1. Data Collection1. Visualization1. Data Cleaning1. Data Preprocessing [Go to top](top) 6-1 Data Collection**Data collection** is the process of gathering and measuring data, information or any variables of interest in a standardized and established manner that enables the collector to answer or test hypotheses and evaluate outcomes of the particular collection. [Techopedia] I start data collection by loading the training and testing datasets into **Pandas DataFrames**. [Go to top](top)
###Code
train = pd.read_csv('../input/train.csv', parse_dates=["first_active_month"] )
test = pd.read_csv('../input/test.csv' ,parse_dates=["first_active_month"] )
merchants=pd.read_csv('../input/merchants.csv')
###Output
_____no_output_____
###Markdown
**>*** Each **row** is an observation (also known as: sample, example, instance, record).* Each **column** is a feature (also known as: predictor, attribute, independent variable, input, regressor, covariate). [Go to top](top) 6-1-1 data_dictionary AnalysisElo provides an Excel file describing the data. It has four sheets, which we have read with the code below:
###Code
data_dictionary_train=pd.read_excel('../input/Data_Dictionary.xlsx',sheet_name='train')
data_dictionary_history=pd.read_excel('../input/Data_Dictionary.xlsx',sheet_name='history')
data_dictionary_new_merchant_period=pd.read_excel('../input/Data_Dictionary.xlsx',sheet_name='new_merchant_period')
data_dictionary_merchant=pd.read_excel('../input/Data_Dictionary.xlsx',sheet_name='merchant')
###Output
_____no_output_____
###Markdown
6-1-1-1 data_dictionary_train
###Code
data_dictionary_train.head(10)
# what we know about train:
###Output
_____no_output_____
###Markdown
6-1-1-2 data_dictionary_history
###Code
data_dictionary_history.head(10)
# what we know about history:
###Output
_____no_output_____
###Markdown
6-1-1-3 data_dictionary_new_merchant_period
###Code
data_dictionary_new_merchant_period.head(10)
# what we know about new_merchant_period:
###Output
_____no_output_____
###Markdown
6-1-1-4 data_dictionary_merchant:
###Code
data_dictionary_merchant.head(30)
# what we know about merchant:
###Output
_____no_output_____
###Markdown
6-1-2 Train Analysis
###Code
train.sample(1)
test.sample(1)
###Output
_____no_output_____
###Markdown
Or you can use other commands to explore the dataset, such as
###Code
train.tail(1)
###Output
_____no_output_____
###Markdown
6-1-1 FeaturesFeatures can be of the following types:* numeric* categorical* ordinal* datetime* coordinatesWhat are the types of the features in the **Elo dataset**?For getting some information about the dataset you can use the **info()** command.
###Code
print(train.info())
print(test.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 123623 entries, 0 to 123622
Data columns (total 5 columns):
first_active_month 123622 non-null datetime64[ns]
card_id 123623 non-null object
feature_1 123623 non-null int64
feature_2 123623 non-null int64
feature_3 123623 non-null int64
dtypes: datetime64[ns](1), int64(3), object(1)
memory usage: 4.7+ MB
None
###Markdown
6-1-2 Explore the Dataset1- Dimensions of the dataset.2- Peek at the data itself.3- Statistical summary of all attributes.4- Breakdown of the data by the class variable.Don’t worry, each look at the data is **one command**. These are useful commands that you can use again and again on future projects. [Go to top](top)
###Code
# shape for train and test
print('Shape of train:',train.shape)
print('Shape of test:',test.shape)
#columns*rows
train.size
###Output
_____no_output_____
###Markdown
After loading the data via **pandas**, we should check out its content and get a description via the following:
###Code
type(train)
type(test)
train.describe()
###Output
_____no_output_____
###Markdown
To pull up 5 random rows from the data set, we can use the **sample(5)** function and inspect the types of the features.
###Code
train.sample(5)
###Output
_____no_output_____
###Markdown
6-2 Data CleaningWhen dealing with real-world data, dirty data is the norm rather than the exception. We continuously need to predict correct values, impute missing ones, and find links between various data artefacts such as schemas and records. We need to stop treating data cleaning as a piecemeal exercise (resolving different types of errors in isolation), and instead leverage all signals and resources (such as constraints, available statistics, and dictionaries) to accurately predict corrective actions.The primary goal of data cleaning is to detect and remove errors and **anomalies** to increase the value of data in analytics and decision making. While it has been the focus of many researchers for several years, individual problems have been addressed separately. These include missing value imputation, outliers detection, transformations, integrity constraints violations detection and repair, consistent query answering, deduplication, and many other related problems such as profiling and constraints mining.[4] [Go to top](top) How many NA elements in every column!!Good news, it is Zero!To check out how many null info are on the dataset, we can use **isnull().sum()**.
###Code
train.isnull().sum()
###Output
_____no_output_____
###Markdown
But if we had any, we could just use **dropna()** (be careful, sometimes you should not do this!)
###Code
# remove rows that have NA's
print('Before Droping',train.shape)
train = train.dropna()
print('After Droping',train.shape)
###Output
Before Dropping (201917, 6)
After Dropping (201917, 6)
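###Markdown
The data cleaning notes above also mention missing value imputation. As an alternative to dropping rows, one could fill a missing first_active_month with the most frequent value; a minimal sketch (an assumption: first_active_month is the only column with gaps, as the info() output above suggests):
###Code
# impute rather than drop: fill a missing first_active_month with the modal month
for df_ in (train, test):
    if df_['first_active_month'].isnull().any():
        df_['first_active_month'] = df_['first_active_month'].fillna(df_['first_active_month'].mode()[0])
###Output
_____no_output_____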
###Markdown
We can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property. To print the dataset **columns**, we can use the columns attribute.
###Code
train.columns
###Output
_____no_output_____
###Markdown
You can see the number of unique values of the target with the command below:
###Code
train_target = train['target'].values
np.unique(train_target)
###Output
_____no_output_____
###Markdown
To check the first 5 rows of the data set, we can use head(5).
###Code
train.head(5)
###Output
_____no_output_____
###Markdown
Or to check out the last 5 rows of the data set, we use the tail() function.
###Code
train.tail()
###Output
_____no_output_____
###Markdown
To give a **statistical summary** about the dataset, we can use **describe()**
###Code
train.describe()
###Output
_____no_output_____
###Markdown
As you can see, the statistical information that this command gives us is not well suited to this type of data. **describe() is more useful for numerical data sets** 6-3 Data Preprocessing**Data preprocessing** refers to the transformations applied to our data before feeding it to the algorithm. Data preprocessing is a technique used to convert raw data into a clean data set. In other words, whenever data is gathered from different sources it is collected in a raw format which is not feasible for the analysis.There are plenty of steps for data preprocessing and we just list some of them in general (not just for this dataset):1. removing the Target column (id)1. Sampling (without replacement)1. Making part of the data unbalanced and balancing it (with undersampling and SMOTE)1. Introducing missing values and treating them (replacing by average values)1. Noise filtering1. Data discretization1. Normalization and standardization (a small sketch follows the query examples below)1. PCA analysis1. Feature selection (filter, embedded, wrapper)1. Etc.What methods of preprocessing can we run on this dataset?! [Go to top](top) **>**In a pandas DataFrame you can perform queries such as "where"
###Code
train.where(train ['target']==1).count()
###Output
_____no_output_____
###Markdown
As you can see below, in Python it is easy to perform queries on the dataframe:
###Code
train[train['target']<-32].head(5)
train[train['target']==1].head(5)
train.feature_1.unique()
train.feature_2.unique()
train.feature_3.unique()
train.first_active_month.unique()
###Output
_____no_output_____
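###Markdown
The preprocessing list above mentions normalization and standardization; here is the promised small sketch (illustration only, the scaled values are not used later) that standardizes the numeric feature columns with scikit-learn:
###Code
from sklearn.preprocessing import StandardScaler

# z-score the three numeric feature columns of train
num_cols = ['feature_1', 'feature_2', 'feature_3']
scaled = StandardScaler().fit_transform(train[num_cols])
print(scaled[:3])
###Output
_____no_output_____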
###Markdown
**>**>**Preprocessing and generation pipelines depend on the model type** 6-4 Data Visualization**Data visualization** is the presentation of data in a pictorial or graphical format. It enables decision makers to see analytics presented visually, so they can grasp difficult concepts or identify new patterns.> * Two **important rules** for data visualization:> 1. Do not put too little information> 1. Do not put too much information [Go to top](top) 6-4-1 Histogram
###Code
train["target"].hist();
# histograms
train.hist(figsize=(15,20))
plt.figure()
f,ax=plt.subplots(1,2,figsize=(20,10))
train[train['feature_3']==0].target.plot.hist(ax=ax[0],bins=20,edgecolor='black',color='red')
ax[0].set_title('feature_3= 0')
x1=list(range(0,85,5))
ax[0].set_xticks(x1)
train[train['feature_3']==1].target.plot.hist(ax=ax[1],color='green',bins=20,edgecolor='black')
ax[1].set_title('feature_3= 1')
x2=list(range(0,85,5))
ax[1].set_xticks(x2)
plt.show()
f,ax=plt.subplots(1,2,figsize=(18,8))
train['feature_3'].value_counts().plot.pie(explode=[0,0.1],autopct='%1.1f%%',ax=ax[0],shadow=True)
ax[0].set_title('feature_3')
ax[0].set_ylabel('')
sns.countplot('feature_3',data=train,ax=ax[1])
ax[1].set_title('feature_3')
plt.show()
f,ax=plt.subplots(1,2,figsize=(18,8))
train[['feature_3','feature_2']].groupby(['feature_3']).mean().plot.bar(ax=ax[0])
ax[0].set_title('mean feature_2 by feature_3')
sns.countplot('feature_3',hue='feature_2',data=train,ax=ax[1])
ax[1].set_title('feature_3:feature')
plt.show()
###Output
_____no_output_____
###Markdown
6-4-2 distplot
###Code
sns.distplot(train['target'])
###Output
_____no_output_____
###Markdown
6-4-3 violinplot
###Code
sns.violinplot(data=train, x="feature_1", y='target')
###Output
_____no_output_____
###Markdown
6-4-4 Scatter plot The purpose of a scatter plot is to identify the type of relationship (if any) between two quantitative variables
###Code
# Modify the graph above by assigning each species an individual color.
g = sns.FacetGrid(train, hue="feature_3", col="feature_2", margin_titles=True,
palette={1:"blue", 0:"red"} )
g=g.map(plt.scatter, "first_active_month", "target",edgecolor="w").add_legend();
###Output
_____no_output_____
###Markdown
6-4-5 Box In descriptive statistics, a box plot or boxplot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending vertically from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram.[wikipedia]
###Code
sns.boxplot(x="feature_3", y="feature_2", data=test )
plt.show()
###Output
_____no_output_____ |
Custom.ipynb | ###Markdown
3 Method
###Code
#!python3 -m pip install -U tensorflow-gpu
###Output
_____no_output_____
###Markdown
Train custom NN
###Code
import numpy as np
background = np.zeros((8,8))
kernel_size = (3,2)
strides = (1,1)
seed = 5
def get_square(background):
square = np.zeros_like(background)
square[1:-1,[1,-2]] = 1
square[[1,-2],1:-1] = 1
return 0.9*square[..., np.newaxis]
def get_cross(background):
cross = np.zeros_like(background)
cross[3:-3,1:-1] = 1
cross[1:-1,3:-3] = 1
return 0.9*cross[..., np.newaxis]
training_data = np.asarray([get_square(background), get_cross(background)]).repeat(5_000, axis=0)
np.random.seed(seed)
training_data += np.random.uniform(0, 0.1, size=training_data.shape)
training_labels = np.asarray([1, -1])[..., np.newaxis].repeat(5_000, axis=0)
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
for gpu_instance in physical_devices:
tf.config.experimental.set_memory_growth(gpu_instance, True)
tf.random.set_seed(seed)
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(1, kernel_size=kernel_size, strides=strides, input_shape=[8,8,1], padding='valid', use_bias=True),
tf.keras.layers.GlobalMaxPooling2D()
])
# Instead of random weights, why not use the 'proposed' weights?
#model.layers[0].set_weights([np.array([ [[[-1]],[[-1]]], [[[1]],[[1]]], [[[-1]],[[-1]]] ]), np.array([-1])])
model.compile(optimizer="sgd", loss="mae")
print(model.summary())
model.fit(training_data, training_labels, batch_size=16, epochs=3, shuffle=True)
###Output
2021-12-02 10:02:41.665872: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-02 10:02:41.719229: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-12-02 10:02:41.719640: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2596990000 Hz
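###Markdown
Before inspecting the learned kernel, it can help to look at what the network was trained on; a small visualization sketch of one noisy square (label 1) and one noisy cross (label -1):
###Code
from matplotlib import pyplot as plt

fig, axes = plt.subplots(1, 2)
axes[0].imshow(training_data[0, ..., 0], vmin=0, vmax=1, cmap="gray")
axes[0].set_title("square (label 1)")
axes[1].imshow(training_data[-1, ..., 0], vmin=0, vmax=1, cmap="gray")
axes[1].set_title("cross (label -1)")
plt.show()
###Output
_____no_output_____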
###Markdown
Print the kernel and the bias
###Code
# The kernel could look like one of the expected patches
# Every time the model is trained, the kernel is not exactly like the one in the paper (more a combination)
from matplotlib import pyplot as plt
weights = np.squeeze(model.layers[0].weights[0].numpy())
print(weights)
plt.imshow(weights, vmin=-1, vmax=1, cmap="gray")
plt.show()
bias = model.layers[0].bias
if bias is not None:
bias = bias.numpy()[np.newaxis]
print(bias)
plt.imshow(bias, vmin=-1, vmax=1, cmap="gray")
plt.show()
###Output
[[-0.89865404 -1.3615712 ]
[ 0.91517675 1.367188 ]
[-0.90292746 -1.3679376 ]]
###Markdown
Some prediction
###Code
print(model(get_square(background)[np.newaxis])) # class 1
print(model(get_cross(background)[np.newaxis])) # class -1
###Output
tf.Tensor([[1.0128759]], shape=(1, 1), dtype=float32)
tf.Tensor([[-1.0213267]], shape=(1, 1), dtype=float32)
###Markdown
Check 'test set' accuracy
###Code
# Test set accuracy is 1.0
test_data = np.asarray([get_square(background), get_cross(background)]).repeat(5_000, axis=0)
np.random.seed(seed)
test_data += np.random.uniform(0, 0.1, size=test_data.shape)
test_labels = np.asarray([1, -1])[...,np.newaxis].repeat(5_000, axis=0)
from sklearn.metrics import classification_report
predictions = model(test_data).numpy().round()
print(classification_report(test_labels, predictions))
###Output
precision recall f1-score support
-1 1.00 1.00 1.00 5000
1 1.00 1.00 1.00 5000
accuracy 1.00 10000
macro avg 1.00 1.00 1.00 10000
weighted avg 1.00 1.00 1.00 10000
###Markdown
Extract individual unique patches
###Code
# from: https://www.geeksforgeeks.org/python-intersection-two-lists/
def intersection(lst1, lst2):
return list(set(lst1) & set(lst2))
def get_all_unique_patches(cropped_image, kernel_size, strides):
view_shape = tuple(
np.subtract(cropped_image.shape, kernel_size) + 1
) + kernel_size
sub_matrices = np.lib.stride_tricks.as_strided(
cropped_image,
view_shape,
cropped_image.strides + cropped_image.strides
)
return np.unique(sub_matrices[::strides[0],::strides[1]].reshape((-1,*kernel_size)), axis=0)
square_patches = get_all_unique_patches(get_square(background)[...,0], kernel_size, strides)
cross_patches = get_all_unique_patches(get_cross(background)[...,0], kernel_size, strides)
###Output
_____no_output_____
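###Markdown
As a quick sanity check on the stride-trick extraction above, scikit-learn's extract_patches_2d with stride 1 should recover the same set of unique patches; a sketch assuming scikit-learn is available:
###Code
from sklearn.feature_extraction.image import extract_patches_2d

sk_patches = extract_patches_2d(get_square(background)[..., 0], kernel_size)
sk_unique = np.unique(sk_patches, axis=0)
print(sk_unique.shape, square_patches.shape)
print(sk_unique.shape == square_patches.shape and np.allclose(sk_unique, square_patches))
###Output
_____no_output_____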
###Markdown
Exclusive cross patches
###Code
# Patches unique to the cross
cross_indices, = np.where(
np.asarray([[np.allclose(square_p, cross_p) for square_p in square_patches] for cross_p in cross_patches])
.mean(axis=1) <= 0
)
print(cross_indices)
for patch in cross_patches[cross_indices]:
plt.imshow(patch, vmin=0, vmax=1, cmap="gray")
plt.show()
#break
###Output
[ 1 2 6 7 10 11 14 18 19 20 21]
###Markdown
Exclusive square patches
###Code
# Patches unique to the square
square_indices, = np.where(
np.asarray([[np.allclose(cross_p, square_p) for cross_p in cross_patches] for square_p in square_patches])
.mean(axis=1) <= 0
)
print(square_indices)
square_indices = square_indices[[0,5,3,2,1,6,4]] # reorder in the same way the paper did
for patch in square_patches[square_indices]:
plt.imshow(patch, vmin=0, vmax=1, cmap="gray")
plt.show()
#break
###Output
[ 4 5 6 8 10 12 14]
###Markdown
Trying the evaluation (not simulative) on the square
###Code
from sklearn.metrics import r2_score, classification_report
from scipy import stats
# actual NN weights
weights_ = weights
bias_ = bias[0][0]
# NN reduction (only a single convolution) --> No 8x8x1 input image
inference = lambda x: np.sum(weights_.reshape(-1) * x.reshape(-1)) + bias_
#patches = np.concatenate([square_patches[square_indices], cross_patches[cross_indices]])
patches = square_patches[square_indices]
# currently predicting only on every patch once (you could also sample randomly multiple times with added noise)
sample_idx = np.random.randint(0,patches.shape[0], 50)
samples = []
predictions = []
classes = []
for idx, patch in enumerate(patches[sample_idx]):
noise = np.random.uniform(0, 0.1, size=patch.shape)
samples += [sample_idx[idx]]
predictions += [inference(patch + noise)]
classes += [1 if sample_idx[idx] < square_patches[square_indices].shape[0] else -1]
samples = np.asarray(samples).astype(float)[:,np.newaxis]
predictions = np.asarray(predictions).astype(float)[:,np.newaxis]
classes = np.asarray(classes).astype(float)[:,np.newaxis]
from scipy.stats import ttest_ind
import numpy as np
for idx, patch in enumerate(patches[:square_indices.shape[0]]):
# prediction for the samples with the same id (could be used for larger simulative metrics)
identifier = np.asarray(sample_idx==idx)
non_identifier = np.asarray(sample_idx!=idx)
is_pattern_sample = np.array(samples)
is_pattern_sample[identifier] = 1
is_pattern_sample[non_identifier] = -1
#classes_ = np.array(classes)
#classes_[identifier] = 1
#classes_[non_identifier] = -1
class_predictions = np.array(predictions)
ttest, pval = ttest_ind(class_predictions[is_pattern_sample==1], class_predictions[is_pattern_sample==-1])
print(f"{ttest:.02f} : {pval:.02f}")
#print(classification_report(labels, predictions))
#print(r2_score(labels, predictions))
#print(stats.pearsonr(labels, predictions))
#print()
###Output
3.77 : 0.00
-3.97 : 0.00
-6.88 : 0.00
2.48 : 0.02
0.57 : 0.57
1.85 : 0.07
0.68 : 0.50
|
Final Project _ Graduate Admission Predictor/Code/Data Preprocessing and Feature Engineering/Preprocessing and Feature Engineering.ipynb | ###Markdown
This notebook contains:- Taking scraped input (HTML-formatted code)- Cleaning, data preprocessing and feature engineering on the data set- Exporting the cleaned CSV file
###Code
# importing libraries
import pandas as pd
import os
from bs4 import BeautifulSoup
import re
# Reading the list of files inside the HTML_FILES folder
allfileslist = os.listdir("../../Data/HTML_FILES/")
# Concatenating all the files in the HTML_FILES folder
combined_csv = pd.concat( [ pd.read_csv("../../Data/HTML_FILES/"+f) for f in allfileslist ] )
# count of accept and reject
combined_csv.status.value_counts()
combined_csv.loc[combined_csv['status']=='acccept',"status"]="accept"
combined_csv.status.value_counts()
#filtering out empty records
combined_csv=combined_csv.loc[~(combined_csv['links']=="[]"),:]
#removing empty records
combined_csv.drop(columns='Unnamed: 0',inplace=True)
combined_csv.reset_index(drop=True,inplace=True)
# Changing university name in proper naming convention
combined_csv.loc[combined_csv.loc[:,'university_name']=="illinois_institute_of_technology_accept","university_name"]="illinois_institute_of_technology"
# Changing university name in proper naming convention
combined_csv.loc[combined_csv.loc[:,'university_name']=="university of california, irvine","university_name"]="university_of_california_irvine"
# Changing university name in proper naming convention
combined_csv.loc[combined_csv.loc[:,'university_name']=="clemson_university_accept","university_name"]="clemson_university"
combined_csv.loc[combined_csv.loc[:,'university_name']=="clemson_university_reject","university_name"]="clemson_university"
# Changing university name in proper naming convention
combined_csv.loc[combined_csv.loc[:,'university_name']=="university_of_texas_dallas_accept","university_name"]="university_of_texas_dallas"
combined_csv.loc[combined_csv.loc[:,'university_name']=="university_of_texas_dallas_reject","university_name"]="university_of_texas_dallas"
# Accept and Reject for every university with percentage of accept and reject
combined_csv.groupby(by=["university_name"])['status'].value_counts(normalize=True)
# shape of the datset
combined_csv.shape
#unwrapping stored html pages and extracting features from html tags
html_pages = combined_csv.links.tolist()
temp = []
# loop over the stored pages, unwrap the html and collect the profile fields
for i in html_pages:
soup = BeautifulSoup(i)
a = soup.find_all('div', class_ = 'col-sm-4 col-xs-4')
temp_inside = []
for x in a:
k =(x.h4.text)
t=[j for j in k.strip().split("\n") if len(j) != 0]
temp_inside.append(t)
temp.append(temp_inside)
temp[0:1]
# getting all the profile data in nested list and extracting it
all=[]
for each in temp:
list = []
for i in each:
for j in i:
list.append(j)
all.append(list)
#verifing if we have unpacked all html pages collected correctly
len(all)
all[0]
#we will make a new dataframe with extracted information from html pages and it's corresponding university name and status
university_list=combined_csv.university_name.tolist()
status_list=combined_csv.status.tolist()
combined_df = pd.DataFrame(all)
combined_df['university_name']=university_list
combined_df['status']=status_list
#naming our features
list_columns = ['gre_score','droping', 'gre_score_quant','gre_score_verbal','test_score_toefl','droping_1', 'undergraduation_score','work_ex', 'papers_published','droping_3','university_name','status']
combined_df.columns = list_columns
combined_df.drop(columns = ['droping','droping_1','droping_3'], inplace=True)
# Null in columns
combined_df.isna().sum()
#filling work experience and work_ex with zero, considering when there are no values given
combined_df=combined_df.fillna(0)
combined_df.head()
###Output
_____no_output_____
###Markdown
Data Preprocessing and Feature Engineering- Removing null values from columns- Removing noisy data, unformatted text and inconsistent data- Conversion of percentage and 10-point CGPA scores to a 4-point scale- Putting TOEFL and IELTS scores on the same scale according to the information available on the ETS official website (https://www.ets.org/toefl/institutions/scores/compare/)- Including the ranking of the university as a column- Recoding the papers-published column (values None/International/National/Local) as numbers
###Code
# Function to keep only the digits in a value (drops special characters and text)
def replace_special_chars(i):
#a = re.sub('[^A-Za-z]+',' ',str(i))
a=re.findall(r'\d+', str(i))
#a = a.lower()
return ''.join(a)
# calling this function for various columns
combined_df['gre_score']=combined_df.gre_score.apply(replace_special_chars)
combined_df['gre_score_quant']=combined_df['gre_score_quant'].apply(replace_special_chars)
combined_df['test_score_toefl'] = combined_df['test_score_toefl'].apply(replace_special_chars)
combined_df['gre_score_verbal'] = combined_df['gre_score_verbal'].apply(replace_special_chars)
combined_df['work_ex'] = combined_df['work_ex'].apply(replace_special_chars)
combined_df["undergraduation_score"] = [x.replace('CGPA','') for x in combined_df["undergraduation_score"]]
combined_df["undergraduation_score"] = [x.replace('%','') for x in combined_df["undergraduation_score"]]
combined_df["papers_published"] = [str(x).replace('Tech Papers','') for x in combined_df["papers_published"]]
# data type for multiple columns
combined_df.dtypes
combined_df.loc[combined_df['work_ex']=='','work_ex']=0
values=[]
for each in combined_df.undergraduation_score.unique():
try:
float(each)
except:
values.append(each)
for each in values:
combined_df=combined_df[combined_df.undergraduation_score!=each]
combined_df[['gre_score','gre_score_quant','gre_score_verbal','test_score_toefl','undergraduation_score','work_ex']]=combined_df[['gre_score','gre_score_quant','gre_score_verbal','test_score_toefl','undergraduation_score','work_ex']].apply(pd.to_numeric)
combined_df=combined_df.loc[~(combined_df.test_score_toefl.isna()),:]
combined_df.isna().sum()
combined_df.reset_index(drop=True,inplace=True)
# rescale undergraduate scores to a 4-point scale (values > 10 treated as percentages, otherwise 10-point CGPA)
update_cgpa_score_scale_4 = []
for score in combined_df.undergraduation_score.tolist():
s = 0
try:
score = float(score)
except:
score= 0
if score > 10:
s = ((score)/20) - 1
s = round(s,2)
update_cgpa_score_scale_4.append(s)
else:
s = ((score)/10)*4
s = round(s,2)
update_cgpa_score_scale_4.append(s)
combined_df['undergraduation_score']=update_cgpa_score_scale_4
combined_df.loc[combined_df['test_score_toefl']<9,'test_score_toefl']=pd.cut(combined_df.loc[combined_df['test_score_toefl']<9,'test_score_toefl'], bins=[-1,0.5,4,4.5,5,5.5,6,6.5,7,7.5,8,8.5,9], labels=[0,31,34,45,59,78,93,101,109,114,117,120])
combined_df.loc[combined_df['test_score_toefl']<9,'test_score_toefl'].value_counts()
###Output
_____no_output_____
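###Markdown
The CGPA rescaling loop above can also be stated as a single vectorized expression; a sketch on hypothetical raw scores (values above 10 are treated as percentages, the rest as 10-point CGPA):
###Code
import numpy as np

raw = pd.Series([8.5, 72.0, 9.1])  # hypothetical raw undergraduate scores
rescaled = np.where(raw > 10, raw / 20 - 1, raw / 10 * 4).round(2)
print(rescaled)  # [3.4, 2.6, 3.64]
###Output
_____no_output_____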
###Markdown
working on the paper published column to assign values: International as 3, National as 2, Local as 1 and None as 0
###Code
combined_df.papers_published.unique()
#df_all_neu["papers_published"] = [x.replace('','0') for x in df_all_neu["papers_published"]]
combined_df["papers_published"] = [x.replace('None','0') for x in combined_df["papers_published"]]
combined_df["papers_published"] = [x.replace('NA','0') for x in combined_df["papers_published"]]
combined_df.papers_published.value_counts()
combined_df.loc[combined_df['papers_published'] == 'Local', 'papers_published'] = '1'
combined_df.loc[combined_df['papers_published'] == 'International', 'papers_published'] = '3'
combined_df.loc[combined_df['papers_published'] == 'National', 'papers_published'] = '2'
list_ppr_pub = combined_df.papers_published.tolist()
new_list_ppr_pub = []
for i in list_ppr_pub:
if i == '':
new_list_ppr_pub.append('0')
else:
new_list_ppr_pub.append(i)
combined_df['papers_published'] = new_list_ppr_pub
combined_df['papers_published'] = combined_df['papers_published'].astype(int)
combined_df.describe()
###Output
_____no_output_____
###Markdown
Checking and removing incorrect records: GRE quant/verbal scores outside the valid range of 130 to 170
###Code
combined_df.loc[(combined_df['gre_score_quant'] <130) | (combined_df['gre_score_verbal'] < 130) | (combined_df['gre_score'] < 260),:]
combined_df = combined_df.loc[~((combined_df['gre_score_quant'] <130) | (combined_df['gre_score_verbal'] < 130) | (combined_df['gre_score'] < 260)),:]
# No null columns remaining
combined_df.isna().sum()
def replace_special_chars_university_name(i):
a = re.sub('[^A-Za-z]+',' ',str(i))
#a=re.findall(r'\d+', str(i))
a = a.lower()
return '_'.join(a.split(' '))
#replacing special characters and spaces in university name
combined_df.loc[:,"university_name"]=combined_df.university_name.apply(replace_special_chars_university_name)
required_colleges=combined_df.university_name.unique().tolist()
len(required_colleges)
required_colleges=['northeastern_university','illinois_institute_of_technology','michigan_technological_university','rochester_institute_of_technology','university_of_southern_california','north_carolina_state_university_raleigh','university_of_texas_arlington','university_of_texas_dallas','syracuse_university','clemson_university','new_york_university','indiana_university_bloomington','rutgers_university_new_brunswick', "---",'university_of_florida','carnegie_mellon_university','georgia_institiute_of_technology','university_of_colorado_boulder','university_of_north_carolina_at_charlotte','university_of_iowa','university_of_connecticut','worcester_polytechnic_institute','---','kansas_state_university','university_of_cincinnati','university_of_maryland_college_park','university_of_california_irvine','texas_a_m_university_college_station','state_university_of_new_york_at_stony_brook','george_mason_university','university_of_texas_austin']
# Assigining universities with their respective rankings in CS
required_colleges_ranking = [15,97,117,66,19,49,64,52,118,89,22,48,25,150,62,1,9,58,30, 71, 70,79, 76, 115, 130, 10, 23, 31, 35, 59,16]
dictionary_req_college = dict(zip(required_colleges, required_colleges_ranking))
dictionary_req_college
combined_df['ranking'] = combined_df['university_name']
combined_df['ranking'].replace(dictionary_req_college,inplace=True)
# no null values remaining
combined_df.isna().sum()
# cleaned datset
combined_df.head()
# describing the dataset
combined_df.describe()
# transferring CSV file
combined_df.reset_index(drop =True).to_csv('../../Data/clean_profile_data_all.csv',index=False)
###Output
_____no_output_____ |
EHR_Only/LR/Hemorrhage_FAMD.ipynb | ###Markdown
Template LR
###Code
def lr(X_train, y_train):
from sklearn.linear_model import Lasso
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler
model = LogisticRegression()
param_grid = [
{'C' : np.logspace(-4, 4, 20)}
]
clf = GridSearchCV(model, param_grid, cv = 5, verbose = True, n_jobs = 10)
best_clf = clf.fit(X_train, y_train)
return best_clf
import pandas as pd
import numpy as np
import scipy.stats
# AUC comparison adapted from
# https://github.com/Netflix/vmaf/
def compute_midrank(x):
"""Computes midranks.
Args:
x - a 1D numpy array
Returns:
array of midranks
"""
J = np.argsort(x)
Z = x[J]
N = len(x)
T = np.zeros(N, dtype=np.float)
i = 0
while i < N:
j = i
while j < N and Z[j] == Z[i]:
j += 1
T[i:j] = 0.5*(i + j - 1)
i = j
T2 = np.empty(N, dtype=np.float)
# Note(kazeevn) +1 is due to Python using 0-based indexing
# instead of 1-based in the AUC formula in the paper
T2[J] = T + 1
return T2
def fastDeLong(predictions_sorted_transposed, label_1_count):
"""
The fast version of DeLong's method for computing the covariance of
unadjusted AUC.
Args:
predictions_sorted_transposed: a 2D numpy.array[n_classifiers, n_examples]
sorted such as the examples with label "1" are first
Returns:
(AUC value, DeLong covariance)
Reference:
@article{sun2014fast,
title={Fast Implementation of DeLong's Algorithm for
Comparing the Areas Under Correlated Receiver Operating Characteristic Curves},
author={Xu Sun and Weichao Xu},
journal={IEEE Signal Processing Letters},
volume={21},
number={11},
pages={1389--1393},
year={2014},
publisher={IEEE}
}
"""
# Short variables are named as they are in the paper
m = label_1_count
n = predictions_sorted_transposed.shape[1] - m
positive_examples = predictions_sorted_transposed[:, :m]
negative_examples = predictions_sorted_transposed[:, m:]
k = predictions_sorted_transposed.shape[0]
tx = np.empty([k, m], dtype=np.float)
ty = np.empty([k, n], dtype=np.float)
tz = np.empty([k, m + n], dtype=np.float)
for r in range(k):
tx[r, :] = compute_midrank(positive_examples[r, :])
ty[r, :] = compute_midrank(negative_examples[r, :])
tz[r, :] = compute_midrank(predictions_sorted_transposed[r, :])
aucs = tz[:, :m].sum(axis=1) / m / n - float(m + 1.0) / 2.0 / n
v01 = (tz[:, :m] - tx[:, :]) / n
v10 = 1.0 - (tz[:, m:] - ty[:, :]) / m
sx = np.cov(v01)
sy = np.cov(v10)
delongcov = sx / m + sy / n
return aucs, delongcov
def calc_pvalue(aucs, sigma):
"""Computes log(10) of p-values.
Args:
aucs: 1D array of AUCs
sigma: AUC DeLong covariances
Returns:
log10(pvalue)
"""
l = np.array([[1, -1]])
z = np.abs(np.diff(aucs)) / np.sqrt(np.dot(np.dot(l, sigma), l.T))
return np.log10(2) + scipy.stats.norm.logsf(z, loc=0, scale=1) / np.log(10)
def compute_ground_truth_statistics(ground_truth):
assert np.array_equal(np.unique(ground_truth), [0, 1])
order = (-ground_truth).argsort()
label_1_count = int(ground_truth.sum())
return order, label_1_count
def delong_roc_variance(ground_truth, predictions):
"""
Computes ROC AUC variance for a single set of predictions
Args:
ground_truth: np.array of 0 and 1
predictions: np.array of floats of the probability of being class 1
"""
order, label_1_count = compute_ground_truth_statistics(ground_truth)
predictions_sorted_transposed = predictions[np.newaxis, order]
aucs, delongcov = fastDeLong(predictions_sorted_transposed, label_1_count)
assert len(aucs) == 1, "There is a bug in the code, please forward this to the developers"
return aucs[0], delongcov
def delong_roc_test(ground_truth, predictions_one, predictions_two):
"""
Computes log(p-value) for hypothesis that two ROC AUCs are different
Args:
ground_truth: np.array of 0 and 1
predictions_one: predictions of the first model,
np.array of floats of the probability of being class 1
predictions_two: predictions of the second model,
np.array of floats of the probability of being class 1
"""
order, label_1_count = compute_ground_truth_statistics(ground_truth)
predictions_sorted_transposed = np.vstack((predictions_one, predictions_two))[:, order]
aucs, delongcov = fastDeLong(predictions_sorted_transposed, label_1_count)
return calc_pvalue(aucs, delongcov)
def train_scores(X_train,y_train):
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import log_loss
pred = best_clf.predict(X_train)
actual = y_train
print(accuracy_score(actual,pred))
print(f1_score(actual,pred))
print(fbeta_score(actual,pred, average = 'macro', beta = 2))
print(roc_auc_score(actual, best_clf.predict_proba(X_train)[:,1]))
print(log_loss(actual,best_clf.predict_proba(X_train)[:,1]))
def test_scores(X_test,y_test):
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import log_loss
pred = best_clf.predict(X_test)
actual = y_test
print(accuracy_score(actual,pred))
print(f1_score(actual,pred))
print(fbeta_score(actual,pred, average = 'macro', beta = 2))
print(roc_auc_score(actual, best_clf.predict_proba(X_test)[:,1]))
print(log_loss(actual,best_clf.predict_proba(X_test)[:,1]))
def cross_val(X,y):
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_validate
from sklearn.metrics import log_loss
from sklearn.metrics import roc_auc_score
from sklearn.metrics import fbeta_score
import sklearn
import numpy as np
cv = KFold(n_splits=5, random_state=1, shuffle=True)
log_loss = []
auc = []
accuracy = []
f1 = []
f2 = []
for train_index, test_index in cv.split(X):
X_train, X_test, y_train, y_test = X.iloc[train_index], X.iloc[test_index], y.iloc[train_index], y.iloc[test_index]
model = lr(X_train, y_train)
prob = model.predict_proba(X_test)[:,1] # prob is a vector of probabilities
print(prob)
pred = np.round(prob) # pred is the rounded predictions
log_loss.append(sklearn.metrics.log_loss(y_test, prob))
auc.append(sklearn.metrics.roc_auc_score(y_test, prob))
accuracy.append(sklearn.metrics.accuracy_score(y_test, pred))
f1.append(sklearn.metrics.f1_score(y_test, pred, average = 'macro'))
f2.append(fbeta_score(y_test,pred, average = 'macro', beta = 2))
print(np.mean(accuracy))
print(np.mean(f1))
print(np.mean(f2))
print(np.mean(auc))
print(np.mean(log_loss))
from prince import FAMD
famd = FAMD(n_components = 15, n_iter = 3, random_state = 101)
for (colName, colData) in co_train_gpop.iteritems():
if (colName != 'Co_N_Drugs_R0' and colName!= 'Co_N_Hosp_R0' and colName != 'Co_Total_HospLOS_R0' and colName != 'Co_N_MDVisit_R0'):
co_train_gpop[colName].replace((1,0) ,('yes','no'), inplace = True)
co_train_low[colName].replace((1,0) ,('yes','no'), inplace = True)
co_train_high[colName].replace((1,0) ,('yes','no'), inplace = True)
co_validation_gpop[colName].replace((1,0), ('yes','no'), inplace = True)
co_validation_high[colName].replace((1,0), ('yes','no'), inplace = True)
co_validation_low[colName].replace((1,0), ('yes','no'), inplace = True)
famd.fit(co_train_gpop)
co_train_gpop_FAMD = famd.transform(co_train_gpop)
famd.fit(co_train_high)
co_train_high_FAMD = famd.transform(co_train_high)
famd.fit(co_train_low)
co_train_low_FAMD = famd.transform(co_train_low)
famd.fit(co_validation_gpop)
co_validation_gpop_FAMD = famd.transform(co_validation_gpop)
famd.fit(co_validation_high)
co_validation_high_FAMD = famd.transform(co_validation_high)
famd.fit(co_validation_low)
co_validation_low_FAMD = famd.transform(co_validation_low)
###Output
/PHShome/se197/anaconda3/lib/python3.8/site-packages/pandas/core/series.py:4509: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().replace(
###Markdown
General Population
###Code
best_clf = lr(co_train_gpop_FAMD, out_train_death_gpop)
cross_val(co_train_gpop_FAMD, out_train_death_gpop)
print()
test_scores(co_validation_gpop_FAMD, out_validation_death_gpop)
comb = []
for i in range(len(predictor_variable)):
comb.append(predictor_variable[i] + str(best_clf.best_estimator_.coef_[:,i:i+1]))
comb
###Output
Fitting 5 folds for each of 20 candidates, totalling 100 fits
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.08547767 0.08519332 0.11433485 ... 0.08129531 0.09061412 0.11059361]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.08571507 0.08248705 0.09045555 ... 0.08271565 0.08443662 0.08316753]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.08446429 0.13606233 0.08496492 ... 0.10025366 0.11218504 0.0987848 ]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.07982931 0.08000746 0.08562286 ... 0.11618733 0.12465834 0.08709272]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.08254724 0.08136691 0.08278828 ... 0.0836711 0.08177691 0.08414325]
0.9063342401131862
0.4761033953085928
0.4902566170085402
0.750954640206554
0.2994690185290467
0.8965396888711608
0.0
0.4887203739943466
0.7395804540669768
0.323858278471526
###Markdown
High Continuity
###Code
best_clf = lr(co_train_high_FAMD, out_train_death_high)
cross_val(co_train_high_FAMD, out_train_death_high)
print()
test_scores(co_validation_high_FAMD, out_validation_death_high)
comb = []
for i in range(len(predictor_variable)):
comb.append(predictor_variable[i] + str(best_clf.best_estimator_.coef_[:,i:i+1]))
comb
###Output
Fitting 5 folds for each of 20 candidates, totalling 100 fits
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.05396105 0.05166111 0.05726803 ... 0.05597741 0.06993479 0.05384642]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.06022603 0.0539651 0.05183649 ... 0.05565727 0.05137424 0.05061988]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.05557256 0.06673877 0.05036173 ... 0.10349286 0.05761498 0.07748889]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.05026072 0.15785263 0.09110281 ... 0.06577864 0.05128724 0.10898802]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.05135528 0.04985913 0.05244069 ... 0.11114037 0.06147489 0.08762499]
0.9312215506079762
0.4839517041333295
0.49380614140495566
0.7615809379370233
0.23295160306740104
0.9304656504489455
0.0
0.49263697872904966
0.7637225982715212
0.23928770340317954
###Markdown
Low Continuity
###Code
best_clf = lr(co_train_low_FAMD, out_train_death_low)
cross_val(co_train_low_FAMD, out_train_death_low)
print()
test_scores(co_validation_low_FAMD, out_validation_death_low)
comb = []
for i in range(len(predictor_variable)):
comb.append(predictor_variable[i] + str(best_clf.best_estimator_.coef_[:,i:i+1]))
comb
###Output
Fitting 5 folds for each of 20 candidates, totalling 100 fits
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.11630022 0.1148216 0.10665957 ... 0.11507001 0.11902873 0.13138666]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.11607133 0.10511268 0.11565756 ... 0.11922704 0.13299084 0.11781846]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.0597152 0.18892576 0.03123151 ... 0.47615755 0.12385666 0.04676939]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.11091976 0.11094845 0.11262233 ... 0.13470406 0.11503359 0.11607765]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[0.11841682 0.11594522 0.12612768 ... 0.11303374 0.11464482 0.11616816]
0.8779837176998028
0.47824763391019476
0.4932007687095675
0.7504418478689567
0.35362959161649826
0.8655277724756633
0.0
0.48493177054369685
0.7014542556432541
0.38797679875769936
|
fitness_inference_analysis/figure4/4d_clustering/20210622_clustering_analysis-threshold-bs-CI-1000-v3.ipynb | ###Markdown
First calculate true mean from clustering dataset
###Code
#reading in data
gen0 = (pd.DataFrame(pd.read_csv('20200509_PCA_threshold_0_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen200 = (pd.DataFrame(pd.read_csv('20200509_PCA_threshold_200_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen400 = (pd.DataFrame(pd.read_csv('20200509_PCA_threshold_400_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen600 = (pd.DataFrame(pd.read_csv('20200509_PCA_threshold_600_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen800 = (pd.DataFrame(pd.read_csv('20200509_PCA_threshold_800_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen1000 = (pd.DataFrame(pd.read_csv('20200509_PCA_threshold_1000_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
cum = (pd.DataFrame(pd.read_csv('20200509_PCA_threshold_cum_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
#finding nearest neighbor indices
neigh=NearestNeighbors(n_neighbors=6) #6 neighbors is really 5, plus self, which we exclude later from analysis
#epoch0
neigh.fit(gen0)
gen0_neigh = (pd.DataFrame(neigh.kneighbors(gen0, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch200
neigh.fit(gen200)
gen200_neigh = (pd.DataFrame(neigh.kneighbors(gen200, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch400
neigh.fit(gen400)
gen400_neigh = (pd.DataFrame(neigh.kneighbors(gen400, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch600
neigh.fit(gen600)
gen600_neigh = (pd.DataFrame(neigh.kneighbors(gen600, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch800
neigh.fit(gen800)
gen800_neigh = (pd.DataFrame(neigh.kneighbors(gen800, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch1000
neigh.fit(gen1000)
gen1000_neigh = (pd.DataFrame(neigh.kneighbors(gen1000, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#cumulative
neigh.fit(cum)
cum_neigh = (pd.DataFrame(neigh.kneighbors(cum, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#make new neighbors df; use a dictionary to replace each neighbor index with its evoEnvt
neighbors=['1','2','3','4','5']
#epoch0
gen0_df = pd.DataFrame()
gen0_dict = (pd.read_csv('20200509_PCA_threshold_0_adj.csv', delimiter=',')).to_dict('series')
gen0_df['self']=gen0_neigh['self'].replace(gen0_dict['evoEnvt-ploidy'])
for neighbor in neighbors:
gen0_df['neigh%s' % neighbor]=gen0_neigh['neigh%s' % neighbor].replace(gen0_dict['evoEnvt-ploidy'])
# epoch200
gen200_df = pd.DataFrame()
gen200_dict = (pd.read_csv('20200509_PCA_threshold_200_adj.csv', delimiter=',')).to_dict('series')
gen200_df['self']=gen200_neigh['self'].replace(gen200_dict['evoEnvt-ploidy'])
for neighbor in neighbors:
gen200_df['neigh%s' % neighbor]=gen200_neigh['neigh%s' % neighbor].replace(gen200_dict['evoEnvt-ploidy'])
# epoch400
gen400_df = pd.DataFrame()
gen400_dict = (pd.read_csv('20200509_PCA_threshold_400_adj.csv', delimiter=',')).to_dict('series')
gen400_df['self']=gen400_neigh['self'].replace(gen400_dict['evoEnvt-ploidy'])
for neighbor in neighbors:
gen400_df['neigh%s' % neighbor]=gen400_neigh['neigh%s' % neighbor].replace(gen400_dict['evoEnvt-ploidy'])
# epoch600
gen600_df = pd.DataFrame()
gen600_dict = (pd.read_csv('20200509_PCA_threshold_600_adj.csv', delimiter=',')).to_dict('series')
gen600_df['self']=gen600_neigh['self'].replace(gen600_dict['evoEnvt-ploidy'])
for neighbor in neighbors:
gen600_df['neigh%s' % neighbor]=gen600_neigh['neigh%s' % neighbor].replace(gen600_dict['evoEnvt-ploidy'])
# epoch800
gen800_df = pd.DataFrame()
gen800_dict = (pd.read_csv('20200509_PCA_threshold_800_adj.csv', delimiter=',')).to_dict('series')
gen800_df['self']=gen800_neigh['self'].replace(gen800_dict['evoEnvt-ploidy'])
for neighbor in neighbors:
gen800_df['neigh%s' % neighbor]=gen800_neigh['neigh%s' % neighbor].replace(gen800_dict['evoEnvt-ploidy'])
# epoch1000
gen1000_df = pd.DataFrame()
gen1000_dict = (pd.read_csv('20200509_PCA_threshold_1000_adj.csv', delimiter=',')).to_dict('series')
gen1000_df['self']=gen1000_neigh['self'].replace(gen1000_dict['evoEnvt-ploidy'])
for neighbor in neighbors:
gen1000_df['neigh%s' % neighbor]=gen1000_neigh['neigh%s' % neighbor].replace(gen1000_dict['evoEnvt-ploidy'])
# cum
cum_df = pd.DataFrame()
cum_dict = (pd.read_csv('20200509_PCA_threshold_cum_adj.csv', delimiter=',')).to_dict('series')
cum_df['self']=cum_neigh['self'].replace(cum_dict['evoEnvt-ploidy'])
for neighbor in neighbors:
cum_df['neigh%s' % neighbor]=cum_neigh['neigh%s' % neighbor].replace(cum_dict['evoEnvt-ploidy'])
#counting how many neighboring nodes are from the same environment
def similarity(self,neigh):
if self == neigh:
return 1
else:
return 0
for neighbor in neighbors:
gen0_df['match%s' % neighbor] = gen0_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen200_df['match%s' % neighbor] = gen200_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen400_df['match%s' % neighbor] = gen400_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen600_df['match%s' % neighbor] = gen600_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen800_df['match%s' % neighbor] = gen800_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen1000_df['match%s' % neighbor] = gen1000_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
cum_df['match%s' % neighbor] = cum_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen0_df['sum'] = gen0_df.sum(axis=1)
gen200_df['sum'] = gen200_df.sum(axis=1)
gen400_df['sum'] = gen400_df.sum(axis=1)
gen600_df['sum'] = gen600_df.sum(axis=1)
gen800_df['sum'] = gen800_df.sum(axis=1)
gen1000_df['sum'] = gen1000_df.sum(axis=1)
cum_df['sum'] = cum_df.sum(axis=1)
#merge dataframe with summed matches with PCs for plotting
gen0_plot = pd.concat([gen0,gen0_df],axis=1)
gen200_plot = pd.concat([gen200,gen200_df],axis=1)
gen400_plot = pd.concat([gen400,gen400_df],axis=1)
gen600_plot = pd.concat([gen600,gen600_df],axis=1)
gen800_plot = pd.concat([gen800,gen800_df],axis=1)
gen1000_plot = pd.concat([gen1000,gen1000_df],axis=1)
cum_plot = pd.concat([cum,cum_df],axis=1)
#average the summed matches by evolution condition
mean0=[]
mean200=[]
mean400=[]
mean600=[]
mean800=[]
mean1000=[]
meancum=[]
mean0.append(gen0_plot.groupby('self')['sum'].mean())
mean200.append(gen200_plot.groupby('self')['sum'].mean())
mean400.append(gen400_plot.groupby('self')['sum'].mean())
mean600.append(gen600_plot.groupby('self')['sum'].mean())
mean800.append(gen800_plot.groupby('self')['sum'].mean())
mean1000.append(gen1000_plot.groupby('self')['sum'].mean())
meancum.append(cum_plot.groupby('self')['sum'].mean())
#reformatting epoch data as x series
epoch = [0,200,400,600,800,1000]
epoch_array = np.array(epoch)
mean0_df=(pd.DataFrame(mean0)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean200_df=(pd.DataFrame(mean200)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean400_df=(pd.DataFrame(mean400)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean600_df=(pd.DataFrame(mean600)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean800_df=(pd.DataFrame(mean800)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean1000_df=(pd.DataFrame(mean1000)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
meancum_df=(pd.DataFrame(meancum)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
true_means = pd.concat([mean0_df,mean200_df,mean400_df,mean600_df,mean800_df,mean1000_df],axis=0)
true_cum = pd.concat([meancum_df],axis=0)
## functions to check clustering by plotting
# #checking that these are actually neighbors
# check = pd.concat([cum,cum_neigh],axis=1)
# ax = check.plot.scatter(x='principle_component_1', y='principle_component_2', c='b')
# for i, txt in enumerate(check.self):
# ax.annotate(txt, (check.principle_component_1.iat[i],check.principle_component_2.iat[i]))
# plt.show()
# #color by number of matches to home envt
# neighplot0 = gen0_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='sum', colormap='viridis')
# neighplot200 = gen200_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='sum', colormap='viridis')
# neighplot400 = gen400_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='sum', colormap='viridis')
# neighplot600 = gen600_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='sum', colormap='viridis')
# neighplot800 = gen800_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='sum', colormap='viridis')
# neighplot1000 = gen1000_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='sum', colormap='viridis')
# neighplotcum = cum_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='sum', colormap='viridis')
# #color by number of matches to home envt
# numeric = {'YPD(37C)-H':1, 'YPD(37C)-D':2, 'YPD-D':3, 'YPD+AA-D':4}
# gen0_plot['numeric']=gen0_plot["self"].replace(numeric)
# gen200_plot['numeric']=gen200_plot["self"].replace(numeric)
# gen400_plot['numeric']=gen400_plot["self"].replace(numeric)
# gen600_plot['numeric']=gen600_plot["self"].replace(numeric)
# gen800_plot['numeric']=gen800_plot["self"].replace(numeric)
# gen1000_plot['numeric']=gen1000_plot["self"].replace(numeric)
# cum_plot['numeric']=cum_plot["self"].replace(numeric)
# neighplot0 = gen0_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='numeric', colormap='viridis')
# neighplot200 = gen200_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='numeric', colormap='viridis')
# neighplot400 = gen400_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='numeric', colormap='viridis')
# neighplot600 = gen600_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='numeric', colormap='viridis')
# neighplot800 = gen800_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='numeric', colormap='viridis')
# neighplot1000 = gen1000_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='numeric', colormap='viridis')
# neighplotcum = cum_plot.plot.scatter(x='principle_component_1', y='principle_component_2', c='numeric', colormap='viridis')
###Output
_____no_output_____
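###Markdown
For quick inspection before bootstrapping, the per-epoch true means can be labeled with their epoch (a small convenience sketch; the rows of true_means are in epoch order 0 through 1000):
###Code
true_means_labeled = true_means.copy()
true_means_labeled.insert(0, 'epoch', [0, 200, 400, 600, 800, 1000])
print(true_means_labeled)
###Output
_____no_output_____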
###Markdown
bootstrapping to calculate 95% CI
###Code
#running through files
import os
bs_e0=[]
bs_e200=[]
bs_e400=[]
bs_e600=[]
bs_e800=[]
bs_e1000=[]
bs_cum=[]
count = 0
#for each bootstrapped PCA, calculate the clustering metric for each epoch/envt
directory = 'threshold_bootstrap_adj'
for filename in os.listdir(directory):
for i in range(0,99):
if filename.endswith("_0_adj.csv") and ("threshold_"+str(i)+"_") in filename:
file = directory+filename
print(filename)
count = count + 1
gen0bs = (pd.DataFrame(pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
neigh.fit(gen0bs)
gen0_neighbs = (pd.DataFrame(neigh.kneighbors(gen0bs, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
gen0_dfbs = pd.DataFrame()
gen0_dictbs = (pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=',')).to_dict('series')
gen0_dfbs['self']=gen0_neighbs['self'].replace(gen0_dictbs['evoEnvt-ploidy'])
for neighbor in neighbors:
gen0_dfbs['neigh%s' % neighbor]=gen0_neighbs['neigh%s' % neighbor].replace(gen0_dictbs['evoEnvt-ploidy'])
gen0_dfbs['match%s' % neighbor] = gen0_dfbs.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen0_dfbs['sum'] = gen0_dfbs.sum(axis=1)
gen0_plotbs = pd.concat([gen0bs,gen0_dfbs],axis=1)
bs_e0.append(gen0_plotbs)
if filename.endswith("_200_adj.csv") and ("threshold_"+str(i)+"_") in filename:
file = directory+filename
print(filename)
gen200bs = (pd.DataFrame(pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
neigh.fit(gen200bs)
gen200_neighbs = (pd.DataFrame(neigh.kneighbors(gen200bs, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
gen200_dfbs = pd.DataFrame()
gen200_dictbs = (pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=',')).to_dict('series')
gen200_dfbs['self']=gen200_neighbs['self'].replace(gen200_dictbs['evoEnvt-ploidy'])
for neighbor in neighbors:
gen200_dfbs['neigh%s' % neighbor]=gen200_neighbs['neigh%s' % neighbor].replace(gen200_dictbs['evoEnvt-ploidy'])
gen200_dfbs['match%s' % neighbor] = gen200_dfbs.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen200_dfbs['sum'] = gen200_dfbs.sum(axis=1)
gen200_plotbs = pd.concat([gen200bs,gen200_dfbs],axis=1)
bs_e200.append(gen200_plotbs)
if filename.endswith("_400_adj.csv") and ("threshold_"+str(i)+"_") in filename:
file = directory+filename
print(filename)
gen400bs = (pd.DataFrame(pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
neigh.fit(gen400bs)
gen400_neighbs = (pd.DataFrame(neigh.kneighbors(gen400bs, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
gen400_dfbs = pd.DataFrame()
gen400_dictbs = (pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=',')).to_dict('series')
gen400_dfbs['self']=gen400_neighbs['self'].replace(gen400_dictbs['evoEnvt-ploidy'])
for neighbor in neighbors:
gen400_dfbs['neigh%s' % neighbor]=gen400_neighbs['neigh%s' % neighbor].replace(gen400_dictbs['evoEnvt-ploidy'])
gen400_dfbs['match%s' % neighbor] = gen400_dfbs.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen400_dfbs['sum'] = gen400_dfbs.sum(axis=1)
gen400_plotbs = pd.concat([gen400bs,gen400_dfbs],axis=1)
bs_e400.append(gen400_plotbs)
if filename.endswith("_600_adj.csv") and ("threshold_"+str(i)+"_") in filename:
file = directory+filename
print(filename)
gen600bs = (pd.DataFrame(pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
neigh.fit(gen600bs)
gen600_neighbs = (pd.DataFrame(neigh.kneighbors(gen600bs, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
gen600_dfbs = pd.DataFrame()
gen600_dictbs = (pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=',')).to_dict('series')
gen600_dfbs['self']=gen600_neighbs['self'].replace(gen600_dictbs['evoEnvt-ploidy'])
for neighbor in neighbors:
gen600_dfbs['neigh%s' % neighbor]=gen600_neighbs['neigh%s' % neighbor].replace(gen600_dictbs['evoEnvt-ploidy'])
gen600_dfbs['match%s' % neighbor] = gen600_dfbs.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen600_dfbs['sum'] = gen600_dfbs.sum(axis=1)
gen600_plotbs = pd.concat([gen600bs,gen600_dfbs],axis=1)
bs_e600.append(gen600_plotbs)
if filename.endswith("_800_adj.csv") and ("threshold_"+str(i)+"_") in filename:
file = directory+filename
print(filename)
gen800bs = (pd.DataFrame(pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
neigh.fit(gen800bs)
gen800_neighbs = (pd.DataFrame(neigh.kneighbors(gen800bs, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
gen800_dfbs = pd.DataFrame()
gen800_dictbs = (pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=',')).to_dict('series')
gen800_dfbs['self']=gen800_neighbs['self'].replace(gen800_dictbs['evoEnvt-ploidy'])
for neighbor in neighbors:
gen800_dfbs['neigh%s' % neighbor]=gen800_neighbs['neigh%s' % neighbor].replace(gen800_dictbs['evoEnvt-ploidy'])
gen800_dfbs['match%s' % neighbor] = gen800_dfbs.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen800_dfbs['sum'] = gen800_dfbs.sum(axis=1)
gen800_plotbs = pd.concat([gen800bs,gen800_dfbs],axis=1)
bs_e800.append(gen800_plotbs)
if filename.endswith("_1000_adj.csv") and ("threshold_"+str(i)+"_") in filename:
file = directory+filename
print(filename)
gen1000bs = (pd.DataFrame(pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
neigh.fit(gen1000bs)
gen1000_neighbs = (pd.DataFrame(neigh.kneighbors(gen1000bs, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
gen1000_dfbs = pd.DataFrame()
gen1000_dictbs = (pd.read_csv('threshold_bootstrap_adj/'+filename, delimiter=',')).to_dict('series')
gen1000_dfbs['self']=gen1000_neighbs['self'].replace(gen1000_dictbs['evoEnvt-ploidy'])
for neighbor in neighbors:
gen1000_dfbs['neigh%s' % neighbor]=gen1000_neighbs['neigh%s' % neighbor].replace(gen1000_dictbs['evoEnvt-ploidy'])
gen1000_dfbs['match%s' % neighbor] = gen1000_dfbs.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen1000_dfbs['sum'] = gen1000_dfbs.sum(axis=1)
gen1000_plotbs = pd.concat([gen1000bs,gen1000_dfbs],axis=1)
bs_e1000.append(gen1000_plotbs)
print(count)
count_2=0
directory = 'threshold_bootstrap_cum'
for filename in os.listdir(directory):
for i in range(0,99):
if filename.endswith("_adj.csv") and ("threshold_"+str(i)+"_") in filename:
file = directory+filename
print(filename)
count_2 = count_2 + 1
cumbs = (pd.DataFrame(pd.read_csv('threshold_bootstrap_cum/'+filename, delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
neigh.fit(cumbs)
cum_neighbs = (pd.DataFrame(neigh.kneighbors(cumbs, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
cum_dfbs = pd.DataFrame()
cum_dictbs = (pd.read_csv('threshold_bootstrap_cum/'+filename, delimiter=',')).to_dict('series')
cum_dfbs['self']=cum_neighbs['self'].replace(cum_dictbs['evoEnvt-ploidy'])
for neighbor in neighbors:
cum_dfbs['neigh%s' % neighbor]=cum_neighbs['neigh%s' % neighbor].replace(cum_dictbs['evoEnvt-ploidy'])
cum_dfbs['match%s' % neighbor] = cum_dfbs.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
cum_dfbs['sum'] = cum_dfbs.sum(axis=1)
cum_plotbs = pd.concat([cumbs,cum_dfbs],axis=1)
bs_cum.append(cum_plotbs)
print(count_2)
#calculate the mean clustering metric for each bootstrapped dataset by envt-ploidy
bs_meana = []
bs_meanb = []
bs_meanc = []
bs_meand = []
bs_meane = []
bs_meanf = []
bs_meang = []
for i in range(0,99):
bs_meana.append(bs_e0[i].groupby('self')['sum'].mean())
bs_meanb.append(bs_e200[i].groupby('self')['sum'].mean())
bs_meanc.append(bs_e400[i].groupby('self')['sum'].mean())
bs_meand.append(bs_e600[i].groupby('self')['sum'].mean())
bs_meane.append(bs_e800[i].groupby('self')['sum'].mean())
bs_meanf.append(bs_e1000[i].groupby('self')['sum'].mean())
bs_meang.append(bs_cum[i].groupby('self')['sum'].mean())
#calculate the overall mean by environment
bs_meana_df=pd.DataFrame(bs_meana)
bs_meanb_df=pd.DataFrame(bs_meanb)
bs_meanc_df=pd.DataFrame(bs_meanc)
bs_meand_df=pd.DataFrame(bs_meand)
bs_meane_df=pd.DataFrame(bs_meane)
bs_meanf_df=pd.DataFrame(bs_meanf)
bs_meang_df=pd.DataFrame(bs_meang)
#these dataframes are average clustering metrics for each bootstrapped PC for each env
#calculate 95% CI of bootstrapped data
from scipy.stats import sem, t
from scipy import mean
##merging datasets for plotting
epoch = [0,200,400,600,800,1000]
epoch_array = np.array(epoch)
epoch_df = pd.DataFrame(data=[epoch_array])
transposed = epoch_df.T
transposed=transposed.rename(columns={0:"epoch"})
#zero
bs_zero_mean = bs_meana_df.groupby(level=0).mean()
bs_zero_025 = bs_meana_df.groupby(level=0).quantile(0.025)
bs_zero_975 = bs_meana_df.groupby(level=0).quantile(0.975)
bs_zero_low = bs_zero_mean - bs_zero_025
bs_zero_hi = bs_zero_975 - bs_zero_mean
bs_zero_low=bs_zero_low.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_zero_hi=bs_zero_hi.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
#200
bs_200_mean = bs_meanb_df.groupby(level=0).mean()
bs_200_025 = bs_meanb_df.groupby(level=0).quantile(0.025)
bs_200_975 = bs_meanb_df.groupby(level=0).quantile(0.975)
bs_200_low = bs_200_mean - bs_200_025
bs_200_hi = bs_200_975 - bs_200_mean
bs_200_low=bs_200_low.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_200_hi=bs_200_hi.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
#400
bs_400_mean = bs_meanc_df.groupby(level=0).mean()
bs_400_025 = bs_meanc_df.groupby(level=0).quantile(0.025)
bs_400_975 = bs_meanc_df.groupby(level=0).quantile(0.975)
bs_400_low = bs_400_mean - bs_400_025
bs_400_hi = bs_400_975 - bs_400_mean
bs_400_low=bs_400_low.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_400_hi=bs_400_hi.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
#600
bs_600_mean = bs_meand_df.groupby(level=0).mean()
bs_600_025 = bs_meand_df.groupby(level=0).quantile(0.025)
bs_600_975 = bs_meand_df.groupby(level=0).quantile(0.975)
bs_600_low = bs_600_mean - bs_600_025
bs_600_hi = bs_600_975 - bs_600_mean
bs_600_low=bs_600_low.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_600_hi=bs_600_hi.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
#800
bs_800_mean = bs_meane_df.groupby(level=0).mean()
bs_800_025 = bs_meane_df.groupby(level=0).quantile(0.025)
bs_800_975 = bs_meane_df.groupby(level=0).quantile(0.975)
bs_800_low = bs_800_mean - bs_800_025
bs_800_hi = bs_800_975 - bs_800_mean
bs_800_low=bs_800_low.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_800_hi=bs_800_hi.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
#1000
bs_1000_mean = bs_meanf_df.groupby(level=0).mean()
bs_1000_025 = bs_meanf_df.groupby(level=0).quantile(0.025)
bs_1000_975 = bs_meanf_df.groupby(level=0).quantile(0.975)
bs_1000_low = bs_1000_mean - bs_1000_025
bs_1000_hi = bs_1000_975 - bs_1000_mean
bs_1000_low=bs_1000_low.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_1000_hi=bs_1000_hi.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
#cum
bs_cum_mean = bs_meang_df.groupby(level=0).mean()
bs_cum_025 = bs_meang_df.groupby(level=0).quantile(0.025)
bs_cum_975 = bs_meang_df.groupby(level=0).quantile(0.975)
bs_cum_low = bs_cum_mean - bs_cum_025
bs_cum_hi = bs_cum_975 - bs_cum_mean
bs_cum_low=bs_cum_low.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_cum_hi=bs_cum_hi.rename(columns={"evoEnvt-ploidy":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
bs_mean_all = pd.concat([bs_zero_mean,bs_200_mean,bs_400_mean,bs_600_mean,bs_800_mean,bs_1000_mean],axis=0)
bs_low_all = pd.concat([bs_zero_low,bs_200_low,bs_400_low,bs_600_low,bs_800_low,bs_1000_low],axis=0)
bs_high_all = pd.concat([bs_zero_hi,bs_200_hi,bs_400_hi,bs_600_hi,bs_800_hi,bs_1000_hi],axis=0)
bs_mean_cum = pd.concat([bs_cum_mean],axis=0)
bs_low_cum = pd.concat([bs_cum_low],axis=0)
bs_high_cum = pd.concat([bs_cum_hi],axis=0)
## put these arrays into a dataframe and save as csv for plotting with permuted data
true_means.to_csv('20210622_clustering_output/thres_true_means.csv', index=False)
true_cum.to_csv('20210622_clustering_output/thres_true_cum.csv', index=False)
bs_low_all.to_csv('20210622_clustering_output/thres_bs_low_all.csv', index=False)
bs_high_all.to_csv('20210622_clustering_output/thres_bs_high_all.csv', index=False)
bs_low_cum.to_csv('20210622_clustering_output/thres_bs_low_cum.csv', index=False)
bs_high_cum.to_csv('20210622_clustering_output/thres_bs_high_cum.csv', index=False)
###Output
20200509_PCA_threshold_55_800_adj.csv
20200509_PCA_threshold_26_200_adj.csv
20200509_PCA_threshold_92_0_adj.csv
20200509_PCA_threshold_36_400_adj.csv
20200509_PCA_threshold_21_0_adj.csv
20200509_PCA_threshold_25_800_adj.csv
20200509_PCA_threshold_91_600_adj.csv
20200509_PCA_threshold_56_200_adj.csv
20200509_PCA_threshold_46_400_adj.csv
20200509_PCA_threshold_73_1000_adj.csv
20200509_PCA_threshold_26_1000_adj.csv
20200509_PCA_threshold_18_400_adj.csv
20200509_PCA_threshold_86_800_adj.csv
20200509_PCA_threshold_32_600_adj.csv
20200509_PCA_threshold_9_600_adj.csv
20200509_PCA_threshold_35_0_adj.csv
20200509_PCA_threshold_74_1000_adj.csv
20200509_PCA_threshold_21_1000_adj.csv
20200509_PCA_threshold_68_400_adj.csv
20200509_PCA_threshold_85_200_adj.csv
20200509_PCA_threshold_42_600_adj.csv
20200509_PCA_threshold_86_0_adj.csv
20200509_PCA_threshold_78_200_adj.csv
20200509_PCA_threshold_95_400_adj.csv
20200509_PCA_threshold_17_800_adj.csv
20200509_PCA_threshold_89_400_adj.csv
20200509_PCA_threshold_64_200_adj.csv
20200509_PCA_threshold_74_400_adj.csv
20200509_PCA_threshold_14_0_adj.csv
20200509_PCA_threshold_67_800_adj.csv
20200509_PCA_threshold_14_200_adj.csv
20200509_PCA_threshold_2_800_adj.csv
20200509_PCA_threshold_70_600_adj.csv
20200509_PCA_threshold_39_800_adj.csv
20200509_PCA_threshold_61_1000_adj.csv
20200509_PCA_threshold_34_1000_adj.csv
20200509_PCA_threshold_1_200_adj.csv
20200509_PCA_threshold_49_800_adj.csv
20200509_PCA_threshold_66_1000_adj.csv
20200509_PCA_threshold_33_1000_adj.csv
20200509_PCA_threshold_65_600_adj.csv
20200509_PCA_threshold_56_0_adj.csv
20200509_PCA_threshold_98_600_adj.csv
20200509_PCA_threshold_15_600_adj.csv
20200509_PCA_threshold_4_400_adj.csv
20200509_PCA_threshold_38_0_adj.csv
20200509_PCA_threshold_71_200_adj.csv
20200509_PCA_threshold_61_400_adj.csv
20200509_PCA_threshold_50_1000_adj.csv
20200509_PCA_threshold_72_800_adj.csv
20200509_PCA_threshold_0_600_adj.csv
20200509_PCA_threshold_42_0_adj.csv
20200509_PCA_threshold_11_400_adj.csv
20200509_PCA_threshold_57_1000_adj.csv
20200509_PCA_threshold_19_0_adj.csv
20200509_PCA_threshold_93_800_adj.csv
20200509_PCA_threshold_27_600_adj.csv
20200509_PCA_threshold_57_600_adj.csv
20200509_PCA_threshold_63_0_adj.csv
20200509_PCA_threshold_90_200_adj.csv
20200509_PCA_threshold_80_400_adj.csv
20200509_PCA_threshold_33_200_adj.csv
20200509_PCA_threshold_42_1000_adj.csv
20200509_PCA_threshold_17_1000_adj.csv
20200509_PCA_threshold_40_800_adj.csv
20200509_PCA_threshold_8_200_adj.csv
20200509_PCA_threshold_77_0_adj.csv
20200509_PCA_threshold_23_400_adj.csv
20200509_PCA_threshold_43_200_adj.csv
20200509_PCA_threshold_30_800_adj.csv
20200509_PCA_threshold_84_600_adj.csv
20200509_PCA_threshold_45_1000_adj.csv
20200509_PCA_threshold_0_0_adj.csv
20200509_PCA_threshold_10_1000_adj.csv
20200509_PCA_threshold_79_600_adj.csv
20200509_PCA_threshold_53_400_adj.csv
20200509_PCA_threshold_8_1000_adj.csv
20200509_PCA_threshold_14_400_adj.csv
20200509_PCA_threshold_68_0_adj.csv
20200509_PCA_threshold_5_600_adj.csv
20200509_PCA_threshold_77_800_adj.csv
20200509_PCA_threshold_64_400_adj.csv
20200509_PCA_threshold_89_200_adj.csv
20200509_PCA_threshold_12_0_adj.csv
20200509_PCA_threshold_74_200_adj.csv
20200509_PCA_threshold_1_400_adj.csv
20200509_PCA_threshold_59_800_adj.csv
20200509_PCA_threshold_98_1000_adj.csv
20200509_PCA_threshold_32_1000_adj.csv
20200509_PCA_threshold_67_1000_adj.csv
20200509_PCA_threshold_10_600_adj.csv
20200509_PCA_threshold_29_800_adj.csv
20200509_PCA_threshold_35_1000_adj.csv
20200509_PCA_threshold_60_1000_adj.csv
20200509_PCA_threshold_60_600_adj.csv
20200509_PCA_threshold_56_400_adj.csv
20200509_PCA_threshold_27_0_adj.csv
20200509_PCA_threshold_46_200_adj.csv
20200509_PCA_threshold_81_600_adj.csv
20200509_PCA_threshold_35_800_adj.csv
20200509_PCA_threshold_94_0_adj.csv
20200509_PCA_threshold_26_400_adj.csv
20200509_PCA_threshold_36_200_adj.csv
20200509_PCA_threshold_45_800_adj.csv
20200509_PCA_threshold_85_400_adj.csv
20200509_PCA_threshold_68_200_adj.csv
20200509_PCA_threshold_20_1000_adj.csv
20200509_PCA_threshold_75_1000_adj.csv
20200509_PCA_threshold_80_0_adj.csv
20200509_PCA_threshold_49_0_adj.csv
20200509_PCA_threshold_52_600_adj.csv
20200509_PCA_threshold_95_200_adj.csv
20200509_PCA_threshold_78_400_adj.csv
20200509_PCA_threshold_18_200_adj.csv
20200509_PCA_threshold_33_0_adj.csv
20200509_PCA_threshold_27_1000_adj.csv
20200509_PCA_threshold_72_1000_adj.csv
20200509_PCA_threshold_22_600_adj.csv
20200509_PCA_threshold_96_800_adj.csv
20200509_PCA_threshold_65_0_adj.csv
20200509_PCA_threshold_90_400_adj.csv
20200509_PCA_threshold_80_200_adj.csv
20200509_PCA_threshold_47_600_adj.csv
20200509_PCA_threshold_37_600_adj.csv
20200509_PCA_threshold_83_800_adj.csv
20200509_PCA_threshold_43_400_adj.csv
20200509_PCA_threshold_6_0_adj.csv
20200509_PCA_threshold_69_600_adj.csv
20200509_PCA_threshold_9_1000_adj.csv
20200509_PCA_threshold_11_1000_adj.csv
20200509_PCA_threshold_94_600_adj.csv
20200509_PCA_threshold_44_1000_adj.csv
20200509_PCA_threshold_20_800_adj.csv
20200509_PCA_threshold_53_200_adj.csv
20200509_PCA_threshold_33_400_adj.csv
20200509_PCA_threshold_19_600_adj.csv
20200509_PCA_threshold_50_800_adj.csv
20200509_PCA_threshold_16_1000_adj.csv
20200509_PCA_threshold_43_1000_adj.csv
20200509_PCA_threshold_23_200_adj.csv
20200509_PCA_threshold_71_0_adj.csv
20200509_PCA_threshold_8_400_adj.csv
20200509_PCA_threshold_4_200_adj.csv
20200509_PCA_threshold_88_600_adj.csv
20200509_PCA_threshold_50_0_adj.csv
20200509_PCA_threshold_75_600_adj.csv
20200509_PCA_threshold_7_800_adj.csv
20200509_PCA_threshold_56_1000_adj.csv
20200509_PCA_threshold_44_0_adj.csv
20200509_PCA_threshold_62_800_adj.csv
20200509_PCA_threshold_11_200_adj.csv
20200509_PCA_threshold_51_1000_adj.csv
20200509_PCA_threshold_71_400_adj.csv
20200509_PCA_threshold_12_800_adj.csv
20200509_PCA_threshold_61_200_adj.csv
20200509_PCA_threshold_61_800_adj.csv
20200509_PCA_threshold_12_200_adj.csv
20200509_PCA_threshold_34_0_adj.csv
20200509_PCA_threshold_28_600_adj.csv
20200509_PCA_threshold_11_800_adj.csv
20200509_PCA_threshold_62_200_adj.csv
20200509_PCA_threshold_87_0_adj.csv
20200509_PCA_threshold_72_400_adj.csv
20200509_PCA_threshold_58_600_adj.csv
20200509_PCA_threshold_7_200_adj.csv
20200509_PCA_threshold_93_0_adj.csv
20200509_PCA_threshold_4_800_adj.csv
20200509_PCA_threshold_76_600_adj.csv
20200509_PCA_threshold_20_0_adj.csv
20200509_PCA_threshold_23_800_adj.csv
20200509_PCA_threshold_97_600_adj.csv
20200509_PCA_threshold_50_200_adj.csv
20200509_PCA_threshold_40_400_adj.csv
20200509_PCA_threshold_53_800_adj.csv
20200509_PCA_threshold_20_200_adj.csv
20200509_PCA_threshold_30_400_adj.csv
20200509_PCA_threshold_83_200_adj.csv
20200509_PCA_threshold_49_1000_adj.csv
20200509_PCA_threshold_44_600_adj.csv
20200509_PCA_threshold_4_1000_adj.csv
20200509_PCA_threshold_15_0_adj.csv
20200509_PCA_threshold_93_400_adj.csv
20200509_PCA_threshold_80_800_adj.csv
20200509_PCA_threshold_34_600_adj.csv
20200509_PCA_threshold_3_1000_adj.csv
20200509_PCA_threshold_51_600_adj.csv
20200509_PCA_threshold_96_200_adj.csv
20200509_PCA_threshold_39_0_adj.csv
20200509_PCA_threshold_86_400_adj.csv
20200509_PCA_threshold_18_800_adj.csv
20200509_PCA_threshold_95_800_adj.csv
20200509_PCA_threshold_21_600_adj.csv
20200509_PCA_threshold_43_0_adj.csv
20200509_PCA_threshold_68_800_adj.csv
20200509_PCA_threshold_45_200_adj.csv
20200509_PCA_threshold_36_800_adj.csv
20200509_PCA_threshold_82_600_adj.csv
20200509_PCA_threshold_78_1000_adj.csv
20200509_PCA_threshold_87_1000_adj.csv
20200509_PCA_threshold_57_0_adj.csv
20200509_PCA_threshold_55_400_adj.csv
20200509_PCA_threshold_35_200_adj.csv
20200509_PCA_threshold_46_800_adj.csv
20200509_PCA_threshold_80_1000_adj.csv
20200509_PCA_threshold_25_400_adj.csv
20200509_PCA_threshold_13_600_adj.csv
20200509_PCA_threshold_39_400_adj.csv
20200509_PCA_threshold_29_200_adj.csv
20200509_PCA_threshold_2_400_adj.csv
20200509_PCA_threshold_76_0_adj.csv
20200509_PCA_threshold_63_600_adj.csv
20200509_PCA_threshold_1_0_adj.csv
20200509_PCA_threshold_49_400_adj.csv
20200509_PCA_threshold_59_200_adj.csv
20200509_PCA_threshold_74_800_adj.csv
20200509_PCA_threshold_6_600_adj.csv
20200509_PCA_threshold_18_0_adj.csv
20200509_PCA_threshold_95_1000_adj.csv
20200509_PCA_threshold_89_800_adj.csv
20200509_PCA_threshold_17_400_adj.csv
###Markdown
calc and format for plotting
###Code
def ap_ttest(perm_array, true):
    # One-sided empirical p-value: the fraction of the 1,000 permuted clustering
    # scores that exceed the observed (true) score, multiplied by 24, presumably
    # a Bonferroni-style correction for the number of comparisons performed.
    # Vectorized equivalent: (perm_array > true).mean() * 24 when len(perm_array) == 1000.
    count = 0
    for val in perm_array:
        if val > true:
            count = count + 1
    return (count / 1000) * 24
filtering='thres'
true_means = pd.DataFrame(pd.read_csv('20210622_clustering_output/%s_true_means.csv' % filtering,delimiter=','))
true_cum = pd.DataFrame(pd.read_csv('20210622_clustering_output/%s_true_cum.csv'% filtering,delimiter=','))
bs_low_all = pd.DataFrame(pd.read_csv('20210622_clustering_output/%s_bs_low_all.csv'% filtering,delimiter=','))
bs_high_all = pd.DataFrame(pd.read_csv('20210622_clustering_output/%s_bs_high_all.csv'% filtering,delimiter=','))
bs_low_cum = pd.DataFrame(pd.read_csv('20210622_clustering_output/%s_bs_low_cum.csv'% filtering,delimiter=','))
bs_high_cum = pd.DataFrame(pd.read_csv('20210622_clustering_output/%s_bs_high_cum.csv'% filtering,delimiter=','))
#reformat for plotting
#reformatting epoch data as x series
epoch = ["0","200","400","600","800","1000"]
cum = ["all"]
epoch_array = np.array(epoch)
cum_array = np.array(cum)
############################################ making arrays for plotting
#YPD_s30 data
true_YPD_mean = np.array(true_means.YPD)
bs_YPD_lo = np.array(bs_low_all.YPD)
bs_YPD_hi = np.array(bs_high_all.YPD)
bs_YPD_CI = [bs_YPD_lo,bs_YPD_hi]
#YPD_s37 data
true_YPD37_mean = np.array(true_means.YPD37)
bs_YPD37_lo = np.array(bs_low_all.YPD37)
bs_YPD37_hi = np.array(bs_high_all.YPD37)
bs_YPD37_CI = [bs_YPD37_lo,bs_YPD37_hi]
#YPD_s37H data
true_YPD37H_mean = np.array(true_means.YPD37H)
bs_YPD37H_lo = np.array(bs_low_all.YPD37H)
bs_YPD37H_hi = np.array(bs_high_all.YPD37H)
bs_YPD37H_CI = [bs_YPD37H_lo,bs_YPD37H_hi]
#YPD_sAA data
true_YPDAA_mean = np.array(true_means.YPDAA)
bs_YPDAA_lo = np.array(bs_low_all.YPDAA)
bs_YPDAA_hi = np.array(bs_high_all.YPDAA)
bs_YPDAA_CI = [bs_YPDAA_lo,bs_YPDAA_hi]
########################################same for cume data
#YPD_s30 data
true_YPD_cum = np.array(true_cum.YPD)
bs_YPD_lo_c = np.array(bs_low_cum.YPD)
bs_YPD_hi_c = np.array(bs_high_cum.YPD)
bs_YPD_CI_c = [bs_YPD_lo_c,bs_YPD_hi_c]
#YPD_s37 data
true_YPD37_cum = np.array(true_cum.YPD37)
bs_YPD37_lo_c = np.array(bs_low_cum.YPD37)
bs_YPD37_hi_c = np.array(bs_high_cum.YPD37)
bs_YPD37_CI_c = [bs_YPD37_lo_c,bs_YPD37_hi_c]
#YPD_s37H data
true_YPD37H_cum = np.array(true_cum.YPD37H)
bs_YPD37H_lo_c = np.array(bs_low_cum.YPD37H)
bs_YPD37H_hi_c = np.array(bs_high_cum.YPD37H)
bs_YPD37H_CI_c = [bs_YPD37H_lo_c,bs_YPD37H_hi_c]
#YPD_sAA data
true_YPDAA_cum = np.array(true_cum.YPDAA)
bs_YPDAA_lo_c = np.array(bs_low_cum.YPDAA)
bs_YPDAA_hi_c = np.array(bs_high_cum.YPDAA)
bs_YPDAA_CI_c = [bs_YPDAA_lo_c,bs_YPDAA_hi_c]
#importing permuted data for plotting
df1 = pd.DataFrame(pd.read_csv('20210416_permute_1k_thres.csv', delimiter=','))
df2 = pd.DataFrame(pd.read_csv('20210416_permute_1k_cum_thres.csv', delimiter=','))
df2['gen']='x'
#formatting into arrays for t-test
import scipy.stats as st
from scipy.stats import ttest_ind_from_stats
#37
g0_37 = np.array(df1[df1['gen']==0]['YPD37'])
g0_37_p = ap_ttest(g0_37,true_YPD37_mean[0])
g200_37 = np.array(df1[df1['gen']==200]['YPD37'])
g200_37_p = ap_ttest(g200_37,true_YPD37_mean[1])
g400_37 = np.array(df1[df1['gen']==400]['YPD37'])
g400_37_p = ap_ttest(g400_37,true_YPD37_mean[2])
g600_37 = np.array(df1[df1['gen']==600]['YPD37'])
g600_37_p = ap_ttest(g600_37,true_YPD37_mean[3])
g800_37 = np.array(df1[df1['gen']==800]['YPD37'])
g800_37_p = ap_ttest(g800_37,true_YPD37_mean[4])
g1000_37 = np.array(df1[df1['gen']==1000]['YPD37'])
g1000_37_p = ap_ttest(g1000_37,true_YPD37_mean[5])
cum_37 = np.array(df2['YPD37'])
cum_37_p = ap_ttest(cum_37,true_YPD37_cum[0])
#hap
g0_37h = np.array(df1[df1['gen']==0]['YPD37H'])
g0_37h_p = ap_ttest(g0_37h,true_YPD37H_mean[0])
g200_37h = np.array(df1[df1['gen']==200]['YPD37H'])
g200_37h_p = ap_ttest(g200_37h,true_YPD37H_mean[1])
g400_37h = np.array(df1[df1['gen']==400]['YPD37H'])
g400_37h_p = ap_ttest(g400_37h,true_YPD37H_mean[2])
g600_37h = np.array(df1[df1['gen']==600]['YPD37H'])
g600_37h_p = ap_ttest(g600_37h,true_YPD37H_mean[3])
g800_37h = np.array(df1[df1['gen']==800]['YPD37H'])
g800_37h_p = ap_ttest(g800_37h,true_YPD37H_mean[4])
g1000_37h = np.array(df1[df1['gen']==1000]['YPD37H'])
g1000_37h_p = ap_ttest(g1000_37h,true_YPD37H_mean[5])
cum_37h = np.array(df2['YPD37H'])
cum_37h_p = ap_ttest(cum_37h,true_YPD37H_cum[0])
#AA
g0_AA = np.array(df1[df1['gen']==0]['YPDAA'])
g0_AA_p = ap_ttest(g0_AA,true_YPDAA_mean[0])
g200_AA = np.array(df1[df1['gen']==200]['YPDAA'])
g200_AA_p = ap_ttest(g200_AA,true_YPDAA_mean[1])
g400_AA = np.array(df1[df1['gen']==400]['YPDAA'])
g400_AA_p = ap_ttest(g400_AA,true_YPDAA_mean[2])
g600_AA = np.array(df1[df1['gen']==600]['YPDAA'])
g600_AA_p = ap_ttest(g600_AA,true_YPDAA_mean[3])
g800_AA = np.array(df1[df1['gen']==800]['YPDAA'])
g800_AA_p = ap_ttest(g800_AA,true_YPDAA_mean[4])
g1000_AA = np.array(df1[df1['gen']==1000]['YPDAA'])
g1000_AA_p = ap_ttest(g1000_AA,true_YPDAA_mean[5])
cum_AA = np.array(df2['YPDAA'])
cum_AA_p = ap_ttest(cum_AA,true_YPDAA_cum[0])
#30
g0_30 = np.array(df1[df1['gen']==0]['YPD'])
g0_30_p = ap_ttest(g0_30,true_YPD_mean[0])
g200_30 = np.array(df1[df1['gen']==200]['YPD'])
g200_30_p = ap_ttest(g200_30,true_YPD_mean[1])
g400_30 = np.array(df1[df1['gen']==400]['YPD'])
g400_30_p = ap_ttest(g400_30,true_YPD_mean[2])
g600_30 = np.array(df1[df1['gen']==600]['YPD'])
g600_30_p = ap_ttest(g600_30,true_YPD_mean[3])
g800_30 = np.array(df1[df1['gen']==800]['YPD'])
g800_30_p = ap_ttest(g800_30,true_YPD_mean[4])
g1000_30 = np.array(df1[df1['gen']==1000]['YPD'])
g1000_30_p = ap_ttest(g1000_30,true_YPD_mean[5])
cum_30 = np.array(df2['YPD'])  # the original read the YPD37H column here, which appears to be a copy-paste slip in the YPD block
cum_30_p = ap_ttest(cum_30,true_YPD_cum[0])
##taking average and mean for plotting
p_true_means = df1.groupby('gen').mean()
perm_025 = df1.groupby('gen').quantile(0.025)
perm_975 = df1.groupby('gen').quantile(0.975)
p_bs_low_all = p_true_means - perm_025
p_bs_high_all = perm_975 - p_true_means
p_true_cum = df2.groupby('gen').mean()
cum_025 = df2.groupby('gen').quantile(0.025)
cum_975 = df2.groupby('gen').quantile(0.975)
p_bs_low_cum = p_true_cum - cum_025
p_bs_high_cum = cum_975 - p_true_cum
pval_dict = {'YPD':(g0_30_p,g200_30_p,g400_30_p,g600_30_p,g800_30_p,g1000_30_p,cum_30_p),'YPDAA':(g0_AA_p,g200_AA_p,g400_AA_p,g600_AA_p,g800_AA_p,g1000_AA_p,cum_AA_p),'YPD37H':(g0_37h_p,g200_37h_p,g400_37h_p,g600_37h_p,g800_37h_p,g1000_37h_p,cum_37h_p),'YPD37':(g0_37_p,g200_37_p,g400_37_p,g600_37_p,g800_37_p,g1000_37_p,cum_37_p)}
pval_df = pd.DataFrame(pval_dict)
pval_df
#reformat for plotting
#reformatting epoch data as x series
epoch = ["0","200","400","600","800","1000"]
cum = ["all"]
epoch_array = np.array(epoch)
cum_array = np.array(cum)
############################################ making arrays for plotting
#YPD_s30 data
true_YPD_mean = np.array(true_means.YPD)
bs_YPD_lo = np.array(bs_low_all.YPD)
bs_YPD_hi = np.array(bs_high_all.YPD)
bs_YPD_CI = [bs_YPD_lo,bs_YPD_hi]
#YPD_s37 data
true_YPD37_mean = np.array(true_means.YPD37)
bs_YPD37_lo = np.array(bs_low_all.YPD37)
bs_YPD37_hi = np.array(bs_high_all.YPD37)
bs_YPD37_CI = [bs_YPD37_lo,bs_YPD37_hi]
#YPD_s37H data
true_YPD37H_mean = np.array(true_means.YPD37H)
bs_YPD37H_lo = np.array(bs_low_all.YPD37H)
bs_YPD37H_hi = np.array(bs_high_all.YPD37H)
bs_YPD37H_CI = [bs_YPD37H_lo,bs_YPD37H_hi]
#YPD_sAA data
true_YPDAA_mean = np.array(true_means.YPDAA)
bs_YPDAA_lo = np.array(bs_low_all.YPDAA)
bs_YPDAA_hi = np.array(bs_high_all.YPDAA)
bs_YPDAA_CI = [bs_YPDAA_lo,bs_YPDAA_hi]
########################################same for cume data
#YPD_s30 data
true_YPD_cum = np.array(true_cum.YPD)
bs_YPD_lo_c = np.array(bs_low_cum.YPD)
bs_YPD_hi_c = np.array(bs_high_cum.YPD)
bs_YPD_CI_c = [bs_YPD_lo_c,bs_YPD_hi_c]
#YPD_s37 data
true_YPD37_cum = np.array(true_cum.YPD37)
bs_YPD37_lo_c = np.array(bs_low_cum.YPD37)
bs_YPD37_hi_c = np.array(bs_high_cum.YPD37)
bs_YPD37_CI_c = [bs_YPD37_lo_c,bs_YPD37_hi_c]
#YPD_s37H data
true_YPD37H_cum = np.array(true_cum.YPD37H)
bs_YPD37H_lo_c = np.array(bs_low_cum.YPD37H)
bs_YPD37H_hi_c = np.array(bs_high_cum.YPD37H)
bs_YPD37H_CI_c = [bs_YPD37H_lo_c,bs_YPD37H_hi_c]
#YPD_sAA data
true_YPDAA_cum = np.array(true_cum.YPDAA)
bs_YPDAA_lo_c = np.array(bs_low_cum.YPDAA)
bs_YPDAA_hi_c = np.array(bs_high_cum.YPDAA)
bs_YPDAA_CI_c = [bs_YPDAA_lo_c,bs_YPDAA_hi_c]
########permuted data
#YPD_s30 data
p_true_YPD_mean = np.array(p_true_means.YPD)
p_bs_YPD_lo = np.array(p_bs_low_all.YPD)
p_bs_YPD_hi = np.array(p_bs_high_all.YPD)
p_bs_YPD_CI = [p_bs_YPD_lo,p_bs_YPD_hi]
#YPD_s37 data
p_true_YPD37_mean = np.array(p_true_means.YPD37)
p_bs_YPD37_lo = np.array(p_bs_low_all.YPD37)
p_bs_YPD37_hi = np.array(p_bs_high_all.YPD37)
p_bs_YPD37_CI = [p_bs_YPD37_lo,p_bs_YPD37_hi]
#YPD_s37H data
p_true_YPD37H_mean = np.array(p_true_means.YPD37H)
p_bs_YPD37H_lo = np.array(p_bs_low_all.YPD37H)
p_bs_YPD37H_hi = np.array(p_bs_high_all.YPD37H)
p_bs_YPD37H_CI = [p_bs_YPD37H_lo,p_bs_YPD37H_hi]
#YPD_sAA data
p_true_YPDAA_mean = np.array(p_true_means.YPDAA)
p_bs_YPDAA_lo = np.array(p_bs_low_all.YPDAA)
p_bs_YPDAA_hi = np.array(p_bs_high_all.YPDAA)
p_bs_YPDAA_CI = [p_bs_YPDAA_lo,p_bs_YPDAA_hi]
########################################same for cume data
#YPD_s30 data
p_true_YPD_cum = np.array(p_true_cum.YPD)
p_bs_YPD_lo_c = np.array(p_bs_low_cum.YPD)
p_bs_YPD_hi_c = np.array(p_bs_high_cum.YPD)
p_bs_YPD_CI_c = [p_bs_YPD_lo_c,p_bs_YPD_hi_c]
#YPD_s37 data
p_true_YPD37_cum = np.array(p_true_cum.YPD37)
p_bs_YPD37_lo_c = np.array(p_bs_low_cum.YPD37)
p_bs_YPD37_hi_c = np.array(p_bs_high_cum.YPD37)
p_bs_YPD37_CI_c = [p_bs_YPD37_lo_c,p_bs_YPD37_hi_c]
#YPD_s37H data
p_true_YPD37H_cum = np.array(p_true_cum.YPD37H)
p_bs_YPD37H_lo_c = np.array(p_bs_low_cum.YPD37H)
p_bs_YPD37H_hi_c = np.array(p_bs_high_cum.YPD37H)
p_bs_YPD37H_CI_c = [p_bs_YPD37H_lo_c,p_bs_YPD37H_hi_c]
#YPD_sAA data
p_true_YPDAA_cum = np.array(p_true_cum.YPDAA)
p_bs_YPDAA_lo_c = np.array(p_bs_low_cum.YPDAA)
p_bs_YPDAA_hi_c = np.array(p_bs_high_cum.YPDAA)
p_bs_YPDAA_CI_c = [p_bs_YPDAA_lo_c,p_bs_YPDAA_hi_c]
###Output
_____no_output_____
###Markdown
plotting
###Code
#setting up plot layout
sns.set_style("ticks")
import matplotlib
matplotlib.rcParams['font.sans-serif'] = "Arial"
# Then, "ALWAYS use sans-serif fonts"
matplotlib.rcParams['font.family'] = "sans-serif"
colors=['#2497FD','#025F17','#E1AB06','#D81B60']
#plotting
fig,ax1 = plt.subplots(figsize=(2.72,1.75))
#YPD
ax1.plot(epoch_array,true_YPD_mean, linewidth=0.8, marker='o',markersize=5,color=colors[0],label="YPD")
ax1.errorbar(epoch_array,true_YPD_mean, linewidth=0.8, yerr=bs_YPD_CI, color=colors[0], label=None)
#YPD cumulative
ax1.plot(cum_array, true_YPD_cum, marker='o', markersize=5,color=colors[0])
ax1.errorbar(cum_array, true_YPD_cum, linewidth=0.8, yerr=bs_YPD_CI_c, color=colors[0], label=None)
#YPD AA
ax1.plot(epoch_array,true_YPDAA_mean, linewidth=0.8, marker='o',markersize=5,color=colors[1],label="YPD + Acetic acid")
ax1.errorbar(epoch_array,true_YPDAA_mean, linewidth=0.8, yerr=bs_YPDAA_CI, color=colors[1], label=None)
#YPD AA cumulative
ax1.plot(cum_array, true_YPDAA_cum, marker='o', markersize=5,color=colors[1])
ax1.errorbar(cum_array, true_YPDAA_cum, linewidth=0.8, yerr=bs_YPDAA_CI_c, color=colors[1], label=None)
#YPD 37
ax1.plot(epoch_array,true_YPD37_mean, linewidth=0.8, marker='o',markersize=5,color=colors[2],label="YPD, 37˚C (Dip.)")
ax1.errorbar(epoch_array,true_YPD37_mean, linewidth=0.8, yerr=bs_YPD37_CI, color=colors[2], label=None)
#YPD 37 cumulative
ax1.plot(cum_array, true_YPD37_cum, marker='o', markersize=5,color=colors[2])
ax1.errorbar(cum_array, true_YPD37_cum, linewidth=0.8, yerr=bs_YPD37_CI_c, color=colors[2], label=None)
#YPD HAP
ax1.plot(epoch_array,true_YPD37H_mean, linewidth=0.8, marker='o',markersize=5,color=colors[3],label="YPD, 37˚C (Hap.)")
ax1.errorbar(epoch_array,true_YPD37H_mean, linewidth=0.8, yerr=bs_YPD37H_CI, color=colors[3], label=None)
#YPD HAP cumulative
ax1.plot(cum_array, true_YPD37H_cum, marker='o', markersize=5,color=colors[3])
ax1.errorbar(cum_array, true_YPD37H_cum, linewidth=0.8, yerr=bs_YPD37H_CI_c, color=colors[3], label=None)
# uncomment to show permuted data
# #YPD
# ax1.plot(epoch_array,p_true_YPD_mean, linewidth=0.8, marker='^',markersize=5,color=colors[0],linestyle='--',dashes=(4, 2),label="YPD")
# ax1.errorbar(epoch_array,p_true_YPD_mean, linewidth=0.8, yerr=p_bs_YPD_CI, color=colors[0], label=None,linestyle='')
# #YPD cumulative
# ax1.plot(cum_array, p_true_YPD_cum, marker='^', markersize=5,color=colors[0])
# ax1.errorbar(cum_array, p_true_YPD_cum, linewidth=0.8, yerr=p_bs_YPD_CI_c, color=colors[0], label=None,linestyle='')
# #YPD AA
# ax1.plot(epoch_array,p_true_YPDAA_mean, marker='^',linewidth=0.8, markersize=5,color=colors[1],linestyle='--',dashes=(4, 2),label="YPD + Acetic acid")
# ax1.errorbar(epoch_array,p_true_YPDAA_mean, linewidth=0.8, yerr=p_bs_YPDAA_CI, color=colors[1], label=None,linestyle='')
# #YPD AA cumulative
# ax1.plot(cum_array, p_true_YPDAA_cum, marker='^', markersize=5,color=colors[1])
# ax1.errorbar(cum_array, p_true_YPDAA_cum, linewidth=0.8, yerr=p_bs_YPDAA_CI_c, color=colors[1], label=None,linestyle='')
# #YPD 37
# ax1.plot(epoch_array,p_true_YPD37_mean, marker='^',linewidth=0.8, markersize=5,color=colors[2],linestyle='--',dashes=(4, 2),label="YPD, 37˚C (Dip.)")
# ax1.errorbar(epoch_array,p_true_YPD37_mean, linewidth=0.8, yerr=p_bs_YPD37_CI, color=colors[2], label=None,linestyle='')
# #YPD 37 cumulative
# ax1.plot(cum_array, p_true_YPD37_cum, marker='^', markersize=5,color=colors[2])
# ax1.errorbar(cum_array, p_true_YPD37_cum, linewidth=0.8, yerr=p_bs_YPD37_CI_c, color=colors[2], label=None,linestyle='')
# #YPD HAP
# ax1.plot(epoch_array,p_true_YPD37H_mean, marker='^',linewidth=0.8, markersize=5,color=colors[3],linestyle='--', dashes=(4, 2),label="YPD, 37˚C (Hap.)")
# ax1.errorbar(epoch_array,p_true_YPD37H_mean, linewidth=0.8, yerr=p_bs_YPD37H_CI, color=colors[3], label=None,linestyle='')
# #YPD HAP cumulative
# ax1.plot(cum_array, p_true_YPD37H_cum, marker='^', markersize=5,color=colors[3])
# ax1.errorbar(cum_array, p_true_YPD37H_cum, linewidth=0.8, yerr=p_bs_YPD37H_CI_c, color=colors[3], label=None,linestyle='')
#plt.title('PCA clustering over time', fontweight='bold', fontsize=8)
plt.ylabel("Mean clustering coefficient", fontsize=8)
plt.xlabel("Generation", fontsize=8)
#plt.legend(fontsize=7,loc='lower right',title="Evolution condition:",title_fontsize=7)
plt.tick_params(direction='out', length=3, width=1)
plt.setp(ax1.get_xticklabels(), fontsize=7)
plt.setp(ax1.get_yticklabels(), fontsize=7)
plt.ylim((0,5))
plt.plot()
plt.savefig('20210622_PCAcluster_thres_CI_adj_noperm.png',bbox_inches = 'tight', dpi = 4000)
###Output
_____no_output_____ |
Argentina - Mondiola Rock - 90 pts/Practica/Final Modelos y Simulaciones/Modelos y Simulaciones.ipynb | ###Markdown
Unix Generator

Using the UNIX random-number generator, but with the coefficients of the Visual Basic generator, program a series of 60 random numbers in a spreadsheet, verifying that the same seed produces the same series. Use as a "target" a series of the same size generated with a Visual Basic for Applications macro with the same seed (NOTE: the two series will not give the same values, even with the same seed, because the algorithms are different).
- 1) Show that with random seeds the series does not repeat.
- 2) Using the chi-square test, show that the series is uniform.
- 3) Perform the two previous tests on the "VBA" series.
- 4) Conclusions

_Draw conclusions, for example, compute the mean and standard deviation of both samples and carry out an analysis of variance to determine whether the means are equal or not._

Data for building the generator:
- m = 2^24
- a = 1140671485
- b = 12820163
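For reference, the recurrence to be implemented is the mixed linear congruential generator $X_{n+1} = (a \cdot X_n + b) \bmod m$, with each value normalized to $[0, 1)$ by dividing by $m$. A minimal sketch follows (the seed value is arbitrary; the full program below generates whole matrices of trials):
###Code
# Minimal linear congruential generator sketch, using the same a, b, m as the exercise
def lcg(seed, n, a=1140671485, b=12820163, m=2**24):
    values = []
    x = seed
    for _ in range(n):
        x = (a * x + b) % m          # mixed LCG recurrence
        values.append(x / float(m))  # normalize to [0, 1)
    return values

serie_demo = lcg(seed=12345, n=60)   # the same seed always yields the same 60-number series
###Output
_____no_output_____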
###Code
import numpy as np
from scipy.stats import chi2
from random import randint
import matplotlib.pyplot as plt
from scipy.stats import norm
np.set_printoptions(formatter={'float': lambda x: "{0:0.20f}".format(x)})
a = 1140671485
b = 12820163
m = 2**24
def semillar(X,tantos):
semillas = np.random.rand(X, tantos)
semillas.dtype = np.float64
r = np.zeros((X, tantos))
r.dtype = np.float64
for j in range(0,tantos):
oldSeed = np.random.randint(0,m)
for i in range(0,X):
newSeed = (a*oldSeed+b) % m
oldSeed = newSeed
semillas[i,j] = newSeed
r[i,j] = semillas[i,j] / m
return r
def agrupar(N,Q):
g = np.zeros((N,Q.shape[1]))
incremento = 1.0/np.float64(N)
for i in range(0,ensayos):
for j in range(0,serie):
aux = 0
for k in range(0,N):
aux += incremento
if Q[j,i] <= aux and Q[j,i] > (aux-incremento):
g[k,i] += 1
return g
def chiCuadrado(r):
chi = np.zeros((divIn,r.shape[1]))
FE = (serie/np.float64(divIn))
for i in range(0,r.shape[1]):
for j in range(0,divIn):
chi[j,i] = ((FE-r[j,i])**2)/FE
return chi.sum(0)
###Output
_____no_output_____
###Markdown
The program
###Code
serie = 60
ensayos = 5000
resultados = semillar(serie,ensayos)
# divIn = np.int(np.sqrt(serie).round())
divIn = 10
grupos = agrupar(divIn,resultados)
resultados.shape
###Output
_____no_output_____
###Markdown
Tests: Mean
###Code
av = resultados.mean(0).mean()
print 'Mean:', av
print 'Error:', (0.5-av)
###Output
Mean: 0.500023711415
Error: -2.37114151319e-05
###Markdown
Evaluate Variance and Standard Deviation
###Code
print 'Mean variance:', resultados.var(0).mean()
print 'Mean standard deviation:', resultados.std(0).mean()
###Output
Mean variance: 0.0819381533413
Mean standard deviation: 0.285754606347
###Markdown
Evaluate Chi-square

The chi-square test, rather than measuring the point-by-point difference between the sample and the true distribution, checks the deviation from the expected value:

$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$$

where $n$ is the number of class intervals, $O_i$ is the observed count in class $i$, and $E_i$ is the expected count in class $i$. For a uniform distribution with equally spaced classes, the expected count in each class is

$$E_i = \frac{N}{n}$$

where $N$ is the total number of observations. It can be shown that the sample chi-square statistic approximately follows a chi-square distribution with $n-1$ degrees of freedom.
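As an optional cross-check (a sketch, not part of the original assignment): `scipy.stats.chisquare` computes the same statistic for a single trial, assuming `grupos` from above holds the observed counts per bin; its default expected frequencies are uniform, i.e. `serie/divIn` per bin.
###Code
# Cross-check of the manual chi-square computation for the first trial (column) of grupos
from scipy.stats import chisquare
observed = grupos[:, 0]                   # counts in each of the divIn bins, first trial
chi2_stat, p_value = chisquare(observed)  # expected = serie/divIn in every bin by default
###Output
_____no_output_____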
###Code
p=0.95
gradosDeLibertad = divIn-1
print 'Observed Chi2 | Inverse Chi2'
print ' {0:0.05f} | {1:0.09f} '.format(chiCuadrado(grupos).mean(),chi2.ppf(p, gradosDeLibertad))
print '\nConfidence (%):', p
print 'Degrees of freedom:', gradosDeLibertad
###Output
Observed Chi2 | Inverse Chi2
8.94100 | 16.918977605
Confidence (%): 0.95
Degrees of freedom: 9
###Markdown
**Since the calculated χ² is smaller than the tabulated value χ²(0.95, 9), the null hypothesis that there is no difference between the sample distribution and the uniform distribution is accepted.**

_The chi-square statistic quantifies how much the observed distribution of counts deviates from the hypothesized distribution._

The inverse chi-square gives, for a chi-square distribution with $k$ degrees of freedom, the value of $x$ that leaves a probability $p$ to its left.
###Code
x = np.linspace(0, serie, serie)
obtenido = resultados[:,np.random.randint(0,ensayos)]*serie
fig,ax = plt.subplots(1,1)
obtenido.sort()
linestyles = ['--', '-.']
deg_of_freedom = divIn-1
comparar = [obtenido,x]
for comp, ls in zip(comparar, linestyles):
ax.plot(comp, chi2.pdf(comp, deg_of_freedom), linestyle=ls, label=r'$df=%i$' % deg_of_freedom)
plt.xlim(0, serie)
plt.ylim(0, 0.15)
plt.axvline(x=chi2.ppf(p, gradosDeLibertad),linestyle='-.',color='orange')
plt.xlabel('$\chi^2$')
plt.ylabel(r'$f(\chi^2)$')
plt.title(r'$\chi^2\ \mathrm{Distribution}$')
plt.legend()
plt.show()
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
mediaDeGrupos = grupos[:,:].mean(axis=1)
plt.hist(resultados[:,np.random.randint(0,ensayos)])
mm = serie*ensayos
plt.plot(np.repeat(6,serie), linewidth=2)
plt.xlabel('Cantidad')
plt.ylabel('Grupos')
plt.title('Histograma')
plt.axis([0, 1, 0, 12])
plt.grid(True)
plt.show()
plt.plot(mediaDeGrupos,'ro')
plt.plot(np.repeat(6,divIn), linewidth=2, color='red')
plt.show()
###Output
_____no_output_____
###Markdown
EXPORT
###Code
import pandas as pd
df = pd.DataFrame(resultados)
df.to_csv("exportarPython.csv",sep=';',header=None)
###Output
_____no_output_____ |
interview-prep/move-zeroes/python/Move Zeroes.ipynb | ###Markdown
Given an array of integers, move all the zeroes to the end while preserving the order of the other elements with no extra data structures.First thoughts:* What if we could use extra data structures? We could go through the array, inserting elements from the first array if they're non-zero. When we reach the end of the first array, we just fill the second array with zeroes until it's full.* The algorithm will probably be a little more involved if we can't use extra data structures.Approaches:* Can we somehow do it in a single pass? What if we keep track of two indices while iterating. We traverse the array until we reach a zero. At this point, we make a note of where this first zero is. Then, we use a second index to traverse the array until we find a non-zero element. Then, we fill where the first index is pointing with the contents of the second index, and change the second index to a 0. Then, we traverse the array with the first index again and repeat the process until we've covered the whole array. Let's try that.
###Code
def move_zeroes(arr):
index1 = index2 = 0
# traverse the array until we reach a zero element
while index1 < len(arr):
if arr[index1] == 0:
# traverse the array with the second index until a non-zero
# element is reached
index2 = index1 + 1
while index2 < len(arr) and arr[index2] == 0:  # guard against running off the end of the array
index2 += 1
# if index2 reaches the end of the array, we're done
if index2 == len(arr):
return arr
# swap the contents
arr[index1] = arr[index2]
arr[index2] = 0
index1 += 1
return arr
###Output
###Markdown
Since at most we'll traverse the array twice (once with the first index, once with the second), the running time is O(2n), or just O(n), and the space complexity is O(1).

Tests
###Code
# all zeroes
print move_zeroes([0,0,0,0,0,0,0,0])
# zeroes already at right
print move_zeroes([7,32,5,1,9,0,0,0])
# zeroes interspersed
print move_zeroes([0,1,0,2,0,3,0,4,0,5,0,6,0,7,0,8,0,9,0,10])
###Output
[0, 0, 0, 0, 0, 0, 0, 0]
[7, 32, 5, 1, 9, 0, 0, 0]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
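For comparison, the extra-data-structure approach sketched in the first bullet of "First thoughts" above might look like this (still O(n) time, but O(n) extra space):
###Code
def move_zeroes_extra_space(arr):
    # keep the non-zero elements in order, then pad with zeroes to the original length
    result = [x for x in arr if x != 0]
    result.extend([0] * (len(arr) - len(result)))
    return result

# e.g. move_zeroes_extra_space([0, 1, 0, 2, 0, 3]) returns [1, 2, 3, 0, 0, 0]
###Output
_____no_output_____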
|
periodic_signals/spectrum.ipynb | ###Markdown
Periodic Signals*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* SpectrumPeriodic signals are an import class of signals. Many practical signals can be approximated reasonably well as periodic functions. The latter holds often when considering only a limited time-interval. Examples for periodic signals are a superposition of harmonic signals, signals captured from vibrating structures or rotating machinery, as well as speech signals or signals from musical instruments. The spectrum of a periodic signal exhibits specific properties which are derived in the following. RepresentationA [periodic signal](https://en.wikipedia.org/wiki/Periodic_function) $x(t)$ is a signal that repeats its values in regular periods. It has to fulfill\begin{equation}x(t) = x(t + n \cdot T_\text{p})\end{equation}for $n \in \mathbb{Z}$ where its period is denoted by $T_\text{p} > 0$. A signal is termed *aperiodic* if is not periodic. One period $x_0(t)$ of a periodic signal is given as \begin{equation}x_0(t) = \begin{cases}x(t) & \text{for } 0 \leq t < T_\text{p} \\0 & \text{otherwise}\end{cases}\end{equation}A periodic signal can be represented by [periodic summation](https://en.wikipedia.org/wiki/Periodic_summation) of one period $x_0(t)$\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t - \mu T_\text{p})\end{equation}which can be rewritten as convolution\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t) * \delta(t - \mu T_\text{p}) = x_0(t) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p})\end{equation}using the sifting property of the Dirac impulse. It can be concluded that a periodic signal can be represented by one period $x_0(t)$ of the signal convolved with a series of Dirac impulses. **Example**The cosine signal $x(t) = \cos (\omega_0 t)$ has a periodicity of $T_\text{p} = \frac{2 \pi}{\omega_0}$. One period is given as\begin{equation}x_0(t) = \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right)\end{equation}Introduced into above representation of a periodic signal yields\begin{align}x(t) &= \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p}) \\&= \cos (\omega_0 t) \sum_{\mu = - \infty}^{\infty} \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} - \mu T_\text{p} \right) \\&= \cos (\omega_0 t)\end{align}since the sum over the shifted rectangular signals is equal to one. The Dirac CombThe sum of shifted Dirac impulses, as used above to represent a periodic signal, is known as [*Dirac comb*](https://en.wikipedia.org/wiki/Dirac_comb). The Dirac comb is defined as\begin{equation}{\bot \!\! \bot \!\! \bot}(t) = \sum_{\mu = - \infty}^{\infty} \delta(t - \mu)\end{equation}It is used for the representation of periodic signals and for the modeling of ideal sampling. In order to compute the spectrum of a periodic signal, the Fourier transform of the Dirac comb $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ is derived in the following.Fourier transformation of the left- and right-hand side of above definition yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! 
\bot}(t) \} = \sum_{\mu = - \infty}^{\infty} e^{-j \mu \omega}\end{equation}The exponential function $e^{-j \mu \omega}$ for $\mu \in \mathbb{Z}$ is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb is also periodic with a period of $2 \pi$. Convolving a [rectangular signal](../notebooks/continuous_signals/standard_signals.ipynbRectangular-Signal) with the Dirac comb results in\begin{equation}{\bot \!\! \bot \!\! \bot}(t) * \text{rect}(t) = 1\end{equation}Fourier transform of the left- and right-hand side yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} \cdot \text{sinc}\left(\frac{\omega}{2}\right) = 2 \pi \delta(\omega)\end{equation}For $\text{sinc}( \frac{\omega}{2} ) \neq 0$, which is equal to $\omega \neq 2 n \cdot \pi$ with $n \in \mathbb{Z} \setminus \{0\}$, this can be rearranged as\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = 2 \pi \, \delta(\omega) \cdot \frac{1}{\text{sinc}\left(\frac{\omega}{2}\right)} = 2 \pi \, \delta(\omega)\end{equation}Note that the [multiplication property](../continuous_signals/standard_signals.ipynbDirac-Impulse) of the Dirac impulse and $\text{sinc}(0) = 1$ has been used to derive the last equality. The Fourier transform is now known for the interval $-2 \pi < \omega < 2 \pi$. It has already been concluded that the Fourier transform is periodic with a period of $2 \pi$. Hence, the Fourier transformation of the Dirac comb can be derived by periodic continuation as\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \delta(\omega - 2 \pi \mu) = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \left( \frac{\omega}{2 \pi} - \mu \right)\end{equation}The last equality follows from the scaling property of the Dirac impulse. The Fourier transform can now be rewritten in terms of the Dirac comb\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{2 \pi} \right)\end{equation}The Fourier transform of a Dirac comb with unit distance between the Dirac impulses is a Dirac comb with a distance of $2 \pi$ between the Dirac impulses which are weighted by $2 \pi$. **Example**The following example computes the truncated series\begin{equation}X(j \omega) = \sum_{\mu = -M}^{M} e^{-j \mu \omega}\end{equation}as approximation of the Fourier transform $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ of the Dirac comb. For this purpose the sum is defined and plotted in `SymPy`.
###Code
%matplotlib inline
import sympy as sym
sym.init_printing()
mu = sym.symbols('mu', integer=True)
w = sym.symbols('omega', real=True)
M = 20
X = sym.Sum(sym.exp(-sym.I*mu*w), (mu, -M, M)).doit()
sym.plot(X, xlabel='$\omega$', ylabel='$X(j \omega)$', adaptive=False, nb_of_points=1000);
###Output
_____no_output_____
###Markdown
**Exercise*** Change the summation limit $M$. How does the approximation change? Note: Increasing $M$ above a certain threshold may lead to numerical instabilities. Fourier-TransformIn order to derive the Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a periodic signal $x(t)$ with period $T_\text{p}$, the signal is represented by one period $x_0(t)$ and the Dirac comb. Rewriting above representation of a periodic signal in terms of a sum of Dirac impulses by noting that $\delta(t - \mu T_\text{p}) = \frac{1}{T_\text{p}} \delta(\frac{t}{T_\text{p}} - \mu)$ yields\begin{equation}x(t) = x_0(t) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}The Fourier transform is derived by application of the [convolution theorem](../fourier_transform/theorems.ipynbConvolution-Theorem)\begin{align}X(j \omega) &= X_0(j \omega) \cdot {\bot \!\! \bot \!\! \bot} \left( \frac{\omega T_\text{p}}{2 \pi} \right) \\&= \frac{2 \pi}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \cdot\delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{align}where $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ denotes the Fourier transform of one period of the periodic signal. From the last equality it can be concluded that the Fourier transform of a periodic signal consists of a series of weighted Dirac impulses. These Dirac impulse are equally distributed on the frequency axis $\omega$ at an interval of $\frac{2 \pi}{T_\text{p}}$. The weights of the Dirac impulse are given by the values of the spectrum $X_0(j \omega)$ of one period at the locations $\omega = \mu \frac{2 \pi}{T_\text{p}}$. Such a spectrum is termed *line spectrum*. Parseval's Theorem[Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) relates the energy of a signal in the time domain to its spectrum. The energy of a periodic signal is in general not defined. This is due to the fact that its energy is unlimited, if the energy of one period is non-zero. As alternative, the average power of a periodic signal $x(t)$ is used. It is defined as\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt\end{equation}Introducing the Fourier transform of a periodic signal into [Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) yields\begin{equation}\frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt = \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} \left| X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \right|^2\end{equation}The average power of a periodic signal can be calculated in the time-domain by integrating over the squared magnitude of one period or in the frequency domain by summing up the squared magnitude weights of the coefficients of the Dirac impulses of its Fourier transform. Fourier Transform of the Pulse TrainThe [pulse train](https://en.wikipedia.org/wiki/Pulse_wave) is commonly used for power control using [pulse-width modulation (PWM)](https://en.wikipedia.org/wiki/Pulse-width_modulation). It is constructed from a periodic summation of a rectangular signal $x_0(t) = \text{rect} (\frac{t}{T} - \frac{T}{2})$\begin{equation}x(t) = \text{rect} \left( \frac{t}{T} - \frac{T}{2} \right) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}where $0 < T < T_\text{p}$ denotes the width of the pulse and $T_\text{p}$ its periodicity. 
Its usage for power control becomes evident when calculating the average power of the pulse train\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} | x(t) |^2 dt = \frac{T}{T_\text{p}}\end{equation}The Fourier transform of one period $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ is derived by applying the scaling and shift theorem of the Fourier transform to the [Fourier transform of the retangular signal](../fourier_transform/definition.ipynbTransformation-of-the-Rectangular-Signal)\begin{equation}X_0(j \omega) = e^{-j \omega \frac{T}{2}} \cdot T \, \text{sinc} \left( \frac{\omega T}{2} \right)\end{equation}from which the spectrum of the pulse train follows by application of above formula for the Fourier transform of a periodic signal\begin{equation}X(j \omega) = 2 \pi \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} e^{-j \mu \pi \frac{T}{T_\text{p}}} \cdot T \, \text{sinc} \left( \mu \pi \frac{T}{T_\text{p}} \right) \cdot \delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{equation}The weights of the Dirac impulses are defined in `SymPy` for fixed values $T$ and $T_\text{p}$
###Code
mu = sym.symbols('mu', integer=True)
T = 2
Tp = 5
X_mu = sym.exp(-sym.I * mu * sym.pi * T/Tp) * T * sym.sinc(mu * sym.pi * T/Tp)
X_mu
###Output
_____no_output_____
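Before plotting, a quick numerical sanity check (a sketch, not part of the original notebook): summing the squared magnitudes of the Fourier series coefficients $c_\mu = \frac{1}{T_\text{p}} X_0 \left( j \mu \frac{2 \pi}{T_\text{p}} \right)$ over a large range of $\mu$ should reproduce the average power $P = \frac{T}{T_\text{p}} = 0.4$ of the pulse train.
###Code
import numpy as np
# Numerical check of the average power via the line spectrum (T = 2, Tp = 5 as above)
T_num, Tp_num = 2.0, 5.0
mu_num = np.arange(-5000, 5001)
c_mu = (T_num / Tp_num) * np.sinc(mu_num * T_num / Tp_num)  # |c_mu|; np.sinc(x) = sin(pi*x)/(pi*x)
P_approx = np.sum(np.abs(c_mu)**2)                          # converges to T/Tp = 0.4
###Output
_____no_output_____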
###Markdown
The weights of the Dirac impulses are plotted with [`matplotlib`](http://matplotlib.org/index.html), a Python plotting library. The library expects the values of the function to be plotted at a series of sampling points. In order to create these, the function [`sympy.lambdify`](http://docs.sympy.org/latest/modules/utilities/lambdify.html?highlight=lambdifysympy.utilities.lambdify) is used which numerically evaluates a symbolic function at given sampling points. The resulting plot illustrates the positions and weights of the Dirac impulses.
###Code
import numpy as np
import matplotlib.pyplot as plt
Xn = sym.lambdify(mu, sym.Abs(X_mu), 'numpy')
n = np.arange(-15, 15)
plt.stem(n*2*np.pi/Tp, Xn(n))
plt.xlabel('$\omega$')
plt.ylabel('$|X(j \omega)|$');
###Output
_____no_output_____
###Markdown
Periodic Signals*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* SpectrumPeriodic signals are an import class of signals. Many practical signals can be approximated reasonably well as periodic functions. This holds especially when considering only a limited time-interval. Examples of periodic signals are superpositions of harmonic signals, signals captured from vibrating structures or rotating machinery, as well as speech signals or signals from musical instruments. The spectrum of a periodic signal exhibits specific properties which are discussed in the following. RepresentationA [periodic signal](https://en.wikipedia.org/wiki/Periodic_function) $x(t)$ is a signal that repeats its values in regular periods. It has to fulfill\begin{equation}x(t) = x(t + n \cdot T_\text{p})\end{equation}for $n \in \mathbb{Z}$ where its period is denoted by $T_\text{p} > 0$. A signal is termed *aperiodic* if is not periodic. One period $x_0(t)$ of a periodic signal is given as \begin{equation}x_0(t) = \begin{cases}x(t) & \text{for } 0 \leq t < T_\text{p} \\0 & \text{otherwise}\end{cases}\end{equation}A periodic signal can be represented by [periodic summation](https://en.wikipedia.org/wiki/Periodic_summation) of shifted copies of one period $x_0(t)$\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t - \mu T_\text{p})\end{equation}which can be rewritten as convolution\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t) * \delta(t - \mu T_\text{p}) = x_0(t) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p})\end{equation}using the sifting property of the Dirac impulse. It can be concluded that a periodic signal can be represented by one period $x_0(t)$ of the signal convolved with a series of Dirac impulses. **Example**The cosine signal $x(t) = \cos (\omega_0 t)$ has a periodicity of $T_\text{p} = \frac{2 \pi}{\omega_0}$. One period is given as\begin{equation}x_0(t) = \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right)\end{equation}Introduced into above representation of a periodic signal yields\begin{align}x(t) &= \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p}) \\&= \cos (\omega_0 t) \sum_{\mu = - \infty}^{\infty} \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} - \mu T_\text{p} \right) \\&= \cos (\omega_0 t)\end{align}since the sum over the shifted rectangular signals is equal to one. The Dirac CombThe sum of shifted Dirac impulses, as used above to represent a periodic signal, is known as [*Dirac comb*](https://en.wikipedia.org/wiki/Dirac_comb). The Dirac comb is defined as\begin{equation}{\bot \!\! \bot \!\! \bot}(t) = \sum_{\mu = - \infty}^{\infty} \delta(t - \mu)\end{equation}It is used for the representation of periodic signals and for the modeling of ideal sampling. In order to compute the spectrum of a periodic signal, the Fourier transform of the Dirac comb $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ is derived in the following. Fourier transformation of the left- and right-hand side of above definition yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! 
\bot}(t) \} = \sum_{\mu = - \infty}^{\infty} e^{-j \mu \omega}\end{equation}The exponential function $e^{-j \mu \omega}$ for $\mu \in \mathbb{Z}$ is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb is also periodic with a period of $2 \pi$. In order to gain further insight, the following convolution of a [rectangular signal](../notebooks/continuous_signals/standard_signals.ipynbRectangular-Signal) with a Dirac comb is considered\begin{equation}{\bot \!\! \bot \!\! \bot}(t) * \text{rect}(t) = 1\end{equation}The right hand side follows from the fact that the rectangular signals equals one for $-\frac{1}{2} < t < \frac{1}{2}$ which is then periodically summed up with a period of one. Fourier transform of the left- and right-hand side yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} \cdot \text{sinc}\left(\frac{\omega}{2}\right) = 2 \pi \delta(\omega)\end{equation}For $\text{sinc}( \frac{\omega}{2} ) \neq 0$, which is equal to $\omega \neq 2 n \cdot \pi$ with $n \in \mathbb{Z} \setminus \{0\}$, this can be rearranged to\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = 2 \pi \, \delta(\omega) \cdot \frac{1}{\text{sinc}\left(\frac{\omega}{2}\right)} = 2 \pi \, \delta(\omega)\end{equation}Note that the [multiplication property](../continuous_signals/standard_signals.ipynbDirac-Impulse) of the Dirac impulse and $\text{sinc}(0) = 1$ has been used to derive the last equality. The Fourier transform is now known in the interval $-2 \pi < \omega < 2 \pi$. It has already been concluded that the Fourier transform is periodic with a period of $2 \pi$. Hence, the Fourier transformation of the Dirac comb can be derived by periodic continuation\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \delta(\omega - 2 \pi \mu) = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \left( \frac{\omega}{2 \pi} - \mu \right)\end{equation}The last equality follows from the scaling property of the Dirac impulse. Using the definition of the Dirac comb, the Fourier transform can now be rewritten in terms of the Dirac comb\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{2 \pi} \right)\end{equation}The Fourier transform of a Dirac comb with unit distance between the Dirac impulses is a Dirac comb with a distance of $2 \pi$ between the Dirac impulses which are weighted by $2 \pi$. **Example**The following example computes the truncated series\begin{equation}X(j \omega) = \sum_{\mu = -M}^{M} e^{-j \mu \omega}\end{equation}as approximation of the Fourier transform $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ of the Dirac comb. For this purpose the sum is defined and plotted in `SymPy`.
###Code
import sympy as sym
sym.init_printing()
mu = sym.symbols('mu', integer=True)
w = sym.symbols('omega', real=True)
M = 20
X = sym.Sum(sym.exp(-sym.I*mu*w), (mu, -M, M)).doit()
sym.plot(X, xlabel='$\omega$', ylabel='$X(j \omega)$',
adaptive=False, nb_of_points=1000);
###Output
_____no_output_____
###Markdown
**Exercise*** Change the summation limit $M$. How does the approximation change? Note: Increasing $M$ above a certain threshold may lead to numerical instabilities. Fourier-TransformIn order to derive the Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a periodic signal $x(t)$ with period $T_\text{p}$, the signal is represented by one period $x_0(t)$ and the Dirac comb. Rewriting above representation of a periodic signal in terms of a sum of Dirac impulses by noting that $\delta(t - \mu T_\text{p}) = \frac{1}{T_\text{p}} \delta(\frac{t}{T_\text{p}} - \mu)$ yields\begin{equation}x(t) = x_0(t) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}The Fourier transform is derived by application of the [convolution theorem](../fourier_transform/theorems.ipynbConvolution-Theorem)\begin{align}X(j \omega) &= X_0(j \omega) \cdot {\bot \!\! \bot \!\! \bot} \left( \frac{\omega T_\text{p}}{2 \pi} \right) \\&= \frac{2 \pi}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \cdot\delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{align}where $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ denotes the Fourier transform of one period of the periodic signal. From the last equality it can be concluded that the Fourier transform of a periodic signal consists of a series of weighted Dirac impulses. These Dirac impulse are equally distributed on the frequency axis $\omega$ at an interval of $\frac{2 \pi}{T_\text{p}}$. The weights of the Dirac impulse are given by the values of the spectrum $X_0(j \omega)$ of one period at the locations $\omega = \mu \frac{2 \pi}{T_\text{p}}$. Such a spectrum is termed *line spectrum*. Parseval's Theorem[Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) relates the energy of a signal in the time domain to its spectrum. The energy of a periodic signal is in general not defined. This is due to the fact that its energy is unlimited, if the energy of one period is non-zero. As alternative, the average power of a periodic signal $x(t)$ is used. It is defined as\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt\end{equation}Introducing the Fourier transform of a periodic signal into [Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) yields\begin{equation}\frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt = \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} \left| X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \right|^2\end{equation}The average power of a periodic signal can be calculated in the time-domain by integrating over the squared magnitude of one period or in the frequency domain by summing up the squared magnitude weights of the coefficients of the Dirac impulses of its Fourier transform. Fourier Transform of the Pulse TrainThe [pulse train](https://en.wikipedia.org/wiki/Pulse_wave) is commonly used for power control using [pulse-width modulation (PWM)](https://en.wikipedia.org/wiki/Pulse-width_modulation). It is constructed from a periodic summation of a rectangular signal $x_0(t) = \text{rect} (\frac{t}{T} - \frac{T}{2})$\begin{equation}x(t) = \text{rect} \left( \frac{t}{T} - \frac{T}{2} \right) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}where $0 < T < T_\text{p}$ denotes the width of the pulse and $T_\text{p}$ its periodicity. 
Its usage for power control becomes evident when calculating the average power of the pulse train\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} | x(t) |^2 dt = \frac{T}{T_\text{p}}\end{equation}The Fourier transform of one period $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ is derived by applying the scaling and shift theorem of the Fourier transform to the [Fourier transform of the rectangular signal](../fourier_transform/definition.ipynbTransformation-of-the-Rectangular-Signal)\begin{equation}X_0(j \omega) = e^{-j \omega \frac{T}{2}} \cdot T \, \text{sinc} \left( \frac{\omega T}{2} \right)\end{equation}from which the spectrum of the pulse train follows by application of the above formula for the Fourier transform of a periodic signal\begin{equation}X(j \omega) = 2 \pi \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} e^{-j \mu \pi \frac{T}{T_\text{p}}} \cdot T \, \text{sinc} \left( \mu \pi \frac{T}{T_\text{p}} \right) \cdot \delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{equation} **Example**The pulse train and its spectrum are illustrated by the subsequent computational example. First the pulse train is defined and plotted in `SymPy`
###Code
mu = sym.symbols('mu', integer=True)
t = sym.symbols('t', real=True)
T = 2
Tp = 5
def pulse_train(T, Tp):
n = sym.symbols('n', integer=True)
x0 = sym.Piecewise((0, t < 0), (1, t < T), (0, True))
return sym.summation(x0.subs(t, t+n*Tp), (n, -10, 10))
sym.plot(pulse_train(T, Tp), (t, -5, 20), xlabel=r'$t$', ylabel=r'$x(t)$');
###Output
_____no_output_____
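###Markdown
As a quick check, the average power $P = \frac{T}{T_\text{p}}$ derived above can be verified symbolically by integrating one period of the pulse train (a minimal sketch reusing the symbols defined in the previous cell; since $x_0(t)$ only takes the values $0$ and $1$, $|x_0(t)|^2 = x_0(t)$). For $T=2$ and $T_\text{p}=5$ the result should be $\frac{2}{5}$.
###Code
# one period of the pulse train; |x0|**2 == x0 because x0 only takes the values 0 and 1
x0 = sym.Piecewise((0, t < 0), (1, t < T), (0, True))
P = sym.integrate(x0, (t, 0, Tp)) / Tp
P
###Output
_____no_output_____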
###Markdown
The weights of the Dirac impulses are defined for fixed values $T$ and $T_\text{p}$
###Code
X_mu = sym.exp(-sym.I * mu * sym.pi * T/Tp) * T * sym.sinc(mu * sym.pi * T/Tp)
X_mu
###Output
_____no_output_____
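###Markdown
As a small sanity check (reusing `X_mu` from the previous cell), the weight at $\mu = 0$ should reduce to $X_0(0) = T$, i.e. $2$ for the values chosen here.
###Code
X_mu.subs(mu, 0)
###Output
_____no_output_____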
###Markdown
The weights of the Dirac impulses are plotted with [`matplotlib`](http://matplotlib.org/index.html), a Python plotting library. The library expects the values of the function to be plotted at a series of sampling points. In order to create these, the function [`sympy.lambdify`](http://docs.sympy.org/latest/modules/utilities/lambdify.html?highlight=lambdifysympy.utilities.lambdify) is used which numerically evaluates a symbolic function at given sampling points. The resulting plot illustrates the positions and weights of the Dirac impulses.
###Code
import numpy as np
import matplotlib.pyplot as plt
Xn = sym.lambdify(mu, sym.Abs(X_mu), 'numpy')
n = np.arange(-15, 15)
plt.stem(n*2*np.pi/Tp, Xn(n))
plt.xlabel(r'$\omega$')
plt.ylabel(r'$|X(j \omega)|$');
###Output
_____no_output_____
###Markdown
Periodic Signals*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* SpectrumPeriodic signals are an import class of signals. Many practical signals can be approximated reasonably well as periodic functions. This holds especially when considering only a limited time-interval. Examples of periodic signals are superpositions of harmonic signals, signals captured from vibrating structures or rotating machinery, as well as speech signals or signals from musical instruments. The spectrum of a periodic signal exhibits specific properties which are discussed in the following. RepresentationA [periodic signal](https://en.wikipedia.org/wiki/Periodic_function) $x(t)$ is a signal that repeats its values in regular periods. It has to fulfill\begin{equation}x(t) = x(t + n \cdot T_\text{p})\end{equation}for $n \in \mathbb{Z}$ where its period is denoted by $T_\text{p} > 0$. A signal is termed *aperiodic* if is not periodic. One period $x_0(t)$ of a periodic signal is given as \begin{equation}x_0(t) = \begin{cases}x(t) & \text{for } 0 \leq t < T_\text{p} \\0 & \text{otherwise}\end{cases}\end{equation}A periodic signal can be represented by [periodic summation](https://en.wikipedia.org/wiki/Periodic_summation) of shifted copies of one period $x_0(t)$\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t - \mu T_\text{p})\end{equation}which can be rewritten as convolution\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t) * \delta(t - \mu T_\text{p}) = x_0(t) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p})\end{equation}using the sifting property of the Dirac impulse. It can be concluded that a periodic signal can be represented by one period $x_0(t)$ of the signal convolved with a series of Dirac impulses. **Example**The cosine signal $x(t) = \cos (\omega_0 t)$ has a periodicity of $T_\text{p} = \frac{2 \pi}{\omega_0}$. One period is given as\begin{equation}x_0(t) = \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right)\end{equation}Introduced into above representation of a periodic signal yields\begin{align}x(t) &= \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p}) \\&= \cos (\omega_0 t) \sum_{\mu = - \infty}^{\infty} \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} - \mu T_\text{p} \right) \\&= \cos (\omega_0 t)\end{align}since the sum over the shifted rectangular signals is equal to one. The Dirac CombThe sum of shifted Dirac impulses, as used above to represent a periodic signal, is known as [*Dirac comb*](https://en.wikipedia.org/wiki/Dirac_comb). The Dirac comb is defined as\begin{equation}{\bot \!\! \bot \!\! \bot}(t) = \sum_{\mu = - \infty}^{\infty} \delta(t - \mu)\end{equation}It is used for the representation of periodic signals and for the modeling of ideal sampling. In order to compute the spectrum of a periodic signal, the Fourier transform of the Dirac comb $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ is derived in the following. Fourier transformation of the left- and right-hand side of above definition yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! 
\bot}(t) \} = \sum_{\mu = - \infty}^{\infty} e^{-j \mu \omega}\end{equation}The exponential function $e^{-j \mu \omega}$ for $\mu \in \mathbb{Z}$ is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb is also periodic with a period of $2 \pi$. In order to gain further insight, the following convolution of a [rectangular signal](../notebooks/continuous_signals/standard_signals.ipynbRectangular-Signal) with a Dirac comb is considered\begin{equation}{\bot \!\! \bot \!\! \bot}(t) * \text{rect}(t) = 1\end{equation}The right hand side follows from the fact that the rectangular signal equals one for $-\frac{1}{2} < t < \frac{1}{2}$ which is then periodically summed up with a period of one. Fourier transform of the left- and right-hand side yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} \cdot \text{sinc}\left(\frac{\omega}{2}\right) = 2 \pi \delta(\omega)\end{equation}For $\text{sinc}( \frac{\omega}{2} ) \neq 0$, which is equal to $\omega \neq 2 n \cdot \pi$ with $n \in \mathbb{Z} \setminus \{0\}$, this can be rearranged to\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = 2 \pi \, \delta(\omega) \cdot \frac{1}{\text{sinc}\left(\frac{\omega}{2}\right)} = 2 \pi \, \delta(\omega)\end{equation}Note that the [multiplication property](../continuous_signals/standard_signals.ipynbDirac-Impulse) of the Dirac impulse and $\text{sinc}(0) = 1$ has been used to derive the last equality. The Fourier transform is now known in the interval $-2 \pi < \omega < 2 \pi$. It has already been concluded that the Fourier transform is periodic with a period of $2 \pi$. Hence, the Fourier transformation of the Dirac comb can be derived by periodic continuation\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \delta(\omega - 2 \pi \mu) = \sum_{\mu = - \infty}^{\infty} \delta \left( \frac{\omega}{2 \pi} - \mu \right)\end{equation}The last equality follows from the scaling property of the Dirac impulse. Using the definition of the Dirac comb, the Fourier transform can now be rewritten in terms of the Dirac comb\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{2 \pi} \right)\end{equation}The Fourier transform of a Dirac comb with unit distance between the Dirac impulses is a Dirac comb with a distance of $2 \pi$ between the Dirac impulses which are weighted by $2 \pi$. **Example**The following example computes the truncated series\begin{equation}X(j \omega) = \sum_{\mu = -M}^{M} e^{-j \mu \omega}\end{equation}as approximation of the Fourier transform $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ of the Dirac comb. For this purpose the sum is defined and plotted in `SymPy`.
###Code
%matplotlib inline
import sympy as sym
sym.init_printing()
mu = sym.symbols('mu', integer=True)
w = sym.symbols('omega', real=True)
M = 20
X = sym.Sum(sym.exp(-sym.I*mu*w), (mu, -M, M)).doit()
sym.plot(X, xlabel=r'$\omega$', ylabel=r'$X(j \omega)$', adaptive=False, nb_of_points=1000);
###Output
_____no_output_____
###Markdown
**Exercise*** Change the summation limit $M$. How does the approximation change? Note: Increasing $M$ above a certain threshold may lead to numerical instabilities. Fourier-TransformIn order to derive the Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a periodic signal $x(t)$ with period $T_\text{p}$, the signal is represented by one period $x_0(t)$ and the Dirac comb. Rewriting above representation of a periodic signal in terms of a sum of Dirac impulses by noting that $\delta(t - \mu T_\text{p}) = \frac{1}{T_\text{p}} \delta(\frac{t}{T_\text{p}} - \mu)$ yields\begin{equation}x(t) = x_0(t) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}The Fourier transform is derived by application of the [convolution theorem](../fourier_transform/theorems.ipynbConvolution-Theorem)\begin{align}X(j \omega) &= X_0(j \omega) \cdot {\bot \!\! \bot \!\! \bot} \left( \frac{\omega T_\text{p}}{2 \pi} \right) \\&= \frac{2 \pi}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \cdot\delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{align}where $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ denotes the Fourier transform of one period of the periodic signal. From the last equality it can be concluded that the Fourier transform of a periodic signal consists of a series of weighted Dirac impulses. These Dirac impulse are equally distributed on the frequency axis $\omega$ at an interval of $\frac{2 \pi}{T_\text{p}}$. The weights of the Dirac impulse are given by the values of the spectrum $X_0(j \omega)$ of one period at the locations $\omega = \mu \frac{2 \pi}{T_\text{p}}$. Such a spectrum is termed *line spectrum*. Parseval's Theorem[Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) relates the energy of a signal in the time domain to its spectrum. The energy of a periodic signal is in general not defined. This is due to the fact that its energy is unlimited, if the energy of one period is non-zero. As alternative, the average power of a periodic signal $x(t)$ is used. It is defined as\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt\end{equation}Introducing the Fourier transform of a periodic signal into [Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) yields\begin{equation}\frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt = \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} \left| X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \right|^2\end{equation}The average power of a periodic signal can be calculated in the time-domain by integrating over the squared magnitude of one period or in the frequency domain by summing up the squared magnitude weights of the coefficients of the Dirac impulses of its Fourier transform. Fourier Transform of the Pulse TrainThe [pulse train](https://en.wikipedia.org/wiki/Pulse_wave) is commonly used for power control using [pulse-width modulation (PWM)](https://en.wikipedia.org/wiki/Pulse-width_modulation). It is constructed from a periodic summation of a rectangular signal $x_0(t) = \text{rect} (\frac{t}{T} - \frac{T}{2})$\begin{equation}x(t) = \text{rect} \left( \frac{t}{T} - \frac{T}{2} \right) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}where $0 < T < T_\text{p}$ denotes the width of the pulse and $T_\text{p}$ its periodicity. 
Its usage for power control becomes evident when calculating the average power of the pulse train\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} | x(t) |^2 dt = \frac{T}{T_\text{p}}\end{equation}The Fourier transform of one period $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ is derived by applying the scaling and shift theorem of the Fourier transform to the [Fourier transform of the rectangular signal](../fourier_transform/definition.ipynbTransformation-of-the-Rectangular-Signal)\begin{equation}X_0(j \omega) = e^{-j \omega \frac{T}{2}} \cdot T \, \text{sinc} \left( \frac{\omega T}{2} \right)\end{equation}from which the spectrum of the pulse train follows by application of the above formula for the Fourier transform of a periodic signal\begin{equation}X(j \omega) = 2 \pi \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} e^{-j \mu \pi \frac{T}{T_\text{p}}} \cdot T \, \text{sinc} \left( \mu \pi \frac{T}{T_\text{p}} \right) \cdot \delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{equation} **Example**The pulse train and its spectrum are illustrated by the subsequent computational example. First the pulse train is defined and plotted in `SymPy`
###Code
mu = sym.symbols('mu', integer=True)
t = sym.symbols('t', real=True)
T = 2
Tp = 5
def pulse_train(T, Tp):
n = sym.symbols('n', integer=True)
x0 = sym.Piecewise((0, t < 0), (1, t < T), (0, True))
return sym.summation(x0.subs(t, t+n*Tp), (n, -10, 10))
import warnings
warnings.filterwarnings("ignore", module="sympy.plot")
sym.plot(pulse_train(T, Tp), (t, -5, 20), xlabel='$t$', ylabel='$x(t)$', adaptive=False);
###Output
_____no_output_____
###Markdown
The weights of the Dirac impulses are defined for fixed values $T$ and $T_\text{p}$
###Code
X_mu = sym.exp(-sym.I * mu * sym.pi * T/Tp) * T * sym.sinc(mu * sym.pi * T/Tp)
X_mu
###Output
_____no_output_____
###Markdown
The weights of the Dirac impulses are plotted with [`matplotlib`](http://matplotlib.org/index.html), a Python plotting library. The library expects the values of the function to be plotted at a series of sampling points. In order to create these, the function [`sympy.lambdify`](http://docs.sympy.org/latest/modules/utilities/lambdify.html?highlight=lambdifysympy.utilities.lambdify) is used which numerically evaluates a symbolic function at given sampling points. The resulting plot illustrates the positions and weights of the Dirac impulses.
###Code
import numpy as np
import matplotlib.pyplot as plt
Xn = sym.lambdify(mu, sym.Abs(X_mu), 'numpy')
n = np.arange(-15, 15)
plt.stem(n*2*np.pi/Tp, Xn(n))
plt.xlabel(r'$\omega$')
plt.ylabel(r'$|X(j \omega)|$');
###Output
_____no_output_____
###Markdown
Periodic Signals*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).* SpectrumPeriodic signals are an import class of signals. Many practical signals can be approximated reasonably well as periodic functions. The latter holds often when considering only a limited time-interval. Examples for periodic signals are a superposition of harmonic signals, signals captured from vibrating structures or rotating machinery, as well as speech signals or signals from musical instruments. The spectrum of a periodic signal exhibits specific properties which are derived in the following. RepresentationA [periodic signal](https://en.wikipedia.org/wiki/Periodic_function) $x(t)$ is a signal that repeats its values in regular periods. It has to fulfill\begin{equation}x(t) = x(t + n \cdot T_\text{p})\end{equation}for $n \in \mathbb{Z}$ where its period is denoted by $T_\text{p} > 0$. A signal is termed *aperiodic* if is not periodic. One period $x_0(t)$ of a periodic signal is given as \begin{equation}x_0(t) = \begin{cases}x(t) & \text{for } 0 \leq t < T_\text{p} \\0 & \text{otherwise}\end{cases}\end{equation}A periodic signal can be represented by [periodic summation](https://en.wikipedia.org/wiki/Periodic_summation) of one period $x_0(t)$\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t - \mu T_\text{p})\end{equation}which can be rewritten as convolution\begin{equation}x(t) = \sum_{\mu = - \infty}^{\infty} x_0(t) * \delta(t - \mu T_\text{p}) = x_0(t) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p})\end{equation}using the sifting property of the Dirac impulse. It can be concluded that a periodic signal can be represented by one period $x_0(t)$ of the signal convolved with a series of Dirac impulses. **Example**The cosine signal $x(t) = \cos (\omega_0 t)$ has a periodicity of $T_\text{p} = \frac{2 \pi}{\omega_0}$. One period is given as\begin{equation}x_0(t) = \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right)\end{equation}Introduced into above representation of a periodic signal yields\begin{align}x(t) &= \cos (\omega_0 t) \cdot \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} \right) * \sum_{\mu = - \infty}^{\infty} \delta(t - \mu T_\text{p}) \\&= \cos (\omega_0 t) \sum_{\mu = - \infty}^{\infty} \text{rect} \left( \frac{t}{T_\text{p}} - \frac{T_\text{p}}{2} - \mu T_\text{p} \right) \\&= \cos (\omega_0 t)\end{align}since the sum over the shifted rectangular signals is equal to one. The Dirac CombThe sum of shifted Dirac impulses, as used above to represent a periodic signal, is known as [*Dirac comb*](https://en.wikipedia.org/wiki/Dirac_comb). The Dirac comb is defined as\begin{equation}{\bot \!\! \bot \!\! \bot}(t) = \sum_{\mu = - \infty}^{\infty} \delta(t - \mu)\end{equation}It is used for the representation of periodic signals and for the modeling of ideal sampling. In order to compute the spectrum of a periodic signal, the Fourier transform of the Dirac comb $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ is derived in the following.Fourier transformation of the left- and right-hand side of above definition yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! 
\bot}(t) \} = \sum_{\mu = - \infty}^{\infty} e^{-j \mu \omega}\end{equation}The exponential function $e^{-j \mu \omega}$ for $\mu \in \mathbb{Z}$ is periodic with a period of $2 \pi$. Hence, the Fourier transform of the Dirac comb is also periodic with a period of $2 \pi$. Convolving a [rectangular signal](../notebooks/continuous_signals/standard_signals.ipynbRectangular-Signal) with the Dirac comb results in\begin{equation}{\bot \!\! \bot \!\! \bot}(t) * \text{rect}(t) = 1\end{equation}Fourier transform of the left- and right-hand side yields\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} \cdot \text{sinc}\left(\frac{\omega}{2}\right) = 2 \pi \delta(\omega)\end{equation}For $\text{sinc}( \frac{\omega}{2} ) \neq 0$, which is equal to $\omega \neq 2 n \cdot \pi$ with $n \in \mathbb{Z} \setminus \{0\}$, this can be rearranged as\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = 2 \pi \, \delta(\omega) \cdot \frac{1}{\text{sinc}\left(\frac{\omega}{2}\right)} = 2 \pi \, \delta(\omega)\end{equation}Note that the [multiplication property](../continuous_signals/standard_signals.ipynbDirac-Impulse) of the Dirac impulse and $\text{sinc}(0) = 1$ has been used to derive the last equality. The Fourier transform is now known for the interval $-2 \pi < \omega < 2 \pi$. It has already been concluded that the Fourier transform is periodic with a period of $2 \pi$. Hence, the Fourier transformation of the Dirac comb can be derived by periodic continuation as\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = \sum_{\mu = - \infty}^{\infty} 2 \pi \, \delta(\omega - 2 \pi \mu) = \sum_{\mu = - \infty}^{\infty} \delta \left( \frac{\omega}{2 \pi} - \mu \right)\end{equation}The last equality follows from the scaling property of the Dirac impulse. The Fourier transform can now be rewritten in terms of the Dirac comb\begin{equation}\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \} = {\bot \!\! \bot \!\! \bot} \left( \frac{\omega}{2 \pi} \right)\end{equation}The Fourier transform of a Dirac comb with unit distance between the Dirac impulses is a Dirac comb with a distance of $2 \pi$ between the Dirac impulses which are weighted by $2 \pi$. **Example**The following example computes the truncated series\begin{equation}X(j \omega) = \sum_{\mu = -M}^{M} e^{-j \mu \omega}\end{equation}as approximation of the Fourier transform $\mathcal{F} \{ {\bot \!\! \bot \!\! \bot}(t) \}$ of the Dirac comb. For this purpose the sum is defined and plotted in `SymPy`.
###Code
%matplotlib inline
import sympy as sym
sym.init_printing()
mu = sym.symbols('mu', integer=True)
w = sym.symbols('omega', real=True)
M = 20
X = sym.Sum(sym.exp(-sym.I*mu*w), (mu, -M, M)).doit()
sym.plot(X, xlabel=r'$\omega$', ylabel=r'$X(j \omega)$', adaptive=False, nb_of_points=1000);
###Output
_____no_output_____
###Markdown
**Exercise*** Change the summation limit $M$. How does the approximation change? Note: Increasing $M$ above a certain threshold may lead to numerical instabilities. Fourier-TransformIn order to derive the Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a periodic signal $x(t)$ with period $T_\text{p}$, the signal is represented by one period $x_0(t)$ and the Dirac comb. Rewriting above representation of a periodic signal in terms of a sum of Dirac impulses by noting that $\delta(t - \mu T_\text{p}) = \frac{1}{T_\text{p}} \delta(\frac{t}{T_\text{p}} - \mu)$ yields\begin{equation}x(t) = x_0(t) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}The Fourier transform is derived by application of the [convolution theorem](../fourier_transform/theorems.ipynbConvolution-Theorem)\begin{align}X(j \omega) &= X_0(j \omega) \cdot {\bot \!\! \bot \!\! \bot} \left( \frac{\omega T_\text{p}}{2 \pi} \right) \\&= \frac{2 \pi}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \cdot\delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{align}where $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ denotes the Fourier transform of one period of the periodic signal. From the last equality it can be concluded that the Fourier transform of a periodic signal consists of a series of weighted Dirac impulses. These Dirac impulse are equally distributed on the frequency axis $\omega$ at an interval of $\frac{2 \pi}{T_\text{p}}$. The weights of the Dirac impulse are given by the values of the spectrum $X_0(j \omega)$ of one period at the locations $\omega = \mu \frac{2 \pi}{T_\text{p}}$. Such a spectrum is termed *line spectrum*. Parseval's Theorem[Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) relates the energy of a signal in the time domain to its spectrum. The energy of a periodic signal is in general not defined. This is due to the fact that its energy is unlimited, if the energy of one period is non-zero. As alternative, the average power of a periodic signal $x(t)$ is used. It is defined as\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt\end{equation}Introducing the Fourier transform of a periodic signal into [Parseval's theorem](../fourier_transform/theorems.ipynbParseval%27s-Theorem) yields\begin{equation}\frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} |x(t)|^2 \; dt = \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} \left| X_0 \left( j \, \mu \frac{2 \pi}{T_\text{p}} \right) \right|^2\end{equation}The average power of a periodic signal can be calculated in the time-domain by integrating over the squared magnitude of one period or in the frequency domain by summing up the squared magnitude weights of the coefficients of the Dirac impulses of its Fourier transform. Fourier Transform of the Pulse TrainThe [pulse train](https://en.wikipedia.org/wiki/Pulse_wave) is commonly used for power control using [pulse-width modulation (PWM)](https://en.wikipedia.org/wiki/Pulse-width_modulation). It is constructed from a periodic summation of a rectangular signal $x_0(t) = \text{rect} (\frac{t}{T} - \frac{T}{2})$\begin{equation}x(t) = \text{rect} \left( \frac{t}{T} - \frac{T}{2} \right) * \frac{1}{T_\text{p}} {\bot \!\! \bot \!\! \bot} \left( \frac{t}{T_\text{p}} \right)\end{equation}where $0 < T < T_\text{p}$ denotes the width of the pulse and $T_\text{p}$ its periodicity. 
Its usage for power control becomes evident when calculating the average power of the pulse train\begin{equation}P = \frac{1}{T_\text{p}} \int_{0}^{T_\text{p}} | x(t) |^2 dt = \frac{T}{T_\text{p}}\end{equation}The Fourier transform of one period $X_0(j \omega) = \mathcal{F} \{ x_0(t) \}$ is derived by applying the scaling and shift theorem of the Fourier transform to the [Fourier transform of the rectangular signal](../fourier_transform/definition.ipynbTransformation-of-the-Rectangular-Signal)\begin{equation}X_0(j \omega) = e^{-j \omega \frac{T}{2}} \cdot T \, \text{sinc} \left( \frac{\omega T}{2} \right)\end{equation}from which the spectrum of the pulse train follows by application of the above formula for the Fourier transform of a periodic signal\begin{equation}X(j \omega) = 2 \pi \frac{1}{T_\text{p}} \sum_{\mu = - \infty}^{\infty} e^{-j \mu \pi \frac{T}{T_\text{p}}} \cdot T \, \text{sinc} \left( \mu \pi \frac{T}{T_\text{p}} \right) \cdot \delta \left( \omega - \mu \frac{2 \pi}{T_\text{p}} \right)\end{equation}The weights of the Dirac impulses are defined in `SymPy` for fixed values $T$ and $T_\text{p}$
###Code
mu = sym.symbols('mu', integer=True)
T = 2
Tp = 5
X_mu = sym.exp(-sym.I * mu * sym.pi * T/Tp) * T * sym.sinc(mu * sym.pi * T/Tp)
X_mu
###Output
_____no_output_____
###Markdown
The weights of the Dirac impulses are plotted with [`matplotlib`](http://matplotlib.org/index.html), a Python plotting library. The library expects the values of the function to be plotted at a series of sampling points. In order to create these, the function [`sympy.lambdify`](http://docs.sympy.org/latest/modules/utilities/lambdify.html?highlight=lambdifysympy.utilities.lambdify) is used which numerically evaluates a symbolic function at given sampling points. The resulting plot illustrates the positions and weights of the Dirac impulses.
###Code
import numpy as np
import matplotlib.pyplot as plt
Xn = sym.lambdify(mu, sym.Abs(X_mu), 'numpy')
n = np.arange(-15, 15)
plt.stem(n*2*np.pi/Tp, Xn(n))
plt.xlabel(r'$\omega$')
plt.ylabel(r'$|X(j \omega)|$');
###Output
_____no_output_____ |
notebooks/dev/.ipynb_checkpoints/n05_missing_data-checkpoint.ipynb | ###Markdown
Special care must be taken with missing data in this problem. Missing values must never be filled in the target variable, or the evaluation of the results would be corrupted. That risk is real here if things are done carelessly, because the target and the features come from the same series, only time-shifted. Filling forward first and then backward is the best way to preserve causality as much as possible. Some filtering of symbols with a lot of missing data can also help; otherwise the predictor may end up full of constant data. Filling missing data and dropping "bad" samples can be done at two or three levels: at the total-data level, at the training-time level, or at the base-samples level. The differences are probably small for the filling step, but may be significant when dropping samples.
###Code
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
from utils import preprocessing as pp
data_df = pd.read_pickle('../../data/data_train_val_df.pkl')
print(data_df.shape)
data_df.head()
data_df.columns.nlevels
###Output
_____no_output_____
###Markdown
Let's first filter at the symbol level
###Code
data_df['Close'].shape
good_ratios = 1.0 - (data_df['Close'].isnull().sum()/ data_df['Close'].shape[0])
good_ratios.sort_values(ascending=False).plot()
filtered_data_df = pp.drop_irrelevant_symbols(data_df['Close'], good_data_ratio=0.99)
good_ratios = 1.0 - (filtered_data_df.isnull().sum()/ filtered_data_df.shape[0])
good_ratios.sort_values(ascending=False).plot()
filtered_data_df.shape
filtered_data_df.head()
filtered_data_df.isnull().sum().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Let's try to filter the whole dataset using only the 'Close' values
###Code
import math  # explicit import so the cell does not rely on the %pylab namespace

good_data_ratio = 0.99
FEATURE_OF_INTEREST = 'Close'
filtered_data_df = data_df[FEATURE_OF_INTEREST].dropna(thresh=math.ceil(good_data_ratio*data_df[FEATURE_OF_INTEREST].shape[0]), axis=1)
filtered_data_df.head()
filtered_data_df.columns
fdata_df = data_df.loc[:,(slice(None),filtered_data_df.columns.tolist())]
new_cols = fdata_df.columns.get_level_values(1)
np.setdiff1d(new_cols, filtered_data_df.columns)
np.setdiff1d(filtered_data_df.columns, new_cols)
np.intersect1d(filtered_data_df.columns, new_cols).shape
filtered_data_df.columns.shape
###Output
_____no_output_____
###Markdown
Looks good to me... Let's test it on the full dataset
###Code
filtered_data_df = pp.drop_irrelevant_symbols(data_df, good_data_ratio=0.99)
good_ratios = 1.0 - (filtered_data_df['Close'].isnull().sum()/ filtered_data_df['Close'].shape[0])
good_ratios.sort_values(ascending=False).plot()
###Output
_____no_output_____
###Markdown
Now, let's filter at the sample level
###Code
import predictor.feature_extraction as fe
train_time = -1 # In real time days
base_days = 7 # In market days
step_days = 30 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
tic = time()
x, y = fe.generate_train_intervals(data_df,
train_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
x.shape
y.shape
x_y_df = pd.concat([x, y], axis=1)
x_y_df.shape
x_y_df.head()
x_y_df.isnull().sum(axis=1)
###Output
_____no_output_____ |
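###Markdown
A minimal sketch of the sample-level strategy described at the top of this notebook (assuming that `x` holds the base-period features and `y` the time-shifted target generated above): gaps in the features are filled first forward and then backward along each sample, while the target is never filled; samples with a missing target are dropped instead.
###Code
# fill the features only, first forward then backward along each sample
x_filled = x.fillna(method='ffill', axis=1).fillna(method='bfill', axis=1)
# never fill the target: drop the samples whose target is missing
target_ok = pd.DataFrame(y).notnull().all(axis=1)
x_clean = x_filled.loc[target_ok]
y_clean = y.loc[target_ok]
print(x_clean.shape, x_clean.isnull().sum().sum())
###Output
_____no_output_____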
jupyter/Watson Studio Public/Balance production of pasta.ipynb | ###Markdown
The Pasta Production ProblemThis tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model with IBM ILOG CPLEX Optimizer. Table of contents:- [Describe the business problem](Describe-the-business-problem)* [How decision optimization (prescriptive analytics) can help](How--decision-optimization-can-help)* [Use decision optimization](Use-decision-optimization) - [Step 1: Model the data](Step-1:-Model-the-data) * [Step 2: Prepare the data](Step-2:-Prepare-the-data) - [Step 3: Set up the prescriptive model](Step-3:-Set-up-the-prescriptive-model) * [Define the decision variables](Define-the-decision-variables) * [Express the business constraints](Express-the-business-constraints) * [Express the objective](Express-the-objective) * [Solve with Decision Optimization](Solve-with-Decision-Optimization) * [Step 4: Investigate the solution and run an example analysis](Step-4:-Investigate-the-solution-and-then-run-an-example-analysis)* [Summary](Summary)**** Describe the business problemThis notebook describes how to use CPLEX Modeling for Python to manage the production of pasta to meet demand with your resources.The model aims at minimizing the production cost for a number of products while satisfying customer demand. * Each product can be produced either inside the company or outside, at a higher cost. * The inside production is constrained by the company's resources, while outside production is considered unlimited.The model first declares the products and the resources.The data consists of the description of the products (the demand, the inside and outside costs,and the resource consumption) and the capacity of the various resources.The variables for this problem are the inside and outside production for each product. How decision optimization can help* Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. * Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. * Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. With prescriptive analytics, you can: * Automate the complex decisions and trade-offs to better manage your limited resources.* Take advantage of a future opportunity or mitigate a future risk.* Proactively update recommendations based on changing events.* Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. Use decision optimization Step 1: Model the data
###Code
products = [("kluski", 100, 0.6, 0.8),
("capellini", 200, 0.8, 0.9),
("fettucine", 300, 0.3, 0.4)]
# resources are a list of simple tuples (name, capacity)
resources = [("flour", 20),
("eggs", 40)]
consumptions = {("kluski", "flour"): 0.5,
("kluski", "eggs"): 0.2,
("capellini", "flour"): 0.4,
("capellini", "eggs"): 0.4,
("fettucine", "flour"): 0.3,
("fettucine", "eggs"): 0.6}
###Output
_____no_output_____
###Markdown
Step 2: Prepare the dataThe data used is very simple and is ready to use without any cleaning, massage, refactoring. Step 3: Set up the prescriptive modelSet up the prescriptive model using the Mathematical Programming (docplex.mp) modeling package.
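For reference, a compact statement of the model that the next cells build (this is only a restatement of the code below, using the data defined in Step 1): minimize\begin{equation}\sum_{p} \left( c^\text{in}_{p} \, inside_{p} + c^\text{out}_{p} \, outside_{p} \right)\end{equation}subject to\begin{equation}inside_{p} + outside_{p} \geq d_{p} \quad \forall p, \qquad \sum_{p} a_{p,r} \, inside_{p} \leq C_{r} \quad \forall r, \qquad inside_{p},\, outside_{p} \geq 0\end{equation}where $d_p$ denotes the demand, $c^\text{in}_p$ and $c^\text{out}_p$ the inside and outside unit costs, $a_{p,r}$ the consumption of resource $r$ per unit of product $p$, and $C_r$ the capacity of resource $r$.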
###Code
from docplex.mp.environment import Environment
env = Environment()
env.print_information()
###Output
_____no_output_____
###Markdown
Create the DOcplex modelThe model contains all the business constraints and defines the objective.We now use CPLEX Modeling for Python to build a Mixed Integer Programming (MIP) model for this problem.
###Code
from docplex.mp.model import Model
mdl = Model(name="pasta")
###Output
_____no_output_____
###Markdown
Define the decision variables
###Code
inside_vars = mdl.continuous_var_dict(products, name='inside')
outside_vars = mdl.continuous_var_dict(products, name='outside')
###Output
_____no_output_____
###Markdown
Express the business constraints * Each product can be produced either inside the company or outside, at a higher cost. * The inside production is constrained by the company's resources, while outside production is considered unlimited.
###Code
# --- constraints ---
# demand satisfaction
mdl.add_constraints((inside_vars[prod] + outside_vars[prod] >= prod[1], 'ct_demand_%s' % prod[0]) for prod in products)
# --- resource capacity ---
mdl.add_constraints((mdl.sum(inside_vars[p] * consumptions[p[0], res[0]] for p in products) <= res[1], 'ct_res_%s' % res[0]) for res in resources)
mdl.print_information()
###Output
_____no_output_____
###Markdown
Express the objectiveMinimizing the production cost for a number of products while satisfying customer demand.
###Code
total_inside_cost = mdl.sum(inside_vars[p] * p[2] for p in products)
total_outside_cost = mdl.sum(outside_vars[p] * p[3] for p in products)
mdl.minimize(total_inside_cost + total_outside_cost)
###Output
_____no_output_____
###Markdown
Solve with Decision OptimizationNow solve the model, using `Model.solve()`. The following cell solves using your local CPLEX (if any, and provided you have added it to your `PYTHONPATH` variable).
###Code
mdl.solve()
###Output
_____no_output_____
###Markdown
Step 4: Investigate the solution and then run an example analysis
###Code
obj = mdl.objective_value
print("* Production model solved with objective: {:g}".format(obj))
print("* Total inside cost=%g" % total_inside_cost.solution_value)
for p in products:
print("Inside production of {product}: {ins_var}".format(product=p[0], ins_var=inside_vars[p].solution_value))
print("* Total outside cost=%g" % total_outside_cost.solution_value)
for p in products:
print("Outside production of {product}: {out_var}".format(product=p[0], out_var=outside_vars[p].solution_value))
###Output
_____no_output_____ |
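###Markdown
A small follow-up analysis (a sketch reusing only the data and variables defined above): how much of each resource capacity is actually consumed by the optimal inside production plan.
###Code
for res in resources:
    used = sum(consumptions[p[0], res[0]] * inside_vars[p].solution_value for p in products)
    print("Resource {name}: {used:g} used out of a capacity of {cap:g}".format(name=res[0], used=used, cap=res[1]))
###Output
_____no_output_____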
cell_tower_coverage/cell_tower.ipynb | ###Markdown
Cell Tower Coverage Objective and PrerequisitesIn this example, we'll solve a simple covering problem: how to build a network of cell towers to provide signal coverage to the largest number of people possible. We'll construct a mathematical model of the business problem, implement this model in the Gurobi Python interface, and compute an optimal solution.This modeling example is at the beginner level, where we assume that you know Python and that you have some knowledge about building mathematical optimization models.**Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*. MotivationOver the last ten years, smartphones have revolutionized our lives in ways that go well beyond how we communicate. Besides calling, texting, and emailing, more than two billion people around the world now use these devices to navigate to book cab rides, to compare product reviews and prices, to follow the news, to watch movies, to listen to music, to play video games,to take photographs, to participate in social media, and for numerous other applications.A cellular network is a network of handheld smartphones in which each phone communicates with the telephone network by radio waves through a local antenna at a cellular base station (cell tower). One important problem is the placement of cell towers to provide signal coverage to the largest number of people. Problem DescriptionA telecom company needs to build a set of cell towers to provide signal coverage for the inhabitants of a given city. A number of potential locations where the towers could be built have been identified. The towers have a fixed range, and -due to budget constraints- only a limited number of them can be built. Given these restrictions, the company wishes to provide coverage to the largest percentage of the population possible. To simplify the problem, the company has split the area it wishes to cover into a set of regions, each of which has a known population. The goal is then to choose which of the potential locations the company should build cell towers on -in order to provide coverage to as many people as possible.The Cell Tower Coverage Problem is an instance of the Maximal Covering Location Problem [1]. It is also related to the Set Cover Problem. Set covering problems occur in many different fields, and very important applications come from the airlines industry. For example, Crew Scheduling and Tail Assignment Problem [2]. Solution ApproachMathematical programming is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. 
The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science.A mathematical optimization model has five components, namely:* Sets and indices.* Parameters.* Decision variables.* Objective function(s).* Constraints.We now present a mixed-integer programming (MIP) formulation for the Cell Tower Coverage Problem. Model Formulation Sets and Indices$i \in T$: Index and set of potential sites to build a tower.$j \in R$: Index and set of regions.$G(T,R,E)$: A bipartite graph defined over the set $T$ of potential sites to build a tower, the set of regions $R$ that we want to cover, and $E$ is the set of edges, where we have an edge $(i,j) \in E$ if region $j \in R$ can be covered by a tower on location $i \in T$. Parameters$c_{i} \in \mathbb{R}^+$: The cost of setting up a tower at site $i$.$p_{j} \in \mathbb{N}$: The population at region $j$. Decision Variables$covered_{j} \in \{0, 1 \}$: This variable is equal to 1 if region $j$ is covered; and 0 otherwise.$build_{i} \in \{0, 1 \}$: This variable is equal to 1 if tower $i$ is built; and 0 otherwise. Objective Function(s)- **Population covered**. We seek to maximize the total population covered by the towers.\begin{equation}\text{Max} \quad Z = \sum_{j \in R} p_{j} \cdot covered_{j}\tag{0}\end{equation} Constraints- **Coverage**. For each region $j \in R$ ensure that at least one tower that covers a region must be selected.\begin{equation}\sum_{(i,j) \in E} build_{i} \geq covered_{j} \quad \forall j \in R\tag{1}\end{equation}- **Budget**. We need to ensure that the total cost of building towers do not exceed the allocated budget.\begin{equation}\sum_{i \in T} c_{i} \cdot build_{i} \leq \text{budget}\tag{2}\end{equation} Python ImplementationThis example considers a bipartite graph for 6 towers and 9 regions. The following table illustrates which regions (columns) are covered by each cell tower site (rows).| | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 || --- | --- | --- | --- | --- | --- | --- | --- | --- | --- || Tower 0 | 1 | 1 | - | - | - | 1 | - | - | - || Tower 1 | 1 | - | - | - | - | - | - | 1 | 1 || Tower 2 | - | - | 1 | 1 | 1 | - | 1 | - | - || Tower 3 | - | - | 1 | - | - | 1 | 1 | - | - || Tower 4 | 1 | - | 1 | - | - | - | 1 | 1 | 1 || Tower 5 | - | - | - | 1 | 1 | - | - | - | 1 |The population at each region is stated in the following table.| | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 || --- | --- | --- | --- | --- | --- | --- | --- | --- | --- || Population | 523 | 690 | 420 | 1010 | 1200 | 850 | 400 | 1008 | 950 |The cost to build a cell tower at each location site is stated inthe following table.| | Cost (millions of USD) || --- | --- || Tower 0 | 4.2 || Tower 1 | 6.1 || Tower 2 | 5.2 || Tower 3 | 5.5 || Tower 4 | 4.8 || Tower 5 | 9.2 | The allocated budget is $\$20,000,000$.We now import the Gurobi Python Module. Then, we initialize the data structures with the given data.
###Code
import gurobipy as gp
from gurobipy import GRB
# tested with Gurobi v9.0.0 and Python 3.7.0
# Parameters
budget = 20
regions, population = gp.multidict({
0: 523, 1: 690, 2: 420,
3: 1010, 4: 1200, 5: 850,
6: 400, 7: 1008, 8: 950
})
sites, coverage, cost = gp.multidict({
0: [{0,1,5}, 4.2],
1: [{0,7,8}, 6.1],
2: [{2,3,4,6}, 5.2],
3: [{2,5,6}, 5.5],
4: [{0,2,6,7,8}, 4.8],
5: [{3,4,8}, 9.2]
})
###Output
_____no_output_____
###Markdown
Model DeploymentWe now determine the model for the Cell Tower Coverage Problem, by defining the decision variables, constraints, and objective function. Next, we start the optimization process and Gurobi finds the plan to build towers that maximizes the coverage of the population given the budget allocated.
###Code
# MIP model formulation
m = gp.Model("cell_tower")
build = m.addVars(len(sites), vtype=GRB.BINARY, name="Build")
is_covered = m.addVars(len(regions), vtype=GRB.BINARY, name="Is_covered")
m.addConstrs((gp.quicksum(build[t] for t in sites if r in coverage[t]) >= is_covered[r]
for r in regions), name="Build2cover")
m.addConstr(build.prod(cost) <= budget, name="budget")
m.setObjective(is_covered.prod(population), GRB.MAXIMIZE)
m.optimize()
###Output
Using license file c:\gurobi\gurobi.lic
Set parameter TokenServer to value SANTOS-SURFACE-
Gurobi Optimizer version 9.0.0 build v9.0.0rc2 (win64)
Optimize a model with 10 rows, 15 columns and 36 nonzeros
Model fingerprint: 0xfa0fabb2
Variable types: 0 continuous, 15 integer (15 binary)
Coefficient statistics:
Matrix range [1e+00, 9e+00]
Objective range [4e+02, 1e+03]
Bounds range [1e+00, 1e+00]
RHS range [2e+01, 2e+01]
Found heuristic solution: objective -0.0000000
Presolve removed 4 rows and 5 columns
Presolve time: 0.00s
Presolved: 6 rows, 10 columns, 21 nonzeros
Variable types: 0 continuous, 10 integer (10 binary)
Root relaxation: objective 7.051000e+03, 1 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 7051.0000000 7051.00000 0.00% - 0s
Explored 0 nodes (1 simplex iterations) in 0.02 seconds
Thread count was 8 (of 8 available processors)
Solution count 2: 7051 -0
Optimal solution found (tolerance 1.00e-04)
Best objective 7.051000000000e+03, best bound 7.051000000000e+03, gap 0.0000%
###Markdown
AnalysisThe result of the optimization model shows that the maximum population that can be covered with the $\$20,000,000$ budget is 7,051 people. Let's see the solution that achieves that optimal result. Cell Tower Build PlanThis plan determines at which site locations to build a cell tower.
###Code
# display optimal values of decision variables
for tower in build.keys():
if (abs(build[tower].x) > 1e-6):
print(f"\n Build a cell tower at location Tower {tower}.")
###Output
Build a cell tower at location Tower 0.
Build a cell tower at location Tower 2.
Build a cell tower at location Tower 4.
###Markdown
Demand Fulfillment Metrics- **Coverage**: Percentage of the population covered by the cell towers built.
###Code
# Percentage of the population covered by the cell towers built is computed as follows.
total_population = 0
for region in range(len(regions)):
total_population += population[region]
coverage = round(100*m.objVal/total_population, 2)
print(f"\n The population coverage associated to the cell towers build plan is: {coverage} %")
###Output
The population coverage associated to the cell towers build plan is: 100.0 %
###Markdown
Resources Utilization Metrics- **Budget consumption**: Percentage of the budget allocated to build the cell towers.
###Code
# Percentage of budget consumed to build cell towers
total_cost = 0
for tower in range(len(sites)):
if (abs(build[tower].x) > 0.5):
total_cost += cost[tower]*int(build[tower].x)
budget_consumption = round(100*total_cost/budget, 2)
print(f"\n The percentage of budget consumed associated to the cell towers build plan is: {budget_consumption} %")
###Output
The percentage of budget consumed associated to the cell towers build plan is: 71.0 %
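###Markdown
As a complement to the build plan above (a small sketch using the `is_covered` variables already defined in the model), the regions that end up covered in the optimal solution can be listed directly.
###Code
# display the regions covered by the optimal build plan
for region in is_covered.keys():
    if abs(is_covered[region].x) > 1e-6:
        print(f"Region {region} is covered.")
###Output
_____no_output_____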
###Markdown
Cell Tower Coverage Objective and PrerequisitesIn this example, we'll solve a simple covering problem: how to build a network of cell towers to provide signal coverage to the largest number of people possible. We'll construct a mathematical model of the business problem, implement this model in the Gurobi Python interface, and compute an optimal solution.This modeling example is at the beginner level, where we assume that you know Python and that you have some knowledge about building mathematical optimization models.**Download the Repository:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). **Gurobi License:** In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-TME-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_cell-Tower-Coverage_COM_EVAL_GITHUB_&utm_term=cell-tower-coverage-problem&utm_content=C_JPM) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-TME-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_cell-Tower-Coverage_ACADEMIC_EVAL_GITHUB_&utm_term=cell-tower-coverage-problem&utm_content=C_JPM) as an *academic user*. MotivationOver the last ten years, smartphones have revolutionized our lives in ways that go well beyond how we communicate. Besides calling, texting, and emailing, more than two billion people around the world now use these devices to navigate to book cab rides, to compare product reviews and prices, to follow the news, to watch movies, to listen to music, to play video games,to take photographs, to participate in social media, and for numerous other applications.A cellular network is a network of handheld smartphones in which each phone communicates with the telephone network by radio waves through a local antenna at a cellular base station (cell tower). One important problem is the placement of cell towers to provide signal coverage to the largest number of people. Problem DescriptionA telecom company needs to build a set of cell towers to provide signal coverage for the inhabitants of a given city. A number of potential locations where the towers could be built have been identified. The towers have a fixed range, and -due to budget constraints- only a limited number of them can be built. Given these restrictions, the company wishes to provide coverage to the largest percentage of the population possible. To simplify the problem, the company has split the area it wishes to cover into a set of regions, each of which has a known population. The goal is then to choose which of the potential locations the company should build cell towers on -in order to provide coverage to as many people as possible.The Cell Tower Coverage Problem is an instance of the Maximal Covering Location Problem [1]. It is also related to the Set Cover Problem. Set covering problems occur in many different fields, and very important applications come from the airlines industry. For example, Crew Scheduling and Tail Assignment Problem [2]. Solution ApproachMathematical programming is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. 
The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science.A mathematical optimization model has five components, namely:* Sets and indices.* Parameters.* Decision variables.* Objective function(s).* Constraints.We now present a mixed-integer programming (MIP) formulation for the Cell Tower Coverage Problem. Model Formulation Sets and Indices$i \in T$: Index and set of potential sites to build a tower.$j \in R$: Index and set of regions.$G(T,R,E)$: A bipartite graph defined over the set $T$ of potential sites to build a tower, the set of regions $R$ that we want to cover, and $E$ is the set of edges, where we have an edge $(i,j) \in E$ if region $j \in R$ can be covered by a tower on location $i \in T$. Parameters$c_{i} \in \mathbb{R}^+$: The cost of setting up a tower at site $i$.$p_{j} \in \mathbb{N}$: The population at region $j$. Decision Variables$covered_{j} \in \{0, 1 \}$: This variable is equal to 1 if region $j$ is covered; and 0 otherwise.$build_{i} \in \{0, 1 \}$: This variable is equal to 1 if tower $i$ is built; and 0 otherwise. Objective Function(s)- **Population covered**. We seek to maximize the total population covered by the towers.\begin{equation}\text{Max} \quad Z = \sum_{j \in R} p_{j} \cdot covered_{j}\tag{0}\end{equation} Constraints- **Coverage**. For each region $j \in R$ ensure that at least one tower that covers a region must be selected.\begin{equation}\sum_{(i,j) \in E} build_{i} \geq covered_{j} \quad \forall j \in R\tag{1}\end{equation}- **Budget**. We need to ensure that the total cost of building towers do not exceed the allocated budget.\begin{equation}\sum_{i \in T} c_{i} \cdot build_{i} \leq \text{budget}\tag{2}\end{equation} Python ImplementationThis example considers a bipartite graph for 6 towers and 9 regions. The following table illustrates which regions (columns) are covered by each cell tower site (rows).| | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 || --- | --- | --- | --- | --- | --- | --- | --- | --- | --- || Tower 0 | 1 | 1 | - | - | - | 1 | - | - | - || Tower 1 | 1 | - | - | - | - | - | - | 1 | 1 || Tower 2 | - | - | 1 | 1 | 1 | - | 1 | - | - || Tower 3 | - | - | 1 | - | - | 1 | 1 | - | - || Tower 4 | 1 | - | 1 | - | - | - | 1 | 1 | 1 || Tower 5 | - | - | - | 1 | 1 | - | - | - | 1 |The population at each region is stated in the following table.| | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 || --- | --- | --- | --- | --- | --- | --- | --- | --- | --- || Population | 523 | 690 | 420 | 1010 | 1200 | 850 | 400 | 1008 | 950 |The cost to build a cell tower at each location site is stated inthe following table.| | Cost (millions of USD) || --- | --- || Tower 0 | 4.2 || Tower 1 | 6.1 || Tower 2 | 5.2 || Tower 3 | 5.5 || Tower 4 | 4.8 || Tower 5 | 9.2 | The allocated budget is $\$20,000,000$.We now import the Gurobi Python Module. Then, we initialize the data structures with the given data.
###Code
import gurobipy as gp
from gurobipy import GRB
# tested with Gurobi v9.0.0 and Python 3.7.0
# Parameters
budget = 20
regions, population = gp.multidict({
0: 523, 1: 690, 2: 420,
3: 1010, 4: 1200, 5: 850,
6: 400, 7: 1008, 8: 950
})
sites, coverage, cost = gp.multidict({
0: [{0,1,5}, 4.2],
1: [{0,7,8}, 6.1],
2: [{2,3,4,6}, 5.2],
3: [{2,5,6}, 5.5],
4: [{0,2,6,7,8}, 4.8],
5: [{3,4,8}, 9.2]
})
###Output
_____no_output_____
###Markdown
Model DeploymentWe now determine the model for the Cell Tower Coverage Problem, by defining the decision variables, constraints, and objective function. Next, we start the optimization process and Gurobi finds the plan to build towers that maximizes the coverage of the population given the budget allocated.
###Code
# MIP model formulation
m = gp.Model("cell_tower")
build = m.addVars(len(sites), vtype=GRB.BINARY, name="Build")
is_covered = m.addVars(len(regions), vtype=GRB.BINARY, name="Is_covered")
m.addConstrs((gp.quicksum(build[t] for t in sites if r in coverage[t]) >= is_covered[r]
for r in regions), name="Build2cover")
m.addConstr(build.prod(cost) <= budget, name="budget")
m.setObjective(is_covered.prod(population), GRB.MAXIMIZE)
m.optimize()
###Output
Using license file c:\gurobi\gurobi.lic
Set parameter TokenServer to value SANTOS-SURFACE-
Gurobi Optimizer version 9.0.0 build v9.0.0rc2 (win64)
Optimize a model with 10 rows, 15 columns and 36 nonzeros
Model fingerprint: 0xfa0fabb2
Variable types: 0 continuous, 15 integer (15 binary)
Coefficient statistics:
Matrix range [1e+00, 9e+00]
Objective range [4e+02, 1e+03]
Bounds range [1e+00, 1e+00]
RHS range [2e+01, 2e+01]
Found heuristic solution: objective -0.0000000
Presolve removed 4 rows and 5 columns
Presolve time: 0.00s
Presolved: 6 rows, 10 columns, 21 nonzeros
Variable types: 0 continuous, 10 integer (10 binary)
Root relaxation: objective 7.051000e+03, 1 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 7051.0000000 7051.00000 0.00% - 0s
Explored 0 nodes (1 simplex iterations) in 0.02 seconds
Thread count was 8 (of 8 available processors)
Solution count 2: 7051 -0
Optimal solution found (tolerance 1.00e-04)
Best objective 7.051000000000e+03, best bound 7.051000000000e+03, gap 0.0000%
###Markdown
AnalysisThe result of the optimization model shows that the maximum population that can be covered with the $\$20,000,000$ budget is 7,051 people. Let's see the solution that achieves that optimal result. Cell Tower Build PlanThis plan determines at which site locations to build a cell tower.
###Code
# display optimal values of decision variables
for tower in build.keys():
if (abs(build[tower].x) > 1e-6):
print(f"\n Build a cell tower at location Tower {tower}.")
###Output
Build a cell tower at location Tower 0.
Build a cell tower at location Tower 2.
Build a cell tower at location Tower 4.
###Markdown
Demand Fulfillment Metrics- **Coverage**: Percentage of the population covered by the cell towers built.
###Code
# Percentage of the population covered by the cell towers built is computed as follows.
total_population = 0
for region in range(len(regions)):
total_population += population[region]
coverage = round(100*m.objVal/total_population, 2)
print(f"\n The population coverage associated to the cell towers build plan is: {coverage} %")
###Output
The population coverage associated to the cell towers build plan is: 100.0 %
###Markdown
Resources Utilization Metrics- **Budget consumption**: Percentage of the budget allocated to build the cell towers.
###Code
# Percentage of budget consumed to build cell towers
total_cost = 0
for tower in range(len(sites)):
if (abs(build[tower].x) > 0.5):
total_cost += cost[tower]*int(build[tower].x)
budget_consumption = round(100*total_cost/budget, 2)
print(f"\n The percentage of budget consumed associated to the cell towers build plan is: {budget_consumption} %")
###Output
The percentage of budget consumed associated to the cell towers build plan is: 71.0 %
###Markdown
Cell Tower Coverage Objective and PrerequisitesWant to learn how to configure a network of cell towers to provide signal coverage to the largest number of people possible? In this example, you’ll learn how to solve this simple covering problem. We’ll show you how to construct a mixed-integer programming (MIP) model of the problem, implement this model in the Gurobi Python API, and find an optimal solution using the Gurobi Optimizer.This modeling example is at the beginner level, where we assume that you know Python and that you have some knowledge about building mathematical optimization models.**Download the Repository:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). **Gurobi License:** In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-TME-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_cell-Tower-Coverage_COM_EVAL_GITHUB_&utm_term=cell-tower-coverage-problem&utm_content=C_JPM) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-TME-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_cell-Tower-Coverage_ACADEMIC_EVAL_GITHUB_&utm_term=cell-tower-coverage-problem&utm_content=C_JPM) as an *academic user*. MotivationOver the last ten years, smartphones have revolutionized our lives in ways that go well beyond how we communicate. Besides calling, texting, and emailing, more than two billion people around the world now use these devices to navigate to book cab rides, to compare product reviews and prices, to follow the news, to watch movies, to listen to music, to play video games,to take photographs, to participate in social media, and for numerous other applications.A cellular network is a network of handheld smartphones in which each phone communicates with the telephone network by radio waves through a local antenna at a cellular base station (cell tower). One important problem is the placement of cell towers to provide signal coverage to the largest number of people. Problem DescriptionA telecom company needs to build a set of cell towers to provide signal coverage for the inhabitants of a given city. A number of potential locations where the towers could be built have been identified. The towers have a fixed range, and -due to budget constraints- only a limited number of them can be built. Given these restrictions, the company wishes to provide coverage to the largest percentage of the population possible. To simplify the problem, the company has split the area it wishes to cover into a set of regions, each of which has a known population. The goal is then to choose which of the potential locations the company should build cell towers on -in order to provide coverage to as many people as possible.The Cell Tower Coverage Problem is an instance of the Maximal Covering Location Problem [1]. It is also related to the Set Cover Problem. Set covering problems occur in many different fields, and very important applications come from the airlines industry. For example, Crew Scheduling and Tail Assignment Problem [2]. Solution ApproachMathematical programming is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. 
The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science.A mathematical optimization model has five components, namely:* Sets and indices.* Parameters.* Decision variables.* Objective function(s).* Constraints.We now present a mixed-integer programming (MIP) formulation for the Cell Tower Coverage Problem. Model Formulation Sets and Indices$i \in T$: Index and set of potential sites to build a tower.$j \in R$: Index and set of regions.$G(T,R,E)$: A bipartite graph defined over the set $T$ of potential sites to build a tower, the set of regions $R$ that we want to cover, and $E$ is the set of edges, where we have an edge $(i,j) \in E$ if region $j \in R$ can be covered by a tower on location $i \in T$. Parameters$c_{i} \in \mathbb{R}^+$: The cost of setting up a tower at site $i$.$p_{j} \in \mathbb{N}$: The population at region $j$. Decision Variables$covered_{j} \in \{0, 1 \}$: This variable is equal to 1 if region $j$ is covered; and 0 otherwise.$build_{i} \in \{0, 1 \}$: This variable is equal to 1 if tower $i$ is built; and 0 otherwise. Objective Function(s)- **Population covered**. We seek to maximize the total population covered by the towers.\begin{equation}\text{Max} \quad Z = \sum_{j \in R} p_{j} \cdot covered_{j}\tag{0}\end{equation} Constraints- **Coverage**. For each region $j \in R$ ensure that at least one tower that covers a region must be selected.\begin{equation}\sum_{(i,j) \in E} build_{i} \geq covered_{j} \quad \forall j \in R\tag{1}\end{equation}- **Budget**. We need to ensure that the total cost of building towers do not exceed the allocated budget.\begin{equation}\sum_{i \in T} c_{i} \cdot build_{i} \leq \text{budget}\tag{2}\end{equation} Python ImplementationThis example considers a bipartite graph for 6 towers and 9 regions. The following table illustrates which regions (columns) are covered by each cell tower site (rows).| | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 || --- | --- | --- | --- | --- | --- | --- | --- | --- | --- || Tower 0 | 1 | 1 | - | - | - | 1 | - | - | - || Tower 1 | 1 | - | - | - | - | - | - | 1 | 1 || Tower 2 | - | - | 1 | 1 | 1 | - | 1 | - | - || Tower 3 | - | - | 1 | - | - | 1 | 1 | - | - || Tower 4 | 1 | - | 1 | - | - | - | 1 | 1 | 1 || Tower 5 | - | - | - | 1 | 1 | - | - | - | 1 |The population at each region is stated in the following table.| | Region 0 | Region 1 | Region 2 | Region 3 | Region 4 | Region 5 | Region 6 | Region 7 | Region 8 || --- | --- | --- | --- | --- | --- | --- | --- | --- | --- || Population | 523 | 690 | 420 | 1010 | 1200 | 850 | 400 | 1008 | 950 |The cost to build a cell tower at each location site is stated inthe following table.| | Cost (millions of USD) || --- | --- || Tower 0 | 4.2 || Tower 1 | 6.1 || Tower 2 | 5.2 || Tower 3 | 5.5 || Tower 4 | 4.8 || Tower 5 | 9.2 | The allocated budget is $\$20,000,000$.We now import the Gurobi Python Module. Then, we initialize the data structures with the given data.
###Code
import gurobipy as gp
from gurobipy import GRB
# tested with Gurobi v9.0.0 and Python 3.7.0
# Parameters
budget = 20
regions, population = gp.multidict({
0: 523, 1: 690, 2: 420,
3: 1010, 4: 1200, 5: 850,
6: 400, 7: 1008, 8: 950
})
sites, coverage, cost = gp.multidict({
0: [{0,1,5}, 4.2],
1: [{0,7,8}, 6.1],
2: [{2,3,4,6}, 5.2],
3: [{2,5,6}, 5.5],
4: [{0,2,6,7,8}, 4.8],
5: [{3,4,8}, 9.2]
})
###Output
_____no_output_____
###Markdown
Model DeploymentWe now determine the model for the Cell Tower Coverage Problem, by defining the decision variables, constraints, and objective function. Next, we start the optimization process and Gurobi finds the plan to build towers that maximizes the coverage of the population given the budget allocated.
###Code
# MIP model formulation
m = gp.Model("cell_tower")
build = m.addVars(len(sites), vtype=GRB.BINARY, name="Build")
is_covered = m.addVars(len(regions), vtype=GRB.BINARY, name="Is_covered")
m.addConstrs((gp.quicksum(build[t] for t in sites if r in coverage[t]) >= is_covered[r]
for r in regions), name="Build2cover")
m.addConstr(build.prod(cost) <= budget, name="budget")
m.setObjective(is_covered.prod(population), GRB.MAXIMIZE)
m.optimize()
###Output
Using license file c:\gurobi\gurobi.lic
Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 10 rows, 15 columns and 36 nonzeros
Model fingerprint: 0xfa0fabb2
Variable types: 0 continuous, 15 integer (15 binary)
Coefficient statistics:
Matrix range [1e+00, 9e+00]
Objective range [4e+02, 1e+03]
Bounds range [1e+00, 1e+00]
RHS range [2e+01, 2e+01]
Found heuristic solution: objective -0.0000000
Presolve removed 4 rows and 5 columns
Presolve time: 0.00s
Presolved: 6 rows, 10 columns, 21 nonzeros
Variable types: 0 continuous, 10 integer (10 binary)
Root relaxation: objective 7.051000e+03, 1 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 7051.0000000 7051.00000 0.00% - 0s
Explored 0 nodes (1 simplex iterations) in 0.02 seconds
Thread count was 8 (of 8 available processors)
Solution count 2: 7051 -0
Optimal solution found (tolerance 1.00e-04)
Best objective 7.051000000000e+03, best bound 7.051000000000e+03, gap 0.0000%
###Markdown
AnalysisThe result of the optimization model shows that the maximum population that can be covered with the $\$20,000,000$ budget is 7,051 people. Let's see the solution that achieves that optimal result. Cell Tower Build PlanThis plan determines at which site locations to build a cell tower.
###Code
# display optimal values of decision variables
for tower in build.keys():
if (abs(build[tower].x) > 1e-6):
print(f"\n Build a cell tower at location Tower {tower}.")
###Output
Build a cell tower at location Tower 0.
Build a cell tower at location Tower 2.
Build a cell tower at location Tower 4.
###Markdown
Demand Fulfillment Metrics- **Coverage**: Percentage of the population covered by the cell towers built.
###Code
# Percentage of the population covered by the cell towers built is computed as follows.
total_population = 0
for region in range(len(regions)):
total_population += population[region]
coverage = round(100*m.objVal/total_population, 2)
print(f"\n The population coverage associated to the cell towers build plan is: {coverage} %")
###Output
The population coverage associated to the cell towers build plan is: 100.0 %
###Markdown
Resources Utilization Metrics- **Budget consumption**: Percentage of the budget allocated to build the cell towers.
###Code
# Percentage of budget consumed to build cell towers
total_cost = 0
for tower in range(len(sites)):
if (abs(build[tower].x) > 0.5):
total_cost += cost[tower]*int(build[tower].x)
budget_consumption = round(100*total_cost/budget, 2)
print(f"\n The percentage of budget consumed associated to the cell towers build plan is: {budget_consumption} %")
###Output
The percentage of budget consumed associated to the cell towers build plan is: 71.0 %
|
eda.ipynb | ###Markdown
General Exploratory Data Analysis
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import os.path
from datetime import datetime
from datetime import date
from dateutil import parser
#import pickle
#import asyncio
from datetime import timedelta
import dateutil.parser
import imp
import json
import statistics
#import random
#from binance.client import Client
#import api
#import get_uptodate_binance_data
#import generate_random_file
#import track_pnl
%%time
filename = 'BTCUSDT-1h-binance.csv'
timeframe = '1h'
OHLC_directory = '/root/OResearch/Data/Binance_OHLC/'
complete_file_path = OHLC_directory + filename
df = pd.read_csv(complete_file_path)
df = df.drop(columns=['Unnamed: 0'], axis=0)
###Output
_____no_output_____
###Markdown
Adding log-return
###Code
df['closeprice_log_return']=np.log(df.close) - np.log(df.close.shift(1))
df = df.iloc[1: , :] #Remove first row which contains NA due to log-return
df['datetime'] = pd.to_datetime(df['timestamp'], errors='coerce')
df['day'] = df['datetime'].dt.day_name()
df['week'] = df['datetime'].dt.isocalendar().week  # Series.dt.week is deprecated in newer pandas
df['month'] = df['datetime'].dt.month_name()
df
###Output
_____no_output_____
###Markdown
We plot the average and median log_return by day, by week, and by month. by day
###Code
days=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
fig = df[['day', 'closeprice_log_return']].groupby('day', sort=True).mean().reindex(days).plot(kind='bar', title='Average hourly log-return for BTCUSDT per day', legend=True).get_figure()
fig.savefig('Images/Average hourly log-return for BTCUSDT per day.png')
fig = df[['day', 'closeprice_log_return']].groupby('day', sort=False).median().reindex(days).plot(kind='bar', title='Median hourly log-return for BTCUSDT per day', legend=True).get_figure()
fig.savefig('Images/Median hourly log-return for BTCUSDT per day.png')
###Output
_____no_output_____
###Markdown
by week
###Code
fig = df[['week', 'closeprice_log_return']].groupby('week', sort=True).mean().plot(kind='bar', title='Average hourly log-return for BTCUSDT per week number', legend=True).get_figure()
fig.savefig('Images/Average hourly log-return for BTCUSDT per week number.png')
###Output
_____no_output_____
###Markdown
We can notice quite a pattern in the 53rd calendar week. However, this is misleading: the 53rd ISO week only occurs in some years of the sample (e.g. 2020), so it has far fewer observations than the other weeks and the estimate is biased by the small sample size.
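A quick way to back this up (a sketch reusing the `df` and `week` column created above) is to count how many hourly observations fall in each calendar week; week 53 should have far fewer rows than the rest:

```python
# Number of hourly observations per calendar week; a week that only occurs in
# one year of the sample (such as week 53) will have a much smaller count.
week_counts = df.groupby('week')['closeprice_log_return'].count()
print(week_counts.sort_values().head())  # the under-represented weeks
```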
###Code
fig = df[['week', 'closeprice_log_return']].groupby('week').median().plot(kind='bar', title='Median hourly log-return for BTCUSDT per week number', legend=True).get_figure()
fig.savefig('Images/Median hourly log-return for BTCUSDT per week number.png')
###Output
_____no_output_____
###Markdown
Let's now look at extreme values or outliers among those returns. by month
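One simple way to flag those extreme hours (a sketch on the same `df`; the three-standard-deviation cutoff is an arbitrary illustrative choice, not something used elsewhere in this notebook):

```python
# Flag hourly log-returns that sit more than 3 standard deviations from the mean.
mu = df['closeprice_log_return'].mean()
sigma = df['closeprice_log_return'].std()
outliers = df[(df['closeprice_log_return'] - mu).abs() > 3 * sigma]
print(f"{len(outliers)} extreme hours out of {len(df)}")
print(outliers[['datetime', 'closeprice_log_return']].head())
```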
###Code
months=['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
fig = df[['month', 'closeprice_log_return']].groupby('month', sort=False).mean().reindex(months).plot(kind='bar', title='Average hourly log-return for BTCUSDT per month', legend=True).get_figure()
fig.savefig('Images/Average hourly log-return for BTCUSDT per month.png')
fig = df[['month', 'closeprice_log_return']].groupby('month').median().reindex(months).plot(kind='bar', title='Median hourly log-return for BTCUSDT per month', legend=True).get_figure()
fig.savefig('Images/Median hourly log-return for BTCUSDT per month.png')
###Output
_____no_output_____
###Markdown
Exercise: Wrangling Data: Acquisition, Integration, and Exploration For this lab’s exercise we are going to answer a few questions about AirBnB listings in San Francisco to make better-informed civic decisions. Spurred by Prop F in San Francisco, imagine you are the mayor of SF (or your respective city) and you need to decide what impact AirBnB has had on your own housing situation. We will collect the relevant data, parse and store this data in a structured form, and use statistics and visualization to both better understand our own city and potentially communicate these findings to the public at large.> I will explore SF's data, but the techniques should be generally applicable to any city. Inside AirBnB has many interesting cities to further explore: http://insideairbnb.com/ Outline* Start with Effective Questions * Intro + Data Science Overview * Proposition F * How can we answer this?* Acquiring Data * What's an API? (Zillow API, SF Open Data, datausa.io) * How the Web Works (Socrata API)* What if there is no API? * Scrape an AirBnB listing* What to do now that we have data? * Basics of HTML (CSS selectors and grabbing what you want) * Use `lxml` to parse web pages* Storing Data * Schemas and Structure * Relations (users, listings, and reviews) * Store listing in SQLite* Manipulating Data * basics of Pandas * summary stats * split-apply-combine * Aggregations * Prop F. revenue lost* Exploratory Data Analysis * Inside AirBnB * Why visual? * Chart Types (visualizing continuous, categorical, and distributions and facets) * Distributions of Prop F. Revenue vs. point statistics Visualize Time to visualize! Using pandas (and matplotlib) create a visualization of each of the following:* Distribution of room_type (for entire city)* Histogram of listings per neighborhood* Histogram of listings for each user* City wide distribution of listing price* Distribution of median listing price per neighborhood* Histogram of number of reviews per listing
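Some of the requested plots (e.g. the distribution of the median listing price per neighbourhood and the histogram of reviews per listing) are not shown in the cells below; a minimal sketch for those two, using the same `df` loaded next, could look like:

```python
# Distribution of the median listing price per neighbourhood
df.groupby('neighbourhood')['price'].median().plot.hist(bins=30)
plt.xlabel('median listing price per neighbourhood')
plt.show()

# Histogram of the number of reviews per listing
df['number_of_reviews'].plot.hist(bins=50)
plt.xlabel('number of reviews per listing')
plt.show()
```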
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%pylab inline
# We will use the Inside AirBnB dataset from here on
df = pd.read_csv('data/sf_listings.csv')
df.head()
df.room_type.value_counts().plot.bar()
# Since SF doesn't have many neighborhoods (comparatively) we can also see the raw # per neighborhood
df.groupby('neighbourhood').count()['id'].plot.bar(figsize=(14,6))
df.groupby('host_id').count()['id'].plot.hist(bins=50)
# let's zoom in to the tail
subselect = df.groupby('host_id').count()['id']
subselect[subselect > 1].plot.hist(bins=50)
def scale_free_plot(df, num):
subselect = df.groupby('host_id').count()['id']
return subselect[subselect > num].plot.hist(bins=75)
scale_free_plot(df, 2)
# the shape of the distribution stays relatively the same as we subselect
for i in range(5):
scale_free_plot(df, i)
plt.show()
###Output
_____no_output_____
###Markdown
Scatterplot Matrix In an effort to find potential correlations (or outliers) you want a slightly more fine-grained look at the data. Create a scatterplot matrix of the data for your city. http://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization-scatter-matrix
###Code
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in newer pandas versions
# it only makes sense to plot the continuous columns
continuous_columns = ['price', 'minimum_nights', 'number_of_reviews', 'reviews_per_month', \
'calculated_host_listings_count','availability_365']
# semicolon prevents the axis objests from printing
scatter_matrix(df[continuous_columns], alpha=0.6, figsize=(16, 16), diagonal='kde');
###Output
_____no_output_____
###Markdown
Interesting insights from the scatter matrix:* `price` heavily skewed towards cheap prices (with a few extreme outliers). `host_listings_count` and `number_of_reviews` have similar distributions.* `minimum_nights` has a sharp bimodal distribution.* Listing are bimodal too and are either: * available for a relatively short period of the year * available for most of it (these are probably the ___"hotels"___)* Host with a large number of listings have them each for a relative low price.* Listings that are expensive have very few reviews (i.e. not many people stay at them)
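To put rough numbers on two of these observations (a sketch over the same `df`; the 30-night cutoff mirrors the `minimum_nights > 29` filter used further below):

```python
# Share of listings that look like short-term sublets (minimum stay of 30+ nights)
print((df.minimum_nights >= 30).mean())

# Among entire-home listings, the share belonging to hosts with more than one listing
entire = df[df.room_type == 'Entire home/apt']
print((entire.calculated_host_listings_count > 1).mean())
```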
###Code
sns.distplot(df[(df.calculated_host_listings_count > 2) & (df.room_type == 'Entire home/apt')].availability_365, bins=50)
sns.distplot(df[(df.calculated_host_listings_count <= 2) & (df.room_type == 'Entire home/apt')].availability_365, bins=50)
# Host with multiple listing for the entire home distribution is skewed to availability the entire year
# implying that these hosts are renting the AirBnB as short term sublets (or hotels)
entire_home = df[df.room_type == 'Entire home/apt']
plt.figure(figsize=(14,6))
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count > 1].availability_365, label='Multiple Listings')
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count == 1].availability_365, label = 'Single Listing')
plt.legend();
# Host with multiple listing for the entire home distribution is skewed to availability the entire year
# implying that these hosts are renting the AirBnB as short term sublets (or hotels)
plt.figure(figsize=(14,6))
sns.kdeplot(df[df.minimum_nights > 29].availability_365, label='Short term Sublet')
sns.kdeplot(df[df.minimum_nights <= 20].availability_365, label = 'Listing')
plt.legend();
# Host with multiple listing for the entire home distribution is skewed to availability the entire year
# implying that these hosts are renting the AirBnB as short term sublets (or hotels)
entire_home = df[df.minimum_nights > 29]
plt.figure(figsize=(14,6))
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count > 1].availability_365, label='Multiple Listings')
sns.kdeplot(entire_home[entire_home.calculated_host_listings_count == 1].availability_365, label = 'Single Listing')
plt.legend();
###Output
_____no_output_____
###Markdown
Extra! Advanced Plots with Seaborn Make a violin plot of the price distribution of each neighborhood.> If your city has a large number of neighborhoods plot the 10 with the most listing.
###Code
# just a touch hard to interpret...
plt.figure(figsize=(16, 6))
sns.violinplot(data=df, x='neighbourhood', y='price')
# boxplots can sometimes handle outliers better, we can see here there are some listings that are high priced extrema
plt.figure(figsize=(16, 6))
sns.boxplot(data=df, x='neighbourhood', y='price')
###Output
_____no_output_____
###Markdown
Let's show only the 10 neighborhoods with the most listings and zoom in on the distribution of the lower prices; now that we can identify the outliers, we can remove listings priced at > $2000
###Code
top_neighborhoods = df.groupby('neighbourhood').count().sort_values('id', ascending = False).index[:10]
top_neighborhoods
neighborhood_subset = df[df.neighbourhood.isin(top_neighborhoods)]
plt.figure(figsize=(16, 6))
sns.boxplot(data=neighborhood_subset[neighborhood_subset.price < 2000], x='neighbourhood', y='price')
plt.figure(figsize=(16, 6))
sns.violinplot(data=neighborhood_subset[neighborhood_subset.price < 2000], x='neighbourhood', y='price')
###Output
_____no_output_____
###Markdown
Exploratory data analysis
###Code
# import data
data = pd.read_csv("sanger1018_brainarray_ensemblgene_rma.txt", sep='\t')
cellline = pd.read_excel("Cell_Lines_Details.xlsx")
dose = pd.read_excel("v17.3_fitted_dose_response.xlsx")
data.head()
cellline.head()
dose.head()
# check distribution of features(genes)
plt.hist(data.iloc[7].tolist()[1:],100)
plt.show()
data.describe()
# check the overall distribution of all IC50 over all drugs and all cell lines
#plt.hist(np.exp(dose.LN_IC50)[np.exp(dose.LN_IC50)<250], 200, normed=1, facecolor='g', alpha=0.75)
plt.hist(dose.LN_IC50, 100)
plt.show()
print(dose.LN_IC50.quantile([.25, .5, .75]), np.mean(dose.LN_IC50), np.median(dose.LN_IC50))
## how many cell lines each drug was tested on
plt.hist(dose.DRUG_ID.value_counts(),100)
plt.show()
dose.DRUG_ID.value_counts()[:11]
## Check the name of the high count drugs
drug_ids = dose.DRUG_ID.value_counts().index.tolist()
drug_counts = dose.DRUG_ID.value_counts()
drug_counts[drug_ids[0]]
dose.loc[dose['DRUG_ID'] == drug_ids[0]]['DRUG_NAME'].tolist()[0]
print(drug_ids[0], dose.loc[dose['DRUG_ID'] == drug_ids[0]]['DRUG_NAME'].tolist()[0])
## Check the correlation between variables
## high correlation variables can be found at /results/correlated_genes.txt
id1 = 0
id2 = 48
plt.scatter(data.loc[id1,:].tolist()[1:], data.loc[id2,:].tolist()[1:])
plt.show()
print(pearsonr(data.loc[id1,:].tolist()[1:], data.loc[id2,:].tolist()[1:])[0])
###Output
_____no_output_____
###Markdown
Test some different models using 5-fold cross validation (on training data)
###Code
## one drug at a time
drug_id = 211
onedrug_dose = dose.loc[dose.DRUG_ID == drug_id]
plt.hist(onedrug_dose.LN_IC50, 200)
plt.show()
# one drug at a time
# select all cell lines that were tested on the drug
# select and sort rnaseq data by cell line order
onedrug_dose = dose.loc[dose.DRUG_ID == drug_id]
onedrug_ind = [str(x) for x in set(onedrug_dose.COSMIC_ID) if str(x) in data.columns and x in cellline['COSMIC identifier'].tolist()]
onedrug_cellline = cellline[cellline['COSMIC identifier'].isin(onedrug_ind)]
onedrug_data = data[['ensembl_gene'] + [i for i in onedrug_cellline['COSMIC identifier'].astype(str).tolist()]]
onedrug_dose = onedrug_dose[onedrug_dose['COSMIC_ID'].isin(onedrug_ind)]
onedrug_dose['sort'] = pd.Categorical(
onedrug_dose['COSMIC_ID'].astype(str).tolist(),
categories=onedrug_data.columns.tolist(),
ordered=True
)
onedrug_dose = onedrug_dose.sort_values('sort')
#onedrug_dose = onedrug_dose.set_index('COSMIC_ID')
#onedrug_dose = onedrug_dose.loc[[i for i in onedrug_cellline['COSMIC identifier'].astype(str).tolist()]]
#plt.hist(onedrug_dose.LN_IC50, 200)
#plt.show()
#onedrug_data = data[data.columns.intersection(onedrug_ind)]
#onedrug_cellline = cellline[cellline.columns.intersection(onedrug_ind)]
print(len(onedrug_ind))
print(onedrug_cellline.shape)
print(onedrug_data.shape)
print(onedrug_dose.shape)
onedrug_data
onedrug_cellline
onedrug_dose
# stratify the data based on cancer type (TCGA label) and screen medium
temp = onedrug_cellline['Cancer Type\n(matching TCGA label)'].astype(str) + onedrug_cellline['Screen Medium'].astype(str)
stratified_category = temp.replace(temp.value_counts().index[temp.value_counts() == 1], ['one'] * np.sum(temp.value_counts() == 1))
X = onedrug_data.drop(['ensembl_gene'], axis=1).T
y = np.array(onedrug_dose['LN_IC50'].tolist())
skf = StratifiedKFold(n_splits=5)
from sklearn.cross_decomposition import PLSRegression
for train_index, test_index in skf.split(X, stratified_category):
X_train, X_test = X.iloc[train_index , : ], X.iloc[test_index , : ]
y_train, y_test = y[train_index], y[test_index]
model = PLSRegression(n_components=10)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(model.__class__.__name__, mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
## 5-fold cross validation for different regression models
for train_index, test_index in skf.split(X, stratified_category):
X_train, X_test = X.iloc[train_index , : ], X.iloc[test_index , : ]
y_train, y_test = y[train_index], y[test_index]
#print('Train:', y_train.value_counts())
#print('Test', y_test.value_counts())
model = RandomForestRegressor(max_depth=5, random_state=0, n_estimators=100)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('RF:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
ind = np.argsort(model.feature_importances_)[-50:]
X_train_subset = X_train.iloc[:, ind]
X_test_subset = X_test.iloc[:, ind]
model.fit(X_train_subset, y_train)
y_pred = model.predict(X_test_subset)
print('RF_50:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model = linear_model.Lasso(alpha=0.1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('Lasso:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model.fit(X_train_subset, y_train)
y_pred = model.predict(X_test_subset)
print('Lasso_50:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model = KNeighborsRegressor(n_neighbors=5)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('KNN:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model.fit(X_train_subset, y_train)
y_pred = model.predict(X_test_subset)
print('KNN_50:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model = AdaBoostRegressor(random_state=0, n_estimators=100)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('AdaBoost:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model.fit(X_train_subset, y_train)
y_pred = model.predict(X_test_subset)
print('AdaBoost_50:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('LM:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
model.fit(X_train_subset, y_train)
y_pred = model.predict(X_test_subset)
print('LM_50:', mean_squared_error(y_test, y_pred), r2_score(y_test, y_pred) )
print('BL:', np.sum((y_test - np.median(y_test)) ** 2) / y_test.shape[0], '\n')
###Output
_____no_output_____
###Markdown
Save model information
###Code
# save example as patient info for the web app
import csv
x = data[['ensembl_gene', '687807']]
x.to_csv('patient3.csv', encoding='utf-8', index=False)
# read in the saved file to verify
df = pd.read_csv('patient3.csv', header=None, index_col=0)
df.loc[['ENSG00000000005', 'ENSG00000000430']]
# save the most important features for model to select
from os import listdir
from os.path import isfile, join
import json
# read in the saved single model configuration and concatenate them together
confs = {}
model_dir = '/Users/YaoSen/Desktop/insight/conf/'
model_paths = [join(model_dir, f) for f in listdir(model_dir) if isfile(join(model_dir, f))]
for i in model_paths:
with open(i) as data_file:
js = json.load(data_file)
confs = {**confs, **js}
# save the configurations as one file
with open('/Users/YaoSen/Desktop/insight_project/dash/conf/parms.json', 'w') as outfile:
json.dump(confs, outfile)
# read the one configuration file in to verify
with open('/Users/YaoSen/Desktop/insight_project/dash/conf/parms.json') as infile:
params = json.load(infile)
# save the model to disk
import pickle
filename = 'finalized_model.sav'
pickle.dump(model, open(filename, 'wb'))
# read the model back in to verify
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn; use the standalone joblib package
model = joblib.load('finalized_model.sav')
###Output
_____no_output_____
###Markdown
Validate different models on test data
###Code
drug_id = 261
onedrug_name = dose.loc[dose['DRUG_ID'] == drug_id]['DRUG_NAME'].tolist()[0]
onedrug_dose = dose.loc[dose.DRUG_ID == drug_id]
# select all cell lines that were tested on the drug
# select and sort rnaseq data by cell line order
onedrug_dose = dose.loc[dose.DRUG_ID == drug_id]
onedrug_ind = [str(x) for x in set(onedrug_dose.COSMIC_ID) if str(x) in data.columns and x in cellline['COSMIC identifier'].tolist()]
onedrug_cellline = cellline[cellline['COSMIC identifier'].isin(onedrug_ind)]
onedrug_data = data[['ensembl_gene'] + [i for i in onedrug_cellline['COSMIC identifier'].astype(str).tolist()]]
onedrug_dose = onedrug_dose[onedrug_dose['COSMIC_ID'].isin(onedrug_ind)]
onedrug_dose['sort'] = pd.Categorical(
onedrug_dose['COSMIC_ID'].astype(str).tolist(),
categories=onedrug_data.columns.tolist(),
ordered=True
)
onedrug_dose = onedrug_dose.sort_values('sort')
temp = onedrug_cellline['Cancer Type\n(matching TCGA label)'].astype(str) + onedrug_cellline['Screen Medium'].astype(str)
stratified_category = temp.replace(temp.value_counts().index[temp.value_counts() == 1], ['nanR'] * np.sum(temp.value_counts() == 1))
## First random forest
X = onedrug_data.drop(['ensembl_gene'], axis=1).T
y = np.array(onedrug_dose['LN_IC50'].tolist())
## 20/80 train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1, stratify=stratified_category) # test size 20%
model = RandomForestRegressor(max_depth=2, random_state=0, n_estimators=100)
model.fit(X_train, y_train)
## save important genes
importance_idx = np.argsort(model.feature_importances_)
important_genes = {onedrug_name: onedrug_data['ensembl_gene'][importance_idx[-50:]].tolist()}
#filepath = './conf/'+ onedrug_name + '.json'
#with open(filepath, 'w') as outfile:
# json.dump(important_genes, outfile)
## second random forest
X_train_subset = X_train.iloc[:, importance_idx[-100:]]
X_test_subset = X_test.iloc[:, importance_idx[-100:]]
model = RandomForestRegressor(max_depth=5, random_state=0, n_estimators=200)
model.fit(X_train_subset, y_train)
y_pred = model.predict(X_test_subset)
plt.hist(y_pred,100)
plt.show()
plt.hist(y_test,100)
plt.show()
plt.scatter(y_test, y_pred)
plt.show()
## same figures as above but in one plot
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
spacing = 0.005
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom + height + spacing, width, 0.2]
rect_histy = [left + width + spacing, bottom, 0.2, height]
# start with a rectangular Figure
plt.figure(figsize=(8, 8))
ax_scatter = plt.axes(rect_scatter)
ax_scatter.tick_params(direction='in', top=True, right=True, labelsize=15)
ax_histx = plt.axes(rect_histx)
ax_histx.tick_params(direction='in', labelbottom=False, labelsize=15)
ax_histy = plt.axes(rect_histy)
ax_histy.tick_params(direction='in', labelleft=False, labelsize=15)
# the scatter plot:
ax_scatter.scatter(y_test, y_pred)
# now determine nice limits by hand:
binwidth = 0.25
lim = np.ceil(np.abs([y_test, y_pred]).max() / binwidth) * binwidth
ax_scatter.set_xlim((-lim, lim))
ax_scatter.set_ylim((-lim, lim))
bins = np.arange(-lim, lim + binwidth, binwidth)
ax_histx.hist(y_test, bins=bins)
ax_histy.hist(y_pred, bins=bins, orientation='horizontal')
ax_histx.set_xlim(ax_scatter.get_xlim())
ax_histy.set_ylim(ax_scatter.get_ylim())
plt.show()
sns.kdeplot(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Cloud Segmentation EDA
###Code
# load libraries
import utils
import pandas as pd
import matplotlib.pyplot as plt
import cv2
from os import listdir
from os.path import join
import numpy as np
from PIL import Image
train_labels = pd.read_csv(utils.TRAIN_LABELS)
image_list = sorted(listdir(utils.TRAIN_IMAGES))
len(train_labels) / 4
###Output
_____no_output_____
###Markdown
The training labels consist of n * 4 labels, where n is the number of training images
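A quick check of that claim (a sketch, relying on the `<image>_<label>` naming of `Image_Label` that the cells below also assume):

```python
# Each image should contribute exactly 4 rows, one per cloud class.
image_ids = train_labels['Image_Label'].str.split('_').str[0]
print(image_ids.nunique())                      # number of distinct training images
print(len(train_labels) / image_ids.nunique())  # should be exactly 4.0
print(image_ids.value_counts().unique())        # [4] if every image has 4 label rows
```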
###Code
label_first = train_labels[:utils.N_CLASSES*4 - 1]
label_first
image_first = cv2.imread(join(utils.TRAIN_IMAGES, image_list[0]))
image_first = cv2.cvtColor(image_first, cv2.COLOR_BGR2RGB)
plt.imshow(image_first)
plt.show()
image_shape = image_first.shape
print("Image shape:", image_shape)
###Output
Image shape: (1400, 2100, 3)
###Markdown
Segmentation samples
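The masks come from run-length-encoded strings. `utils.rle2mask` (used below) is not shown in this notebook; a typical decoder for the usual Kaggle RLE convention (1-indexed starts, column-major pixel order) looks roughly like the sketch below; treat it as an assumed reference, not necessarily identical to the project's `utils`:

```python
import numpy as np

def rle_to_mask(rle: str, shape=(1400, 2100)) -> np.ndarray:
    """Decode a run-length-encoded string into a binary (height, width) mask."""
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    if isinstance(rle, str) and rle.strip():         # NaN / empty string means "no mask"
        nums = list(map(int, rle.split()))
        starts, lengths = nums[0::2], nums[1::2]
        for start, length in zip(starts, lengths):
            mask[start - 1:start - 1 + length] = 1   # RLE positions are 1-indexed
    return mask.reshape(shape, order='F')            # column-major ('Fortran') order
```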
###Code
train_labels['Image_Label'] = train_labels['Image_Label'].apply(lambda x: x.split('_')[0])
train_labels
masks = utils.make_masks(train_labels, image_list[0], image_shape)
print(masks.shape)
train_labels
plt.imshow(image_first)
plt.imshow(masks[:,:,0], alpha=.5, cmap='gray')
plt.show()
plt.imshow(image_first)
plt.imshow(masks[:,:,1], alpha=.5, cmap='gray')
plt.show()
sub = pd.read_csv("submission.csv")
sub
test_image = cv2.imread(join(utils.TEST_IMAGES, "a7a97bb.jpg"))
test_image = cv2.cvtColor(image_first, cv2.COLOR_BGR2RGB)
test_image = cv2.resize(image_first, (525, 350))
plt.imshow(test_image)
plt.imshow(utils.rle2mask(sub['EncodedPixels'][14789], (350, 525)), cmap='gray', alpha=.5)
plt.show()
count = 0
for r in sub['EncodedPixels']:
if isinstance(r, str):
count += 1
count -14792
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis When placed in a Metapack data package, this notebook will load the package and run a variety of common EDA operations on the first resource.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import metapack as mp
import pandas as pd
import numpy as np
from IPython.display import display
%matplotlib inline
sns.set_context('notebook')
pkg = mp.jupyter.open_package()
# For testing and development
#pkg = mp.open_package('http://s3.amazonaws.com/library.metatab.org/cde.ca.gov-accountability_dashboard-2.zip')
pkg
resource_name = next(iter(pkg.resources())).name
resource_name
pkg.resource(resource_name)
df = pkg.resource(resource_name).read_csv(parse_dates=True)
df.head()
empty_col_names = [cn for cn in df.columns if df[cn].nunique() == 0]
const_col_names= [cn for cn in df.columns if df[cn].nunique() == 1]
ignore_cols = empty_col_names+const_col_names
dt_col_names= list(df.select_dtypes(include=[np.datetime64]).columns)
number_col_names = [ cn for cn in df.select_dtypes(include=[np.number]).columns if cn not in ignore_cols ]
other_col_names = [cn for cn in df.columns if cn not in (empty_col_names+const_col_names+dt_col_names+number_col_names)]
pd.DataFrame.from_dict({'empty':[len(empty_col_names)],
'const':[len(const_col_names)],
'datetime':[len(dt_col_names)],
'number':[len(number_col_names)],
'other':[len(other_col_names)],
},
orient='index', columns=['count'])
###Output
_____no_output_____
###Markdown
Constant Columns
###Code
if const_col_names:
display(df[const_col_names].drop_duplicates().T)
###Output
_____no_output_____
###Markdown
Empty Columns
###Code
if empty_col_names:
display(df[empty_col_names].drop_duplicates().T)
###Output
_____no_output_____
###Markdown
Date and Time Columns
###Code
if dt_col_names:
display(df[dt_col_names].info())
display(df[dt_col_names].describe().T)
###Output
_____no_output_____
###Markdown
Number Columns
###Code
if number_col_names:
display(df[number_col_names].info())
display(df[number_col_names].describe().T)
###Output
_____no_output_____
###Markdown
Distributions
###Code
def plot_histograms(df):
col_names = list(df.columns)
n_cols = np.ceil(np.sqrt(len(col_names)))
n_rows = np.ceil(np.sqrt(len(col_names)))
#plt.figure(figsize=(3*n_cols,3*n_rows))
fig, ax = plt.subplots(figsize=(3*n_cols,3*n_rows))
for i in range(0,len(col_names)):
plt.subplot(n_rows + 1,n_cols,i+1)
try:
g = sns.distplot(df[col_names[i]].dropna(),kde=True)
g.set(xticklabels=[])
g.set(yticklabels=[])
except:
pass
plt.tight_layout()
plot_histograms(df[number_col_names])
###Output
_____no_output_____
###Markdown
Box Plots
###Code
def plot_boxes(df):
col_names = list(df.columns)
n_cols = np.ceil(np.sqrt(len(col_names)))
n_rows = np.ceil(np.sqrt(len(col_names)))
#plt.figure(figsize=(2*n_cols,3*n_rows))
fig, ax = plt.subplots(figsize=(2*n_cols,5*n_rows))
for i in range(0,len(col_names)):
plt.subplot(n_rows + 1,n_cols,i+1)
try:
g = sns.boxplot(df[col_names[i]].dropna(),orient='v')
except:
pass
plt.tight_layout()
plot_boxes(df[number_col_names])
## Correlations
cm = df[number_col_names].corr()
mask = np.zeros_like(cm, dtype=bool)  # np.bool was removed in newer NumPy; the builtin bool behaves the same here
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(.5*len(number_col_names),.5*len(number_col_names)))
sns.heatmap(cm, mask=mask, cmap = 'viridis')
###Output
_____no_output_____
###Markdown
Other Columns
###Code
if other_col_names:
display(df[other_col_names].info())
display(df[other_col_names].describe().T)
###Output
_____no_output_____
###Markdown
Nulls
###Code
cols = dt_col_names + number_col_names + other_col_names
fig, ax = plt.subplots(figsize=(15,.5*len(cols)))
sns.heatmap(df[cols].isnull().T,cbar=False,xticklabels=False,cmap = 'viridis', ax=ax )
###Output
_____no_output_____
###Markdown
Read data
###Code
all_files = glob.glob(raw_data_path + "/top_songs_with_lyrics.csv")
raw_data = pd.concat(pd.read_csv(f) for f in all_files)
raw_data.head()
###Output
_____no_output_____
###Markdown
Pre processing (EDA)
###Code
# TODO: Apply pre-processing steps if any are warranted by the EDA findings
###Output
_____no_output_____
###Markdown
¿Question 1?
###Code
# TODO: Get graph
###Output
_____no_output_____
###Markdown
¿Question 2?
###Code
# TODO: Get graph
###Output
_____no_output_____
###Markdown
need to add sell prices from the 'future'
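For context on the comparison below: the `wpl` column produced by `evaluate` is presumably a weighted pinball (quantile) loss. Independent of the project's own helpers, the unweighted per-quantile pinball loss is simply:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Mean pinball (quantile) loss of predictions y_pred at quantile level q."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# e.g. pinball_loss(actual_sales, forecast_q90, q=0.9) -- illustrative names only
```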
###Code
s = Series(sales_df, 'sales_count')
eqmodel, eqmodel_fit, = s.model_eq()
eq_preds = eqmodel.predict_wide(s.tseries)
eq_eval = evaluate(eq_preds, s.tseries)
eq_score = eq_eval['wpl'].mean()
model, forecast, pred_quantiles = fit(s)
proph_eval = evaluate(pred_quantiles, s.tseries)
proph_score = proph_eval['wpl'].mean()
print(f'Prophet model WPL: {proph_score}')
print(f'EQ model WPL: {eq_score}')
pal = sns.color_palette('deep')
nice = pal.as_hex()
reds = ['#FFA07A', '#FA8072', '#E9967A', '#F08080', '#CD5C5C', '#DC143C', '#B22222', '#FF0000', '#8B0000', '#800000', '#FF6347', '#FF4500']
fig, ax = plt.subplots(figsize = (12,10))
model.plot(forecast, ax = ax)
for q, v in eqmodel_fit.iteritems():
if v > 0:
ax.axhline(eqmodel_fit[q], label = q, color = reds.pop())
ax.legend(loc = 'upper right')
###Output
_____no_output_____
###Markdown
SQL Practice Exploratory Data Analysis
###Code
#configure jupyter notebook to run SQL commands
%load_ext sql
#connect to database
#database is a sqlite database file stored locally,
#it is a open source db, refer to README for download location
%sql sqlite:////Users/admin/personal_projs/sql_practice/data/Chinook_Sqlite.sqlite
%sql SELECT * FROM Track LIMIT 5;
###Output
* sqlite:////Users/admin/personal_projs/sql_practice/data/Chinook_Sqlite.sqlite
Done.
###Markdown
--- Practice Q's from https://www.chegg.com/homework-help/questions-and-answers/question-1-using-chinook-database-write-sql-select-queries-answer-following-questions-need-q29407465 **1. What is the title of the album with AlbumId 31?**
###Code
%%sql result <<
SELECT Title
FROM Album
Where AlbumId = 31;
result
###Output
_____no_output_____
###Markdown
**2. List all the albums by artists with the word ‘black’ in their name.**
###Code
%%sql result <<
SELECT Album.Title
FROM Album
JOIN Artist
ON Album.ArtistId = Artist.ArtistId
WHERE Artist.Name LIKE "%black%";
result
###Output
_____no_output_____
###Markdown
**3. Find the name and length (in seconds) of all tracks that have both length between 30 and 40 seconds, and genre Latin.**
###Code
%%sql result <<
SELECT Track.Name, Milliseconds/1000 AS seconds
FROM Track
JOIN Genre
ON Track.GenreId = Genre.GenreId
WHERE seconds BETWEEN 30 AND 40
AND
Genre.Name = "Latin";
result
###Output
_____no_output_____
###Markdown
**4. Produce a table that lists each country and the number of customers in that country. (You only need to include countries that have customers.)**
###Code
%%sql result <<
SELECT COUNT(*) AS total_customers, Country
FROM Customer
GROUP BY Country
HAVING total_customers > 0;
result
###Output
_____no_output_____
###Markdown
**5. Find the top five customers in terms of sales i.e. find the five customers whose total combined invoice amounts are the highest.**
###Code
%%sql result <<
-- aggregate each customer's invoices so we rank by total combined sales,
-- not by the amount of a single invoice
SELECT FirstName || ' ' || LastName AS customer,
       SUM(Invoice.Total) AS total_sales
FROM Customer
JOIN Invoice
ON Customer.CustomerId = Invoice.CustomerId
GROUP BY Customer.CustomerId
ORDER BY total_sales DESC
LIMIT 5;
result
###Output
_____no_output_____
###Markdown
**6. For each genre of music, determine how many customers have bought at least one track from that genre.**
###Code
#group by genre, customerid
#make sure sum of quantity > 1
#subquery
%%sql result <<
SELECT COUNT(DISTINCT c.CustomerId) AS total_customers, -- DISTINCT so each customer is counted at most once per genre
Genre.Name
FROM Customer c
JOIN Invoice
ON c.CustomerId = Invoice.CustomerId
JOIN InvoiceLine
ON Invoice.InvoiceId = InvoiceLine.InvoiceId
JOIN Track
ON InvoiceLine.TrackId = Track.TrackId
JOIN Genre
ON Track.GenreId = Genre.GenreId
GROUP BY Genre.Name
HAVING SUM(InvoiceLine.Quantity) >= 1;
result
###Output
_____no_output_____
###Markdown
**More EDA** **Display each employee's full name and the full name of the person they report to.**
###Code
%%sql result <<
SELECT a.FirstName || ' ' || a.LastName AS employee,
b.FirstName || ' ' || b.LastName AS supervisor
FROM Employee a
JOIN Employee b
ON a.ReportsTo = b.EmployeeId
result
###Output
_____no_output_____
###Markdown
Notes:
###Code
#df.columns
#df.info()
#df.describe()
#if(df2.empty): # check if df2 is empty
#d1.iloc[:,0]*=100 # multiply 0 column by 100
###Output
_____no_output_____
###Markdown
Moduls:
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Functions:
###Code
test = pd.read_csv('./data/test.csv')
test.columns
type(test['Fare'])
test.columns
train = pd.read_csv('./data/train.csv')
test = pd.read_csv('./data/test.csv')
def missingDataSummary(df1):
nevents=len(df1.index)
import numpy as np
import statsmodels.stats.proportion as ssp
CP = lambda num,denum : list(ssp.proportion_confint(num,denum, alpha=0.05,method='beta')) #Clopper-Pearson interval based on Beta distribution
s1=df1.isna().apply(np.sum,axis=1).value_counts() #pandas.Series
s1=s1.sort_index()
d1=s1.to_frame(name='miss rate, counts')#convert pandas.Series to pandas DataFrame
d1['miss rate, %']=d1.apply(lambda x : x[0]/nevents*100 ,axis=1)
d1['tmp']=d1.apply(lambda x : CP(x[0],nevents) ,axis=1)
d1[['err. low, %','err. up, %']] = pd.DataFrame(d1.tmp.values.tolist())*100
del d1['tmp']
d1=d1.round(decimals=1)
print(d1)
return;
missingDataSummary(df1=test)
missingDataSummary(df1=train)
#missingDataSummary(df1=test,df2=test)
import statsmodels.stats.proportion as ssp
print (870./4180)
print(( ssp.proportion_confint(870,4180, alpha=0.05,method='beta')))
test[1:2]
###Output
_____no_output_____
###Markdown
Options:
###Code
#pd.options.mode.use_inf_as_na = True
###Output
_____no_output_____
###Markdown
Load data
###Code
train = pd.read_csv('./data/train.csv')
test = pd.read_csv('./data/test.csv')
test2=pd.read_csv("./data/test.csv")
titanic=pd.concat([train, test], sort=False)
len_train=train.shape[0]
len_train, test.shape[0]
titanic.select_dtypes(include='int').head();
titanic.select_dtypes(include='object').head();
titanic.select_dtypes(include='float').head();
###Output
_____no_output_____
###Markdown
Missing data analysis 1. How many values are empty, and of which type?
###Code
test
titanic.isnull().sum()[titanic.isnull().sum()>0]
titanic.dtypes.sort_values()[titanic.isnull().sum()>0]
###Output
_____no_output_____
###Markdown
2. Fill empty data Cabin
###Code
train.Cabin=train.Cabin.fillna("unknow")
test.Cabin=test.Cabin.fillna("unknow")
###Output
_____no_output_____
###Markdown
Read Data Sample
###Code
import pandas as pd
import numpy as np
pd.set_option("display.max_rows",15)
%matplotlib inline
class dataset:
col_names = ["duration","protocol_type","service","flag","src_bytes",
"dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
"logged_in","num_compromised","root_shell","su_attempted","num_root",
"num_file_creations","num_shells","num_access_files","num_outbound_cmds",
"is_host_login","is_guest_login","count","srv_count","serror_rate",
"srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate","label", "difficulty_level"]
kdd_train = pd.read_csv("dataset/KDDTrain+.txt",names = col_names,)
kdd_test = pd.read_csv("dataset/KDDTest+.txt",names = col_names,)
kdd_train_ = pd.read_csv("dataset/KDDTrain+_20Percent.txt",names = col_names,)
kdd_test_ = pd.read_csv("dataset/KDDTest-21.txt",names = col_names,)
kdd_diff_level_train = kdd_train["difficulty_level"].copy()
kdd_diff_level_test = kdd_test["difficulty_level"].copy()
kdd_train = kdd_train.drop("difficulty_level", axis = 1)
kdd_test = kdd_test.drop("difficulty_level", axis = 1)
kdd_train_ = kdd_train_.drop("difficulty_level", axis = 1) #labels ['difficulty_level'] not contained in axis
kdd_test_ = kdd_test_.drop("difficulty_level", axis = 1)
kdd_train.to_csv("dataset/KDDTrain+.csv")
kdd_test.to_csv("dataset/KDDTest+.csv")
kdd_train_.to_csv("dataset/KDDTrain_.csv")
kdd_test_.to_csv("dataset/KDDTest_.csv")
category_variables = ["protocol_type","service","flag"]
for cv in category_variables:
dataset.kdd_train[cv] = dataset.kdd_train[cv].astype("category")
    # astype("category", categories=...) was removed in newer pandas; build a CategoricalDtype instead
    train_categories = pd.CategoricalDtype(categories=dataset.kdd_train[cv].cat.categories)
    dataset.kdd_test[cv] = dataset.kdd_test[cv].astype(train_categories)
    dataset.kdd_train_[cv] = dataset.kdd_train_[cv].astype(train_categories)
    dataset.kdd_test_[cv] = dataset.kdd_test_[cv].astype(train_categories)
print("Length of Categories for {} are {}".format(cv , len(dataset.kdd_train[cv].cat.categories)))
print("Categories for {} are {} \n".format(cv ,dataset.kdd_train[cv].cat.categories))
dataset.kdd_train
dataset.kdd_test
dataset.kdd_train.describe()
###Output
_____no_output_____
###Markdown
Zero Data Points
###Code
a = dataset.kdd_train.isin([0])
a.sum().sum() / a.size
dataset.kdd_test.describe()
print("Column - Label")
print("Unique values: \n{}".format(dataset.kdd_train.label))
print("\nStatistical properties: \n{}".format(dataset.kdd_train.label.describe()))
attack_types = {
'normal': 'normal',
'back': 'DoS',
'land': 'DoS',
'neptune': 'DoS',
'pod': 'DoS',
'smurf': 'DoS',
'teardrop': 'DoS',
'mailbomb': 'DoS',
'apache2': 'DoS',
'processtable': 'DoS',
'udpstorm': 'DoS',
'ipsweep': 'Probe',
'nmap': 'Probe',
'portsweep': 'Probe',
'satan': 'Probe',
'mscan': 'Probe',
'saint': 'Probe',
'ftp_write': 'R2L',
'guess_passwd': 'R2L',
'imap': 'R2L',
'multihop': 'R2L',
'phf': 'R2L',
'spy': 'R2L',
'warezclient': 'R2L',
'warezmaster': 'R2L',
'sendmail': 'R2L',
'named': 'R2L',
'snmpgetattack': 'R2L',
'snmpguess': 'R2L',
'xlock': 'R2L',
'xsnoop': 'R2L',
'worm': 'R2L',
'buffer_overflow': 'U2R',
'loadmodule': 'U2R',
'perl': 'U2R',
'rootkit': 'U2R',
'httptunnel': 'U2R',
'ps': 'U2R',
'sqlattack': 'U2R',
'xterm': 'U2R'
}
is_attack = {
"DoS":"Attack",
"R2L":"Attack",
"U2R":"Attack",
"Probe":"Attack",
"normal":"Normal"
}
dataset.kdd_train["type"] = dataset.kdd_train.label.map(lambda x: attack_types[x])
dataset.kdd_train["is"] = dataset.kdd_train.type.map(lambda x: is_attack[x])
dataset.kdd_test["type"] = dataset.kdd_test.label.map(lambda x: attack_types[x])
dataset.kdd_test["is"] = dataset.kdd_test.type.map(lambda x: is_attack[x])
dataset.kdd_train_["type"] = dataset.kdd_train_.label.map(lambda x: attack_types[x])
dataset.kdd_train_["is"] = dataset.kdd_train_.type.map(lambda x: is_attack[x])
dataset.kdd_test_["type"] = dataset.kdd_test_.label.map(lambda x: attack_types[x])
dataset.kdd_test_["is"] = dataset.kdd_test_.type.map(lambda x: is_attack[x])
a = dataset.kdd_train.set_index("is")
print(a.loc["Normal"].isin([0]).sum().sum())
print(a.loc["Normal"].size)
a.loc["Normal"].isin([0]).sum().sum() / a.loc["Normal"].size
a = dataset.kdd_train.set_index("is")
print(a.loc["Attack"].isin([0]).sum().sum())
print(a.loc["Attack"].size)
a.loc["Attack"].isin([0]).sum().sum() / a.loc["Attack"].size
1804888 / (1804888 + 1538253)
kdd_attack_type_group = dataset.kdd_train.groupby("type")
kdd_is_attack_group = dataset.kdd_train.groupby("is")
kdd_attack_type_group.type.count()
kdd_is_attack_group["is"].count()
kdd_attack_type_group
df = dataset.kdd_train.set_index("is")
df.loc["Attack"].label.unique()
df.loc["Normal"].label.unique()
#kdd_is_attack_group.hist(figsize=[25,22])
#kdd_attack_type_group.hist(figsize=[25,22])
gb = dataset.kdd_diff_level_train.groupby(dataset.kdd_diff_level_train)
(gb.count() / dataset.kdd_diff_level_train.count())*100
gb = dataset.kdd_diff_level_test.groupby(dataset.kdd_diff_level_test)
(gb.count() / dataset.kdd_diff_level_test.count())*100
dummy_variables_2labels = [*category_variables, "is"]
dummy_variables_5labels = [*category_variables, "type"]
attack_codes_2labels = {"Attack":1, "Normal":0}
attack_codes_5labels = {'DoS':1, 'normal':0, 'Probe':2, 'R2L':3, 'U2R':4}
class preprocessing:
kdd_train_2labels = pd.get_dummies(dataset.kdd_train, columns = dummy_variables_2labels, prefix=dummy_variables_2labels)
kdd_train_5labels = pd.get_dummies(dataset.kdd_train, columns = dummy_variables_5labels, prefix=dummy_variables_5labels)
kdd_test_2labels = pd.get_dummies(dataset.kdd_test, columns = dummy_variables_2labels, prefix=dummy_variables_2labels)
kdd_test_5labels = pd.get_dummies(dataset.kdd_test, columns = dummy_variables_5labels, prefix=dummy_variables_5labels)
kdd_train__2labels = pd.get_dummies(dataset.kdd_train_, columns = dummy_variables_2labels, prefix=dummy_variables_2labels)
kdd_train__5labels = pd.get_dummies(dataset.kdd_train_, columns = dummy_variables_5labels, prefix=dummy_variables_5labels)
kdd_test__2labels = pd.get_dummies(dataset.kdd_test_, columns = dummy_variables_2labels, prefix=dummy_variables_2labels)
kdd_test__5labels = pd.get_dummies(dataset.kdd_test_, columns = dummy_variables_5labels, prefix=dummy_variables_5labels)
kdd_train_2labels_y = dataset.kdd_train["is"].copy() # For SVM
kdd_train_5labels_y = dataset.kdd_train["type"].copy() # For SVM
kdd_test_2labels_y = dataset.kdd_test["is"].copy() # For SVM
kdd_test_5labels_y = dataset.kdd_test["type"].copy() # For SVM
kdd_train__2labels_y = dataset.kdd_train_["is"].copy() # For SVM
kdd_train__5labels_y = dataset.kdd_train_["type"].copy() # For SVM
kdd_test__2labels_y = dataset.kdd_test_["is"].copy() # For SVM
kdd_test__5labels_y = dataset.kdd_test_["type"].copy() # For SVM
kdd_train_2labels.drop(["label", "type"], axis=1, inplace=True)
kdd_test_2labels.drop(["label", "type"], axis=1, inplace=True)
kdd_train__2labels.drop(["label", "type"], axis=1, inplace=True)
kdd_test__2labels.drop(["label", "type"], axis=1, inplace=True)
kdd_train_5labels.drop(["label", "is"], axis=1, inplace=True)
kdd_test_5labels.drop(["label", "is"], axis=1, inplace=True)
kdd_train__5labels.drop(["label", "is"], axis=1, inplace=True)
kdd_test__5labels.drop(["label", "is"], axis=1, inplace=True)
kdd_train_2labels_y = kdd_train_2labels_y.map(lambda x: attack_codes_2labels[x])
kdd_test_2labels_y = kdd_test_2labels_y.map(lambda x: attack_codes_2labels[x])
kdd_train__2labels_y = kdd_train__2labels_y.map(lambda x: attack_codes_2labels[x])
kdd_test__2labels_y = kdd_test__2labels_y.map(lambda x: attack_codes_2labels[x])
kdd_train_5labels_y = kdd_train_5labels_y.map(lambda x: attack_codes_5labels[x])
kdd_test_5labels_y = kdd_test_5labels_y.map(lambda x: attack_codes_5labels[x])
kdd_train__5labels_y = kdd_train__5labels_y.map(lambda x: attack_codes_5labels[x])
kdd_test__5labels_y = kdd_test__5labels_y.map(lambda x: attack_codes_5labels[x])
preprocessing.kdd_train_2labels.columns.to_series().to_csv("dataset/columns_2labels.csv")
preprocessing.kdd_train_5labels.columns.to_series().to_csv("dataset/columns_5labels.csv")
preprocessing.kdd_train_2labels.columns
preprocessing.kdd_train_2labels.shape
preprocessing.kdd_train_5labels.shape
preprocessing.kdd_test_2labels.shape
preprocessing.kdd_test_5labels.shape
preprocessing.kdd_train_2labels_y.shape
preprocessing.kdd_test_2labels_y.shape
preprocessing.kdd_train_5labels_y.shape
preprocessing.kdd_test_5labels_y.shape
import matplotlib
from pandas.plotting import andrews_curves
from pandas.plotting import parallel_coordinates
from sklearn import preprocessing as ps
from pandas.plotting import radviz
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
df_train = preprocessing.kdd_train_2labels.drop(["is_Attack", "is_Normal"], axis = 1)
df_test = preprocessing.kdd_test_2labels.drop(["is_Attack", "is_Normal"], axis = 1)
df_train = pd.concat([df_train, preprocessing.kdd_train_2labels_y], axis = 1)
df_test = pd.concat([df_test, preprocessing.kdd_test_2labels_y], axis = 1)
from sklearn.manifold import TSNE
model = TSNE(n_components=2, random_state=0)
#np.set_printoptions(suppress=True)
#sample = df_train.sample(int(df_train.shape[0]*.1)) # 10% of total data
#sample.to_pickle("dataset/tsne_sample.pkl")
sample = pd.read_pickle("dataset/tsne_sample.pkl")
x_tsne = sample.iloc[:, :-1]
y_tsne = sample.iloc[:, -1]
from sklearn.decomposition import SparsePCA
pca_analysis = SparsePCA(n_components=40)
#x_tsne_pca = pca_analysis.fit_transform(x_tsne)
#pd.DataFrame(x_tsne_pca).to_pickle("dataset/tsne_pca_df.pkl")
x_tsne_pca = pd.read_pickle("dataset/tsne_pca_df.pkl").values
x_tsne_pca_df = pd.DataFrame(x_tsne_pca)
codes_to_attack = {1:"Attack", 0:"Normal"}
y_tsne_cta = y_tsne.map(lambda x: codes_to_attack[x])
x_tsne_pca_df['is'] = y_tsne_cta.values
plt.figure(figsize=(7,3))
andrews_curves(x_tsne_pca_df, "is")
#df = model.fit_transform(x_tsne_pca)
#df1 = model.fit_transform(df)
#df2 = model.fit_transform(df1)
#df3 = model.fit_transform(df2)
#pd.DataFrame(df).to_pickle("dataset/tsne_df.pkl")
#pd.DataFrame(df1).to_pickle("dataset/tsne_df1.pkl")
#pd.DataFrame(df2).to_pickle("dataset/tsne_df2.pkl")
#pd.DataFrame(df3).to_pickle("dataset/tsne_df3.pkl")
df = pd.read_pickle("dataset/tsne_df.pkl").values
df1 = pd.read_pickle("dataset/tsne_df1.pkl").values
df2 = pd.read_pickle("dataset/tsne_df2.pkl").values
df3 = pd.read_pickle("dataset/tsne_df3.pkl").values
#plt.figure(figsize=(15,8))
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', sharey='row', figsize=(10,5))
ax1.scatter(x = df[y_tsne==0,0], y = df[y_tsne==0,1], label = 'Normal')
ax1.scatter(x = df[y_tsne==1,0], y = df[y_tsne==1,1], label = 'Attack')
ax1.title.set_text("After 1000 epochs")
ax2.scatter(x = df1[y_tsne==0,0], y = df1[y_tsne==0,1], label = 'Normal')
ax2.scatter(x = df1[y_tsne==1,0], y = df1[y_tsne==1,1], label = 'Attack')
ax2.title.set_text("After 2000 epochs")
ax3.scatter(x = df2[y_tsne==0,0], y = df2[y_tsne==0,1], label = 'Normal')
ax3.scatter(x = df2[y_tsne==1,0], y = df2[y_tsne==1,1], label = 'Attack')
ax3.title.set_text("After 3000 epochs")
ax4.scatter(x = df3[y_tsne==0,0], y = df3[y_tsne==0,1], label = 'Normal')
ax4.scatter(x = df3[y_tsne==1,0], y = df3[y_tsne==1,1], label = 'Attack')
ax4.title.set_text("After 4000 epochs")
plt.subplots_adjust(wspace=0.05, hspace=0.18)
ax1.legend(loc=0)
plt.figure(figsize=(15,8))
plt.scatter(x = df3[y_tsne==0,0], y = df3[y_tsne==0,1], label = 'Normal')
plt.scatter(x = df3[y_tsne==1,0], y = df3[y_tsne==1,1], label = 'Attack')
plt.title("After 4000 epochs")
preprocessing.kdd_train_2labels.to_pickle("dataset/kdd_train_2labels.pkl")
preprocessing.kdd_train_2labels_y.to_pickle("dataset/kdd_train_2labels_y.pkl")
preprocessing.kdd_train_5labels.to_pickle("dataset/kdd_train_5labels.pkl")
preprocessing.kdd_train_5labels_y.to_pickle("dataset/kdd_train_5labels_y.pkl")
preprocessing.kdd_train__2labels.to_pickle("dataset/kdd_train__2labels.pkl")
preprocessing.kdd_train__2labels_y.to_pickle("dataset/kdd_train__2labels_y.pkl")
preprocessing.kdd_train__5labels.to_pickle("dataset/kdd_train__5labels.pkl")
preprocessing.kdd_train__5labels_y.to_pickle("dataset/kdd_train__5labels_y.pkl")
preprocessing.kdd_test_5labels_y.to_pickle("dataset/kdd_test_5labels_y.pkl")
preprocessing.kdd_test__5labels.to_pickle("dataset/kdd_test__5labels.pkl")
preprocessing.kdd_test__5labels_y.to_pickle("dataset/kdd_test__5labels_y.pkl")
dataset.kdd_diff_level_train.to_pickle("dataset/kdd_diff_level_train.pkl")
dataset.kdd_diff_level_test.to_pickle("dataset/kdd_diff_level_test.pkl")
###Output
_____no_output_____
###Markdown
**Q1. Exploratory Data Analysis (EDA)** **OBJECTIVE**This Jupyter Notebook will seek to conduct an EDA on the dataset from aiap technical assessment and present its findings of the analysis at the end. The task is to predict the **total number of active users (guest - users and registered - users)** in order to help in demand forecasting **GENERAL OVERVIEW OF EDA** **1) CHECKING IF THE DATA IS INTUITIVE**Using domain knowledge, we will analyse the data and pick out areas that might require further analysis (e.g. incorrect data, identify outliers etc.) **2) UNIVARIATE ANALYSIS**We will analyse each feature in detail and conduct feature cleaning/engineering (if needed). **3) EXPLORE HIDDEN RELATIONSHIPS BETWEEN FEATURES**We will be checking for hidden relationships between features that might interfere with our model (e.g. multicollinearity, possible non-linear relationships). After which, we will perform feature selection (if needed). **4) SUMMARY OF ANALYSIS AND IMPLICATIONS**We will then summarize our findings from part 1, 2 and 3 above and identify things which we can do based on our findings.
###Code
# Importing the libraries
# System
import io, os, sys, datetime, math, calendar
from datetime import timedelta, date
# Data Manipulation
import numpy as np
import pandas as pd
# Visualisation
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
# Machine Learning Preprocessing Modules
from mlp.ml_module.eda_preprocessing import (plot_distribution, return_index, firstlast_datehour,
daterange, full_datehour, return_missing_datehour,
add_features_datetime_YMD, cyclical_features,
plot_correlation)
###Output
_____no_output_____
###Markdown
**1) CHECKING IF THE DATA IS INTUITIVE** **Summary:** This dataset provides hourly values for the number of active users for an e-scooter rental service in a city. The features include the date and various weather parameters. **Independent Features:** `date`: Date in YYYY-MM-DD. `hr`: Hour (0 to 23). `weather`: Description of the weather conditions for that hour. `temperature`: Average temperature for that hour (Fahrenheit). `feels-like-temperature`: Average feeling temperature for that hour (Fahrenheit). `relative-humidity`: Average relative humidity for that hour; a measure of the amount of water in the air (%). `windspeed`: Average speed of wind for that hour. `psi`: Pollutant standard index; a measure of pollutants present in the air (0 to 400). **Target Features:** `guest-users`: Number of guest users using the rental e-scooters in that hour. `registered-users`: Number of registered users using the rental e-scooters in that hour
###Code
# Import the dataset
data_url = 'https://aisgaiap.blob.core.windows.net/aiap6-assessment-data/scooter_rental_data.csv'
dataset = pd.read_csv(data_url)
# Check the first 10 lines for the dataset for intuition
dataset.head(10)
# Check the details of the dataset for intuition
dataset.info()
# Convert 'date' to datetime format='%Y-%m-%d'
dataset['date'] = pd.to_datetime(dataset['date'], format='%Y-%m-%d')
# Check the details of the dataset for intuition
dataset.describe()
###Output
_____no_output_____
###Markdown
From the snapshots of the dataset provided above, please refer to the table below for the summary of our observations. For each observation, we will analyze them in further detail when we conduct our univariate analysis. | S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | feature engineering/cleaning (e.g. additional features - weekday vs weekend, cyclical features) for 'date' and 'hr' | univariate analysis || 2 | similar features 'temperature' and 'feels-like-temperature' (one of which might be redundant, might remove to prevent overfitting) | univariate analysis || 3 | zero value for 'relative-humidity', 'windspeed' and 'psi' (value should not be zero) | univariate analysis || 4 | negative values for 'guest-users' and 'registered-users' (values should not be negative) | univariate analysis || 5 | there are no null values (data might have been pre-processed, null data might have been replaced (e.g. replaced with mean, median, -1, -999 etc.)) | to check with data provider | **2) UNIVARIATE ANALYSIS**For our dataset, we can categorise into 3 main categories for our analysis: **Numerical:** feature that contains numeric values **Categorical:** feature that contains categories or texts **Time_Date:** feature that contains time/dateFor this section we will: **a) conduct relevant analysis based on the category** **b) conduct feature cleaning and engineering based on findings from part 1 and part 2a (if required)** **NUMERICAL FEATURES:** 'temperature', 'feels-like-temperature', 'relative-humidity', 'windspeed', 'guest-users', 'registered-users' **a) Analysis of numerical features - Boxplot**
###Code
# Plot distribution of all numerical features for analysis
num_features = ['temperature', 'feels-like-temperature', 'relative-humidity', 'windspeed', 'psi', 'guest-users', 'registered-users']
plot_distribution(dataset, num_features, cols=5, rows=2, width=20 , height=10, hspace=0.4, wspace=0.1)
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | The zero value(s) for 'relative-humidity', 'windspeed' and 'psi', which coincide with the findings above in part 1 | to replace with appropriate value(s) (if applicable) || 2 | Datapoints roughly >30 for 'windspeed' are classified as outliers. However, from research online, windspeed <60 is reasonable; in addition, they might be flagged as outliers only because of the zero value(s). | no further actions required || 3 | 'guest-users' and 'registered-users' contain values <0, which coincides with the findings above in part 1 | to replace with appropriate value(s) (if applicable) | **b) Feature cleaning and engineering for numerical features - Boxplot** **Feature:** 'relative-humidity'
###Code
# Get index of data that has value zero for 'relative-humidity'
dataset_index_rh = return_index(dataset=dataset, column='relative-humidity', value=0, criteria='equal')
# Display data that has the value zero for 'relative-humidity'
dataset.loc[dataset.index.isin(dataset_index_rh)]
###Output
_____no_output_____
###Markdown
Assumption: All zero values come from the same date, most likely an incorrect data entry. Action: Since only 22 values are affected, replace the zeros with the median
###Code
# Replace data that has the value zero for 'relative-humidity' with median
median_relativehumidity = dataset['relative-humidity'].median(skipna=True)
dataset = dataset.replace({'relative-humidity': {0: median_relativehumidity}})
# Check that the values are replaced correctly
dataset.loc[dataset.index.isin(dataset_index_rh)].head()
###Output
_____no_output_____
###Markdown
**Feature:** 'windspeed'
###Code
# Get index of data that has value zero for 'windspeed'
dataset_index_ws = return_index(dataset=dataset, column='windspeed', value=0, criteria='equal')
# Display data that has the value zero for 'windspeed'
dataset.loc[dataset.index.isin(dataset_index_ws)]
###Output
_____no_output_____
###Markdown
Assumption: 12.6% (2264/17958) of the values are zero, most likely due to some systematic error in data collection. Action: Check whether 'windspeed' is an important feature (correlation with the target features) - if the correlation is low, drop the feature - if the correlation is high, replace the zero values with the median
###Code
columns = ['windspeed',
'guest-users',
'registered-users']
dataset[columns][dataset['registered-users']>0].corr(method='pearson')
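# As the next step drops 'windspeed', the correlation with both targets was judged low,
# following the rule stated in the markdown above (low correlation -> drop rather than impute).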
# Drop the 'windspeed' feature
dataset = dataset.drop(['windspeed'], axis = 1)
# Check that feature is dropped correctly
dataset.head()
###Output
_____no_output_____
###Markdown
**Feature:** 'psi'
###Code
# Get index of data that has value zero for 'psi'
dataset_index_psi = return_index(dataset=dataset, column='psi', value=0, criteria='equal')
# Display data that has the value zero for 'psi'
dataset.loc[dataset.index.isin(dataset_index_psi)]
###Output
_____no_output_____
###Markdown
Assumption: 359 zero values, most likely incorrect data entry. Action: Replace the zero value(s) with the median
###Code
# Replace data that has the value zero for 'psi' with the median
median_psi = dataset['psi'].median(skipna=True)
dataset = dataset.replace({'psi': {0: median_psi}})
# Check that the values are replaced correctly
dataset.loc[dataset.index.isin(dataset_index_psi)]
###Output
_____no_output_____
###Markdown
**Feature:** 'guest-users'
###Code
# Get index of data that has the negative values for 'guest-users'
dataset_index_gu = return_index(dataset=dataset, column='guest-users', value=0, criteria='less')
# Display data that has negative values for 'guest-users'
dataset.loc[dataset.index.isin(dataset_index_gu)].head()
###Output
_____no_output_____
###Markdown
Assumption: Incorrect data entry. Action: Replace negative values with their absolute (positive) values
###Code
# Replace data that has negative values for 'guest-users' with positive values
dataset['guest-users'] = dataset['guest-users'].abs()
# Check that the values are replaced correctly
dataset.loc[dataset.index.isin(dataset_index_gu)].head()
###Output
_____no_output_____
###Markdown
**Feature:** 'registered-users'
###Code
# Get index of data that has negative values for 'registered-users'
dataset_index_ru = return_index(dataset=dataset, column='registered-users', value=0, criteria='less')
# Display data that has negative values for 'registered-users'
dataset.loc[dataset.index.isin(dataset_index_ru)].head()
###Output
_____no_output_____
###Markdown
Assumption: Incorrect data entry. Action: Replace negative values with their absolute (positive) values
###Code
# Replace data that has negative values for 'registered-users' with positive values
dataset['registered-users'] = dataset['registered-users'].abs()
# Check that the values are replaced correctly
dataset.loc[dataset.index.isin(dataset_index_ru)].head()
###Output
_____no_output_____
###Markdown
**a) Analysis of numerical features - Feature Selection** From our analysis in part 1, 'temperature' and 'feels-like-temperature' are similar. There are 3 main feature selection methods (Wrapper, Filter, Embedded). I will use the filter method and select features based on their Pearson correlation coefficient with the target.
###Code
# Compute the correlation matrix for the required features
columns = ['temperature',
'feels-like-temperature',
'guest-users',
'registered-users']
dataset[columns].corr(method='pearson')
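# Added sketch (not part of the original analysis): the filter-method rule described above,
# expressed programmatically; the 0.8 threshold is an assumed example value.
corr = dataset[columns].corr(method='pearson')
if abs(corr.loc['temperature', 'feels-like-temperature']) > 0.8:  # highly correlated pair
    target_corr = corr.loc[['temperature', 'feels-like-temperature'], 'registered-users'].abs()
    print(f"Candidate to drop (weaker link to the target): {target_corr.idxmin()}")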
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | We can see 'temperature' correlates more strongly with 'guest-users' and 'registered-users'; in addition, 'temperature' and 'feels-like-temperature' are highly correlated with each other (multicollinearity issue) | to drop 'feels-like-temperature' | **b) Feature cleaning and engineering - Feature Selection** **Feature:** 'feels-like-temperature'
###Code
# Drop the 'feels-like-temperature' feature
dataset = dataset.drop(['feels-like-temperature'], axis = 1)
# Check that feature is dropped correctly
dataset.head()
###Output
_____no_output_____
###Markdown
**CATEGORICAL FEATURES:** 'weather' **a) Analysis of categorical features - Countplot**
###Code
# Plot distribution of all categorical features
num_features = ['weather']
plot_distribution(dataset, num_features, cols=2, rows=2, width=15, height=15, hspace=0.3, wspace=0.4)
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | We can see that some of the category values are in uppercase (e.g. 'cloudy' vs 'CLOUDY') | convert strings to lowercase || 2 | We can see that some of the values are typed incorrectly (e.g. 'loudy' vs 'cloudy', 'lear' vs 'clear') | amend the strings | **b) Feature cleaning and engineering for categorical features - Countplot** **Feature:** 'weather'
###Code
# Convert uppercase strings to lowercase
dataset['weather'] = dataset['weather'].str.lower()
# Replace incorrect strings with correct strings
dataset.loc[dataset['weather'].str.contains('lear'), 'weather'] = 'clear'
dataset.loc[dataset['weather'].str.contains('loudy'), 'weather'] = 'cloudy'
# Re-plot the distribution
num_features = ['weather']
plot_distribution(dataset, num_features, cols=2, rows=2, width=10, height=10, hspace=0.3, wspace=0.4)
###Output
_____no_output_____
###Markdown
**DATE_TIME FEATURES:** 'date', 'hr' **a) Analysis of date_time features - Duplicated Entries**
###Code
# Sort the dataset by 'date' and 'hr' to make the analysis easier
dataset = dataset.sort_values(['date', 'hr'], ascending=[True, True])
# Run duplicate checks on subset=['date','hr'], to identify possible duplicates
dataset.loc[dataset.duplicated(subset=['date','hr'], keep=False)]
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | We can see that there are possible duplicated entries | to remove duplicated data (if reasonable) | **b) Feature cleaning and engineering for date_time features - Duplicated Entries** **Feature:** 'date' and 'hr'
###Code
# The duplicate check above using subset=['date','hr'] returns 1148 rows.
# However, we are unable to conclude whether these observations are due to actual duplicated entries (i.e. the entire row is a duplicate)
# OR due to data entry error (e.g. keying in the wrong 'date'/'hr')
# Run the same check, but without subset=['date','hr'], to see if the other columns are duplicates too
dataset.loc[dataset.duplicated(keep=False)]
###Output
_____no_output_____
###Markdown
Assumption: The duplicated rows observed above are due to duplicated entries, not other forms of incorrect data entry such as keying in the wrong 'date' or 'hr'. Action: Remove all duplicates
###Code
# Drop the duplicated entries
dataset = dataset.drop_duplicates(keep='first')
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 17379 entries, 0 to 17378
Data columns (total 8 columns):
date 17379 non-null datetime64[ns]
hr 17379 non-null int64
weather 17379 non-null object
temperature 17379 non-null float64
relative-humidity 17379 non-null float64
psi 17379 non-null float64
guest-users 17379 non-null int64
registered-users 17379 non-null int64
dtypes: datetime64[ns](1), float64(3), int64(3), object(1)
memory usage: 1.2+ MB
###Markdown
**a) Analysis of date_time features - Missing Entries**
###Code
# Extract first date/hr, last date/hr in dataset
first_date, first_hour, last_date, last_hour = firstlast_datehour(dataset=dataset, datecolumn='date', hrcolumn='hr')
# Create pandas DataFrame with columns 'date' and 'hr' from dataset (first date/hr, last date/hr)
full_datehour = full_datehour(first_date, first_hour, last_date, last_hour)
full_datehour.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 17544 entries, 0 to 17543
Data columns (total 2 columns):
date 17544 non-null datetime64[ns]
hr 17544 non-null int64
dtypes: datetime64[ns](1), int64(1)
memory usage: 411.2 KB
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | We can see that there are missing entries from the dataset (full_datehour rows = 17544 > dataset rows = 17379) | to find out the missing entries | **b) Feature cleaning and engineering for date_time features - Missing Entries** No feature cleaning/engineering is required, instead we will generate list of missing entries for future use. **Feature:** 'date' and 'hr'
###Code
# Return pandas DataFrame of missing entries with columns 'date' and 'hr'
missing_datehour = return_missing_datehour(full_datehour, dataset, datecolumn='date', hrcolumn='hr')
missing_datehour.head()
###Output
_____no_output_____
###Markdown
**a) Analysis of date_time features - New Features** From our analysis in part 1, additional features can be created for 'date' and 'hr'. We will proceed to create these features. **b) Feature cleaning and engineering - New Features** **Feature:** 'date'
###Code
# Create 2 new features, 'month' and 'day', to replace 'date'.
# Taken separately, these features will be more informative for predicting the total number of active users.
dataset = add_features_datetime_YMD (dataset, column='date', feature_name=['month', 'day'])
dataset.head()
###Output
_____no_output_____
###Markdown
**Feature:** 'hr'and 'month'
###Code
# Create cyclical features for 'hr', 'day', 'month'
dataset = cyclical_features(dataset, columnheaders=['hr', 'day', 'month'])
dataset.head()
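# For reference (added): `cyclical_features` is a project helper; a typical sin/cos encoding
# of a periodic column (assumed behaviour - the helper's actual column names may differ)
# looks like this, applied here to a throwaway copy:
_demo = dataset[['hr']].copy()
_demo['hr_sin'] = np.sin(2 * np.pi * _demo['hr'] / 24)
_demo['hr_cos'] = np.cos(2 * np.pi * _demo['hr'] / 24)
_demo.head()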
###Output
_____no_output_____
###Markdown
**3) EXPLORE HIDDEN RELATIONSHIPS BETWEEN FEATURES**For our dataset, we can categorise into 2 main categories for our analysis: **Independent features** **Target features** For this section we will: **a) conduct correlation analysis amongst all features** **b) plot graphs between Independent features and Target features to identify hidden relationships** **c) plot graph between Target features to identify hidden relationships** **a) Correlation analysis - Heatmap**
###Code
# Split into independent features X and target features y
target_features = ['guest-users', 'registered-users']
X = dataset.drop(target_features, axis=1)
y = dataset.loc[:, target_features]
# Conduct correlation analysis for numerical features
all_features = pd.concat([X,y], axis=1)
# Plot heatmap
plt.figure(figsize = (16,10))
sns.heatmap(all_features.corr(), annot=True,cmap ='RdYlGn')
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | Correlation amongst the independent features does not seem to be high, so there should be no multicollinearity issue | - || 2 | Correlation between certain independent features and the target features seems to be low (e.g. 'psi') | consider dropping the feature | **b) Identify hidden relationships - Scatterplot (numerical), Boxplot (categorical)**
###Code
# cyclical features are not plotted, as the sin/cos components are difficult to interpret individually
X_columns = ['temperature', 'relative-humidity', 'psi']
y_columns = ['registered-users', 'guest-users']
# Plot for 'temperature'
plot_correlation(X, y, X_columns[0:1], y_columns, rows=4, cols=2, width=20 , height=40, hspace=0.2, wspace=0.2)
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | We can see that both target variables seem to increase with temperature up to about 100F and then decrease | consider a non-linear transformation of 'temperature' |
###Code
# Plot for 'relative-humidity'
plot_correlation(X, y, X_columns[1:2], y_columns, rows=4, cols=2, width=20 , height=40, hspace=0.2, wspace=0.2)
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | We can see that both target variables seem to decrease as 'relative-humidity' increases | - || 2 | There seem to be sudden increases in the target variables at certain bins of 'relative-humidity' (for example, the data at 'relative-humidity' == 100 looks unnatural) | more analysis is required |
###Code
# Plot for 'psi'
plot_correlation(X, y, X_columns[2:3], y_columns, rows=4, cols=2, width=20 , height=30, hspace=0.2, wspace=0.2)
###Output
_____no_output_____
###Markdown
| S/N | Findings | Actions to be taken || :-: | :-- | :-: || 1 | We can see there is no obvious correlation between the target features and 'psi'; this is in line with our correlation heatmap | to drop 'psi' |
###Code
# Drop the 'psi' feature
dataset = dataset.drop(['psi'], axis = 1)
# Check that feature is dropped correctly
dataset.head()
###Output
_____no_output_____
###Markdown
**c) plot graph between Target features to identify hidden relationships**
###Code
# Plot lineplot of target features over time
ax = sns.lineplot(x='date', y='registered-users', data=dataset)
ax = sns.lineplot(x='date', y='guest-users', data=dataset)
ax.set_title('Active users over time', fontsize=14)
ax.set_xlabel('Date', fontsize=12)
ax.set_ylabel('Active Users', fontsize=12)
###Output
_____no_output_____
###Markdown
![](https://image.ibb.co/eyRTJd/dataset_cover.jpg) - 1. Introduction- 2. Setup for Retrieving the Data - 2.1 Load libraries - 2.2 Setup BigQuery Data Connection - 3. Kaggle Site Analysis - 3.1 First Contentful Paint Distribution - 3.2 First Contentful Paint Density Sum Less Than 5 sec - 3.3 First Contentful Paint Density Sum Less Than 5 sec By Different Connection Speeds - 3.4 First Contentful Paint Density Sum By Country - 3.5 First Input Delay Less Than 100 ms on Kaggle - 3.6 First Input Delay Less Than 100 ms on all Origins in The Dataset - 3.7 First Input Delay Less Than 100 ms on Kaggle By Form Factor Name- 4. Compare Top 3 Data Science Blog Sites - 4.1 First Contentful Paint Density Sum Less Than 1 sec - 4.2 First Contentful Paint Density Sum By Sec - 4.3 First Contentful Paint Density Sum By Form Factor Name - 4.4 First Contentful Paint Density Sum By Network - 4.5 First Input Delay Less Than 100 ms 1. Intoduction---------------------------------------The Chrome User Experience Report provides user experience metrics for how real-world Chrome users experience popular destinations on the web.The Chrome User Experience Report is powered by real user measurement of key user experience metrics across the public web, aggregated from users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled. The resulting data is made available via: 1. **PageSpeed** Insights, which provides URL-level user experience metrics for popular URLs that are known by Google's web crawlers. 2. **Public Google BigQuery** project, which aggregates user experience metrics by origin, for all origins that are known by Google's web crawlers, and split across multiple dimensions outlined below. Metrics---------------------------------------Metrics provided by the public Chrome User Experience Report hosted on Google BigQuery are powered by standard web platform APIs exposed by modern browsers and aggregated to origin-resolution. 1. **First Paint:** First Paint reports the time when the browser first rendered after navigation. This excludes the default background paint, but includes non-default background paint. This is the first key moment developers care about in page load – when the browser has started to render the page.2. **First Contentful Paint:** First Contentful Paint reports the time when the browser first rendered any text, image (including background images), non-white canvas or SVG. This includes text with pending webfonts. This is the first time users could start consuming page content. 3. **DOMContentLoaded:** The DOMContentLoaded reports the time when the initial HTML document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading.4. **onload:** The load event is fired when the page and its dependent resources have finished loading.5. **First Input Delay:** First Input Delay (FID) measures the time from when a user first interacts with your site (i.e. when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to respond to that interaction. Dimensions---------------------------------------Performance of web content can vary significantly based on device type, properties of the network, and other variables.1. 
**Effective Connection Type:** Provides the effective connection type (“slow-2g”, “2g”, “3g”, “4g”, or “offline”) as determined by round-trip and bandwidth values based on real user measurement observations.2. **Device Type:** Coarse device classification (“phone”, “tablet”, or “desktop”), as communicated via User-Agent.3. **Country:** Geographic location of users at the country-level, inferred by their IP address. Countries are identified by their respective ISO 3166-1 alpha-2 codes. | The Experience | The Metric || ------------- |:-------------:||Is it happening? | First Paint (FP) / First Contentful Paint (FCP) || Is it useful? | First Meaningful Paint (FMP) / Hero Element Timing || Is it usable? | Time to Interactive (TTI) || Is it delightful? | Long Tasks (technically the absence of long tasks) | 2. Setup for Retrieving the Data 2.1 Load libraries
###Code
import bq_helper
from bq_helper import BigQueryHelper
import numpy as np
import pandas as pd
import os
import plotly.plotly as py
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
import seaborn as sns
init_notebook_mode(connected=True)
color = sns.color_palette()
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('figure', figsize=(10, 8))
###Output
_____no_output_____
###Markdown
2.2 Setup BigQuery Data Connection
###Code
# https://www.kaggle.com/sohier/introduction-to-the-bq-helper-package
chromeUXreport = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="chrome-ux-report.all")
chromeUXreportUS = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="chrome-ux-report.country_us")
chromeUXreportIN = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="chrome-ux-report.country_in")
###Output
_____no_output_____
###Markdown
3. Kaggle Site Analysis 3.1 First Contentful Paint Distribution
###Code
query1 = """SELECT
bin.start,
SUM(bin.density) AS density
FROM
`chrome-ux-report.all.201806`,
UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
origin = 'https://www.kaggle.com'
GROUP BY
bin.start
ORDER BY
bin.start;
"""
print(chromeUXreport.estimate_query_size(query1))
response1 = chromeUXreport.query_to_pandas_safe(query1, max_gb_scanned= 5)
response1.head(20)
result1 = response1.head(10)
trace1 = go.Bar(
x = result1.start,
y = result1.density,
name = "citations",
marker = dict(color = 'rgba(0, 0, 255, 0.8)',
line=dict(color='rgb(0,0,0)',width=1.5)),
text = result1.start)
data = [trace1]
layout = go.Layout(barmode = "group",title='First Contentful Paint Density Per Bin', xaxis = dict(title='Start (ms)'), yaxis = dict(title='Density'))
fig = go.Figure(data = data, layout = layout)
iplot(fig)
###Output
_____no_output_____
###Markdown
* As we know from the intro above, First Contentful Paint reports the time when the browser first rendered any text, image (including background images), non-white canvas or SVG. This includes text with pending webfonts. This is the first time users could start consuming page content.* After *500 ms (0.5 s)* the Kaggle webpage starts to render quickly. * Between *0.5 s and 1.5 s* a large share of page loads reach FCP. How much? We will see in the next section. :) 3.2 First Contentful Paint Density Sum Less Than 5 sec
###Code
query2 = """SELECT
SUM(bin.density) AS density
FROM
`chrome-ux-report.all.201806`,
UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
bin.start < 5000 AND
origin = 'https://www.kaggle.com';
"""
print(chromeUXreport.estimate_query_size(query2))
response2 = chromeUXreport.query_to_pandas_safe(query2,max_gb_scanned=5)
response2.head(20)
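# Cross-check (added sketch): the same "< 5 s" figure can be derived locally from the
# per-bin densities already fetched in query1, since the density column sums over bins.
fast_fcp_local = response1.loc[response1.start < 5000, 'density'].sum()
print(f"FCP density below 5000 ms, computed from the query1 bins: {fast_fcp_local:.4f}")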
###Output
_____no_output_____
###Markdown
* As we know, Kaggle has a news feed (i.e. kernel and discussion content), so rendering can take a little time. Even so, the Kaggle site is well optimized: about *80%* of page loads experience the FCP in under *5 seconds*. 3.3 First Contentful Paint Density Sum Less Than 5 sec By Different Connection Speeds
###Code
query3 = """
#standardSQL
SELECT
effective_connection_type.name AS ect,
SUM(bin.density) AS density
FROM
`chrome-ux-report.all.201806`,
UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
bin.end <= 5000 AND
origin = 'https://www.kaggle.com'
GROUP BY
ect
ORDER BY
density DESC;
"""
print(chromeUXreport.estimate_query_size(query3))
response3 = chromeUXreport.query_to_pandas_safe(query3,max_gb_scanned=5)
response3.head(20)
result3 = response3
sns.factorplot(x='ect', y='density', data=result3, kind='bar', size=4, aspect=2.0)
###Output
_____no_output_____
###Markdown
* Most of the ~*80%* of page loads that experience the FCP in under 5 seconds come from the 4G network. * *61%* of page loads come from the *4G network*. * *19%* of page loads come from the *3G network*. 3.4 First Contentful Paint Density Sum By Country
###Code
query4 = """
#standardSQL
WITH
countries AS (
SELECT *, 'All' AS country FROM `chrome-ux-report.all.201806`
UNION ALL
SELECT *, 'India' AS country FROM `chrome-ux-report.country_in.201806`
UNION ALL
SELECT *, 'US' AS country FROM `chrome-ux-report.country_us.201806`)
SELECT
country,
effective_connection_type.name AS ect,
SUM(bin.density) AS density
FROM
countries,
UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
bin.end <= 5000 AND
origin = 'https://www.kaggle.com'
GROUP BY
country,
ect
ORDER BY
density DESC;
"""
print(chromeUXreport.estimate_query_size(query4))
response4 = chromeUXreport.query_to_pandas_safe(query4,max_gb_scanned=6)
response4.head(20)
result4 = response4
sns.factorplot(x='country', y='density', hue='ect', data=result4, kind='bar', size=4, aspect=2.0)
###Output
_____no_output_____
###Markdown
* In the US, *80%* of page loads come from the *4G network* and *6%* from the *3G network*. * In India, *48%* of page loads come from the *4G network* and *30%* from the *3G network*. 3.5 First Input Delay Less Than 100 ms on Kaggle
###Code
query5 = """
SELECT
ROUND(SUM(IF(fid.start < 100, fid.density, 0)), 4) AS fast_fid
FROM
`chrome-ux-report.all.201806`,
UNNEST(experimental.first_input_delay.histogram.bin) AS fid
WHERE
origin = 'https://www.kaggle.com';
"""
print(chromeUXreport.estimate_query_size(query5))
response5 = chromeUXreport.query_to_pandas_safe(query5,max_gb_scanned=3)
response5.head(20)
###Output
_____no_output_____
###Markdown
* As we know from above intro section, First Input Delay (FID) measures the time from when a user first interacts with your site (i.e. when they click a link, tap on a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to respond to that interaction.* The results show that *90% of FID* experiences on *kaggle.com* origin are perceived as *instantaneous*. That seems really good, but how does it compare to all origins in the dataset? 3.6 First Input Delay Less Than 100 ms on all Origins in The Dataset
###Code
query6 = """
SELECT
ROUND(SUM(IF(fid.start < 100, fid.density, 0)) / SUM(fid.density), 4) AS fast_fid
FROM
`chrome-ux-report.all.201806`,
UNNEST(experimental.first_input_delay.histogram.bin) AS fid;
"""
print(chromeUXreport.estimate_query_size(query6))
response6 = chromeUXreport.query_to_pandas_safe(query6,max_gb_scanned=3)
response6.head(20)
###Output
_____no_output_____
###Markdown
* The results of this query show that *84% of FID* experiences are less than *100 ms*. So *kaggle.com* is above average. 3.7 First Input Delay Less Than 100 ms on Kaggle By Form Factor Name
###Code
query7 = """
SELECT
form_factor.name AS form_factor,
ROUND(SUM(IF(fid.start < 100, fid.density, 0)) / SUM(fid.density), 4) AS fast_fid
FROM
`chrome-ux-report.all.201806`,
UNNEST(experimental.first_input_delay.histogram.bin) AS fid
WHERE
origin = 'https://www.kaggle.com'
GROUP BY
form_factor;
"""
print(chromeUXreport.estimate_query_size(query7))
response7 = chromeUXreport.query_to_pandas_safe(query7,max_gb_scanned=3)
response7.head(20)
result7 = response7
sns.factorplot(x='form_factor', y='fast_fid', data=result7, kind='bar', size=4, aspect=2.0)
###Output
_____no_output_____
###Markdown
* Kaggle.com has *94%* fast FID on desktop versus *71%* on phone. * This means the phone experience on *kaggle.com* is noticeably worse; however, phone users are a smaller share of Kaggle's audience, so the overall impact is limited. 4. Compare Top 3 Data Science Blog Sites 4.1 First Contentful Paint Density Sum Less Than 1 sec
###Code
query8 = """#standardSQL
SELECT
origin,
ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density) * 100) AS fast_fcp
FROM
`chrome-ux-report.all.201806`,
UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
origin IN ('https://www.analyticsvidhya.com', 'https://www.kdnuggets.com','https://medium.com')
GROUP BY
origin;
"""
print(chromeUXreport.estimate_query_size(query8))
response8 = chromeUXreport.query_to_pandas_safe(query8,max_gb_scanned=5)
response8.head(20)
result8 = response8
sns.factorplot(x='origin', y='fast_fcp', data=result8, kind='bar', size=4, aspect=2.0)
###Output
_____no_output_____
###Markdown
* FCP in under 1 sec is better for *kdnuggets.com* than for the other two. * Fast FCP of *kdnuggets.com* is 23%. * Fast FCP of *medium.com* is 14%. * Fast FCP of *analyticsvidhya.com* is 12%. 4.2 First Contentful Paint Density Sum By Sec
###Code
query9 = """#standardSQL
SELECT
origin,
ROUND(SUM(IF(bin.start < 1000, bin.density, 0)) / SUM(bin.density), 4) AS fast_fcp,
ROUND(SUM(IF(bin.start >= 1000 AND bin.start < 3000, bin.density, 0)) / SUM(bin.density), 4) AS avg_fcp,
ROUND(SUM(IF(bin.start >= 3000, bin.density, 0)) / SUM(bin.density), 4) AS slow_fcp
FROM
`chrome-ux-report.all.201806`,
UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
origin IN ('https://www.analyticsvidhya.com', 'https://www.kdnuggets.com','https://medium.com')
GROUP BY
origin;
"""
print(chromeUXreport.estimate_query_size(query9))
response9 = chromeUXreport.query_to_pandas_safe(query9,max_gb_scanned=5)
response9.head(20)
barWidth = 0.85
r = response9.origin
greenBars = response9.fast_fcp
orangeBars = response9.avg_fcp
blueBars = response9.slow_fcp
# Create green Bars
plt.bar(r, greenBars, color='#b5ffb9', edgecolor='white', width=barWidth)
# Create orange Bars
plt.bar(r, orangeBars, bottom=greenBars, color='#f9bc86', edgecolor='white', width=barWidth)
# Create blue Bars
plt.bar(r, blueBars, bottom=[i+j for i,j in zip(greenBars, orangeBars)], color='#a3acff', edgecolor='white', width=barWidth)
###Output
_____no_output_____
###Markdown
* *kdnuggets.com* has the best FCP within the first second, but between 1 and 3 sec *medium.com* gives the best page-load experience. * FCP of *analyticsvidhya.com* and *kdnuggets.com* become similar after 3 sec. * Overall, *kdnuggets.com* starts strongest, but beyond the first second *medium.com* delivers a very good page-load experience. 4.3 First Contentful Paint Density Sum By Form Factor Name
###Code
query10 = """#standardSQL
SELECT
origin,
ROUND(SUM(IF(form_factor.name = 'desktop', fcp.density, 0)) / SUM(fcp.density) * 100) AS pct_desktop,
ROUND(SUM(IF(form_factor.name = 'phone', fcp.density, 0)) / SUM(fcp.density) * 100) AS pct_phone,
ROUND(SUM(IF(form_factor.name = 'tablet', fcp.density, 0)) / SUM(fcp.density) * 100) AS pct_tablet
FROM
`chrome-ux-report.all.201806`,
UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
origin IN ('https://www.analyticsvidhya.com', 'https://www.kdnuggets.com','https://medium.com')
GROUP BY
origin;
"""
print(chromeUXreport.estimate_query_size(query10))
response10 = chromeUXreport.query_to_pandas_safe(query10,max_gb_scanned=3)
response10.head(20)
barWidth = 0.85
r = response10.origin
greenBars = response10.pct_desktop
orangeBars = response10.pct_phone
blueBars = response10.pct_tablet
# Create green Bars
plt.bar(r, greenBars, color='#b5ffb9', edgecolor='white', width=barWidth)
# Create orange Bars
plt.bar(r, orangeBars, bottom=greenBars, color='#f9bc86', edgecolor='white', width=barWidth)
# Create blue Bars
plt.bar(r, blueBars, bottom=[i+j for i,j in zip(greenBars, orangeBars)], color='#a3acff', edgecolor='white', width=barWidth)
###Output
_____no_output_____
###Markdown
* All three origins give almost the same result. 4.4 First Contentful Paint Density Sum By Network
###Code
query11 = """#standardSQL
SELECT
origin,
effective_connection_type.name AS ect,
ROUND(SUM(bin.density), 4) AS density
FROM
`chrome-ux-report.all.201806`,
UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
origin IN ('https://www.analyticsvidhya.com', 'https://www.kdnuggets.com','https://medium.com')
GROUP BY
origin,
ect
ORDER BY
origin,
ect;
"""
print(chromeUXreport.estimate_query_size(query11))
response11 = chromeUXreport.query_to_pandas_safe(query11,max_gb_scanned=3)
response11.head(20)
result11 = response11
sns.factorplot(x='origin', y='density', hue='ect', data=result11, kind='bar', size=4, aspect=2.0)
###Output
_____no_output_____
###Markdown
* 4G network FCP density dominates 3G. 4.5 First Input Delay Less Than 100 ms
###Code
query12 = """
SELECT
origin,
ROUND(SUM(IF(fid.start < 100, fid.density, 0)), 4) AS fast_fid
FROM
`chrome-ux-report.all.201806`,
UNNEST(experimental.first_input_delay.histogram.bin) AS fid
WHERE
origin IN ('https://www.analyticsvidhya.com', 'https://www.kdnuggets.com','https://medium.com')
GROUP BY
origin;
"""
print(chromeUXreport.estimate_query_size(query12))
response12 = chromeUXreport.query_to_pandas_safe(query12,max_gb_scanned=3)
response12.head(20)
result12 = response12
sns.factorplot(x='origin', y='fast_fid', data=result12, kind='bar', size=4, aspect=2.0)
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
def str_get_dummies(df, columns, sep=',', drop_first=False, prefix=None, prefix_sep='_'):
    """Wrapper of pd.Series.str.get_dummies() to behave like pd.get_dummies()"""
    for i, col in enumerate(columns):
        str_dummy_df = df[col].str.get_dummies(sep=sep)
        if prefix is not None:
            # only prefix when prefixes are supplied (the original zip over prefix=None would fail)
            str_dummy_df.columns = [prefix_sep.join([prefix[i], c]) for c in str_dummy_df.columns]
        if drop_first:
            first_col = str_dummy_df.columns[0]
            str_dummy_df = str_dummy_df.drop(columns=[first_col])
        df = df.drop(columns=[col])
        df = pd.concat((df, str_dummy_df), axis=1)
    return df
import json  # needed for json.loads below (harmless if already imported earlier)
def extract_rotten_rating(rating_list):
    """Extract the Rotten Tomatoes score from the Ratings column using pd.Series.apply()"""
    try:
        ratings = json.loads(rating_list.replace("'", '"'))
        for rating in ratings:
            if rating['Source'] == 'Rotten Tomatoes':
                return float(rating['Value'].replace('%', ''))
    except AttributeError:  # NaN rows have no .replace()
        pass
    return np.nan  # no Rotten Tomatoes entry found
# Custom function to extract rotten tomatoes ratings
movie['rotten_tomatoes'] = movie['Ratings'].apply(extract_rotten_rating)
# Convert numeric columns stored as strings
movie['Runtime'] = pd.to_numeric(movie['Runtime'].str.split(' ').str[0])
movie['BoxOffice'] = pd.to_numeric(movie['BoxOffice'].str.replace(r'[\$,]', '', regex=True))
movie['imdbVotes'] = pd.to_numeric(movie['imdbVotes'].str.replace(',', ''))
# Convert datetime columns stored as strings
movie['Released'] = pd.to_datetime(movie['Released'])
movie['added_to_netflix'] = pd.to_datetime(movie['added_to_netflix'])
movie['added_to_netflix_year'] = movie['added_to_netflix'].dt.year
# Extract numbers from Awards columns
movie['award_wins'] = movie['Awards'].str.extract(r'(\d+) win').astype(float)
movie['award_noms'] = movie['Awards'].str.extract(r'(\d+) nomination').astype(float)
movie['oscar_wins'] = movie['Awards'].str.extract(r'Nominated for (\d+) Oscar').astype(float)
award_cols = ['award_wins', 'award_noms', 'oscar_wins']
movie[award_cols] = movie[award_cols].fillna(0)
drop_columns = ['Poster', 'flixable_url', 'Response',
'Awards', 'Rated', 'imdbID', 'DVD', 'Website',
'BoxOffice', 'Released', 'added_to_netflix',
'Writer', 'Actors', 'Plot',
'rotten_tomatoes', 'Metascore', 'Production',
'totalSeasons', 'Runtime', 'Director',
'Title', 'Ratings']
movie = movie.drop(columns=drop_columns)
list_cols = ['Genre', 'Language', 'Country']
movie_dummy = str_get_dummies(movie,
columns=list_cols,
sep=', ',
prefix=list_cols,
drop_first=False)
movie_dummy = movie_dummy.dropna(subset=['imdbRating'])
movie_dummy.isna().mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
EDA
###Code
def barplot_dummies(df, prefix, max_n=15):
cols = [c for c in df if c.startswith(prefix)]
counts = df[cols].sum().sort_values(ascending=False)
counts = counts[:max_n]
counts.index = [i.replace(prefix, '') for i in counts.index]
counts.plot.barh()
plt.title(prefix)
plt.show()
plot_cols = ['Type', 'mpaa_rating']
for plot_col in plot_cols:
fig = sns.countplot(x=plot_col, data=movie)
fig.set_xticklabels(fig.get_xticklabels(), rotation=90)
plt.show()
prefixes = ['Genre_', 'Country_', 'Language_']
for prefix in prefixes:
barplot_dummies(movie_dummy, prefix)
sns.heatmap(movie.corr(), vmin=-1, vmax=1)
plt.show()
movie_countries = movie[pd.notnull(movie["Country"])]
movie_countries.head()
data = movie_countries.groupby("added_to_netflix_year")["Country"].value_counts(normalize=True).reset_index(name="Percentage")
data["Percentage"] = data["Percentage"] * 100
data["added_to_netflix_year"] = data["added_to_netflix_year"].astype("int")
data.head()
fig = px.choropleth(data, locations="Country", color="Percentage",
locationmode="country names",
animation_frame="added_to_netflix_year",
range_color=[0,100],
)
fig.update_layout(title="Percentage of Content Added to Netflix by Country")
fig.show()
###Output
_____no_output_____
###Markdown
Model Prep
###Code
movie_dummy = str_get_dummies(movie,
columns=list_cols,
sep=', ',
prefix=list_cols,
drop_first=True)
movie_dummy = pd.get_dummies(movie_dummy,
columns=['Type', 'mpaa_rating'],
drop_first=True)
movie_dummy = movie_dummy.dropna()
movie_dummy.shape
y_col = 'imdbRating'
X = movie_dummy.drop(columns=[y_col])
y = movie_dummy[y_col]
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=42)
parameters = {
"learning_rate":[0.01],
"n_estimators":[1000],
"max_depth":[3],
"subsample":[0.8],
"colsample_bytree":[1],
"gamma":[1]
}
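# Note (added): each list above contains a single value, so this GridSearchCV fits only one
# configuration; to actually search, widen the lists, e.g. "learning_rate": [0.01, 0.05, 0.1]
# and "max_depth": [3, 5, 7] (example values only).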
xgb = GridSearchCV(XGBRegressor(objective="reg:squarederror"), param_grid=parameters, verbose=2)
xgb.fit(X_train, y_train)
train_score = xgb.score(X_train, y_train)
test_score = xgb.score(X_test, y_test)
print(f'Train score: {train_score:.2f}')
print(f'Test score: {test_score:.2f}')
y_pred = xgb.predict(X_test)
min_pred = min(y_pred)
max_pred = max(y_pred)
x = [min_pred, max_pred]
y = [min_pred, max_pred]
plt.scatter(y_pred, y_test)
plt.plot(x, y)
plt.xlabel('Fitted')
plt.ylabel('Actual')
plt.xlim((min_pred, max_pred))
plt.ylim((min_pred, max_pred))
plt.show()
###Output
_____no_output_____
###Markdown
Dataset link: https://www.kaggle.com/tejashvi14/employee-future-prediction Uploading dataset
###Code
from google.colab import files
uploaded = files.upload()
###Output
_____no_output_____
###Markdown
Initialization
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import f_classif
from sklearn.feature_selection import chi2
from sklearn.feature_selection import mutual_info_classif
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('Employee.csv')
X = df.drop(['LeaveOrNot'], axis=1)
y = df['LeaveOrNot']
###Output
_____no_output_____
###Markdown
Splitting into training (validation included) and test sets. Early splitting helps ensure that the data used for training and validation does not leak information from the testing/final evaluation dataset.
###Code
X_full_train, X_test, y_full_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_full_train, y_full_train, test_size=0.25, random_state=42)
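# Taking 25% of the remaining 80% for validation gives a 60/20/20 train/validation/test split.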
X_train.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis (EDA)
###Code
X_train.info()
numerical = ['Age']
categorical = ['Education', 'JoiningYear', 'City', 'PaymentTier', 'Gender', 'EverBenched', 'ExperienceInCurrentDomain']
###Output
_____no_output_____
###Markdown
Target
###Code
y_train.head()
y_train.value_counts()
###Output
_____no_output_____
###Markdown
`0` means the employee did not leave in the next 2 years. `1` means the employee did leave in the next 2 years. Numerical Features. There is only 1 numerical feature in the dataset, `Age`.
###Code
X_numerical = X_train[numerical]
X_numerical.head()
X_numerical.describe()
###Output
_____no_output_____
###Markdown
Missing Values. The training data does not have any missing values, but the testing data can. So, we need to decide how to fill missing values for each feature. The methodology used for numerical features is: - Fill with the mean if the feature has a Gaussian distribution - Fill with the median otherwise. To find out whether the feature is Gaussian or not, we will plot a histogram of each feature.
###Code
plt.hist(X_numerical['Age'], bins=20)
plt.xlabel('Age')
plt.show()
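# Added sketch of the median-fill strategy described above, for the case where the held-out
# data contains missing Age values; the median must come from the training split only.
age_median = X_train['Age'].median()
X_val_imputed = X_val.assign(Age=X_val['Age'].fillna(age_median))
X_test_imputed = X_test.assign(Age=X_test['Age'].fillna(age_median))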
###Output
_____no_output_____
###Markdown
The distribution of the feature is somewhat left skewed, so we will fill missing values with the median. Categorical Features
###Code
X_categorical = X_train[categorical]
X_categorical.head()
X_categorical.describe()
X_categorical.drop(['JoiningYear', 'PaymentTier', 'ExperienceInCurrentDomain'], axis=1).describe()
X_categorical_encoded = pd.get_dummies(X_categorical, drop_first=True)
X_categorical_encoded.head()
###Output
_____no_output_____
###Markdown
Missing Values. The training data does not have any missing values, but the testing data can. So, we will fill missing values with the most frequent value of the feature. Feature Redundancy. Now, we will look for redundant categorical features. We will try to find linear correlation between features using Pearson's correlation coefficient and non-linear correlation using Spearman's correlation. For both we will plot a correlation matrix to make the result readable. Source: https://machinelearningmastery.com/how-to-use-correlation-to-understand-the-relationship-between-variables/
###Code
pearson_corr = X_categorical_encoded.corr(method='pearson').abs()
fig, ax = plt.subplots(figsize=(6, 6))
plt.title("Correlation Plot\nAbsolute value of Pearson's Correlation Coefficient\n\n")
sns.heatmap(pearson_corr,
cmap=sns.diverging_palette(230, 10, as_cmap=True),
square=True,
vmin=0,
vmax=1,
ax=ax)
plt.show()
spearman_corr = X_categorical_encoded.corr(method='spearman').abs()
fig, ax = plt.subplots(figsize=(6, 6))
plt.title("Correlation Plot\nAbsolute value of Spearman Correlation Coefficient\n\n")
sns.heatmap(spearman_corr,
cmap=sns.diverging_palette(230, 10, as_cmap=True),
square=True,
vmin=0,
vmax=1,
ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
The correlation between any pair of features is not strong enough to justify removing them. Therefore, we can conclude that there are no redundant features. Feature Selection. Now, we will try to measure each feature's relevance to the target. For this we will use the Chi-Squared test and Mutual Information. Source: https://machinelearningmastery.com/feature-selection-with-real-and-categorical-data/
###Code
chi_square = chi2(X_categorical_encoded, y_train)[0]
chi_square = pd.Series(chi_square, index=X_categorical_encoded.columns)
chi_square
###Output
_____no_output_____
###Markdown
The higher the Chi-squared value, the more important the feature is for predicting the target.
###Code
mutual_info = mutual_info_classif(X_categorical_encoded, y_train, discrete_features=True, random_state=42)
mutual_info = pd.Series(mutual_info, index=X_categorical_encoded.columns)
mutual_info
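# Added sketch (not in the original notebook): turning these relevance scores into an actual
# selection, here keeping the 5 highest-scoring columns by chi-squared (k=5 is an arbitrary
# example value).
from sklearn.feature_selection import SelectKBest
selector = SelectKBest(chi2, k=5).fit(X_categorical_encoded, y_train)
X_categorical_encoded.columns[selector.get_support()]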
###Output
_____no_output_____
###Markdown
Meta data
###Code
from pathlib import Path
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
meta_data = pd.read_csv(Path("data/meta_data.csv"))
name_mapping = pd.read_csv(Path("data/name_mapping.csv"))
survival_info = pd.read_csv(Path("data/survival_info.csv"))
meta_data.head()
name_mapping.head()
survival_info.head()
survival_info.Extent_of_Resection = survival_info.Extent_of_Resection.fillna("Unknown")
survival_info.Extent_of_Resection.value_counts()
# gross total resection (GTR)
# subtotal resection (STR)
# Given the invariable proximity to critical neurovascular structures,
# true complete resection of Craniopharyngiomas is challenging, and gross total resection (GTR)
# has been defined as removal of 95% of the tumor.5 Conversely, a subtotal resection (STR)
# is intended to deliberately leave residual lesion to minimize risk of iatrogenic complication;
# while there is no uniform residual tumor percentage cutoff to define STR,
# some studies delineate it around 10%.
sns.lmplot(x="Age", y="Survival_days", hue="Extent_of_Resection", data=survival_info,
truncate=False, ci=None, scatter_kws={"alpha": .5});
###Output
_____no_output_____
###Markdown
Images
###Code
import h5py
import numpy as np
import cv2
def scale_to_255(img):
    # min-max scale to the 0-255 range (the original divided by img_max instead of the range)
    img_min, img_max = np.min(img), np.max(img)
    rng = img_max - img_min
    return (img - img_min) / rng * 255 if rng else np.zeros_like(img, dtype=float)
def get_volume(idx, meta_data, data_path="data/BraTS2020_training_data"):
df = meta_data[meta_data.volume == idx]
for i, row in df.iterrows():
path = Path(data_path).joinpath(row.slice_path)
hf = h5py.File(path, "r")
img = np.array(hf.get("image"))
mask = np.array(hf.get("mask"))
yield {"image": img, "mask": mask, "slice": row.slice}
slice_generator = get_volume(1, meta_data)
volume = next(slice_generator)
# a) native (T1)
# b) post-contrast T1-weighted (T1Gd)
# c) T2-weighted (T2)
# d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes
t1, t1gd, t2, flair = cv2.split(volume["image"])
fig, axs = plt.subplots(2, 4)
fig.set_size_inches(15, 8)
axs[0, 0].imshow(t1, cmap="gray")
axs[0, 0].set_title("T1")
axs[1, 0].hist(t1.ravel(), bins=10)  # flatten so the histogram covers all pixel intensities
axs[0, 1].imshow(t1gd, cmap="gray")
axs[0, 1].set_title("T1GD")
axs[1, 1].hist(t1gd.ravel(), bins=10)
axs[0, 2].imshow(t2, cmap="gray")
axs[0, 2].set_title("T2")
axs[1, 2].hist(t2.ravel(), bins=10)
axs[0, 3].imshow(flair, cmap="gray")
axs[0, 3].set_title("FLAIR")
axs[1, 3].hist(flair.ravel(), bins=10);
# necrotic and non-enhancing tumor core (NCR/NET — label 1)
# the peritumoral edema (ED — label 2) #
# the GD-enhancing tumor (ET — label 4)
ncr, ed, et = cv2.split(volume["mask"])
fig, axs = plt.subplots(1, 3)
fig.set_size_inches(15, 8)
axs[0].imshow(ncr, cmap="gray")
axs[0].set_title("NCR")
axs[1].imshow(ed, cmap="gray")
axs[1].set_title("ED")
axs[2].imshow(et, cmap="gray");
axs[2].set_title("ET")
###Output
_____no_output_____
###Markdown
Area In Square Meters
###Code
fig = plt.figure(figsize=(10, 8))
sns.violinplot(x='type', y='area', data=data)
plt.title('Property Area By Property Type')
plt.show()
###Output
_____no_output_____
###Markdown
A long tailed distribution is observed for `apartment`, `terraced` and `detached`. Quantile-based discretization should be used in the model, as sketched below (assuming pandas is imported as `pd`, as elsewhere in this document; 10 bins is an arbitrary example value).
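###Code
# Added sketch: quantile-based discretization of `area`, as suggested above
area_decile = pd.qcut(data['area'], q=10, labels=False, duplicates='drop')
area_decile.value_counts().sort_index()
###Output
_____no_output_____
###Markdown
Price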
###Code
fig = plt.figure(figsize=(10, 8))
sns.violinplot(x='type', y='log_price_per_sq_m', hue='outlier_price', split=True, data=data)
plt.title('Log 10 Of Property Price By Property Type')
plt.show()
fig = plt.figure(figsize=(10, 8))
sns.violinplot(x='new_building', y='log_price_per_sq_m', data=data)
plt.title('Log 10 Of Property Price For Existing Buildings And New Projects')
plt.show()
n_outliers = len(data[data.outlier_price==True])
n_ads = len(data)
print(f'There are {n_outliers} outliers which constitutes {n_outliers/n_ads:.0%} of the dataset.')
###Output
There are 76 outliers which constitutes 1% of the dataset.
###Markdown
Price As Time Series
###Code
sns.lmplot(x='week',
y='price_per_sq_m',
hue='new_building',
data=data[data.outlier_price==False],
scatter=False, size=7)
plt.title('Property Price Development By Week')
plt.show()
###Output
_____no_output_____
###Markdown
--- - Column descriptions can be found at "https://covidtracking.com/data/api" under "Historic values for a single state" ---
###Code
df.info()
#remove columns with no data recorded, columns not necessary for analysis,
#columns displaying the same data as others, & deprecated data
columns = ['deathConfirmed', 'deathProbable', 'hospitalized',
'hospitalizedCumulative', 'inIcuCumulative', 'negativeTestsAntibody', 'pending',
'negativeTestsPeopleAntibody', 'onVentilatorCumulative',
'positiveTestsAntigen', 'positiveTestsPeopleAntibody', 'positiveTestsPeopleAntigen',
'totalTestEncountersViral', 'totalTestsAntigen', 'totalTestsPeopleAntibody',
'totalTestsPeopleAntigen', 'hospitalizedIncrease', 'hash', 'commercialScore',
'negativeRegularScore', 'negativeScore', 'positiveScore', 'score',
'grade', 'totalTestResultsSource', 'state', 'lastUpdateEt', 'dateModified',
'checkTimeEt', 'dateChecked', 'fips', 'total', 'posNeg', 'dataQualityGrade']
df_1 = df.drop(columns, axis=1)
df_1.head()
df_1.info()
#create column with daily positive result rate
df_1['positive_rate'] = round((df_1['positiveIncrease']/df_1['totalTestResultsIncrease']) * 100, 2)
df_1['positive_rate'].head()
#plot daily positive rate by date
# Create figure and plot space
fig, ax = plt.subplots(figsize=(25, 10))
# Add x-axis and y-axis
ax.plot(df_1['date'],
df_1['positive_rate'],
color='red')
# Set title and labels for axes
ax.set(xlabel="Date",
ylabel="Positive Rate",
title="Daily Positive Test Rate")
plt.show()
df_1['positive_rate'].unique()
df_1.loc[df_1['positive_rate']== 100.00]
#for positive_rate graph: need to remove rows with 100% positive rate,
#where minimal testing was done and all results were positive.
#seems like incomplete data
df_1.loc[df_1['positive_rate']== 0.00]
df_1.loc[df_1['positive_rate']== -1.12]
df_1.loc[df_1['positive_rate'].isna()]
# Investigate data quality grade further
# FOR PLOTTING PURPOSES OF POSITIVE RATE, CREATE NEW DF WITH:
#remove row 153: incomplete data with negative positive rate
#create new df without sundays
#remove rows with 100% positive rate
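# Added sketch of the plan above (assumptions: pandas is imported as pd, row label 153 is the
# negative-rate row noted earlier, and 'date' can be parsed to datetime; adjust if it already is):
df_plot = df_1.drop(index=153, errors='ignore')
df_plot = df_plot[df_plot['positive_rate'] != 100.00]
_dates = pd.to_datetime(df_plot['date'].astype(str), errors='coerce')
df_plot = df_plot[_dates.dt.dayofweek != 6]  # 6 = Sunday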
###Output
_____no_output_____
###Markdown
DATA CLEANING AND EDA
###Code
import pandas as pd
import numpy as np
import pyreadstat
import missingno as msno
import matplotlib.pyplot as plt
df, metadata = pyreadstat.read_sav('tastdb-exp-2019.sav', apply_value_formats=True, dates_as_pandas_datetime =True)
###Output
_____no_output_____
###Markdown
The dataset has 36,108 observations and 274 variables
###Code
pd.options.display.max_columns = 280
df.shape
###Output
_____no_output_____
###Markdown
Checking the State of the Dataset Total and Percentage of Missing Data
###Code
pd.set_option("display.max_rows", 280)
df.shape
mask = df.isnull()
total= mask.sum()
percent = 100*mask.mean()
missing_data = pd.concat([total, percent], axis= 1, join = 'outer',
keys=['count_missing', 'perc_missing'])
missing_data.sort_values(by= 'perc_missing', ascending= False, inplace = True)
missing_data
num_missing = missing_data['perc_missing'] > 80
#num_missing
miss = missing_data[num_missing].shape
miss
per_above_80 = miss[0] / 274
per_above_80
###Output
_____no_output_____
###Markdown
**Basically, 60% of the columns have at least 80% of their data missing. A common rule of thumb in data science practice is to drop columns that have more than 80% of their data missing.** Visualizing Missing Data in the Dataset
###Code
nullable_columns = df.columns[mask.any()].tolist()
fig=msno.matrix(df[nullable_columns].sample(4000))
fig_copy = fig.get_figure()
fig_copy.savefig('./Plots and Figures/nullity_matrix.png')
plt.show()
fig = msno.bar(df[nullable_columns].sample(4000))
fig_copy = fig.get_figure()
fig_copy.savefig('./Plots and Figures/nullity_bar.png')
fig = msno.dendrogram(df[nullable_columns])
fig_copy = fig.get_figure()
fig_copy.savefig('./Plots and Figures/nullity_dendogramm.png')
# Filtering and Keeping Columns where missing values < 80
#df = df[[col for col in df.columns if 100 * df[col].isnull().sum().mean() < 80]]
df = df.drop(columns = ['YRCONS', 'YRREG', 'filter_$', 'WOMEN1', 'WOMEN2', 'WOMEN3', 'WOMEN4', 'WOMEN5',
'WOMEN6', 'WOMEN7', 'WOMRAT1', 'WOMRAT3', 'WOMRAT7', 'TSLAVESP', 'TSLMTIMP', 'VOY1IMP','VOY2IMP',
'VOYAGE', 'VYMRTIMP','SOURCEH', 'SOURCEI', 'SOURCEJ', 'SOURCEK', 'SOURCEL', 'SOURCEM', 'SOURCEN',
'SOURCEO', 'SOURCEP', 'SOURCEQ', 'SOURCER', 'SOURCEB', 'SOURCEC', 'SOURCED', 'SOURCEE', 'SOURCEF', 'SOURCEG',
'SLAS32', 'SLAS36', 'SLAS39', 'SLAVEMA1', 'SLAVEMA3', 'SLAVEMA7', 'SLAVEMX1', 'SLAVEMX3', 'SLAVEMX7',
'SLAVMAX1', 'SLAVMAX3','SLAVMAX7', 'SLINTEN2','SLADAFRI', 'SLADAMER', 'SLADVOY', 'SAILD1', 'SAILD2', 'SAILD3',
'SAILD4', 'SAILD5','REGDIS3', 'REGARR2', 'PLAC2TRA', 'PLAC3TRA', 'OWNERB', 'OWNERC', 'OWNERD', 'OWNERE',
'OWNERF', 'OWNERG', 'OWNERH', 'OWNERI', 'OWNERJ', 'OWNERK', 'OWNERL', 'OWNERM', 'OWNERN',
'OWNERO', 'OWNERP', 'NCAR13', 'NCAR15', 'NCAR17', 'NDESERT', 'NPAFTTRA', 'NPPRETRA','NPPRIOR',
'MALE1', 'MALE2', 'MALE3', 'MALE4', 'MALE5', 'MALE6', 'MALE7','MALE1IMP','MALE2IMP','MALE3IMP','MALRAT1',
'MALRAT3', 'MALRAT7', 'MEN1', 'MEN2','MEN3', 'MEN4','MEN5', 'MEN6', 'MEN7','MENRAT1','MENRAT3','MENRAT7',
'INFANT1', 'INFANT2', 'INFANT3', 'INFANT4','INFANT5', 'INFANT6', 'JAMCASPR', 'FEMALE1','FEMALE2', 'FEMALE3',
'FEMALE4', 'FEMALE5', 'FEMALE6', 'FEMALE7', 'FEML1IMP','FEML2IMP','FEML3IMP', 'GIRL2','GIRL3', 'GIRL4', 'GIRL5',
'GIRL6', 'GIRL7','GIRLRAT1', 'GIRLRAT3','GIRLRAT7', 'EMBPORT2', 'DATARR38', 'DATARR39', 'DATARR40',
'DATARR41', 'DATARR36', 'DATARR37', 'DATARR38', 'DATARR39', 'DATARR40', 'DATARR41', 'DATARR43', 'DATARR44',
'CREW', 'CREW1', 'CREW2','CREW3','CREW4','CREW5','CREWDIED','CHILRAT3', 'CHILD1', 'CHILD2','CHILD3', 'CHILD4',
'CHILD5', 'CHILD6', 'CHILD7', 'CAPTAINB', 'CAPTAINC', 'ARRPORT2', 'BOY1', 'BOY2', 'BOY3', 'BOY4', 'BOY5', 'BOY6', 'BOY7',
'BOYRAT1', 'BOYRAT3', 'BOYRAT7','ADPSALE2', 'ADULT1', 'ADULT2','ADULT3','ADULT4','ADULT5','ADULT6','ADULT7',
'ADLT2IMP', 'ADLT3IMP'])
df.sample(2)
df = df.drop(columns = ['CHIL1IMP','CHIL2IMP','CHIL3IMP','CHIL1IMP','CHIL2IMP', 'CHIL3IMP','CHILRAT7', 'CONSTREG','D1SLATRA','D1SLATRB','D1SLATRC',
'DATARR32','DATARR33', 'DATARR45','DATEBUY', 'DATELAND1','DATELAND2','DATELAND3','DATELEFTAFR','DDEPAM','DDEPAMB','DDEPAMC',
'DLSLATRA', 'DLSLATRB', 'DLSLATRC','EMBPORT','EMBREG','EMBREG2'])
df.sample(50)
df = df.drop(columns = ['ADLT1IMP', 'CHILRAT1', 'GIRL1' , 'REGDIS2', 'REGEM2','REGEM3', 'REGISREG', 'RETRNREG', 'RETRNREG1' ])
df.shape
###Output
_____no_output_____
###Markdown
Checking the data types **Since all numerical data in the dataset are essentially integers, I will go ahead and convert float64 to int32**
###Code
df_int = [ col for col in df if df[col].dtype == 'float64']
df_int
df['VOYAGEID'] = df['VOYAGEID'].astype('int32')
df_cat = [col for col in df if df[col].dtype.name in ('object', 'category')]
df_cat
df.dtypes
###Output
_____no_output_____
###Markdown
Analyzing the Content of a Categorical Variable
###Code
## Number of Countries/Territories where Ships were registered
df['NATIONAL'].nunique()
df['NATIONAL']
df['NATIONAL'].unique()
obj_df = df.select_dtypes(include = 'object')
# Create a function to describe
import altair as alt
alt.data_transformers.disable_max_rows()
df['NATIONAL'].value_counts(dropna= False, normalize = False)
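# Sketch (alternative to the hand-typed DataFrame below): build the chart source directly
# from value_counts so the counts cannot drift out of sync with the data.
flag_counts = (df['NATIONAL'].value_counts(dropna=False)
                 .rename_axis('Flag')
                 .reset_index(name='Values'))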
source = pd.DataFrame({'Flag': ['Great Britain','Unspecified',
'Portugal',
'France' ,
'USA',
'Spain',
'Netherlands',
'Brazil',
'Denmark',
'Hanse Towns, Brandenburg',
'Sweden',
'Spain / Uruguay',
'Uruguay',
'Mexico',
'Portugal / Brazil',
'Sardinia',
'Argentina',
'Norway',
'Genoa',
'Russia',
'Unknown'],
'Values': [11239, 9538, 5332, 4090,
1799,
1637,
1249,
792,
311,
61,
16,
15,
8,
6,
6,
3,
2,
1,
1,
1,
1]})
alt.Chart(source).transform_joinaggregate(
TotalFlags='sum(Values)',
).transform_calculate(
PercentOfTotal="datum.Values / datum.TotalFlags"
).mark_bar().encode(
alt.X('PercentOfTotal:Q', axis=alt.Axis(format='.0%')),
y='Flag:N'
)
from statistics import mode
df['SHIPNAME'].value_counts(dropna= True)
###Output
_____no_output_____
###Markdown
Most Common Ship Names: Mary (254), Nancy (197), NS do Rosario S Antônio e Almas (182), NS da Conceição S Antônio e Almas (175)
###Code
df['SHIPNAME'].sample(10)
###Output
_____no_output_____
###Markdown
**SHIPS FLEW THE FLAG OF THE TERRITORY WHERE THEY WERE CONSTRUCTED**
###Code
df.groupby(['PLACCONS', 'NATIONAL'])[['PLACCONS', 'NATIONAL']].count().dropna().sample(50)
# Which rig type was the most common?
df['RIG'].value_counts(dropna = False).head(20)
source1 = df[['RIG', 'NATIONAL']]
source1 = pd.DataFrame({'Rig': ['Unknown','Ship',
'Brig',
'Schooner' ,
'Bergantim',
'Curveta',
'Snauw',
'Galera',
'Brigantine',
'Sumaca',
'Sloop',
'Patacho',
'Navio mercante',
'Fregat',
'Galeta',
'Não',
'Barque',
'Fregata',
'Schooner-brig',
'Yaght'],
'Values': [12414,
4854,
2893,
2374,
1945,
1882,
1444,
1231,
1177,
1142,
738,
607,
553,
429,
317,
290,
252,
226,
141,
85]})
source1
alt.Chart(source1).mark_bar().encode(
x= 'Values:Q',
y= 'Rig:O'
)
# Who owned the most ventures? Check variable OWNERA--->
m = df['OWNERA'].value_counts(dropna = False,).head(20)
list(m)
m.keys()
source2= pd.DataFrame({'Vessel Owner': ['Unspecified','Royal African Company', 'West-Indische Compagnie',
'Companhia Geral do Grão Pará e Maranhão',
'Companhia Geral de Pernambuco e Paraíba', 'James, William',
'Compagnie des Indes', 'South Sea Company',
'Middelburgsche Commercie Compagnie', 'Boats, William',
'Company of Royal Adventurers', 'Laroche, James*',
'Compagnie du Sénégal', 'Case, George', 'Tarleton, John',
'Ferreira, João Antônio', 'Dawson, John', 'Leyland, Thomas',
'Harper, William', 'Davenport, William'],
'Values': [12414, 4854, 2893, 2374, 1945, 1882, 1444, 1231, 1177,1142,
738,
607,
553,
429,
317,
290,
252,
226,
141,
85]})
alt.Chart(source2).mark_bar().encode(
x= 'Values:Q',
y= 'Vessel Owner:O'
)
###Output
_____no_output_____
###Markdown
Percentage of voyages completed as intended (FATE == 1), shipwrecked, captured by pirates, or captured by the British
###Code
s= df['FATE'].value_counts().head(25)
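# Sketch: the value_counts -> DataFrame -> percentage bar chart pattern is repeated below for
# FATE, FATE2 and FATE3, so a small helper like this (hypothetical, not used below) could replace it:
def fate_share_chart(series, label, top_n=25):
    counts = series.value_counts().head(top_n)
    src = pd.DataFrame({label: counts.keys(), 'Values': list(counts)})
    return alt.Chart(src).transform_joinaggregate(
        Total='sum(Values)'
    ).transform_calculate(
        PercentOfTotal="datum.Values / datum.Total"
    ).mark_bar().encode(
        alt.X('PercentOfTotal:Q', axis=alt.Axis(format='.0%')),
        y=f'{label}:N'
    )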
source3 = pd.DataFrame({'Fate': s.keys(),
'Values': list(s)})
alt.Chart(source3).transform_joinaggregate(
TotalFate='sum(Values)',
).transform_calculate(
PercentOfTotal="datum.Values / datum.TotalFate"
).mark_bar().encode(
alt.X('PercentOfTotal:Q', axis=alt.Axis(format='.0%')),
y='Fate:N'
)
###Output
_____no_output_____
###Markdown
9. Outcome for slaves ---> FATE2, most common fate for slaves
###Code
d = df['FATE2'].value_counts().head(25)
d
source4 = pd.DataFrame({'Fate for the Slaves': d.keys(),
'Values': list(d)})
alt.Chart(source4).transform_joinaggregate(
TotalFate2='sum(Values)',
).transform_calculate(
PercentOfTotal="datum.Values / datum.TotalFate2"
).mark_bar().encode(
alt.X('PercentOfTotal:Q', axis=alt.Axis(format='.0%')),
y='Fate for the Slaves:N'
)
###Output
_____no_output_____
###Markdown
10. Outcome of voyage if vessel captured? FATE3 ---> 11. Outcome of voyage for owners -> FATE4 --> 'Luckiest owner'?
###Code
c = df['FATE3'].value_counts().head(25)
c
source5 = pd.DataFrame({'Fate for the Vessel': c.keys(),
'Values': list(c)})
alt.Chart(source5).transform_joinaggregate(
TotalFate3='sum(Values)',
).transform_calculate(
PercentOfTotal="datum.Values / datum.TotalFate3"
).mark_bar().encode(
alt.X('PercentOfTotal:Q', axis=alt.Axis(format='.0%')),
y='Fate for the Vessel:N'
)
e = df['PORTDEP'].value_counts().head(25)
e
df.to_csv('voyages.csv', index = False)
###Output
_____no_output_____
###Markdown
Bin Selected Numerical Features
###Code
train['YearBuilt_cat'] = pd.cut(train['YearBuilt'],
bins=[0, 1910, 1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010],
labels=['0', '1910', '1920', '1930', '1940', '1950', '1960', '1970', '1980', '1990', '2000']).astype(np.dtype('O'))
train['YearRemodAdd_cat'] = pd.cut(train['YearRemodAdd'],
bins=[0, 1910, 1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010],
labels=['0', '1910', '1920', '1930', '1940', '1950', '1960', '1970', '1980', '1990', '2000']).astype(np.dtype('O'))
train['years_remod_sold_bins'] = pd.cut(train['years_remod_sold'],
bins=[0, 5, 10, 20, 30, 40, 50, 60],
labels=['0', '5', '10', '20', '30', '40', '50']).astype(np.dtype('O'))
train['years_built_sold_bins'] = pd.cut(train['years_built_sold'],
bins=[0, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110],
labels=['0', '5', '10', '20', '30', '40', '50', '60', '70', '80', '90', '100']).astype(np.dtype('O'))
###Output
_____no_output_____
###Markdown
Add Feature Interactions
###Code
train['year_built_x_year_remod'] = train['YearBuilt_cat'] + '_x_' + train['YearRemodAdd_cat']
###Output
_____no_output_____
###Markdown
Categorical Feature Encoding
###Code
cat_features = train.dtypes[train.dtypes==np.dtype('O')].index.to_list()
target_enc = ce.CatBoostEncoder(cols=cat_features)
target_enc.fit(train[cat_features], train.SalePrice)
encoded_features = target_enc.transform(train[cat_features])
train.loc[:, cat_features] = encoded_features
###Output
_____no_output_____
###Markdown
Impute Missing Values
###Code
median_values = dict()
for f in train.columns:
median_values[f] = train[f].median()
train.fillna(median_values, inplace=True)
###Output
_____no_output_____
###Markdown
Take The LOG Of All Long Tail Distributions
###Code
for f in ['LotArea', 'LotFrontage', '1stFlrSF', 'GrLivArea']:
train[f + '_log'] = np.log1p(train[f])
train.drop(f, axis=1, inplace=True)
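# Sketch: np.expm1 inverts np.log1p, so values (or model predictions) can be mapped back to the
# original scale when needed, e.g. for GrLivArea:
grlivarea_original_scale = np.expm1(train['GrLivArea_log'])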
###Output
_____no_output_____
###Markdown
Train A Benchmark Model
###Code
params = {'features_missing_values': features_missing_values,
'target_encoder': target_enc,
'median_values': median_values}
valid = transform_dataset(valid, **params)
X_train = train.drop('SalePrice', axis=1)
y_train = train.SalePrice.values
X_valid = valid.drop('SalePrice', axis=1)
y_valid = valid.SalePrice.values
model = XGBRegressor(random_state=random_seed, objective='reg:squarederror', reg_lambda=1.)
model.fit(X_train, y_train)
y_pred = model.predict(X_valid)
r2_score = metrics.r2_score(y_valid, y_pred)
explained_variance = metrics.explained_variance_score(y_valid, y_pred)
mean_abs_error = metrics.mean_absolute_error(y_valid, y_pred)
max_error = metrics.max_error(y_valid, y_pred)
print(f'The r2 score is: {r2_score:.0%}')
print(f'The explained variance score is: {explained_variance:.0%}')
print(f'The mean absolute error is: {mean_abs_error:.0f}')
print(f'The maximal error is: {max_error:.0f}')
###Output
The r2 score is: 91%
The explained variance score is: 91%
The mean absolute error is: 16262
The maximal error is: 139551
###Markdown
Display The Error
###Code
error = y_valid - y_pred
sns.distplot(error)
plt.title('Error Distribution')
plt.show()
plt.scatter(y_valid, y_pred)
plt.title('Error Scatter')
plt.xlabel('y_valid')
plt.ylabel('y_pred')
plt.plot([0, 7e5], [0, 7e5], ls='--')
plt.show()
###Output
_____no_output_____
###Markdown
Select Best Features
###Code
selector = feature_selection.GenericUnivariateSelect(feature_selection.chi2, 'k_best', 30)
selector.fit(X_train, train.SalePrice.values)
k_best = pd.Series(selector.scores_, index=X_train.columns)
k_best = k_best / k_best.max()
###Output
_____no_output_____
###Markdown
Display Feature Importances
###Code
model_imp = pd.Series(model.feature_importances_, index=X_train.columns)
fig = plt.figure(figsize=(16, 10))
top_n = 25
plt.subplot(1, 2, 1)
to_plot = k_best.sort_values()[-top_n:]
plt.barh(to_plot.index, to_plot.values)
plt.title('Chi2 Test')
plt.subplot(1, 2, 2)
to_plot = model_imp.sort_values()[-top_n:]
plt.barh(to_plot.index, to_plot.values)
plt.title('L2 Regularization')
plt.subplots_adjust(wspace=0.4)
plt.suptitle('Feature Selection', y=0.96, fontsize=20)
plt.show()
best_features_l2 = model_imp.sort_values()[-top_n:].index.to_list()
###Output
_____no_output_____
###Markdown
Script EDA Notebook. This notebook contains basic EDA work related to the raw script text files
###Code
from collections import Counter
from os import listdir
import pandas as pd
from tqdm import tqdm_notebook
SCRIPTS = './scripts/'
PATH = './scripts/{}'
eps = listdir(SCRIPTS)
eps.remove('.DS_Store')
def scan(file):
with open(PATH.format(file), encoding="latin-1") as f:
data = f.read()
chars = [c for c in data]
char_num = len(chars)
char_counts = Counter(chars)
lines = data.split(sep='\n')
line_num = len(lines)
line_lengths = [len(x) for x in lines]
line_max = max(line_lengths)
line_avg = sum(line_lengths) / line_num
results = {
'episode':file,
'chars':char_num,
'lines':line_num,
'line_max':line_max,
'line_avg':line_avg,
}
char_results = {
'episode':file
}
char_results.update(char_counts)
return results, char_results
df_basic = pd.DataFrame()
df_chars = pd.DataFrame()
for ep in tqdm_notebook(eps):
try:
a, b = scan(ep)
df_basic = df_basic.append(a, ignore_index=True)
df_chars = df_chars.append(b, ignore_index=True)
except:
print(ep)
df_basic.sort_values('episode')
from matplotlib import pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(figsize=(19.2, 10.8), dpi=200)
df_chars.mean(axis=0).plot(kind='bar', ax=ax)
df_chars.mean()[79:]
df_chars
###Output
_____no_output_____
###Markdown
1. Load Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mlp import pipeline
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Load Dataset
###Code
# Extract and retrieve rentals data from Microsoft SQL server
# Refer to documentation within data module for technical and configuration details
df_rentals = pipeline.get_rentals()
df_rentals.head()
###Output
_____no_output_____
###Markdown
3. Data Insights
###Code
df_rentals.shape
###Output
_____no_output_____
###Markdown
- Dataset contains 18,643 observations with 10 features.- There are 24 hours a day, 365 days a year. So over 2 years, there should be a maximum of 17,520 (24 x 365 x 2) observations.- Given that there are more hourly observations than hours over a 2-year period, some of the observations may be duplicates or erroneous. - The problem statement is to predict the total number of active e-scooter users given the above dataset.- Each observation records the number of guest and registered users using rental e-scooters in a particular hour of a day.- I shall assume that the total number of active e-scooter users in a particular hour of a day is the sum of the guest and registered users, i.e. active users = guest users + registered users.
###Code
df_rentals.columns.values
###Output
_____no_output_____
###Markdown
- Column labels of the rentals dataset
###Code
df_rentals.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 18643 entries, 0 to 18642
Data columns (total 10 columns):
date 18643 non-null object
hr 18643 non-null int64
weather 18643 non-null object
temperature 18643 non-null float64
feels_like_temperature 18643 non-null float64
relative_humidity 18643 non-null float64
windspeed 18643 non-null float64
psi 18643 non-null int64
guest_scooter 18643 non-null int64
registered_scooter 18643 non-null int64
dtypes: float64(4), int64(4), object(2)
memory usage: 1.4+ MB
###Markdown
- No column with null/missing value. 4. Summary Statistics
###Code
df_rentals.describe()
###Output
_____no_output_____
###Markdown
- Large difference between the 75th percentile and max values of the windspeed, guest_scooter and registered_scooter columns.- This observation suggests that there are extreme values or outliers in these columns. - There is a sizable difference in the range of values across the independent variables, e.g. values for psi are always below 100, whereas values for registered_scooter could be in the thousands.- Some form of scaling needs to be done at the pre-processing stage. 5. Data Cleaning 5.1 date Column
###Code
# Check data type of the date column
df_rentals.dtypes['date']
###Output
_____no_output_____
###Markdown
- Convert the date column from string to date data type.- Combine the date and hr columns to form a datetime column.- This is to facilitate the use of datetime/timeseries operations when doing exploration and feature engineering later.
###Code
# Rename date column to date_str to indicate string data type
df_rentals.rename(columns={'date': 'date_str'}, inplace=True)
# Convert date column from string to datetime data type
df_rentals['date'] = pd.to_datetime(df_rentals['date_str'])
# Verify date column data type
df_rentals.dtypes['date']
# Create datetime column by concatenating the date and hr columns
df_rentals['datetime'] = df_rentals.apply(lambda row: row.date_str + ' ' + str(row.hr), axis=1) + ':00'
# Convert datetime column from string to datetime data type
df_rentals.datetime = pd.to_datetime(df_rentals.datetime)
# Verify datetime column data type
df_rentals.dtypes['datetime']
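# Sketch: an equivalent vectorised construction of the datetime column, kept only as a cross-check.
datetime_vec = pd.to_datetime(df_rentals['date_str']) + pd.to_timedelta(df_rentals['hr'], unit='h')
datetimes_match = (datetime_vec == df_rentals['datetime']).all()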
###Output
_____no_output_____
###Markdown
5.2 hr Column
###Code
# Check data type of the hr column
df_rentals.dtypes['hr']
###Output
_____no_output_____
###Markdown
- The hour of the day when the rentals are made should be categorical in nature.- Convert the hr column from integer to string data type.
###Code
# Rename hr column to hr_str to indicate string data type
df_rentals.rename(columns={'hr': 'hr_str'}, inplace=True)
# Convert hr column from int to string data type
df_rentals.hr_str = df_rentals.hr_str.apply(str)
# Verify hr_str column data type
df_rentals.dtypes['hr_str']
# Check the number of unique hr values
unique_hrs = df_rentals.hr_str.unique()
unique_hrs
len(unique_hrs)
###Output
_____no_output_____
###Markdown
- All 24 hours of the day are represented in the rentals dataset.
###Code
# Convert the hr column from string to categorical data type
df_rentals['hr'] = df_rentals.hr_str.astype('category')
df_rentals.dtypes['hr']
###Output
_____no_output_____
###Markdown
5.3 weather Column
###Code
df_rentals.weather.unique()
###Output
_____no_output_____
###Markdown
- The weather column contains categorical data.- The weather data is 'dirty', so clean-up is necessary. - Mixed cases, i.e. clear and CLEAR.- Incorrect spelling, e.g. lear, clar.- Correct values 'lear' and 'clar' to be 'clear'.- Correct values 'cludy' and 'loudy' to be 'cloudy'.- Correct value 'liht snow/rain' to be 'light snow/rain'.
###Code
# Standardized weather column to lower case characters
df_rentals.weather = df_rentals.weather.str.lower()
dict_weather = {
# Replace incorrect values 'lear' and 'clar' with 'clear'
'lear': 'clear',
'clar': 'clear',
# Replace incorrect values 'cludy' and 'loudy' with 'cloudy'
'cludy': 'cloudy',
'loudy': 'cloudy',
# Replace incorrect value 'liht snow/rain' with 'light snow/rain'
'liht snow/rain': 'light snow/rain'
}
# Replace incorrect values in weather column
df_rentals.replace({'weather': dict_weather}, inplace=True)
# Verify that the incorrect values have been replaced
df_rentals.weather.unique()
# Convert the weather column from string to categorical data type
df_rentals['weather'] = df_rentals.weather.astype('category')
df_rentals.dtypes['weather']
###Output
_____no_output_____
###Markdown
- The weather column contains 4 unique categorical values i.e. clear, cloudy, light snow/rain and heavy snow/rain.- One-hot encoding can be applied to the weather column later in feature engineering. 5.4 temperature, feels_like_temperature Columns
###Code
# Get the maximum and minimum temperature recorded
max(df_rentals.temperature), min(df_rentals.temperature)
# Get maximum and minimum feels_like_temperature recorded
max(df_rentals.feels_like_temperature), min(df_rentals.feels_like_temperature)
# Number of observations with temperatures above 120°F
len(df_rentals[df_rentals.temperature > 120])
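# Sketch (assumption): cross-check feels_like_temperature against the NOAA/Rothfusz heat-index
# regression (units in Fahrenheit; the regression is intended for temperatures of roughly 80F and above).
T = df_rentals['temperature']
RH = df_rentals['relative_humidity']
heat_index = (-42.379 + 2.04901523*T + 10.14333127*RH - 0.22475541*T*RH
              - 6.83783e-3*T**2 - 5.481717e-2*RH**2 + 1.22874e-3*T**2*RH
              + 8.5282e-4*T*RH**2 - 1.99e-6*T**2*RH**2)
heat_index_gap = (heat_index - df_rentals['feels_like_temperature']).describe()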
###Output
_____no_output_____
###Markdown
- I shall assume that values from the temperature and feels_like_temperature columns are in Fahrenheit.- I shall assume that this dataset is gathered from a city/town since people are renting e-scooters and e-bikes.- The maximum value of the temperature column is 131°F which is pretty close to the [highest temperature ever recorded](https://en.wikipedia.org/wiki/List_of_weather_records#Highest_temperatures_ever_recorded) of 134.1°F.- According to [TripSavvy](https://www.tripsavvy.com/the-worlds-hottest-cities-4070053), some of the highest temperatures recorded in a city include Phoenix 122°F, Marrakech 120°F, Mecca 121.6°F, Kuwait City 126°F, Ahvaz 129°F and Timbuktu 120°F.- There are 240 observations with temperatures above 120°F. This dataset should be from a city known for its high temperatures. If otherwise, the temperatures in these observations need to be verified.- 'Feels like' temperature is also known as the [heat index](https://en.wikipedia.org/wiki/Heat_index). In short, it is a temperature reading that factors in a component of relative humidity.- We can verify the values of the feels_like_temperature column using the heat index [formula](https://en.wikipedia.org/wiki/Heat_index#Formula).- Without any geographical information on this dataset given, I shall assume that all temperature readings are accurate. 5.5 relative_humidity Column
###Code
# Get the maximum and minimum values of relative humidity recorded
max(df_rentals.relative_humidity), min(df_rentals.relative_humidity)
# Number of observations with 0 relative humidity
len(df_rentals[df_rentals.relative_humidity==0])
###Output
_____no_output_____
###Markdown
- [Relative humidity](https://en.wikipedia.org/wiki/Relative_humidity) (RH) is the actual amount of water vapor present in relation to the capacity that the air has at a particular temperature. It is expressed as a percentage.- A relative humidity reading of 0 implies [air devoid of water vapor](https://www.chicagotribune.com/news/ct-xpm-2011-12-16-ct-wea-1216-asktom-20111216-story.html). This is quite impossible given the climate conditions of a city/town, where I assume this dataset is gathered. Values of 0 in the relative_humidity column need to be verified.- Since there are only 25 observations with 0 relative humidity, I've decided to drop them.- A relative humidity reading of 100 means that the air is totally saturated with water vapor and cannot hold any more, creating the possibility of rain. So values of 100 in the relative_humidity column are valid.
###Code
# Number of observations in dataset
len(df_rentals)
# Drop observations with relative humidity value of 0
df_rentals.drop(df_rentals[df_rentals.relative_humidity==0].index, inplace=True)
# Check number of observations left after dropping
len(df_rentals)
###Output
_____no_output_____
###Markdown
5.6 windspeed Column
###Code
# Get the maximum and minimum values of the windspeed column
max(df_rentals.windspeed), min(df_rentals.windspeed)
###Output
_____no_output_____
###Markdown
- No units were given for the windspeed column.- Apparently, wind speed can be measured using a variety of [units](https://en.wikipedia.org/wiki/Wind_speed#Units) e.g. Beaufort, knots, m/s, km/h, mph, depending on purpose, region or target audience.- [Wind speed of 0](https://www.wral.com/weather/blogpost/1116592/) is possible and said to be calm.- I'm unable to gauge if the maximum wind speed of 57 is valid. 57 m/s implies a hurricane, but 57 km/h is just a near gale. - As such, I shall assume that values in the windspeed column are valid. 5.7 psi Column
###Code
# Get the maximum and minimum values of the psi column
max(df_rentals.psi), min(df_rentals.psi)
###Output
_____no_output_____
###Markdown
- The [Pollutant Standard Index (psi)](https://en.wikipedia.org/wiki/Pollutant_Standards_Index) is a measure of pollutants present in the air (0 to 400). - Values in the psi column are valid. 5.8 guest_scooter, registered_scooter Columns
###Code
# Get the maximum and minimum values of the guest_scooter column
max(df_rentals.guest_scooter), min(df_rentals.guest_scooter)
# Get the maximum and minimum values of the registered_scooter column
max(df_rentals.registered_scooter), min(df_rentals.registered_scooter)
# Number of observations with a negative value in either the guest_scooter or registered_scooter columns
len(df_rentals[(df_rentals.guest_scooter<0) | (df_rentals.registered_scooter<0)])
###Output
_____no_output_____
###Markdown
- Values in the guest_scooter and registered_scooter columns indicate the number of guest and registered users renting e-scooters in a particular hour, of a particular date.- As such, the values in the guest_scooter and registered_scooter columns should not be negative.- There are 658 observations with a negative value in either the guest_scooter or registered_scooter columns.- As there is no way of verifying these erroneous values, I shall set all negative values in the guest_scooter or registered_scooter columns to 0.
###Code
# Set all negative values in the guest_scooter column to 0
df_rentals.loc[df_rentals.guest_scooter < 0, 'guest_scooter'] = 0
# Set all negative values in the registered_scooter column to 0
df_rentals.loc[df_rentals.registered_scooter < 0, 'registered_scooter'] = 0
# Verify that there all negative values in the guest_scooter and registered_scooter columns have been set to 0
len(df_rentals[(df_rentals.guest_scooter<0) | (df_rentals.registered_scooter<0)])
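# Sketch: clip() gives the same result in one step; computed on a copy here purely as a cross-check.
clipped_check = df_rentals[['guest_scooter', 'registered_scooter']].clip(lower=0)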
###Output
_____no_output_____
###Markdown
5.9 Duplicate Observations
###Code
# Number of observations in dataset
len(df_rentals)
###Output
_____no_output_____
###Markdown
- As mentioned in Section 3. [Data Insights](data_insights), there are more hourly observations than hours over 2 years from 2011 to 2012.- There are 18,618 hourly observations versus 17,520 (24 x 365 x 2) hours in the years 2011 and 2012.- Therefore, there are duplicate or erroneous observations in the dataset.
###Code
# Number of observations that are duplicates
len(df_rentals[df_rentals.duplicated()])
###Output
_____no_output_____
###Markdown
- There are 1,609 duplicate observations in the dataset. - I shall drop these duplicated observations.
###Code
# Drop duplicate observations
df_rentals.drop_duplicates(inplace=True)
# Verify that the duplicate observations have been removed
len(df_rentals), any(df_rentals.duplicated())
# Verify that all 17,009 observations have unique datetime values
len(df_rentals.datetime.unique())
###Output
_____no_output_____
###Markdown
6. Target/Dependent Variable- The target variable i.e. active e-scooter users, is numerical and discrete in nature.- As mentioned in Section 3. [Data Insights](data_insights), the target variable (active e-scooter users) will be the sum of the guest and registered e-scooter users.- The active_scooter column should be created AFTER data cleaning as both the guest_scooter and registered_scooter columns contain errors.- Creating the active_scooter column before data cleaning would have introduced those pre-existing errors into the target variable column.
###Code
# Create active_scooter column as target variable
df_rentals['active_scooter'] = df_rentals.guest_scooter + df_rentals.registered_scooter
# Verify target variable column has been created
df_rentals.columns
###Output
_____no_output_____
###Markdown
7. Feature Engineering 7.1 Day of the Week - The day of the week will probably have an impact on the number of rentals. There could be more rentals on work days (Mon-Fri) as people commute to work, and less on weekends (Sat-Sun) as people stay at home.- I will create a new feature/variable based on the day of week.- The day of the week is categorical in nature.
###Code
# Create day_of_wk column as independent variable
df_rentals['day_of_wk'] = df_rentals.apply(lambda row: row.datetime.strftime('%A'), axis=1)
# Verify day_of_wk variable column has been created
df_rentals.columns
# Convert the day_of_wk column from string to categorical data type
df_rentals['day_of_wk'] = df_rentals.day_of_wk.astype('category')
df_rentals.dtypes['day_of_wk']
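# Sketch: pandas has a vectorised equivalent of the strftime('%A') apply used above.
day_names_match = (df_rentals['day_of_wk'] == df_rentals['datetime'].dt.day_name()).all()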
###Output
_____no_output_____
###Markdown
7.2 One-Hot Encoding- There are several independent variables that are categorical in nature, i.e. the [hr](hr_column) (Section 5.2), [weather](weather_column) (Section 5.3) and [day_of_wk](day_of_the_week) (Section 7.1) columns.- To facilitate further exploration and modelling later, I will one-hot encode these columns.- I did not choose to label encode as assigning a running number series to categories has the disadvantage that the numerical values can be misinterpreted by machine learning algorithms as having some sort of hierarchy/order in them.- After encoding, the original column will be removed. As such, I shall store the encoded dataset in a separate dataframe i.e. df_rentals_1hot, as I may need the original un-encoded dataset at a later stage. Also, not all algorithms require categorical variables to be one-hot encoded.
###Code
# One-hot encode the hr column
df_rentals_1hot = pd.get_dummies(df_rentals, columns=['hr'], prefix=['hr'])
# Verify hr encoding columns were created
df_rentals_1hot.columns
# Create binary values for weather category values
df_rentals_1hot = pd.get_dummies(df_rentals_1hot, columns=['weather'], prefix=['weather'])
# Verify weather encoding columns were created
df_rentals_1hot.columns
# One-hot encode the day_of_wk column
df_rentals_1hot = pd.get_dummies(df_rentals_1hot, columns=['day_of_wk'], prefix=['day_of_wk'])
# Verify day_of_wk encoding columns were created
df_rentals_1hot.columns
###Output
_____no_output_____
###Markdown
7. Data Visualization 7.1 Correlation
###Code
# Column labels of all numerical independent variables
cols_numerical = ['guest_scooter', 'registered_scooter', 'temperature', 'feels_like_temperature',
'relative_humidity', 'windspeed', 'psi']
# Column labels of weather one-hot encoded variables
cols_weather = ['weather_clear', 'weather_cloudy', 'weather_heavy snow/rain', 'weather_light snow/rain']
# Column labels of day of week one-hot encoded variables
cols_day_of_wk = ['day_of_wk_Friday', 'day_of_wk_Monday', 'day_of_wk_Saturday', 'day_of_wk_Sunday',
'day_of_wk_Thursday', 'day_of_wk_Tuesday', 'day_of_wk_Wednesday']
# Column labels of hour one-hot encoded variables
cols_hr = ['hr_0', 'hr_1', 'hr_10', 'hr_11', 'hr_12', 'hr_13', 'hr_14', 'hr_15', 'hr_16', 'hr_17', 'hr_18', 'hr_19',
'hr_2', 'hr_20', 'hr_21', 'hr_22', 'hr_23', 'hr_3', 'hr_4', 'hr_5', 'hr_6', 'hr_7', 'hr_8', 'hr_9']
cols_categorical = []
cols_categorical.extend(cols_weather)
cols_categorical.extend(cols_day_of_wk)
cols_categorical.extend(cols_hr)
# Construct list of column labels of numerical and one-hot encoded variables
cols_all = ['active_scooter']
cols_all.extend(cols_numerical)
cols_all.extend(cols_categorical)
# Create new dataset for visualization purposes
df_rentals_viz = df_rentals_1hot.loc[:, cols_all]
# Generate the correlation matrix between features of the rental dataset
corr_rentals = df_rentals_viz.corr()
plt.figure(figsize=(19, 17))
# Display heat map of the correlation matrix
sns.heatmap(corr_rentals, cmap='coolwarm', annot=False, vmax=1, vmin=-1)
# Get the correlation coefficients of the target variable (active_scooter)
coefficients = corr_rentals['active_scooter'].sort_values(ascending=False)
# Get 10 most positively correlated indepedent variable
coefficients.iloc[:10]
# Get 10 most negatively correlated indepedent variable
coefficients.iloc[-10:]
###Output
_____no_output_____
###Markdown
- registered_scooter has strong positive correlation (> 0.99) to the target variable active_scooter. - This is expected as active_scooter is the sum of registered_scooter and guest_scooter. - guest_scooter also has a positive correlation but not as strongly as registered_scooter. - This is because registered_scooter is the bigger part of the summation, on average it contributes up to 90% of the value of active_scooter.- Features most correlated to the target variable are registered_scooter, guest_scooter, temperature, feels_like_temperature and relative_humidity.
###Code
# Create a dataset to compare guest, registered and total active users
df_users = df_rentals_viz.loc[:, ['active_scooter', 'guest_scooter', 'registered_scooter']]
# Create a column that shows the percentage of registered users in the total active users
df_users['registered_scooter_%'] = df_users['registered_scooter'] / df_users['active_scooter'] * 100
# Get the mean (in %) of registered_scooter's contribution towards the active_scooter value
df_users['registered_scooter_%'].mean()
###Output
_____no_output_____
###Markdown
- temperature and feels_like_temperature present a strong positive correlation (0.99). This is probably because feels_like_temperature, also known as heat index (refer to Section 5.4 [temperature, feels_like_temperature Columns](temperature_feels_like_temperature_columns)), is derived from temperature and relative_humidity.- weather_clear and weather_light snow/rain are the more significant weather conditions.- day_of_wk_Saturday and day_of_wk_Sunday are the more significant days in the week.- It is interesting to note that day_of_wk_Saturday and day_of_wk_Sunday are positively correlated to guest_scooter but negatively correlated to registered_scooter.- We can infer that on weekends (Saturday, Sunday), the number of guest users will increase while the registered users will drop. - hr_8, hr_16, hr_17, hr_18, hr_19 are the more positively correlated hourly intervals. These time slots relate to the commute to work in the morning (08:00am) and the commute after work in the evening (04:00pm - 07:00pm).- hr_23, hr_0, hr_1, hr_2, hr_3, hr_4, hr_5, hr_6 are the more negatively correlated hourly intervals. These time slots relate to people resting at home (11:00pm - 06:00am) and thus do not commute or need e-scooters. 7.2 Outliers
###Code
# Select columns with numerical data excluding the target variable
ol_cols = cols_numerical.copy()
df_outliers = df_rentals_viz[ol_cols]
cols_count = len(ol_cols)
plt.figure(figsize=(12, 6))
# Generate box plots for all numerical independent variables
for i in range(0, cols_count):
plt.subplot(1, cols_count, i+1)
sns.set_style('whitegrid')
sns.boxplot(df_outliers[ol_cols[i]], color='green', orient='v')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
- Independent variables guest_scooter, registered_scooter and windspeed contain large numbers of outliers. These outliers may be removed later during pre-processing.- Independent variables temperature and feels_like_temperature have similar distribution of values.
###Code
# Generate quantile and maximum values of numerical features
ser_quantile_1st = df_outliers.quantile(0.25)
ser_quantile_3rd = df_outliers.quantile(0.75)
ser_iqr = ser_quantile_3rd - ser_quantile_1st
ser_3iqr = ser_iqr * 1.5
ser_max = ser_quantile_3rd + ser_3iqr
df_boxplots = pd.DataFrame({'3rd': ser_quantile_3rd, '1st': ser_quantile_1st, 'iqr': ser_iqr,
'3iqr': ser_3iqr, 'max': ser_max})
df_boxplots
# Get number of outliers in the windspeed, guest_scooter, registered_scooter variables
len(df_outliers[df_outliers.windspeed>32]), len(df_outliers[df_outliers.guest_scooter>346]), len(df_outliers[df_outliers.registered_scooter>3491])
###Output
_____no_output_____
###Markdown
7.3 Distribution Skewness
###Code
plt.figure(figsize=(15, 6))
# Generate distribution plots for all numerical independent variables
for i in range(0, cols_count):
plt.subplot(1, cols_count, i+1)
sns.distplot(df_outliers[ol_cols[i]], kde=True)
###Output
_____no_output_____
###Markdown
- Independent variables guest_scooter, registered_scooter and windspeed are right/positively skewed.- Distribution plots of variables temperature, feels_like_temperature and relative_humidity are similar. 7.4 Bar Charts
###Code
# Show distribution of users across weather conditions
df_users = df_rentals[['weather', 'active_scooter']]
df_users.groupby(['weather']).sum().sort_values(by='active_scooter', ascending=False).plot(kind='bar', figsize=(10,5))
plt.ylabel('Active Users')
plt.xlabel('Weather');
###Output
_____no_output_____
###Markdown
- There is an overwhelming number of users when weather was clear compared to other conditions.
###Code
# Show distribution of users across days in a week
df_users = df_rentals[['day_of_wk', 'active_scooter']]
df_users.groupby(['day_of_wk']).sum().sort_values(by='active_scooter', ascending=False).plot(kind='bar', figsize=(10,5))
plt.ylabel('Active Users')
plt.xlabel('Day of Week');
###Output
_____no_output_____
###Markdown
- The number of users across work days (Mon-Fri) seem to be rather consistent. On weekends (Sat-Sun), there is a slight drop.
###Code
# Show distribution of users across days in a week
df_users = df_rentals[['hr', 'active_scooter']]
df_users.groupby(['hr']).sum().sort_values(by='active_scooter', ascending=False).plot(kind='bar', figsize=(10,5))
plt.ylabel('Active Users')
plt.xlabel('Hour of Day');
###Output
_____no_output_____
###Markdown
- There are most number of users during the morning (08:00am) and evening (16:00pm - 19:00pm) commute, probably to and from work.- There are the least number of users after midnight into the early hours of the morning (01:00am - 05:00am), probably because people are resting at home.- This is in line with findings from Section 7.1 [Correlation](correlation). 8. Feature Selection
###Code
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
cols_select = cols_all.copy()
cols_select.remove('active_scooter')
X_select = df_rentals_1hot[cols_select]
y_select = df_rentals_1hot['active_scooter']
# Select the best 20 features based on univariate regression f-value
kbest = SelectKBest(score_func=f_regression, k=20)
fit_select = kbest.fit(X_select, y_select)
df_scores = pd.DataFrame(fit_select.scores_)
df_columns = pd.DataFrame(X_select.columns)
scores = pd.concat([df_columns, df_scores], axis=1)
scores.columns = ['Column','Score']
# Print top 20 features with the highest scores
print(scores.nlargest(20, 'Score'))
###Output
Column Score
1 registered_scooter 1.342875e+06
0 guest_scooter 9.516842e+03
2 temperature 2.696592e+03
3 feels_like_temperature 2.641487e+03
27 hr_17 1.992050e+03
4 relative_humidity 1.742026e+03
28 hr_18 1.476275e+03
40 hr_8 9.374951e+02
36 hr_4 7.417971e+02
35 hr_3 7.048507e+02
37 hr_5 6.486411e+02
30 hr_2 6.258045e+02
19 hr_1 5.594278e+02
18 hr_0 4.176335e+02
29 hr_19 3.758596e+02
26 hr_16 2.878321e+02
38 hr_6 2.553515e+02
10 weather_light snow/rain 2.478676e+02
34 hr_23 2.300845e+02
7 weather_clear 1.946128e+02
###Markdown
- Comparing the top 20 features from SelectKBest (above) and the most correlated (positively/negatively) features from Section 7.1 [Correlation](correlation), the following are the common features: - registered_scooter - guest_scooter - temperature - feels_like_temperature - relative_humidity - weather_light snow/rain - hr_0, hr_1, hr_2, hr_3, hr_4, hr_5, hr_8, hr_16, hr_17, hr_18, hr_19, hr_23 - As mentioned in Section 7.1 [Correlation](correlation), temperature and feels_like_temperature are highly correlated. feels_like_temperature is a heat index, calculated from temperature and relative_humidity (refer to Section 5.4 [temperature, feels_like_temperature Columns](temperature_feels_like_temperature_columns)). As such, I will drop the feature feels_like_temperature as a predictor of active_scooter.- From Section 7.4 [Bar Charts](bar_charts), a large proportion of active users rented e-scooters when weather conditions were clear. As such, I will include the feature weather_clear.- The feature day_of_wk_Sunday is amongst one of the top 20 scores from SelectKBest. Also from Section 7.4 [Bar Charts](bar_charts), there seems to be less users on e-scooters on Sundays. As such, I will include the feature day_of_wk_Sunday.- Below is the list of 19 selected features: - registered_scooter - guest_scooter - temperature - relative_humidity - weather_clear - weather_light snow/rain - day_of_wk_Sunday - hr_0, hr_1, hr_2, hr_3, hr_4, hr_5, hr_8, hr_16, hr_17, hr_18, hr_19, hr23
###Code
cols_selected = ['active_scooter', 'registered_scooter', 'guest_scooter', 'temperature', 'relative_humidity',
'weather_clear', 'weather_light snow/rain',
'day_of_wk_Sunday',
'hr_0', 'hr_1', 'hr_2', 'hr_3', 'hr_4', 'hr_5', 'hr_8', 'hr_16', 'hr_17', 'hr_18', 'hr_19', 'hr_23']
df_selected = df_rentals_1hot[cols_selected]
###Output
_____no_output_____
###Markdown
9. Feature Pre-processing 9.1 Remove Outliers- From Section 7.2 [Outliers](outliers), independent variables guest_scooter, registered_scooter and windspeed contain large numbers of outliers.- I shall remove these outliers, taking reference from their box plots.
###Code
len(df_selected)
###Output
_____no_output_____
###Markdown
- Before removing outliers, we have 17,009 observations.
###Code
# Remove outliers from registered_scooter and guest_scooter base on their maximum values in the box plots
df_selected = df_selected[df_selected.registered_scooter<=3491]
df_selected = df_selected[df_selected.guest_scooter<=346]
len(df_selected)
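# Sketch: the cut-offs above were read off the Section 7.2 box plots; the same Q3 + 1.5*IQR rule
# can also be computed programmatically, e.g. for guest_scooter on the unfiltered data:
q1_guest, q3_guest = df_rentals_1hot['guest_scooter'].quantile([0.25, 0.75])
upper_whisker_guest = q3_guest + 1.5 * (q3_guest - q1_guest)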
###Output
_____no_output_____
###Markdown
- After removing outliers, I'm left with 15,240 observations. That's about a 10% reduction of observations.
###Code
ol_cols = ['registered_scooter', 'guest_scooter', 'temperature', 'relative_humidity']
df_outliers = df_selected[ol_cols]
cols_count = len(ol_cols)
plt.figure(figsize=(12, 6))
# Generate box plots for all numerical independent variables
for i in range(0, cols_count):
plt.subplot(1, cols_count, i+1)
sns.set_style('whitegrid')
sns.boxplot(df_outliers[ol_cols[i]], color='green', orient='v')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
- There still exist outliers in the variables guest_scooter and registered_scooter. However, their numbers have been significantly reduced as evident from the length of trailing dots. 9.2 Scaling - As mentioned in Section 4. [Summary Statistics](summary_statistics), there is a need to scale the features due to the difference in the range of values. - Many machine learning algorithms perform better or converge faster when features are on a relatively similar scale and/or close to normally distributed.- To cater for algorithms that require close to 0 mean and unit variance, I've decided to use standard scaling.
###Code
cols_X = df_selected.columns.to_list()
cols_X.remove('active_scooter')
#cols_X.remove('registered_scooter')
# Independent variables
X = df_selected[cols_X]
# Target/dependent variable
y = df_selected['active_scooter']
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
# Split dataset into train and test subsets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=19)
# Compare number of observations in train, test and original datasets
len(X_train), len(X_test), len(df_selected)
std_scaler = StandardScaler()
# Standard scale the independent variables of the train dataset
df_X_train_ss = std_scaler.fit_transform(X_train)
df_X_train_ss = pd.DataFrame(df_X_train_ss, columns=X_train.columns)
# Apply the scaler fitted on the train dataset to the test dataset (transform only, no refitting)
df_X_test_ss = std_scaler.transform(X_test)
df_X_test_ss = pd.DataFrame(df_X_test_ss, columns=X_train.columns)
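# Sketch: a Pipeline makes the fit-on-train / transform-on-test contract explicit and avoids
# accidentally refitting the scaler on the test split (illustrative only, not used below).
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
pipe_sketch = make_pipeline(StandardScaler(), LinearRegression())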
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After StandardScaler')
sns.kdeplot(df_X_train_ss['guest_scooter'], ax=ax1)
sns.kdeplot(df_X_train_ss['registered_scooter'], ax=ax1)
sns.kdeplot(df_X_train_ss['temperature'], ax=ax1)
sns.kdeplot(df_X_train_ss['relative_humidity'], ax=ax1)
#sns.kdeplot(df_X_train_ss['weather_clear'], ax=ax1);
#sns.kdeplot(df_X_train_ss['day_of_wk_Sunday'], ax=ax1);
#sns.kdeplot(df_X_train_ss['hr_0'], ax=ax1);
###Output
_____no_output_____
###Markdown
10. Modelling 10.1 Multi Linear Regression
###Code
from sklearn.linear_model import LinearRegression
from sklearn import metrics
lr = LinearRegression()
lr.fit(df_X_train_ss, y_train)
# Show coefficients regression model
df_coef = pd.DataFrame(lr.coef_, df_X_train_ss.columns, columns=['Coefficient'])
# Predict active users using test dataset
y_pred = lr.predict(df_X_test_ss)
# Compare prediction against actual active users
df_compare = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df_compare.head(10)
df_compare = df_compare.head(50)
df_compare.plot(kind='bar',figsize=(18,10))
plt.grid(which='major', linestyle='-', linewidth='0.5', color='green')
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.show()
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
###Output
Mean Absolute Error: 3.7980859212990157
Mean Squared Error: 21.24319797301177
Root Mean Squared Error: 4.6090343861824
###Markdown
Exploratory Data Analysis
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Histogram
###Code
# read dataset
df = pd.read_csv('/data/winequality/winequality-red.csv',
sep=';')
# create histogram
bin_edges = np.arange(0, df['residual sugar'].max() + 1, 1)
fig = plt.hist(df['residual sugar'], bins=bin_edges)
# add plot labels
plt.xlabel('residual sugar')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____
###Markdown
Scatterplot
###Code
# create scatterplot
fig = plt.scatter(df['pH'], df['residual sugar'])
# add plot labels
plt.xlabel('pH')
plt.ylabel('residual sugar')
plt.show()
###Output
_____no_output_____
###Markdown
Scatterplot Matrix
###Code
df.columns
# create scatterplot matrix
fig = sns.pairplot(data=df[['alcohol', 'pH', 'residual sugar', 'quality']],
hue='quality')
# pairplot labels the axes of each panel automatically, so no manual axis labels are needed
plt.show()
###Output
_____no_output_____
###Markdown
Bee Swarm Plot - useful for small datasets but can be slow on large datasets
###Code
# create bee swarm plot
sns.swarmplot(x='quality', y='residual sugar',
data=df[df['quality'] < 6])
plt.show()
###Output
_____no_output_____
###Markdown
Empirical Cumulative Distribution Function Plots
###Code
# sort and normalize data
x = np.sort(df['residual sugar'])
y = np.arange(1, x.shape[0] + 1) / x.shape[0]
# create ecd fplot
plt.plot(x, y, marker='o', linestyle='')
# add plot labels
plt.ylabel('ECDF')
plt.xlabel('residual sugar')
percent_four_or_less = y[x <= 4].max()
print('%.2f percent have 4 or less units residual sugar' %
(percent_four_or_less*100))
eightieth_percentile = x[y <= 0.8].max()
plt.axhline(0.8, color='black', linestyle='--')
plt.axvline(eightieth_percentile, color='black', label='80th percentile')
plt.legend()
plt.show()
###Output
92.18 percent have 4 or less units residual sugar
###Markdown
Boxplots - Distribution of data in terms of median and percentiles (median is the 50th percentile)
###Code
percentiles = np.percentile(df['alcohol'], q=[25, 50, 75])
percentiles
###Output
_____no_output_____
###Markdown
manual approach:
###Code
for p in percentiles:
plt.axhline(p, color='black', linestyle='-')
plt.scatter(np.zeros(df.shape[0]) + 0.5, df['alcohol'])
iqr = percentiles[-1] - percentiles[0]
upper_whisker = min(df['alcohol'].max(), percentiles[-1] + iqr * 1.5)
lower_whisker = max(df['alcohol'].min(), percentiles[0] - iqr * 1.5)
plt.axhline(upper_whisker, color='black', linestyle='--')
plt.axhline(lower_whisker, color='black', linestyle='--')
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
###Output
_____no_output_____
###Markdown
using matplotlib.pyplot.boxplot:
###Code
plt.boxplot(df['alcohol'])
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Violin Plots
###Code
plt.violinplot(df['alcohol'], [0],
points=100,
bw_method='scott',
showmeans=False,
showextrema=True,
showmedians=True)
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Outputs - Training set as a dataframe parquet (contains image file locations, ready for cvmodel inference)
###Code
from collections import deque
import matplotlib.pyplot as plt
import os
import numpy as np
import pandas as pd
import seaborn as sns
from typing import Deque, Dict, Any, List
# Characters such as empty strings '' or numpy.inf are considered NA values
pd.set_option('use_inf_as_na', True)
pd.set_option('display.max_columns', 999)
pd.set_option('display.max_rows', 999)
sns.set(style="whitegrid")
train = pd.read_csv(f'input/train.csv')
train.info()
original_len = len(train)
train.set_index(['Patient', 'Weeks'], inplace=True, drop=False)
assert original_len == len(train)
train.info()
train.head(20)
pids = train['Patient'].unique()
print(f'len(pids)={len(pids)}')
train['Weeks'].hist(bins=100)
train['Weeks'].describe()
train['FVC'].hist(bins=100)
train['FVC'].describe()
blacklist = {'ID00011637202177653955184', 'ID00052637202186188008618'}
train = train.query('Patient not in @blacklist')
assert len(train.Patient.unique()) == 174
###Output
_____no_output_____
###Markdown
Get last three FVC readings per patient. Add the features extracted from images, e.g. lung area, tissue area
###Code
imf = pd.read_parquet(f'input/processed/imf.parquet')
imf.info()
def explode(row: Dict[str, Any]) -> List[Dict[str, Any]]:
res: List[Dict[str, Any]] = []
pid = row['pid']
path = f'input/processed/{pid}'
for filename in os.listdir(path):
r = dict(row)
r['img'] = f'{pid}/{filename}'
res.append(r)
return res
def set_last_visits(
row: Dict[str, Any],
last_weeks: Deque[int],
last_fvc: Deque[float]
) -> None:
if len(last_fvc) == 0:
raise ValueError('there should be at least one fvc reading per patient')
elif len(last_fvc) == 1:
last_fvc.append(last_fvc[0])
last_fvc.append(last_fvc[0])
elif len(last_fvc) == 2:
last_fvc.append(last_fvc[1])
elif len(last_fvc) > 3:
raise ValueError('get last 3 fvc readings per patient')
if len(last_weeks) == 0:
raise ValueError('there should be at least one week number per patient')
elif len(last_weeks) == 1:
last_weeks.append(last_weeks[0])
last_weeks.append(last_weeks[0])
elif len(last_weeks) == 2:
last_weeks.append(last_weeks[1])
elif len(last_weeks) > 3:
raise ValueError('get last 3 fvc readings per patient')
row['fvc_last_1'] = last_fvc[2]
row['fvc_last_2'] = last_fvc[1]
row['fvc_last_3'] = last_fvc[0]
row['week_last_1'] = last_weeks[2]
row['week_last_2'] = last_weeks[1]
row['week_last_3'] = last_weeks[0]
rows = []
row: Dict[str, Any] = {}
prev = None
last_weeks: Deque[int] = deque()
last_fvc: Deque[float] = deque()
for t in train.itertuples():
# new patient
if prev is not None and prev != t.Patient:
set_last_visits(row, last_weeks, last_fvc)
rows += explode(row)
if prev is None or prev != t.Patient:
row = {}
last_weeks = deque()
last_fvc = deque()
row['pid'] = t.Patient
row['age'] = t.Age
row['sex'] = t.Sex
row['smoking'] = t.SmokingStatus
row['week_1'] = t.Weeks
row['fvc_1'] = t.FVC
row['percent_1'] = t.Percent
prev = t.Patient
last_weeks.append(t.Weeks)
if len(last_weeks) == 4:
last_weeks.popleft()
last_fvc.append(t.FVC)
if len(last_fvc) == 4:
last_fvc.popleft()
# add the last patient!
if len(row) != 0:
set_last_visits(row, last_weeks, last_fvc)
rows += explode(row)
train = pd.DataFrame.from_records(rows)
assert len(train) == len(imf)
train.set_index(['img'], drop=False, inplace=True)
train.sort_index(inplace=True)
imf.set_index(['img'], drop=False, inplace=True)
imf.sort_index(inplace=True)
assert train.iloc[0]['img'] == imf.iloc[0]['img']
train['lung_area'] = imf['lung_area']
train['tissue_area'] = imf['tissue_area']
train['lung_tissue_ratio'] = train['lung_area'] / train['tissue_area']
train = train.astype({
'pid': str,
'img': str,
'age': np.uint8,
'sex': str,
'smoking': str,
'week_1': np.int16,
'fvc_1': np.uint16,
'percent_1': np.float32,
'fvc_last_1': np.uint16,
'fvc_last_2': np.uint16,
'fvc_last_3': np.uint16,
'week_last_1': np.int16,
'week_last_2': np.int16,
'week_last_3': np.int16,
'lung_area': np.uint32,
'tissue_area': np.uint32,
'lung_tissue_ratio': np.float32
})
train.info()
train.head()
train['lung_area'].describe()
train['tissue_area'].describe()
train['lung_tissue_ratio'].describe()
groups = train.groupby(['pid']).min()
groups['week_1'].describe()
groups['week_last_3'].describe()
groups['week_last_2'].describe()
groups['week_last_1'].describe()
groups['fvc_1'].describe()
groups['fvc_last_1'].describe()
groups['fvc_last_2'].describe()
groups['fvc_last_3'].describe()
train.to_parquet('output/train.parquet', index=False)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
#!pip install pandas_profiling
#profile = ProfileReport(client_df, title="Pandas Profiling Report")
#profile
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from pandas_profiling import ProfileReport
# Shows plots in jupyter notebook
%matplotlib inline
# Set plot style
sns.set(color_codes=True)
###Output
_____no_output_____
###Markdown
--- Loading data with Pandas We need to load `client_data.csv` and `price_data.csv` into individual dataframes so that we can work with them in Python. For this notebook and all further notebooks, it will be assumed that the CSV files will be placed in the same file location as the notebook. If they are not, please adjust the directory within the `read_csv` method accordingly.
###Code
client_df = pd.read_csv('./client_data.csv')
price_df = pd.read_csv('./price_data.csv')
###Output
_____no_output_____
###Markdown
You can view the first 3 rows of a dataframe using the `head` method. Similarly, if you wanted to see the last 3, you can use `tail(3)`
###Code
client_df.head(3)
price_df.head(3)
###Output
_____no_output_____
###Markdown
--- Descriptive statistics of data Data types It is useful to first understand the data that you're dealing with along with the data types of each column. The data types may dictate how you transform and engineer features. To get an overview of the data types within a data frame, use the `info()` method.
###Code
client_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 14606 entries, 0 to 14605
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 14606 non-null object
1 channel_sales 14606 non-null object
2 cons_12m 14606 non-null int64
3 cons_gas_12m 14606 non-null int64
4 cons_last_month 14606 non-null int64
5 date_activ 14606 non-null object
6 date_end 14606 non-null object
7 date_modif_prod 14606 non-null object
8 date_renewal 14606 non-null object
9 forecast_cons_12m 14606 non-null float64
10 forecast_cons_year 14606 non-null int64
11 forecast_discount_energy 14606 non-null float64
12 forecast_meter_rent_12m 14606 non-null float64
13 forecast_price_energy_off_peak 14606 non-null float64
14 forecast_price_energy_peak 14606 non-null float64
15 forecast_price_pow_off_peak 14606 non-null float64
16 has_gas 14606 non-null object
17 imp_cons 14606 non-null float64
18 margin_gross_pow_ele 14606 non-null float64
19 margin_net_pow_ele 14606 non-null float64
20 nb_prod_act 14606 non-null int64
21 net_margin 14606 non-null float64
22 num_years_antig 14606 non-null int64
23 origin_up 14606 non-null object
24 pow_max 14606 non-null float64
25 churn 14606 non-null int64
dtypes: float64(11), int64(7), object(8)
memory usage: 2.9+ MB
###Markdown
You can see that the `datetime`-related columns are not currently in datetime format. We will need to convert these later.
###Code
price_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 193002 entries, 0 to 193001
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 193002 non-null object
1 price_date 193002 non-null object
2 price_off_peak_var 193002 non-null float64
3 price_peak_var 193002 non-null float64
4 price_mid_peak_var 193002 non-null float64
5 price_off_peak_fix 193002 non-null float64
6 price_peak_fix 193002 non-null float64
7 price_mid_peak_fix 193002 non-null float64
dtypes: float64(6), object(2)
memory usage: 11.8+ MB
###Markdown
Statistics Now let's look at some statistics about the datasets. We can do this by using the `describe()` method.
###Code
client_df.describe()
pd.DataFrame({"Missing values (%)":round(client_df.isnull().sum()/len(client_df), 2)})
price_df.describe()
(price_df.isnull().sum()/len(price_df.index)*100).plot(kind="bar", figsize=(18,10)) # Set axis labels
plt.xlabel("Variables")
plt.ylabel("Missing values (%)")
plt.title("Proporting of missing values")
plt.show()
data = client_df.merge(price_df, how = 'left', on = 'id')
data.head(3)
###Output
_____no_output_____
###Markdown
**Checking the duplicates**
###Code
data[data.duplicated()]
###Output
_____no_output_____
###Markdown
**Datatype correction**
###Code
data[['channel_sales','has_gas', 'origin_up']].nunique()
#____________ Convert to Categories _____________
data[['channel_sales','has_gas', 'origin_up']] = data[['channel_sales','has_gas', 'origin_up']].astype('category')
#____________ Convert date to Datetime________
#import datetime as dt
#data[['date_activ','date_end','date_modif_prod','date_renewal', 'price_date']] = pd.to_datetime(data[['date_activ','date_end','date_modif_prod','date_renewal', 'price_date']])
###Output
_____no_output_____
###Markdown
**Outlier Removal** Remove the bottom 10% of observations. This outlier removal method will remove negative prices and negative forecasted prices.
###Code
original_data = data.copy()
churn_data = data[['id','churn']].copy()
#data = data.drop('churn', axis=1)
int_cols = data.select_dtypes(include=[np.int64])
int_cols = list(int_cols.columns)
float_cols = data.select_dtypes(include=[np.float64])
float_cols = list(float_cols.columns)
categ_cols = data.select_dtypes(include=['category'])
categ_cols = list(categ_cols.columns)
from scipy import stats
data = pd.DataFrame(stats.trim1(data, 0.1, tail='left'), columns=data.columns)
data[int_cols] = data[int_cols].astype(np.int64)
data[float_cols] = data[float_cols].astype(np.float64)
data[categ_cols] = data[categ_cols].astype('category')
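# Sketch (alternative, not applied): instead of trimming whole rows, the variable price columns
# alone could be clipped at their 10th percentile, which removes low/negative prices without
# discarding observations. The column subset below is illustrative.
price_cols = ['price_off_peak_var', 'price_peak_var', 'price_mid_peak_var']
price_clip_sketch = original_data[price_cols].clip(lower=original_data[price_cols].quantile(0.10), axis=1)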
data.describe(), data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 157635 entries, 0 to 157634
Data columns (total 33 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 157635 non-null object
1 channel_sales 157635 non-null category
2 cons_12m 157635 non-null int64
3 cons_gas_12m 157635 non-null int64
4 cons_last_month 157635 non-null int64
5 date_activ 157635 non-null object
6 date_end 157635 non-null object
7 date_modif_prod 157635 non-null object
8 date_renewal 157635 non-null object
9 forecast_cons_12m 157635 non-null float64
10 forecast_cons_year 157635 non-null int64
11 forecast_discount_energy 157635 non-null float64
12 forecast_meter_rent_12m 157635 non-null float64
13 forecast_price_energy_off_peak 157635 non-null float64
14 forecast_price_energy_peak 157635 non-null float64
15 forecast_price_pow_off_peak 157635 non-null float64
16 has_gas 157635 non-null category
17 imp_cons 157635 non-null float64
18 margin_gross_pow_ele 157635 non-null float64
19 margin_net_pow_ele 157635 non-null float64
20 nb_prod_act 157635 non-null int64
21 net_margin 157635 non-null float64
22 num_years_antig 157635 non-null int64
23 origin_up 157635 non-null category
24 pow_max 157635 non-null float64
25 churn 157635 non-null int64
26 price_date 157635 non-null object
27 price_off_peak_var 157635 non-null float64
28 price_peak_var 157635 non-null float64
29 price_mid_peak_var 157635 non-null float64
30 price_off_peak_fix 157635 non-null float64
31 price_peak_fix 157635 non-null float64
32 price_mid_peak_fix 157635 non-null float64
dtypes: category(3), float64(17), int64(7), object(6)
memory usage: 36.5+ MB
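###Markdown
As a hedged alternative to trimming whole rows, a per-column winsorisation keeps every observation and only caps the extreme values; the sketch below is not used downstream, and the 1%/99% bounds are arbitrary assumptions.
###Code
# Hedged alternative (not used downstream): clip each numeric column to its 1st/99th percentiles instead of dropping rows
numeric_cols = original_data.select_dtypes(include=[np.number]).columns.drop('churn')
clipped = original_data.copy()
for col in numeric_cols:
    lower, upper = clipped[col].quantile([0.01, 0.99])
    clipped[col] = clipped[col].clip(lower, upper)
clipped[numeric_cols].describe().loc[['min', 'max']]
###Output
_____no_output_____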
###Markdown
--- Data Visualization If you're working in Python, two of the most popular packages for visualization are `matplotlib` and `seaborn`. We highly recommend you use these, or at least be familiar with them, because they are ubiquitous! Below are some functions that you can use to get started with visualizations. **3.1 Correlation**
###Code
plt.figure(figsize=(20, 10))
mask = np.triu(np.ones_like(data.corr(), dtype=bool))
heatmap = sns.heatmap(data.corr(), mask=mask, annot = True, cmap='Spectral')
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':18}, pad=16);
def plot_stacked_bars(dataframe, title_, size_=(18, 10), rot_=0, legend_="upper right"):
"""
Plot stacked bars with annotations
"""
ax = dataframe.plot(
kind="bar",
stacked=True,
figsize=size_,
rot=rot_,
title=title_
)
# Annotate bars
annotate_stacked_bars(ax, textsize=14)
# Rename legend
plt.legend(["Retention", "Churn"], loc=legend_)
# Labels
plt.ylabel("Company base (%)")
plt.show()
def annotate_stacked_bars(ax, pad=0.99, colour="white", textsize=13):
"""
Add value annotations to the bars
"""
    # Iterate over the plotted rectangles/bars
for p in ax.patches:
# Calculate annotation
value = str(round(p.get_height(),1))
# If value is 0 do not annotate
if value == '0.0':
continue
ax.annotate(
value,
((p.get_x()+ p.get_width()/2)*pad-0.05, (p.get_y()+p.get_height()/2)*pad),
color=colour,
size=textsize
)
def plot_distribution(dataframe, column, ax, bins_=50):
"""
    Plot a variable's distribution as a stacked histogram of churned vs. retained companies
"""
    # Create a temporary dataframe with the data to be plotted
temp = pd.DataFrame({"Retention": dataframe[dataframe["churn"]==0][column],
"Churn":dataframe[dataframe["churn"]==1][column]})
# Plot the histogram
temp[["Retention","Churn"]].plot(kind='hist', bins=bins_, ax=ax, stacked=True)
# X-axis label
ax.set_xlabel(column)
# Change the x-axis to plain style
ax.ticklabel_format(style='plain', axis='x')
###Output
_____no_output_____
###Markdown
3.2) Churn The dataset is imbalanced: only about 10% of customers churned. The first function, `plot_stacked_bars`, is used to plot a stacked bar chart. An example of how you could use it is shown below:
###Code
sns.catplot(x="churn", kind="count", palette="YlOrBr", data=data)
plt.ylabel('Frequency')
plt.xlabel('Churn')
plt.xticks([0,1], ['No', 'Yes'])
plt.title('Churning Status')
churn = client_df[['id', 'churn']]
churn.columns = ['Companies', 'churn']
churn_total = churn.groupby(churn['churn']).count()
churn_percentage = churn_total / churn_total.sum() * 100
plot_stacked_bars(churn_percentage.transpose(), "Churning status", (5, 5), legend_="lower right")
###Output
_____no_output_____
###Markdown
3.3) SME Activity There are seven different sales channels, but customer churn is concentrated in just two of them, which warrants further analysis.
###Code
pd.DataFrame({'Frequency':data['channel_sales'].value_counts()})
channel = data[['id', 'channel_sales', 'churn']]
channel = channel.groupby([channel["channel_sales"], channel["churn"]])["id"].count().unstack(level=1)
channel_churn = (channel.div(channel.sum(axis=1), axis=0)*100).sort_values(by=[1], ascending=False)
plot_stacked_bars(channel_churn, "Sales Channel", rot_=45)
###Output
_____no_output_____
###Markdown
3.4) Consumption The second function, `annotate_stacked_bars`, is used by the first one, while the third function, `plot_distribution`, helps you plot the distribution of a numeric column. An example of how it can be used is given below:
###Code
consumption = client_df[['id', 'cons_12m', 'cons_gas_12m', 'cons_last_month', 'imp_cons', 'has_gas', 'churn']]
fig, axs = plt.subplots(nrows=1, figsize=(18, 5))
plot_distribution(consumption, 'cons_12m', axs)
fig, axs = plt.subplots(nrows=4, figsize=(18,25))
# Plot histogram
plot_distribution(consumption, "cons_12m", axs[0])
# Note that the gas consumption must have gas contract
plot_distribution(consumption[consumption["has_gas"] == "t"],"cons_gas_12m", axs[1])
plot_distribution(consumption, "cons_last_month", axs[2])
plot_distribution(consumption, "imp_cons", axs[3])
fig, axs = plt.subplots(nrows=4, figsize=(18,25))
# Plot histogram
sns.boxplot(x=consumption["cons_12m"], ax=axs[0])
sns.boxplot(x=consumption[consumption["has_gas"] == "t"]["cons_gas_12m"], ax=axs[1])
sns.boxplot(x=consumption["cons_last_month"], ax=axs[2])
sns.boxplot(x=consumption["imp_cons"], ax=axs[3])
# Remove scientific notation
for ax in axs:
ax.ticklabel_format(style='plain', axis='x')
# Set x-axis limit
axs[0].set_xlim(-200000, 2000000)
axs[1].set_xlim(-200000, 2000000)
axs[2].set_xlim(-20000, 100000)
plt.show()
sns.countplot(x="has_gas",hue='churn', data=data, color="r")
plt.xlabel('Client is also a gas client')
plt.ylabel('Frequency')
plt.title('Churn status of clients based on if they are also gas clients')
power = data[["id","pow_max", "churn"]]
fig, axs = plt.subplots(nrows=1, figsize=(18,10))
plot_distribution(power, "pow_max", axs)
others = data[["id","nb_prod_act","num_years_antig", "origin_up", "churn"]]
###Output
_____no_output_____
###Markdown
3.5. Electricity campaign the customer first subscribed to
###Code
origin = others.groupby([others["origin_up"],others["churn"]])["id"].count().unstack(level=1)
origin_percentage = (origin.div(origin.sum(axis=1), axis=0)*100)
plot_stacked_bars(origin_percentage, "Electricity campaign the customer first subscribed to")
###Output
_____no_output_____
###Markdown
3.6) Forecast
###Code
#list(data.columns)
data.rename(columns = {'forecast_price_energy_off_peak': 'forecast_price_energy_p1', 'forecast_price_energy_peak':'forecast_price_energy_p2', 'forecast_price_pow_off_peak':'forecast_price_pow_p1'}, inplace = True)
forecast = data[['forecast_cons_12m', 'forecast_cons_year', 'forecast_discount_energy',
'forecast_meter_rent_12m', 'forecast_price_energy_p1',
'forecast_price_energy_p2', 'forecast_price_pow_p1', 'id', 'churn']]
fig, axs = plt.subplots(nrows=4, figsize=(18,25))
# Plot histogram
plot_distribution(forecast, "forecast_cons_12m", axs[0])
plot_distribution(forecast, "forecast_cons_year", axs[1])
plot_distribution(forecast, "forecast_discount_energy", axs[2])
plot_distribution(forecast, "forecast_meter_rent_12m", axs[3])
###Output
_____no_output_____
###Markdown
3.7) Relationships between price of energy and customer churn for the three periods For the first period, there is a statistically significant difference in the price of energy between churned and retained customers; a quick significance check is sketched after the boxplots below.
###Code
data.rename(columns = {'price_off_peak_var': 'price_p1_var', 'price_peak_var':'price_p2_var', 'price_mid_peak_var':'price_p3_var', 'price_off_peak_fix':'price_p1_fix','price_peak_fix': 'price_p2_fix', 'price_mid_peak_fix':'price_p3_fix'}, inplace = True)
sns.catplot(x="churn", y="price_p1_var", kind="box", data=data)
plt.ylabel('Price of energy for the 1st period')
plt.xticks([0,1],['No', 'Yes'])
plt.title('Boxplot of energy price for customers in the 1st period')
sns.catplot(x="churn", y="price_p2_var", kind="box", data=data)
plt.ylabel('Price of energy for the 2nd period')
plt.xticks([0,1],['No', 'Yes'])
plt.title('Boxplot of energy price for customers in the 2nd period')
sns.catplot(x="churn", y="price_p3_var", kind="box", data=data)
plt.ylabel('Price of energy for the 3rd period')
plt.xticks([0,1],['No', 'Yes'])
plt.title('Boxplot of energy price for customers in the 3rd period')
###Output
_____no_output_____
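###Markdown
The significance claim above can be checked directly. Below is a minimal sketch using a Welch t-test and a Mann-Whitney U test on the first-period energy price; the usual 0.05 threshold and the renamed `price_p1_var` column are assumed.
###Code
# Hedged sketch: do churned and retained customers differ on price_p1_var? (scipy.stats was imported earlier)
churned = data.loc[data['churn'] == 1, 'price_p1_var'].dropna()
retained = data.loc[data['churn'] == 0, 'price_p1_var'].dropna()

t_stat, t_p = stats.ttest_ind(churned, retained, equal_var=False)  # Welch t-test (unequal variances)
u_stat, u_p = stats.mannwhitneyu(churned, retained, alternative='two-sided')
print(f"Welch t-test:   t = {t_stat:.3f}, p = {t_p:.4f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.4f}")
###Output
_____no_output_____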
###Markdown
**Price of the power in the first period and the Customer Churn**
###Code
sns.catplot(x="churn", y="price_p1_fix", kind="box", data=data)
plt.ylabel('Price of power for the 1st period')
plt.xticks([0,1],['No', 'Yes'])
plt.title('Boxplot of power price for customers in the 1st period')
data.to_csv('C:/Users/Karthika/Documents/Data Analytics Course/Module 2/processed_data_w_outliers.csv', index=False)
###Output
_____no_output_____
###Markdown
Process the image folders and labels
###Code
data = []
dir = "E:/MLDataset/Classify/AID/AID/"
labels = sorted(os.listdir(dir))
for i in labels:
for j in os.listdir(os.path.join(dir,i)):
data.append([f"{i}/{j}",i])
data = pd.DataFrame(data=data,columns=["path","label"])
print(data.head())
data.to_csv("data/data.csv",index=False,sep="\t")
Counter(data.label)
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000 entries, 0 to 9999
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 path 10000 non-null object
1 label 10000 non-null object
dtypes: object(2)
memory usage: 156.4+ KB
###Markdown
Sample images
###Code
def load_image(path):
image = cv2.imread(dir+path)
return cv2.cvtColor(image,cv2.COLOR_BGR2RGB)
plt.imshow(load_image(data.path[0]))
###Output
_____no_output_____
###Markdown
Analysis
###Code
plt.figure("",figsize=(12,8))
print(data.label.value_counts())
sns.countplot(y="label",data=data,orient='v')
###Output
Pond 420
Viaduct 420
DenseResidential 410
River 410
Beach 400
Industrial 390
Parking 390
Port 380
Farmland 370
Playground 370
Airport 360
Bridge 360
StorageTanks 360
Park 350
Commercial 350
Mountain 340
Square 330
BareLand 310
Desert 300
School 300
SparseResidential 300
Resort 290
MediumResidential 290
Stadium 290
Meadow 280
Center 260
RailwayStation 260
Forest 250
Church 240
BaseballField 220
Name: label, dtype: int64
###Markdown
Visualizing each class
###Code
labels
# num = 4
# for i in labels:
# a = data[data.label==i]
# a = a.sample(n=num)
# fig,ax = plt.subplots(1,num,sharex=True,sharey=True,figsize=(15,6))
# ax = ax.flatten()
# for j in range(num):
# ax[j].imshow(load_image(a.iloc[j,0]))
# ax[j].set_title(f"{os.path.split(a.iloc[j, 0])[-1]},{i}")
# plt.tight_layout()
###Output
_____no_output_____
###Markdown
Split into training and validation sets
###Code
split = int(len(data)*0.8)
data = data.sample(frac=1,random_state=2020) # shuffle
data[:split].to_csv("data/train.csv",index=False,sep="\t")
data[split:].to_csv("data/val.csv",index=False,sep="\t")
plt.figure("",figsize=(12,8))
co3 = Counter(data.label)
sns.lineplot(x=list(co3.keys()),y = list(co3.values()),label="All Data")
co = Counter(data[:split].label)
sns.lineplot(x=list(co.keys()),y = list(co.values()),label="train")
co2 = Counter(data[split:].label)
sns.lineplot(x=list(co2.keys()),y = list(co2.values()),label="val")
###Output
_____no_output_____
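###Markdown
Since the class counts above are moderately imbalanced, a stratified split would keep the per-class proportions identical in the train and validation files; the sketch below is an alternative to the random split used above and relies on scikit-learn.
###Code
# Hedged alternative to the random split above: stratify on the label so train/val keep the same class proportions
from sklearn.model_selection import train_test_split

train_df, val_df = train_test_split(data, test_size=0.2, random_state=2020, stratify=data['label'])
print(train_df['label'].value_counts(normalize=True).head())
print(val_df['label'].value_counts(normalize=True).head())
###Output
_____no_output_____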
###Markdown
Analysis by Quintile Values
###Code
fig, axes = plt.subplots(ncols=2, nrows=3, figsize=(10, 8), dpi=100,
constrained_layout=True)
stats = ['fare', 'trip_total',
'trip_seconds', 'trip_miles',
'fare_per_sec', 'fare_per_mile']
titles = ['Fare', 'Trip Total',
'Duration (sec.)', 'Distance (mi.)',
'Fare per Second', 'Fare per Mile']
for s, t, ax in zip(stats, titles, axes.flatten()):
pdf_taxi = df_taxi.loc[:, s].to_frame()
pdf_taxi.loc[:, 'percentile'] = pd.qcut(df_taxi.loc[:, s], 5, labels=range(1, 6))
pdf_taxi = pdf_taxi.groupby('percentile').mean()
pdf_tnp = df_tnp.loc[:, s].to_frame()
pdf_tnp.loc[:, 'percentile'] = pd.qcut(df_tnp.loc[:, s], 5, labels=range(1, 6))
pdf_tnp = pdf_tnp.groupby('percentile').mean()
pdf = pd.concat([pdf_taxi, pdf_tnp], axis=1)
pdf.columns = ['Taxi', 'Ridesharing']
pdf.plot.bar(ax=ax, color=['#FF6712', '#DDDDDD'], rot=0)
ax.set_title(t)
sns.despine()
fig, axes = plt.subplots(ncols=2, figsize=(10, 3), dpi=100,
constrained_layout=True)
stats = ['fare', 'trip_total']
titles = ['Avg. Fare', 'Avg. Trip Total']
ylabels = ['$', '$']
for s, t, yl, ax in zip(stats, titles, ylabels, axes.flatten()):
pdf_taxi = df_taxi.loc[:, s].to_frame()
pdf_taxi.loc[:, 'percentile'] = pd.qcut(df_taxi.loc[:, s], 5, labels=range(1, 6))
pdf_taxi = pdf_taxi.groupby('percentile').mean()
pdf_tnp = df_tnp.loc[:, s].to_frame()
pdf_tnp.loc[:, 'percentile'] = pd.qcut(df_tnp.loc[:, s], 5, labels=range(1, 6))
pdf_tnp = pdf_tnp.groupby('percentile').mean()
pdf = pd.concat([pdf_taxi, pdf_tnp], axis=1)
pdf.columns = ['Taxi', 'Ridesharing']
pdf.plot.bar(ax=ax, color=['#FF6712', '#DDDDDD'], rot=0)
ax.set_title(t)
ax.set_ylabel(yl)
ax.set_xlabel("Quintile")
print(pdf)
sns.despine()
fig, axes = plt.subplots(ncols=2, figsize=(10, 3), dpi=100,
constrained_layout=True)
stats = ['trip_seconds', 'trip_miles']
titles = ['Avg. Duration (in minutes)', 'Avg. Distance (in miles)']
ylabels = ['minutes', 'miles']
for s, t, yl, ax in zip(stats, titles, ylabels, axes.flatten()):
pdf_taxi = df_taxi.loc[:, s].to_frame()
pdf_taxi.loc[:, 'percentile'] = pd.qcut(df_taxi.loc[:, s], 5, labels=range(1, 6))
pdf_taxi = pdf_taxi.groupby('percentile').mean()
pdf_tnp = df_tnp.loc[:, s].to_frame()
pdf_tnp.loc[:, 'percentile'] = pd.qcut(df_tnp.loc[:, s], 5, labels=range(1, 6))
pdf_tnp = pdf_tnp.groupby('percentile').mean()
pdf = pd.concat([pdf_taxi, pdf_tnp], axis=1)
if (s == 'trip_seconds'):
pdf = pdf / 60
pdf.columns = ['Taxi', 'Ridesharing']
pdf.plot.bar(ax=ax, color=['#FF6712', '#DDDDDD'], rot=0)
ax.set_title(t)
ax.set_ylabel(yl)
ax.set_xlabel("Quintile")
print(pdf)
sns.despine()
fig, axes = plt.subplots(ncols=2, figsize=(10, 3), dpi=100,
constrained_layout=True)
stats = ['fare_per_sec', 'fare_per_mile']
titles = ['Avg. Fare per Minute', 'Avg. Fare per Mile']
ylabels = ['$ per minute', '$ per mile']
for s, t, yl, ax in zip(stats, titles, ylabels, axes.flatten()):
pdf_taxi = df_taxi.loc[:, s].to_frame()
pdf_taxi.loc[:, 'percentile'] = pd.qcut(df_taxi.loc[:, s], 5, labels=range(1, 6))
pdf_taxi = pdf_taxi.groupby('percentile').mean()
pdf_tnp = df_tnp.loc[:, s].to_frame()
pdf_tnp.loc[:, 'percentile'] = pd.qcut(df_tnp.loc[:, s], 5, labels=range(1, 6))
pdf_tnp = pdf_tnp.groupby('percentile').mean()
pdf = pd.concat([pdf_taxi, pdf_tnp], axis=1)
if (s == 'fare_per_sec'):
pdf = pdf * 60
pdf.columns = ['Taxi', 'Ridesharing']
pdf.plot.bar(ax=ax, color=['#FF6712', '#DDDDDD'], rot=0)
ax.set_title(t)
ax.set_ylabel(yl)
ax.set_xlabel("Quintile")
print(pdf)
print(pdf.diff(axis=1))
sns.despine()
###Output
Taxi Ridesharing
percentile
1 0.663055 0.372156
2 0.819056 0.551847
3 0.946160 0.645146
4 1.126548 0.760798
5 2.022173 1.178683
Taxi Ridesharing
percentile
1 NaN -0.290899
2 NaN -0.267209
3 NaN -0.301013
4 NaN -0.365749
5 NaN -0.843490
Taxi Ridesharing
percentile
1 2.677449 1.287966
2 3.919445 1.947984
3 5.302143 2.539083
4 6.796507 3.352901
5 35.707535 10.525294
Taxi Ridesharing
percentile
1 NaN -1.389483
2 NaN -1.971461
3 NaN -2.763061
4 NaN -3.443606
5 NaN -25.182241
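###Markdown
The same quintile aggregation is repeated for every statistic above; a small helper, sketched here under the assumption that the `df_taxi` and `df_tnp` frames are as used above, collapses that repetition.
###Code
# Hedged refactor: one helper returning per-quintile means for both datasets
def quintile_means(col, n_bins=5):
    """Mean of `col` within equal-frequency bins, for taxi and ridesharing trips."""
    out = {}
    for name, frame in [('Taxi', df_taxi), ('Ridesharing', df_tnp)]:
        bins = pd.qcut(frame[col], n_bins, labels=range(1, n_bins + 1))
        out[name] = frame.groupby(bins)[col].mean()
    return pd.DataFrame(out)

quintile_means('fare').round(2)
###Output
_____no_output_____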
###Markdown
Tipping Behavior Comparison between Taxi and Ridesharing
###Code
pdf = df.pivot_table(index='TransportType', columns='has_tip', aggfunc="size")
pdf = pdf.divide(pdf.sum(axis=1), axis=0)
pdf.columns = ['Did Not Tip', 'Tipped']
fig, ax = plt.subplots(figsize=(3, 4), dpi=100)
pdf.loc[:, ['Tipped', 'Did Not Tip']].plot \
.bar(stacked=True, color=['#FF6712', '#DDDDDD'], rot=0, ax=ax)
ax.set_xticklabels(['Taxi', 'Ridesharing'])
ax.set_xlabel("")
ax.set_ylabel("% of Trips")
ax.set_title("% of Trips that Tipped / Did Not Tip")
sns.despine()
pdf
###Output
_____no_output_____
###Markdown
Tips per Payment Type
###Code
df_payment_tips = df_taxi.pivot_table(index=['payment_type'],
columns='has_tip', aggfunc='size')
df_payment_tips.columns = ['Did Not Tip', 'Tipped']
fig, ax = plt.subplots(figsize=(7, 4), dpi=100)
df_payment_tips.loc[:, ['Tipped', 'Did Not Tip']].divide(df_payment_tips.sum(axis=1), axis=0) \
.plot.bar(stacked=True, ax=ax, color=['#FF6712', '#DDDDDD'], rot=0)
sns.despine()
ax.set_xticklabels(['Cash', 'Credit Card', 'Mobile', 'Prepaid Card', 'Unknown'])
ax.set_xlabel('Payment Type')
ax.set_ylabel('% of Trips')
ax.set_title("% of Trips with Tips per Payment Type")
print(df_payment_tips.loc[:, ['Tipped', 'Did Not Tip']].divide(df_payment_tips.sum(axis=1), axis=0))
df_payment_tips = df_taxi.pivot_table(index=['payment_type'],
columns='has_tip', aggfunc='size')
df_payment_tips.columns = ['Did Not Tip', 'Tipped']
fig, ax = plt.subplots(figsize=(7, 2), dpi=100)
# df_payment_tips.loc[:, ['Tipped', 'Did Not Tip']].divide(df_payment_tips.sum(axis=1), axis=0) \
# .plot.bar(stacked=True, ax=ax, color=['#FF6712', '#DDDDDD'], rot=0)
df_payment_tips.loc[:, ['Tipped', 'Did Not Tip']] \
.plot.bar(stacked=True, ax=ax, color=['#FF6712', '#DDDDDD'], rot=0)
sns.despine()
ax.set_xticklabels(['Cash', 'Credit Card', 'Mobile', 'Prepaid Card', 'Unknown'])
ax.set_xlabel('Payment Type')
ax.set_ylabel('Trip Count')
ax.set_title("No. Trips with Tips per Payment Type")
###Output
_____no_output_____
###Markdown
Define Utility Functions
###Code
def retrieve_file(file_name, gs_bucket):
blob = storage.Blob(file_name, gs_bucket)
content = blob.download_as_string()
return content
def display_description(field_name, description):
print(re.search(re.escape(field_name) + r':.+\n\n(.+\n)+', description)[0])
def split_data_sequentially(data, test_size=0.1):
test_length = int(len(data) * test_size)
train = data[:-test_length].copy()
test = data[-test_length:].copy()
return train, test
def transform_dataset(df, **kwargs):
if kwargs:
features_missing_values = kwargs['features_missing_values']
df.drop(features_missing_values, axis=1, inplace=True)
for f in ['GarageType', 'GarageFinish', 'GarageQual', 'GarageCond']:
df[f].fillna('None', inplace=True)
df.GarageYrBlt.fillna(0, inplace=True, downcast='infer')
for f in ['BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2']:
df[f].fillna('None', inplace=True)
df['years_built_sold'] = [max(0, sold-built) for sold, built in zip(df['YrSold'], df['YearBuilt'])]
df['years_remod_sold'] = [max(0, sold-remod) for sold, remod in zip(df['YrSold'], df['YearRemodAdd'])]
df['YearBuilt_cat'] = pd.cut(df['YearBuilt'],
bins=[0, 1910, 1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010],
labels=['0', '1910', '1920', '1930', '1940', '1950', '1960', '1970', '1980', '1990', '2000']).astype(np.dtype('O'))
df['YearRemodAdd_cat'] = pd.cut(df['YearRemodAdd'],
bins=[0, 1910, 1920, 1930, 1940, 1950, 1960, 1970, 1980, 1990, 2000, 2010],
labels=['0', '1910', '1920', '1930', '1940', '1950', '1960', '1970', '1980', '1990', '2000']).astype(np.dtype('O'))
df['years_remod_sold_bins'] = pd.cut(df['years_remod_sold'],
bins=[0, 5, 10, 20, 30, 40, 50, 60],
labels=['0', '5', '10', '20', '30', '40', '50']).astype(np.dtype('O'))
df['years_built_sold_bins'] = pd.cut(df['years_built_sold'],
bins=[0, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110],
labels=['0', '5', '10', '20', '30', '40', '50', '60', '70', '80', '90', '100']).astype(np.dtype('O'))
df['year_built_x_year_remod'] = df['YearBuilt_cat'] + '_x_' + df['YearRemodAdd_cat']
cat_features = df.dtypes[df.dtypes==np.dtype('O')].index.to_list()
if kwargs:
target_enc = kwargs['target_encoder']
encoded_features = target_enc.transform(df[cat_features])
df.loc[:, cat_features] = encoded_features
median_values = kwargs['median_values']
for f in df.columns:
median_values[f] = df[f].median()
df.fillna(median_values, inplace=True, downcast='infer')
for f in ['LotArea', 'LotFrontage', '1stFlrSF', 'GrLivArea']:
df[f + '_log'] = np.log1p(df[f])
df.drop(f, axis=1, inplace=True)
return df
###Output
_____no_output_____
###Markdown
Ingest Data
###Code
client = storage.Client()
bucket = client.get_bucket('ames-house-dataset')
data = pd.read_csv(BytesIO(retrieve_file('train.csv', bucket)), index_col=0)
desc = retrieve_file('data_description.txt', bucket).decode('utf-8')
data.head()
###Output
_____no_output_____
###Markdown
Split Data
###Code
test_size = 0.2
random_seed = 42
train, valid = model_selection.train_test_split(data, test_size=test_size, random_state=random_seed)
train = train.copy()
valid = valid.copy()
###Output
_____no_output_____
###Markdown
Identify Features With Missing Values
###Code
fig = plt.figure(figsize=(8, 8))
missing_values = train.isna().sum() / len(train)
to_plot = missing_values[missing_values > 0].sort_values()
plt.barh(to_plot.index, to_plot.values)
plt.xlabel('missing values')
plt.ylabel('column name')
plt.title('Missing Values')
plt.show()
display_description('LotFrontage', desc)
###Output
LotFrontage: Linear feet of street connected to property
LotArea: Lot size in square feet
###Markdown
We can drop all features where more than 40% of the information is missing.
###Code
features_missing_values = missing_values[missing_values>0.4].index.to_list()
features_missing_values
train.drop(features_missing_values, axis=1, inplace=True)
missing_values = train.isna().sum() / len(train)
###Output
_____no_output_____
###Markdown
An interesting fact to notice is that all garage-related fields have the same number of missing values. This could indicate that houses where those features have missing values do not have a garage.
###Code
missing_values[missing_values.index.str.match(r'^Garage') & (missing_values > 0)]
for f in ['GarageType', 'GarageFinish', 'GarageQual', 'GarageCond']:
train[f].fillna('None', inplace=True)
train.GarageYrBlt.fillna(0, inplace=True, downcast='infer')
missing_values = train.isna().sum() / len(train)
###Output
_____no_output_____
###Markdown
We can observe the same for basement.
###Code
missing_values[missing_values.index.str.match(r'^Bsmt') & (missing_values > 0)]
for f in ['BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2']:
train[f].fillna('None', inplace=True)
missing_values = train.isna().sum() / len(train)
###Output
_____no_output_____
###Markdown
In the case of `MasVnrType`, `MasVnrArea` and `Electrical`, there is already a `None` value, so a missing entry might mean something else. The number of rows where those features have missing values is less than 10, so we can simply drop them. However, if we deploy the model in production, missing values encountered during serving would not be handled. Therefore, we can set a strategy where: * For numeric features, the median will be imputed; * For categorical features, category boosting (target) encoding will be applied, and subsequently the median value of the transformed feature will be imputed. Add New Features
###Code
train['years_built_sold'] = [max(0, sold-built) for sold, built in zip(train['YrSold'], train['YearBuilt'])]
train['years_remod_sold'] = [max(0, sold-remod) for sold, remod in zip(train['YrSold'], train['YearRemodAdd'])]
###Output
_____no_output_____
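###Markdown
To make the imputation strategy described above concrete, here is a minimal sketch: medians for numeric columns and a CatBoost-style target encoding for categoricals, followed by medians of the encoded values. The `category_encoders` dependency and the `SalePrice` target name are assumptions, and this is not the exact pipeline used later in `transform_dataset`.
###Code
# Hedged sketch of the imputation strategy above (assumes the 'SalePrice' target and the category_encoders package)
import category_encoders as ce

num_cols = train.select_dtypes(include=[np.number]).columns.drop('SalePrice')
cat_cols = train.select_dtypes(include=['object']).columns

median_values = train[num_cols].median()
encoder = ce.CatBoostEncoder(cols=list(cat_cols))
encoded_cats = encoder.fit_transform(train[cat_cols], train['SalePrice'])

train_imputed = train.copy()
train_imputed[num_cols] = train_imputed[num_cols].fillna(median_values)
train_imputed[cat_cols] = encoded_cats.fillna(encoded_cats.median())
train_imputed[num_cols.tolist() + cat_cols.tolist()].isna().sum().sum()
###Output
_____no_output_____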
###Markdown
Data Cleaning
###Code
def str_get_dummies(df, columns, sep=',', drop_first=False, prefix=None, prefix_sep='_'):
"""Wrapper of pd.Series.str.get_dummies() to behave like pd.get_dummies()"""
for p, col in zip(prefix, columns):
str_dummy_df = df[col].str.get_dummies(sep=sep)
if prefix is not None:
prefixed_cols = [prefix_sep.join([p, c]) for c in str_dummy_df.columns]
str_dummy_df.columns = prefixed_cols
if drop_first:
first_col = str_dummy_df.columns[0]
str_dummy_df = str_dummy_df.drop(columns=[first_col])
df = df.drop(columns=[col])
df = pd.concat((df, str_dummy_df), axis=1)
return df
def extract_rotten_rating(rating_list):
"""Extract info from ratings column using pd.Series.apply()"""
try:
ratings = json.loads(rating_list.replace("'", '"'))
for rating in ratings:
if rating['Source'] == 'Rotten Tomatoes':
return float(rating['Value'].replace('%', ''))
except AttributeError:
return np.nan
# Custom function to extract rotten tomatoes ratings
movie['rotten_tomatoes'] = movie['Ratings'].apply(extract_rotten_rating)
# Convert numeric columns stored as strings
movie['Runtime'] = pd.to_numeric(movie['Runtime'].str.split(' ').str[0])
movie['BoxOffice'] = pd.to_numeric(movie['BoxOffice'].str.replace(r'[\$,]', '', regex=True))
movie['imdbVotes'] = pd.to_numeric(movie['imdbVotes'].str.replace(',', ''))
# Convert datetime columns stored as strings
movie['Released'] = pd.to_datetime(movie['Released'])
movie['added_to_netflix'] = pd.to_datetime(movie['added_to_netflix'])
movie['added_to_netflix_year'] = movie['added_to_netflix'].dt.year
# Extract numbers from Awards columns
movie['award_wins'] = movie['Awards'].str.extract(r'(\d+) win').astype(float)
movie['award_noms'] = movie['Awards'].str.extract(r'(\d+) nomination').astype(float)
movie['oscar_wins'] = movie['Awards'].str.extract(r'Nominated for (\d+) Oscar').astype(float)
award_cols = ['award_wins', 'award_noms', 'oscar_wins']
movie[award_cols] = movie[award_cols].fillna(0)
drop_columns = ['Poster', 'flixable_url', 'Response',
'Awards', 'Rated', 'imdbID', 'DVD', 'Website',
'BoxOffice', 'Released', 'added_to_netflix',
'Writer', 'Actors', 'Plot',
'rotten_tomatoes', 'Metascore', 'Production',
'totalSeasons', 'Runtime', 'Director',
'Title', 'Ratings']
movie = movie.drop(columns=drop_columns)
list_cols = ['Genre', 'Language', 'Country']
movie_dummy = str_get_dummies(movie,
columns=list_cols,
sep=', ',
prefix=list_cols,
drop_first=False)
movie_dummy = movie_dummy.dropna(subset=['imdbRating'])
movie_dummy.isna().mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
EDA
###Code
def barplot_dummies(df, prefix, max_n=15):
cols = [c for c in df if c.startswith(prefix)]
counts = df[cols].sum().sort_values(ascending=False)
counts = counts[:max_n]
counts.index = [i.replace(prefix, '') for i in counts.index]
counts.plot.barh()
plt.title(prefix)
plt.show()
plot_cols = ['Type', 'mpaa_rating']
for plot_col in plot_cols:
fig = sns.countplot(plot_col, data=movie)
fig.set_xticklabels(fig.get_xticklabels(), rotation=90)
plt.show()
prefixes = ['Genre_', 'Country_', 'Language_']
for prefix in prefixes:
barplot_dummies(movie_dummy, prefix)
sns.heatmap(movie.corr(), vmin=-1, vmax=1)
plt.show()
###Output
_____no_output_____
###Markdown
Model Prep
###Code
movie_dummy = str_get_dummies(movie,
columns=list_cols,
sep=', ',
prefix=list_cols,
drop_first=True)
movie_dummy = pd.get_dummies(movie_dummy,
columns=['Type', 'mpaa_rating'],
drop_first=True)
movie_dummy = movie_dummy.dropna()
movie_dummy.shape
y_col = 'imdbRating'
X = movie_dummy.drop(columns=[y_col])
y = movie_dummy[y_col]
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=42)
n_trees = 200
params = {'subsample': [0.5, 0.75, 1.0],
'colsample_bytree': [0.5, 0.75, 1.0],
'max_depth': [3, 4, 7]}
xgb_cv = XGBRegressor(objective='reg:squarederror',
n_estimators=n_trees,
                      learning_rate=2 / n_trees)
xgb_cv = GridSearchCV(xgb_cv, params, cv=2, verbose=1)
xgb_cv.fit(X_train, y_train)
train_score = xgb_cv.score(X_train, y_train)
test_score = xgb_cv.score(X_test, y_test)
print(f'Train score: {train_score:.2f}')
print(f'Test score: {test_score:.2f}')
y_pred = xgb_cv.predict(X_test)
min_pred = min(y_pred)
max_pred = max(y_pred)
x = [min_pred, max_pred]
y = [min_pred, max_pred]
plt.scatter(y_pred, y_test)
plt.plot(x, y)
plt.xlabel('Fitted')
plt.ylabel('Actual')
plt.xlim((min_pred, max_pred))
plt.ylim((min_pred, max_pred))
plt.show()
###Output
_____no_output_____
###Markdown
load some image
###Code
from keras.preprocessing.image import load_img, img_to_array
from IPython.display import display
from PIL import Image
npic = 5 # Displaying 5 images from the dataset
npix = 224
target_size = (npix,npix,3)
count = 1
fig = plt.figure(figsize=(10,20))
for jpgfnm in uni_filenames[-5:]:
filename = images_dir + '/' + jpgfnm
captions = list(df["captions"].loc[df["id"]==jpgfnm].values)
image_load = load_img(filename, target_size=target_size)
ax = fig.add_subplot(npic,2,count,xticks=[],yticks=[])
ax.imshow(image_load)
count += 1
ax = fig.add_subplot(npic,2,count)
plt.axis('off')
ax.plot()
ax.set_xlim(0,1)
ax.set_ylim(0,len(captions))
for i, caption in enumerate(captions):
ax.text(0,i,caption,fontsize=20)
count += 1
plt.show()
###Output
_____no_output_____
###Markdown
Clean captions for further research
###Code
# Define a function that builds a word-frequency table over all captions (the top 5 are displayed below)
def df_word(df):
vocabulary = []
for txt in df.captions.values:
vocabulary.extend(txt.split())
print('Vocabulary Size: %d' % len(set(vocabulary)))
ct = Counter(vocabulary)
dfword = pd.DataFrame({"word":list(ct.keys()),"count":list(ct.values())})
dfword = dfword.sort_values("count",ascending=False)
dfword = dfword.reset_index()[["word","count"]]
return(dfword)
dfword = df_word(df)
dfword.head(5)
import string
print("\nLowercase..")
def lowercase(text_original):
text_lower = text_original.lower()
return(text_lower)
print("\nRemove punctuations..")
def remove_punctuation(text_original):
text_no_punctuation = text_original.translate(str.maketrans('','',string.punctuation))
return(text_no_punctuation)
print("\nRemove a single character word..")
def remove_single_character(text):
text_len_more_than1 = ""
for word in text.split():
if len(word) > 1:
text_len_more_than1 += " " + word
return(text_len_more_than1)
print("\nRemove words with numeric values..")
def remove_numeric(text,printTF=False):
text_no_numeric = ""
for word in text.split():
isalpha = word.isalpha()
if printTF:
print(" {:10} : {:}".format(word,isalpha))
if isalpha:
text_no_numeric += " " + word
else:
print(word)
return(text_no_numeric)
def text_clean(text_original):
text = lowercase(text_original)
text = remove_punctuation(text)
# text = remove_single_character(text)
text = remove_numeric(text)
return(text)
for i, caption in enumerate(df.captions.values):
newcaption = text_clean(caption)
df["captions"].iloc[i] = newcaption
###Output
covid19
n95
covid19
covid19
200
###Markdown
Length Statistic
###Code
import numpy as np
lengths = []
for i, caption in enumerate(df.captions.values):
lengths.append(len(caption.split(' ')))
print("Mean lengths: ", np.mean(lengths))
print("Number of sentences: {}".format(len(df)))
index_min_length = np.argmin(lengths)
for i, length in enumerate(lengths):
if length == lengths[index_min_length]:
print(i)
print("Index min length: ", index_min_length)
print(df.id.values[index_min_length])
print("Min length {}, sentence: {}".format(lengths[index_min_length], df.captions.values[index_min_length]))
# # print(max(lengths))
###Output
Mean lengths: 12.882901994060246
Number of sentences: 9428
80
1463
1464
2266
3156
6234
6292
7683
Index min length: 80
2802F3DAED.jpg
Min length 5, sentence: hình minh họa virus
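###Markdown
For later sequence padding, a cutoff length can be taken from this distribution rather than the raw maximum; a small sketch follows, with the 95th percentile as an assumed (arbitrary) choice.
###Code
# Hedged sketch: choose a padding length from the caption-length distribution (95th percentile is an arbitrary choice)
length_arr = np.array(lengths)
max_caption_len = int(np.percentile(length_arr, 95))
print("95th percentile length:", max_caption_len)
print("Captions longer than this: {:.2%}".format((length_arr > max_caption_len).mean()))
###Output
_____no_output_____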
###Markdown
Plotting the top 50 words that appear in the cleaned dataset
###Code
topn = 50
def plthist(dfsub, title="The top 50 most frequently appearing words"):
plt.figure(figsize=(20,3))
plt.bar(dfsub.index,dfsub["count"])
plt.yticks(fontsize=20)
plt.xticks(dfsub.index,dfsub["word"],rotation=90,fontsize=20)
plt.title(title,fontsize=20)
plt.show()
dfword = df_word(df)
plthist(dfword.iloc[:topn,:],
title="The top 50 most frequently appearing words")
plthist(dfword.iloc[-topn:,:],
title="The least 50 most frequently appearing words")
###Output
Vocabulary Size: 1626
###Markdown
calculate mean and std
###Code
from config import CFG
import pandas as pd
from transformation import get_transforms
from dataset import TestDataset, TrainDataset
import torch
from torch.utils.data import DataLoader
from utils import *
from tqdm.auto import tqdm
#---------READ DATA--------------------
df = pd.read_csv('../data/vietcap4h-public-test/test_captions.csv')
def get_test_file_path(image_id):
return CFG.test_path + "/images_public_test/{}".format(image_id)
def get_test_id(path_file):
return path_file.split('/')[-1]
test = df
test['file_path'] = test['id'].apply(get_test_file_path)
print(f'test.shape: {test.shape}')
test.head()
# ---------------- CALCULATE MEAN, STD---------------------
def calculate_mean_std():
test_ds = TestDataset(test, transform = get_transforms(data = 'valid'))
test_loader = DataLoader(test_ds, batch_size = CFG.batch_size, shuffle = False, num_workers = CFG.num_workers)
print('==> Computing mean and std..')
mean = 0.
std = 0.
for images in tqdm(test_loader, total = len(test_loader)):
inputs = images.float()
        # Rearrange the batch to shape [B, C, W * H]
inputs = inputs.view(inputs.size(0), inputs.size(1), -1)
# Compute mean and std here
mean += inputs.mean(2).sum(0)/255.0
std += inputs.std(2).sum(0)/255.0
mean /= len(test_ds)
std /= len(test_ds)
print(mean, std)
calculate_mean_std()
###Output
==> Computing mean and std..
###Markdown
eda.ipynb Author: 艾宏峰 Created: 2020.10.19 Modified: 2020.10.22 EDA: 1. Data overview 2. Descriptive statistics 3. Feature correlation analysis 4. Outlier detection 5. Missing-value detection 6. Duplicate-sample detection 7. Other targeted checks
###Code
import pandas as pd
import numpy as np
from IPython.display import display
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif']=['SimHei']
import math
from tqdm import tqdm
import seaborn as sns
import palettable
import datetime
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
1. Data overview |Field| Type | Description||--|--|--||QUEUE_ID| INT | Queue identifier; each ID represents a unique queue||CU| INT | Queue size; different sizes provide different amounts of resources, 1 CU = 1 core and 4 GB of memory||STATUS| STRING| Queue status; whether the queue is currently available||QUEUE_TYPE| STRING| Queue type; different types suit different workloads, most commonly general-purpose (general) and SQL queues||PLATFORM| STRING| Queue platform; the machine platform the queue was created on||CPU_USAGE| INT | CPU usage; average CPU utilisation across the cluster nodes||MEM_USAGE| INT | Memory usage; average memory utilisation across the cluster nodes||LAUNCHING_JOB_NUMS| INT | Number of submitted jobs, i.e. jobs waiting to run||RUNNING_JOB_NUMS| INT | Number of running jobs||SUCCEED_JOB_NUMS| INT | Number of completed jobs||CANCELLED_JOB_NUMS| INT | Number of cancelled jobs||FAILED_JOB_NUMS| INT | Number of failed jobs||DOTTING_TIME| BIGINT| Collection time; metrics are collected every 5 minutes||RESOURCE_TYPE| STRING| Resource type; the machine type the queue was created on||DISK_USAGE| INT | Disk usage|
###Code
# Load the data
train = pd.read_csv("../data/train.csv")
test = pd.read_csv("../data/evaluation_public.csv")
print("训练集部分数据展示:")
display(train.head())
print("测试集部分数据展示:")
display(test.head(10))
###Output
训练集部分数据展示:
###Markdown
2. Descriptive statistics
###Code
# Based on the official field descriptions and the data preview, collect the categorical, continuous and time-series variable names
contvar_names = ['CPU_USAGE', 'MEM_USAGE', 'LAUNCHING_JOB_NUMS', 'RUNNING_JOB_NUMS', 'SUCCEED_JOB_NUMS', 'CANCELLED_JOB_NUMS', 'FAILED_JOB_NUMS', 'DISK_USAGE']
catvar_names = ['CU', 'STATUS', 'QUEUE_TYPE', 'PLATFORM', 'RESOURCE_TYPE']
timevar_names = ['DOTTING_TIME']
def stat_analysis(data, cont_var_names, cat_var_names):
'''
    Parameters:
    1. data (pd.DataFrame): the input data.
    2. cont_var_names (list): names of the continuous variables.
    3. cat_var_names (list): names of the categorical variables.
'''
# 样本数量、变量数量
sample_num, var_num = data.shape
print("样本数量: %d" % sample_num)
print("变量数量(包括因变量): %d" % var_num)
# 连续型变量 (类型、平均数、极值)
print("连续型变量 (类型、平均数、极值):")
temp_dict = {"类型":[], "平均值":[], "最小值":[], "最大值":[]}
names = []
for cont_var_name in cont_var_names:
names.append(cont_var_name)
temp_dict["类型"].append(data.dtypes[cont_var_name])
temp_dict["平均值"].append(np.mean(data[cont_var_name]))
temp_dict["最小值"].append(min(data[cont_var_name]))
temp_dict["最大值"].append(max(data[cont_var_name]))
cont_df = pd.DataFrame(temp_dict, index = names)
display(cont_df)
# 类别型变量:数量
print("类别型变量(类型、数量):")
temp_dict = {"类型":[], "数量":[], "unique_cats":[]}
names = []
for cat_var_name in cat_var_names:
names.append(cat_var_name)
temp_dict["类型"].append(data.dtypes[cat_var_name])
temp_dict["数量"].append(len(np.unique(data[cat_var_name].astype(str))))
temp_dict["unique_cats"].append(np.unique(data[cat_var_name].astype(str)))
cat_df = pd.DataFrame(temp_dict, index = names)
display(cat_df)
# 变量分布图
print("连续型变量分布直方图:")
display_num = len(cont_var_names)
display_var_names = cont_var_names
col_num = 3
row_num = math.ceil(display_num / col_num)
fig = plt.figure(figsize = (12,12))
for i in range(1, display_num + 1):
plt.subplot(row_num, col_num, i)
sns.distplot(data[data[display_var_names[i-1]].notnull()][display_var_names[i-1]])
plt.show()
print("类别型变量分布直方图:")
display_num = len(cat_var_names)
display_var_names = cat_var_names
col_num = 3
row_num = math.ceil(display_num / col_num)
fig = plt.figure(figsize = (12,12))
for i in range(1, display_num + 1):
plt.subplot(row_num, col_num, i)
x = list(data[display_var_names[i-1]].value_counts().index)
y = list(data[display_var_names[i-1]].value_counts().values)
plt.bar(range(len(x)), y)
plt.xticks(range(len(x)), [str(v) for v in x])
plt.xlabel(display_var_names[i-1])
for x_loc, y_loc in zip(range(len(x)), y):
plt.text(float(x_loc)+0.05, float(y_loc)+0.05,'%.2f' % float(y_loc), ha='center',va='bottom')
plt.show()
print("训练集:")
stat_analysis(train, contvar_names, catvar_names)
print("测试集:")
stat_analysis(test, contvar_names, catvar_names)
###Output
训练集:
样本数量: 501730
变量数量(包括因变量): 15
连续型变量 (类型、平均数、极值):
###Markdown
3. Feature correlation analysis
###Code
# Compute the correlation matrix with corr()
train_corr = train.corr(method='pearson')
plt.figure(figsize=(11, 9),dpi=100)
sns.heatmap(data=train_corr,
vmax=0.3,
cmap=palettable.cmocean.diverging.Curl_10.mpl_colors,
annot=True,
fmt=".2f",
annot_kws={'size':8, 'weight':'normal', 'color':'#253D24'},
            mask=np.triu(np.ones_like(train_corr,dtype=bool)),# show only the lower-triangle cells
            square=True, linewidths=.5,# draw each cell's border with the given width
cbar_kws={"shrink": .5}
)
plt.show()
###Output
_____no_output_____
###Markdown
4. Outlier detection
###Code
# Look for outliers from the raw values
# Draw box-and-whisker plots
def boxplot(data, cont_var_names):
    '''Box plots for the continuous variables only'''
fig = plt.figure(figsize = (16, 4))
x = [train[i] for i in contvar_names]
labs = cont_var_names
    plt.boxplot(x[1:], labels=labs[1:]) # the time_point variable is dropped here
plt.xlabel("Continuous Variables")
plt.ylabel("Values")
plt.show()
boxplot(train, contvar_names)
###Output
_____no_output_____
###Markdown
5. Missing-value detection
###Code
def cal_miss_info(data):
miss_count = data.isnull().sum().sort_values(ascending=False)
miss_pert = miss_count / len(data)
miss_info = pd.concat([miss_count, miss_pert], axis=1, keys=["缺失计数", "缺失百分比"])
print(miss_info)
print("===============\n训练集:\n===============")
cal_miss_info(train)
print("===============\n测试集:\n===============")
cal_miss_info(test)
# Which queues are the missing DISK_USAGE and RESOURCE_TYPE values concentrated in?
du_miss_qids = np.unique(train[train['DISK_USAGE'].isnull()]['QUEUE_ID'])
rt_miss_qids = np.unique(train[train['RESOURCE_TYPE'].isnull()]['QUEUE_ID'])
print("缺失的DISK_USAGE集中在哪些队列:", du_miss_qids)
print("缺失的RESOURCE_TYPE集中在哪些队列:", rt_miss_qids)
###Output
缺失的DISK_USAGE集中在哪些队列: [ 297 298 20889 21487 21671 21673 81221 82695 82697 82929 83109 83609]
缺失的RESOURCE_TYPE集中在哪些队列: [ 297 298 20889 21487 21671 21673 81221 82695 82697 82929 83109 83609]
###Markdown
In addition, comparing DISK_USAGE and RESOURCE_TYPE shows that they are always missing together. 6. Duplicate-sample detection
###Code
def duplicate_det(data):
if data.duplicated(subset=['QUEUE_ID', 'CU', 'DOTTING_TIME']).any() != True:
print("无重复无样本")
else:
print("有重复样本")
print("重复样本数量:", len(data[data.duplicated(subset=['QUEUE_ID', 'CU', 'DOTTING_TIME'])]))
print("===============\n训练集:\n===============")
duplicate_det(train)
print("===============\n测试集:\n===============")
duplicate_det(test)
###Output
===============
训练集:
===============
有重复样本
重复样本数量: 13586
===============
测试集:
===============
无重复无样本
###Markdown
7. Other checks (1) For the same QUEUE_ID, are CU, STATUS, QUEUE_TYPE, PLATFORM and RESOURCE_TYPE always the same? (2) Convert DOTTING_TIME from its raw UNIX (millisecond) format to readable timestamps and check for anomalies. (3) For the same QUEUE_ID, is DOTTING_TIME perfectly evenly spaced, or is data missing? (4) Sample a few duplicated records and their neighbouring normal records to see whether the duplication has an obvious cause or a sensible preprocessing fix. (1) For the same QUEUE_ID, are CU, STATUS, QUEUE_TYPE, PLATFORM and RESOURCE_TYPE always the same?
###Code
def study_qid2cu(data, data_type):
qids = []
cus = []
unique_qids = np.unique(data['QUEUE_ID'])
for qid in tqdm(unique_qids):
qids.append(qid)
tmp_cus = []
for i in range(len(data)):
QID = data['QUEUE_ID'][i]
CU = data['CU'][i]
if QID == qid and CU not in tmp_cus:
tmp_cus.append(CU)
cus.append(tmp_cus)
qid2cu_df = pd.DataFrame({'QUEUE_ID':qids, 'CU':cus})
print("在%s中,相同QUEUE_ID下,都有哪些CU:" % data_type)
print(qid2cu_df)
study_qid2cu(train, 'train')
study_qid2cu(test, 'test')
# Look at queue 85977 in detail: a single queue with more than one queue size (CU)
train[train['QUEUE_ID']==85977]['CU'].value_counts()
catvar_names = ['CU', 'STATUS', 'QUEUE_TYPE', 'PLATFORM', 'RESOURCE_TYPE']
def study_qid2cu(data, data_type):
qids = []
cus = []
unique_qids = np.unique(data['QUEUE_ID'])
for qid in tqdm(unique_qids):
qids.append(qid)
tmp_cus = []
for i in range(len(data)):
QID = data['QUEUE_ID'][i]
CU = data['STATUS'][i]
if QID == qid and CU not in tmp_cus:
tmp_cus.append(CU)
cus.append(tmp_cus)
qid2cu_df = pd.DataFrame({'QUEUE_ID':qids, 'STATUS':cus})
print("在%s中,相同QUEUE_ID下,都有哪些STATUS:" % data_type)
print(qid2cu_df)
study_qid2cu(train, 'train')
study_qid2cu(test, 'test')
catvar_names = ['CU', 'STATUS', 'QUEUE_TYPE', 'PLATFORM', 'RESOURCE_TYPE']
def study_qid2cu(data, data_type):
qids = []
cus = []
unique_qids = np.unique(data['QUEUE_ID'])
for qid in tqdm(unique_qids):
qids.append(qid)
tmp_cus = []
for i in range(len(data)):
QID = data['QUEUE_ID'][i]
CU = data['QUEUE_TYPE'][i]
if QID == qid and CU not in tmp_cus:
tmp_cus.append(CU)
cus.append(tmp_cus)
qid2cu_df = pd.DataFrame({'QUEUE_ID':qids, 'QUEUE_TYPE':cus})
print("在%s中,相同QUEUE_ID下,都有哪些QUEUE_TYPE:" % data_type)
print(qid2cu_df)
study_qid2cu(train, 'train')
study_qid2cu(test, 'test')
catvar_names = ['CU', 'STATUS', 'QUEUE_TYPE', 'PLATFORM', 'RESOURCE_TYPE']
def study_qid2cu(data, data_type):
qids = []
cus = []
unique_qids = np.unique(data['QUEUE_ID'])
for qid in tqdm(unique_qids):
qids.append(qid)
tmp_cus = []
for i in range(len(data)):
QID = data['QUEUE_ID'][i]
CU = data['PLATFORM'][i]
if QID == qid and CU not in tmp_cus:
tmp_cus.append(CU)
cus.append(tmp_cus)
qid2cu_df = pd.DataFrame({'QUEUE_ID':qids, 'PLATFORM':cus})
print("在%s中,相同QUEUE_ID下,都有哪些PLATFORM:" % data_type)
print(qid2cu_df)
study_qid2cu(train, 'train')
study_qid2cu(test, 'test')
catvar_names = ['CU', 'STATUS', 'QUEUE_TYPE', 'PLATFORM', 'RESOURCE_TYPE']
def study_qid2cu(data, data_type):
qids = []
cus = []
unique_qids = np.unique(data['QUEUE_ID'])
for qid in tqdm(unique_qids):
qids.append(qid)
tmp_cus = []
for i in range(len(data)):
QID = data['QUEUE_ID'][i]
CU = data['RESOURCE_TYPE'][i]
if QID == qid and CU not in tmp_cus:
tmp_cus.append(CU)
cus.append(tmp_cus)
qid2cu_df = pd.DataFrame({'QUEUE_ID':qids, 'RESOURCE_TYPE':cus})
print("在%s中,相同QUEUE_ID下,都有哪些RESOURCE_TYPE:" % data_type)
print(qid2cu_df)
study_qid2cu(train, 'train')
study_qid2cu(test, 'test')
###Output
100%|██████████| 43/43 [05:48<00:00, 12.15s/it]
0%| | 0/23 [00:00<?, ?it/s]
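###Markdown
The five near-identical loops above scan the full table once per queue; the same consistency check can be written in a few lines with `groupby`, as sketched below (the column list is taken from `catvar_names` above).
###Code
# Hedged refactor: per-QUEUE_ID number of distinct values for each categorical column, without nested loops
def consistency_check(df, cols=('CU', 'STATUS', 'QUEUE_TYPE', 'PLATFORM', 'RESOURCE_TYPE')):
    return df.groupby('QUEUE_ID')[list(cols)].nunique()

n_unique = consistency_check(train)
# Queues where any of these columns takes more than one value
print(n_unique[(n_unique > 1).any(axis=1)])
###Output
_____no_output_____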
###Markdown
From the above we find: (1) under the same queue ID, CU and STATUS may vary, while QUEUE_TYPE, PLATFORM and RESOURCE_TYPE appear consistent; (2) the training-set queues cover every queue that appears in the test set. (2) Convert DOTTING_TIME from its raw UNIX (millisecond) format to readable timestamps and check for anomalies.
###Code
def unix_transform(java_time):
    '''Convert a millisecond UNIX timestamp into a readable Python datetime'''
    # java_time is in milliseconds; float division already keeps the sub-second part
    seconds = java_time / 1000.0
    date = datetime.datetime.fromtimestamp(seconds)
    return date
def time_preprocessing(df):
    '''Convert the DOTTING_TIME column of the dataset to datetime format'''
for i in tqdm(range(len(df))):
formatted_date = unix_transform(df['DOTTING_TIME'][i])
df['DOTTING_TIME'][i] = formatted_date
time_preprocessing(train)
time_preprocessing(test)
# Save the time-converted results
train.to_csv("../data/train1.csv", index = 0)
test.to_csv("../data/test1.csv", index = 0)
def compare_time(df):
    '''Compare against the competition data release time to spot abnormal timestamps'''
    # Preliminary-round data release time: 2020/10/13 (20:00:00)
    base_time = datetime.datetime(2020, 10, 13, 20, 0, 0)
error_record_num = 0
for i in tqdm(range(len(df))):
record_time = df['DOTTING_TIME'][i]
delta = base_time - record_time
if delta.days < 0:
error_record_num += 1
print("时间异常记录数:%d, 占比:%.4f" % (error_record_num, error_record_num / len(df)))
compare_time(train)
compare_time(test)
###Output
100%|██████████| 501730/501730 [00:04<00:00, 104137.53it/s]
100%|██████████| 14980/14980 [00:00<00:00, 101351.08it/s]
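###Markdown
The row-by-row loop above is slow on roughly 500k rows; `pd.to_datetime` with `unit='ms'` performs the same conversion in one vectorised call. A sketch on a freshly loaded copy follows; note it returns UTC-naive timestamps, whereas `datetime.fromtimestamp` uses local time.
###Code
# Hedged vectorised alternative to time_preprocessing, applied to a fresh copy of the raw file
raw_train = pd.read_csv("../data/train.csv")
raw_train['DOTTING_TIME'] = pd.to_datetime(raw_train['DOTTING_TIME'], unit='ms')
raw_train['DOTTING_TIME'].head()
###Output
_____no_output_____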
###Markdown
The training set looks fine; the test-set timestamps were probably shifted for anonymisation. (3) For the same QUEUE_ID, is DOTTING_TIME perfectly evenly spaced, or is data missing?
###Code
# # First check: within queue 85977, is the time ordering independent of CU?
# t = train[train['QUEUE_ID']==85977]
# t = t.sort_values(by = ['QUEUE_ID', 'DOTTING_TIME']).reset_index(drop=True)
# t.to_csv("../data/train2.csv")
# # When the CU changes, the timestamps stay contiguous, so the time ordering is not affected by CU
# Sort first
train = pd.read_csv("../data/train1.csv")
test = pd.read_csv("../data/test1.csv")
new_train = train.sort_values(by = ['QUEUE_ID', 'DOTTING_TIME']).reset_index(drop=True)
new_test = test.sort_values(by = ['QUEUE_ID', 'DOTTING_TIME']).reset_index(drop=True)
# Count records per time interval to check whether sampling is a perfect 5-minute grid or data is missing
def check_time_interval(df):
total_time_intervals = []
less_5_nums = []
equal_5_nums = []
more_5_nums = []
total_nums = []
for qid in tqdm(np.unique(df['QUEUE_ID'])):
dts = df[df['QUEUE_ID']==qid]['DOTTING_TIME']
less_5_num = 0
equal_5_num = 0
more_5_num = 0
total_num = 0
time_intervals = []
for i, dt in enumerate(dts):
if i == 0:
time_intervals.append(0.0)
pass
else:
dt0 = datetime.datetime.strptime(dts.iloc[i-1],'%Y-%m-%d %H:%M:%S')
dt1 = datetime.datetime.strptime(dt,'%Y-%m-%d %H:%M:%S')
time_interval = dt1 - dt0
time_intervals.append(time_interval.total_seconds())
if time_interval == datetime.timedelta(seconds=300):
equal_5_num += 1
elif time_interval > datetime.timedelta(seconds=300):
more_5_num += 1
else:
less_5_num += 1
total_num += 1
less_5_nums.append(less_5_num)
equal_5_nums.append(equal_5_num)
more_5_nums.append(more_5_num)
total_nums.append(total_num)
total_time_intervals.extend(time_intervals)
    # Append the time intervals to the original data
    df['TIME_INTERVAL'] = total_time_intervals
    # Collect the results
time_situation = pd.DataFrame({'QUEUE_ID': np.unique(df['QUEUE_ID']),
'less_5':less_5_nums,
'equal_5':equal_5_nums,
'more_5':more_5_nums,
'total':total_nums})
print(time_situation)
return df
new_train = check_time_interval(new_train)
new_test = check_time_interval(new_test)
new_train.to_csv("../data/train2.csv")
new_test.to_csv("../data/test2.csv")
# Look at the distribution of TIME_INTERVAL
new_train['TIME_INTERVAL'].value_counts()
new_test['TIME_INTERVAL'].value_counts()
###Output
_____no_output_____
###Markdown
The above shows that the intervals are not all exactly 5 minutes; it is worth keeping the time interval as a feature when training the model. (4) Sample a few duplicated records and their neighbouring normal records to see whether the duplication has an obvious cause or a sensible preprocessing fix.
###Code
# Look at the duplicated samples in the training set
t = new_train[new_train.duplicated(subset=['QUEUE_ID', 'CU', 'DOTTING_TIME'])]
t
print("共{0}队列有存在重复样本,具体队列是{1}".format(len(np.unique(t['QUEUE_ID'])), np.unique(t['QUEUE_ID'])))
###Output
共10队列有存在重复样本,具体队列是[ 297 298 20889 21487 21671 81221 82697 82929 83109 83609]
###Markdown
Comparing against the queues with missing data ([297 298 20889 21487 21671 21673 81221 82695 82697 82929 83109 83609]), you can see that most queues with missing data also contain duplicate samples, which strongly suggests these queues suffer from serious data loss or poor monitoring.
###Code
# Look at the duplicated samples of queue 298
t1 = new_train[new_train['QUEUE_ID']==298]
t1[t1.duplicated(subset=['QUEUE_ID', 'CU', 'DOTTING_TIME'])]
# Look at the duplicated samples of queue 298 with DOTTING_TIME '2020-02-25 00:04:00'
t1[t1['DOTTING_TIME']=='2020-02-25 00:04:00']
# Look at the duplicated samples of queue 298 with DOTTING_TIME '2020-02-25 01:15:00'
t1[t1['DOTTING_TIME']=='2020-02-25 01:15:00']
# Look at the samples surrounding the duplicates at '2020-02-25 00:04:00'
t1.iloc[0:10,:]
# Look at the samples surrounding the duplicates at '2020-02-25 01:15:00'
t1.iloc[12:25,:]
###Output
_____no_output_____
###Markdown
Individual Feature Patterns using Visualization
###Code
df.corr()
fig = plt.figure(figsize=(10,10))
sns.heatmap(df.corr())
plt.show()
df.corr()['price'].plot(kind='bar')
df.corr()['horsepower'].plot(kind='bar')
###Output
_____no_output_____
###Markdown
relationship between columns continous or numerical
###Code
sns.regplot(x='engine-size',y='price', data= df,)
plt.title("positive linear relationship")
plt.show()
sns.regplot(x='stroke',y='price', data= df,)
plt.title("neutral linear relationship")
plt.show()
sns.regplot(x='city-mpg',y='price', data= df,)
plt.title("negative linear relationship")
plt.show()
###Output
_____no_output_____
###Markdown
categorical data
###Code
df.info()
sns.boxplot(x='body-style',y='price',data=df)
import plotly.express as px
fig = px.box(df, x='body-style',y='price')
fig.show()
fig = px.box(df, x='engine-location',y='price')
fig.show()
fig = px.box(df, x='drive-wheels',y='price')
fig.show()
fig = px.box(df, x='make',y='price',title='this column might not be usable for predicting price')
fig.show()
###Output
_____no_output_____
###Markdown
Descriptive Statistical Analysis
###Code
df.describe()
df.describe(include=['object'])
df['drive-wheels'].value_counts()
df['drive-wheels'].value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Groups
###Code
df_wheels = df[['drive-wheels','body-style','price']]
df_wheels.groupby('drive-wheels').mean()
df_wheels.groupby(['drive-wheels','body-style']).mean()
wheel_body_df = df_wheels.groupby(['drive-wheels','body-style']).mean().reset_index()
wheel_body_df.pivot(index='body-style',columns='drive-wheels')
###Output
_____no_output_____
###Markdown
p-value What is this P-value? The P-value is the probability value that the correlation between these two variables is statistically significant. Normally, we choose a significance level of 0.05, which means that we are 95% confident that the correlation between the variables is significant. By convention, when the p-value is < 0.001 there is strong evidence that the correlation is significant; when the p-value is < 0.05 there is moderate evidence that the correlation is significant; when the p-value is < 0.1 there is weak evidence that the correlation is significant; when the p-value is > 0.1 there is no evidence that the correlation is significant.
###Code
from scipy import stats
p_coef , p_value = stats.pearsonr(df['wheel-base'],df['price'])
if p_value < .05:
print(f"this column is statistically significant {p_value}")
p_coef , p_value = stats.pearsonr(df['length'],df['price'])
if p_value < .05:
print(f"this column is statistically significant {p_value}")
###Output
this column is statistically significant 8.016477466157846e-30
###Markdown
ANOVA - Analysis of Variance The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters: F-test score: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means. P-value: the P-value tells how statistically significant our calculated score is. If our price variable is strongly correlated with the variable we are analyzing, expect ANOVA to return a sizeable F-test score and a small p-value.
###Code
df_wheels.groupby('drive-wheels').get_group('fwd')['price']
f_val , p_value = stats.f_oneway(df_wheels.groupby('drive-wheels').get_group('fwd')['price'],
df_wheels.groupby('drive-wheels').get_group('4wd')['price'],
df_wheels.groupby('drive-wheels').get_group('rwd')['price'],)
print(f'f-test-score : {f_val}, p_value : {p_value}')
f_val , p_value = stats.f_oneway(df_wheels.groupby('drive-wheels').get_group('fwd')['price'],
df_wheels.groupby('drive-wheels').get_group('rwd')['price'],)
print(f'f-test-score : {f_val}, p_value : {p_value}')
f_val , p_value = stats.f_oneway(df_wheels.groupby('drive-wheels').get_group('4wd')['price'],
df_wheels.groupby('drive-wheels').get_group('rwd')['price'],)
print(f'f-test-score : {f_val}, p_value : {p_value}')
###Output
f-test-score : 8.580681368924756, p_value : 0.004411492211225333
###Markdown
Let's quickly check out our data.
###Code
import pandas as pd
# https://www.kaggle.com/olistbr/brazilian-ecommerce?select=olist_geolocation_dataset.csv
orders = pd.read_csv('../dynamic_pricing/olist/olist_orders_dataset.csv')
orders.head()
order_list = pd.read_csv('../dynamic_pricing/olist/olist_order_items_dataset.csv')
order_list.head()
order_list.loc[order_list.product_id == 'e44f675b60b3a3a2453ec36421e06f0f']
order_list.product_id.value_counts()
olg = order_list[['product_id', 'price','order_id']].groupby(['product_id', 'price']).count().sort_values('order_id', ascending=False).reset_index() #.reset_index().sort_values('order_id', ascending=False)
olg.columns = ['product_id', 'price','orders']
olg.head(50)
olg.loc[olg.product_id == 'aca2eb7d00ea1a7b8ebd4e68314663af']
olg.product_id.value_counts()
olg.loc[olg.product_id == '437c05a395e9e47f9762e677a7068ce7']
olg[['price','orders']].loc[(olg.product_id == '437c05a395e9e47f9762e677a7068ce7') & (olg.orders > 4)].plot.scatter(x='price',y='orders')
products = pd.read_csv('../dynamic_pricing/olist/olist_products_dataset.csv')
nms = pd.read_csv('../dynamic_pricing/olist/product_category_name_translation.csv')
products = products.merge(nms, on='product_category_name')
products.head()
mpol = olg.merge(products[['product_id', 'product_category_name_english']], on='product_id')
mpol.head()
# mpol.to_csv('merged.csv', index=False)
mpol[['price', 'orders', 'product_category_name_english']].groupby('product_category_name_english').describe()
mpol.product_category_name_english.value_counts()
mpol.loc[(mpol.product_category_name_english == 'bed_bath_table') & (mpol.price < 250)].hist(column='price')
a = mpol.loc[(mpol['product_category_name_english'] == 'bed_bath_table') & (mpol.price <= 300) & (mpol.orders >= 2)][['product_id']].value_counts().reset_index()
a.columns = ['prod', 'cnt']
a.loc[a['cnt'] > 2]
###Output
_____no_output_____
###Markdown
Pick one Item to run through the Demand Model Build
###Code
samp = mpol.loc[mpol.product_id == '2a2d22ae30e026f1893083c8405ca522']
samp
olg[['price','orders']].loc[(olg.product_id == '2a2d22ae30e026f1893083c8405ca522') & (olg.orders >= 2)].plot.scatter(x='price',y='orders')
import numpy as np
X = samp[['price']].to_numpy()
y = samp[['orders']].to_numpy()
nm = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
nm
samp = olg[['price','orders']].loc[(olg.product_id == '2a2d22ae30e026f1893083c8405ca522') & (olg.orders >= 2)]
X = np.array(list(samp['price']))
X = np.vstack([X, np.ones(len(X))]).T
y = np.array(list(samp['orders']))
X
# y = mx + c,
m, c = np.linalg.lstsq(X, y, rcond=None)[0]
m, c
nm
import matplotlib.pyplot as plt
plt.plot(X[:,0], y, 'o', label='Original data', markersize=10)
plt.plot(X[:,0], m*X[:,0] + c, 'r', label='Fitted line')
plt.legend()
plt.show()
###Output
_____no_output_____
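###Markdown
With a fitted linear demand curve (orders ≈ m·price + c and m < 0), revenue price·(m·price + c) is a downward parabola maximised at price = −c / (2m). The sketch below simply evaluates that formula for the slope and intercept fitted above; it is a follow-up illustration, not part of the original analysis.
###Code
# Hedged follow-up: revenue-maximising price implied by the fitted linear demand curve
# revenue(p) = p * (m*p + c)  ->  d(revenue)/dp = 2*m*p + c = 0  ->  p* = -c / (2*m)
if m < 0:
    p_star = -c / (2 * m)
    demand_at_p_star = m * p_star + c
    print(f"Revenue-maximising price: {p_star:.2f}")
    print(f"Expected orders at that price: {demand_at_p_star:.2f}")
else:
    print("Fitted slope is non-negative; the simple linear demand model gives no interior optimum.")
###Output
_____no_output_____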
###Markdown
Deep Learning Try
###Code
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.metrics import accuracy_score, log_loss
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
train_df , test_df = train_test_split(more_data,test_size = 0.2 , random_state = 23)
token = Tokenizer(num_words = None , char_level = False)
token.fit_on_texts(more_data.text)
train_seq = pad_sequences(token.texts_to_sequences(train_df.text),maxlen = 50)
test_seq = pad_sequences(token.texts_to_sequences(test_df.text),maxlen = 50)
train_lab = np.array(train_df.is_offensive)
test_lab = np.array(test_df.is_offensive)
emb = Embedding(len(token.word_index)+1,128,input_length = 50)
dense = Dense(1,activation = 'sigmoid')
model = Sequential()
model.add(emb)
model.add(Flatten())
model.add(dense)
model.compile(optimizer = 'sgd',loss = 'binary_crossentropy',metrics = ['accuracy'])
print(model.summary())
model.fit(train_seq,train_lab,epochs = 5)
loss,accuracy = model.evaluate(test_seq,test_lab)
print(round(accuracy * 100,2),model)
names = ['I hope you die']
name_token = pad_sequences(token.texts_to_sequences(names),maxlen = 50)
pred = model.predict_classes(name_token)
pred
###Output
_____no_output_____
###Markdown
OPEN AI TEST
###Code
import openai
openai.api_key = "sk-AxNaQumDedma7T3Kr98vS0UZ83OP037TMqSvBbQD"
response = openai.Completion.create(engine="davinci", prompt="Most people think machines will take their jobs", max_tokens=50)
response['choices'][0]['text']
###Output
_____no_output_____
###Markdown
SAT-6 Exploratory Data Analysis===
###Code
import os
import numpy as np
import pandas as pd
import scipy.io
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
np.random.seed(1)
###Output
/anaconda3/lib/python3.7/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
from numpy.core.umath_tests import inner1d
###Markdown
Data Format---The data has been saved in two formats. The `sat-6-full.mat` contains all of the data in MATLAB format. It can be loaded in with `scipy` as a dictionary with each key holding a partition. The data is also divided into respective `.csv` files. While this format is more convenient, it is markedly slower to load than the MATLAB format. For the EDA, I will use the `.csv` format.
###Code
data_dir = '/Users/tyler/Datasets/deepsat-sat6/'
os.listdir(data_dir)
# How to load MATLAB file:
# dataset = scipy.io.loadmat(data_dir+'sat-6-full.mat')
###Output
_____no_output_____
###Markdown
Dataset Size and Shape---In all there are **405,000 images** with **324,000 images in the training set** and **81,000 images in the test set**, which gives us a **train/test split of 80/20**. There are six classes for classification: `building`, `barren_land`,`trees`, `grassland`, `road`, `water`. Each image is 28x28 with 4 channels: R (red), G (green), B (blue), and IR (infrared). Together, these dimensions and channels require 3,136 columns (28 × 28 × 4) to represent the image as a row vector. Each label is a 1x6 row vector. Below is the table of values for the labels:
###Code
label_table = pd.read_csv(data_dir+'sat6annotations.csv', header=None)
label_table
###Output
_____no_output_____
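###Markdown
As a quick sanity check of the figures above, the row vector width should be 28 x 28 x 4 = 3,136 columns. A minimal check, assuming the training CSV is available at `data_dir`:
###Code
# Compare the CSV's column count against 28 * 28 * 4 = 3136.
n_cols = pd.read_csv(data_dir + 'X_train_sat6.csv', header=None, nrows=1).shape[1]
print(n_cols, 28 * 28 * 4)
###Output
_____no_output_____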
###Markdown
Distribution of Classes---An important question for any classification project is whether each class is well represented. As we see in the charts below, the train and test sets have a similar distribution of the six classes. `building` and `road` each have low representation in the datasets (less than 5%).
###Code
classes = ['building', 'barren_land','trees', 'grassland', 'road', 'water']
training_labels = pd.read_csv(data_dir+'y_train_sat6.csv', header=None)
training_labels.columns = classes
training_labels[:5]
perc_of_class_train = {x: round(training_labels[x].sum()/324000, 3) * 100 for (i, x) in enumerate(classes)}
plt.bar(range(len(perc_of_class_train)), list(perc_of_class_train.values()), align='center')
plt.xticks(range(len(perc_of_class_train)), list(perc_of_class_train.keys()))
plt.ylabel('Percent of Class')
plt.xlabel('Classes')
plt.title('Representation of Classes in Training Set')
plt.show()
perc_of_class_train
test_labels = pd.read_csv(data_dir+'y_test_sat6.csv', header=None)
test_labels.columns = classes
perc_of_class_test = {x: round(test_labels[x].sum()/81000, 3) * 100 for (i, x) in enumerate(classes)}
plt.bar(range(len(perc_of_class_test)), list(perc_of_class_test.values()), align='center')
plt.xticks(range(len(perc_of_class_test)), list(perc_of_class_test.keys()))
plt.ylabel('Percent of Class')
plt.xlabel('Classes')
plt.title('Representation of Classes in Test Set')
plt.show()
perc_of_class_test
###Output
_____no_output_____
###Markdown
Visualize Examples---Below are some examples from the training set. Note the necessary preprocessing steps. We first must reshape the row vector into a `1x28x28x4` tensor. We clip values outside `[0, 255]`. Then the initial axis is removed from the tensor. We return the first three channels (RGB). To visualize the IR layer, we repeat the process, only this time returning the fourth channel instead of the first 3.
###Code
X_train = pd.read_csv(data_dir+'X_train_sat6.csv', header=None, nrows=300)
def row_to_img(row_values, ir=False):
if ir:
return row_values.reshape(-1, 28, 28, 4).clip(0, 255).astype(np.uint8).squeeze(axis=0)[:,:,-1]
else:
return row_values.reshape(-1, 28, 28, 4).clip(0, 255).astype(np.uint8).squeeze(axis=0)[:,:,:3]
def get_labels(row_values):
annotations = ['building', 'barren_land','trees', 'grassland', 'road', 'water']
labels = [annotations[i] for i, x in enumerate(row_values) if x == 1]
return labels[0]
fig, axs = plt.subplots(5, 5, figsize = (20, 20))
for i, ax in enumerate(axs.flatten()):
ax.set_title(get_labels(training_labels.iloc[i].values))
ax.imshow(row_to_img(X_train.iloc[i].values))
fig, axs = plt.subplots(5, 5, figsize = (20, 20))
for i, ax in enumerate(axs.flatten()):
ax.set_title(get_labels(training_labels.iloc[i].values))
ax.imshow(row_to_img(X_train.iloc[i].values, ir=True))
###Output
_____no_output_____
###Markdown
Dimensionality Reduction and Cluster Analysis---We can get a sense of how clearly defined our image classes are by projecting sample images down to 2D space and visualizing them. Let us first try using PCA to visualize the sample, then TSNE. If two principal components are not enough to achieve a 95% explained variance score, let us calculate how many principal components are necessary to achieve that. We will use the first 300 rows in the training set as our sample.
###Code
sample = X_train.copy()
sample['labels'] = [get_labels(x.values) for i, x in training_labels[:300].iterrows()]
pca = PCA(n_components=2)
components = pd.DataFrame(pca.fit_transform(sample.drop('labels', axis=1)), columns=['component_1', 'component_2'])
components['labels'] = sample['labels']
components.head()
subsets = [components.loc[components['labels'] == x] for x in classes]
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_title('PCA of SAT-6', fontsize = 20)
color_map = {'building': '#011627', 'barren_land': '#F71735', 'trees': '#41EAD4', 'grassland': '#5AFF15', 'road': '#FF9F1C', 'water': '#3C6E71'}
for subset in subsets:
label = subset['labels'].values.tolist()[0]
ax.scatter(x=subset['component_1'], y=subset['component_2'], s=50, c=color_map[label])
ax.legend(color_map.keys())
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Clearly the classes may be easily distinguished by a learner. The `water` class appears to be the most distinct. We see two more clusters: one formed by the `trees`, `grassland` and `barren_land` classes and one by the `building` and `road` classes.
###Code
pca.explained_variance_ratio_.cumsum()
###Output
_____no_output_____
###Markdown
We note that the first 2 principal components together achieve an explained variance ratio of only 83%.
###Code
pca_95 = PCA(n_components=0.95, svd_solver='full')
components_95 = pca_95.fit_transform(sample.drop('labels', axis=1))
components_95.shape
###Output
_____no_output_____
###Markdown
We would need to include at least 51 components to achieve an explained variance ratio of 95%.
###Code
tsne = TSNE(n_components=2, perplexity=32)
components = pd.DataFrame(tsne.fit_transform(sample.drop('labels', axis=1)), columns=['component_1', 'component_2'])
components['labels'] = sample['labels']
components.head()
subsets = [components.loc[components['labels'] == x] for x in classes]
fig = plt.figure(figsize = (10,10))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('t-SNE Component 1', fontsize = 15)
ax.set_ylabel('t-SNE Component 2', fontsize = 15)
ax.set_title('TSNE of SAT-6', fontsize = 20)
color_map = {'building': '#011627', 'barren_land': '#F71735', 'trees': '#41EAD4', 'grassland': '#5AFF15', 'road': '#FF9F1C', 'water': '#3C6E71'}
for subset in subsets:
label = subset['labels'].values.tolist()[0]
ax.scatter(x=subset['component_1'], y=subset['component_2'], s=50, c=color_map[label])
ax.legend(color_map.keys())
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
The TSNE plot shows a similar relationship to the PCA, except that the two clusters formed by the 5 classes other than `water` are located much more closely together. Baseline---Before training deep neural networks on hundreds of thousands of images, let's attempt to train at least a weak classifier on the subsample of data. Any classifier that is more than about 16.7% accurate (better than 1 in 6) beats random guessing. For our baseline, we will use a random forest classifier trained on the 51 principal components.
###Code
clf = RandomForestClassifier(verbose=True)
X = components_95
y = sample['labels']
X_train, X_test, y_train, y_test = train_test_split(X, y)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
y_pred = clf.predict(X_test)
print(classification_report(y_pred, y_test))
###Output
precision recall f1-score support
barren_land 1.00 0.78 0.88 23
building 0.33 1.00 0.50 2
grassland 0.82 0.75 0.78 12
road 0.00 0.00 0.00 0
trees 0.80 0.80 0.80 10
water 1.00 0.96 0.98 28
avg / total 0.93 0.85 0.88 75
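###Markdown
For context on the "better than random" claim above, a chance-level baseline can be estimated with scikit-learn's `DummyClassifier`. A minimal sketch, reusing the train/test split defined above:
###Code
from sklearn.dummy import DummyClassifier

# Always predicts the most frequent class; any useful model should beat this score.
dummy = DummyClassifier(strategy='most_frequent')
dummy.fit(X_train, y_train)
dummy.score(X_test, y_test)
###Output
_____no_output_____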
###Markdown
Exploratory Data Analysis
###Code
import numpy as np
import pandas as pd
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import TimeSeriesSplit
###Output
_____no_output_____
###Markdown
Gold DataQuandl is a great site (https://www.quandl.com/) with a great selection of free (and premium) financial databases that are easy to access and use.
###Code
import quandl
data = quandl.get("LBMA/GOLD", authtoken="NONE", start_date="2017-02-25")
data = data.iloc[::-1]
# I removed my authtoken here so this will break. You can get your own at quandl.
data.head()
###Output
_____no_output_____
###Markdown
Google Trends DataThis data was pulled using API calls in another notebook and persisted to these csvs
###Code
trend_data = pd.read_csv('data/google_trends_data_eth.csv')
trend_data_2 = pd.read_csv('data/google_trends_data_btc.csv')
trend_data_combo = trend_data.merge(trend_data_2)
trend_data_combo = trend_data_combo.drop(columns = ['isPartial', 'buy ethereum'])
trend_data_combo.set_index(trend_data_combo.date, drop = True, inplace = True)
trend_data_combo.head(3)
trend_data_combo.dtypes
trend_data_combo.plot(figsize = (20,6))
plt.show()
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 6
trend_data_combo.plot(subplots=True, figsize = [20, 14])
# trend_data_combo.plot(kind = 'scatter', x = trend_data_combo.index, y = trend_data_combo.columns)
trend_data_combo.describe()
###Output
_____no_output_____
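###Markdown
For reference, hourly interest data like the CSVs above could be pulled with the `pytrends` library. This is a minimal sketch only; the keyword list and timeframe are illustrative assumptions, not the exact calls used in the other notebook:
###Code
# Sketch of fetching Google Trends data with pytrends. The resulting frame
# includes an 'isPartial' column, matching the column dropped above.
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=360)
pytrends.build_payload(['ethereum', 'bitcoin'], timeframe='today 3-m')
trends_sketch = pytrends.interest_over_time()
trends_sketch.head()
###Output
_____no_output_____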
###Markdown
Crypto Price Data Building complete dataset - Daily
###Code
import pandas as pd
eth_price_data = pd.read_csv('data/eth_hourly_data.csv')
btc_price_data = pd.read_csv('data/btc_hourly_data.csv')
eth_trend_data = pd.read_csv('data/google_trends_data_eth.csv')
btc_trend_data = pd.read_csv('data/google_trends_data_btc.csv')
print(eth_price_data.shape)
print(btc_price_data.shape)
print(eth_trend_data.shape)
print(btc_trend_data.shape)
eth_price_data = pd.read_csv('data/eth_hourly_data.csv')
eth_price_data.drop_duplicates(inplace=True)
eth_price_data.reset_index(inplace=True, drop=True)
eth_price_data['time'] = eth_price_data['time'].apply(lambda x: datetime.datetime.utcfromtimestamp(x).strftime('%Y-%m-%d %H:%M:%S'))
eth_price_data.drop(eth_price_data.index[:575], inplace=True)
eth_price_data.reset_index(inplace=True, drop=True)
eth_price_data.drop(eth_price_data.index[13249:], inplace=True)
eth_price_data = eth_price_data[::-1]
eth_price_data.reset_index(inplace=True, drop=True)
eth_price_data.tail(10)
btc_price_data = pd.read_csv('data/btc_hourly_data.csv')
btc_price_data.drop_duplicates(inplace=True)
btc_price_data.reset_index(inplace=True, drop=True)
btc_price_data['time'] = btc_price_data['time'].apply(lambda x: datetime.datetime.utcfromtimestamp(x).strftime('%Y-%m-%d %H:%M:%S'))
btc_price_data.drop(btc_price_data.index[:577], inplace=True)
btc_price_data.drop(btc_price_data.index[13249:], inplace=True)
btc_price_data = btc_price_data[::-1]
btc_price_data.reset_index(inplace=True, drop=True)
btc_price_data.head(1)
btc_price_data.tail(1)
eth_trend_data.drop_duplicates(inplace=True)
eth_trend_data.reset_index(inplace=True, drop=True)
btc_trend_data.drop_duplicates(inplace=True)
btc_trend_data.reset_index(inplace=True, drop=True)
trend_data_combo = eth_trend_data.merge(btc_trend_data)
trend_data_combo.drop_duplicates(inplace=True)
trend_data_combo.reset_index(inplace=True, drop=True)
trend_data_combo.drop(trend_data_combo.index[:144], inplace=True)
trend_data_combo.drop(trend_data_combo.index[13478:], inplace=True)
trend_data_combo.reset_index(inplace=True, drop=True)
trend_data_combo.drop(columns = ['isPartial', 'buy ethereum'], inplace=True)
trend_data_combo.drop_duplicates(subset='date', inplace=True)
trend_data_combo.rename(index=str, columns={"date": "time"}, inplace=True)
trend_data_combo.head(1)
trend_data_combo.tail(1)
# full_data = eth_price_data.merge(btc_price_data, on = eth_price_data.index)
# full_data.head(3)
print(eth_price_data.shape)
print(btc_price_data.shape)
print(trend_data_combo.shape)
# eth_price_data['1-hour-change'] = eth_price_data['open'] - eth_price_data['close']
def hour_change(shift, dataframe, shift_on):
shift_column_name = '{}-hour-{}-shift'.format(shift, shift_on)
change_column_name = '{}-hour-{}-change'.format(shift, shift_on)
dataframe[shift_column_name] = np.nan
dataframe[shift_column_name] = dataframe[shift_on].shift(shift)
dataframe.fillna(method='bfill', inplace=True)
dataframe[change_column_name] = dataframe[shift_on] - dataframe[shift_column_name]
dataframe.drop(columns=[shift_column_name], inplace=True)
return dataframe
shifts = [1, 2, 3, 4, 6, 8, 10, 12]
for x in shifts:
eth_data = hour_change(x, eth_price_data, 'close')
eth_data.tail(3)
eth_data['next_hour_change'] = np.nan
eth_data['next_hour_change'] = eth_data['1-hour-close-change'].shift(-1)
eth_data['sign_change'] = np.sign(eth_data.next_hour_change)
#need this or the last row will be NaN for these columns and mess everything else.
eth_data.fillna(method='ffill', inplace=True)
# (lambda row: label_race (row),axis=1)
eth_data.tail(3)
# for binary classification "no change" was changed from 0 to 1
# only run once! will break and convert everything to nan if you run twice
eth_data['sign_change'] = eth_data.sign_change.astype('int32')
eth_data['sign_change'] = eth_data.sign_change.astype('str')
improve_sign_change = {'-1' : 0, '0': 1, '1': 1}
eth_data.sign_change = eth_data.sign_change.map(improve_sign_change)
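# A re-runnable equivalent of the mapping above (a sketch): derive the binary label
# directly from next_hour_change, treating "no change" as 1, just like the map above.
# It recomputes the same 0/1 values, so it is safe to execute more than once.
eth_data['sign_change'] = np.where(eth_data['next_hour_change'] < 0, 0, 1)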
#feature engineering
# converting high and low metrics into a single "range" metric
# scaling the volumeto data.
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(np.array(eth_data['volumeto']).reshape(-1,1))
new_y = scaler.transform(np.array(eth_data['volumeto']).reshape(-1,1))
eth_data['volume'] = new_y
eth_data['range'] = eth_data['high'] - eth_data['low']
eth_data.drop(columns = ['close', 'high', 'low', 'open', 'volumefrom', 'volumeto', 'next_hour_change'], inplace=True)
eth_data.head()
shifts = [1, 2, 3, 4, 6, 8, 10, 12]
for x in shifts:
btc_data = hour_change(x, btc_price_data, 'close')
#need this or the last row will be NaN for these columns and mess everything else.
btc_data.fillna(method='ffill', inplace=True)
#feature engineering
# converting high and low metrics into a single "range" metric
# scaling the volumeto data.
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(np.array(btc_data['volumeto']).reshape(-1,1))
new_y = scaler.transform(np.array(btc_data['volumeto']).reshape(-1,1))
btc_data['volume'] = new_y
btc_data['range'] = btc_data['high'] - btc_data['low']
btc_data.drop(columns = ['close', 'high', 'low', 'open', 'volumefrom', 'volumeto'], inplace=True)
btc_data.head()
full_data = eth_data.merge(btc_data, on = 'time', suffixes = ('-eth', '-btc'))
full_data.head(3)
full_data_with_trends = full_data.merge(trend_data_combo, on = 'time')
full_data_with_trends.head(1)
###Output
_____no_output_____
###Markdown
CAUTION THE FOLLOWING WILL OVERWRITE EXISTING FILES OF THE SAME NAME
###Code
full_data.to_csv('data/ml_class_data_ver1.csv', mode = "w+")
full_data_with_trends.to_csv('data/ml_class_data_with_trends_ver1.csv', mode = "w+")
###Output
_____no_output_____
###Markdown
Exploratory Data AnalysisThis notebook highlights some simple, yet invaluable, exploratory data science techniques.
###Code
# Numpy and Pandas are data science heavy lifters
import numpy as np
import pandas as pd
# Read Argus flow output from a Parquet file
filename = "data/two-hour-sample.parquet"
df = pd.read_parquet(filename)
# Shape is the number of rows and columns of the dataframe
df.shape
# Head prints the first several rows of the dataframe
df.head(20)
df.columns
# `describe` computes "5-number" summaries of the numerical fields
df.describe()
# Get Unique Destination ports
df["Dport"].unique()
# Plot a Degree Distribution
import matplotlib.pyplot as plt
plt.hist(df.groupby("DstAddr").size())
plt.show()
# Select only DNS flows and draw BoxPlots
dns = df[df["Dport"] == 53]
dns.shape
dns[["TotPkts","TotBytes"]].plot(kind='box', subplots=True, layout=(
1, 2), sharex=False, sharey=False)
plt.show()
from pandas.plotting import scatter_matrix
scatter_matrix(df[["Dur","TotPkts", "TotBytes"]])
plt.show()
###Output
_____no_output_____
###Markdown
Reading the h5 files...
###Code
# imports needed for the h5 exploration below
import h5py
import numpy as np
import pandas as pd

achile_h5path = "/raid/shadab/prateek/genedisco/gd_cache/achilles.h5"
string_h5path = "/raid/shadab/prateek/genedisco/gd_cache/string_embedding.h5"
ccle_h5path = "/raid/shadab/prateek/genedisco/gd_cache/ccle_protein_quantification.h5"
ifng_fpath = "/raid/shadab/prateek/genedisco/gd_cache/schmidt_2021_ifng.h5"
def allkeys(obj):
"Recursively find all keys in an h5py.Group."
keys = (obj.name,)
if isinstance(obj, h5py.Group):
for key, value in obj.items():
if isinstance(value, h5py.Group):
keys = keys + allkeys(value)
else:
keys = keys + (value.name,)
return keys
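# The three near-identical blocks below could be factored into a helper like this
# (a sketch using the same 'colnames' / 'rownames' / 'covariates' keys; it is not
# called below, so the original cells run unchanged):
def load_h5(path):
    with h5py.File(path, 'r') as f:
        colnames = [x.decode("utf-8") for x in np.array(f.get('colnames'))]
        rownames = [x.decode("utf-8") for x in np.array(f.get('rownames'))]
        covariates = np.array(f.get('covariates'))
    return colnames, rownames, covariates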
# achile_h5
# open the file as 'f'
achile_colnames = None
achile_covariates = None
achile_rownames = None
##Achilles
with h5py.File(achile_h5path, 'r') as f:
# achile_h5_data = f['default']
tempKeys = allkeys(f)
print(tempKeys)
for k in tempKeys:
print(f"for k={k} \t value={f.get(k)}")
achile_colnames = list(np.array(f.get('colnames')))
achile_covariates = np.array(f.get('covariates'))
achile_rownames = list(np.array(f.get('rownames')))
achile_colnames = [x.decode("utf-8") for x in achile_colnames]
achile_rownames = [x.decode("utf-8") for x in achile_rownames]
string_colnames = None
string_covariates = None
string_rownames = None
##String
with h5py.File(string_h5path, 'r') as f:
# achile_h5_data = f['default']
tempKeys = allkeys(f)
print(tempKeys)
for k in tempKeys:
print(f"for k={k} \t value={f.get(k)}")
string_colnames = list(np.array(f.get('colnames')))
string_covariates = np.array(f.get('covariates'))
string_rownames = list(np.array(f.get('rownames')))
string_colnames = [x.decode("utf-8") for x in string_colnames]
string_rownames = [x.decode("utf-8") for x in string_rownames]
ccle_colnames = None
ccle_covariates = None
ccle_rownames = None
##ccle
with h5py.File(ccle_h5path, 'r') as f:
# achile_h5_data = f['default']
tempKeys = allkeys(f)
print(tempKeys)
for k in tempKeys:
print(f"for k={k} \t value={f.get(k)}")
ccle_colnames = list(np.array(f.get('colnames')))
ccle_covariates = np.array(f.get('covariates'))
ccle_rownames = list(np.array(f.get('rownames')))
ccle_colnames = [x.decode("utf-8") for x in ccle_colnames]
ccle_rownames = [x.decode("utf-8") for x in ccle_rownames]
print(f"Achille: {achile_covariates.shape} \t string: {string_covariates.shape} \t ccle: {ccle_covariates.shape}")
print("--- Achile ---")
print(achile_colnames[:5])
print(achile_rownames[:5])
print(achile_covariates.shape)
print("--- String ---")
print(string_colnames[:5])
print(string_rownames[:5])
print(string_covariates.shape)
print("--- ccle ---")
print(ccle_colnames[:5])
print(ccle_rownames[:5])
print(ccle_covariates.shape)
ifng_colnames = None
ifng_covariates = None
ifng_rownames = None
with h5py.File(ifng_fpath, 'r') as f:
tempKeys = allkeys(f)
print(tempKeys)
for k in tempKeys:
print(f"for k={k} \t value={f.get(k)}")
ifng_colnames = list(np.array(f.get('colnames')))
ifng_covariates = np.array(f.get('covariates'))
ifng_rownames = list(np.array(f.get('rownames')))
ifng_colnames = [x.decode("utf-8") for x in ifng_colnames]
ifng_rownames = [x.decode("utf-8") for x in ifng_rownames]
print("colnames: ",ifng_colnames[:10])
print("rownames: ",ifng_rownames[:10])
print("covariates: ",ifng_covariates[:10])
hgnc_mapfpath = "/raid/shadab/prateek/genedisco/gd_cache/hgnc_mapping.tsv"
hgnc_df = pd.read_csv(hgnc_mapfpath, sep="\t")
hgnc_df
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis Project Brief You have been hired as a data scientist at a used car dealership. The sales team have been having problems with pricing used cars that arrive at the dealership and would like your help. They have already collected some data from other retailers on the price that a range of cars were listed at. It is known that cars that are more than $2000 above the estimated price will not sell. The sales team wants to know whether you can make predictions within this range. Credit: The dataset was obtained from Kaggle https://www.kaggle.com/adityadesai13/used-car-dataset-ford-and-mercedes Executive Summary Reproduce the conclusion of the EDA here. Summarize any important steps that were taken. Steps 1. Understand the Experiment Domain 2. Clean and validate data 3. Bivariate analysis 4. Multivariate analysis 5. Conclusion
###Code
# Variables
raw_data_root = './data'
max_features_to_explore = 40
random_seed = 77
import os
import math
import numpy as np
import scipy
from scipy.stats import spearmanr, kendalltau
import pandas as pd
import empiricaldist
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
Understand the Experiment Domain Explain key terms in the experiment domain. List out related works and their conclusions. Disclaimer : I am not an expert in the domain, while I have done my best to do research in the available time. Please clarify if there are any insights explained wrongly, so that I can improve the analysis. Thanks!
###Code
file_path = f'{raw_data_root}/audi.csv'
file_size = os.stat(file_path).st_size / 1024 / 1024
print(f'Since the data file size is small ({file_size:.3} MB), I first load the whole dataset into memory')
raw_data = pd.read_csv(file_path, keep_default_na=False)
raw_data.columns
###Output
_____no_output_____
###Markdown
I split the data into training and test early, to protect my EDA from pre-knowledge of the test data. (For a classification target I would also stratify the split so the training data maintains the proportions of the target column; here the target is a continuous price, so stratification is left commented out below.)
###Code
TARGET = 'price'
X_data = raw_data.drop(TARGET, axis=1)
y_data = raw_data[TARGET]
X_train, X_test, y_train, y_test = train_test_split(
X_data, y_data,
# stratify=TARGET,
# test_size = 0.25,
random_state=random_seed)
Train = pd.concat([X_train, y_train], axis=1)
print('First look at the data')
print(f"Number of rows/records: {Train.shape[0]}")
print(f"Number of columns/variables: {Train.shape[1]}")
Train.sample(10, random_state=random_seed).T
# Understand the variables
variables = pd.DataFrame(
columns=['Variable','Number of unique values', 'Some Values']
)
for i, var in enumerate(Train.columns):
variables.loc[i] = [
var,
Train[var].nunique(),
sorted( Train[var].unique().tolist())[:10]
]
var_dict = pd.read_csv(f'{raw_data_root}/variable_explanation.csv', index_col=0)
variables.set_index('Variable').join(var_dict[['Description']])
var_dict.join(variables.set_index('Variable'))
###Output
_____no_output_____
###Markdown
Features From the introduction above we know what features are available and their types. For convenience we can organize the features of the dataset in useful groups:NUMERIC features containing numeric data CATEGORICAL features with categorical values TARGET the target feature for training the model
###Code
NUMERIC = ["year", "mileage", "tax", "mpg", "engineSize",]
CATEGORICAL = ["model", "transmission", "fuelType", ]
###Output
_____no_output_____
###Markdown
Clean and Validate Data
###Code
# Look at null and zero values
variables = pd.DataFrame(
columns=['Variable','NumUnique','NumNulls', 'NumZeros']
)
for i, var in enumerate(Train.columns):
variables.loc[i] = [
var,
Train[var].nunique(),
Train[var].isnull().sum(), # TODO add zero values
len(Train[Train[var] == 0 ]), # TODO add zero values
]
# Join with the variables dataframe
var_dict = pd.read_csv('./data/variable_explanation.csv', index_col=0)
variables.set_index('Variable').join(var_dict[['Description']])
var_dict[['Type']].join(variables.set_index('Variable'))
print('These look ok, 0 is a valid engineSize')
Train[ Train['engineSize'] == 0].sample(10)
def plot_cdf(series, ax = None) :
if not ax :
_fig, ax = plt.subplots()
ax.plot(empiricaldist.Cdf.from_seq(series), label=series.name)
norm_dist = scipy.stats.norm(np.mean(series), np.std(series))
xs = np.linspace(np.min(series), np.max(series))
ax.plot(xs, norm_dist.cdf(xs), ':', label='normal')
ax.set_xlabel(series.name)
ax.legend()
num_charts = len(NUMERIC)
num_cols = 2
num_rows = math.ceil(num_charts / 2)
fig, _ax = plt.subplots( num_rows, num_cols,
constrained_layout=True, figsize=(15,10), )
for i, ax in enumerate(fig.axes) :
if i >= num_charts :
break
plot_cdf( Train[NUMERIC[i]], ax)
_ = plt.suptitle('Cumulative Distribution Functions of Numeric Features', weight='bold')
###Output
_____no_output_____
###Markdown
year and mileage follow an exponential distribution. engineSize and tax look like categories. tax, mpg, and engineSize have positive outliers. year has negative outliers.
###Code
series = Train['engineSize']
plot_cdf(series)
iqr = np.quantile(series, 0.75) - np.quantile(series, 0.25)
fence = np.quantile(series, 0.75) + 3*iqr
plt.axvline( x=fence, ls='--', color='red', label='upper outer fence')
plt.legend()
plt.title('CDF', weight='bold')
outlier_percent = len( Train[ series > fence ]) / len(Train) * 100
print (f'{outlier_percent:.3f}% of the data are extreme upper outliers (> {fence:.3f}) for {series.name}')
series = Train['mileage'].apply(lambda x : np.log(x) )
series.name = 'log(mileage)'
plot_cdf(series)
iqr = np.quantile(series, 0.75) - np.quantile(series, 0.25)
fence = np.quantile(series, 0.25) - 3*iqr
print(f'Fence: {fence:.3f}')
plt.axvline( x=fence, ls='--', color='red', label='lower outer fence')
plt.legend()
plt.title('CDF', weight='bold')
plt.xlabel('log(mileage)')
filter = Train['mileage'].apply(lambda x : np.log(x) ) < fence
outlier_percent = len( Train[ filter ]) / len(Train) * 100
print (f'{outlier_percent:.3f}% of the data are extreme lower outliers (< {fence:.3f}) for {series.name}')
print('Outliers may skew aggregations and can create bias in the training model. Remove the outliers that make up a small percentage of the data.')
filter = (Train['engineSize'] <= 3.5) & (Train['mileage'] <= 127000)
Train = Train[ filter ]
y_train = Train[TARGET]
Train.shape
###Output
Outliers may skew aggregations and can create bias in the training model. Remove the outliers that make up a small percentage of the data.
###Markdown
Bivariate Analysis Let's see if the categorical variables have any correlation with the target.
###Code
def violin_plot_columns_against_target(df_cat_features, y_train) :
columns = df_cat_features.columns
max_categories = 10
num_cols = 1
num_rows = math.ceil( len(columns) / 1)
fig, _axes = plt.subplots(num_rows, num_cols, figsize=(15, 10), constrained_layout=True, sharey=True)
fig.suptitle('Distribution of categorical variables against price', weight='bold')
for i, ax in enumerate(fig.axes) :
column_name = df_cat_features.columns[i]
if column_name == TARGET:
continue
df_plot = pd.concat([df_cat_features, y_train], axis=1)
title = column_name
if df_plot[column_name].nunique() > max_categories :
title += f' (Top {max_categories} of {df_plot[column_name].nunique()} categories)'
df_plot = df_plot[ df_plot[column_name].isin(
df_plot[column_name].value_counts(
)[:max_categories].index.tolist()
) ]
sns.violinplot(
x = column_name,
y = TARGET,
data = df_plot,
ax = ax,
inner='quartile',
)
ax.xaxis.set_tick_params(rotation=45)
ax.set_title(title)
ax.set_ylabel(TARGET)
coeff, p = scipy.stats.pearsonr(
OrdinalEncoder().fit_transform(
df_plot[[column_name]]
).flatten(),
df_plot[TARGET],
)
if p < 0.1 :
ax.set_xlabel( f' Corr coeff {coeff:0.3} p {p:.3e}', loc='left')
else :
ax.set_xlabel('')
violin_plot_columns_against_target(Train[CATEGORICAL], y_train)
###Output
_____no_output_____
###Markdown
The variable model has a correlation with the target. For transmission, manual has a lower median and IQR than the others. For fuelType, hybrid has a higher median and IQR than the others. Let's see if the numeric variables have any correlation with the target.
###Code
def scatter_plot_columns_against_target(numeric_df, y_train) :
columns = numeric_df.columns
num_cols = 3
num_rows = math.ceil( len(columns) / 3)
fig, _axes = plt.subplots(num_rows, num_cols,
figsize=(15, 5 * num_rows), constrained_layout=True, sharey=True)
fig.suptitle('Distribution of numeric variables against price', weight='bold')
color=iter( plt.cm.tab10( np.linspace(0,1, len(columns))))
for i, ax in enumerate(fig.axes) :
if i >= len(columns):
break
column_name = numeric_df.columns[i]
x = numeric_df[column_name]
# TODO outliers should have been removed, but if not they have to here
ax.plot(x, y_train, '.', alpha=0.3, color=next(color))
coeff, p = scipy.stats.pearsonr(x.to_numpy(), y_train)
if p < 0.1 :
ax.set_xlabel( f' Corr coeff {coeff:0.3} p {p:.3}', loc='left')
ax.set_title(column_name)
ax.xaxis.set_tick_params(rotation=45)
ax.set_ylabel('price')
scatter_plot_columns_against_target(Train[NUMERIC], y_train)
###Output
_____no_output_____
###Markdown
There is a strong negative correlation between year and price. There is a strong negative correlation between mileage and price. There is a medium positive correlation between tax and price. There is a strong negative correlation between mpg and price. There is a medium positive correlation between engineSize and price.
###Code
def plot_corr(df_numeric, cutoff = 0) :
corr = df_numeric.corr()
for coord in zip(*np.tril_indices_from(corr, k=-1) ): # Simplify by emptying all the data below the diagonal
corr.iloc[coord[0], coord[1]] = np.NaN
corr_plot = corr[ corr.apply(lambda x : abs(x) >= cutoff) ]
fig_height = math.ceil(len(corr.columns) / 2)
plt.figure(figsize=(fig_height + 4, fig_height))
g = sns.heatmap( corr_plot,
cmap='viridis', vmax=1.0, vmin=-1.0, linewidths=0.1,
annot=True, annot_kws={"size": 8}, square=True)
plt.xticks(rotation=45)
plt.title('Correlation matrix (weak correlations masked)')
ord_arr = OrdinalEncoder().fit_transform( Train[ CATEGORICAL] )
all_numeric = pd.concat([
Train[NUMERIC],
pd.DataFrame( ord_arr, columns=CATEGORICAL, ),
], axis=1
)
plot_corr(all_numeric, cutoff = 0.3)
print('There is some multicollinearity between the variables.')
def list_correlations(df_numeric, coeff_cutoff = 0.3) :
corr = df_numeric.corr()
for coord in zip(*np.tril_indices_from(corr, k=-1) ): # Simplify by emptying all the data below the diagonal
corr.iloc[coord[0], coord[1]] = np.NaN
df_corr_stack = (corr
.stack() # Stack the data and convert to a data frame
.to_frame()
.reset_index()
.rename(columns={'level_0':'feature1',
'level_1':'feature2',
0:'correlation'}))
df_corr_stack['abs_correlation'] = df_corr_stack.correlation.abs()
df_large_corr_stack = df_corr_stack.loc[ np.where(
(df_corr_stack['abs_correlation'] >= coeff_cutoff) &
(df_corr_stack['abs_correlation'] != 1)
)]
if df_large_corr_stack.empty :
print('*No strong correlation or anti-correlations*')
result = df_corr_stack
else :
result = df_large_corr_stack
result = result.sort_values('abs_correlation', ascending=False,
).drop('abs_correlation', axis = 1)
return result
ord_arr = OrdinalEncoder().fit_transform( Train[ CATEGORICAL] )
all_numeric = pd.concat([
Train[NUMERIC],
pd.DataFrame( ord_arr, columns=CATEGORICAL, ),
], axis=1
)
print('There is some multicollinearity between the variables.')
list_correlations(all_numeric)
###Output
There is some multicollinearity between the variables.
###Markdown
Multivariate Analysis Let's see if we can drill more into the data to tighten the relationships.
###Code
filter = (Train['engineSize'] == 1.4)
df_plot = Train[filter]
plt.figure(figsize=(20, 5))
sns.scatterplot( x=df_plot['mileage'].apply( lambda x : np.log(x)),
y=df_plot['price'],
hue=df_plot['model'],
palette='bright', alpha=0.3)
plt.xlim(8,12)
plt.xlabel('log(mileage)')
plt.title('Distribution for Cars with engineSize == 1.4')
print('We plot log(mileage) because the mileage distribution seems to be exponential.')
print('When controlling to cars with engineSize 1.4, there are tighter anti-correlations between log(mileage) and price.')
print('There may be even more differentiation by model.')
filter = (Train['engineSize'] == 1.4) & (Train['model'] == ' Q3')
df_plot = Train[filter]
plt.figure(figsize=(15, 5))
transformed_mileage = df_plot['mileage'].apply( lambda x : np.log(x) )
sns.scatterplot( x=transformed_mileage, y=df_plot['price'],
palette='bright', alpha=0.5, label='data')
plt.xlabel('log(mileage)')
regression_x = np.array([8, 11])
res = scipy.stats.linregress( transformed_mileage, df_plot['price'])
plt.plot(regression_x, res.intercept + res.slope*regression_x,
'r--', label=f'regression line')
plt.title(f'Regression line with slope {res.slope:.3} p {res.pvalue:.3}')
plt.legend()
print('When restricting to cars with model Q3, we see a linear negative correlation between log(mileage) and price.')
print('We can plot a regression line to show the linear collinearity.')
###Output
When restricting to cars with model Q3, we see a linear negative correlation between log(mileage) and price.
We can plot a regression line to show the linear collinearity.
###Markdown
exploratory data analysis
###Code
import pandas as pd
pd.__version__
import statsmodels.api as sm
import numpy as np
file = "../nowdata/traincf_2015_15_250_counts.pkl"
rawdta = pd.read_pickle(file)
rawdta
list(rawdta.columns)
rawdta['WEBTEXT'].head()
def is_empty_list(series):
    # Despite its name, this returns a boolean mask that is True for rows whose
    # list is NON-empty, so it is used below to keep rows that have web text.
    lst = []
    for element in series:
        if len(element) == 0:
            lst.append(False)
        else:
            lst.append(True)
    return lst
# Note: dropna returns a new DataFrame; without assigning the result (or passing inplace=True) this line has no effect
rawdta.dropna(axis = 0, how = 'any')
rawdta['constant'] = 1
rawdta = rawdta.loc[is_empty_list(rawdta['WEBTEXT']), :]
len(rawdta['ESS_STR'])
X = rawdta[['% Total Population: White Alone',"% Population 25 Years and Over: Bachelor's Degree",\
'% Civilian Population in Labor Force 16 Years and Over: Unemployed',\
'% Families: Income in Below Poverty Level','% Total Population: Foreign Born',\
'Population Density (Per Sq. Mile)', 'constant']]
i = 0
for index, row in rawdta.iterrows():
aaa = row.isnull()
for e in range(0, len(aaa)):
if aaa[e] == False:
i += 1
break
print(i)
Y = rawdta['ESS_STR']
X = rawdta[['% Total Population: White Alone',"% Population 25 Years and Over: Bachelor's Degree",\
'% Civilian Population in Labor Force 16 Years and Over: Unemployed',\
'% Families: Income in Below Poverty Level','% Total Population: Foreign Born',\
'Population Density (Per Sq. Mile)', 'constant']]
results = sm.OLS(Y, X).fit()
results.summary()
'% Total Population: White Alone'
"% Population 25 Years and Over: Bachelor's Degree"
'% Civilian Population in Labor Force 16 Years and Over: Unemployed'
'% Families: Income in Below Poverty Level'
'% Total Population: Foreign Born'
'Population Density (Per Sq. Mile)'
rawdta[rawdta['ESS_STR'] == np.inf]
###Output
Empty DataFrame
Columns: [CMO_NAME, CMO_MEMSUM, SCH_NAME, CMO_STATE, CMO_SCHNUM, CMO_URL, CMO_NUMSTATES, CMO_ALLSTATES, CMO_SECTOR, CMO_NUMSTUDENTS_CREDO17, CMO_TYPE, CMO_WEBTEXT, SURVYEAR, FIPST, STABR, SEANAME, LEAID, ST_LEAID, SCHID, ST_SCHID, NCESSCH, MSTREET1, MSTREET2, MSTREET3, MCITY, MSTATE, MZIP, MZIP4, PHONE, LSTREET1, LSTREET2, LSTREET3, LCITY, LSTATE, LZIP, LZIP4, UNION, OUT_OF_STATE_FLAG, SCH_TYPE_TEXT, SCH_TYPE, RECON_STATUS, GSLO, GSHI, LEVEL, VIRTUAL, BIES, SY_STATUS_TEXT, SY_STATUS, UPDATED_STATUS_TEXT, UPDATED_STATUS, EFFECTIVE_DATE, CHARTER_TEXT, G13OFFERED, AEOFFERED, UGOFFERED, NOGRADES, CHARTAUTH1, CHARTAUTHN1, CHARTAUTH2, CHARTAUTHN2, IGOFFERED, WEBSITE, FRELCH, REDLCH, AE, TOTAL, AM, AMALM, AMALF, AS, ASALM, ASALF, HI, HIALM, HIALF, BL, BLALM, BLALF, WH, WHALM, WHALF, HP, HPALM, HPALF, TR, TRALM, TRALF, TITLEI_TEXT, TITLEI_STATUS, STITLEI, SHARED_TIME, MAGNET_TEXT, NSLPSTATUS_TEXT, NSLPSTATUS_CODE, NAME, OPSTFIPS, LSTREE, STFIP15, CNTY15, NMCNTY15, ...]
Index: []
[0 rows x 402 columns]
###Markdown
Fields with nulls: there are two features with a significant number of null values, namely MonthlyIncome and NumberOfDependents. Distinct values:
###Code
cat_columns = ['age','NumberOfTime30-59DaysPastDueNotWorse',
'NumberOfOpenCreditLinesAndLoans', 'NumberOfTimes90DaysLate',
'NumberRealEstateLoansOrLines', 'NumberOfTime60-89DaysPastDueNotWorse',
'NumberOfDependents']
count_unique_values(orig_data, cat_columns)
# number of bins for corresponding hist.
%autoreload
vet_bins = [10, 30, 30, 30, 30, 30, 30]
print('\n Plot n. 1')
plot_hist_numerical(orig_data, cat_columns, vet_bins)
show_group_stats_viz(orig_data, 'SeriousDlqin2yrs');
###Output
SeriousDlqin2yrs
0 139974
1 10026
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
The dataset is heavily imbalanced: only 6.7% of the samples are positive (which is fairly expected).
###Code
# analyze the test dataset
FILE_TEST = 'cs-test.csv'
orig_test = pd.read_csv(FILE_TEST)
orig_test.head()
orig_test.isnull().sum()
###Output
_____no_output_____
###Markdown
In the test dataset as well, MonthlyIncome and NumberOfDependents contain NaN values, so the same preprocessing applied to the training set must also be applied to the test set.
###Code
orig_data.describe().transpose()
df_stats = orig_data.describe().transpose()
# mean_mi = df_stats.loc['MonthlyIncome', 'mean']
# mean_nod = df_stats.loc['NumberOfDependents', 'mean']
# for imputation I use:
# NumberOfDependents: the mode = 0
# MonthlyIncome: the median = 50th percentile
mode_nod = 0
med_mi = df_stats.loc['MonthlyIncome', '50%']
# impute
# make a copy
df = orig_data.copy()
# impute MonthlyIncome
condition = (df['MonthlyIncome'].isna())
df['isna_mi'] = 0
df.loc[condition, 'isna_mi'] = 1
df.loc[condition, 'MonthlyIncome'] = med_mi
# impute NumberOfDependents
condition = (df['NumberOfDependents'].isna())
df['isna_nod'] = 0
df.loc[condition, 'isna_nod'] = 1
df.loc[condition, 'NumberOfDependents'] = mode_nod
df.info()
# save the transformed dataset
df.to_csv('cs-training-nonull.csv', index=False)
###Output
_____no_output_____
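###Markdown
As noted above, the same imputation has to be applied to the test set. A minimal sketch, reusing `med_mi` and `mode_nod` computed from the training data (the output filename is illustrative):
###Code
# Apply the training-set imputation values to the test set.
df_test = orig_test.copy()

condition = (df_test['MonthlyIncome'].isna())
df_test['isna_mi'] = 0
df_test.loc[condition, 'isna_mi'] = 1
df_test.loc[condition, 'MonthlyIncome'] = med_mi

condition = (df_test['NumberOfDependents'].isna())
df_test['isna_nod'] = 0
df_test.loc[condition, 'isna_nod'] = 1
df_test.loc[condition, 'NumberOfDependents'] = mode_nod

df_test.to_csv('cs-test-nonull.csv', index=False)
###Output
_____no_output_____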
###Markdown
Cleanup
###Code
train['Discussion'] = train['Discussion'].apply(lambda x: " ".join(x.lower() for x in x.split()))
train['Discussion'].head()
train['Discussion'] = train['Discussion'].str.replace(r'[^\w\s]', '', regex=True)
train['Discussion'].head()
train['Discussion'] = train['Discussion'].apply(lambda x: " ".join(x for x in x.split() if x not in stop))
train['Discussion'].head()
frequent_words = pd.Series(' '.join(train['Discussion']).split()).value_counts()[:5]
frequent_words = list(frequent_words.index)
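# The most frequent words collected above can then be removed in the same way as the
# stopwords (a sketch; whether to drop them depends on whether they carry signal).
train['Discussion'] = train['Discussion'].apply(lambda x: " ".join(w for w in x.split() if w not in frequent_words))
train['Discussion'].head()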
%pip install textblob
import textblob
tb = textblob.TextBlob
train['Discussion'][:10].apply(lambda x: str(tb(x).correct()))
train['Discussion'].head()
st = nltk.stem.PorterStemmer()
train['Discussion'].apply(lambda x: " ".join([st.stem(word) for word in x.split()]))
train['Discussion'].head()
nltk.download('wordnet')
train['Discussion'] = train['Discussion'].apply(lambda x: " ".join([textblob.Word(word).lemmatize() for word in x.split()]))
train["N-grams"] = list(tb(train['Discussion']).ngrams(3))
train['Discussion'].head()
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
unicorn.rename(columns={"Select Inverstors": "Select Investors"},inplace=True)
# checking the datatypes of the columns
# (note: type("Company") would only return the type of the string literal itself,
#  so we inspect the DataFrame's dtypes instead)
print(unicorn.dtypes)
###Output
_____no_output_____
###Markdown
Correcting the datatype Updating "Valuation ($B)" Column
###Code
# Getting rid of ($)
# Converting datatype str to float
unicorn["Valuation ($B)"].replace({"\$": ""}, inplace=True)
unicorn["Valuation ($B)"] = unicorn["Valuation ($B)"].replace({"\$": " "}, regex=True)
unicorn["Valuation ($B)"] = unicorn["Valuation ($B)"].astype(float)
# unicorn
###Output
_____no_output_____
###Markdown
Updating "Date Joined" column
###Code
# Converting datatype str to datetime
pd.to_datetime(unicorn["Date Joined"])
###Output
_____no_output_____
###Markdown
Updating "Total Raised" column
###Code
# Getting rid of ($)
unicorn["Total Raised"] = unicorn["Total Raised"].replace({"\$": " "}, regex=True)
# Slicing ("B" and "M") from the str
new_total_raised = unicorn["Total Raised"].str[-1::]
# Adding new column and adding the value
unicorn["Total Raised in Billion or Million"] = unicorn["Total Raised"].str[-1::]
# Replacing the value
unicorn["Total Raised in Billion or Million"].replace({"B": "Billion"}, inplace=True)
unicorn["Total Raised in Billion or Million"].replace({"M": "Million"}, inplace=True)
unicorn.rename(columns={"Total Raised": "Total Raised ($)"}, inplace=True)
#Getting rid of ("B" and "M") from total raised column
unicorn["Total Raised ($)"] = unicorn["Total Raised ($)"].map(lambda x: x.rstrip("BM"))
# unicorn
# Replacing values
# Converting datatype from str to float
unicorn["Total Raised ($)"] = unicorn["Total Raised ($)"].replace({"None": "0"}, inplace=True)
unicorn["Total Raised ($)"] = unicorn["Total Raised ($)"].astype(float)
# unicorn
###Output
_____no_output_____
###Markdown
Updating "Investors count" column
###Code
# Replacing values
# Converting datatype from str to float
unicorn["Investors Count"] = unicorn["Investors Count"].replace({"None": "0"}, regex=True)
unicorn["Investors Count"] = unicorn["Investors Count"].astype(float)
# unicorn
###Output
_____no_output_____
###Markdown
Updating "Industry" column
###Code
# Checking for unique values
unicorn["Industry"].value_counts().reset_index()
# Replacing the value
unicorn.replace({"Artificial Intelligence": "Artificial intelligence", "Finttech": "Fintech"}, inplace=True)
unicorn["Industry"].value_counts().reset_index()
###Output
_____no_output_____
###Markdown
Updating "Deal Terms" column
###Code
# Checking the unique value and replacing them
# Converting datatype from str to float
unicorn["Deal Terms"].unique()
unicorn["Deal Terms"] = unicorn["Deal Terms"].replace({"None": "0"})
unicorn["Deal Terms"] = unicorn["Deal Terms"].astype(float)
# Checking for unique values
unicorn["Portfolio Exits"].value_counts().reset_index()
# Replacing values
# Converting datatype from str to float
unicorn["Portfolio Exits"] = unicorn["Portfolio Exits"].replace({"None": "0"},)
unicorn["Portfolio Exits"] = unicorn["Portfolio Exits"].astype(float)
# Checking the number of unique contries
unicorn["Country"].value_counts().reset_index()
###Output
_____no_output_____
###Markdown
A. Individual Variables: Physical Measurements Most of the physical measurements appear to be bell-shaped and roughly normally distributed, as we might expect. `Body Fat` is an obvious exception, and the hand measurements have some notable outliers. The height variables are somewhat bimodal, with all other variables being mostly unimodal. We'll take a look at each of the unusual features.
###Code
import warnings
warnings.filterwarnings("ignore")
physicals = ['Height (No Shoes)', 'Height (With Shoes)', 'Wingspan', 'Standing reach',
'Weight', 'Body Fat', 'Hand (Length)', 'Hand (Width)']
fig = plt.figure(figsize = (15, 5))
for i in range(1, 9):
plt.subplot(2, 4, i)
# plt.hist(physicals[i - 1], data = cc[cc[physicals[i - 1]].notnull()], histtype = 'bar', ec = 'black')
sns.violinplot(x = physicals[i - 1], data = cc[cc[physicals[i - 1]].notnull()], color = 'lightblue')
# sns.swarmplot(x = physicals[i - 1], data = cc[cc[physicals[i - 1]].notnull()], alpha = 0.2)
plt.subplots_adjust(hspace = 0.5)
plt.title(physicals[i - 1])
if physicals[i - 1] == 'Weight':
plt.xlabel('lb')
elif physicals[i - 1] == 'Body Fat':
plt.xlabel('%')
else:
plt.xlabel('in')
plt.show()
###Output
_____no_output_____
###Markdown
First, let's take a closer look at `Body Fat`, specifically the players that have a body fat percentage greater than 13, the upper end of what is expected of most athletes. Unsurprisingly, the players with the highest body fat tend to be picked lower, since teams may view them as "overweight" or "unathletic"; indeed, most of these players were fairly low picks and did not end up playing much in the NBA. The notable exception is DeMarcus Cousins, an All-Star center who dominated possibly because he knew how to use his weight and bulk.
###Code
cc[cc['Body Fat'] > 13][['Player', 'Pk', 'Pos', 'Weight', 'Body Fat', 'G', 'MPG']].sort_values('Body Fat')
###Output
_____no_output_____
###Markdown
Second, let's look at `Hand (Length)` and `Hand (Width)`.
###Code
cc[cc['Hand (Length)'] > 9.5]
cc.iloc[np.abs(stats.zscore(cc['Hand (Width)'].dropna())) > 3, :]
###Output
_____no_output_____
###Markdown
B. Individual Variables: Athletic Measurements
###Code
athletics = ['Vertical (Max)', 'Vertical (Max Reach)', 'Vertical (No Step)',
'Vertical (No Step Reach)','Agility', 'Sprint']
fig = plt.figure(figsize = (15, 5))
for i in range(1, 7):
plt.subplot(2, 3, i)
# plt.hist(athletics[i - 1], data = cc[cc[athletics[i - 1]].notnull()], histtype = 'bar', ec = 'black')
    sns.violinplot(x = athletics[i - 1], data = cc[cc[athletics[i - 1]].notnull()], color = 'lightgray', bw = 0.25)
plt.subplots_adjust(hspace = 0.5)
plt.title(athletics[i - 1])
if athletics[i - 1] in ['Agility', 'Sprint']:
plt.xlabel('s')
else:
plt.xlabel('in')
plt.show()
###Output
_____no_output_____
###Markdown
Notes: time_id is not sequential, so what is it actually for? Identifying outliers?
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
import ipywidgets as widgets
from glob import glob
from ipywidgets import interact
from IPython.display import display, clear_output
warnings.filterwarnings("ignore")
%matplotlib inline
# train_df is aggregated to stock x time
# trade_book_df is aggregated to stock x time x price??
# order_book is the fact table here
list_order_book_file_train = glob(r"data/book_train.parquet/*")
train_df = pd.read_csv(r"data/train.csv")
trade_book_df = pd.read_parquet(r"data/trade_train.parquet")
testing_order_book = pd.read_parquet(list_order_book_file_train[0])
print(train_df.shape)
print(trade_book_df.shape)
print(testing_order_book.shape)
trade_book_df.head()
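# Quick check of the note above: are the time_id values consecutive?
# (a sketch; uses the train_df loaded above)
time_ids = np.sort(train_df["time_id"].unique())
print(time_ids.min(), time_ids.max(), len(time_ids), (np.diff(time_ids) == 1).all())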
train_df_pivot = train_df.pivot_table("target", "time_id", "stock_id").reset_index(drop=False)
train_df_pivot.columns = ["stock_" + str(col) for col in train_df_pivot.columns]
train_df_pivot.head()
###Output
_____no_output_____
###Markdown
Train: Drawing Carpets
###Code
# Useless
df_corr = train_df_pivot.drop("stock_time_id", axis=1).corr()
f = plt.figure(figsize=(8, 6))
plt.matshow(df_corr, fignum=f.number)
cb = plt.colorbar()
cb.ax.tick_params(labelsize=14)
plt.title('Stock Correlation Matrix', fontsize=16)
###Output
_____no_output_____
###Markdown
Train: KDE plots for response
###Code
# Does this make sense? Do we not bother to split df before doing this? nah fk it
# Does this mean there is a possibility for a parametric method? glm log-link, gamma assumption?
sns.distplot(
train_df["target"],
hist=True,
kde=True,
bins=int(180/5),
color = "darkblue",
hist_kws={ "edgecolor":"black" },
kde_kws={ "linewidth": 3 }
)
def plot_stock_kde(stock_id: int) -> None:
_df = train_df.loc[train_df["stock_id"]==stock_id, "target"]
sns.distplot(
_df,
hist=True,
kde=True,
bins=int(180/5),
color = "darkblue",
hist_kws={ "edgecolor":"black" },
kde_kws={ "linewidth": 3 }
)
interact(plot_stock_kde, stock_id=widgets.IntSlider(value=0))
###Output
_____no_output_____
###Markdown
Trade:
###Code
trade_book_df.head()
trade_book_df.loc[trade_book_df["stock_id"]==0, :]
df = pd.read_parquet(list_order_book_file_train[0])
df
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis* Describing the dataAttribute Information:This research employed a binary variable, default payment (Yes = 1, No = 0), as the response variable. This study reviewed the literature and used the following 23 variables as explanatory variables:X1: Amount of the given credit (NT dollar): it includes both the individual consumer credit and his/her family (supplementary) credit.X2: Gender (1 = male; 2 = female).X3: Education (1 = graduate school; 2 = university; 3 = high school; 4 = others).X4: Marital status (1 = married; 2 = single; 3 = others).X5: Age (year).X6 - X11: History of past payment. We tracked the past monthly payment records (from April to September, 2005) as follows: X6 = the repayment status in September, 2005; X7 = the repayment status in August, 2005; . . .;X11 = the repayment status in April, 2005. The measurement scale for the repayment status is: -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; . . .; 8 = payment delay for eight months; 9 = payment delay for nine months and above.X12-X17: Amount of bill statement (NT dollar). X12 = amount of bill statement in September, 2005; X13 = amount of bill statement in August, 2005; . . .; X17 = amount of bill statement in April, 2005.X18-X23: Amount of previous payment (NT dollar). X18 = amount paid in September, 2005; X19 = amount paid in August, 2005; . . .;X23 = amount paid in April, 2005. * Datailing the main objectives of the analysis* Variations of classifier models and specifies which one is the model that best suits the main objective(s) of this analysis* Key findings related to the main objective(s) of the analysis?* Highlight possible flaws in the model and a plan of action to revisit this analysis with additional data or different predictive modeling techniques 0. Imports
###Code
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import f1_score
###Output
_____no_output_____
###Markdown
1. Load data
###Code
data_path = os.path.join('data', 'default_credit_card_clients.csv')
data_raw = pd.read_csv(data_path, skiprows=1)
data_raw.head()
data_raw.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30000 entries, 0 to 29999
Data columns (total 24 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 LIMIT_BAL 30000 non-null int64
1 SEX 30000 non-null int64
2 EDUCATION 30000 non-null int64
3 MARRIAGE 30000 non-null int64
4 AGE 30000 non-null int64
5 PAY_0 30000 non-null int64
6 PAY_2 30000 non-null int64
7 PAY_3 30000 non-null int64
8 PAY_4 30000 non-null int64
9 PAY_5 30000 non-null int64
10 PAY_6 30000 non-null int64
11 BILL_AMT1 30000 non-null int64
12 BILL_AMT2 30000 non-null int64
13 BILL_AMT3 30000 non-null int64
14 BILL_AMT4 30000 non-null int64
15 BILL_AMT5 30000 non-null int64
16 BILL_AMT6 30000 non-null int64
17 PAY_AMT1 30000 non-null int64
18 PAY_AMT2 30000 non-null int64
19 PAY_AMT3 30000 non-null int64
20 PAY_AMT4 30000 non-null int64
21 PAY_AMT5 30000 non-null int64
22 PAY_AMT6 30000 non-null int64
23 default payment next month 30000 non-null int64
dtypes: int64(24)
memory usage: 5.5 MB
###Markdown
2. Previous Exploratory Analysis
###Code
data_raw.describe().T
###Output
_____no_output_____
###Markdown
Depending on the model, the data should be scaled.
###Code
target = data_raw['default payment next month'].value_counts()
print(f"Default payment next month?\nNo: {target[0]}\nYes: {target[1]}")
###Output
Default payment next month?
No: 23364
Yes: 6636
###Markdown
It's an imbalanced dataset. 3. Load Train sets
###Code
X_train = pd.read_parquet('data/x_train.parquet')
y_train = pd.read_parquet('data/y_train.parquet')
X_train.head()
y_train.head()
X_train.info()
y_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 21000 entries, 11018 to 27126
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 default payment next month 21000 non-null int64
dtypes: int64(1)
memory usage: 844.2 KB
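###Markdown
The train/test parquet files loaded above were prepared outside this notebook. A minimal sketch of how such a split could be produced from `data_raw` (assuming a 70/30 split stratified on the imbalanced target with random_state 42; the exact parameters originally used are not shown here):
###Code
from sklearn.model_selection import train_test_split

target_col = 'default payment next month'
features = data_raw.drop(columns=[target_col])
labels = data_raw[[target_col]]
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.3, stratify=labels[target_col], random_state=42)
print(X_tr.shape, X_te.shape)
# Writing the files is left commented out to avoid overwriting the existing parquets:
# X_tr.to_parquet('data/x_train.parquet'); y_tr.to_parquet('data/y_train.parquet')
# X_te.to_parquet('data/x_test.parquet'); y_te.to_parquet('data/y_test.parquet')
###Output
_____no_output_____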
###Markdown
4. Exploratory Data Analysis
###Code
sns.set_style(style='white')
X_train.hist(figsize=(15, 10), bins=20, grid=False)
plt.show()
X_train.skew()
train = X_train.merge(y_train, left_index=True, right_index=True)
corr = train.corr()
corr.iloc[:, -1]
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots()
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=plt.get_cmap('coolwarm'), center=0,
square=True, linewidths=.3, cbar_kws={"shrink": .6})
plt.savefig('./report/heatmap.png', dpi=400)
train['pay'] = X_train.iloc[:, 5:11].sum(axis=1)/6
train['bill_amt'] = X_train.iloc[:, 11:17].sum(axis=1)/6
corr = train.corr()
corr.iloc[:, -3]
rndf_clf = RandomForestClassifier(random_state=42, n_jobs=-1)
extt_clf = ExtraTreesClassifier(random_state=42, n_jobs=-1)
params = {
'n_estimators': [200, 300],
'max_features': ['log2', None],
'max_leaf_nodes': [15, None]
}
estimators = {}
for estimator in [rndf_clf, extt_clf]:
estimators[estimator.__class__.__name__] = GridSearchCV(estimator=estimator, param_grid=params,
n_jobs=-1, cv=5, return_train_score=True,
scoring='f1').fit(X_train, np.ravel(y_train))
estimators.keys()
for estimator in estimators.keys():
print(f"{estimator} --- {estimators[estimator].best_score_}")
f1_score(np.ravel(y_train), np.ones(21_000))
# Note: this cell relies on X_train_scaled, which is only created further down
# (with StandardScaler); run that cell first when executing top-to-bottom.
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

for estimator in estimators.keys():
    y_train_pred = cross_val_predict(estimators[estimator].best_estimator_, X_train_scaled, np.ravel(y_train), cv=3)
    print(f'{estimator}')
    print(confusion_matrix(np.ravel(y_train), y_train_pred))
for estimator in estimators.keys():
print(f"{estimator} --- {estimators[estimator].best_params_}")
list(zip(X_train.columns, estimators['ExtraTreesClassifier'].best_estimator_.feature_importances_))
list(zip(X_train.columns, estimators['RandomForestClassifier'].best_estimator_.feature_importances_))
X_train['log_limit_bal'] = np.log1p(X_train['LIMIT_BAL'])
X_train.drop(labels=['LIMIT_BAL'], axis=1, inplace=True)
train = X_train.merge(y_train, left_index=True, right_index=True)
train.corr()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
svm_clf = LinearSVC(random_state=42, loss='hinge')
log_clf = LogisticRegression(random_state=42, n_jobs=-1)
knn_clf = KNeighborsClassifier(n_jobs=-1)
params_svm = {
'C': [10e-4, 10e-3, 10e-2, 10e-1, 1]
}
params_log = {
'C': [10e-4, 10e-3, 10e-2, 10e-1, 1]
}
params_knn = {
'n_neighbors': [4, 5, 6],
'weights': ['uniform', 'distance']
}
estimators = {}
for estimator in [(svm_clf, params_svm), (log_clf, params_log), (knn_clf, params_knn)]:
estimators[estimator[0].__class__.__name__] = GridSearchCV(estimator=estimator[0], param_grid=estimator[1],
n_jobs=-1, cv=5, return_train_score=True,
scoring='f1').fit(X_train_scaled , np.ravel(y_train))
for estimator in estimators.keys():
print(f"{estimator} --- {estimators[estimator].best_params_}")
for estimator in estimators.keys():
print(f"{estimator} --- {estimators[estimator].best_score_}")
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.metrics import confusion_matrix
for estimator in estimators.keys():
y_train_pred = cross_val_predict(estimators[estimator].best_estimator_, X_train_scaled, np.ravel(y_train), cv=3)
print(f'{estimator}')
print(confusion_matrix(np.ravel(y_train), y_train_pred))
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
y_train_pred = cross_val_predict(ada_clf, X_train, np.ravel(y_train), cv=3)
confusion_matrix(np.ravel(y_train), y_train_pred)
from sklearn.ensemble import GradientBoostingClassifier
gbrt = GradientBoostingClassifier(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
y_train_pred = cross_val_predict(gbrt, X_train, np.ravel(y_train), cv=3)
confusion_matrix(np.ravel(y_train), y_train_pred)
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
svm_clf = LinearSVC(random_state=42, loss='hinge', C=0.1)
log_clf = LogisticRegression(random_state=42, n_jobs=-1, C=1)
knn_clf = KNeighborsClassifier(n_jobs=-1, n_neighbors= 5, weights= 'distance')
voting_clf = VotingClassifier(estimators=[('svm', svm_clf), ('log', log_clf), ('knn', knn_clf)],
voting='hard')
y_train_pred = cross_val_predict(voting_clf, X_train_scaled, np.ravel(y_train), cv=3)
confusion_matrix(np.ravel(y_train), y_train_pred)
y_score = cross_val_score(voting_clf, X_train_scaled, np.ravel(y_train), cv=3, scoring='f1')
y_score.mean()
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
extt_clf = ExtraTreesClassifier(random_state=42, n_jobs=-1, max_features= None, max_leaf_nodes= 15, n_estimators= 300)
extt_clf.fit(X_train, np.ravel(y_train))
X_test = pd.read_parquet('data/x_test.parquet')
y_test = pd.read_parquet('data/y_test.parquet')
y_test_pred = extt_clf.predict(X_test)
cm = confusion_matrix(np.ravel(y_test), y_test_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
disp.plot()
plt.savefig('./report/confusion.png', dpi=400)
###Output
_____no_output_____
###Markdown
**Q1. Exploratory Data Analysis (EDA)** **OBJECTIVE**This Jupyter Notebook will seek to conduct an EDA on the dataset from MN Department of Transportation and present its findings of the analysis at the end. **GENERAL OVERVIEW OF EDA** **1) CHECKING IF THE DATA IS INTUITIVE**Using domain knowledge, we will analyse the data and pick out areas that might require further analysis (e.g. if data seems incorrect, identify outliers etc.) **2) UNIVARIATE AND BIVARIATE ANALYSIS**We will analyse each feature in detail and conduct feature cleaning/engineering (if needed).We will analyze pairs of features to obtain further insight on the relationship between them and conduct feature cleaning/engineering (if needed). **3) SUMMARY OF ANALYSIS AND IMPLICATIONS**We will then summarize our findings above and identify things which we can do based on our findings.
###Code
# Importing the libraries
# System
import io, os, sys, datetime, math, calendar
# Data Manipulation
import numpy as np
import pandas as pd
# Data Preprocessing
import sklearn
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
from sklearn.compose import ColumnTransformer
# Machine Learning
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.metrics import confusion_matrix, accuracy_score,recall_score,precision_score,f1_score,r2_score,explained_variance_score
from xgboost import XGBClassifier, XGBRegressor
# Visualisation
%matplotlib inline
from matplotlib import pyplot as plt
import matplotlib.dates as mdates
import seaborn as sn
###Output
The scikit-learn version is 0.21.3.
###Markdown
**1) CHECKING IF THE DATA IS INTUITIVE** **Summary:** This dataset provides hourly traffic volume for a city, including features indicating holidays and weather conditions. **Features:** `holiday`: US national and regional holidays `temp`: average temperature in Kelvin (K) `rain_1h`: rain that occurred in the hour (mm) `snow_1h`: snow that occurred in the hour (mm) `clouds_all`: percentage of cloud cover `weather_main`: textual description of current weather `weather_description`: longer textual description of current weather `date_time`: hour of the data collected in local time **Output:** `traffic_volume`: hourly I-94 reported westbound traffic volume
###Code
# Importing the dataset
data_url = 'https://aisgaiap.blob.core.windows.net/aiap5-assessment-data/traffic_data.csv'
dataset = pd.read_csv(data_url)
# Checking the first 10 lines for the dataset for intuition
dataset.head(10)
# Checking the details of the dataset for intuition
dataset.info()
# Checking the details of the dataset for intuition
dataset.describe()
###Output
_____no_output_____
###Markdown
From the snapshots of the dataset provided above, please refer to the table below for the summary of our observations. For each observation, we will analyze it in further detail when we conduct our univariate / bivariate analysis. | S/N | Findings | Actions to be taken || :-: | :-- | :-: ||| **Findings from head()** ||| 1 | weather_description seems to be extremely similar to weather_main (possible that weather_main might be redundant if weather_description provides greater detail) | bivariate analysis || 2 | in row 2, weather_description is "heavy snow" while snow_1h is 0 (possible incorrect data / data was previously pre-processed; there might be a similar issue for rain_1h) | univariate analysis || 3 | in row 0, holiday is "New Years Day", but in row 1, it is "None" (since we are predicting traffic volume, the period before and after a holiday is also important; we might need to create additional features to take into account the effect of the period before and after a holiday on traffic volume) | univariate analysis || 4 | date_time only has the date and hour (since we are predicting traffic volume, the day of the week is also an important feature, e.g. traffic can be higher for weekdays vs weekends; we might need to create additional features that will improve the model) | univariate analysis ||| **Findings from info()** ||| 5 | date_time type is "object" (might need to convert to "datetime") | univariate analysis || 6 | there are no null values (data might have been pre-processed; null data might have been replaced, e.g. with the mean, the median, -1, -999 etc.) | to check with data provider (a quick sentinel-value check is also sketched in the next code cell) ||| **Findings from describe()** ||| 7 | snow_1h has zeros for the entire dataset even when weather_description is "heavy snow" (as mentioned in the findings from head()): possible incorrect data, and it will not be useful for model prediction since the values are all zeros | univariate analysis and to check with data provider| **2) UNIVARIATE AND BIVARIATE ANALYSIS**For our dataset, we can categorise the features into 3 main categories for our analysis: **Numerical:** features that contain numeric values **Categorical:** features that contain categories or text **Time_Date:** features that contain time/date. For this section we will: **a) conduct relevant analysis based on the category** **b) conduct feature cleaning and engineering based on findings from part 1 and part 2a (if required)**
###Code
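# Quick sanity check related to finding 6 above: scan the numerical columns for common
# sentinel values sometimes used in place of missing data. The values -1 and -999 are
# assumptions taken from that finding, not documented placeholders for this dataset.
for sentinel in [-1, -999]:
    hits = (dataset.select_dtypes(include=[np.number]) == sentinel).sum().sum()
    print("values equal to", sentinel, ":", hits)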
# Defining a function for plotting distribution
def plot_distribution(dataset, columns, cols=5, rows=2, width=20 , height=10, hspace=0.4, wspace=0.1):
plt.style.use('seaborn-whitegrid')
fig = plt.figure(figsize=(width,height))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=wspace, hspace=hspace)
rows = math.ceil(float(dataset.shape[1]) / cols)
for i, column in enumerate(columns):
ax = fig.add_subplot(rows, cols, i + 1)
ax.set_title(column)
#if feature is categorical, to plot countplot
if dataset.dtypes[column] == np.object:
sn.countplot(y=dataset[column])
plt.xticks(rotation=25)
#if feature is numerical, to plot boxplot
else:
sn.boxplot(dataset[column])
plt.xticks(rotation=25)
###Output
_____no_output_____
###Markdown
**NUMERICAL FEATURES:** "temp", "rain_1h", "snow_1h", "clouds_all", "traffic_volume" **a) Analysis of numerical features**
###Code
# Plot distribution of all numerical features for analysis
num_features = ['temp', 'rain_1h', 'snow_1h', 'clouds_all', 'traffic_volume']
plot_distribution(dataset, num_features, cols=5, rows=2, width=20 , height=10, hspace=0.4, wspace=0.1)
###Output
_____no_output_____
###Markdown
From the boxplot, there are no issues highlighted for "temp", "clouds_all" and "traffic_volume". As for rain_1h and snow_1h, our observations are as follows: | S/N | Findings | Actions to be taken || :-: | :-- | :-: ||| **Findings for boxplot of rain_1h** ||| 1 | We can see that the majority of its datapoints are at 0, resulting in datapoints above 0 being classified as outliers. There does not seem to be an issue with this, as most datapoints are measured when there is no rain. In addition, the highest point for rain_1h is slightly below 60mm/hour; from our research online, 60mm/hour is attainable during violent rainfall. (Since the datapoints where rain_1h is non-zero are limited, we should not remove these outliers; instead, we will choose a decision tree model for our prediction as it is robust towards outliers) | choose a decision tree based machine learning model ||| **Findings for boxplot of snow_1h** ||| 2 | We can see that all datapoints are 0, which coincides with the findings above in part 1. | to remove and to check with data provider | **b) Feature cleaning and engineering** Based on our findings above, we will conduct the following cleaning/engineering processes for the features stated>**Cleaning:** snow_1h: the data is all zeroes, so we remove it since it will have no effect on the model **Feature:** snow_1h As mentioned above, we will proceed to remove snow_1h, as a column of all zeroes has no effect on our model
###Code
# Drop columns which are redundant and check that it is properly dropped
dataset = dataset.drop(["snow_1h"], axis = 1)
dataset.head()
###Output
_____no_output_____
###Markdown
**CATEGORICAL FEATURES:** "holiday", "weather_main", "weather_description" **a) Analysis of categorical features**
###Code
# Plot distribution of all categorical features
num_features = ["holiday", "weather_main", "weather_description"]
plot_distribution(dataset, num_features, cols=2, rows=2, width=15, height=25, hspace=0.3, wspace=0.4)
###Output
_____no_output_____
###Markdown
From the countplot, our observations are as follows: | S/N | Findings | Actions to be taken || :-: | :-- | :-: ||| **Findings for countplot of holiday** ||| 1 | We can see that the majority of its datapoints are "None". As mentioned in part 1, the periods before and after a holiday are important, as these are when people usually travel. | create additional features ||| **Findings for countplot of weather_main and weather_description** ||| 2 | We can see that weather_description seems to be more informative than weather_main, as it splits weather_main into small "subsets". As mentioned in part 1, weather_main seems redundant. | bivariate analysis | **b) Feature cleaning and engineering** Based on our findings above, we will conduct the following cleaning/engineering processes for the features stated>**Engineering:** holidays: create additional features for the time period before and after holidays weather_main: to remove weather_main **Feature:** holidays As mentioned above, we will create additional features for the time period before and after holidays, as they are useful features for our model. We will create 2 features, "24h_before_holiday" and "24h_after_holiday". Since people usually travel within 24 hours before and after holidays, our additional features will be capped at 24 hours. (e.g. under the new feature "24h_before_holiday", if a time period is within 24 hours before a holiday, we will assign it a value of True.)
###Code
# Before creating a function to create the additional features,
# we should first create a function to check if there are holidays that are back to back (e.g. within 24 hours apart from each other) as this might cause error
def backtoback_holidays (dataset, column, hours=24):
holiday_row = []
for index, row in dataset[column].to_frame().iterrows():
if row[column] != "None":
print (row[column], "is at line", index)
holiday_row.append(index)
print("\n", holiday_row)
for i in range(len(holiday_row)-1):
if holiday_row[i] + hours >= holiday_row[i+1]:
print ("Error identified at line: ", holiday_row[i], 'and at line:', holiday_row[i+1])
backtoback_holidays(dataset, "holiday")
# From the above, we can see that no error is identified.
# Next, we will create the additional features "hours_before_holiday" and "hours_after_holiday", and remove "holiday"
def add_features_holiday (dataset, column, hours=24):
# Create blank list for hours_before and hours_after, if the row is within 24 hours from a holiday, we will append the row number to it
hours_before = []
hours_after = []
# Create blank list for hours_holiday, if the row is the holiday itself, we will append the row number to it
hours_holiday = []
# Create numpy arrays of False, if row number is within 24 hours from a holiday, we will change it to True
before_holiday = np.zeros(len(dataset[column])).astype(dtype=bool)
after_holiday = np.zeros(len(dataset[column])).astype(dtype=bool)
for index, row in dataset[column].to_frame().iterrows():
# If there is a holiday, append the relevant number to hours_holiday
if row[column] != "None":
hours_holiday.append(index)
    # Append the relevant row numbers to hours_before and hours_after
    for i in hours_holiday:
        for hour in range(0, hours+1):
            hours_before.append(i - hour)
            hours_after.append(i + hour)
    # Remove the row numbers that are out of range (valid indices run from 0 to len - 1)
    hours_before = np.asarray(hours_before)
    hours_before = hours_before[(hours_before >= 0) & (hours_before < len(dataset[column]))]
    hours_after = np.asarray(hours_after)
    hours_after = hours_after[(hours_after >= 0) & (hours_after < len(dataset[column]))]
# Change numpy array to true, if the respective row number within 24 hours from a holiday
before_holiday[hours_before.tolist()] = True
after_holiday[hours_after.tolist()] = True
# Convert hours_before_holiday and hours_after_holiday to dataframe and merge to original dataset
before_holiday = pd.DataFrame(before_holiday, columns=['before_holiday'])
after_holiday = pd.DataFrame(after_holiday, columns=['after_holiday'])
dataset = pd.concat([dataset, before_holiday], axis=1, sort=False)
dataset = pd.concat([dataset, after_holiday], axis=1, sort=False)
# Drop column as relevant features were already extracted and feature takes into account column
dataset = dataset.drop([column], axis = 1)
return dataset
dataset = add_features_holiday(dataset, "holiday")
dataset.head()
###Output
_____no_output_____
###Markdown
**Feature:** weather_main As mentioned above, weather_main seems to be redundant, as weather_description appears to be a 'subset' of weather_main and is hence more informative. However, we first have to verify whether weather_description really is a 'subset' of weather_main. This will ensure that we do not remove important data. (e.g. if the weather_main data is "Clouds", weather_description should be cloud-related, such as "overcast clouds".)
###Code
# We will create plot a scatterplot to check for the above
# First, we extract the relevant data from our dataset (i.e. weather_main and weather_description)
weather_data = dataset.iloc[:, 3:5].values
weather_data = pd.DataFrame(weather_data)
# Next, we plot weather_data on a scatterplot
plt.figure(figsize=(5,5))
for i, weather in enumerate(np.unique(weather_data[0].to_numpy())):
plt.scatter(y=weather_data[1][weather_data[0]==weather],x=weather_data[0][weather_data[0]==weather])
plt.xticks(rotation=90)
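# Complementary programmatic check (a small sketch): if weather_description really is a
# 'subset' of weather_main, every description should map to exactly one main category.
desc_to_main = dataset.groupby('weather_description')['weather_main'].nunique()
print("descriptions mapping to more than one weather_main:", (desc_to_main > 1).sum())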
###Output
_____no_output_____
###Markdown
From the scatterplot, we are certain that weather_description is a 'subset' of weather_main. Therefore, we will proceed to remove weather_main.
###Code
# Drop columns which are redundant and check that it is properly dropped
dataset = dataset.drop(["weather_main"], axis = 1)
dataset.head()
###Output
_____no_output_____
###Markdown
**DATE_TIME FEATURES:** "date_time" **a) Analysis of date_time features** We won't be plotting a graph for date_time, as bivariate analysis suits date_time features better. Instead, with what we discovered in part 1, we will proceed to feature cleaning and engineering. **b) Feature cleaning and engineering** Based on our findings above, we will conduct the following cleaning/engineering processes for the features stated>**Cleaning:** date_time: incorrect datatype, to convert from object to date_time datatype>**Engineering:** date_time: create additional features for the days of the week **Feature:** date_time As mentioned above, we will first convert the datatype to date_time and then create additional features for the days of the week. Next, we will create 4 new features, "year", "month", "day_of_the_week" and "time_period", to replace date_time, as I believe these features separately will be more informative in predicting traffic volume than a single date_time feature.
###Code
# We will first convert date_time to date_time datatype
dataset['date_time'] = pd.to_datetime(dataset['date_time'], format="%Y-%m-%d %H:%M:%S")
dataset.head()
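# Side note (illustration only, assuming pandas >= 0.23 for dt.day_name()): the same calendar
# features can also be derived in a vectorized way with the .dt accessor. They are computed
# into a throwaway frame here so that `dataset` itself is left unchanged.
_dt_demo = pd.DataFrame({
    'year': dataset['date_time'].dt.year,
    'month': dataset['date_time'].dt.month,
    'day_of_the_week': dataset['date_time'].dt.day_name(),
    'time_period': dataset['date_time'].dt.hour,
})
_dt_demo.head()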
# Next, we will create the additional features "year", "month", "day_of_the_week" and "time_period", and remove "date_time"
def add_features_datetime_YMD (dataset, column="date_time", feature_name=["year", "month", "day", "time"]):
# Create numpy arrays of zeros/empty string, we will replace the values subsequently
dt_year = np.ones(len(dataset[column]))
dt_month = np.ones(len(dataset[column]))
dt_day = []
dt_time = np.ones(len(dataset[column]))
# Extract the relevant feature from column and update the features to dataset
for feature in feature_name:
if feature == "year":
for index, row in dataset[column].to_frame().iterrows():
dt_year[index] = row[column].year
dt_year = pd.DataFrame(data=dt_year, columns=['year'], dtype=np.int64)
dataset = pd.concat([dataset, dt_year], axis=1, sort=False)
elif feature == "month":
for index, row in dataset[column].to_frame().iterrows():
dt_month[index] = row[column].month
dt_month = pd.DataFrame(data=dt_month, columns=['month'], dtype=np.int64)
dataset = pd.concat([dataset, dt_month], axis=1, sort=False)
elif feature == "day":
for index, row in dataset[column].to_frame().iterrows():
dt_day.append(row[column].strftime('%A'))
dt_day = pd.DataFrame(data=dt_day, columns=['day_of_the_week'], dtype=str)
dataset = pd.concat([dataset, dt_day], axis=1, sort=False)
elif feature == "time":
for index, row in dataset[column].to_frame().iterrows():
dt_time[index] = row[column].hour
dt_time = pd.DataFrame(data=dt_time, columns=['time_period'], dtype=np.int64)
dataset = pd.concat([dataset, dt_time], axis=1, sort=False)
# Drop column as relevant features were already extracted
dataset = dataset.drop([column], axis = 1)
return dataset
dataset = add_features_datetime_YMD (dataset, column="date_time", feature_name=["year", "month", "day", "time"])
dataset.head()
# Next, we will carry out binning for the time_period,
# We will classify time period into bins of Morning, Afternoon, Evening and Night. For each bin, the traffic is expected to be different
dataset["time_period"] = pd.cut(dataset["time_period"],
bins=[0,6,12,18,23],
labels=['Night','Morning','Afternoon','Evening'],
include_lowest=True)
dataset.head()
###Output
_____no_output_____
###Markdown
Visualization
###Code
import numpy as np
import matplotlib.pyplot as plt
data1 = np.load('./training_info/cleveland.npy', allow_pickle=True)
data2 = np.load('./training_info/dermatology.npy', allow_pickle=True)
data3 = np.load('./training_info/glass.npy', allow_pickle=True)
data4 = np.load('./training_info/sonar.npy', allow_pickle=True)
seq1 = [x['CA'] for x in data1]
seq2 = [x['CA'] for x in data2]
seq3 = [x['CA'] for x in data3]
seq4 = [x['CA'] for x in data4]
plt.plot(seq1[:20])
plt.plot(seq2[:20])
plt.plot(seq3[:20])
plt.plot(seq4[:20])
plt.title('SVM 10-fold')
plt.xlabel('epoch')
plt.ylabel('CA')
plt.legend(['cleveland', 'dermatology', 'glass', 'sonar'], loc=4)
###Output
_____no_output_____
###Markdown
HAR Data Loading and Exploration Import Dependencies
###Code
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
###Output
_____no_output_____
###Markdown
Load Data
###Code
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
train.head()
train.shape
###Output
_____no_output_____
###Markdown
We have **no** missing data in any of the 563 columns.
###Code
train.isnull().sum().sum()
train['Activity'].value_counts()
plt.figure(figsize=(8, 6))
plt.title('Acitvity counts in Training Data')
sns.countplot(data=train, x='Activity', palette='gray')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Our data are not extremely unbalanced. Exploratory Data Analysis
###Code
train.describe()
###Output
_____no_output_____
###Markdown
- We observe all values are between -1 and 1. - The means and modes seem to be equal for most variables, suggesting that most features are normally distributed. One `mean` Feature Distribution
###Code
sns.distplot(train['tBodyAcc-mean()-X'])
plt.title('tBodyAcc-mean()-X')
plt.show()
###Output
_____no_output_____
###Markdown
One `std` Feature Distribution
###Code
sns.distplot(train['tBodyAcc-std()-X'])
plt.title('tBodyAcc-std()-X')
plt.show()
###Output
_____no_output_____
###Markdown
One `mad` (Median Absolute Deviation) Feature Distribution
###Code
sns.distplot(train['tBodyAcc-mad()-X'])
plt.title('tBodyAcc-mad()-X')
plt.show()
###Output
_____no_output_____
###Markdown
The pattern is really similar to that of the standard deviation. One `max` Feature Distribution
###Code
sns.distplot(train['tBodyAcc-max()-X'])
plt.title('tBodyAcc-max()-X')
plt.show()
###Output
_____no_output_____
###Markdown
The shape is similar to the ones above. However, some values tend to be more positive, which makes sense. Gyroscope Feature Distribution
###Code
sns.distplot(train['angle(tBodyGyroJerkMean,gravityMean)'])
plt.title('angle(tBodyGyroJerkMean,gravityMean)')
plt.show()
sns.distplot(train['angle(X,gravityMean)'])
plt.title('angle(X,gravityMean)')
plt.show()
###Output
_____no_output_____
###Markdown
This feature has a bimodal distribution centered at slightly below zero. One Entropy Feature Distribution
###Code
sns.distplot(train['fBodyAcc-entropy()-Z'])
plt.title('fBodyAcc-entropy()-Z')
plt.show()
facet = sns.FacetGrid(train, hue='Activity', height=8, aspect=2, palette='nipy_spectral')
facet.map(sns.distplot, 'tBodyAccMag-mean()', hist=False).add_legend()
plt.title('Mean Acceleration by Acitvity', fontsize=18)
plt.annotate('Stationary', xy=(-0.98, 8.37), xytext=(-0.75, 7), arrowprops={'arrowstyle': '-', 'ls': 'dashed'})
plt.annotate('Stationary', xy=(-0.985, 6.13), xytext=(-0.75, 7), arrowprops={'arrowstyle': '-', 'ls': 'dashed'})
plt.annotate('Stationary', xy=(-0.985, 5.25), xytext=(-0.75, 7), arrowprops={'arrowstyle': '-', 'ls': 'dashed'})
plt.annotate('Dynamic', xy=(-0.225, 4.17), xytext=(-0.05, 5), arrowprops={'arrowstyle': '-', 'ls': 'dashed'})
plt.annotate('Dynamic', xy=(-0.14, 3.65), xytext=(-0.05, 5), arrowprops={'arrowstyle': '-', 'ls': 'dashed'})
plt.annotate('Dynamic', xy=(0.075, 2.28), xytext=(-0.05, 5), arrowprops={'arrowstyle': '-', 'ls': 'dashed'})
plt.show()
sns.pairplot(train[['tBodyAcc-mean()-Z', 'tBodyAcc-energy()-X', 'angle(X,gravityMean)', 'subject',]], plot_kws={'alpha': 0.6, 'edgecolor': 'k'}, size=4)
###Output
_____no_output_____
###Markdown
Age distribution by gender
###Code
df_man = df.copy()
df_man = df_man[df_man["man"] == 1]
df_woman = df.copy()
df_woman = df_woman[df_woman["woman"] == 1]
plt.figure(figsize=(10, 7))
sns.distplot(df_man["age"], bins=11, label="man")
sns.distplot(df_woman["age"], bins=11, label="woman")
plt.legend(fontsize=10)
plt.savefig("./age_ke.png", dpi=300, bbox_inches="tight")
plt.show()
sizes = [len(x) for x in [df_man,df_woman]]
explode = (0,0.03)
labels = ["man","woman"]
plt.pie(sizes,explode=explode,labels=labels,autopct='%1.1f%%',shadow=False,startangle=150)
plt.show()
df_deal2_1 = df.copy()
df_deal2_1 = df_deal2_1[df_deal2_1["deal_2_1"] ==1]
df_deal2_2 = df.copy()
df_deal2_2 = df_deal2_2[df_deal2_2["deal_2_2"] ==1]
sizes = [len(x) for x in [df_deal2_1,df_deal2_2]]
explode = (0,0.03)
labels = ["negative","positive"]
plt.pie(sizes,explode=explode,labels=labels,autopct='%1.1f%%',shadow=False,startangle=150)
plt.savefig("./ne_pos.png",dpi=300,bbox_inches="tight" )
plt.show()
sizes = [len(x) for x in [df_man,df_woman]]
explode = (0,0.03)
labels = ["man","woman"]
plt.pie(sizes,explode=explode,labels=labels,autopct='%1.1f%%',shadow=False,startangle=150)
plt.show()
df
import pandas_profiling as pp  # assumed alias; pp.ProfileReport is used below but was not imported earlier
report = pp.ProfileReport(df)
report
report.to_file('1216.html')
#df = pd.read_excel("./AS&IBD.xls")
df["性别"]
df_xb = pd.get_dummies(df["性别"])
df[["w","m"]] = df_xb
df
###Output
_____no_output_____
###Markdown
EDA
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import datetime, date, time, timedelta
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
# importing over- and under-sampling algorithms from imblearn (you will have to manually install it in your environment with pip install imblearn)
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.metrics import confusion_matrix
import itertools
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
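# The cells below call print_evaluations() and plot_confusion_matrix(), which are not defined
# in this notebook and appear to be project helpers. Minimal stand-ins are sketched here,
# kept consistent with how they are called below; the exact original implementations may differ.
def print_evaluations(y_true, y_pred):
    print("accuracy: ", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall:   ", recall_score(y_true, y_pred))
    print("f1:       ", f1_score(y_true, y_pred))

def plot_confusion_matrix(cm, classes, title='Confusion Matrix', cmap=None):
    cmap = cmap or plt.cm.Blues
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], 'd'),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()
    plt.show()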
data = pd.read_csv('./data/training.csv')
data.info()
data.describe()
#sns.pairplot(data)
# plotting the correlation matrix on the given data to see how each column correlates to another
fraud_data = data.drop(['TransactionId', 'BatchId', 'AccountId', 'SubscriptionId', 'CustomerId', 'CurrencyCode', 'CountryCode'], axis = 1)
plt.figure(figsize=(10, 10))
matrix = np.triu(fraud_data.corr())
sns.heatmap(fraud_data.corr(), annot=True,
linewidth=.8, mask=matrix, cmap="viridis");
# Convert time to datatime format
data['TransactionStartTime'] = pd.to_datetime(data['TransactionStartTime'], format='%Y-%m-%dT%H:%M:%SZ')
data['Hour'] = data['TransactionStartTime'].dt.hour
#Creating a new variable
data.loc[data['Amount'] >= 0, 'DirectionOfMoney'] = 0
data.loc[data['Amount'] < 0, 'DirectionOfMoney'] = 1
#Creating the final dataset
cat_var = ['PricingStrategy', 'ProviderId', 'ProductId', 'ChannelId', 'ProductCategory', 'Hour', 'DirectionOfMoney']
con_variables = ['Value']
features_cat = pd.get_dummies(data[cat_var])
features_cat
df = data[con_variables].merge(features_cat, left_index=True, right_index=True, how='inner')
df['Value'] = df.Value **2
#defining X and y
X = df
y = data['FraudResult']
# univariate distributions
for c in data[['ProviderId', 'ProductId','ProductCategory', 'ChannelId','PricingStrategy', 'FraudResult', 'Hour','DirectionOfMoney']].columns:
plt.figure()
sns.countplot(data[c])
plt.xticks(rotation=90)
plt.yscale('log')
# bivartiate distrobution
for c in data[['ProviderId', 'ProductId','ProductCategory', 'ChannelId','PricingStrategy', 'Hour','DirectionOfMoney']].columns:
plt.figure()
#g = sns.FacetGrid(data = data, col = 'FraudResult')
sns.countplot(x=c, hue='FraudResult', data = data)
#g.map(sns.countplot, x = c)
plt.xticks(rotation=90)
plt.yscale('log')
#g = sns.FacetGrid(data[['ProviderId', 'ProductId','ProductCategory', 'ChannelId','PricingStrategy', 'FraudResult', 'Hour','DirectionOfMoney', 'FraudResult']],
#'FraudResult')
#g.map(sns.catplot,
# creating new column for the log of value (to dampen outliers); log1p is assumed here since Value can contain zeros
df['Valuelog'] = np.log1p(df['Value'])
plt.hist(df['Valuelog'], bins=25)
plt.yscale('log')
min(data.loc[data['FraudResult'] == 1,'Value'])
data['Value2'] = data.Value ** 2
import matplotlib.pyplot as plt
plt.hist(data.loc[data['FraudResult'] == 0,'Value2'], bins=100)
plt.yscale('log')
plt.hist(data.loc[data['FraudResult'] == 1,'Value2'], bins=100, alpha = 0.5)
plt.yscale('log')
plt.show()
plt.hist(data.loc[data['FraudResult'] == 0,'Value'], bins=100)
plt.yscale('log')
plt.hist(data.loc[data['FraudResult'] == 1,'Value'], bins=100, alpha = 0.5)
plt.yscale('log')
plt.show()
df.columns
# initialising first very simple basline model, every transaction used for financial services is predicte to be fradulent
#used the great method kat showed us
df.loc[df['ProductCategory_financial_services'] == 1, 'Prediction'] = 1
df.loc[df['ProductCategory_financial_services'] != 1, 'Prediction'] = 0
predictions = df.Prediction
df = df.drop('Prediction', axis=1)
condition1 = df['ProductCategory_financial_services'] == 1
condition2 = df['DirectionOfMoney'] == 0
condition3 = df['ChannelId_ChannelId_3'] == 1
condition4 = (df['PricingStrategy'] == 0) | (df['PricingStrategy'] == 2)
condition5 = df["ProductId_ProductId_15"] == 1
condition6 = (df['ProviderId_ProviderId_1'] == 1) | (df['ProviderId_ProviderId_3'] == 1)| (df['ProviderId_ProviderId_5'] == 1)
condition7 = df['Value'] >= 200000
predictions = condition1 & condition2 & condition3 & condition4 & condition5 & condition6 & condition7
#printing scores for baseline
print_evaluations(y, predictions)
#defining X and y
X = df
y = data['FraudResult']
X.to_csv("data/X.csv")
y.to_csv("data/y.csv")
y.head()
#splitting data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
#used smote algorithm (synthetic oversampling) to oversample fradulent class
# dataframes of synthetic datapoints: smote_data_X, smote_data_Y
smote_algo = SMOTE(random_state=50)
smote_data_X, smote_data_Y = smote_algo.fit_resample(X_train, y_train)
smote_data_X = pd.DataFrame(data=smote_data_X, columns=X_train.columns)
smote_data_Y = pd.DataFrame(data=smote_data_Y, columns=['FraudResult'])
sum(smote_data_Y.FraudResult)/len(smote_data_Y)
# random forest on oversampled, synthecized (smote) data
from sklearn.ensemble import RandomForestClassifier
model_rf = RandomForestClassifier(n_estimators=100,
random_state=50,
max_features = 'sqrt',
n_jobs=-1, verbose = 1)
model_rf.fit(smote_data_X, smote_data_Y)
# Training predictions (to demonstrate overfitting)
train_rf_predictions = model_rf.predict(smote_data_X)
train_rf_probs = model_rf.predict_proba(smote_data_X)[:, 1]
# Testing predictions (to determine performance)
test_rf_predictions = model_rf.predict(X_test)
test_rf_probs = model_rf.predict_proba(X_test)[:, 1]
# Confusion matrix
cm = confusion_matrix(y_test, test_rf_predictions)
plot_confusion_matrix(cm, classes = ['Fraud', 'No Fraud'],
title = 'Fraud Confusion Matrix')
print_evaluations(y_test, test_rf_predictions)
cm = confusion_matrix(smote_data_Y, train_rf_predictions)
plot_confusion_matrix(cm, classes = ['Fraud', 'No Fraud'],
title = 'Fraud Confusion Matrix')
print_evaluations(smote_data_Y, train_rf_predictions)
model_adaboost = AdaBoostClassifier(random_state = 50)
model_adaboost.fit(smote_data_X, smote_data_Y)
# Training predictions (to demonstrate overfitting)
train_adaboost_predictions = model_adaboost.predict(smote_data_X)
train_adaboost_probs = model_adaboost.predict_proba(smote_data_X)[:, 1]
# Testing predictions (to determine performance)
test_adaboost_predictions = model_adaboost.predict(X_test)
test_adaboost_probs = model_adaboost.predict_proba(X_test)[:, 1]
# Confusion matrix
cm = confusion_matrix(y_test, test_adaboost_predictions)
plot_confusion_matrix(cm, classes = ['Fraud', 'No Fraud'],
title = 'Fraud Confusion Matrix')
print_evaluations(y_test, test_adaboost_predictions)
# used randomundersampler algorithm to undersample non fradulent class
# dataframes for undersampled data: X_res, y_res
rus = RandomUnderSampler(random_state=50)
X_res, y_res = rus.fit_resample(X_train, y_train)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
from pathlib import Path
import json
import matplotlib.pyplot as plt
import seaborn as sns
# from geopy.distance import distance
PATH = Path('data')
# will be used later to remove outliers
def remove_outlier(df_in, col_name):
q1 = df_in[col_name].quantile(0.25)
q3 = df_in[col_name].quantile(0.75)
iqr = q3-q1 #Interquartile range
fence_low = q1-1.5*iqr
fence_high = q3+1.5*iqr
df_out = df_in.loc[(df_in[col_name] > fence_low) & (df_in[col_name] < fence_high)]
return df_out
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
df = pd.read_feather(PATH/'houston_processed.feather')
df.shape
df.head(3).T
###Output
_____no_output_____
###Markdown
Generate date related features
###Code
def generate_date_features(df):
#convert timestampms (UTC time) to datetime US central (Texas time)
df['date_time'] = pd.to_datetime(df.timestampMs,unit='ms').dt.tz_localize('utc').dt.tz_convert('US/Central')
df['year']=df.date_time.dt.year
df['month']=df.date_time.dt.month
df['day']=df.date_time.dt.day
df['day_of_week']=df.date_time.dt.dayofweek
df['hour']=df.date_time.dt.hour
df['minute'] = df.date_time.dt.minute
df=df.drop('timestampMs',axis=1)
return df
df = generate_date_features(df)
# fix lat - long
df['latitude'] = df.latitudeE7 / 1e7
df['longitude'] = df.longitudeE7 / 1e7
df.drop(['latitudeE7','longitudeE7'],axis=1,inplace=True)
# df.to_feather(PATH/'houston_processed.feather')
df.dtypes
df.groupby('year').size()
###Output
_____no_output_____
###Markdown
There are a few records from 2013, when I activated Google location history on my iPhone; however, 2014 records are missing
###Code
# remove year 2013 due to lack of records and to maintain the continuity of the dataset
idx_drop = df[df.year==2013].index
df.drop(idx_drop,inplace=True)
df.reset_index(drop=True,inplace=True)
df.groupby('year').size()
###Output
_____no_output_____
###Markdown
Distance differences (in miles) between 2 neighbor GPS points
###Code
def calculate_distance(lat1,long1,lat2,long2):
# geopy default distance calculation is geodesic distance
return float("{0:.2f}".format(distance((lat1,long1),(lat2,long2)).miles))
lat2 = df.latitude.values.tolist()
long2 = df.longitude.values.tolist()
lat1 = df.latitude.shift().values.tolist()
lat1[0] = lat2[0]
long1 = df.longitude.shift().values.tolist()
long1[0] = long2[0]
from concurrent.futures import ProcessPoolExecutor
def multiprocessing(func, args, workers):
with ProcessPoolExecutor(max_workers=workers) as executor:
res = executor.map(func, *args)
return (list(res))
args = [lat1,long1,lat2,long2]
%%time
mile_diff = multiprocessing(calculate_distance,args,4)
df['mile_diff'] = mile_diff
# df.to_feather(PATH/'houston_processed_miles_time_diff.feather')
###Output
_____no_output_____
###Markdown
Distance difference calculation can be faster if we manually code the distance formula
###Code
# faster way to calculate miles diff: manually calculate haversine distance, slightly difference from geodesic distance from geopy
def haversine_array(lat1, lng1, lat2, lng2):
lat1, lng1, lat2, lng2 = map(np.radians, (lat1, lng1, lat2, lng2))
AVG_EARTH_RADIUS = 6371 # in km
lat = lat2 - lat1
lng = lng2 - lng1
d = np.sin(lat * 0.5) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(lng * 0.5) ** 2
h = 2 * AVG_EARTH_RADIUS * np.arcsin(np.sqrt(d))
return h
lat2 = df.latitude.values.tolist()
long2 = df.longitude.values.tolist()
lat1 = df.latitude.shift().values.tolist()
lat1[0] = lat2[0]
long1 = df.longitude.shift().values.tolist()
long1[0] = long2[0]
km_diff = haversine_array(lat1,long1,lat2,long2)
df['mile_diff'] = km_diff * 0.621371 # to miles
df.mile_diff.describe()
def remove_outlier(df_in, col_name):
q1 = df_in[col_name].quantile(0.25)
q3 = df_in[col_name].quantile(0.75)
iqr = q3-q1 #Interquartile range
fence_low = q1-1.5*iqr
fence_high = q3+1.5*iqr
df_out = df_in.loc[(df_in[col_name] > fence_low) & (df_in[col_name] < fence_high)]
return df_out
plt.boxplot(df.mile_diff,vert=False);
# remove outliers
df_no_outl = remove_outlier(df,'mile_diff')
# % of outliers for miles diff
(len(df) - len(df_no_outl)) / len(df)
df_no_outl.mile_diff.plot(kind='hist',bins=100,figsize=(15,5));
# % of GPS points that are < 1 miles difference
len(df[df.mile_diff<1.0]) / len(df)
###Output
_____no_output_____
###Markdown
99% of differences are less than 1 mile. I would say my Android phone is consistent in recording GPS locations (no 2 points are too far from each other). Verify some significant distance differences (> 10 miles)
###Code
df[df.mile_diff>10].groupby(['year','month','day']).mile_diff.mean()
###Output
_____no_output_____
###Markdown
All of these are out-of-state plane travel or out-of-city travel. This info could be helpful to identify far-travel GPS points in my dataset. There are high differences between GPS points:- GPS glitch?- On plane: when on a plane (airplane mode, no GPS recorded), every 100-200 seconds the phone will use the last available recorded GPS location as the current location. When airplane mode is off, it will record a new GPS location, which results in a huge difference. Time differences (in seconds) between 2 GPS points
###Code
date_shift = df.date_time.shift()
date_shift.loc[0] = df.date_time.loc[0]
df['sec_diff']=(df.date_time - date_shift).astype('timedelta64[s]')
df.sec_diff.describe()
plt.boxplot(df.sec_diff,vert=False);
###Output
_____no_output_____
###Markdown
Remove outliers
###Code
df_no_outl = remove_outlier(df,'sec_diff')
# % of outliers for sec diff
(len(df) - len(df_no_outl)) / len(df)
df_no_outl.sec_diff.plot(kind='hist',bins=150,figsize=(15,5))
df_no_outl[(df_no_outl.sec_diff>=50) & (df_no_outl.sec_diff<=70)].sec_diff.plot(kind='hist',bins=50,figsize=(15,5))
###Output
_____no_output_____
###Markdown
It's safe to say that on average, Google timeline records my GPS points after every ~60 seconds and sometimes it records GPS points rapidly (less than 10 seconds)
###Code
fig,axes= plt.subplots(nrows=2,figsize=(25,10),sharey=True)
norm_day = df[(df.year==2016) & (df.month==10) & (df.day==10) & (df.sec_diff<1000)]
norm_day.set_index('date_time').sec_diff.plot(ax=axes[0]);
norm_day = df[(df.year==2016) & (df.month==9) & (df.day==7) & (df.sec_diff<1000)]
norm_day.set_index('date_time').sec_diff.plot(ax=axes[1]);
###Output
_____no_output_____
###Markdown
Here are the time difference distributions for 2 typical school days (my common routine). We can see two patterns:- Time difference is bigger (maximum is 200 to 400 seconds) during sleep time, when there is little movement (before 9 am)- Time difference is smaller and more condensed during school time, when there is a lot of movement between 9 am - 6 pm (commute, walking between classes), and a mix of big and small during night time Calculate speed and identify abnormal speed With both time differences and distance differences, we can easily calculate the speed between 2 GPS points. From here we can identify odd GPS points whose speed is above a threshold
###Code
# df = pd.read_feather(PATH/'houston_processed_miles_time_diff.feather')
# df.shape
# max speed (mph) used to flag abnormal GPS points
max_mph = 80
temp = df[df.sec_diff != 0]  # exclude 0 time diff to avoid inf speed
speed = (temp.mile_diff / temp.sec_diff) * 3600  # miles per second to mph
speed.describe()
df_abnormal = temp[speed >= max_mph]
df_abnormal.shape
###Output
_____no_output_____
###Markdown
Abnormal speed can be a result of a GPS glitch or plane travel. We will keep the small glitches and the plane-travel points, and drop the rest
###Code
# keeping small glitches (<1 miles) or plane travel (probably > 15 miles)
abnormal_idx = df_abnormal[(df_abnormal.mile_diff>1) & (df_abnormal.mile_diff<15 )].index
abnormal_idx.shape
df.drop(abnormal_idx,inplace=True)
# at this point, mile diff and sec diff have to be recalculated. Remove them for now
df.drop(['mile_diff','sec_diff'],axis=1,inplace=True)
df.reset_index(drop=True,inplace=True)
df.shape
df.to_feather(PATH/'houston_ready.feather')
###Output
_____no_output_____
###Markdown
Quick GPS scatter plots
###Code
df = pd.read_feather(PATH/'houston_ready.feather')
fig,ax = plt.subplots(figsize=(20,10))
ax.scatter(df.longitude,df.latitude,color='blue',s=1,alpha=0.6)
ax.set_ylabel('latitude')
ax.set_xlabel('longitude')
# analyzing Houston area
df_houston = df[(df.longitude <=-95) & (df.longitude >=-95.7)& (df.latitude >= 29.5) & (df.latitude <= 30.25)]
fig,ax = plt.subplots(figsize=(20,10))
ax.scatter(df_houston.longitude,df_houston.latitude,color='blue',s=1,alpha=0.4)
ax.set_ylabel('latitude')
ax.set_xlabel('longitude')
###Output
_____no_output_____
###Markdown
Looks pretty good. The Houston road network is recognizable, and you can also see moving paths in both graphs (car and plane), as well as some dense paths and dense areas in the Houston graph. We will look into it more in the clustering notebook. Single feature EDA: dive deep into each feature of this dataset
###Code
df = pd.read_feather(PATH/'houston_ready.feather')
df.shape
# % of missing values for each features
(df.isnull().sum() / len(df)) * 100
###Output
_____no_output_____
###Markdown
Altitude
###Code
df.altitude.describe()
df[df.altitude >= 2000].groupby(['year','month','day']).altitude.mean()
###Output
_____no_output_____
###Markdown
Altitude can glitch as well. After checking with Google timeline site, some locations with high altitude are actually near home
###Code
df[df.altitude >= 5000].groupby(['year','month','day']).altitude.mean()
# about 5000 feet it seems to get all the airplane GPS point
# def get_exact(df,year,month,day):
# return df[(df.year == year) & (df.month == month) & (df.day==day)]
# get_exact(df[df.altitude < -400],2016,11,20)
lowest=-400
df[df.altitude < lowest].groupby(['year','month','day']).altitude.mean()
df[df.altitude < lowest].shape
###Output
_____no_output_____
###Markdown
After checking with the Google Timeline site, the majority of these 'low' altitudes are glitches; most of them are at home. We will remove these low-altitude points. We still keep high altitudes, as they can help to identify flight paths
###Code
# remove low altitude. Keep high altitude as it can be plane travel
df[(df.altitude >=-400) | (df.altitude.isnull())].shape
df.shape
df = df[(df.altitude >=-400) | (df.altitude.isnull())]
df.shape
# view ground (normal) altitude
df[df.altitude < 5000].altitude.plot(kind='hist',bins=100,figsize=(15,5))
###Output
_____no_output_____
###Markdown
Deal with missing values: half of the dataset is missing altitude values. Altitude does not change abruptly, so we will use pandas forward fill to deal with the missing values
###Code
# Altitude cannot be changed easily, so use forward fill
alt = df.altitude
alt_fillna = df.altitude.fillna(method='ffill')
fig,ax=plt.subplots(nrows=1,ncols=2,figsize=(15,15),sharex=True,sharey=True)
ax[0].plot(alt,df.date_time,'.',alpha=0.05,label='no null')
ax[1].plot(alt_fillna,df.date_time,'.',alpha=0.05,label='null filled')
ax[0].legend(loc=0)
ax[1].legend(loc=0)
###Output
_____no_output_____
###Markdown
In the left graph (null data isn't plotted), there aren't many gaps in altitude even though altitude contains 50% missing values, and it seems to stay stable (there is no big altitude jump resulting in a zig-zag pattern). Forward filling is not a bad first choice
###Code
df.altitude.fillna(method='ffill',inplace=True)
df.altitude.fillna(0,inplace=True) # use 0 for first few NaN altitudes
###Output
_____no_output_____
###Markdown
Heading
###Code
df.heading.describe()
# 0-360 degree?
df.heading.plot(kind='hist',bins=360,figsize=(15,5))
fig,axes= plt.subplots(nrows=2,figsize=(20,5),sharey=True)
# typical routine
norm_day = df[(df.year==2016) & (df.month==10) & (df.day==10) & (df.hour >= 19) & (df.hour <21)]
norm_day.set_index('date_time').heading.plot(ax=axes[0]);
# same routine, a date later
norm_day = df[(df.year==2016) & (df.month==10) & (df.day==11) & (df.hour >= 19) & (df.hour <21)]
norm_day.set_index('date_time').heading.plot(ax=axes[1]);
###Output
_____no_output_____
###Markdown
Unfortunately, they did not share the same pattern. Let's see if heading is related to activity type in any way
###Code
df_temp = df[~df.heading.isnull()]
df_temp.act_type1.value_counts() / len(df_temp)
###Output
_____no_output_____
###Markdown
A record with a heading has a higher chance of having type 'IN_VEHICLE' or 'TILTING'
###Code
df_temp[df_temp.act_type1=='IN_VEHICLE'].act_conf1.plot(kind='hist',bins=100,figsize=(15,5))
len(df_temp[(df_temp.act_type1=='IN_VEHICLE') & (df_temp.act_conf1 >=80)]) / len(df_temp[df_temp.act_type1=='IN_VEHICLE'])
###Output
_____no_output_____
###Markdown
For records with a heading, only ~30% of them have IN_VEHICLE confidence >= 80, meaning that having a heading does not always mean the user is in a vehicle. Heading would be a bad feature to consider because:- 82% missing values- Not stable (different patterns for 2 similar records)- Not sure how heading is generated. My best bet: heading is related to vehicle heading, but only 30% of records with a heading have high IN_VEHICLE confidence. We can extract these heading records to study them later. Velocity 98% missing values
###Code
df.velocity.isnull().sum() / len(df)
df.velocity.describe()
df.velocity.plot(kind='hist',bins=35,figsize=(15,5))
plt.boxplot(df[~df.velocity.isnull()].velocity,1,'',vert=False);
# no outlier plots
###Output
_____no_output_____
###Markdown
This is another bad feature. For a 'velocity' feature, the majority of values are between 0 and 1. Not sure if this is mph or kph; either way, it would be too low. Activity type and activity confidence Based on the data cleaning, the first activity type (act_type1) has the highest confidence.
###Code
df.act_type1.isnull().sum() / len(df)
###Output
_____no_output_____
###Markdown
There are 46% missing values
###Code
fig, ax = plt.subplots(figsize=(15,5))
sns.countplot(ax=ax,x=df.act_type1,data=df)
fig,ax = plt.subplots(figsize=(20,5))
sns.boxplot(ax=ax,x='act_type1',y='act_conf1',data=df)
###Output
_____no_output_____
###Markdown
'TILTING' is the most confident activity Google comes up with. However, after checking with Google Timeline, TILTING can relate to the home GPS location (still), the school location and even a driving path. Let's check whether 'IN_VEHICLE' is an accurate label by checking how many high-confidence IN_VEHICLE data points are in the 'home' square area, between (29.6890,-95.2719) and (29.6899, -95.2708). A point considered 'high confidence' will have act_conf1 >= 70
###Code
temp = df[(df.act_type1 == 'IN_VEHICLE') & (df.act_conf1 >=70) & (df.latitude>=29.6890) & (df.latitude <=29.6899) & (df.longitude>=-95.2719) & (df.longitude <=-95.2708)]
temp.head().T
len(temp) / len(df[(df.act_type1 == 'IN_VEHICLE') & (df.act_conf1 >=70)])
###Output
_____no_output_____
###Markdown
25% of high-confidence 'IN_VEHICLE' records are actually at home (where the type should be STILL or ON_FOOT). It could be higher if I checked other still places such as work. I might need to look into it more, but it is safe to say that Activity is not a reliable record. 'Extra' features As discussed in the cleaning_long.ipynb notebook, the Extra features for this dataset only contain 1 value:```'extra': [{'type': 'VALUE', 'name': 'vehicle_personal_confidence', 'intVal': 100}]```Also, these features are 99.9% missing. It's safe to disregard 'extra'
###Code
df.reset_index(drop=True,inplace=True)
df.to_feather(PATH/'houston_ready.feather')
###Output
_____no_output_____
###Markdown
Document: [PySpark API](https://spark.apache.org/docs/latest/api/python/index.html)
###Code
%matplotlib inline
from pyspark.sql.functions import col
from pyspark.sql.functions import explode
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import IndexToString
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.classification import OneVsRest
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
###Output
_____no_output_____
###Markdown
Load Data from PIO
###Code
event_df = p_event_store.find('IrisApp')
event_df.show(5)
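# Flatten the event 'fields' map column into typed top-level columns:
# keys starting with 'attr' are cast to double (features); everything else stays a string (e.g. the target).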
def get_field_type(name):
if name.startswith('attr'):
return 'double'
else:
return 'string'
field_names = (event_df
.select(explode("fields"))
.select("key")
.distinct()
.rdd.flatMap(lambda x: x)
.collect())
field_names.sort()
exprs = [col("fields").getItem(k).cast(get_field_type(k)).alias(k) for k in field_names]
data_df = event_df.select(*exprs)
data_df.show(5)
###Output
_____no_output_____
###Markdown
Pandas
###Code
p_data_df = data_df.toPandas()
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
scatter_matrix(p_data_df, diagonal='kde', color='k', alpha=0.3)
plt.show()
###Output
_____no_output_____
###Markdown
Train and Test
###Code
(train_df, test_df) = data_df.randomSplit([0.9, 0.1])
labelIndexer = StringIndexer(inputCol="target", outputCol="label").fit(train_df)
featureAssembler = VectorAssembler(inputCols=[x for x in field_names if x.startswith('attr')],
outputCol="features")
clf = RandomForestClassifier(featuresCol="features", labelCol="label", predictionCol="prediction",
probabilityCol="probability", rawPredictionCol="rawPrediction",
maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0,
maxMemoryInMB=256, cacheNodeIds=False, checkpointInterval=10,
impurity="gini", numTrees=20, featureSubsetStrategy="auto",
seed=None, subsamplingRate=1.0)
# clf = DecisionTreeClassifier(featuresCol="features", labelCol="label", predictionCol="prediction",
# probabilityCol="probability", rawPredictionCol="rawPrediction",
# maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0,
# maxMemoryInMB=256, cacheNodeIds=False, checkpointInterval=10,
# impurity="gini", seed=None)
# TODO MultilayerPerceptronClassifier is NPE...
# clf = MultilayerPerceptronClassifier(featuresCol="features", labelCol="label",
# predictionCol="prediction", maxIter=100, tol=1e-6, seed=None,
# layers=None, blockSize=128, stepSize=0.03, solver="l-bfgs",
# initialWeights=None)
# TODO NPE...
# lr = LogisticRegression(featuresCol="features", labelCol="label", predictionCol="prediction",
# maxIter=100, regParam=0.0, elasticNetParam=0.0, tol=1e-6, fitIntercept=True,
# threshold=0.5, probabilityCol="probability", # thresholds=None,
# rawPredictionCol="rawPrediction", standardization=True, weightCol=None,
# aggregationDepth=2, family="auto")
# lr = LogisticRegression()
# clf = OneVsRest(classifier=lr)
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel",
labels=labelIndexer.labels)
pipeline = Pipeline(stages=[featureAssembler, labelIndexer, clf, labelConverter])
model = pipeline.fit(train_df)
predict_df = model.transform(test_df)
predict_df.select("predictedLabel", "target", "features").show(5)
evaluator = MulticlassClassificationEvaluator(
labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predict_df)
print("Test Error = %g" % (1.0 - accuracy))
###Output
_____no_output_____
###Markdown
US Powerball Winning Numbers Analysis
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv('data/powerball_winning_numbers.csv')
df.head()
df.tail()
df.info()
df.isna().sum()
df['Winning Numbers'][:10]
###Output
_____no_output_____
###Markdown
**Initial Observation:**- Looks like there are 210 drawings without Multipliers; these grand prize winners did not purchase the power play option- Winning Numbers column contains all the numbers separated by a space
###Code
#next steps:
#seperate winning numbers into individual columns
#create a frequency count of winning numbers for each pick column
#plot histograms of number frequency for each pick column
winning_numbers_array = df['Winning Numbers'].str.split(" ", 6, expand=True)
df['a'] = winning_numbers_array[0]
df['b'] = winning_numbers_array[1]
df['c'] = winning_numbers_array[2]
df['d'] = winning_numbers_array[3]
df['e'] = winning_numbers_array[4]
df['powerball'] = winning_numbers_array[5]
df = df.drop('Winning Numbers', axis=1)
df.head()
###Output
_____no_output_____
###Markdown
---
###Code
def count_freq(num_array):
"""
Take in array
Return a dictionary of numbers and their frequency counts
"""
num_list = list(num_array.values.T.flatten())
freq_count = {}
for num in num_list:
if num in freq_count:
freq_count[num] += 1
else:
freq_count[num] = 1
return freq_count
#create frequency count for each pick
first_num = count_freq(df['a'])
second_num = count_freq(df['b'])
third_num = count_freq(df['c'])
fourth_num = count_freq(df['d'])
fifth_num = count_freq(df['e'])
powerball = count_freq(df['powerball'])
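# Cross-check (illustration only): collections.Counter from the standard library produces
# the same tallies as the count_freq helper above.
from collections import Counter
print(dict(Counter(df['a'].tolist())) == first_num)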
#test out plot of first number
first_num_sorted = sorted(first_num.items())
x, y = zip(*first_num_sorted)
plt.figure(figsize=[20,10])
plt.plot(x,y)
plt.show()
###Output
_____no_output_____
###Markdown
Data available for download from Kaggle: https://www.kaggle.com/dimitaryanev/mobilechurndataxlsx
###Code
import pandas as pd
# Converted to TSV for faster load times, if using the link above, use read_excel()
f_path = "data/mobile-churn-data.tsv"
df = pd.read_csv(f_path, sep='\t')
# Get rid of P.I.D. for privacy and lack of predictive value, year is all the same, so no helpful info
df = df.drop(['user_account_id', 'year'], axis=1)
df.head()
X = df.drop('churn', axis=1)
y = df[['churn']]
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
from sklearn.preprocessing import StandardScaler
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1)
def get_shape(clean=False):
if not clean:
iters = [X_train, y_train, X_test, y_test]
else:
iters = [X_train, X_train_clean, y_train, X_test, X_test_clean, y_test]
for x in iters:
print(x.shape)
get_shape()
def remove_corr_cols(data, target=['user_lifetime']):
    """Remove columns whose correlation with the target column duplicates that of another column."""
    # Correlations are computed on X_train so that train and test end up dropping the same columns
    sorted_corr = X_train.corr()[target].sort_values(target, ascending=False)
    removing_features = sorted_corr[sorted_corr.duplicated()].index
    return data.drop(removing_features, axis=1)
def process_df(data, scale=None):
    # Remove input features that carry the same information
    data = remove_corr_cols(data)
# Make a long col name shorter & more intuitive
data = data.rename(columns={'user_no_outgoing_activity_in_days': 'min_outgoing_inactive_days'})
cols = data.columns
if not scale:
scale = StandardScaler()
scale.fit(data)
return pd.DataFrame(scale.transform(data), columns=cols), scale
X_train_clean, scale = process_df(X_train)
X_test_clean, scale = process_df(X_test, scale)
get_shape(True)
# Uncomment the next line if you need to install SMOTE aka imbalanced-learn
# conda install -c conda-forge imbalanced-learn
from imblearn.over_sampling import SMOTE
os = SMOTE()
cols = X_train_clean.columns
os_data_X, os_data_y = os.fit_sample(X_train_clean, y_train)
os_data = pd.DataFrame(data=os_data_X, columns=cols)
os_data['churn'] = os_data_y
os_data_X.columns
os_data_X.shape
print("Length of oversampled data is ",len(os_data_X))
print("Number of churn whose value is 0 in oversampled data ",len(os_data_y[os_data.churn==0]))
print("Number of churn whose value is 1 in oversampled data in oversampled data ",len(os_data_y[os_data.churn==1]))
len(X_test_clean)
len(y_test['churn'])
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
logreg = LogisticRegression(max_iter=200)
# y requires a series, not a dataframe
logreg.fit(X_train_clean, y_train['churn'])
X_test_preds = logreg.predict(X_test_clean)
logreg.score(X_test_clean, y_test['churn'])
# The above is accuracy without SMOTE, now let's try it with
logreg.fit(os_data.drop('churn', axis=1), os_data['churn'])
logreg.score(X_test_clean, y_test['churn'])
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
rfe = RFE(logreg, 40)
rfe = rfe.fit(os_data_X, os_data_y['churn'])
print(rfe.support_)
print(rfe.ranking_)
###Output
_____no_output_____
###Markdown
Data Cleaning and Exploratory Data Analysis Import necessary packages
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Import data
###Code
df = pd.read_csv('../data/entering_canada/bwt-taf-2016-07-01--2016-09-30-en.csv')
###Output
_____no_output_____
###Markdown
Determine Target Locations from Table
###Code
df['Location'].unique()
target_locations = ['Surrey, BC', 'Huntingdon, BC', 'Aldergrove, BC']
df.loc[(df['Location'].isin(target_locations)) & (df['Travellers Flow'] != 'No Delay')]
###Output
_____no_output_____
###Markdown
Initialization and Data Loading
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set_theme(color_codes=True)
from dkt.dataloader import __process_feature
dtype = {
'userID': 'int16',
'answerCode': 'int8',
'KnowledgeTag': 'int16'
}
df = pd.read_csv('./data/train_data.csv', dtype=dtype, parse_dates=['Timestamp'])
df = df.sort_values(by=['userID', 'Timestamp']).reset_index(drop=True) # the data is already sorted, so this has no real effect
df
df = __process_feature(df)
df
###Output
_____no_output_____
###Markdown
Basic Information
###Code
print(f"""
userID : {df.userID.nunique()}
assessmentItemID : {df.assessmentItemID.nunique()}
testID : {df.testId.nunique()}
mean answer rate : {df.answerCode.sum() / df.shape[0] * 100:.2f}%
KnowledgeTag : {df.KnowledgeTag.nunique()}
""")
###Output
userID : 6698
assessmentItemID : 9454
testID : 1537
mean answer rate : 65.44%
KnowledgeTag : 912
###Markdown
assessmentItemID & KnowledgeTag 조합assessmentItemID가 대분류이고 KnowledgeTag가 중분류라면 KnowledgeTag가 대분류와 독립적인지 종속적인지 여부 확인
###Code
combination = df.groupby('KnowledgeTag').agg({
'assessmentItemID': 'count'
})
combination
# Result: independent (the same tag can appear under multiple IDs)
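# Complementary check (a small sketch): count how many distinct assessmentItemIDs each tag
# appears under; values greater than 1 confirm that the same tag spans multiple item IDs.
tag_spread = df.groupby('KnowledgeTag')['assessmentItemID'].nunique()
print("tags appearing under more than one assessmentItemID:", (tag_spread > 1).sum())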
###Output
_____no_output_____
###Markdown
Per-user Analysis
###Code
def percentile(s):
return np.sum(s) / len(s)
user_group = df.groupby('userID').agg({
'assessmentItemID': 'count',
'answerCode': percentile
})
user_group
# number of questions solved and answer rate per user
user_group.describe()
###Output
_____no_output_____
###Markdown
Per-question Analysis: Basic Information
###Code
fig, ax = plt.subplots()
user_group['assessmentItemID'].hist(bins=20, ax=ax)
ax.axvline(user_group['assessmentItemID'].mean(), color='red')
###Output
_____no_output_____
###Markdown
Average answer rate of students who solved the same number of questions
###Code
itemnum_ans = user_group.groupby('assessmentItemID').mean()
itemnum_ans['num_items'] = itemnum_ans.index
itemnum_ans
fig, ax = plt.subplots()
sns.regplot(data=itemnum_ans, x='num_items', y='answerCode',
line_kws={"color": "orange"}, scatter_kws={'alpha':0.6}, ax=ax)
ax.set_title('# of Questions - Answer Rate')
ax.set_xlabel('# of Questions')
ax.set_ylabel('Answer Rate')
###Output
_____no_output_____
###Markdown
Aggregate all students who solved a similar number of questions
###Code
itemnum_ans = user_group.groupby('assessmentItemID').mean()
bins = 300
itemnum_ans['bins'] = pd.cut(itemnum_ans.index, [i * (itemnum_ans.index.max() - itemnum_ans.index.min()) // bins for i in range(bins)])
itemnum_ans = itemnum_ans.groupby('bins').mean()
itemnum_ans['mid'] = list(map(lambda x: (x.left + x.right)//2, itemnum_ans.index))
fig, ax = plt.subplots()
sns.regplot(data=itemnum_ans, x='mid', y='answerCode',
line_kws={"color": "orange"}, scatter_kws={'alpha': 0.6}, ax=ax)
ax.set_title(f'# of Items - Answer Rate | bins={bins}')
ax.set_xlabel('# of Items')
ax.set_ylabel('Answer Rate')
###Output
_____no_output_____
###Markdown
Loading Data > consists of 4063300 records
###Code
data = []
indicies = []
import numpy as np
from tqdm import tqdm
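# Iterate over the dataset `ds` (defined earlier in the notebook), keeping only finite metric values and their indices.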
for i,(res,idx) in tqdm(enumerate(ds)):
res,idx = res.numpy(),idx.numpy()
if(not (np.isnan(res) or np.isinf(res))):
data.append(res)
indicies.append(idx)
data = np.array(data)
indicies = np.array(indicies)
###Output
2021-10-09 16:05:22.491630: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
4063300it [05:08, 13183.97it/s]
###Markdown
Memorization Metric plots> Plotting average values of memorization metric over a bucketed range of values
###Code
from IPython.display import display
import matplotlib.pyplot as plt
import ipywidgets as widgets
%matplotlib inline
import numpy as np
class Plotter:
def __init__(self,title,xlabel,ylabel,y,x=None,size=25,default_slider_value=None):
self.title = title
self.xlabel = xlabel
self.ylabel = ylabel
self.default_slider_value = default_slider_value
self.y = y
self.x = x
if(x is None):
self.x = [i for i in range(len(data))]
self.size = 25
self.params = {'legend.fontsize': 'large',
'figure.figsize': (15,5),
'axes.labelsize': size,
'axes.titlesize': size,
'xtick.labelsize': size*0.75,
'ytick.labelsize': size*0.75,
'axes.titlepad': 25,
'font.family':'sans-serif',
'font.weight':'bold',
'text.color':'aqua'
}
def plot_data(self,scale):
scale = 2**scale #Converting log scale to normal scale
buckets = []
length = len(self.y)
bucket_size = length//scale
index = []
for i in range(0,length,bucket_size):
buckets.append(self.y[i:i+bucket_size].mean())
index.append(self.x[min(i+bucket_size-1,len(indicies)-1)])
plt.plot(index,buckets)
plt.rcParams.update(self.params)
plt.title(self.title)
plt.xlabel(self.xlabel)
plt.ylabel(self.ylabel)
plt.show()
def clicked(self,b):
self.out.clear_output()
scale = self.slider.value
with self.out:
self.plot_data(scale)
def run(self):
self.out = widgets.Output()
button = widgets.Button(description="Plot Value")
slider_max = int(np.log2(len(self.y)))
if(self.default_slider_value is not None):
default_slider_value = self.default_slider_value
else:
default_slider_value = np.random.choice([i for i in range(1,slider_max)])
self.slider = widgets.IntSlider(min=1, max=slider_max,
value=default_slider_value,
description="Scale",
layout=widgets.Layout(width='50%'))
box_layout = widgets.Layout(
display='flex',
flex_flow='column',
align_items='center',
width='80%'
)
box = widgets.VBox(
[
self.out,
self.slider,
button
],
layout=box_layout
)
with self.out:
self.plot_data(default_slider_value)
button.on_click(self.clicked)
display(box)
plotter = Plotter(title="Memorization Metric",
xlabel='Index',ylabel='NLL Loss',
x=indicies,y=data)
plotter.run()
###Output
_____no_output_____
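###Markdown
For reference, the same bucketed average can be reproduced without the widget machinery. A minimal, non-interactive sketch (assumes `data` and `indicies` from the loading step above):
###Code
import numpy as np
import matplotlib.pyplot as plt

n_buckets = 2 ** 8
bucket_size = max(1, len(data) // n_buckets)
starts = range(0, len(data), bucket_size)
bucket_means = [data[i:i + bucket_size].mean() for i in starts]
bucket_ends = [indicies[min(i + bucket_size - 1, len(indicies) - 1)] for i in starts]
plt.plot(bucket_ends, bucket_means)
plt.title('Memorization Metric (bucketed, non-interactive)')
plt.xlabel('Index')
plt.ylabel('NLL Loss')
plt.show()
###Output
_____no_output_____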
###Markdown
Correlation
###Code
from scipy import signal
correlation = signal.correlate(indicies, data, mode="full")
plotter = Plotter(xlabel='indicies',ylabel='correlation',
title='Correlation',x=indicies,y=correlation,default_slider_value=11)
plotter.run()
###Output
_____no_output_____
###Markdown
Statistics
###Code
import matplotlib.pyplot as plt
SAMPLE_VALUE = len(data)*25//100
from sklearn.metrics import r2_score
r2 = r2_score(indicies,data)
print(f"R2 Score between indicies and data: {r2:.5f}")
avg_start = data[:SAMPLE_VALUE].mean()
avg_end = data[SAMPLE_VALUE:].mean()
var_start = data[:SAMPLE_VALUE].var()
var_end = data[SAMPLE_VALUE:].var()
print(f"Average NLL Loss changed from {avg_start:.5f} to {avg_end:.5f}")
print(f"Variance of NLL Loss changed from {var_start:.5f} to {var_end:.5f}")
print("Trend of very slight improvement continues")
###Output
R2 Score between indicies and data: -3.00022
Average NLL Loss changed from -10.01421 to -10.00763
Variance of NLL Loss changed from 35.04649 to 34.87698
Trend of very slight improvement continues
###Markdown
Olympic Games Exploratory Data Analysis Before we begin, let's set up some useful settings:- Max number of columns to be displayed = 100- Max number of rows to be displayed = 100
###Code
import pandas as pd
pd.set_option('display.max_columns', 100)
pd.set_option('display.max_rows', 100)
###Output
_____no_output_____
###Markdown
First step: read and glimpse the dataset In this EDA, we'll use the ["120 years of Olympic history: athletes and results"](https://www.kaggle.com/heesoo37/120-years-of-olympic-history-athletes-and-results) Kaggle dataset, locally available in this repo in `raw_data/athlete_events.csv`. Let's first read the dataset:
###Code
df = pd.read_csv("raw_data/athlete_events.csv")
###Output
_____no_output_____
###Markdown
Q0: How many rows and columns are there in this dataset?
###Code
print(df.shape)
###Output
(271116, 15)
###Markdown
Over 271 thousand athlete-event entries in the last 120 years of Olympics! Wow! Let's get some basic info on the available data:
###Code
print(df.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 271116 entries, 0 to 271115
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID 271116 non-null int64
1 Name 271116 non-null object
2 Sex 271116 non-null object
3 Age 261642 non-null float64
4 Height 210945 non-null float64
5 Weight 208241 non-null float64
6 Team 271116 non-null object
7 NOC 271116 non-null object
8 Games 271116 non-null object
9 Year 271116 non-null int64
10 Season 271116 non-null object
11 City 271116 non-null object
12 Sport 271116 non-null object
13 Event 271116 non-null object
14 Medal 39783 non-null object
dtypes: float64(3), int64(2), object(10)
memory usage: 31.0+ MB
None
###Markdown
Lots of info available! Let's take a glimpse at the actual data:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Each row represents a competitor in a specific event from a specific olympic games. Interesting, very interesting. Q1: Which are the oldest olympic summer and winter games with data available in the dataset?To solve this one, we may resort to the `np.sort()` function:
###Code
import numpy as np
np.sort(df['Year'].unique()) # .unique() to return only one occurrence for each olympic year
###Output
_____no_output_____
###Markdown
The first olympic game with data available is actually the first one in modern age, 1896 Olympic Summer games, in Athens. Q2: Which game had the greatest number of registered competitors?To answer this one, we may resort to `df.value_counts()` :
###Code
df['Year'].value_counts()
###Output
_____no_output_____
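###Markdown
One caveat before reading too much into the `Year` counts: until 1992 the Summer and Winter Games were held in the same calendar year, so grouping by `Year` merges both editions. Counting by the `Games` column keeps them separate:
###Code
# number of athlete-event entries per edition, Summer and Winter counted separately
df['Games'].value_counts().head(10)
###Output
_____no_output_____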
###Markdown
Well, the one with greatest number of competitors was not one of the last ones, but rather the 1992 Summer Games! Very interesting! Q3.1: What is the range of competing athletes' age?This one is rather simple:
###Code
import numpy as np
min_age_all_sports = np.amin(df['Age'])
max_age_all_sports = np.amax(df['Age'])
print(f'Age ranging from {min_age_all_sports} to {max_age_all_sports}')
###Output
Age ranging from 10.0 to 97.0
###Markdown
Q3.2: What is the most common athlete age found in games?One could guess that most athletes are young, in their finest physical forms. But is this true? Let's find out.
###Code
df.groupby(by="Age")["Age"].count().sort_values(ascending=False).head()
###Output
_____no_output_____
###Markdown
Interesting! The most common age is 23 years old, followed by other ages in the twenties range. But is the age spread out or tightly concentrated around this value?
###Code
df["Age"].describe() # display all major statistics (mean, median, std, quartiles) at once
###Output
_____no_output_____
###Markdown
Well, indeed, most athletes (75%) were 28 or younger while competing. The youngest of all was a 10-year-old child! And the oldest one was a 97-year-old senior! Impressive! Is Age "evenly distributed", in the sense of not being side-skewed or too spiked/flattened? We can get a quick sense of this by looking at its kurtosis and skewness values.
###Code
df["Age"].skew() # retrieve its skewness
###Output
_____no_output_____
###Markdown
So, as Age distribution has a positive skewness, it is right-skewed, i.e. skewed towards the right, having its most common value (mode), mean and median all concentrated in the left side, with a long tail to the right
###Code
df["Age"].kurt() # retrieve its kurtosis
###Output
_____no_output_____
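###Markdown
A quick note on conventions before interpreting the value: pandas' `kurt()` returns Fisher's *excess* kurtosis, for which a normal distribution scores 0, while Pearson's definition uses 3 as the reference. A minimal sketch, assuming `scipy` is available in this environment:
###Code
from scipy import stats

age = df["Age"].dropna()
excess_kurtosis = stats.kurtosis(age, fisher=True)    # normal distribution == 0 (same convention as pandas, up to bias correction)
pearson_kurtosis = stats.kurtosis(age, fisher=False)  # normal distribution == 3, i.e. excess + 3
print(f"Excess (Fisher) kurtosis: {excess_kurtosis:.3f}")
print(f"Pearson kurtosis: {pearson_kurtosis:.3f}")
###Output
_____no_output_____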
###Markdown
As the Age distribution has a positive excess kurtosis (0 being the reference for pandas' `kurt()`, as noted above), it is leptokurtic, i.e. a little "spikier" than the normal distribution, with more mass concentrated around its central values (mean, median, mode). By only looking at its kurtosis and skewness, we found the Age distribution is asymmetric to the left (i.e. with smaller values of Age being more common) and "spikier" (i.e. more concentrated around mean, median and mode). If you are a "seeing is believing" kind of person, let's make a simple histogram/distribution graph to confirm it:
###Code
import seaborn as sns
sns.displot(df["Age"], discrete=True)
###Output
_____no_output_____
###Markdown
Just as we have found previously! So, this brief analysis confirms that, in general, most athletes are very young while competing, relying on their finest physical form to complete most sports' events. But... does this result hold for most sports? Is there a sport where seniors compete most? We shall see this one next. But first, one may ask: is the medal-winners' Age distribution at all similar to the general athlete distribution? Let's find out.
###Code
df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"].describe()
print(df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"].kurt())
print(df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"].skew())
###Output
4.6159894422695835
1.4975894843728454
###Markdown
So, the winners' distribution is quite similar: asymmetric to the left and spikier, but less so. We can check this in its distribution plot.
###Code
sns.displot(df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"], discrete=True)
###Output
_____no_output_____
###Markdown
Its form is very similar, see? OK, it's not that easy to see just looking at each graph. Let's plot them overlapped.
###Code
mod_df = df.assign(Medallist=["Yes" if Medal in ("Gold", "Silver", "Bronze") else "No" for Medal in df.Medal ])[["Age", "Medallist"]] # compute a new column "Medallist" telling whether the athlete was a medallist (regardless of which medal was achieved)
sns.displot(mod_df, x="Age", hue ="Medallist", discrete=True)
###Output
_____no_output_____
###Markdown
Now, you see! Medallists' and non-medalists' Age distribution is very similar, with Medallist much less frequent (of course). Q3.3: What is the distribution of age in various sports?Now, to answer this one, we must look not only at the most common value, but also other meaningful statistics of Age attribute in various sports. Let's start with the usual `describe` method:
###Code
df[['Age', 'Sport']].groupby('Sport').describe()
###Output
_____no_output_____
###Markdown
Looking at the table above, we see some interesting facts:* Rhythmic Gymnastics is a clear outlier with younger athletes - its 75th percentile is 20 years old!* Shooting, Polo, Equestrianism, Croquet, Alpinism, Art Competitions, Roque are outliers with older athletes - their 75th percentiles are 39, 39, 40, 42.5, 47.5, 54, 61.5 years old, respectively!* More popular team sports like Football (Soccer), Volleyball, Basketball are very alike and aligned with the general statistics - their 75th percentiles are 26, 28, 28 years old, respectively. To better see the relationship, let's plot some of the above cited sports.
###Code
sports_df = df.query(' Sport in ("Croquet", "Alpinism", "Roque") ')[['Age', 'Sport']] # selecting less popular sports together, to not distort graph with very different count scales
sns.displot(sports_df, x="Age", hue ="Sport", discrete=True)
sports_df = df.query(' Sport in ("Rhythmic Gymnastics", "Equestrianism") ')[['Age', 'Sport']] # selecting somewhat popular sports, to not distort graph with very different count scales
sns.displot(sports_df, x="Age", hue ="Sport", discrete=True)
sports_df = df.query(' Sport in ("Football", "Volleyball", "Basketball") ')[['Age', 'Sport']] # selecting very popular sports, to not distort graph with very different count scales
sns.displot(sports_df, x="Age", hue ="Sport", discrete=True)
###Output
_____no_output_____
###Markdown
Document: [PySpark API](https://spark.apache.org/docs/latest/api/python/index.html)
###Code
%matplotlib inline
from pyspark.sql.functions import col
from pyspark.sql.functions import explode
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import IndexToString
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.classification import OneVsRest
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
###Output
_____no_output_____
###Markdown
Load Data from PIO
###Code
from pypio.utils import new_string_array
train_event_df = p_event_store.find('HousePrices', event_names=new_string_array(['train'], sc._gateway))
train_event_df.show(5)
def get_data_df(df):
int_fields = ["MSSubClass","LotFrontage","LotArea","OverallQual","OverallCond","YearBuilt","YearRemodAdd","MasVnrArea","BsmtFinSF1","BsmtFinSF2","BsmtUnfSF","TotalBsmtSF","1stFlrSF","2ndFlrSF","LowQualFinSF","GrLivArea","BsmtFullBath","BsmtHalfBath","FullBath","HalfBath","BedroomAbvGr","KitchenAbvGr","TotRmsAbvGrd","Fireplaces","GarageYrBlt","GarageCars","GarageArea","WoodDeckSF","OpenPorchSF","EnclosedPorch","3SsnPorch","ScreenPorch","PoolArea","MiscVal","MoSold","YrSold","SalePrice"]
def get_field_type(name):
if name in int_fields:
return 'integer'
else:
return 'string'
field_names = (df
.select(explode("fields"))
.select("key")
.distinct()
.rdd.flatMap(lambda x: x)
.collect())
field_names.sort()
exprs = [col("fields").getItem(k).cast(get_field_type(k)).alias(k) for k in field_names]
return df.select(*exprs)
train_data_df = get_data_df(train_event_df)
train_data_df.show(1)
###Output
_____no_output_____
###Markdown
Data ExplorationFor details, see https://www.kaggle.com/pmarcelino/comprehensive-data-exploration-with-python
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler
from scipy import stats
df_train = train_data_df.toPandas()
df_train.columns
#descriptive statistics summary
df_train['SalePrice'].describe()
#histogram
sns.distplot(df_train['SalePrice']);
#skewness and kurtosis
print("Skewness: %f" % df_train['SalePrice'].skew())
print("Kurtosis: %f" % df_train['SalePrice'].kurt())
#scatter plot grlivarea/saleprice
var = 'GrLivArea'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#scatter plot totalbsmtsf/saleprice
var = 'TotalBsmtSF'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#box plot overallqual/saleprice
var = 'OverallQual'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
var = 'YearBuilt'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
f, ax = plt.subplots(figsize=(16, 8))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
plt.xticks(rotation=90);
#correlation matrix
corrmat = df_train.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
#saleprice correlation matrix
k = 10 #number of variables for heatmap
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(df_train[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
#scatterplot
sns.set()
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
sns.pairplot(df_train[cols], size = 2.5)
plt.show();
# TODO null values?
#missing data
total = df_train.isnull().sum().sort_values(ascending=False)
percent = (df_train.isnull().sum()/df_train.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
#dealing with missing data
df_train = df_train.drop((missing_data[missing_data['Total'] > 1]).index,1)
df_train = df_train.drop(df_train.loc[df_train['Electrical'].isnull()].index)
df_train.isnull().sum().max() #just checking that there's no missing data missing...
#standardizing data
saleprice_scaled = StandardScaler().fit_transform(df_train['SalePrice'][:,np.newaxis]);
low_range = saleprice_scaled[saleprice_scaled[:,0].argsort()][:10]
high_range= saleprice_scaled[saleprice_scaled[:,0].argsort()][-10:]
print('outer range (low) of the distribution:')
print(low_range)
print('\nouter range (high) of the distribution:')
print(high_range)
#bivariate analysis saleprice/grlivarea
var = 'GrLivArea'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
# TODO wrong index
#deleting points
df_train.sort_values(by = 'GrLivArea', ascending = False)[:2]
df_train = df_train.drop(df_train[df_train['Id'] == 1299].index)
df_train = df_train.drop(df_train[df_train['Id'] == 524].index)
#bivariate analysis saleprice/grlivarea
var = 'TotalBsmtSF'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#histogram and normal probability plot
sns.distplot(df_train['SalePrice'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['SalePrice'], plot=plt)
#applying log transformation
df_train['SalePrice'] = np.log(df_train['SalePrice'])
#transformed histogram and normal probability plot
sns.distplot(df_train['SalePrice'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['SalePrice'], plot=plt)
#histogram and normal probability plot
sns.distplot(df_train['GrLivArea'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['GrLivArea'], plot=plt)
#data transformation
df_train['GrLivArea'] = np.log(df_train['GrLivArea'])
#transformed histogram and normal probability plot
sns.distplot(df_train['GrLivArea'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['GrLivArea'], plot=plt)
#histogram and normal probability plot
sns.distplot(df_train['TotalBsmtSF'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['TotalBsmtSF'], plot=plt)
#create column for new variable (one is enough because it's a binary categorical feature)
#if area>0 it gets 1, for area==0 it gets 0
df_train['HasBsmt'] = pd.Series(len(df_train['TotalBsmtSF']), index=df_train.index)
df_train['HasBsmt'] = 0
df_train.loc[df_train['TotalBsmtSF']>0,'HasBsmt'] = 1
#transform data
df_train.loc[df_train['HasBsmt']==1,'TotalBsmtSF'] = np.log(df_train['TotalBsmtSF'])
#histogram and normal probability plot
sns.distplot(df_train[df_train['TotalBsmtSF']>0]['TotalBsmtSF'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train[df_train['TotalBsmtSF']>0]['TotalBsmtSF'], plot=plt)
#scatter plot
plt.scatter(df_train['GrLivArea'], df_train['SalePrice']);
#scatter plot
plt.scatter(df_train[df_train['TotalBsmtSF']>0]['TotalBsmtSF'], df_train[df_train['TotalBsmtSF']>0]['SalePrice']);
#convert categorical variable into dummy
df_train = pd.get_dummies(df_train)
###Output
_____no_output_____
###Markdown
TODO: Train and Test
###Code
(train_df, test_df) = data_df.randomSplit([0.9, 0.1])
labelIndexer = StringIndexer(inputCol="target", outputCol="label").fit(train_df)
featureAssembler = VectorAssembler(inputCols=[x for x in field_names if x.startswith('attr')],
outputCol="features")
clf = RandomForestClassifier(featuresCol="features", labelCol="label", predictionCol="prediction",
probabilityCol="probability", rawPredictionCol="rawPrediction",
maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0,
maxMemoryInMB=256, cacheNodeIds=False, checkpointInterval=10,
impurity="gini", numTrees=20, featureSubsetStrategy="auto",
seed=None, subsamplingRate=1.0)
# clf = DecisionTreeClassifier(featuresCol="features", labelCol="label", predictionCol="prediction",
# probabilityCol="probability", rawPredictionCol="rawPrediction",
# maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0,
# maxMemoryInMB=256, cacheNodeIds=False, checkpointInterval=10,
# impurity="gini", seed=None)
# TODO MultilayerPerceptronClassifier is NPE...
# clf = MultilayerPerceptronClassifier(featuresCol="features", labelCol="label",
# predictionCol="prediction", maxIter=100, tol=1e-6, seed=None,
# layers=None, blockSize=128, stepSize=0.03, solver="l-bfgs",
# initialWeights=None)
# TODO NPE...
# lr = LogisticRegression(featuresCol="features", labelCol="label", predictionCol="prediction",
# maxIter=100, regParam=0.0, elasticNetParam=0.0, tol=1e-6, fitIntercept=True,
# threshold=0.5, probabilityCol="probability", # thresholds=None,
# rawPredictionCol="rawPrediction", standardization=True, weightCol=None,
# aggregationDepth=2, family="auto")
# lr = LogisticRegression()
# clf = OneVsRest(classifier=lr)
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel",
labels=labelIndexer.labels)
pipeline = Pipeline(stages=[featureAssembler, labelIndexer, clf, labelConverter])
model = pipeline.fit(train_df)
predict_df = model.transform(test_df)
predict_df.select("predictedLabel", "target", "features").show(5)
evaluator = MulticlassClassificationEvaluator(
labelCol="label", predictionCol="prediction", metricName="accuracy")
accuracy = evaluator.evaluate(predict_df)
print("Test Error = %g" % (1.0 - accuracy))
###Output
_____no_output_____
###Markdown
Pclass This is the ticket class, with values (1 = 1st, 2 = 2nd, 3 = 3rd), a proxy for socio-economic status (SES):+ 1st = Upper+ 2nd = Middle+ 3rd = Lower The probability of survival decreases with Pclass; the highest survival rate is for first class, at 62.96% (see details in the graph below). Thus, first-class passengers are more likely to survive than the others.
###Code
plot_xvars(df=df_train, xvar='Pclass', yvar='Survived')
###Output
_____no_output_____
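###Markdown
The survival rates behind that plot can be checked directly with a groupby, a quick sanity check assuming `df_train` holds the standard Kaggle training columns:
###Code
# survival rate and passenger count per ticket class
df_train.groupby('Pclass')['Survived'].agg(['mean', 'count'])
###Output
_____no_output_____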
###Markdown
Sex Women have almost 4 times higher probability of survival than men.
###Code
plot_xvars(df=df_train, xvar='Sex', yvar='Survived')
###Output
_____no_output_____
###Markdown
SibSp Number of siblings / spouses aboard the Titanic
###Code
plot_xvars(df=df_train, xvar='SibSp', yvar='Survived')
###Output
_____no_output_____
###Markdown
Parch Number of parents / children aboard the Titanic
###Code
df_train['Parch_cut'] = df_train['Parch'].map(lambda x: 'c'+str(x) if x<3 else 'c3')
plot_xvars(df=df_train, xvar='Parch_cut', yvar='Survived')
###Output
_____no_output_____
###Markdown
Fare
###Code
df_train['Fare'].hist(bins=10);
cut_labels = ['[0,8]', '(8,20]', '(20,60]','(60,100]', '(100,Inf]']
cut_bins = [0,8,20,60,100,np.Inf]
df_train['Fare_cut'] = pd.cut(df_train['Fare'], bins=cut_bins, labels=cut_labels, include_lowest=True)
plot_xvars(df=df_train, xvar='Fare_cut', yvar='Survived')
###Output
_____no_output_____
###Markdown
Age
###Code
from sklearn import tree
X_train = df_train[['Age']].fillna(99)
y_train = df_train['Survived']
model = tree.DecisionTreeClassifier(min_samples_split=0.05, min_samples_leaf=0.05)
model.fit(X_train, y_train)
y_predict = model.predict(X_train)
plt.figure(figsize=(30,20))
tree.plot_tree(model, proportion=True, class_names=None)
plt.show()
cut_labels = ['[0,6]', '(6,15]', '(15,25]', '(25,30]', '(30,40]', '(40,50]', '(50,100]']
cut_bins = [0,6,15,25,30,40,50,100]
df_train['Age_cut'] = pd.cut(df_train['Age'].fillna(99), bins=cut_bins, labels=cut_labels, include_lowest=True)
plot_xvars(df=df_train, xvar='Age_cut', yvar='Survived')
plot_xvars(df=df_train, xvar='Embarked', yvar='Survived')
df_train.columns
df_train[df_train.SibSp>4]
df_test[df_test.SibSp>4]
df_train.shape
df_train
df_train.info()
import pandas as pd
import lightgbm as lgb
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.model_selection import cross_validate
from sklearn.model_selection import train_test_split
import pickle
# ================================= Data preprocessor ======================================
disc_vars = ['SibSp','Parch']
cat_vars = ['Pclass','Sex','Embarked']
num_vars = ['Age','Fare']
num_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
disc_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value=-999))])
cat_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='none')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(transformers=[
('num', num_transformer, num_vars),
('disc', disc_transformer, disc_vars),
('cat', cat_transformer, cat_vars)])
# ================================= Building the model ======================================
# Spliting the data into test and train sets
X = df_train[num_vars + cat_vars + disc_vars]
y = df_train["Survived"]
X_train,X_test,y_train,y_test = train_test_split(X, y, test_size =.20, random_state=7)
# Fit the model
gbm_model = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', lgb.LGBMClassifier())])
scores = cross_validate(gbm_model, X_train, y_train, scoring='roc_auc') # roc_auc, accuracy
print('-' * 80)
print(str(gbm_model.named_steps['classifier']))
print('-' * 80)
for key, values in scores.items():
print(key, ' mean ', values.mean())
print(key, ' std ', values.std())
print('-' * 80)
gbm_model.fit(X_train, y_train)
# ================================= Saving the model ======================================
#pickle.dump(gbm_model, open('models/gbm_model.pickle', 'wb'))
X_train,X_test,y_train,y_test = train_test_split(X, y, test_size =.20, random_state=7) # 3, 5
df_train.Survived.mean(), y_train.mean(), y_test.mean()
import pickle
model_path = 'models/gbm_model.pickle'
gbm_model = pickle.load(open(model_path, 'rb'))
gbm_pred = gbm_model.predict(df_test)
gbm_proba = gbm_model.predict_proba(df_test)
df_test['Survived'] = gbm_pred
df_test['Survived_proba'] = [v[1] for v in list(gbm_proba)]
df_test[['PassengerId','Survived']].to_csv('data/submission1.csv', index=False, sep=',', decimal='.')
df_test.head()
plot_xvars(df=df_train, xvar='Parch', yvar='Survived')
plot_xvars(df=df_test, xvar='Parch', yvar='Survived_proba')
cut_labels = ['[0,6]', '(6,15]', '(15,25]', '(25,30]', '(30,40]', '(40,50]', '(50,100]']
cut_bins = [0,6,15,25,30,40,50,100]
df_test['Age_cut'] = pd.cut(df_test['Age'].fillna(99), bins=cut_bins, labels=cut_labels, include_lowest=True)
plot_xvars(df=df_test, xvar='Age_cut', yvar='Survived_proba')
cut_labels = ['[0,6]', '(6,15]', '(15,25]', '(25,30]', '(30,40]', '(40,50]', '(50,100]']
cut_bins = [0,6,15,25,30,40,50,100]
df_train['Age_cut'] = pd.cut(df_train['Age'].fillna(99), bins=cut_bins, labels=cut_labels, include_lowest=True)
plot_xvars(df=df_train, xvar='Age_cut', yvar='Survived')
cut_labels = ['[0,8]', '(8,20]', '(20,60]','(60,100]', '(100,Inf]']
cut_bins = [0,8,20,60,100,np.Inf]
df_train['Fare_cut'] = pd.cut(df_train['Fare'], bins=cut_bins, labels=cut_labels, include_lowest=True)
plot_xvars(df=df_train, xvar='Fare_cut', yvar='Survived')
cut_labels = ['[0,8]', '(8,20]', '(20,60]','(60,100]', '(100,Inf]']
cut_bins = [0,8,20,60,100,np.Inf]
df_test['Fare_cut'] = pd.cut(df_test['Fare'], bins=cut_bins, labels=cut_labels, include_lowest=True)
plot_xvars(df=df_test, xvar='Fare_cut', yvar='Survived_proba')
###Output
_____no_output_____
###Markdown
EDA of Opioid-Crisis-Adjacent Factors County-Level Drug-Related Deaths We take a look at drug poisoning mortality by county. The relevant dataset is cited in this [NYTimes Article](https://www.nytimes.com/interactive/2016/01/07/us/drug-overdose-deaths-in-the-us.html) and can be [found on the CDC website here](https://www.cdc.gov/nchs/data-visualization/drug-poisoning-mortality/).
###Code
# import county-level overdose counts
od_path = Path('data/NCHS_-_Drug_Poisoning_Mortality_by_County__United_States.csv')
county_od = pd.read_csv(od_path, dtype={'FIPS': str})
# pad FIPS code to 5 digits
county_od['FIPS'] = county_od['FIPS'].str.pad(5, side='left', fillchar='0')
county_od.head()
###Output
_____no_output_____
###Markdown
There are a few limitations of the dataset: first, the death rate is not raw data and is the result of some modeling already. **This suggests that we may need to propagate errors if we decide to include this data in our models**. Second, the death count is based on drug overdoses across all categories of drugs, so it does not provide heroin- or opioid-specific data. However, given that opioids are responsible for a majority of fatal drug overdoses, taking a look at this dataset should still provide some insight into how opioid-specific overdoses are changing over time.
###Code
# check for missing values
display('Number of missing values in each column:',
county_od.isnull().sum())
# explore range of values
display('Earliest year:',
county_od['Year'].min(), 'Latest year:', county_od['Year'].max())
display('States included:',
county_od['State'].unique(), 'Number of states:', len(county_od['State'].unique()))
display('Urban/Rural Categories:',
county_od['Urban/Rural Category'].unique())
# are observations unique by FIPS and year?
display('Number of duplicated observations by FIPS code and year:',
county_od[['FIPS', 'Year']].duplicated().sum())
# are all years available for each county?
display('Number of counties without 16 years of data:',
(county_od.groupby('FIPS')['Year'].count() != 16).sum())
###Output
_____no_output_____
###Markdown
The benefit of the death rates already having gone through some processing is that the dataset is very complete. In the following, we explore how death rates have changed by 'Urban/Rural Category'.
###Code
year = county_od.groupby('Year', as_index=False)['Model-based Death Rate'].mean()
year['Urban/Rural Category'] = 'Overall'
year_urban = county_od.groupby(['Year', 'Urban/Rural Category'], as_index=False)
year_urban = year_urban['Model-based Death Rate'].mean()
year_urban = pd.concat([year, year_urban], ignore_index=True)
dash_spec = {type: (2,2) for type in county_od['Urban/Rural Category'].unique()}
dash_spec['Overall'] = ''
sns.relplot(x='Year', y='Model-based Death Rate',
hue='Urban/Rural Category', style='Urban/Rural Category',
dashes=dash_spec,
kind='line',
height=7, data=year_urban)
plt.title('Average County-level Death Rate (per 100,000) by Urban/Rural Category',
pad = 20);
###Output
_____no_output_____
###Markdown
Interestingly, up until 2016, the growth of the average county-level death rate seems to be fairly comparable across urban/rural classifications. We also explore the growth of deaths by year and state.
###Code
# calculate average death rate (per 100,000) by year and state
year_state = county_od.groupby(['Year', 'State'], as_index=False)
year_state = year_state['Model-based Death Rate'].mean()
yr_st_plot = sns.lmplot(x='Year', y='Model-based Death Rate',
col='State', col_wrap=5,
data=year_state)
def annotate_lm(data, **kwargs):
mod = sp.stats.linregress(data['Year'], data['Model-based Death Rate'])
slope = mod.slope
intercept = mod.intercept
stderr = mod.stderr
plt.annotate(f'Slope={slope:.2f},\nIntercept={intercept:.2f},\nStderr={stderr:.2f}',
(2004,35))
yr_st_plot.map_dataframe(annotate_lm);
###Output
_____no_output_____
###Markdown
As we may have expected, states like West Virginia and Pennsylvania stick out as having large, more erratic growth in death rates when compared to other states. Other states like Oregon and South Dakota have steadier, linear-looking growth. The growth in many states looks surprisingly linear. As our final work with this dataset on its own, we visualize the death rates on a map. This sets us up nicely for visualizing all other county-level data in the future.
###Code
# import county geometries
counties_url = 'https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json'
with urlopen(counties_url) as response:
counties = json.load(response)
# plot death rates by county on a map
fig = px.choropleth(county_od, geojson=counties, locations='FIPS',
color='Model-based Death Rate',
color_continuous_scale='reds',
range_color=[0, 40],
animation_frame='Year',
animation_group='FIPS',
hover_name='County',
hover_data=['Urban/Rural Category'],
scope='usa')
fig.update_traces(marker_line_width=0, marker_opacity=0.8)
fig.update_geos(resolution=110, showsubunits=True, subunitcolor='black')
fig.show()
###Output
_____no_output_____
###Markdown
As we can see, the crisis does seem to spread spatially, almost like a viral epidemic. Opioid Dispensing Rate DataNow let's take a look at the prescriptions data. The data are scraped [from the CDC Dispensing Rate Maps pages](https://www.cdc.gov/drugoverdose/rxrate-maps/index.html). The CDC sources these data from IQVIA, a healthcare data science company. The data product, Xponent, is a sample approximately 50,400 non-hospital retail pharmacies, which dispense nearly 92% of all retail prescriptions in the US. A prescription in this data set is defined as a days' supply for 1 to 365 days with a known strength. The rate is calculated as the projected total number of opioid prescriptions dispensed annually at the county level over resident population obtained from the U.S. Census bureau.There is a known change in methodology circa 2017. IQVIA changed the definition of projected prescription services from "number of presciptions dispensed to bin" to "sold to the patient," eliminating the effects of voided and reversed prescriptions and resulting in a 1.9% downward shift in measured opioid prescriptions dispensed.The rate is given as the number of retail opioid prescriptions every year per 100 people.
###Code
prescription_path = Path('data/Prescription_Data.pkl')
prescriptions = pd.read_pickle(prescription_path)
prescriptions.head()
###Output
_____no_output_____
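###Markdown
As a small worked example of the definition above (hypothetical numbers, not taken from the dataset): a county where 12,500 opioid prescriptions are dispensed in a year to a resident population of 48,000 would have a dispensing rate of about 26 per 100 people.
###Code
# Hypothetical illustration of the rate definition; these numbers are made up.
prescriptions_dispensed = 12_500
resident_population = 48_000
rate_per_100 = prescriptions_dispensed / resident_population * 100
print(f"Opioid dispensing rate per 100 residents: {rate_per_100:.1f}")
###Output
_____no_output_____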
###Markdown
Completeness of the Data Let's take a look at how many missing values we have. This is all at the county level. We can see that reporting used to be much less reliable prior to 2017, but now we don't see much missing data.
###Code
display('Number of Counties Missing Data',
(prescriptions
.groupby('Year')['Opioid Dispensing Rate per 100']
.aggregate(lambda x: x.isnull().sum())
)
)
###Output
_____no_output_____
###Markdown
Trends in Dispensing Rate Looking at the distribution by year at the county level, we see a general downward trend starting around 2012, but the yearly distributions are right-skewed, with many outlier counties having high dispensing rates.
###Code
boxplot = prescriptions.boxplot(by='Year',
column='Opioid Dispensing Rate per 100',
figsize = (20,10),
grid=False)
###Output
_____no_output_____
###Markdown
We can also look at the mean opioid dispensing rate on the state level over the years. We see that most states follow the same trend as we saw in the boxplot, with a rise up until the early 2010s followed by a more recent and sharp drop in prescriptions.
###Code
year_state = prescriptions.groupby(['Year', 'State'])['Opioid Dispensing Rate per 100'].mean()
year_state = year_state.reset_index()
yr_st_plot = sns.lmplot(x='Year', y='Opioid Dispensing Rate per 100',
col='State', col_wrap=4,
data=year_state)
def annotate_lm(data, **kwargs):
mod = sp.stats.linregress(data['Year'], data['Opioid Dispensing Rate per 100'])
slope = mod.slope
intercept = mod.intercept
stderr = mod.stderr
plt.annotate('Slope={:.2f},\nIntercept={:.2f},\nStderr={:.2f}'.format(slope, intercept, stderr),
(2007, 175))
yr_st_plot.map_dataframe(annotate_lm)
###Output
_____no_output_____
###Markdown
We can examine this phenomenon at the county level more visually with the animated map below:
###Code
fig = px.choropleth(prescriptions, geojson=counties, locations='County FIPS Code',
color='Opioid Dispensing Rate per 100',
color_continuous_scale='viridis_r',
range_color=[25, 200],
animation_frame='Year',
animation_group='County FIPS Code',
hover_name='County',
scope='usa')
fig.update_traces(marker_line_width=0, marker_opacity=0.8)
fig.update_geos(resolution=110, showsubunits=True, subunitcolor='black')
fig.show()
###Output
_____no_output_____
###Markdown
The opioid dispensing rate is going down sharply across pretty much all counties, yet deaths from opioid use have increased. This is an interesting relationship that warrants more investigation. We now join the opioid prescription rate data to our drug overdose data from before.
###Code
pres_temp = prescriptions.rename({'County FIPS Code':'FIPS'}, axis=1)
pres_temp = pres_temp[['Year', 'FIPS', 'Opioid Dispensing Rate per 100']]
od_pres = county_od.merge(pres_temp, how='inner', on=['Year', 'FIPS'])
od_pres.head()
###Output
_____no_output_____
###Markdown
Now we can explore the relationship between opioid prescription rates and drug overdose rates:
###Code
# convert opioid dispensing rate to be per 100,000
od_pres['Opioid Dispensing Rate per 100k'] = od_pres['Opioid Dispensing Rate per 100'] * 1000
# plot average overdose rate and average dispensing rate
# by state and year
year_state = od_pres.groupby(['Year', 'State'], as_index=False)
year_state = year_state[['Opioid Dispensing Rate per 100k', 'Model-based Death Rate']].mean()
# function to plot faceted data on two axes
def plt_two_axes(x, y1, y2, data, **kwargs):
ax1 = plt.gca()
ax2 = ax1.twinx()
ax1.plot(data[x], data[y1], color='coral', label=y1)
ax1.set_ylabel(y1, color='coral')
ax1.tick_params(axis='y', colors='coral')
ax2.plot(data[x], data[y2], color='dodgerblue', label=y2)
ax2.set_ylabel(y2, color='dodgerblue')
ax2.tick_params(axis='y', colors='dodgerblue')
sns.set_style('white')
dual_plot = sns.FacetGrid(data=year_state, col='State', col_wrap=2, aspect=2, sharex=True, sharey=False)
dual_plot.map_dataframe(plt_two_axes, x='Year', y1='Model-based Death Rate',
y2='Opioid Dispensing Rate per 100k')
for ax in dual_plot.axes.flatten():
ax.tick_params(labelbottom=True)
ax.set_xlabel('Year')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
As we may have expected from our previous work, there is not a simple relationship between opioid dispensing rates and drug overdose rates. For some states like Oregon, we see sharp increases in drug overdose rates even as dispensing rates are sharply decreasing. For many states, it looks like there was a lag between when opioid prescription rates peaked and when drug overdose rates started sharply increasing. US Mortality Micro-DataThe National Center for Health Statistics (NCHS) provide mortality data at the individual level derived from death certificates filed in vital statistics offices of each State and the District of Columbia. This data set contains a wealth of demographic data for each decedent, which include, but are not limited to factors leading to death, age, marital status, race, and education level. In 2020, the decedents' industries of work is also included in the data. The causes of death are coded according to the International Classification of Diseases (ICD).For privacy reasons, the publically available data does not include geographical identifiers. **For this reason, we are concerned about how to incorporate this data-set with our spatial models.**As we have only recently finished the minimum required processing to read the data, only basic explorations of a subset of the data is included as the actual data set is very large. Subsetting the DataWe restrict our view to the scope of the previously explored data sets.Drug overdose deaths were identified in the National Vital Statistics System multiple cause-of-death mortality files* by using International Classification of Diseases, Tenth Revision (ICD-10) underlying cause-of-death codes:* X40–44 (unintentional)* X60–64 (suicide)* X85 (homicide)* Y10–14 (undetermined intent) Drug categories were defined using the following ICD-10 multiple cause-of-death codes: * T40.1 poisoning by and adverse effect of heroin * T40.2 poisoning by, adverse effect of and underdosing of other opioids* T40.3 poisoning by, adverse effect of and underdosing of methadone* T40.4 synthetic opioids other than methadone* T40.5 cocaine* T43.6 psychostimulants with abuse potentialCategories are not mutually exclusive.
###Code
drug_related_deaths = pd.read_pickle(Path('data/drug_related_deaths.pkl'))
###Output
_____no_output_____
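###Markdown
For reference, the filtering described above amounts to keeping records whose underlying cause is one of the listed external-cause codes and whose multiple-cause list contains one of the T-codes. A minimal sketch of that logic follows; the argument names and the dot-free code spellings are illustrative assumptions, not the actual layout of the mortality files:
###Code
# Hypothetical illustration only: how the underlying cause and the list of multiple-cause
# codes are stored depends on our preprocessing; codes are written here without the decimal point.
overdose_underlying = {f'X{n}' for n in range(40, 45)} | {f'X{n}' for n in range(60, 65)} | {'X85'} | {f'Y{n}' for n in range(10, 15)}
drug_t_codes = {'T401', 'T402', 'T403', 'T404', 'T405', 'T436'}  # heroin, other opioids, methadone, other synthetics, cocaine, psychostimulants

def is_drug_related_death(underlying_cause, multiple_causes):
    return underlying_cause in overdose_underlying and any(code in drug_t_codes for code in multiple_causes)

print(is_drug_related_death('X44', ['T401', 'T509']))  # True
print(is_drug_related_death('I219', ['T401']))         # False: underlying cause is not an overdose code
###Output
_____no_output_____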
###Markdown
Visualizing Trends in Subpopulations We provide some basic time series of the absolute number of deaths per month within certain subpopulations. There is a comparison problem between the subgroups since we have not normalized the data with national-level demographics, but we plan on resolving this in the near future (a sketch of such a normalization follows the plots below). The overall trend of growth across all subpopulations shown is certainly concerning, however. The plots are interactive, so you can zoom in and out to look at features you want to investigate. Total Monthly Drug-Related Deaths
###Code
fig = px.line(
data_frame=(
drug_related_deaths
.groupby('time')
.size()
.reset_index()
.rename(columns={0: 'number_of_deaths'})
),
x='time',
y='number_of_deaths',
range_y=[0,7500]
)
fig.show()
###Output
_____no_output_____
###Markdown
Monthly Drug-Related Deaths by Age Group
###Code
fig = px.line(
data_frame=(
drug_related_deaths
.groupby(['time', 'age'], as_index=False)
.size()
.rename(columns={'size':'Number of Deaths'})
),
x='time',
y='Number of Deaths',
color='age',
range_y=[0,2000]
)
fig.show()
###Output
_____no_output_____
###Markdown
Monthly Drug-Related Deaths by Education
###Code
fig = px.line(
data_frame=(
drug_related_deaths
.groupby(['time', 'education'], as_index=False)
.size()
.rename(columns={'size':'Number of Deaths'})
),
x='time',
y='Number of Deaths',
color='education',
range_y=[0,4000]
)
fig.show()
###Output
_____no_output_____
###Markdown
Monthly Drug-Related Deaths by Race In this particular coding of race (there are several in the data set), Hispanics are classified as White. We are working on disaggregating this information.
###Code
fig = px.line(
data_frame=(
drug_related_deaths
.groupby(['time', 'race'], as_index=False)
.size()
.rename(columns={'size':'Number of Deaths'})
),
x='time',
y='Number of Deaths',
color='race',
range_y=[0,6000]
)
fig.show()
###Output
_____no_output_____
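###Markdown
As noted above, these are absolute counts, so the subgroups are not directly comparable. The normalization we have in mind would divide each group's count by the corresponding national population; a minimal sketch, assuming a hypothetical `population_by_age` lookup (not part of this dataset):
###Code
# Hypothetical illustration: population_by_age would come from Census estimates, keyed the same
# way as the 'age' groups in drug_related_deaths; the numbers below are made up.
population_by_age = {'15-24': 42_000_000, '25-34': 45_000_000, '35-44': 41_000_000}
deaths_by_age = (
    drug_related_deaths
    .groupby(['time', 'age'], as_index=False)
    .size()
    .rename(columns={'size': 'Number of Deaths'})
)
deaths_by_age['Deaths per 100k'] = [
    n / population_by_age[a] * 100_000 if a in population_by_age else float('nan')
    for n, a in zip(deaths_by_age['Number of Deaths'], deaths_by_age['age'])
]
deaths_by_age.head()
###Output
_____no_output_____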
###Markdown
There is no missing data.
###Code
training_set.plot(x = "kills", y = "damageDealt", kind="scatter", figsize = (15,10))
###Output
_____no_output_____
###Markdown
Clearly, there is a positive correlation between the number of kills and damage dealt.
###Code
import seaborn as sns
headshots = training_set[training_set['headshotKills'] > 0]
plt.figure(figsize = (15, 5))
sns.countplot(headshots['headshotKills'])
dbno = training_set[training_set['DBNOs'] > 0]
plt.figure(figsize = (15, 5))
sns.countplot(dbno['DBNOs'])
training_set.plot(x = 'kills', y = 'DBNOs', kind = 'scatter', figsize = (15, 10))
###Output
_____no_output_____
###Markdown
There is a positive correlation between no. of enemies down but not out (DBNO) and the number of kills.
###Code
walk0 = training_set["walkDistance"] == 0
ride0 = training_set["rideDistance"] == 0
swim0 = training_set["swimDistance"] == 0
print("{} players didn't walk at all, {} players didn't drive and {} didn't swim.".format(walk0.sum(), ride0.sum(), swim0.sum()))
walk0_data = training_set[walk0]
print("Average place for non walkers is {:.3f}, minimum is {}, and best is {}, 95% players have a score below {}."
.format(walk0_data['winPlacePerc'].mean(), walk0_data['winPlacePerc'].min(), walk0_data['winPlacePerc'].max(), walk0_data['winPlacePerc'].quantile(0.95)))
walk0_data.hist('winPlacePerc',bins = 50, figsize = (15, 5))
###Output
Average place for non walkers is 0.039, minimum is 0.0, and best is 1.0, 95% players have a score below 0.2308.
###Markdown
Most non-walkers tend to be at the lower end of the scoreboard, but some of them have the best scores. These could be suspicious players. Following are the players that did not walk at all but have the best score.
###Code
suspicious = training_set.query('walkDistance == 0 & winPlacePerc == 1')
suspicious.head()
print("Maximum ride distance for suspected entries is {:.3f} meters, and swim distance is {:.1f} meters." .format(suspicious["rideDistance"].max(), suspicious["swimDistance"].max()))
###Output
Maximum ride distance for suspected entries is 0.000 meters, and swim distance is 28.7 meters.
###Markdown
Non-walker winners are non-rider winners as well, because their ride distance is 0.
###Code
plt.plot(suspicious['swimDistance'])
suspicious_non_swimmer = suspicious[suspicious['swimDistance'] == 0]
suspicious_non_swimmer.shape
###Output
_____no_output_____
###Markdown
So there are 162 non-swimmers, non-walkers and non-riders who won. They clearly cheated.
###Code
ride = training_set.query('rideDistance >0 & rideDistance <10000')
walk = training_set.query('walkDistance >0 & walkDistance <4000')
ride.hist('rideDistance', bins=40, figsize = (15,10))
walk.hist('walkDistance', bins=40, figsize = (15,10))
###Output
_____no_output_____
###Markdown
EDA
###Code
import pandas as pd
import numpy as np
# import pymssql
# from fuzzywuzzy import fuzz
import json
import tweepy
from collections import defaultdict
from datetime import datetime
import re
# import pyodbc
from wordcloud import WordCloud
import seaborn as sns
import matplotlib.pyplot as plt
from wordcloud import WordCloud
import string, nltk, re, json, tweepy, gensim, scipy.sparse, pickle, pyLDAvis, pyLDAvis.gensim
from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from gensim import matutils, models, corpora
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('./meme_cleaning.csv')
df_sentiment = pd.read_csv('563_df_sentiments.csv')
df_sentiment = df_sentiment.drop(columns=['Unnamed: 0', 'Unnamed: 0.1', 'Unnamed: 0.1.1'])
df_sentiment.head()
#Extract all words that begin with # and turn the results into a dataframe
temp = df_sentiment['Tweet'].str.lower().str.extractall(r"(#\w+)")
temp.columns = ['unnamed']
# Convert the multiple hashtag values into a list
temp = temp.groupby(level = 0)['unnamed'].apply(list)
# Save the result as a feature in the original dataset
df_sentiment['hashtags'] = temp
for i in range(len(df_sentiment)):
if df_sentiment.loc[i, 'No_of_Retweets'] >= 4:
df_sentiment.loc[i, 'No_of_Retweets'] = 4
for i in range(len(df_sentiment)):
if df_sentiment.loc[i, 'No_of_Likes'] >= 10:
df_sentiment.loc[i, 'No_of_Likes'] = 10
retweet_df = df_sentiment.groupby(['No_of_Retweets', 'vaderSentiment']).vaderSentimentScores.agg(count='count').reset_index()
like_df = df_sentiment.groupby(['No_of_Likes', 'vaderSentiment']).vaderSentimentScores.agg(count='count').reset_index()
classify_df = df_sentiment.vaderSentiment.value_counts().reset_index()
df_sentiment.Labels = df_sentiment.Labels.fillna('')
df_likes_dict = df_sentiment.groupby('No_of_Likes').vaderSentimentScores.agg(count='count').to_dict()['count']
df_retweet_dict = df_sentiment.groupby('No_of_Retweets').vaderSentimentScores.agg(count='count').to_dict()['count']
for i in range(len(like_df)):
like_df.loc[i, 'Normalized_count'] = like_df.loc[i, 'count'] / df_likes_dict[like_df.loc[i, 'No_of_Likes']]
for i in range(len(retweet_df)):
retweet_df.loc[i, 'Normalized_count'] = retweet_df.loc[i, 'count'] / df_retweet_dict[retweet_df.loc[i, 'No_of_Retweets']]
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
exploratory data analysis
###Code
import pydicom
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import pylab
from pylab import *
import numpy as np
from skimage import measure
from skimage.transform import resize
from config import *
###Output
_____no_output_____
###Markdown
Target distribution
###Code
df_targets = pd.read_csv(TARGET_LABELS_DATA_PATH)
df_targets.head()
df_targets.Target.hist()
print('Number of rows in auxiliary dataset:', df_targets.shape[0])
print('Number of unique patient IDs:', df_targets['patientId'].nunique())
plt.show()
df_classes = pd.read_csv(DETAILED_CLASS_INFO_DATA_PATH)
df_classes.head()
df_classes['class'].hist()
plt.show()
assert df_targets['patientId'].values.tolist() == df_classes['patientId'].values.tolist(), 'PatientId columns are different.'
df_train = pd.concat([df_targets, df_classes.drop(labels=['patientId'], axis=1)], axis=1)
df_train.head(6)
pId = df_targets['patientId'].sample(1).values[0]
dcmdata = pydicom.read_file(TRAIN_IMAGES_PATH + pId + '.dcm')
print(dcmdata)
def get_boxes_per_patient(df, pId):
'''
Given the dataset and one patient ID,
return an array of all the bounding boxes and their labels associated with that patient ID.
Example of return:
array([[x1, y1, width1, height1, class1, target1],
[x2, y2, width2, height2, class2, target2]])
'''
boxes = df.loc[df['patientId']==pId][['x', 'y', 'width', 'height', 'class', 'Target']].values
return boxes
def get_dcm_data_per_patient(pId, sample='train'):
'''
Given one patient ID and the sample name (train/test),
return the corresponding dicom data.
'''
return pydicom.read_file(DATA_FOLDER_PATH+'stage_1_'+sample+'_images/'+pId+'.dcm')
def display_image_per_patient(df, pId, angle=0.0, sample='train'):
'''
Given one patient ID and the dataset,
display the corresponding dicom image with overlaying boxes and class annotation.
To be implemented: Optionally input the image rotation angle, in case of data augmentation.
'''
dcmdata = get_dcm_data_per_patient(pId, sample=sample)
dcmimg = dcmdata.pixel_array
boxes = get_boxes_per_patient(df, pId)
plt.figure(figsize=(14,7))
plt.imshow(dcmimg, cmap=pylab.cm.binary)
plt.axis('off')
class_color_dict = {'Normal' : 'green',
'No Lung Opacity / Not Normal' : 'orange',
'Lung Opacity' : 'red'}
if len(boxes)>0:
for box in boxes:
# extracting individual coordinates and labels
x, y, w, h, c, t = box
# create a rectangle patch
patch = Rectangle((x,y), w, h, color='red',
angle=angle, fill=False, lw=4, joinstyle='round', alpha=0.6)
# get current axis and draw rectangle
plt.gca().add_patch(patch)
# add annotation text
plt.text(10, 50, c, color=class_color_dict[c], size=20,
bbox=dict(edgecolor=class_color_dict[c], facecolor='none', alpha=0.5, lw=2))
plt.show()
## get a sample from each class
samples = df_train.groupby('class').apply(lambda x: x.sample(1))['patientId']
for pId in samples.values:
display_image_per_patient(df_train, pId, sample='train')
###Output
_____no_output_____
###Markdown
Extract useful meta-data from dicom headers
###Code
def get_metadata_per_patient(pId, attribute, sample='train'):
'''
Given a patient ID, return useful meta-data from the corresponding dicom image header.
Return:
attribute value
'''
# get dicom image
dcmdata = get_dcm_data_per_patient(pId, sample=sample)
# extract attribute values
attribute_value = getattr(dcmdata, attribute)
return attribute_value
df_train = df_train.sample(2000)
# create list of attributes that we want to extract (manually edited after checking which attributes contained valuable information)
attributes = ['PatientSex', 'PatientAge', 'ViewPosition']
for a in attributes:
df_train[a] = df_train['patientId'].apply(lambda x: get_metadata_per_patient(x, a, sample='train'))
# convert patient age from string to numeric
df_train['PatientAge'] = df_train['PatientAge'].apply(pd.to_numeric, errors='coerce')
# remove a few outliers
df_train['PatientAge'] = df_train['PatientAge'].apply(lambda x: x if x<120 else np.nan)
df_train.head()
###Output
_____no_output_____
###Markdown
Gender Distribution
###Code
df_train.PatientSex.hist()
plt.show()
###Output
_____no_output_____
###Markdown
Age Distribution
###Code
df_train.PatientAge.hist()
plt.show()
# empty dictionary
pneumonia_locations = {}
for _, row in df_targets.iterrows():
# retrieve information
filename = row[0]
location = row[1:5]
pneumonia = row[5]
# if row contains pneumonia add label to dictionary
# which contains a list of pneumonia locations per filename
    if str(pneumonia) == '1':  # Target may be read as an integer by pandas, so compare on its string form
# convert string to float to int
location = [int(float(i)) for i in location]
# save pneumonia location in dictionary
if filename in pneumonia_locations:
pneumonia_locations[filename].append(location)
else:
pneumonia_locations[filename] = [location]
ns = [len(value) for value in pneumonia_locations.values()]
plt.figure()
plt.hist(ns)
plt.xlabel('Pneumonia per image')
plt.xticks(range(1, np.max(ns)+1))
plt.show()
heatmap = np.zeros((1024, 1024))
ws = []
hs = []
for values in pneumonia_locations.values():
for value in values:
x, y, w, h = value
heatmap[y:y+h, x:x+w] += 1
ws.append(w)
hs.append(h)
plt.figure()
plt.title('Pneumonia location heatmap')
plt.imshow(heatmap)
plt.figure()
plt.title('Pneumonia height lengths')
plt.hist(hs, bins=np.linspace(0,1000,50))
plt.show()
plt.figure()
plt.title('Pneumonia width lengths')
plt.hist(ws, bins=np.linspace(0,1000,50))
plt.show()
print('Minimum pneumonia height:', np.min(hs))
print('Minimum pneumonia width: ', np.min(ws))
###Output
_____no_output_____
###Markdown
EDA: Winemag Data 8/18/2018. Space to explore [WineMag data](https://www.kaggle.com/zynicide/wine-reviews/home) and develop hypotheses to test.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# load data
df = pd.read_csv('./data/winemag-data-130k-v2.csv', index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Initial Questions Off the bat, I have half a dozen questions:- What features affect `points`? - How much of an effect does price have? Province? Region? Varietal?- What are the regions with the greatest markups in Washington?- Controlling for other factors, what varietal has the most expensive wines?- What about the Winery? Most expensive? Highest rated? Most consistent?- Description features of the highest scoring wines? Common descriptive features for Countries, regions, etc? Description of Data First things first: we need to get to know our data. Here are our initial features:- `country`: Country of origin for wine.- `province`: Province or state.- `region_1`: Wine growing region.- `region_2`: Sub-region, if in one.- `winery`: Name of winery.- `designation`: Specific vineyard of wine.- `variety`: Variety of grape used.- `title`: Name of wine (includes year)- `points`: WineEnthusiast points.- `price`: Price (in dollars?)- `description`: Excerpt of tasting notes from a sommelier.- `taster_name`: Name of taster/reviewer.- `taster_twitter_handle`: Twitter handle of taster/reviewer.We have information on where a wine is from, what kind of wine it is, a subjective description and review, the reviewer, and a subjective score. It's important to note the subjectivity of `points` and `description`. We may consider them to be aspects of a review, and they are thus partially dependent on `taster_name` - the reviewer. Reviewers may have biases that are reflected in both the points they award a wine and the way in which they describe it. Because we're interested in modeling `points`, we may need to find a way to adjust for these biases in order to normalize `points`. Notably, there isn't a feature for `vintage` - the year in which the wine was made. This is an important feature. Many factors that aren't listed - weather, harvest date, winemaker, etc. - are dependent on year, and the same wine from different years may have a significant difference in `points`. We'll need to engineer it. It's also worth noting that the `description` contains a wealth of features (tasting adjectives such as 'blueberries' and 'pepper') that we may want to mine. Data Cleaning + Initial Feature Engineering Before progressing, we need to clean our data and validate its quality. Specifically:- Determine how to handle Null values- Look for bad data points (outliers, high leverage)- Determine if it's of a high enough quality to answer the questions we're interested in.
###Code
# check for null columns
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 129971 entries, 0 to 129970
Data columns (total 13 columns):
country 129908 non-null object
description 129971 non-null object
designation 92506 non-null object
points 129971 non-null int64
price 120975 non-null float64
province 129908 non-null object
region_1 108724 non-null object
region_2 50511 non-null object
taster_name 103727 non-null object
taster_twitter_handle 98758 non-null object
title 129971 non-null object
variety 129970 non-null object
winery 129971 non-null object
dtypes: float64(1), int64(1), object(11)
memory usage: 13.9+ MB
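###Markdown
Before moving on, here is the kind of reviewer-bias adjustment alluded to above: a minimal sketch that simply mean-centers `points` within each `taster_name`. This is purely illustrative; a real adjustment would need to handle missing taster names and small review counts more carefully.
###Code
# per-taster mean of points; rows with a missing taster name fall back to the global mean
taster_mean = df.groupby('taster_name')['points'].transform('mean')
points_centered = df['points'] - taster_mean.fillna(df['points'].mean())
pd.concat([df[['taster_name', 'points']], points_centered.rename('points_centered')], axis=1).head()
###Output
_____no_output_____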
###Markdown
Location Features for Wines Vary We should also note that the way a wine's location is described varies. For example, the most specific location feature is `designation` - the specific vineyard a wine is from. As many wines are made from grapes sourced from different vineyards, we expect this feature to have a lot of null values.
###Code
# wines with no designation
df[df.designation.isnull()].shape[0]
###Output
_____no_output_____
###Markdown
However, many wines are missing other location parameters:
###Code
# wines with no location parameters
df[df.designation.isnull() & df.region_1.isnull() & df.region_2.isnull() & df.province.isnull()].shape[0]
# wines with only designation
df[~df.designation.isnull() & df.region_1.isnull() & df.region_2.isnull() & df.province.isnull()].shape[0]
# wines with only region
df[df.designation.isnull() & ~df.region_1.isnull() & df.province.isnull()].shape[0]
# wines with only province
df[df.designation.isnull() & df.region_1.isnull() & df.region_2.isnull() & ~df.province.isnull()].shape[0]
###Output
_____no_output_____
###Markdown
What's going on? It turns out that how you describe the location of a wine varies by country. For example,
###Code
# wines with no region, by country
df[df.region_1.isnull() & df.region_2.isnull()][['country', 'province']].groupby('country')\
.count()\
.sort_values('province', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Get % of wines with location parameter, by country
###Code
# create location df
df_location = df[['title', 'country']].copy()
# add binary vars for location parameters
df_location['has_designation'] = ~df.designation.isnull()
df_location['has_province'] = ~df.province.isnull()
df_location['has_winery'] = ~df.winery.isnull()
df_location['has_region'] = ~(df.region_1.isnull() & df.region_2.isnull())
# get counts of wines, sums of binary vars by country
df_location_by_country = df_location.groupby('country').agg({
'title': 'count',
'has_designation': 'sum',
'has_province': 'sum',
'has_winery': 'sum',
'has_region': 'sum'
})
df_location_by_country.rename(columns={'title': 'wines'}, inplace=True)
# change counts to % values, rename columns
location_cols = ['province', 'region', 'winery', 'designation']
for col_name in location_cols:
col = f'has_{col_name}'
df_location_by_country[col] = df_location_by_country[col] / df_location_by_country['wines']
df_location_by_country[col] = df_location_by_country[col].apply(lambda x: round(x*100, 2))
df_location_by_country.rename(columns={col: f'%_{col}'}, inplace=True)
# show top 15 countries by wine count
df_location_by_country.sort_values('wines', ascending=False).head(15)
###Output
_____no_output_____
###Markdown
So, we can see that location designations can vary by country. We'll handle these discrepancies by introducing a new value, `NA`.

Add `NA` to Location Columns
###Code
location_cols = ['province', 'region_1', 'region_2', 'winery', 'designation']
# convert np.NaN to string `NA`
convert_null = lambda val: 'NA' if (type(val) != str and np.isnan(val)) else val
for col in location_cols:
df[col] = df[col].apply(convert_null)
df.head()
###Output
_____no_output_____
###Markdown
Add Vintage Feature
###Code
import string
# extract vintage feature from title
def extract_vintage(title):
for tk in title.split():
clean_tk = tk.strip(string.punctuation)
if clean_tk.isdigit():
return clean_tk
df['vintage'] = df.title.apply(extract_vintage)
df[df.vintage.isnull()]
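# A more defensive variant (sketch, not used above): restrict matches to 4-digit tokens that
# start with 19 or 20, so stray numbers in a title are never mistaken for a vintage.
# The function name and the 'vintage_strict' column are my own, not part of the dataset.
import re

def extract_vintage_strict(title):
    years = [int(y) for y in re.findall(r'\b(19\d{2}|20\d{2})\b', title)]
    return max(years) if years else None

# df['vintage_strict'] = df.title.apply(extract_vintage_strict)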
###Output
_____no_output_____
###Markdown
It seems that nearly all wines missing the `vintage` feature are sparkling wines (champagne, prosecco, etc.). We should feel okay restricting our consideration to traditional non-sparkling wine and dropping these observations.

EDA
###Code
from scipy.stats import norm
# fix odd name character
df['taster_name'] = df.taster_name.apply(lambda name: name if name != 'Anne Krebiehl\xa0MW' else 'Anne Krebiehl')
# get tasters
tasters = [name for name in df.taster_name.unique() if type(name) == str]
fig, ax = plt.subplots(
nrows=len(tasters),
ncols=1,
figsize=(6, 4*len(tasters)),
sharex=True
)
# get range for x-axis
x_min, x_max = df.points.min(), df.points.max()
x = np.linspace(x_min, x_max, 100)
for name, row in zip(tasters, ax):
# plot histogram
points = df[df.taster_name == name].points
row.hist(points, density=True, bins=30)
# plot normal distribution
mu, std = norm.fit(points)
row.plot(x, norm.pdf(x, mu, std))
def test_plot_norm(points):
# get x axis
x_min, x_max = df.points.min(), df.points.max()
x = np.linspace(x_min, x_max, 100)
mu, std = norm.fit(points)
plt.hist(points, bins=20, density=True)
plt.plot(x, norm.pdf(x, mu, std))
test_plot_norm(df[df.taster_name=='Roger Voss'].points)
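# Sketch of one way to adjust for the reviewer bias noted in the intro: express points as
# z-scores within each taster. The column names 'pts_mean', 'pts_std' and 'points_z' are my own.
taster_stats = df.groupby('taster_name')['points'].agg(['mean', 'std']).rename(columns={'mean': 'pts_mean', 'std': 'pts_std'})
df_z = df.join(taster_stats, on='taster_name')
df_z['points_z'] = (df_z['points'] - df_z['pts_mean']) / df_z['pts_std']
df_z[['taster_name', 'points', 'points_z']].head()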
# heatmap of world/wine regions by avg price?
df[df.country == 'US'].groupby('province').agg({'price': 'mean', 'title': 'count'}).sort_values('price', ascending=False)
df[df.province=='Oregon'].groupby(['region_2', 'region_1']).agg({'price': 'mean', 'title': 'count'}).sort_values('price', ascending=False)
df_var = df[(df.country=='US') & (df.province=='California')]\
.groupby(['variety'])\
.agg({'title': 'count',
'price': 'mean',
'points': 'mean'})\
.sort_values('price', ascending=False)
df_var[df_var.title >= 10]
df_var = df[(df.country=='France') & (df.province=='Bordeaux')]\
.groupby(['variety', 'vintage', 'region_1'])\
.agg({'title': 'count',
'price': 'mean',
'points': 'mean'})\
.sort_values('price', ascending=False)
df_var[df_var.title >= 10]\
.reset_index()\
.sort_values(['variety', 'region_1', 'points', 'vintage'], ascending=False)
df_var = df[(df.country=='US') & (df.province=='Washington')]\
.groupby(['variety', 'vintage', 'region_1'])\
.agg({'title': 'count',
'price': 'mean',
'points': 'mean'})\
.sort_values('price', ascending=False)
df_var[df_var.title >= 10]\
.reset_index()\
.sort_values(['variety', 'region_1', 'points', 'vintage'], ascending=False)
df[(df.country=='US') & (df.province=='Washington')].description.head()
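# Rough sketch of the description mining idea from the intro: count how often a few tasting
# keywords appear in Washington descriptions. The keyword list is an arbitrary assumption.
keywords = ['cherry', 'blueberry', 'pepper', 'oak', 'citrus', 'tannin']
wa_desc = df[(df.country == 'US') & (df.province == 'Washington')].description.str.lower()
for kw in keywords:
    print(kw, wa_desc.str.contains(kw).sum())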
###Output
_____no_output_____
###Markdown
EDA of Dub Dynasty League Setup
###Code
# imports
import numpy as np
import pandas as pd
import pickle
import plotly.express as px
pd.set_option('display.max_rows', 1000)
pd.set_option('display.max_columns', 1000)
# load league data
p = open('league.pkl', 'rb')
league = pickle.load(p)
###Output
_____no_output_____
###Markdown
Preview My Team
###Code
# bf = league.teams[2]
# bf.roster
###Output
_____no_output_____
###Markdown
Matchup
###Code
matchup = league.box_scores()[2]
matchup.home_score
###Output
_____no_output_____
###Markdown
Free Agents
###Code
fa = pd.DataFrame()
# each player object exposes the same six fields for four stat windows, so build the row in a loop
stat_periods = ['total_2022', 'last_30_2022', 'last_15_2022', 'last_7_2022']
for player in league.free_agents(size=1000):
    row = []
    row.append(player.name)
    row.append(player.proTeam)
    row.append(player.position)
    row.append(player.injuryStatus)
    row.append(player.injured)
    for period in stat_periods:
        try:
            totals = player.stats[period].get('total')
            row.append(totals['GP'])
            row.append(totals['GS'])
            row.append(totals['MIN'])
            row.append(totals['MPG'])
            row.append(player.stats[period].get('applied_total'))
            row.append(player.stats[period].get('applied_avg'))
        except Exception:
            # stats for this window are missing; pad with NaNs so the column count stays constant
            row.extend([np.nan] * 6)
    fa = fa.append([row])
fa = fa.reset_index(drop=True)
fa.columns = [
'name'
,'team'
,'position'
,'injury_status'
,'injured'
# total
,'total_gp'
,'total_gs'
,'total_min'
,'total_mpg'
,'total_fpts'
,'total_favg'
# last 30
,'l30_gp'
,'l30_gs'
,'l30_min'
,'l30_mpg'
,'l30_fpts'
,'l30_favg'
# last 15
,'l15_gp'
,'l15_gs'
,'l15_min'
,'l15_mpg'
,'l15_fpts'
,'l15_favg'
# last 7
,'l7_gp'
,'l7_gs'
,'l7_min'
,'l7_mpg'
,'l7_fpts'
,'l7_favg'
]
# replace NaN with 0
fa = fa.fillna(0.0)
# convert float cols to int
fa.loc[:, fa.dtypes=='float'] = fa.select_dtypes(float).astype('int')
# filter out players with no injury status as they appear to be inactive
fa = fa.loc[fa.injury_status.str.len() != 0]
# add max columns? skipped - the ratio columns below already express each stat relative to its max
# add ratio columns: each stat scaled by its maximum across all free agents
for stat in ['gp', 'gs', 'min', 'mpg', 'fpts', 'favg']:
    for period in ['total', 'l30', 'l15', 'l7']:
        col = f'{period}_{stat}'
        fa[f'{col}_ratio'] = round(fa[col] / fa[col].max(), 2)
fa.head()
# find players who:
# have been played in a game a lot recently and a reasonable amount all season
# have scored above average all season and recently
fa.loc[(fa.l30_gp_ratio >= .8) & (fa.total_gp_ratio >= 0.6) & (fa.total_fpts_ratio >= 0.5) & (fa.total_favg_ratio >= 0.5)]
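# Sketch: collapse the ratio columns into a single pickup score so free agents can be ranked
# rather than just filtered. The weights are arbitrary assumptions, not league settings.
fa_ranked = fa.copy()
fa_ranked['pickup_score'] = (
    0.4 * fa_ranked['l15_favg_ratio']
    + 0.3 * fa_ranked['l30_favg_ratio']
    + 0.2 * fa_ranked['total_favg_ratio']
    + 0.1 * fa_ranked['l15_mpg_ratio']
)
fa_ranked.sort_values('pickup_score', ascending=False)[['name', 'position', 'pickup_score']].head(10)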
# fa.iloc[0]['Stats']
###Output
_____no_output_____
###Markdown
Player Actual Performance vs. Projected Performance
###Code
rosters = pd.DataFrame()
for team in league.teams:
for player in team.roster:
try:
rosters = rosters.append([[
team.team_name
,player.name
,player.total_points
,player.projected_total_points
,player.avg_points
,player.projected_avg_points
,player.stats['total_2022']['total']['GP']
]])
except:
pass
rosters.columns = [
'Team'
,'Player'
,'Player Total Points'
,'Player Proj Total Points'
,'Player Avg Points'
,'Player Proj Avg Points'
,'GP'
]
# filter out players with no projections (rookies, other random cases)
rosters = rosters.loc[rosters['Player Proj Total Points']>0.00]
# filter out players with fewer than 10 games played
rosters = rosters.loc[rosters['GP']>=10.0]
# calculate Actual versus Projected
rosters['Player Avg Points Diff'] = rosters['Player Avg Points'] - rosters['Player Proj Avg Points']
rosters.loc[rosters.Team == 'Orange Julius']
pts_diff = rosters.groupby('Team')['Player Avg Points Diff'].sum().reset_index().sort_values('Player Avg Points Diff')
pts_diff = pts_diff.rename(columns={'Player Avg Points Diff':'Fantasy Points'})
pts_diff['Fantasy Points'] = pts_diff['Fantasy Points'].astype(int)
fig = px.bar(pts_diff, x='Team', y='Fantasy Points', title='Actual vs. Projected Average Fantasy Points', text_auto=True)
fig.show()
###Output
_____no_output_____
###Markdown
Matchups
###Code
for matchup in league.matchup:
print(matchup)
league.box_scores(matchup_period=2)
###Output
_____no_output_____
###Markdown
Missing values
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20640 entries, 0 to 20639
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 longitude 20640 non-null float64
1 latitude 20640 non-null float64
2 housing_median_age 20640 non-null float64
3 total_rooms 20640 non-null float64
4 total_bedrooms 20433 non-null float64
5 population 20640 non-null float64
6 households 20640 non-null float64
7 median_income 20640 non-null float64
8 median_house_value 20640 non-null float64
9 ocean_proximity 20640 non-null object
dtypes: float64(9), object(1)
memory usage: 1.6+ MB
###Markdown
The feature 'total_bedrooms' has fewer non-null entries than the rest, which suggests it contains missing values.
###Code
df['total_bedrooms'].isnull().value_counts()
# alternative method of missing data visualization
sns.heatmap(df.isnull(),cmap='viridis',cbar=False,yticklabels=False)
plt.title('missing data')
plt.show()
df[df['total_bedrooms'].isnull()]
###Output
_____no_output_____
###Markdown
We can see that indeed total_bedrooms has 207 null values in the dataframe. Perhaps we can fill in the missing values with linear regression if another predictor variable correlates strongly with the missing data variable (i.e. total_bedrooms). We can create a heatmap featuring the correlation scores between predictors as below.
###Code
# Bivariate Analysis Correlation plot for numerical features
ft_df = df.iloc[:,0:9] # grab only the numeric feature columns
plt.figure(figsize=(12,10))
sns.heatmap(ft_df.corr(), annot=True, cmap='coolwarm')
###Output
_____no_output_____
###Markdown
We see that households correlates most strongly with total_bedrooms, with a correlation of about 0.98.

Data imputation

I decided to impute the null values of total_bedrooms with the values predicted by regressing total_bedrooms on households, as the two show a clear, strong association in the heatmap above.
###Code
null_idx = df[df['total_bedrooms'].isnull()].index
copy_df = df.copy().drop(null_idx)
x = copy_df['households'].values
y = copy_df['total_bedrooms'].values
x_test = df[df['total_bedrooms'].isnull()]['households'].values
x = x.reshape(-1,1)
y = y.reshape(-1,1)
# Apply linear regression to fit the values to the curve.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import numpy as np
model = LinearRegression()
model.fit(x,y)
y_pred = model.predict(x)
y_imput = model.predict(x_test.reshape(-1,1))
impute_df = pd.DataFrame(y_imput, columns=['values'])
impute_df = impute_df.astype(float)
print(f'MSE score: {int(mean_squared_error(y,y_pred))}')
print(f'R2 score: {r2_score(y,y_pred)}')
# Plot outputs
plt.scatter(x, y, color='black')
plt.plot(x, y_pred, color='blue', linewidth=3)
plt.ylabel('Total bedrooms')
plt.xlabel('# of households')
plt.show()
y_imput[0:10] # Show first 10 imputed values predicted by linear regression
df['total_bedrooms'] = df['total_bedrooms'].fillna(dict(zip(null_idx, y_imput.flatten())))
df['total_bedrooms'] = df['total_bedrooms'].astype(int)
df.reindex(null_idx)[0:10]
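# Alternative shown for comparison only (total_bedrooms is already filled at this point):
# scikit-learn's IterativeImputer can impute from all numeric columns at once.
# This is not the approach used above and assumes scikit-learn >= 0.21.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
num_cols = df.select_dtypes(include='number').columns
df_iter = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df[num_cols]),
                       columns=num_cols, index=df.index)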
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20640 entries, 0 to 20639
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 longitude 20640 non-null float64
1 latitude 20640 non-null float64
2 housing_median_age 20640 non-null float64
3 total_rooms 20640 non-null float64
4 total_bedrooms 20640 non-null int32
5 population 20640 non-null float64
6 households 20640 non-null float64
7 median_income 20640 non-null float64
8 median_house_value 20640 non-null float64
9 ocean_proximity 20640 non-null object
dtypes: float64(8), int32(1), object(1)
memory usage: 1.5+ MB
###Markdown
EDA
###Code
#creating plots on dataset
%matplotlib inline
import matplotlib.pyplot as plt
df.hist(bins=50,figsize=(20,15))
plt.show()
#advanced scatter plot using median value of house
df.plot(kind="scatter",x="latitude",y="longitude",alpha=0.4,
s=df["population"]/100,label="population",
c="median_house_value",cmap=plt.get_cmap("jet"),
colorbar=True)
plt.legend()
#exploring more on median income
df.plot(kind="scatter",x="median_income",y="median_house_value",alpha=0.6)
plt.figure(figsize=(10,6))
sns.displot(df['median_house_value'],color='purple', kde=True)
plt.show()
# we can see that the frequency of median_house_value at/above 500,000 is surprisingly high, which could be a sign of capped values, outliers, or bad data
###Output
_____no_output_____
###Markdown
Removing outliers
###Code
df[df['median_house_value']>450000]['median_house_value'].value_counts().head() # we see an abnormal quantity of values at exactly 500,001, the top of the range. This could point to capped values or outliers
df = df.loc[df['median_house_value']<500001.0]
df.to_csv('housing2.csv', index=False)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis

This notebook highlights some simple, yet invaluable, exploratory data science techniques.
###Code
# Numpy and Pandas are data science heavy lifters
import numpy as np
import pandas as pd
# Read CSV Argus output from a file
filename = "data/two-hour-sample.csv"
df = pd.read_csv(filename)
# Shape is the number of rows and columns of the dataframe
df.shape
# Head prints the first several rows of the dataframe
df.head(20)
df.columns
# `describe` computes summary statistics (count, mean, std, and the five-number summary) for the numerical fields
df.describe()
# Get Unique Destination ports
df["Dport"].unique()
# Plot a Degree Distribution
import matplotlib.pyplot as plt
plt.hist(df.groupby("DstAddr").size())
plt.show()
# Select only DNS flows and draw BoxPlots
dns = df[df["Dport"] == 53]
dns.shape
dns[["TotPkts","TotBytes"]].plot(kind='box', subplots=True, layout=(
1, 2), sharex=False, sharey=False)
plt.show()
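# Quick "top talkers" sketch: total bytes per destination address, largest first.
# Uses only columns already referenced above (DstAddr, TotBytes).
top_talkers = df.groupby("DstAddr")["TotBytes"].sum().sort_values(ascending=False).head(10)
print(top_talkers)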
from pandas.plotting import scatter_matrix
scatter_matrix(df[["Dur","TotPkts", "TotBytes"]])
plt.show()
###Output
_____no_output_____
###Markdown
This notebook provides the main exploration and visualisation insights based on customers' credit history data. Assumed steps:

**Data set basic analysis**

- @TODO denote WARNING columns which have an imbalanced distribution
- @TODO add decoding of labels for pie charts e.g.: A13 - male, divorced

**Customers statistical insights**
###Code
# imports
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import entropy
# util functions
def rename_columns(dataset: pd.DataFrame):
"""Rename dataframe columns names.
Notes:
target column names based on provided data_description.txt
Returns:
pd.DataFrame - dataframe with renamed columns
"""
target_columns = {
'X01': 'account_status',
'X06': 'account_savings',
'X02': 'credit_duration',
'X03': 'credit_history',
'X04': 'credit_purpose',
'X05': 'credit_amount',
'X07': 'employment_status',
'X17': 'employment_description',
'X08': 'income_installment_rate',
'X09': 'gender_status',
'X10': 'credit_guarantors',
'X11': 'residence',
'X12': 'owned_property',
'X13': 'age',
'X14': 'installment_plans',
'X15': 'accomondation_type',
'X16': 'credit_existing_number',
'X18': 'liable_maintain',
'X19': 'phone_number',
'X20': 'foreign_worker',
'Y': 'y'
}
return dataset.rename(columns=target_columns)
raw_dataset = pd.read_csv('./dataset/project_data.csv', delimiter=';')
df = rename_columns(raw_dataset) # we will rename the encoded columns for better understanding of the data set
df
###Output
_____no_output_____
###Markdown
Data set basic analysis

1. Explore data set in terms of categorical vs numerical columns
2. Explore data set in terms of missing/nan values
3. Explore data set in terms of values distribution (plot per each column)
   - @TODO denote WARNING columns which have an imbalanced distribution
   - @TODO add decoding of labels for pie charts e.g.: A13 - male, divorced

Explore data set in terms of categorical vs numerical columns (plot pie_chart)
###Code
# Explore data set in terms of categorical vs numerical columns (plot pie_chart)
numerical_cols = list(df._get_numeric_data().columns)
categorical_cols = list(set(df.columns) - set(numerical_cols))
print(f'Categorical columns: {len(categorical_cols)}')
print(f'Numerical columns: {len(numerical_cols)}')
labels = ['numerical_columns', 'categorical_columns']
sizes = [len(numerical_cols), len(categorical_cols)]
explode = (0, 0.1)  # only "explode" the 2nd slice (categorical_columns)
fig1, ax1 = plt.subplots()
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
###Output
Categorical columns: 13
Numerical columns: 8
###Markdown
Explore data set in terms of missing/nan values
###Code
# Explore data set in terms of missing/nan values
nans = df.isnull().sum().sum()
print(f'Data set has {nans} missing values')
###Output
Data set has 0 missing values
###Markdown
Explore data set in terms of values distribution (plot per each column) - denote WARNING columns which have an imbalanced distribution
###Code
# Explore data set in terms of values distribution (plot per each column)
# - denote WARNING columns which have an imbalanced distribution
for column in df.columns:
### Explore data set in terms of values distribution (plot per each column)
# exceptional cases, for them pie chart is uninformative
if column == 'credit_amount':
plt.xlabel('amount of credit')
plt.ylabel('number of customers')
plt.title(column)
plt.hist(df[column], bins=50, alpha=0.6, color='g')
continue
elif column == 'credit_duration':
plt.xlabel('amount of months')
plt.ylabel('number of customers')
plt.title(column)
plt.hist(df[column], bins=50, alpha=0.6, color='g')
continue
# other cases for pie chart
labels = [] # values in column
sizes = [] # amount of value's entries
column_stats = list(df[column].value_counts().items()) # initially zip; e.g.: [('A14', 394), ('A11', 274), ('A12', 269), ('A13', 63)]
for pair in column_stats:
labels.append(str(pair[0])) # str because label
sizes.append(pair[1]) # amount of values of corresponding label
explode = (0, 0.1)
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax1.axis('equal')
plt.title(column)
plt.show()
### denote WARNING columns which have an imbalanced distribution
# The freq is the most common value’s frequency
counts = df[column].describe()
print(counts)
# Conclusion:
# - data set has no missing values
# - data set has more categorical (13) than numerical (8) columns
# - data set statistics per each column can be found via pie charts + histograms
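# Follow-up to the @TODO above: a rough automated flag for imbalanced columns.
# The 80% threshold for the most frequent value is an arbitrary assumption.
imbalanced = []
for col in categorical_cols:
    top_share = df[col].value_counts(normalize=True).iloc[0]
    if top_share > 0.8:
        imbalanced.append((col, round(top_share, 2)))
print('WARNING - imbalanced columns (top value share > 0.8):', imbalanced)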
###Output
_____no_output_____
###Markdown
Customers statistical insights

- Gender based analysis
- Highest loan amounts filtered by age --> looking for which loan {duration, amount} is most frequent for each age group
- Which purposes of credit are more frequent?
- How often do people with existing credits take repeated loans?

Gender and Age based analysis

- What is the average age for male versus female?
- What is the average loan duration for male versus female?
- Max vs min loans per gender
- Which gender is the more suitable client (based on target)?
###Code
# add a dedicated gender column by decoding the "gender_status" column based on the data description
"""
A91: male - divorced/separated
A92: female - divorced/separated/married
A93: male - single
A94: male - married/widowed
A95: female - single
"""
genders_dict = {'A91': 'male', 'A93': 'male', 'A94': 'male',
'A92': 'female', 'A95': 'female'}
genders = df['gender_status'].map(genders_dict)
df['gender'] = genders
print('Average age for Male vs Female customers')
print(f"Male: {df[['age', 'gender']].groupby('gender').mean()['age'].male}")
print(f"Female: {df[['age', 'gender']].groupby('gender').mean()['age'].female}")
# Good Male vs Good Female customers
good_male = df[(df['gender'] == 'male') & (df['y'] == 1)]['y'].count()
bad_male = df[(df['gender'] == 'male') & (df['y'] == 2)]['y'].count()
good_female = df[(df['gender'] == 'female') & (df['y'] == 1)]['y'].count()
bad_female = df[(df['gender'] == 'female') & (df['y'] == 2)]['y'].count()
print('Good amount of Male customers vs Good amount of Female customers')
print(f'{good_male} vs {good_female}')
print('Bad amount of Male customers vs Bad amount of Female customers')
print(f'{bad_male} vs {bad_female}')
plt.title('Amount of male vs female customers')
df['gender'].hist();
plt.show()
print(f'Male customers {df[df["gender"] == "male"]["gender"].count()}');
print(f'Female customers {df[df["gender"] == "female"]["gender"].count()}');
plt.title('Mean amount of loans split by gender')
df.groupby('gender')['credit_amount'].mean().sort_values().plot(kind='barh');
plt.show()
plt.title('Max amount of loans split by gender')
df.groupby('gender')['credit_amount'].max().sort_values().plot(kind='barh');
plt.show()
plt.title('Min amount of loans split by gender')
df.groupby('gender')['credit_amount'].min().sort_values().plot(kind='barh');
plt.show()
print('Average credit duration in terms of age')
df[['age', 'gender', 'credit_duration']].groupby('credit_duration').mean().plot();
# Conclusion
# - The number of male customers (690) exceeds the number of female customers (310)
# - The average age of male customers (37 years) exceeds that of female customers (33 years) by about 4 years
# - GOOD male customers (499) outnumber GOOD female customers (201)
# - BAD male customers (191) also outnumber BAD female customers (109)
# BUT
# - GOOD male customers outnumber GOOD female customers ~2.5 to 1 (499 vs 201), while BAD male customers outnumber BAD female customers ~1.8 to 1 (191 vs 109)
# - On average, a male's loan amount is higher, at ~3500
# - On average, a female's loan amount is lower, at ~2700
# BUT
# - The MAXIMUM female loan amount exceeds the MAXIMUM male loan amount
# AND
# - The MINIMUM male loan amount exceeds the MINIMUM female loan amount
# - On average, older people have shorter credit durations than younger people
###Output
Average age for Male vs Female customers
Male: 36.778260869565216
Female: 32.803225806451614
Good amount of Male customers vs Good amount of Female customers
499 vs 201
Bad amount of Male customers vs Bad amount of Female customers
191 vs 109
###Markdown
Highest loan amounts filtered by age - min vs max loans per period --> looking for which loan {duration, amount} is most frequent for each age group
###Code
row = df[['credit_duration', 'credit_amount', 'age']]
fig = px.scatter(df, x=df['credit_duration'], y=df['credit_amount'], color=df['age'],
size=df['credit_amount'], hover_data=[df['credit_duration']])
fig.show()
# Conclusion
# - On average, the highest loan amounts are taken over ~37 to ~48 months
#   by people in the age group from 30 to 45 years
###Output
_____no_output_____
###Markdown
Which purposes of credit are more frequent?

- What do female customers most frequently take loans for?
- What do male customers most frequently take loans for?
###Code
row = df[['credit_purpose', 'credit_amount', 'age', 'gender']].copy()  # copy to avoid SettingWithCopyWarning when remapping below
credit_purpose_dict = {
'A40': 'car (new)',
'A41': 'car (used)',
'A42': 'furniture/equipment',
'A43': 'radio/television',
'A44': 'domestic appliances',
'A45': 'repairs',
'A46': 'education',
'A47': 'vacation',
'A48': 'retraining',
'A49': 'business',
'A410': 'others'
}
row['credit_purpose'] = row['credit_purpose'].map(credit_purpose_dict)
print('The list of loan needs in descending order: \n', row['credit_purpose'].value_counts())
print('The list of loan needs for male customers in descending order: \n', row[row['gender'] == 'male']['credit_purpose'].value_counts())
print('The list of loan needs for female customers in descending order: \n', row[row['gender'] == 'female']['credit_purpose'].value_counts())
fig = px.scatter(row, x=row['credit_purpose'], y='credit_amount', color='age',
size='credit_amount', hover_data=['credit_purpose'])
fig.show()
# Conclusion
# - The list of ALL loan needs in descending order across male AND female:
# radio/television 280
# car (new) 234
# furniture/equipment 181
# car (used) 103
# business 97
# education 50
# repairs 22
# others 12
# domestic appliances 12
# retraining 9
# - The TOP 5 list of loan needs for male customers in descending order:
# radio/television 195
# car (new) 164
# furniture/equipment 107
# car (used) 79
# business 78
# - The TOP list of loan needs for female customers in descending order:
# radio/television 85
# furniture/equipment 74
# car (new) 70
# car (used) 24
# education 21
# business 19
# MALE Statistics of age + credit amount + gender per each credit purpose group
row[row['gender'] == 'male'].groupby(['credit_purpose', 'age']).max()
# FEMALE Statistics of age + credit amount + gender per each credit purpose group
row[row['gender'] == 'female'].groupby(['credit_purpose', 'age']).max()
###Output
_____no_output_____
###Markdown
[{'label': 0, 'name': 'unknown', 'rgb': [0, 0, 0]}, {'label': 1, 'name': 'balcony', 'rgb': [128, 0, 0]}, {'label': 2, 'name': 'bath', 'rgb': [0, 128, 0]}, {'label': 3, 'name': 'cl', 'rgb': [128, 128, 0]}, {'label': 4, 'name': 'dk', 'rgb': [0, 0, 128]}, {'label': 5, 'name': 'tatami', 'rgb': [128, 0, 128]}, {'label': 6, 'name': 'wc', 'rgb': [0, 128, 128]}, {'label': 7, 'name': 'washing', 'rgb': [128, 128, 128]}, {'label': 8, 'name': 'west', 'rgb': [64, 0, 0]}, {'label': 9, 'name': 'entrance', 'rgb': [192, 0, 0]}, {'label': 10, 'name': 'door', 'rgb': [64, 128, 0]}, {'label': 11, 'name': 'wall', 'rgb': [192, 128, 0]}]
###Code
def get_outline(arr):
h, w, c = arr.shape
for i in range(h):
for j in range(w):
if tuple(arr[i,j]) == (64, 128, 0) or tuple(arr[i,j]) == (192, 128, 0):
arr[i,j] = 255
else:
arr[i,j] = 0
return arr
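# Vectorized alternative (sketch): builds the same binary outline mask without per-pixel Python loops.
# Assumes numpy is available as np, as in the rest of this notebook.
def get_outline_fast(arr):
    mask = np.all(arr == (64, 128, 0), axis=-1) | np.all(arr == (192, 128, 0), axis=-1)
    out = np.zeros_like(arr)
    out[mask] = 255
    return out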
# compute and save the outline images
for i, f in enumerate(os.listdir(ann_dir)[:]):
if i > 0: break
ann_path = os.path.join(ann_dir, f)
out_path = os.path.join(outline_dir, f)
arr = cv2.imread(ann_path)[:,:,::-1]
ol = get_outline(arr)
plt.imsave(out_path, ol)
#break
# display the image, label, and outline
for i, f in enumerate(os.listdir(img_dir)[:]):
#if i > 0: break
if f != '00199.jpg': continue
img_path = os.path.join(img_dir, f)
ann_path = os.path.join(ann_dir, f.replace('.jpg', '.png'))
outline_path = os.path.join(outline_dir, f.replace('.jpg', '.png'))
plt.figure(figsize=(15,5))
plt.subplot(131)
arr = cv2.imread(img_path)[:,:,::-1]
arr = cv2.resize(arr, (256, 256))
plt.imshow(arr)
plt.subplot(132)
arr = cv2.imread(ann_path)[:,:,::-1]
arr = cv2.resize(arr, (256, 256))
plt.imshow(arr)
plt.subplot(133)
arr = cv2.imread(outline_path)[:,:,::-1]
arr = cv2.resize(arr, (256, 256))
plt.imshow(arr)
plt.show()
#break
for f in os.listdir(outline_dir):
if f != '00199.png': continue
f_path = os.path.join(outline_dir, f)
arr = cv2.imread(f_path, cv2.IMREAD_GRAYSCALE)
retval, labels, stats, centroids = cv2.connectedComponentsWithStats(arr)
# number of labels
print("number of labels:", retval)
# labeling result
print(labels.shape, labels.dtype) # (362, 420) int32
fig, ax = plt.subplots(facecolor="w")
ax.imshow(labels)
#print(stats)
plt.show()
f_train = open('./data/train.txt', 'w')
f_val = open('./data/val.txt', 'w')
f_test = open('./data/test.txt', 'w')
for i, x in enumerate(os.listdir(img_dir)):
if i < 263*7: f_train.write(f'{os.path.splitext(x)[0]}\n')
elif 263*7 <= i < 263*9: f_val.write(f'{os.path.splitext(x)[0]}\n')
else: f_test.write(f'{os.path.splitext(x)[0]}\n')
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pathlib import Path
from scipy.stats import ks_2samp, mode
warnings.filterwarnings("ignore")
plt.style.use('classic')
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = ['Times New Roman']
paths = {
'agro' : 'data/agro/agro.csv',
'syn' : list(Path('data/syn/').rglob('*.csv'))
}
agro = pd.read_csv(paths['agro'])
agro.shape[0]
print(agro[['ind', 'dec', 'kult', 'year', 'month', 'day']].agg(['nunique', 'min', 'max']).to_latex())
ax = agro.kult.value_counts().sort_index().hist(bins=20)
ax.set_ylabel('Number of values', size=20)
ax.set_xlabel('kult', size=20)
plt.grid()
plt.tight_layout()
plt.show()
ax = agro.kult.value_counts().sort_index().plot.bar(figsize=(20,5))
ax.set_ylabel('Number of values', size=20)
ax.set_xlabel('kult', size=20)
plt.grid()
plt.tight_layout()
plt.savefig('assets/kult.png')
plt.show()
ax = agro.hist(sharey=True, figsize=(10,5), bins=15, column=['val_1', 'val_2'], xlabelsize=15, ylabelsize=15)
ax[0,0].set_ylabel('Number of values', size=20)
ax[0,0].set_xlabel('ZPV value at 10 mm', size=20)
ax[0,1].set_xlabel('ZPV value at 20 mm', size=20)
ax[0,0].set_xlim(xmax=agro.val_1.max(), xmin=agro.val_1.min())
ax[0,1].set_xlim(xmax=agro.val_2.max(), xmin=agro.val_2.min())
plt.tight_layout()
plt.savefig('assets/val.png')
plt.show()
def load_agro(path: str) -> pd.DataFrame:
agro = pd.read_csv(path)
agro.loc[:,'datetime'] = pd.to_datetime(agro.year.astype(str)+agro.month.astype(str)\
+ agro.day.astype(str)+np.ones(len(agro), dtype='str'), format='%Y%m%d%H', origin='unix')
agro = agro.drop(['month', 'day'], axis=1)
agro.loc[:,'prev'] = agro.dec - 1
return agro
agro = load_agro(paths['agro'])
agro = agro.merge(agro, left_on=['ind', 'dec', 'year'], right_on=['ind', 'prev', 'year'], suffixes=('', '_next'))
agro.loc[:, 'dur'] = (agro.datetime_next - agro.datetime).dt.days
fig, ax = plt.subplots(ncols=2, figsize=(10,5), sharey=True)
_, bins, _ = ax[0].hist(agro[agro.dur == 10].val_1_next-agro[agro.dur == 10].val_1, bins=15, density=True, alpha=0.5, label='10 days')
_, bins, _ = ax[0].hist(agro[agro.dur == 11].val_1_next-agro[agro.dur == 11].val_1, bins=bins, density=True, alpha=0.5, label='11 days')
_, bins, _ = ax[1].hist(agro[agro.dur == 10].val_2_next-agro[agro.dur == 10].val_2, bins=15, density=True, alpha=0.5, label='10 days')
_, bins, _ = ax[1].hist(agro[agro.dur == 11].val_2_next-agro[agro.dur == 11].val_2, bins=bins, density=True, alpha=0.5, label='11 days')
ax[0].set_xlim(xmax=(agro[agro.dur == 10].val_1_next-agro[agro.dur == 10].val_1).max(),
xmin=(agro[agro.dur == 10].val_1_next-agro[agro.dur == 10].val_1).min())
ax[1].set_xlim(xmax=(agro[agro.dur == 10].val_2_next-agro[agro.dur == 10].val_2).max(),
xmin=(agro[agro.dur == 10].val_2_next-agro[agro.dur == 10].val_2).min())
ax[0].set_xlabel('ZPV change over the 10-day period', size=20)
ax[1].set_xlabel('ZPV change over the 10-day period', size=20)
ax[0].set_ylabel('Probability', size=20)
ax[0].grid()
ax[1].grid()
ax[0].set_title('val_1')
ax[1].set_title('val_2')
ax[0].legend()
ax[1].legend()
plt.show()
fig, ax = plt.subplots(ncols=2, figsize=(10,5), sharey=True)
_, bins, _ = ax[0].hist(agro[agro.dur == 10].val_1_next-agro[agro.dur == 10].val_1, bins=15, density=True, alpha=0.5, cumulative=True)
_, bins, _ = ax[0].hist(agro[agro.dur == 11].val_1_next-agro[agro.dur == 11].val_1, bins=bins, density=True, alpha=0.5, cumulative=True)
_, bins, _ = ax[1].hist(agro[agro.dur == 10].val_2_next-agro[agro.dur == 10].val_2, bins=15, density=True, alpha=0.5, cumulative=True)
_, bins, _ = ax[1].hist(agro[agro.dur == 11].val_2_next-agro[agro.dur == 11].val_2, bins=bins, density=True, alpha=0.5, cumulative=True)
ax[0].set_xlabel('ZPV value at 10 mm', size=20)
ax[1].set_xlabel('ZPV value at 20 mm', size=20)
ax[0].set_ylabel('Probability', size=20)
ax[0].grid()
ax[1].grid()
ax[0].set_title('val_1_next')
ax[1].set_title('val_2_next')
plt.show()
fig, ax = plt.subplots(ncols=2, figsize=(10,5), sharey=True)
_, bins, _ = ax[0].hist(agro[agro.dur == 10].val_1_next, bins=15, density=True, alpha=0.5, label='10 days')
_, bins, _ = ax[0].hist(agro[agro.dur == 11].val_1_next, bins=bins, density=True, alpha=0.5, label='11 days')
_, bins, _ = ax[1].hist(agro[agro.dur == 10].val_2_next, bins=15, density=True, alpha=0.5, label='10 days')
_, bins, _ = ax[1].hist(agro[agro.dur == 11].val_2_next, bins=bins, density=True, alpha=0.5, label='11 days')
ax[0].set_xlim(xmax=(agro[agro.dur == 10].val_1_next).max(),
xmin=(agro[agro.dur == 10].val_1_next).min())
ax[1].set_xlim(xmax=(agro[agro.dur == 10].val_2_next).max(),
xmin=(agro[agro.dur == 10].val_2_next).min())
ax[0].set_xlabel('ZPV value at 10 mm', size=20)
ax[1].set_xlabel('ZPV value at 20 mm', size=20)
ax[0].set_ylabel('Probability', size=20)
ax[0].grid()
ax[1].grid()
ax[0].set_title('val_1_next')
ax[1].set_title('val_2_next')
ax[0].legend()
ax[1].legend()
plt.tight_layout()
plt.savefig('assets/hist_val.png')
plt.show()
for i in range(2):
x1 = agro[agro.dur == 10][f'val_{i+1}']-agro[agro.dur == 10][f'val_{i+1}_next']
x2 = agro[agro.dur == 11][f'val_{i+1}']-agro[agro.dur == 11][f'val_{i+1}_next']
stat, p = ks_2samp(x1,x2)
print(f'val_{i+1}_next K-S stat: {stat}, p-val: {p}')
df
print(agro[['val_1', 'val_2', 'val_1_next', 'val_2_next']].corr('spearman').to_latex(float_format='%.2f'))
plt.figure(figsize=(5,3))
sns.heatmap(agro[['val_1', 'val_2', 'val_1_next', 'val_2_next']].corr('spearman'), cmap='coolwarm', vmin=-1, vmax=1, annot=True, square=True)
plt.show()
(163476 - 143884)/163476 * 100
def load_syn(path: str) -> pd.DataFrame:
syn = pd.read_csv(path, usecols=['s_ind', 'datetime', 't2m', 'td2m', 'ff', 'R12'])
syn.loc[syn.datetime.astype(str).str.len() == 7, 'datetime'] = '0'+\
syn[syn.datetime.astype(str).str.len() == 7].datetime.astype(str)
syn.loc[:, 'datetime'] = pd.to_datetime(syn.datetime, format='%y%m%d%H')
return syn
syn = pd.concat([load_syn(file) for file in paths['syn']], axis=0)
syn.loc[:, 'phi'] = np.sin(((syn.datetime-pd.Timestamp('1970-01-01'))/pd.Timedelta(seconds=1)/pd.Timedelta(days=365.24).total_seconds()*2*np.pi))
print(syn[['t2m', 'td2m', 'ff', 'R12']].describe().round(2).to_latex())
def clear_data(syn: pd.DataFrame):
    syn.loc[syn.R12 == 9990, 'R12'] = 0.1  # replace the 9990 sentinel value; .loc avoids chained assignment
syn = syn[syn.t2m.abs() < 60]
syn = syn[syn.td2m.abs() < 60]
syn = syn[syn.ff <= 30]
return syn
syn = clear_data(syn.copy())
r12 = (syn.sort_values(['s_ind', 'datetime']).groupby(['s_ind', 'datetime']).R12.sum()/4).fillna(method='bfill', limit=3).fillna(0).reset_index()
syn = syn.merge(r12, on=['s_ind', 'datetime'])
syn.rename(columns={'R12_y': 'R3'}, inplace=True)
syn.drop('R12_x', axis=1, inplace=True)
print(syn[['t2m', 'td2m', 'ff', 'R3', 'phi']].describe().round(2).to_latex())
((12407339 - 12325619) / 12407339) * 100
syn[['t2m', 'td2m', 'ff', 'R3']].hist(figsize=(10,10), bins=20)
plt.tight_layout()
plt.show()
sns.heatmap(syn.corr(), vmin=-1, vmax=1, cmap='coolwarm')
plt.show()
syn = syn[syn.t2m.abs() <= syn.t2m.std()*4]
s, d = syn[syn.t2m == syn.t2m.min()][['s_ind', 'datetime']].iloc[0].values
syn[(syn.s_ind == s) & (syn.datetime.dt.date == d.date())].t2m.plot.line()
plt.show()
import netCDF4
from geotiff import GeoTiff
from sklearn.metrics import pairwise_distances

def load_climate(optinons: dict, pairs: pd.DataFrame) -> pd.DataFrame:
path = list(optinons.keys())[0]
nc = netCDF4.Dataset(path)
latmask = np.argmin(pairwise_distances(nc['lat'][:].data.reshape(-1, 1),
pairs['s_lat'].values.reshape(-1, 1)), axis=0)
lonmask = np.argmin(pairwise_distances(nc['lon'][:].data.reshape(-1, 1),
pairs['s_lon'].values.reshape(-1, 1)), axis=0)
climate = pd.DataFrame()
for i in range(12):
df = pairs[['s_ind']].copy()
for path in optinons.keys():
nc = netCDF4.Dataset(path)
df.loc[:, 'month'] = i+1
df.loc[:, optinons[path]] = nc[optinons[path]][i].data[latmask, lonmask]
climate = pd.concat((climate, df), axis=0, ignore_index=True)
return climate.drop_duplicates()
CLIMATE_OPT = {
'data/climate/air.mon.1981-2010.ltm.nc': 'air',
'data/climate/soilw.mon.ltm.v2.nc': 'soilw',
'data/climate/precip.mon.ltm.0.5x0.5.nc': 'precip'
}
from mpl_toolkits.axes_grid1 import make_axes_locatable
def decode_tif(lat: np.array, lon: np.array, tifname: str) -> np.array:
lon1 = lon.min()
lon2 = lon.max()
lat1 = lat.min()
lat2 = lat.max()
arr = np.array(GeoTiff(tifname).read_box([(lon1, lat1), (lon2, lat2)]))
return arr
pairs = pd.read_csv('data/pairs/pairs.csv')
for path in CLIMATE_OPT.keys():
for i in range(12):
ax = plt.subplot()
nc = netCDF4.Dataset(path)
data = nc[CLIMATE_OPT[path]][i].data
vmin, vmax = nc[CLIMATE_OPT[path]].valid_range
if CLIMATE_OPT[path] == 'air':
data -= 273
data[data == -9.96921e+36-273] = np.nan
vmin -= 273
vmax -= 273
else:
data[data == -9.96921e+36] = np.nan
im = ax.imshow(data, cmap='coolwarm')
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(im, cax=cax)
cbar.vmin, cbar.vmax = vmin, vmax
cbar.set_label(CLIMATE_OPT[path], size=20)
ax.set_xticklabels([])
ax.set_yticklabels([])
#ax.set_title('')
plt.tight_layout()
plt.savefig(f"assets/{CLIMATE_OPT[path]}/{i}.png", bbox_inches='tight')
plt.clf()
netCDF4.Dataset(list(CLIMATE_OPT.keys())[1])['soilw']
90/360
import json
with open('exp_config_1.json') as f:
config = json.load(f)
config = pd.DataFrame().from_dict(config).T
cnf = config[config['mod'] == 'Linear']
cnf.loc['exp1','l']
CAT_OPT = {
'soil': {
'tiff': 'data/agro/soil/so2015v2.tif',
'description': 'data/agro/soil/2015_suborders_and_gridcode.txt'
},
'cover': {
'tiff': 'data/agro/cover/GLOBCOVER_L4_200901_200912_V2.3.tif',
'description': 'data/agro/cover/Globcover2009_Legend.xls'
}
}
arr = np.array(GeoTiff('data/agro/soil/so2015v2.tif').read())
plt.imshow(arr)
plt.xticks([])
plt.yticks([])
plt.tight_layout()
plt.savefig('assets/soils.png', bbox_inches='tight')
plt.show()
data = pd.read_parquet('data/data.pq')
from sklearn.model_selection import GroupShuffleSplit
gss = GroupShuffleSplit(n_splits=1, train_size=0.8, random_state=42)
tr_idx, val_idx = next(gss.split(X=data, y=data[['val_1_next', 'val_2_next']], groups=data.ts.dt.date))
def cat_prep(data: pd.DataFrame):
cover_frac = data[['cover_name']].value_counts().reset_index().rename(columns={0:'perc'})
cover_frac.loc[:, 'perc'] = cover_frac.perc/cover_frac.perc.sum()*100
cover_frac.loc[:, 'cover_name_new'] = cover_frac.cover_name
cover_frac.loc[cover_frac.perc < 5, 'cover_name_new'] = 'Other'
cover_frac = cover_frac.drop(['perc'], axis=1)
soil_frac = data[['soil_label']].value_counts().reset_index().rename(columns={0:'perc'})
soil_frac.loc[:, 'perc'] = soil_frac.perc/soil_frac.perc.sum()*100
soil_frac.loc[:, 'soil_label_new'] = soil_frac.soil_label
soil_frac.loc[soil_frac.perc < 2, 'soil_label_new'] = 'Other'
soil_frac = soil_frac.drop(['perc'], axis=1)
cult = pd.read_csv('data/agro/cult.csv', sep=';').rename(columns={'id': 'kult'})
data = data.merge(cover_frac, on='cover_name')\
.merge(soil_frac, on='soil_label')\
.merge(cult, on='kult')\
.drop(['cover_name', 'soil_label'], axis=1)\
.rename(columns={'cover_name_new': 'cover_name', 'soil_label_new': 'soil_label'})
data.loc[:, 'soiltype'] = data.soil_label.map({elm: i for i,elm in enumerate(data.soil_label.unique())})
data.loc[:, 'covertype'] = data.cover_name.map({elm: i for i,elm in enumerate(data.cover_name.unique())})
data.loc[:, 'culttype'] = data.type.map({elm: i for i,elm in enumerate(data.type.unique())})
return data
data = pd.read_parquet('data/data.pq')
# cover_frac / soil_frac are built inside cat_prep, so call it instead of repeating the merges by hand
data = cat_prep(data)
data = pd.read_parquet('data/data.pq')
data.groupby(['ind','year','dec']).ind.count().max()
def load_agro(path: str) -> pd.DataFrame:
agro = pd.read_csv(path)
agro.loc[:,'datetime'] = pd.to_datetime(agro.year.astype(str)+agro.month.astype(str)\
+ agro.day.astype(str)+np.ones(len(agro), dtype='str'), format='%Y%m%d%H', origin='unix')
agro = agro.drop(['month', 'day'], axis=1)
agro.loc[:,'prev'] = agro.dec - 1
return agro
def agro_to_event_period(df: pd.DataFrame) -> pd.DataFrame:
df = df.merge(df, left_on=['ind', 'dec', 'year'], right_on=['ind', 'prev', 'year'], suffixes=('', '_next'))
df.loc[:, 'dur'] = (df.datetime_next - df.datetime).dt.days.astype(int)
df.loc[df.dur == 11, 'datetime_next'] = df[df.dur == 11].datetime_next-pd.Timedelta('1d')
df.loc[:, 'dur'] = (df.datetime_next - df.datetime).dt.total_seconds().astype(int)
new_agro = pd.to_datetime((np.repeat(df.datetime.view(int)//int(1e9), 243)\
+ np.hstack([np.arange(0, v, pd.Timedelta('1h').total_seconds()) for v in df.dur+10800.0]))*int(1e9))
new_agro = df.join(new_agro.rename('ts'), how='outer')
return new_agro
agro = agro_to_event_period(load_agro('data/agro/agro.csv'))
###Output
_____no_output_____
###Markdown
Objective

* The main objective of the project is to identify the optimal location for a business in Haiti, specifically in the West Department. This can be a new business or an extension of an existing business in the form of a branch office.

Data Source

* The data used for this project comes from several sources. To list all the companies in the West Department, I had to do some web scraping; for the demographic information, I had access to the results of a survey conducted by the Office For The Coordination Of Humanitarian Affairs on the density of the Haitian population by department and by municipality.
* Other information, such as average per capita income, total activity rate, and tax revenues by department, was obtained via articles on the Haitian economy.

Importing Libraries
###Code
import pandas as pd
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import chi2_contingency
###Output
_____no_output_____
###Markdown
Loading Dataset
###Code
#Loading Business Data Files
kenscoff = pd.read_excel('dataset/business_paup.xlsx', sheet_name='kenscoff')
paup = pd.read_excel('dataset/business_paup.xlsx', sheet_name='paup')
carrefour = pd.read_excel('dataset/business_paup.xlsx', sheet_name='carrefour')
delmas = pd.read_excel('dataset/business_paup.xlsx', sheet_name='delmas')
crxdesbouquets = pd.read_excel('dataset/business_paup.xlsx', sheet_name='crxdesbouquets')
tabarre = pd.read_excel('dataset/business_paup.xlsx', sheet_name='tabarre')
leogane = pd.read_excel('dataset/business_paup.xlsx', sheet_name='leogane')
petitgoave = pd.read_excel('dataset/business_paup.xlsx', sheet_name='petit_goave')
grandgoave = pd.read_excel('dataset/business_paup.xlsx', sheet_name='grand_goave')
cabaret = pd.read_excel('dataset/business_paup.xlsx', sheet_name='cabaret')
arcahaie = pd.read_excel('dataset/business_paup.xlsx', sheet_name='arcahaie')
ganthier = pd.read_excel('dataset/business_paup.xlsx', sheet_name='ganthier')
gressier = pd.read_excel('dataset/business_paup.xlsx', sheet_name='gressier')
pv = pd.read_excel('dataset/business_paup.xlsx', sheet_name='PetionVille')
caphaitien = pd.read_excel('dataset/business_paup.xlsx', sheet_name='caphaitien')
limonade = pd.read_excel('dataset/business_paup.xlsx', sheet_name='limonade')
milot = pd.read_excel('dataset/business_paup.xlsx', sheet_name='milot')
limbe = pd.read_excel('dataset/business_paup.xlsx', sheet_name='limbe')
fortliberte = pd.read_excel('dataset/business_paup.xlsx', sheet_name='fort_liberte')
ouanaminthe = pd.read_excel('dataset/business_paup.xlsx', sheet_name='ouanaminthe')
jacmel = pd.read_excel('dataset/business_paup.xlsx', sheet_name='jacmel')
cayesjacmel = pd.read_excel('dataset/business_paup.xlsx', sheet_name='cayesjacmel')
gonaives = pd.read_excel('dataset/business_paup.xlsx', sheet_name='gonaives')
saintmarc = pd.read_excel('dataset/business_paup.xlsx', sheet_name='saintmarc')
dessalines = pd.read_excel('dataset/business_paup.xlsx', sheet_name='dessalines')
#Population dataset file
population = pd.read_excel('dataset/hti-pop-statistics.xlsx')
#Municipality geo location
commune_geolocalisation = pd.read_excel('dataset/hti_commune_geolocation.xlsx')
# Business Dataset file concatenation
dataset = pd.concat([paup,carrefour,delmas,kenscoff,crxdesbouquets,tabarre,
leogane,petitgoave,grandgoave,cabaret,arcahaie,ganthier,
gressier,pv,caphaitien,limonade,milot,limbe,fortliberte,
ouanaminthe,jacmel,cayesjacmel,gonaives,saintmarc,dessalines])
display(dataset.shape)
display(dataset.head())
#Selecting the needed columns
dataset=dataset.reset_index()
dataset = dataset.loc[:,['index','adm2code','commune','secteur activite','category']]
dataset.head()
final_population = population.iloc[:,4:15]
final_population.head()
###Output
_____no_output_____
###Markdown
Additional Information

* According to the academy of economic development, the activity rate in Haiti is 66.73%, and according to the World Bank, the employment rate is 55%, so we can assume that on average the employment rate per commune is 55%.
* According to the World Bank, the GNI (RNB) per capita is $823.00; with an exchange rate of HTG 95, the average annual income per capita is HTG 78,185.00 (about HTG 6,515 per month).
###Code
final_population['income']= (((final_population['Population']*0.6673)*0.55)*6515.41).astype('int64')
final_population.head()
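# Derivation of the 6515.41 constant used above (my interpretation, not stated in the source data):
# 823 USD * 95 HTG/USD = 78,185 HTG per year; 78,185 / 12 ~= 6,515.42 HTG per month,
# so 'income' approximates total monthly wage income per commune.
print(823 * 95, round(823 * 95 / 12, 2))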
dataset['commune'].nunique()
pop=final_population[['adm2_fr','Femmes','Hommes','income']]
pop=pop.sort_values(by='income',ascending=False)
pop=pop.head(10)
plt.figure(figsize=(10,9))
ax=sns.barplot(y='adm2_fr',x='income', palette="CMRmap", data=pop)
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
#business distribution by sector of activity
secteur=dataset.groupby(by='secteur activite').index.count().to_frame()
secteur=secteur.sort_values(by='index', ascending=False)
secteur=secteur.head(10)
plt.figure(figsize=(10,9))
ax=sns.barplot(y=secteur.index,x='index', palette="CMRmap", data=secteur)
#Count part
# for container in ax.containers:
# ax.bar_label(container,padding=5)
#Percentage part
for p in ax.patches:
percentage = '{:.1f}%'.format(100 * p.get_width()/sum(secteur['index'].values))
x = p.get_x() + p.get_width()
y = p.get_y() + p.get_height()
ax.annotate(percentage, (x, y),fontsize=11,color="black")
#Function to create a pivot table and bar chart to visualize a sector of activity by municipality
def table_bar(secteur=''):
commune=dataset[dataset['secteur activite']==secteur].pivot_table(index='commune', columns='secteur activite', values='index', aggfunc='count')
commune=commune.sort_values(by=secteur, ascending=False)
plt.figure(figsize=(10,9))
ax = sns.barplot(y=commune.index,x=secteur, palette="CMRmap", data=commune)
# for container in ax.containers:
# ax.bar_label(container, padding=2.5)
for p in ax.patches:
percentage = '{:.1f}%'.format(100 * p.get_width()/sum(commune[secteur].values))
x = p.get_x() + p.get_width()
y = p.get_y() + p.get_height()
ax.annotate(percentage, (x, y),fontsize=11,color="black")
return commune
###Output
_____no_output_____
###Markdown
Business Of The Health Sector By Municipality
###Code
table_bar(secteur='sante')
###Output
_____no_output_____
###Markdown
Business Of The Construction Sector By Municipality
###Code
table_bar('construction')
###Output
_____no_output_____
###Markdown
Automobile Sector Activity By Municipality
###Code
table_bar('service automobile')
###Output
_____no_output_____
###Markdown
Restaurant Business By Municipality
###Code
table_bar('restauration')
###Output
_____no_output_____
###Markdown
Agri-food Sector Activity By Municipality
###Code
table_bar('agroalimentaire')
###Output
_____no_output_____
###Markdown
Professional Sector Activity By Municipality
###Code
table_bar('service professionnel')
###Output
_____no_output_____
###Markdown
Financial Sector Business By Municipality
###Code
table_bar('service financier')
###Output
_____no_output_____
###Markdown
IT Business By Municipality
###Code
table_bar('informatique')
###Output
_____no_output_____
###Markdown
Education Sector Activity By Municipality
###Code
table_bar('education')
###Output
_____no_output_____
###Markdown
Fashion Sector Activity By Municipality
###Code
table_bar('fashion')
###Output
_____no_output_____
###Markdown
Transportation Sector Activity by Municipality
###Code
table_bar('transport')
display(final_population.head(2))
display(final_population.info())
#display(commune_geolocalisation.head(2))
#display(dataset.head(2))
#Transform gender counts into shares of the population
final_population['%Hommes']=final_population['Hommes']/final_population['Population']
final_population['%Femmes']=final_population['Femmes']/final_population['Population']
#Transform Income to Dummies Interval
final_population['Income_0_400M']=final_population['income'].apply(lambda x : 1 if (x<=400000000) else 0)
final_population['Income_400M_1MM']=final_population['income'].apply(lambda x : 1 if (x>400000000 and x<=1000000000) else 0)
final_population['Income_1MM_3MM']=final_population['income'].apply(lambda x : 1 if (x>1000000000 and x<=3000000000) else 0)
popdummies=final_population[['adm2code','%Hommes','%Femmes','Income_0_400M','Income_400M_1MM','Income_1MM_3MM']]
popdummies.head(2)
dummiestest= pd.get_dummies(dataset['secteur activite'])
dummiestest['adm2code']= dataset['adm2code']
dummiestest['commune']= dataset['commune']
fcol = dummiestest.pop('adm2code')
tcol=dummiestest.pop('commune')
dummiestest.insert(0, 'adm2code', fcol)
dummiestest.insert(1, 'commune', tcol)
dummiestest.head(2)
f_merge= pd.merge(left=dummiestest,right=popdummies,on='adm2code', how='inner')
f_merge=f_merge.drop(['adm2code'], axis=1)
f_merge.shape
f_merge.head()
group=f_merge.groupby(by='commune').mean()
group.head(5)
# import k-means from clustering stage
from sklearn.cluster import KMeans
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from yellowbrick.cluster import KElbowVisualizer
model = KMeans()
plt.figure(figsize=(9,8))
visualizer = KElbowVisualizer(model, k=(1,12))
visualizer.fit(group)
visualizer.show()
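# Complementary check (sketch): silhouette scores for a few candidate cluster counts,
# computed on the same (unscaled) table used for the elbow plot above.
from sklearn.metrics import silhouette_score
for k in range(2, 7):
    labels_k = KMeans(n_clusters=k, random_state=49).fit_predict(group)
    print(k, round(silhouette_score(group, labels_k), 3))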
# define min max scaler
scaler = MinMaxScaler()
# transform data
data = scaler.fit_transform(group)
print(data)
X = pd.DataFrame(data=data,columns=list(group.columns))
X.head()
model = KMeans(n_clusters=3,random_state=49).fit(X)
X['labels'] = model.labels_
X['labels'].values
X.head()
X.index= group.index
X.head()
group['cluster']=X['labels']
group=group.reset_index()
###Output
_____no_output_____
###Markdown
Cluster Analysis
###Code
cluster0=group[group['cluster']==0]
cluster0.head(2)
cluster1=group[group['cluster']==1]
cluster1
cluster2=group[group['cluster']==2]
cluster2
X=X.reset_index()
commune_cluster=X[['commune','labels']]
commune_cluster.head(2)
cluster_merge=pd.merge(left=f_merge,right=commune_cluster,on='commune', how='inner')
cluster_merge.head()
profil=cluster_merge.groupby(by='labels').mean()
profil=profil.reset_index()
profil
###Output
_____no_output_____
###Markdown
Read data
###Code
df_selected = pd.read_csv('./data/df_selected.csv')
df_selected.shape
df_selected.describe().T
def plot_feature(df, col_name, isContinuous):
"""
Visualize a variable with and without faceting on the loan status.
    - col_name is the variable name in the dataframe
    - isContinuous is True if the variable is continuous, False otherwise
"""
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,3), dpi=90)
# Plot without loan status
if isContinuous:
sns.distplot(df.loc[df[col_name].notnull(), col_name], kde=False, ax=ax1)
else:
sns.countplot(df[col_name], order=sorted(df[col_name].unique()), color='#5975A4', saturation=1, ax=ax1)
ax1.set_xlabel(col_name)
ax1.set_ylabel('Count')
ax1.set_title(col_name)
plt.xticks(rotation = 90)
# Plot with loan status
if isContinuous:
sns.boxplot(y=col_name, x='loan_status', data=df, ax=ax2)
ax2.set_ylabel('')
ax2.set_title(col_name + ' by Loan Status')
else:
data = df.groupby(col_name)['loan_status'].value_counts(normalize=True).to_frame('proportion').reset_index()
sns.barplot(x = col_name, y = 'proportion', hue= "loan_status", data = data, saturation=1, ax=ax2)
ax2.set_ylabel('Loan fraction')
ax2.set_title('Loan status')
plt.xticks(rotation = 90)
ax2.set_xlabel(col_name)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Feature correlations
###Code
corr = df_selected.corr(method = 'spearman')
plt.figure(figsize = (10, 8))
sns.heatmap(corr.abs(), cmap ='viridis' )
plt.show()
###Output
_____no_output_____
###Markdown
Find highly correlated features
###Code
new_corr = corr.abs()
new_corr.loc[:,:] = np.tril(new_corr, k=-1) # below main lower triangle of an array
new_corr = new_corr.stack().to_frame('correlation').reset_index().sort_values(by='correlation', ascending=False)
new_corr[new_corr.correlation > 0.4]
high_correlated_feat = ['funded_amnt','funded_amnt_inv', 'fico_range_high', 'grade',
'credit_history', 'installment']
df_selected.drop(high_correlated_feat, axis=1, inplace=True)
df_selected.shape
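# A hedged sketch (not part of the original workflow): the manually listed
# features above could also be derived automatically by scanning the upper
# triangle of the correlation matrix for pairs above the 0.4 cutoff.
upper = corr.abs().where(np.triu(np.ones(corr.shape), k=1).astype(bool))
auto_to_drop = [col for col in upper.columns if (upper[col] > 0.4).any()]
auto_to_drop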
###Output
_____no_output_____
###Markdown
Correlation with target variable
###Code
# df_selected.nunique().to_frame().reset_index()
corr_with_target = df_selected.corrwith(df_selected.loan_status).sort_values(ascending = False).abs().to_frame('correlation_with_target').reset_index().head(20)
unique_values = df_selected.nunique().to_frame('unique_values').reset_index()
corr_with_unique = pd.merge(corr_with_target, unique_values, on = 'index', how = 'inner')
corr_with_unique
###Output
_____no_output_____
###Markdown
Vizualizations
###Code
plot_feature(df_selected, 'sub_grade', False)
plot_feature(df_selected, 'int_rate', True)
plot_feature(df_selected, 'dti', True)
plot_feature(df_selected, 'revol_util', True)
plot_feature(df_selected, 'issue_month', False)
###Output
_____no_output_____
###Markdown
Observe the selected features
###Code
df_selected.shape
df_selected.head().T
df_selected.to_csv('./data/df_processed_v2.csv', index = False)
###Output
_____no_output_____
###Markdown
WalMart Trip Type
###Code
import pandas as pd
import numpy as np
import scipy.stats as stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels as sm
import math
import tools
plt.rcParams["figure.figsize"] = (10, 8)
mpl.style.use('bmh')
%matplotlib inline
df = pd.read_csv('input/train.csv')
u = df.groupby('VisitNumber')
###Output
_____no_output_____
###Markdown
Look at a visit
###Code
u.get_group(8)
###Output
_____no_output_____
###Markdown
How many unique items of each column are there?
###Code
[(x, len(df[x].unique())) for x in ['TripType', 'Upc', 'Weekday', 'DepartmentDescription', 'FinelineNumber']]
###Output
_____no_output_____
###Markdown
What are the DepartmentDescription Factors?
###Code
dds = [repr(x) for x in list(set(df['DepartmentDescription']))]
dds.sort()
for d in dds:
print(d)
df['ScanCount'].describe()
df['ScanCount'].hist(bins=100)
###Output
_____no_output_____
###Markdown
How many NA's are there by column?
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
What is the overlap between missing NAs in different columns?
###Code
len(df[df['DepartmentDescription'].isnull() & df['Upc'].isnull()])
len(df[df['DepartmentDescription'].isnull() & df['FinelineNumber'].notnull()])
len(df[df['FinelineNumber'].isnull() & df['Upc'].notnull()])
###Output
_____no_output_____
###Markdown
When finelineNumber or Upc is NA, what departments do they come from (when not also NA)?
###Code
df[df['FinelineNumber'].isnull() & df['DepartmentDescription'].notnull()]['DepartmentDescription'].value_counts()
df[df['Upc'].isnull() & df['DepartmentDescription'].notnull()]['DepartmentDescription'].value_counts()
###Output
_____no_output_____
###Markdown
When Upc is NA, what are the scan counts?
###Code
df[df['Upc'].isnull() & df['DepartmentDescription'].notnull()]['ScanCount'].value_counts()
df[df['FinelineNumber'].isnull() & df['DepartmentDescription'].notnull()]['ScanCount'].value_counts()
###Output
_____no_output_____
###Markdown
TripType by FineLineNumber
###Code
pd.crosstab(index=df['FinelineNumber'], columns=df['TripType']).idxmax()
###Output
_____no_output_____
###Markdown
Most common DepartmentDescription for each TripType
###Code
pd.crosstab(index=df['DepartmentDescription'], columns=df['TripType']).idxmax()
###Output
_____no_output_____
###Markdown
Most common Weekday for each TripType
###Code
pd.crosstab(index=df['Weekday'], columns=df['TripType']).idxmax()
###Output
_____no_output_____
###Markdown
Most common TripType for each weekday
###Code
pd.crosstab(index=df['TripType'], columns=df['Weekday']).hist(figsize=(20,10))
###Output
_____no_output_____
###Markdown
Clean data
###Code
dd = (df.pivot_table('ScanCount', ['VisitNumber'], ['DepartmentDescription']))
fln = df.pivot_table('ScanCount', ['VisitNumber'], ['FinelineNumber'])
weekdays = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
wd = df[['VisitNumber', 'Weekday']].drop_duplicates(subset='VisitNumber')
wd['Weekday'] = wd['Weekday'].apply(lambda x: weekdays.index(x))
trip_type = df[['VisitNumber', 'TripType']].drop_duplicates(subset='VisitNumber')
dd = df[['VisitNumber', 'TripType']].drop_duplicates()
dd['TripType'].value_counts()
result = trip_type.join(dd, on='VisitNumber')
result = result.join(fln, on='VisitNumber')
result['Weekday'] = wd['Weekday']
result2 = result.fillna(0.0)
result2
df['Returns'] = df['ScanCount'].apply(lambda x: 1 if x < 0 else 0)
rtns = df.pivot_table('Returns', ['VisitNumber'], aggfunc=sum)
rtns['Returns'].apply(lambda x: 1 if x > 0 else 0)
dd = list(set(df['DepartmentDescription'].fillna('')))
dd.sort()
dd
vcs = df['Upc'].value_counts()
for x in [int(x) for x in list(vcs.head(2000).index)]:
print('{}, '.format(x))
###Output
4011,
60538862097,
7874235186,
7874235187,
4046,
68113107862,
60538871457,
3338320027,
4087,
60538871461,
[output truncated: the cell printed the 2,000 most frequent Upc codes]
###Markdown
Let us analyze the data.
###Code
df.describe()
df.info()
print("Data Shape : ",df.shape)
print("Data columns : ",df.columns)
###Output
Data Shape : (31653, 12)
Data columns : Index(['ID', 'UsageClass', 'CheckoutType', 'CheckoutYear', 'CheckoutMonth',
'Checkouts', 'Title', 'Creator', 'Subjects', 'Publisher',
'PublicationYear', 'MaterialType'],
dtype='object')
###Markdown
Let us consider each column one by one and analyze
###Code
#ID column. let us see if we have unique data or not.
df['ID'].nunique()
###Output
_____no_output_____
###Markdown
Since the unique count equals the total number of rows, we can ignore this column; it also tells us there are no duplicate rows. Column: UsageClass
###Code
print("Count of unique value :",df['UsageClass'].nunique())
df['UsageClass'].value_counts()
###Output
Count of unique value : 1
###Markdown
Since every record has the same UsageClass (physical), this attribute will not help the analysis and can be dropped before model creation. Column: CheckoutType
###Code
print("Count of unique value :",df['CheckoutType'].nunique())
df['CheckoutType'].value_counts()
###Output
Count of unique value : 1
###Markdown
Likewise, CheckoutType takes a single value, so it will not help the analysis and can be dropped before model creation. Column: CheckoutYear
###Code
print("Count of unique value :",df['CheckoutYear'].nunique())
df['CheckoutYear'].value_counts()
###Output
Count of unique value : 1
###Markdown
All records have a CheckoutYear of 2005. Column: CheckoutMonth
###Code
print("Count of unique value :",df['CheckoutMonth'].nunique())
df['CheckoutMonth'].value_counts()
###Output
Count of unique value : 1
###Markdown
CheckoutMonth is 4 for the entire dataset. Column: Checkouts. About 60% of the records have a checkout count of 1.
###Code
print("Count of unique value :",df['Checkouts'].nunique())
df['Checkouts'].value_counts()/df.shape[0]
###Output
Count of unique value : 50
###Markdown
Column: PublicationYear. Here we see that publication year is a string field containing various junk values, so we will not use this column (a possible clean-up is sketched below).
###Code
print("Count of unique value :",df['PublicationYear'].nunique())
df['PublicationYear'].value_counts()
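# A hedged sketch (not used downstream): if PublicationYear were needed, a
# four-digit year could be pulled out of the messy strings and the rest
# treated as missing.
pub_year = (
    df['PublicationYear']
    .astype(str)
    .str.extract(r'(\d{4})', expand=False)
    .astype(float)
)
pub_year.describe()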
###Output
Count of unique value : 840
###Markdown
Column : MaterialType
###Code
print("Count of unique value :",df['MaterialType'].nunique())
print(df['MaterialType'].value_counts()/df.shape[0])
df['MaterialType'].value_counts().plot.bar()
###Output
Count of unique value : 8
BOOK 0.685780
SOUNDDISC 0.131078
VIDEOCASS 0.086911
VIDEODISC 0.044861
SOUNDCASS 0.032224
MIXED 0.010963
MUSIC 0.005213
CR 0.002970
Name: MaterialType, dtype: float64
###Markdown
Looking at the distribution of the target variable, about 68% of the items are books. The MIXED, MUSIC, and CR counts are minimal and could be combined with other classes if needed; a possible regrouping is sketched in the next cell.
###Code
# Now let's see what fraction of each column is null
df.isnull().sum()/df.shape[0]
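# A hedged sketch of the regrouping suggested above: fold the rare classes
# into a single 'OTHER' bucket before modelling.
rare_classes = ['MIXED', 'MUSIC', 'CR']
material_grouped = df['MaterialType'].replace(rare_classes, 'OTHER')
material_grouped.value_counts(normalize=True)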
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis. This notebook aims to explore the beer dataset and shortlist the less popular but good quality beers to build a recommender system. **Prepared by: Group 7** *Chan Cheah Cha A0189006A, Chua Kai Bing A0185606Y, Goh Jia Yi A0185610J, Lim Jia Qi A0189626M, Tan Zen Wei A0188424X*
###Code
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# Setting paths
root = '/content/gdrive/MyDrive/BT4014/Codes/Data/'
###Output
_____no_output_____
###Markdown
Exploring the dataset
###Code
beer_df = pd.read_csv(root + 'beer_reviews.csv')
beer_df
beer_df.columns
beer_df.describe()
beer_df['beer_beerid'].nunique()
# 66055 different types of beers
beer_df['review_profilename'].nunique()
# 33387 different users
beer_df['brewery_id'].nunique()
# 5840 different types of brewery
###Output
_____no_output_____
###Markdown
Exploring Popularity of beers (by review count)
###Code
# Computing review count of each beer
beer_df['count'] = 1
beer_count = beer_df[['beer_name','count']].groupby('beer_name').sum().sort_values(by=['count'],ascending=False)
beer_count = beer_count.reset_index()
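# A hedged cross-check: the helper 'count' column is equivalent to a plain
# value_counts() on beer_name.
beer_count_check = beer_df['beer_name'].value_counts()
beer_count_check.head()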
# Summary stats
beer_count.describe()
plt.figure(figsize=(10,7))
plt.plot(beer_count.index,beer_count['count'])
plt.title('Review frequency all beers')
plt.xlabel('Beer ID')
plt.ylabel('Number of reviews')
plt.xticks(np.arange(0, 60000, 5000), rotation='vertical')
plt.show()
plt.figure(figsize=(10,8))
plt.plot(beer_count.head(50).index,beer_count.head(50)['count'], marker='o')
plt.title('Top 50 most popular beers')
plt.xlabel('Beer ID')
plt.ylabel('Review Count')
plt.xticks(np.arange(0, 50, 1))
plt.axvline(9.5, 0, 1, c='red')
#plt.axvline(19.5, 0, 1, c='red')
plt.show()
plt.figure(figsize=(5,7))
plt.boxplot(beer_count['count'])
plt.title('Box Plot of the number review counts by beers')
plt.show()
# Keep popular beers (review count > 100), excluding the top 10
popular_beers = beer_count.loc[beer_count['count']>100]
popular_beers = popular_beers[10:]
popular_beers
###Output
_____no_output_____
###Markdown
Exploring Quality of beers (by average overall ratings)
###Code
# Find out average overall ratings of the beers
beer_reviews = beer_df[['beer_name','review_overall']].groupby('beer_name').mean().sort_values(by=['review_overall'],ascending=False)
beer_reviews = beer_reviews.reset_index()
beer_reviews.rename(columns={'review_overall': 'review_mean'}, inplace=True) ##rename aggregated col
beer_reviews.head(30)
# Summary stats
beer_reviews.describe()
plt.figure(figsize=(5,7))
plt.boxplot(beer_reviews['review_mean'])
plt.title('Box Plot of the average overall reviews')
plt.show()
quality_beers = beer_reviews.loc[beer_reviews['review_mean']>4]
quality_beers
###Output
_____no_output_____
###Markdown
Final Shortlisted beers
###Code
# Join both df
beer_combined = pd.merge(quality_beers, popular_beers, on=["beer_name"])
shortlisted = beer_combined[:100]
shortlisted
shortlisted.describe()
fig, ax1 = plt.subplots(figsize = (5,6))
# ax.boxplot([shortlisted['review_mean'],shortlisted['count']])
props = dict(widths=0.7,patch_artist=True, medianprops=dict(color="gold"))
box1=ax1.boxplot(shortlisted['review_mean'].values, positions=[0], **props)
ax2 = ax1.twinx()
box2=ax2.boxplot(shortlisted['count'].values,positions=[1], **props)
plt.title('Box Plot of shortlisted beers')
ax1.set_xticklabels(['avg ratings','review count'])
plt.show()
###Output
_____no_output_____
###Markdown
3D Labels EDA. Explore the file structure, data structure, and labels of our 3D images of mouse skulls, and look at some of the issues facing the product development. The image files are in their original .mnc (MINC) format, a medical-imaging volume format, while the keypoint files are in .tag format.---We are using the `nibabel` package to read the `.mnc` files
###Code
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np
img = nib.load("/Users/michaeldac/Code/CUNY/698/Skulls/475.mnc")
###Output
_____no_output_____
###Markdown
Let's get the type and shape of the image data.
###Code
data = img.get_data()
print("The data shape is:", data.shape)
print("The type of data is:", type(data))
np.set_printoptions(precision=2, suppress=True)
print(data[0:4, 0:4, 0:4])
###Output
The data shape is: (698, 397, 456)
The type of data is: <class 'numpy.ndarray'>
[[[-242. -186.99 -304.03 -101.02]
[ -59.98 -216.98 -267.03 -55.02]
[ 31.01 29.98 -118.01 68.97]
[ -35.98 230.02 337.03 221.01]]
[[-179.02 -62. 148.97 143.02]
[ -72.02 7.98 93.98 99.02]
[ 59.02 125. 152. 146. ]
[ 64. -3.98 -45.98 40.99]]
[[ 8.03 128.02 128.99 -11. ]
[ 92.01 181.01 90.02 1.02]
[ 88.99 41.98 -118.01 -69.98]
[ 137.02 43.98 -114.99 -23.03]]
[[-117. -31.99 -94.99 -12. ]
[ 103.03 32.02 -155.98 -89. ]
[ -3.99 32.02 -208. -107.98]
[ 208.03 132.99 -178.99 26.98]]]
###Markdown
As we can see, this particular image has a shape of 698 x 397 x 456 voxels. Since we are dealing with three-dimensional images, we work with volume pixels, or voxels.-----Let's take a look at the images by plotting them. Since they are 3D and we are using a 2D canvas, we can only look at particular slices of the 3D image.
###Code
img_data = img.get_fdata()
def show_slices(slices):
    """Show a row of 2D image slices side by side."""
    fig, axes = plt.subplots(1, len(slices))
    for i, s in enumerate(slices):
        axes[i].imshow(s.T, cmap="gray", origin="lower")
slice_0 = img_data[350, :, :]
slice_1 = img_data[:, 200, :]
slice_2 = img_data[:, :, 225]
#show_slices([slice_0, slice_1, slice_2])
# plt.suptitle("Center slices for EPI image") # doctest: +SKIP
plt.imshow(slice_1)
plt.show()
###Output
_____no_output_____
###Markdown
You can see that in each of the three image slices there are differences in brightness which correspond to the values in the array. The first image appears to be a top-down view of the mouse's skull. Unlike most photos, these images allow negative values instead of using a 0-255 scale. More investigation is needed to find the best way to scale them for a neural network.
###Code
plt.imshow(slice_2)
plt.show()
###Output
_____no_output_____
###Markdown
The second image looks like its a side view of the skull and the third image appears to be a view from the back of the head.
###Code
plt.imshow(slice_0)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's move on to the keypoint files. We've created a `tag_parser` function to split up the original file, remove the excess, and obtain a 3D ndarray.
###Code
import pandas as pd
from io import StringIO
from preprocessing import tag_parser
tags = tag_parser('/Users/michaeldac/Code/CUNY/698/Skulls/475_landmarks.tag')
tags
tags.shape
img_475 = (data, tags)
img_475_array = img_475[0]
img_476 = (data, tags)
img_475[0]
np.save('img_475.npy', img_475)
reload = np.load('img_475.npy', allow_pickle=True)
reload
###Output
_____no_output_____
###Markdown
The 3D images are accompanied by `.tag` files that denote the `(x, y, z)` coordinates of key points measured in mm. There are currently only 4 points, as initially we are only trying to orient the mouse skulls in space.---To match these to points in our images we need to find out how large the voxels (3D pixels) are:
###Code
print("The voxel size is:", img.header.get_zooms(), 'mm in each dimension')
###Output
The voxel size is: (0.035, 0.035, 0.035) mm in each dimension
###Markdown
Therefore, we can divide the point location by the voxel size to get the points in space of the key points for this image.
###Code
pixel_loc = np.round(tags / 0.035)
pixel_loc
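# A hedged sanity check (added here; it assumes the tag columns follow the
# same axis order as the image array): every landmark index should fall
# inside the image volume.
landmark_idx = np.asarray(pixel_loc)
inside = (landmark_idx >= 0).all() and (landmark_idx < np.array(data.shape)).all()
print("all landmarks inside volume:", bool(inside))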
data
###Output
_____no_output_____
###Markdown
When plotted on the skull image we can see that these points pertain to the left and right eyes, left and right front molars and the tip of the nose. These are used to orientate the skull in 3D space in order to make labeling easier.
###Code
def mri_point_plot(img, df, dim_cols=['x', 'z'], iter_cols='y'):
"""Graphs an points. pt_cols is used to set the cols to iterate
over (different views)
"""
ax = []
fig = plt.figure(figsize=(9, 8))
columns = 3
rows = 2
for i in df.index:
y_slice = int(df.loc[i, iter_cols])
im = img[:, y_slice, :]
ax.append( fig.add_subplot(rows, columns, i+1))
ax[-1].set_title("Image depth: "+str(y_slice)) # set title
plt.imshow(im)
plt.plot(df.loc[i, dim_cols[0]], df.loc[i, dim_cols[1]], 'ro')
plt.show()
###Output
_____no_output_____
###Markdown
Another example of a skull:
###Code
img2 = nib.load("/Users/michaeldac/Code/CUNY/698/Skulls/930.mnc")
tags2 = tag_parser("/Users/michaeldac/Code/CUNY/698/Skulls/930_landmarks.tag")
pix_size = img2.header.get_zooms()
print(pix_size)
img2 = img2.get_data()
tags2 = tags2 / pix_size[0]
mri_point_plot(img2, tags2)
img2 = nib.load("MouseSkullData/943.mnc")
tags2 = tag_parser("MouseSkullData/943_landmarks.tag")
pix_size = img2.header.get_zooms()
print(pix_size)
img2 = img2.get_data()
tags2 = tags2 / pix_size[0]
mri_point_plot(img2, tags2)
img2 = nib.load("/Users/michaeldac/Code/CUNY/698/Skulls/1837.mnc")
tags2 = tag_parser("/Users/michaeldac/Code/CUNY/698/Skulls/1837_landmarks.tag")
pix_size = img2.header.get_zooms()
print(pix_size)
img2 = img2.get_data()
tags2 = tags2 / pix_size[0]
mri_point_plot(img2, tags2)
###Output
_____no_output_____
###Markdown
Explore image size. The actual image data, when stored as a numpy array, is huge at around 1 Gb.
###Code
import sys
sys.getsizeof(img_data)
print(round(sys.getsizeof(img_data) / 1e9, 2), "Gb")
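# A hedged back-of-the-envelope estimate: after padding to a cube and scaling
# to 128**3 voxels, a float64 volume is far smaller than the raw array above.
scaled_bytes = 128 ** 3 * 8  # 8 bytes per float64 voxel
print(round(scaled_bytes / 1e6, 1), "Mb per scaled image")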
###Output
_____no_output_____
###Markdown
Further, we need to be concerned with the dimensions of the images and the voxel size. The image dimensions matter because many deep learning algorithms require a uniform input size, and we will most likely have to scale the images down to keep training tractable and avoid overfitting on such high-dimensional data. The voxel size also matters because our coordinates are denoted in millimeters and we need to map them to the appropriate location even after scaling.
###Code
import os
from tqdm import tqdm
files = os.listdir('/Users/michaeldac/Code/CUNY/698/Skulls')
mnc_files = [f for f in files if 'mnc' in f]
img_dims = {}
for i in tqdm(mnc_files):
dims = nib.load(str('/Users/michaeldac/Code/CUNY/698/Skulls/' + i)).header.get_data_shape()
img_dims[i] = dims
dim_df = pd.DataFrame.from_dict(img_dims).T
dim_df.columns = ['x', 'y', 'z']
dim_df.head()
img_res = {}
for i in tqdm(mnc_files):
res = nib.load(str('/Users/michaeldac/Code/CUNY/698/Skulls/' + i)).header.get_zooms()
img_res[i] = res
res_df = pd.DataFrame.from_dict(img_res).T
res_df.columns = ['x', 'y', 'z']
res_df.head()
res_df.loc[res_df.x != 0.035]
dim_df.describe()
dim_df.loc[dim_df.y == 888]
###Output
_____no_output_____
###Markdown
So we can see that the voxel size is almost always `0.035`, though some images differ, and outside this initial training set we can expect the voxel sizes to vary considerably. Thus we need a solution that scales to whatever size is provided. ----We also need to pick an image shape to pad our images to. The issue is that the dimensions are not consistently even or odd, so adding a uniform band around each side of an image is not always possible; instead, the pad size will have to differ by one voxel on one side in roughly half of the specimens.
###Code
from ThreeDLabeler import images
from ThreeDLabeler.preprocessing import tag_parser
from ThreeDLabeler.plotting import mri_point_plot
# importlib.reload(ThreeDLabeler.images)
from preprocessing import mri_point_plot as mpp
from preprocessing import tag_parser
from preprocessing import Image
im = Image(data, (0.035, 0.035, 0.035), tags)
mpp(im.voxels, im.point_positon)
im.cube()
mri_point_plot(im.voxels, im.point_positon)
im.voxels
im.scale(128)
mri_point_plot(im.voxels, im.point_positon)
reduced_475 = (im.voxels, tags)
np.save('475_reduced.npy', reduced_475)
import os
os.getcwd()
reload_475 = np.load('475_reduced.npy', allow_pickle=True)
reload_475
print(im.point_positon)
print(im.voxels.shape)
mri_point_plot(im.voxels, im.point_positon)
im.cube()
print(im.point_positon)
print(im.voxels.shape)
im.scale(128)
type(im)
print(im.point_positon)
print(im.voxels.shape)
mri_point_plot(im.voxels, im.point_positon)
###Output
_____no_output_____
###Markdown
We can sanity-check the positioning with a quick nilearn glass-brain plot:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from nilearn import plotting
plotting.plot_glass_brain("MouseSkullData/test.nii")
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
class Image:
"""
Processor class for annotating 3D scans.
Arguments:
voxels: a 3D numpy array
voxel_size: a tuple/list of three numbers indicating the voxel size in mm, cm etc
point_position: the position in 3D of each point of interest. See tag_parser for more info
"""
def __init__(self, voxels, voxel_size, point_position):
self.voxels = voxels
self.voxel_size = voxel_size
self.point_position = point_position / voxel_size
def cube(self):
"""Returns a cube image with all dimensions equal to the longest."""
dims = self.voxels.shape
max_dim = max(dims)
x_target = (max_dim - dims[0]) / 2
y_target = (max_dim - dims[1]) / 2
z_target = (max_dim - dims[2]) / 2
self.voxels = np.pad(self.voxels,
((int(np.ceil(x_target)), int(np.floor(x_target))),
(int(np.ceil(y_target)), int(np.floor(y_target))),
(int(np.ceil(z_target)), int(np.floor(z_target)))),
'constant',
constant_values=(0))
self.point_position = self.point_position + [np.ceil(z_target),
np.ceil(y_target),
np.ceil(x_target)]
return(self)
def scale(self, size=128):
"""
Scales an cubic image to a certain number of voxels.
This function relies on numpy's ndimage.zoom function
"""
scale_factor = size / max(self.voxels.shape)
self.voxels = ndimage.zoom(self.voxels, scale_factor)
self.point_position = self.point_position * scale_factor
self.voxel_size = False # To ignore this
return(self)
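# A hedged toy example of the padding rule used in cube(): when the difference
# between a dimension and the largest dimension is odd, one side gets one
# extra plane of zeros.
toy = np.ones((3, 4, 5))
max_dim = max(toy.shape)
pads = [(int(np.ceil((max_dim - d) / 2)), int(np.floor((max_dim - d) / 2)))
        for d in toy.shape]
print(pads)                                                    # [(1, 1), (1, 0), (0, 0)]
print(np.pad(toy, pads, 'constant', constant_values=0).shape)  # (5, 5, 5)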
import numpy as np
import nibabel as nib  # needed by package_to_npy below
from tqdm import tqdm
from io import StringIO
import time
import os
def package_to_npy(file_path: str, mnc_files: list, tag_files: list, mnc_names: list):
"""
INPUT: Path where raw image files exist,
List of .mnc files,
List of corresponding .tag files,
List of .mnc prefix names
The .mnc file is loaded
The .tag file is parsed and converted to an ndarray via tag_parser()
Processor class is instantiated with the .mnc and .tag file and cubes
any images shaped as rectangular prisms and scales down image
resolution to 128x128x128.
OUTPUT: Tuple of the processed .mnc and .tag files stored as .npy file
and saved to disk locally.
"""
print('Starting image processing...')
count = 0
for i in tqdm(range(len(mnc_files))):
img = nib.load(f'{file_path}/{mnc_files[i]}')
tag = tag_parser(f'{file_path}/{tag_files[i]}')
im = Processor(img.get_data(), img.header.get_zooms(), tag)
im.cube().scale(128)
npy_file = (im.voxels, im.point_position)
np.save(f'{file_path}/{mnc_names[i]}.npy', npy_file)
count += 1
print(f'{count} .mnc/.tag file pairs have been processed and saved as .npy files')
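# A hedged usage sketch for package_to_npy (the directory is the one used
# earlier in this notebook; the list-building and the '_landmarks.tag' naming
# are assumptions based on the files loaded above). The call itself is left
# commented out.
skull_dir = '/Users/michaeldac/Code/CUNY/698/Skulls'
all_files = sorted(os.listdir(skull_dir))
mnc_files_batch = [f for f in all_files if f.endswith('.mnc')]
tag_files_batch = [f.replace('.mnc', '_landmarks.tag') for f in mnc_files_batch]
mnc_names_batch = [f.replace('.mnc', '') for f in mnc_files_batch]
# package_to_npy(skull_dir, mnc_files_batch, tag_files_batch, mnc_names_batch)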
x = reload[0]
y = reload[1]
y
img475 = Image(x, 1, y)
img475.cube()
img475.voxels.min()
nyp_cubed = (img475.voxels, img475.point_position)
np.save('/Users/michaeldac/Code/CUNY/698/Downloaded_Skulls/nyp_cubed.npy',nyp_cubed)
reloaded_nyp_cubed = np.load('/Users/michaeldac/Code/CUNY/698/Downloaded_Skulls/nyp_cubed.npy', allow_pickle=True)
reloaded_nyp_cubed[0].max
###Output
_____no_output_____
###Markdown
Enron EDA 1. Loading the dataset
###Code
import numpy as np
import pandas as pd
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
from ggplot import *
from IPython.display import Image
import warnings
from feature_engineering.feature_format import feature_format, target_feature_split
warnings.filterwarnings('ignore')
%config InlineBackend.figure_format = 'retina'
with open("./data/final_project_dataset.pkl", "rb") as data_file:
data_dict = pickle.load(data_file)
df = pd.DataFrame.from_records(list(data_dict.values()), index=data_dict.keys())
# df = df.replace('NaN', 0).drop(['email_address'], axis=1)
df.drop('poi', inplace=True, axis=1)
df.columns.values
features_list = ['poi'] + list(df.columns.values)
my_dataset = df.to_dict('index')
data = feature_format(my_dataset, features_list, sort_keys = True)
labels, features = target_feature_split(data)
len(features)
###Output
_____no_output_____
###Markdown
2. Dataframe stats 2.1 Number of 'NaN'
###Code
def counts(col, tag):
counter = 0
for each in col:
if each == tag:
counter += 1
return counter
df.apply(lambda col: counts(col, 'NaN'), axis=0)
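# A hedged one-line equivalent of the helper above: the placeholder 'NaN'
# strings can be counted with a vectorized comparison.
(df == 'NaN').sum()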
###Output
_____no_output_____
###Markdown
2.2 Datatype Count
###Code
dtype_df = df.dtypes.reset_index()
dtype_df.columns = ["Count", "Column Type"]
dtype_df.groupby("Column Type").aggregate('count').reset_index()
###Output
_____no_output_____
###Markdown
2.3 Dataframe Memory Usage
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Index: 146 entries, METTS MARK to GLISAN JR BEN F
Data columns (total 20 columns):
bonus 146 non-null int64
deferral_payments 146 non-null int64
deferred_income 146 non-null int64
director_fees 146 non-null int64
exercised_stock_options 146 non-null int64
expenses 146 non-null int64
from_messages 146 non-null int64
from_poi_to_this_person 146 non-null int64
from_this_person_to_poi 146 non-null int64
loan_advances 146 non-null int64
long_term_incentive 146 non-null int64
other 146 non-null int64
poi 146 non-null bool
restricted_stock 146 non-null int64
restricted_stock_deferred 146 non-null int64
salary 146 non-null int64
shared_receipt_with_poi 146 non-null int64
to_messages 146 non-null int64
total_payments 146 non-null int64
total_stock_value 146 non-null int64
dtypes: bool(1), int64(19)
memory usage: 23.0+ KB
###Markdown
Data Pattern POI counts
###Code
df.replace('NaN', 0, inplace=True)
POI_type = {'POI': len(df[df.poi == True]),
'non POIs': len(df[df.poi == False])}
pd.DataFrame(list(POI_type.items()),
columns=['Class', 'Counts'])
###Output
_____no_output_____
###Markdown
Correlation
###Code
sns.set(font_scale=1.4)
f, ax = plt.subplots(figsize=(14, 11))
cmap = sns.diverging_palette(220, 20, sep=20, as_cmap=True)
ax = sns.heatmap(df.corr(), cmap=cmap, vmax=.5, vmin=-.3, center=0,
square=True, linewidths=.5, cbar=0,
annot=True, annot_kws={"size":8})
plt.show()
###Output
_____no_output_____
###Markdown
Univariate Analysis
###Code
df = df[df.index != 'TOTAL']
h = ggplot(aes(x='bonus'), data=df) + \
geom_histogram(binwidth=500000,
fill='deeppink',
color='black',
alpha=0.5) +\
theme(plot_title=element_text(size=20)) +\
scale_x_continuous(breaks=range(0, 8000000, 1500000)) +\
ggtitle('Bonus distribution')
t = theme_bw()
t._rcParams['font.size'] = 20
t._rcParams['figure.figsize'] = 10, 6
h + t
###Output
_____no_output_____
###Markdown
As we can see, the data is highly skewed, with most values sitting between **0 and 1,500,000**. Let's look at the data within this range.
###Code
df['log_bonus'] = np.log10(df.bonus + 0.1)
h = ggplot(aes(x='log_bonus'), data=df) + \
geom_histogram(binwidth=.5,
fill='deeppink',
color='black',
alpha=0.5) +\
theme(plot_title = element_text(size=20)) +\
scale_x_continuous(limits=(4, 8)) +\
ggtitle('Bonus distribution (Log Scale)')
t = theme_bw()
t._rcParams['font.size'] = 20
t._rcParams['figure.figsize'] = 10, 6
h + t
###Output
_____no_output_____
###Markdown
The distribution seems pretty normal barring the people with 0 bonus.
###Code
h = ggplot(aes(x='long_term_incentive'), data=df) + \
geom_histogram(binwidth=500000,
fill='darkgreen',
color='black',
alpha=0.5) +\
theme(plot_title = element_text(size=20)) +\
ggtitle('Long term incentive distribution')
t = theme_bw()
t._rcParams['font.size'] = 20
t._rcParams['figure.figsize'] = 10, 6
h + t
###Output
_____no_output_____
###Markdown
Feature Importance Using Random Forest
###Code
Image(url="./img/importance_random_forest.png", retina=True)
###Output
_____no_output_____
###Markdown
Using XGBoost
###Code
Image(url="./img/importance_xgboost.png", retina=True)
new_df = pd.read_pickle('final_df.pkl')
###Output
_____no_output_____
###Markdown
We can see from the feature importances that **poi_interaction** and **deferred_income** are the two most important features. Let's explore these variables. Distribution of **poi_interaction**
###Code
h = ggplot(aes(x='poi_interaction'), data=new_df) + \
geom_histogram(fill='orange',
color='black',
alpha=0.5) +\
theme(plot_title = element_text(size=20)) +\
ggtitle('Poi Interaction distribution')
t = theme_bw()
t._rcParams['font.size'] = 20
t._rcParams['figure.figsize'] = 10, 6
h + t
h = ggplot(aes(x='deferred_income'), data=new_df) + \
geom_histogram(fill='orange',
color='black',
alpha=0.5) +\
theme(plot_title = element_text(size=20)) +\
ggtitle('Deferred income distribution')
t = theme_bw()
t._rcParams['font.size'] = 20
t._rcParams['figure.figsize'] = 10, 6
h + t
new_df.groupby('poi').describe().poi_interaction.reset_index()
sns.set_style("whitegrid")
sns.barplot(y='poi_interaction', x='poi', data=new_df)
plt.title('Distribution of POIs for poi_interaction')
plt.show()
###Output
_____no_output_____
###Markdown
Housekeeping
###Code
data_dir = Path.cwd().joinpath('OUTPUT')
image_dir = Path.cwd().joinpath('OUTPUT').joinpath('IMAGES')
config_dir = Path.cwd().joinpath('CONFIG')
column_dir = Path.cwd().joinpath('OUTPUT').joinpath('COLUMNS')
report_dir = Path.cwd().joinpath('OUTPUT').joinpath('REPORTING')
###Output
_____no_output_____
###Markdown
Load the Data This notebook uses the `df_merged_with_features` dataframe, which was the output of the `preprocessing` notebook.
###Code
filename = 'df_features'
with open(str(data_dir.joinpath(filename)), 'rb') as infile:
df = pickle.load(infile)
# Drop duplicates
df = df.loc[~df.index.duplicated(keep='first')]
# Define the data types of the columns
col_dtype_df = pd.read_csv(
config_dir.joinpath('mapping_column_types_extended.csv'),
index_col='columns')
df = df.apply(lambda x: utils.set_column_type2(x, col_dtype_df))
df.dtypes
###Output
_____no_output_____
###Markdown
Add a column for a float type of `student_rating`; this is required for aggregation. Ratings vs Blanks
###Code
xlabel = ''
ylabel = 'Count'
title = 'Rated vs Not Rated'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = pd.DataFrame({'count': [df.student_rating.isnull().sum(), df.student_rating.notnull().sum()],
'type': ['Not rated', 'Rated'],})
ax = sns.barplot(x='type',
y='count',
data=data)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Just less than half of the sessions were rated by the students. Comments vs Blanks
###Code
xlabel = ''
ylabel = 'Count'
title = 'Comment vs No Comment'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = pd.DataFrame({'count': [df.student_comment_word_length.isnull().sum(), df.student_comment_word_length.notnull().sum()],
'type': ['No Comment', 'Comment'],})
ax = sns.barplot(x='type',
y='count',
data=data)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
There are far fewer commented sessions than uncommented ones, which suggests that commenting takes a lot more effort. Rating vs Comments. Rating Distributions With Comments
###Code
xlabel = 'Student Ratings'
ylabel = 'Count'
title = 'Rating Distributions (Commented)'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df[df.student_comment_word_length > 0]
ax = sns.countplot(x='student_rating',
data=data)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title
)
plt.tight_layout()
plt.savefig(image_path)
xlabel = 'Student Ratings'
ylabel = 'Count'
title = 'Rating Distributions (Not Commented)'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df[df.student_comment_word_length.isnull()]
ax = sns.countplot(x='student_rating',
data=data)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Kolmogorov Smirnov Test
###Code
column = 'student_rating'
ratings_w_comments = df[df.student_comment_word_length.notnull()]['student_rating'].dropna()
ratings_wo_comments = df[df.student_comment_word_length.isnull()]['student_rating'].dropna()
ratings_wo_comments.unique()
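# ks_2samp returns (statistic, p-value); a large p-value means we cannot
# reject the hypothesis that both samples come from the same distribution.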
ks_2samp(ratings_w_comments, ratings_wo_comments)
###Output
_____no_output_____
###Markdown
A high p-value would indicate that the two rating distributions are essentially the same, i.e. that whether a student comments or not does not affect the rating. Relationship Between Rating and Commenting
###Code
xlabel = ''
ylabel = 'Count'
title = 'Ratings vs Comments'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = pd.DataFrame({'type': ['Rated, No Comment',
'Not Rated, Commented',
'Rated, Commented'],
'count': [((df.student_rating_numeric > 0) & (df.student_comment == "")).sum(),
((df.student_rating_numeric.isna()) & (df.student_comment != "")).sum(),
((df.student_rating_numeric > 0) & (df.student_comment != "")).sum(),
]})
ax = sns.barplot(x='type',
y='count',
data=data)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title
)
plt.tight_layout()
plt.savefig(image_path)
plt.show()
###Output
_____no_output_____
###Markdown
Ratings vs Service by Sex
###Code
data = (
df[['service', 'sex_guess', 'student_rating_numeric']]
.groupby(['service', 'sex_guess'])
.mean()
)
data
xlabel = 'Service'
ylabel = 'Average Rating'
title = 'Ratings vs Service by Sex'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (
df[['service', 'sex_guess', 'student_rating_numeric']]
.groupby(['service', 'sex_guess'])
.mean()
.reset_index()
)
ax = sns.barplot(
x='service',
y='student_rating_numeric',
hue='sex_guess',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title
)
plt.legend(
bbox_to_anchor=(1.05, 1),
loc=2,
borderaxespad=0.
)
plt.tight_layout()
plt.savefig(image_path)
plt.show()
###Output
_____no_output_____
###Markdown
Student Rating Distribution
###Code
df.student_rating.value_counts()
xlabel = 'Student Rating'
ylabel = 'Count'
title = 'Distribution of Student Rating'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
ax = sns.barplot(x=df.student_rating.value_counts().index,
y=df.student_rating.value_counts())
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
By `client_id` Examine how sensitive the student rating is to `wait_seconds` for the 5 largest clients by number of sessions.
###Code
clients_by_num_sessions = (df
.groupby(['service', 'client_id'])
.agg({'session_id': 'count',
'student_id': pd.Series.nunique,
'student_rating_float': 'mean',
'student_comment_char_word': 'mean',
'student_sessions_total': 'mean',
'sentiment_aggregated': 'mean',
'tutor_id': pd.Series.nunique,
'tutor_age': 'mean',
'tutor_sessions_total': 'mean',
'tutor_experience_days': 'mean',
})
.sort_values(by='session_id', ascending=False)
.rename(columns={'session_id':'num_sessions',
'student_rating_float': 'average_student_rating',
'sentiment_aggregated': 'average_sentiment'})
.reset_index()
)
clients_by_num_sessions.head()
###Output
_____no_output_____
###Markdown
Calculate, for each client, the correlation between the student rating and the wait time.
###Code
grouping = ['service', 'client_id']
cols = ['student_rating_fixed_float', 'wait_seconds']
corr_rating_wait = (df
.groupby(grouping)[cols]
.corr()
.reset_index()
.query('level_2 == "student_rating_fixed_float"')
.drop(labels=['student_rating_fixed_float', 'level_2'], axis='columns')
                     .rename(columns={'wait_seconds': 'corr'})
)
corr_rating_wait.head()
corr_rating_wait.shape
###Output
_____no_output_____
###Markdown
Merge with `clients_by_num_sessions` to get the `num_sessions` column.
###Code
corr_rating_wait = (corr_rating_wait
.merge(clients_by_num_sessions,
how='left',
on=['service', 'client_id'])
)
corr_rating_wait.head()
corr_rating_wait.shape
###Output
_____no_output_____
###Markdown
Merge with `df` to get the `client_type_desc`.
###Code
corr_rating_wait = (corr_rating_wait
.merge(df[['client_id', 'client_type_desc']]
.drop_duplicates(),
how='left',
on='client_id')
)
corr_rating_wait.head()
corr_rating_wait.shape
###Output
_____no_output_____
###Markdown
CL Client IDs with the largest number of sessions over the whole period.
###Code
corr_rating_wait.query('service == "cl"').sort_values(by='num_sessions', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
WF
###Code
corr_rating_wait.query('service == "wf"').sort_values(by='num_sessions', ascending=False).head(10)
service = 'cl'
top_client_id = (corr_rating_wait
.query('service == @service')
.sort_values(by='num_sessions', ascending=False)
.client_id
.head(1)
.values[0]
)
data = (df
.query('service == @service and client_id == @top_client_id')
)
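# The subset above is not visualised anywhere; a minimal sketch of the intended
# sensitivity check (student rating vs wait time) for the largest CL client:
ax = sns.scatterplot(x='wait_seconds', y='student_rating_fixed_float', data=data)
ax.set(xlabel='Wait (seconds)',
       ylabel='Student rating',
       title=f'Rating vs Wait Time ({service.upper()}, client {top_client_id})')
plt.tight_layout()
plt.show()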
###Output
_____no_output_____
###Markdown
By `client_type_desc` Rating vs Waiting Time by `client_type_desc` Calculate the average `student_rating` and `sentiment_aggregated`.
###Code
grouping = ['service', 'client_type_desc']
cols = ['student_rating_fixed_float', 'sentiment_aggregated']
df.groupby(grouping)[cols].mean()
###Output
_____no_output_____
###Markdown
CL
###Code
service = 'cl'
df_subset = df.query('service == @service')
df_subset.client_type_desc.unique()
grid = sns.FacetGrid(
df_subset,
row='client_type_desc',
aspect=4,
)
grid = grid.map(
sns.scatterplot,
'wait_seconds',
'student_rating_fixed_float')
###Output
_____no_output_____
###Markdown
Intents and Topics
###Code
order_intent_full = df.query('intent_luis != "None"').intent_luis.value_counts().index
title = 'Count of Intents (excl NONE)'
x_label = 'Count'
y_label = 'Intent'
plt.figure(figsize=(13,5))
ax = sns.countplot(y='intent_luis',
data = df.query('intent_luis != "None"'),
order = order_intent_full,
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
# Saving
filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
image_path = image_dir.joinpath(filename)
plt.tight_layout()
plt.savefig(image_path)
second_dimension = 'student_rating'
value = 1
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'intent_luis != "None" and {second_dimension} == @value')['intent_luis']
plt.figure(figsize=(13,5))
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
# Saving
filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
image_path = image_dir.joinpath(filename)
plt.tight_layout()
plt.savefig(image_path)
second_dimension = 'student_rating'
value = 2
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'intent_luis != "None" and {second_dimension} == @value')['intent_luis']
plt.figure(figsize=(13,5))
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
# Saving
filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
image_path = image_dir.joinpath(filename)
plt.tight_layout()
plt.savefig(image_path)
second_dimension = 'student_rating'
value = 3
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'intent_luis != "None" and {second_dimension} == @value')['intent_luis']
plt.figure(figsize=(13,5))
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
# Saving
filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
image_path = image_dir.joinpath(filename)
plt.tight_layout()
plt.savefig(image_path)
second_dimension = 'student_rating'
value = 4
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'intent_luis != "None" and {second_dimension} == @value')['intent_luis']
plt.figure(figsize=(13,5))
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
# Saving
filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
image_path = image_dir.joinpath(filename)
plt.tight_layout()
plt.savefig(image_path)
second_dimension = 'student_rating'
value = 5
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'intent_luis != "None" and {second_dimension} == @value')['intent_luis']
plt.figure(figsize=(13,5))
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
# Saving
filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
image_path = image_dir.joinpath(filename)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
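###Markdown
The five rating-specific cells above differ only in `value`; looping over the ratings (as the n-gram section does later with `itertools.product`) produces the same figures with far less repetition. A minimal sketch, assuming the same `df`, `order_intent_full` and `image_dir`:
###Code
for value in range(1, 6):
    title = f'Count of Intents (excl NONE): student_rating = {value}'
    data = df.query('intent_luis != "None" and student_rating == @value')['intent_luis']
    plt.figure(figsize=(13, 5))
    ax = sns.countplot(y=data, order=order_intent_full)
    ax.set(xlabel='Count', ylabel='Intent', title=title)
    filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
    plt.tight_layout()
    plt.savefig(image_dir.joinpath(filename))
###Output
_____no_output_____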
###Markdown
Mapping to the [SERVQUAL](https://en.wikipedia.org/wiki/SERVQUAL) Categories
###Code
intent_mapping = pd.read_csv(config_dir.joinpath('mapping_intents.csv'))
intent_mapping.head()
df = utils.add_column(
df,
column_dir,
'intent_luis')
###Output
_____no_output_____
###Markdown
Merge the `intent_luis` with the ... topics.
###Code
df = df.merge(
intent_mapping,
how='left',
on='intent_luis',
)
df[['intent_luis', 'intent_servqual']].dropna().head()
utils.save_object(
df.intent_servqual,
'intent_servqual',
column_dir,
)
df.loc[174, ['intent_luis', 'intent_servqual', 'student_comment']]
# Set the category order using the overall data set
order = df.query('intent_client != "none"').intent_client.value_counts().index
xlabel = 'Categories'
ylabel = 'Count'
title = 'Comment Category Distribution'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query('intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order
)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Comment Category Distribution by Service
###Code
service = 'cl'
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({service.upper()})'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query('service == @service and intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
service = 'wf'
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({service.upper()})'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query('service == @service and intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Comment Category Distribution by Rating
###Code
filter_var = 'student_rating'
filter_val = 1
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution (Rating: {filter_val})'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var} == {filter_val} and intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var = 'student_rating'
filter_val = 2
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution (Rating: {filter_val})'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var} == {filter_val} and intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var = 'student_rating'
filter_val = 3
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution (Rating: {filter_val})'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var} == {filter_val} and intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var = 'student_rating'
filter_val = 4
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution (Rating: {filter_val})'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var} == {filter_val} and intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var = 'student_rating'
filter_val = 5
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution (Rating: {filter_val})'
filename = title.replace(' ', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var} == {filter_val} and intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Comment Category Distribution by Service and Rating
###Code
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 1
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 2
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 3
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 4
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 5
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 1
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 2
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 3
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 4
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 5
xlabel = 'Categories'
ylabel = 'Count'
title = f'Comment Category Distribution ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = df.query(f'{filter_var1} == "{filter_val1}"'
f' and {filter_var2} == {filter_val2} and'
f' intent_client != "none"')[['intent_client']]
ax = sns.countplot(y='intent_client',
data=data,
order=order)
ax.set(xlabel=ylabel,
ylabel=xlabel,
title=title,
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories
###Code
groupby_vars = ['service', 'intent_servqual']
filter_var1 = 'service'
filter_val1 = 'cl'
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
groupby_vars = ['service', 'intent_servqual']
filter_var1 = 'service'
filter_val1 = 'wf'
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: CL, Rating: 1)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 1
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: CL, Rating: 2)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 2
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: CL, Rating: 3)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 3
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: CL, Rating: 4)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 4
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: CL, Rating: 5)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'cl'
filter_var2 = 'student_rating'
filter_val2 = 5
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: WF, Rating: 1)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 1
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: WF, Rating: 2)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 2
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 3
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: WF, Rating: 4)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 4
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Average Sentiment Scores by Categories (Service: WF, Rating: 5)
###Code
groupby_vars = ['service', 'intent_servqual', 'student_rating']
filter_var1 = 'service'
filter_val1 = 'wf'
filter_var2 = 'student_rating'
filter_val2 = 5
xlabel = 'SERVQUAL Categories'
ylabel = 'Average Sentiment Score'
title = f'Average Sentiment ({filter_var1.title()}: {filter_val1.upper()}, Rating: {filter_val2})'
filename = title.replace(' ', '_').replace(':', '_').lower() + '.png'
image_path = image_dir.joinpath(filename)
data = (df
.groupby(groupby_vars)['sentiment_aggregated']
.mean()
.reset_index()
.query(f'{filter_var1} == @filter_val1 and {filter_var2} == @filter_val2')
)
ax = sns.barplot(y='sentiment_aggregated',
x='intent_servqual',
data=data,
)
ax.set(xlabel=xlabel,
ylabel=ylabel,
title=title,
)
ax.set_xticklabels(
labels=ax.get_xticklabels(),
rotation=45,
horizontalalignment='right',
)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Word Cloud
###Code
wordcloud_string = ' '.join(list(data_df_comments.student_comment_no_stopwords.values))
wordcloud = WordCloud(background_color="white",
max_words=20,
contour_width=3,
contour_color='steelblue',
collocations=False)
wordcloud.generate(wordcloud_string)
wordcloud.to_image()
###Output
_____no_output_____
###Markdown
Matching Phrases Using `spaCy`
###Code
matcher = Matcher(nlp.vocab)
# Create a pattern for something like "did something wrong"
pattern_name = 'DID_SOMETHING_WRONG'
pattern = [{'POS': 'VERB'}, {'POS': 'DET', 'OP': '?'}, {'LOWER': 'wrong'}, {'POS': 'NOUN'}]
matcher.add(pattern_name, None, pattern)
# Create a pattern for something like "pressed the wrong button"
pattern_name = 'PRESSED_WRONG_BUTTON'
pattern = [{'POS': 'VERB'}, {'POS': 'DET', 'OP': '?'}, {'LOWER': 'wrong'}, {'LOWER': 'button'}]
matcher.add(pattern_name, None, pattern)
def get_match_list(doc):
"""Returns a dictionary of {match_pattern: span.text}
Note: match_pattern is string_id in the official documentation
"""
matches = matcher(doc)
match_list = []
for match_id, start, end in matches:
match_pattern = nlp.vocab.strings[match_id]
span = doc[start:end]
match_list.append({match_pattern: span})
return match_list if match_list else False
mask_press_wrong_button = data_df_comments.student_comment_processed.apply(lambda x: True if get_match_list(x) else False)
print(sum(mask_press_wrong_button))
[*zip(data_df_comments.student_comment_processed[mask_press_wrong_button].apply(get_match_list), data_df_comments.student_comment_processed[mask_press_wrong_button])]
data_df_comments[mask_press_wrong_button][['student_comment', 'student_rating', 'start_at']]
sns.countplot(x='service', data=data_df_comments[mask_press_wrong_button])
sns.countplot(x='student_rating', data=data_df_comments[mask_press_wrong_button])
###Output
_____no_output_____
###Markdown
Sentiment
###Code
data_df_comments.groupby('student_rating')['sentiment_textblob'].mean().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Distribution of Ratings vs Sentiment (TextBlob) In this section we look at the distribution of the ratings and the distribution of the sentiment. Note that the plot of the ratings doesn't include the rows without ratings, so the data for the sentiment is subset in the same way.
###Code
title = 'Distribution of Ratings'
sns.distplot(data_df_comments[data_df_comments.student_rating.notna()]['student_rating'],
kde=False,
rug=False).set_title(title)
title = 'Distribution of Sentiments (TextBlob)'
sns.distplot(data_df_comments[data_df_comments.student_rating.notna()]['sentiment_textblob'],
kde=False,
rug=False).set_title(title)
###Output
_____no_output_____
###Markdown
There are 153 rows which don't have a rating. Let's see the distribution of the sentiments for these rows.
###Code
sns.distplot(data_df_comments[data_df_comments.student_rating.isna()]['sentiment_textblob'],
kde=False,
rug=True).set_title("Blank Rating: Distribution of TextBlob Sentiment")
###Output
_____no_output_____
###Markdown
The distribution is quite wide, ranging from -0.5 up to a maximum of 1.0. Rating/Sentiment Inconsistencies `TextBlob`
###Code
data_df_comments.query('sentiment_textblob < 0 and student_rating > 3')[['student_rating', 'student_comment_apostrophe', 'sentiment_textblob']]
###Output
_____no_output_____
###Markdown
`TextBlob` Caveats
###Code
test_sentences = ["It's anything but good.",
"It's good.",
"Extremely helpful.",
"Very helpful."]
for sent in test_sentences:
print(f"Sentence: {sent} \nScore: {TextBlob(sent).sentiment.polarity}")
print(TextBlob("It's anything but good.").sentiment)
print(TextBlob("It's good.").sentiment)
print(TextBlob("Extremely helpful").sentiment)
print(TextBlob("Very helpful").sentiment)
###Output
_____no_output_____
###Markdown
Aggregated Sentiment Scores by SERVQUAL Categories
###Code
cols = [
'sentiment_textblob',
'sentiment_vader',
'sentiment_luis',
'sentiment_aggregated',
]
group_cols = [
'intent_servqual'
]
aggregated_sentiment_total_df = df.groupby(group_cols)[cols].mean()
aggregated_sentiment_total_df
filepath = report_dir.joinpath('aggregated_sentiment_total.csv')
aggregated_sentiment_total_df.to_csv(filepath)
cols = [
'sentiment_textblob',
'sentiment_vader',
'sentiment_luis',
'sentiment_aggregated',
]
group_cols = [
'student_rating',
'intent_servqual'
]
aggregated_sentiment_df = df.groupby(group_cols)[cols].mean()
aggregated_sentiment_df
###Output
_____no_output_____
###Markdown
By Student
###Code
df.columns
###Output
_____no_output_____
###Markdown
Number of Unique Students There are 113,411 unique students, which averages to about 4.5 sessions per student over the analysis period. Naturally there is variation: some students used the service only once, while others used it many times.
###Code
df.student_id.nunique()
df.shape[0] / df.student_id.nunique()
###Output
_____no_output_____
###Markdown
Number of Unique Students by `service`
###Code
df_unique = pd.DataFrame({'num_sessions': df.groupby('service')['student_id'].count(),
'num_unique_students': df.groupby('service')['student_id'].nunique(),
'num_unique_tutors': df.groupby('service')['tutor_id'].nunique()})
df_unique['perc_unique_students'] = df_unique.num_unique_students / df_unique.num_sessions
df_unique['perc_unique_tutors'] = df_unique.num_unique_tutors / df_unique.num_sessions
print(df_unique.transpose())
df_unique
###Output
_____no_output_____
###Markdown
There is a slightly higher percentage of unique students in the WF service than in the CL service; in other words, there are somewhat fewer repeat students in WF, though not by much. For the tutors, however, there are far more repeats, with only 0.3% and 0.2% unique tutors for CL and WF respectively.
###Code
df_unique=df_unique.reset_index().melt(id_vars=['service'])
df_unique
df_unique['party'] = ['total', 'total', 'students', 'students', 'tutors', 'tutors', 'students', 'students', 'tutors', 'tutors']
df_unique
df_unique['variable'] = df_unique.variable.str.replace('_students', '')
df_unique['variable'] = df_unique.variable.str.replace('_tutors', '')
df_unique
df_unique.query('variable == "perc_unique" and party != "total"')
plot_df = df_unique.query('variable == "perc_unique" and party == "students"')
ax = sns.barplot(x='service', y='value', data=plot_df)
ax.set(title = '% of Unique Students',
xlabel = 'service',
ylabel = '')
plot_df = df_unique.query('variable == "perc_unique" and party == "tutors"')
ax = sns.barplot(x='service', y='value', data=plot_df)
ax.set(title = '% of Unique Tutors',
xlabel = 'service',
ylabel = '')
###Output
_____no_output_____
###Markdown
Rating Distribution Per Student First add a column that is 1 if there is a comment and 0 otherwise.
###Code
comment_ind = df.student_comment.apply(lambda x: 1 if len(x) > 0 else 0)
utils.save_object(comment_ind, 'comment_ind', column_dir)
df = utils.add_column(df, column_dir, 'comment_ind')
df_unique_students = pd.DataFrame({'num_comments': df.groupby(['student_id'])['comment_ind'].sum(),
'average_num_comments': df.groupby(['student_id'])['comment_ind'].mean(),
'average_comments_word_length': df.groupby(['student_id'])['length_word_comment'].mean(),
'std_comments_word_length': df.groupby(['student_id'])['length_word_comment'].std()})
df_unique_students.head()
###Output
_____no_output_____
###Markdown
Percentage of students who comment:
###Code
num_unique_students_commented = df_unique_students.query('num_comments > 0').shape[0]
num_unique_students = df_unique_students.shape[0]
average_students_commented = num_unique_students_commented/num_unique_students
print(f"Number of students who commented: {num_unique_students_commented}")
print(f"Total number of unique students: {num_unique_students}")
print(f"Average number of students who commented: {average_students_commented: .2f}")
sns.distplot(a=df_unique_students.reset_index().query('num_comments > 0')['average_num_comments'],
kde=False)
###Output
_____no_output_____
###Markdown
Correlation: Waiting Time vs `student_rating_fixed` Waiting time has different meanings in CL and WF. In CL it is the time the student waited to be matched with a tutor, measured in seconds. In WF it is the time between submission and the student receiving feedback on their document, which can be up to several days. There are {{len(df_merged.client_type_desc.unique())}} different client types.
###Code
len(df_merged.client_type_desc.unique())
filter_var = 'service'
filter_val = 'CL'
op = '=='
var1 = 'student_rating'
var2 = 'wait_seconds'
subset_list = [var1, var2]
# cl_df_formatted[subset_list].dropna(subset=['student_rating']).corr()
sns.swarmplot(x=var1, y=var2, data=cl_df_formatted[subset_list].dropna(subset=['student_rating']))
###Output
_____no_output_____
###Markdown
Writing Feedback Waiting Time vs `student_rating_fixed`
###Code
waiting_time_groups = ['service', 'client_type', ]
wf_df_formatted.columns
wf_waiting_time = wf_df_formatted.completed_at - wf_df_formatted.start_at
wf_waiting_time.head()
wf_waiting_time.describe()
###Output
_____no_output_____
###Markdown
Convert the `Timedelta` objects to seconds so it can be joined with the waiting time column of Connect Live.
###Code
wf_df_formatted['wait_seconds'] = wf_waiting_time.apply(utils.get_seconds_from_timedelta)
def calc_td_stats(data, func = np.mean):
return pd.to_timedelta(func(data.values.astype(np.int64)))
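# Usage sketch: calc_td_stats(wf_waiting_time) gives the mean wait as a
# Timedelta, and calc_td_stats(wf_waiting_time, np.std) the standard deviation
# (the int64 view is in nanoseconds, which pd.to_timedelta assumes by default).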
wf_df_formatted.groupby('student_rating')['wait_seconds']
data = pd.DataFrame({'mean_wait_time': wf_df_formatted.groupby('student_rating')['wait_seconds'].mean()
,'std_wait_time': wf_df_formatted.groupby('student_rating')['wait_seconds'].std()})
filter_var = 'service'
filter_val = 'WF'
op = '=='
var1 = data.index
var2 = 'mean_wait_time'
subset_list = [var1, var2]
title = f'Average Wait Time vs Student Rating: service = {filter_val}'
x_label = 'Student Rating'
y_label = 'Average Time (Seconds)'
ax = sns.barplot(x=var1
,y=var2
, data=data
)
ax.set(title=title
,xlabel=x_label
,ylabel=y_label)
filter_var = 'service'
filter_val = 'WF'
op = '=='
var1 = data.index
var2 = 'std_wait_time'
subset_list = [var1, var2]
title = f'Standard Deviation Wait Time vs Student Rating: service = {filter_val}'
x_label = 'Student Rating'
y_label = 'Average Time (Seconds)'
ax = sns.barplot(x=var1
,y=var2
, data=data
)
ax.set(title=title
,xlabel=x_label
,ylabel=y_label)
###Output
_____no_output_____
###Markdown
Connect Live Waiting Time vs `student_rating_fixed`
###Code
data = pd.DataFrame({'mean_wait_time': cl_df_formatted.groupby('student_rating')['wait_seconds'].mean()
,'std_wait_time': cl_df_formatted.groupby('student_rating')['wait_seconds'].std()})
filter_var = 'service'
filter_val = 'CL'
op = '=='
var1 = data.index
var2 = 'mean_wait_time'
subset_list = [var1, var2]
title = f'Average Wait Time vs Student Rating: service = {filter_val}'
x_label = 'Student Rating'
y_label = 'Average Time (Seconds)'
ax = sns.barplot(x=var1
,y=var2
, data=data
)
ax.set(title=title
,xlabel=x_label
,ylabel=y_label)
filter_var = 'service'
filter_val = 'CL'
op = '=='
var1 = data.index
var2 = 'std_wait_time'
subset_list = [var1, var2]
title = f'Standard Deviation Wait Time vs Student Rating: service = {filter_val}'
ax = sns.barplot(x=var1
,y=var2
, data=data
)
ax.set(title=title
,xlabel=x_label
,ylabel=y_label)
###Output
_____no_output_____
###Markdown
Intents
###Code
df.query('luis_intent_pickle != "None"').luis_intent_pickle.value_counts().index
order_intent_full = df.query('luis_intent_pickle != "None"').luis_intent_pickle.value_counts().index
title = 'Count of Intents (excl NONE)'
x_label = 'Count'
y_label = 'Intent'
ax = sns.countplot(y='luis_intent_pickle'
,data = df.query('luis_intent_pickle != "None"')
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
# Saving
filename = title.replace(' ', '_').replace(':', '').lower() + '.png'
image_path = image_dir.joinpath(filename)
plt.tight_layout()
plt.savefig(image_path)
###Output
_____no_output_____
###Markdown
Intents by Sex
###Code
sex = 'male'
title = f'Count of Intents (excl NONE): {sex}'
x_label = 'Count'
y_label = 'Intent'
data = df.query('luis_intent_pickle != "None" and gender_guess_mfu == @sex')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
sex = 'female'
title = f'Count of Intents (excl NONE): {sex}'
x_label = 'Count'
y_label = 'Intent'
data = df.query('luis_intent_pickle != "None" and gender_guess_mfu == @sex')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
sex = 'unknown'
title = f'Count of Intents (excl NONE): {sex}'
x_label = 'Count'
y_label = 'Intent'
data = df.query('luis_intent_pickle != "None" and gender_guess_mfu == @sex')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
###Output
_____no_output_____
###Markdown
Intents by Rating
###Code
second_dimension = 'student_rating'
value = 1
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} == @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
second_dimension = 'student_rating'
value = 2
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} == @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
second_dimension = 'student_rating'
value = 3
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} == @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
second_dimension = 'student_rating'
value = 4
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} == @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
second_dimension = 'student_rating'
value = 5
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} == @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
###Output
_____no_output_____
###Markdown
Intents by Service
###Code
second_dimension = 'service'
value = 'CL'
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} == @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
second_dimension = 'service'
value = 'WF'
title = f'Count of Intents (excl NONE): {second_dimension} = {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} == @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
###Output
_____no_output_____
###Markdown
Word Cloud
###Code
wordcloud_string = ' '.join(list(data_df_comments.student_comment_no_stopwords.values))
wordcloud = WordCloud(background_color="white",
max_words=20,
contour_width=3,
contour_color='steelblue',
collocations=False)
wordcloud.generate(wordcloud_string)
wordcloud.to_image()
###Output
_____no_output_____
###Markdown
Wordcloud by Rating
###Code
def generate_wordcloud(data: pd.DataFrame, rating: int = None) -> WordCloud:
if rating is None:
subset_df = data
else:
subset_df = data.query('student_rating == @rating')
wordcloud_string = ' '.join(list(subset_df.student_comment_no_stopwords.values))
wordcloud = WordCloud(background_color="white",
max_words=20,
contour_width=3,
contour_color='steelblue',
collocations=False)
return wordcloud.generate(wordcloud_string)
generate_wordcloud(data = data_df_comments, rating = 1).to_image()
generate_wordcloud(data = data_df_comments, rating = 2).to_image()
generate_wordcloud(data = data_df_comments, rating = 3).to_image()
generate_wordcloud(data = data_df_comments, rating = 4).to_image()
generate_wordcloud(data = data_df_comments, rating = 5).to_image()
###Output
_____no_output_____
###Markdown
There seems to be a lot of "feedback". Let's see what the actual context is.
###Code
data_df_comments[data_df_comments.student_comment.str.contains('feedback')][['student_rating', 'student_comment']]
###Output
_____no_output_____
###Markdown
ngrams (Combined CL and WF)
###Code
wordcloud = WordCloud(max_words = 8, background_color='white')
###Output
_____no_output_____
###Markdown
Remove Punctuation and Stopwords
###Code
data_df_comments['student_comment_nopunct'] = data_df_comments.student_comment_processed.apply(lambda x: ' '.join([token.orth_.lower() for token in x if not token.is_punct]))
data_df_comments['student_comment_nopunct_nostopwords'] = data_df_comments.student_comment_processed.apply(lambda x: ' '.join([token.orth_.lower() for token in x if not token.is_stop and not token.is_punct]))
def create_ngram_dict(text_col: pd.Series, n: int) -> defaultdict:
"""Create a, n-word frequency dictionary"""
ngram_dict = defaultdict(int)
for text in text_col:
tokens = word_tokenize(text)
for ngram in ngrams(tokens, n):
key = ' '.join(ngram)
ngram_dict[key] += 1
return ngram_dict
def ddict_to_df(ddict):
"""Converts a defaultdict of frequencies to a pandas dataframe"""
name_list = []
freq_list = []
for key, value in ddict.items():
name_list.append(key)
freq_list.append(value)
ngram_df = pd.DataFrame({'word': name_list, 'frequency': freq_list})
ngram_df.sort_values(by = 'frequency', ascending = False, inplace = True)
return ngram_df
###Output
_____no_output_____
###Markdown
Create a function to produce the ngram frequencies and charts.
###Code
def create_ngram(df, ngram, rating, service):
"""Subset the data and produce the word frequency barchart"""
if rating and service:
if ngram == 1:
comments = df.query('student_rating == @rating and service == @service').student_comment_nopunct_nostopwords
else:
comments = df.query('student_rating == @rating and service == @service').student_comment_nopunct
elif rating and not service:
if ngram == 1:
comments = df.query('student_rating == @rating').student_comment_nopunct_nostopwords
else:
comments = df.query('student_rating == @rating').student_comment_nopunct
elif not rating and service:
if ngram == 1:
comments = df.query('service == @service').student_comment_nopunct_nostopwords
else:
comments = df.query('service == @service').student_comment_nopunct
else:
if ngram == 1:
comments = df.student_comment_nopunct_nostopwords
else:
comments = df.student_comment_nopunct
ngram_freq = create_ngram_dict(comments, ngram)
wordcloud.generate_from_frequencies(ngram_freq)
wordcloud.to_image()
ngram_df = ddict_to_df(ngram_freq)
def map_string(ngram):
result = None
if ngram == 1:
return 'Unigram'
elif ngram == 2:
return 'Bigram'
elif ngram == 3:
return 'Trigram'
elif ngram == 4:
return 'Four-gram'
return result
title = f'{map_string(ngram)} Rating: {rating} {service}'
ax = sns.barplot(x='frequency', y='word', data=ngram_df.head(10))
ax.set_title(title)
plt.show()
###Output
_____no_output_____
###Markdown
The following section loops through:- ngrams 1-4- ratings 1-5- services CL and WF Unigrams
###Code
ngram = 1
for rating, service in product(range(1, 6), ('CL', 'WF')):
create_ngram(df = data_df_comments, ngram = ngram, rating = rating, service = service)
###Output
_____no_output_____
###Markdown
Bigrams
###Code
ngram = 2
for rating, service in product(range(1, 6), ('CL', 'WF')):
create_ngram(df = data_df_comments, ngram = ngram, rating = rating, service = service)
###Output
_____no_output_____
###Markdown
Trigrams
###Code
ngram = 3
for rating, service in product(range(1, 6), ('CL', 'WF')):
create_ngram(df = data_df_comments, ngram = ngram, rating = rating, service = service)
###Output
_____no_output_____
###Markdown
Four-grams
###Code
ngram = 4
for rating, service in product(range(1, 6), ('CL', 'WF')):
create_ngram(df = data_df_comments, ngram = ngram, rating = rating, service = service)
###Output
_____no_output_____
###Markdown
Intents by Sentiment
###Code
second_dimension = 'sentiment_aggregated'
value = 0
operator = '<='
op_dict = {'==': 'is'
,'<': 'is less than'
,'>': 'is greater than'
,'<=': 'is less than or equal to'
,'>=': 'is greater than or equal to'
}
title = f'Count of Intents (excl NONE): {second_dimension.title()} {op_dict[operator]} {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} {operator} @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
second_dimension = 'sentiment_aggregated'
value = 0
operator = '>'
op_dict = {'==': 'is'
,'<': 'is less than'
,'>': 'is greater than'
,'<=': 'is less than or equal to'
,'>=': 'is greater than or equal to'
}
title = f'Count of Intents (excl NONE): {second_dimension.title()} {op_dict[operator]} {value}'
x_label = 'Count'
y_label = 'Intent'
data = df.query(f'luis_intent_pickle != "None" and {second_dimension} {operator} @value')['luis_intent_pickle']
ax = sns.countplot(y=data
,order = order_intent_full
)
ax.set(xlabel=x_label
,ylabel=y_label
,title=title)
filepath = report_dir.joinpath('aggregated_sentiment_rating_vs_servqual.csv')
aggregated_sentiment_df.to_csv(filepath)
###Output
_____no_output_____
###Markdown
Correlations
###Code
# Reorder the columns so that 'student_rating_numeric' is the first.
columns = (
['student_rating_numeric']
+ [col for col in df.columns if col != 'student_rating_numeric']
)
corr_df = df.loc[:, columns].corr()
f = plt.figure(figsize=(19, 15))
sns.heatmap(corr_df)
title = "Correlations"
filename = title + '.png'
plt.title(title)
plt.savefig(image_dir.joinpath(filename))
# Enumerate the column names
f = plt.figure(figsize=(19, 15))
enumerated_columns = range(len(corr_df.index))
sns.heatmap(
corr_df,
xticklabels=enumerated_columns,
yticklabels=enumerated_columns,
)
title = "Correlations"
filename = title + '_unlabeled.png'
plt.title(title)
plt.savefig(image_dir.joinpath(filename))
###Output
_____no_output_____
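###Markdown
With this many columns the heatmap is hard to read, so it can help to list the strongest absolute correlations with `student_rating_numeric` directly. A minimal sketch:
###Code
# Top 10 columns by absolute correlation with the numeric student rating
(corr_df['student_rating_numeric']
 .drop('student_rating_numeric')
 .abs()
 .sort_values(ascending=False)
 .head(10))
###Output
_____no_output_____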
###Markdown
Categorical
###Code
def cramers_v(x, y):
confusion_matrix = pd.crosstab(x,y)
chi2 = ss.chi2_contingency(confusion_matrix)[0]
n = confusion_matrix.sum().sum()
phi2 = chi2/n
r,k = confusion_matrix.shape
phi2corr = max(0, phi2-((k-1)*(r-1))/(n-1))
rcorr = r-((r-1)**2)/(n-1)
kcorr = k-((k-1)**2)/(n-1)
return np.sqrt(phi2corr/min((kcorr-1),(rcorr-1)))
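# Usage sketch (assuming two categorical columns used earlier in this
# notebook, e.g. `service` and `intent_servqual`):
cramers_v(df['service'], df['intent_servqual'])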
import importlib
importlib.reload(utils)
###Output
_____no_output_____
###Markdown
Get Data
###Code
# Get all data
query = {}
results = medical_notes_kaggle_db.train.find(query) # should learn how to do this correctly
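# (A projection could limit the fields returned server-side, e.g.
#  medical_notes_kaggle_db.train.find(query, {'note_text': 1}), but every
#  field is kept here because the section headers are derived from the keys.)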
training_set = list(results) # list of dictionaries, would have been the same as reading directly from json
# Store headers and text data in dataframe
train_df = pd.DataFrame(columns=['index_', 'note_text', 'section_headers', 'clinical_domain'])
to_exclude = {'_id', 'index_', 'clinical_domain'}
for note in training_set:
train_df = train_df.append(
[{'index_': note.get('index_'),
'clinical_domain': note.get('clinical_domain'),
'section_headers': ', '.join([key for key in note.keys() if key not in to_exclude]),
'note_text': ', '.join([val for val in note.values()
if val not in map(lambda key: note[key], to_exclude)])}]
)
train_df['num_char'] = train_df.note_text.apply(len) # Add num_char column
train_df['num_section_headers'] = train_df['section_headers'].apply(lambda x: x.count(',') +1) # Count section headers
train_df = train_df.dropna().sort_values('index_').reset_index(drop=True)
train_df.head() # View data
###Output
_____no_output_____
###Markdown
Plot Number of Notes by Clinical Domain
###Code
clinical_domain_list = ['Orthopedic', 'Neurology', 'Urology', 'Gastroenterology', 'Radiology']
# Plot
fig, ax = plot_empty(title='Number of Notes By Clinical Domain', figsize=(8, 5))
ax = sns.countplot(x='clinical_domain', data=train_df, palette=None, order=clinical_domain_list)
plt.xlabel('Clinical Domain', fontsize=12)
plt.ylabel('Number of Notes', fontsize=12)
if save:
plt.savefig("figures/clinical_domain-num_notes.png", transparent=True, bbox_inches="tight")
plt.show(fig)
plt.close()
###Output
_____no_output_____
###Markdown
Plot Number of Characters by Clinical Domain
###Code
train_df.groupby('clinical_domain').mean()['num_char']
# Check distribution of data by class
ax = sns.FacetGrid(data = train_df, col = 'clinical_domain', hue = 'clinical_domain', palette='plasma', height=5)
ax.map(sns.histplot, "num_char")
# Plot
fig, ax = plot_empty(title='Length of Medical Notes By Clinical Domain', figsize=(8, 5))
sns.boxplot(x = 'clinical_domain', y = 'num_char', data = train_df, order = clinical_domain_list)
plt.xlabel('Clinical Domain', fontsize=12)
plt.ylabel('Length of Medical Notes', fontsize=12)
plt.ylim((0, 14000))
if save:
plt.savefig("figures/clinical_domain-num_char.png", transparent=True, bbox_inches="tight")
plt.show(fig)
plt.close()
###Output
_____no_output_____
###Markdown
Plot Number of Section Headers by Clinical Domain
###Code
# Plot
fig, ax = plot_empty(title='Number of Sections By Clinical Domain', figsize=(8, 5))
sns.boxplot(x = 'clinical_domain', y = 'num_section_headers', data = train_df, order = clinical_domain_list)
plt.xlabel('Clinical Domain', fontsize=12)
plt.ylabel('Number of Sections', fontsize=12)
if save:
plt.savefig("figures/clinical_domain-num_sections.png", transparent=True, bbox_inches="tight")
plt.show(fig)
plt.close()
###Output
_____no_output_____
###Markdown
Exploring messages* It's possible that we have empty cells and some jargon messages where the length of the message is < 145 characters.* Note: 145 is not a magic number. After inspecting messages with fewer than 145 characters, I concluded that the information they contained was irrelevant, so messages with < 145 characters are ignored.
###Code
# Check for Nan values in messages column.
df['message'].isnull().sum()
print(df.shape) # (2397, 2)
# As there are 40 null values, we can drop the rows as they are of no use
df.dropna(inplace=True)
print("Shape of dataframe after dropping nan rows")
print(df.shape)
MESSAGES_LEN_TO_IGNORE = 145
df['length_of_message'] = df['message'].apply(lambda x : len(str(x)))
# Keep only the rows whose message length is greater than 145 characters
df_filter = df[df['length_of_message'] > MESSAGES_LEN_TO_IGNORE]
# The final dataframe after filtering out unnecessary messages
print(df_filter.shape)
###Output
(2301, 3)
###Markdown
The following attributes will be extracted from the messages* [Extracting Status](extract_status)* [Extracting Visa Interview Date](extract_interview_date)* [Extracting location](extract_location)* [Extracting Questions asked in VI](extract_questions)* [Extracting University Name](extract_university)* ~~Duration~~ Extracting location
###Code
def get_consulate_location(str_to_check):
known_consulate_locations = ['hyderabad', 'mumbai', 'kolkata', 'delhi', 'chennai', 'hyd', 'bombay', 'malaysia', 'madras']
str_converted_to_lower = str_to_check.lower()
for consulate_location in known_consulate_locations:
if consulate_location in str_converted_to_lower:
return consulate_location
df_filter['consulate_location'] = df_filter['message'].apply(get_consulate_location)
mapping_dict = {'bombay' : "mumbai", 'hyd' : "hyderabad", "madras" : "chennai"}
df_filter['consulate_location'] = df_filter['consulate_location'].apply(lambda x : mapping_dict.get(x) if mapping_dict.get(x) is not None else x )
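# Hedged spot check (added for illustration only): an alias such as 'hyd' is detected
# first and then folded into its canonical city name by mapping_dict.
_loc = get_consulate_location("VI at Hyd consulate, approved")
assert mapping_dict.get(_loc, _loc) == 'hyderabad'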
df_filter['consulate_location'].fillna("NA", inplace=True)
print(df_filter.consulate_location.value_counts())
df_filter.to_csv("Test.csv", index=False)
###Output
mumbai 912
delhi 534
chennai 340
hyderabad 302
kolkata 158
NA 54
malaysia 1
Name: consulate_location, dtype: int64
###Markdown
Extracting Status
###Code
def get_visa_status(message):
possible_status = ['approved', 'rejected']
for _status in possible_status:
if _status in message.lower():
return _status
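# Hedged spot checks (added for illustration only): the first matching status keyword wins,
# and messages without a status keyword return None (later filled with "NA").
assert get_visa_status("Status: Approved (45 seconds max)") == 'approved'
assert get_visa_status("Counter 12, visa rejected under 214b") == 'rejected'
assert get_visa_status("No decision yet") is None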
df_filter['visa_status'] = df_filter['message'].apply(get_visa_status)
df_filter['visa_status'].fillna("NA", inplace=True)
df_filter['visa_status'].value_counts()
###Output
_____no_output_____
###Markdown
Extracting Questions
###Code
questions_start_with = ['what', 'what\'s', 'which', 'who', 'where', 'why', 'when', 'how', 'whose', 'do', 'are', 'will', 'did ']
import re
import string
def extract_questions(message):
questions = []
regex_pattern = " |".join(questions_start_with)
for _string in message.lower().split("\n"):
if _string.endswith("?"):
questions.append(_string)
else:
matches = re.findall(regex_pattern, _string.strip())
if len(matches) > 0:
split_str = _string.split()
if ("vi" in split_str[0] or "vo" in split_str[0]):
first_word = split_str[1].strip()
if first_word in string.punctuation:
for i in range(2, len(split_str)):
if split_str[i] not in string.punctuation and split_str[i] not in ['vo', 'vi']:
first_word = split_str[i]
break
else:
first_word = split_str[0]
if first_word in questions_start_with:
questions.append(_string)
return questions
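# Hedged spot check (added for illustration only): a line ending in '?' and a line
# starting with a question word are both captured; the plain statement in between is not.
_demo_questions = extract_questions("VO: Why this course?\nMe: Told\nWhat did you do since then")
assert _demo_questions == ['vo: why this course?', 'what did you do since then']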
df_filter['Questions'] = df_filter['message'].apply(extract_questions)
df_filter['Questions'].fillna("NA", inplace=True)
df_filter.to_csv("Questions_extracted.csv", index=False)
###Output
_____no_output_____
###Markdown
Extracting University Name
###Code
# from multiprocessing import Pool
# from functools import partial
# import numpy as np
# # # Taken from here : https://stackoverflow.com/questions/26784164/pandas-multiprocessing-apply#:~:text=from%20multiprocessing%20import,run_on_subset%2C%20func)%2C%20num_of_processes)
# def parallelize(data, func, num_of_processes=4):
# data_split = np.array_split(data, num_of_processes)
# pool = Pool(num_of_processes)
# data = pd.concat(pool.map(func, data_split))
# pool.close()
# pool.join()
# return data
# df_unv = pd.read_excel('AccreditationData.xlsx', sheet_name='InstituteCampuses')
# def update_parent_data(location_name, parent_name):
# if parent_name == '-':
# return location_name
# else:
# return parent_name
# df_unv['UniqueName'] = df_unv.apply(lambda x: update_parent_data(x.LocationName, x.ParentName), axis=1)
# unique_university_names = df_unv['UniqueName'].unique()
# print(len(unique_university_names))
# There are 10595 unique universities across USA
# from fuzzywuzzy import fuzz
# matchlist = ['hospital','university','institute','school','academy', 'unv', 'univ']
# unv_regex_str = "|".join(matchlist)
# def get_unv_name_from_text(message):
# split_str = message.split("\n")
# for _str in split_str:
# matches = re.findall(unv_regex_str, _str.strip())
# if len(matches) > 0:
# return _str
# # max, max_index = -999999999, "NA"
# # for unv_index, _unv_name in enumerate(unique_university_names):
# # str1, str2 = message, _unv_name
# # token_set_ratio = fuzz.token_set_ratio(str1, str2)
# # # token_set_ratio_list.append(token_set_ratio)
# # if token_set_ratio > max:
# # max = token_set_ratio
# # max_index = unv_index
# # # index = np.argmax(token_set_ratio_list)
# # try:
# # return unique_university_names[max_index]
# # except Exception as e:
# return "NA"
# %%time
# from tqdm import tqdm
# tqdm.pandas()
# df_filter["University_name"] = df_filter['message'].progress_apply(get_unv_name_from_text)
# df_filter.to_csv("UnvName_extracted.csv", index=False)
import spacy
nlp = spacy.load('en')
MONTHDAY = r"(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])"
MONTH = r"\b(?:jan(?:uary|uar)?|feb(?:ruary|ruar)?|m(?:a|ä)?r(?:ch|z)?|apr(?:il)?|ma(?:y|i)?|jun(?:e|i)?|jul(?:y)?|aug(?:ust)?|sep(?:tember)?|o(?:c|k)?t(?:ober)?|nov(?:ember)?|de(?:c|z)(?:ember)?)\b"
unv_name = []
visa_interview_date = []
def get_organization_visa_date(message):
# print(unv_name, visa_interview_date)
doc = nlp(message)
final_dict = { entity.text:entity.label_ for entity in doc.ents}
# print(final_dict)
visa_date, u_name = None, None
for key, value in final_dict.items():
# print(visa_date, u_name)
        if value == 'ORG' and "university" in key.lower() and "research" not in key.lower():
            if u_name is None:
                u_name = key
        elif value == 'DATE' and re.findall(MONTH, key.lower()) and re.findall(MONTHDAY, key):
            if visa_date is None:
                visa_date = key
visa_interview_date.append(visa_date)
unv_name.append(u_name)
# visa_date.append(final_dict.get('DATE'))
for row in df_filter.itertuples():
get_organization_visa_date(row.message)
# df_test['message'].apply(get_organization_visa_date)
df_filter["University_name"] = unv_name
df_filter["VisaInterviewDate"] = visa_interview_date
df_filter.to_excel("FinalDataAfterNER.xlsx", index=False)
###Output
_____no_output_____
###Markdown
Extracting Interview Date
###Code
%%time
import datefinder
def extract_date_from_message(message):
try:
matches = list(datefinder.find_dates(message))
return matches[0]
except Exception as e:
return 'NA'
df_filter['Visa Interview Date'] = df_filter['message'].apply(extract_date_from_message)
df_filter.to_csv("Dates.csv", index=False)
%%time
# from datetime import datetime
# greater_than_date = datetime.strptime('2021-07-12', '%Y-%m-%d')
# less_than_date = datetime.strptime('2020-01-01', '%Y-%m-%d')
# def replace_value(visa_interview_date):
# try:
# final_vi_date = datetime.strptime(visa_interview_date.split(" ")[0], '%Y-%m-%d')
# if (final_vi_date > greater_than_date) or (final_vi_date < less_than_date):
# return "NA"
# else:
# return visa_interview_date
# except Exception as e:
# return "NA"
# df_filter['Visa Interview Date'] = df_filter['Visa Interview Date'].apply(replace_value)
# df_filter['Visa Interview Date'] = df_filter['Visa Interview Date'].replace(pd.NaT, "NA")
# df_filter['Visa Interview Date'] = pd.to_datetime(df_filter['Visa Interview Date'])
# df_filter.loc[df_filter['Visa Interview Date'] > greater_than_date, "Visa Interview Date"] = "NA"
# df_filter.loc[df_filter['Visa Interview Date'] < less_than_date, "Visa Interview Date"] = "NA"
# def extract_dates_for_failed_messages(message, extracted_date):
# try:
# return dateparser.parse(str(extracted_date))
# except Exception as e:
# matches = search_dates(message)
# for match in matches:
# if today.month and today.year and today.day:
# return match
# df_filter['Visa Interview Date'] = df_filter.apply(lambda x : extract_dates_for_failed_messages(x['message'], x['Visa Interview Date']), axis=1)
df_filter.to_csv("Final_Dates.csv", index=False)
txt = """ "July 9th
Hyderabad Consulate
In time 10:25
http://t.me/f1interviewreviews
Out time 10:40
University name: University of Connecticut
Status: Approved (45 seconds max)
Appointment time 11:00 AM
Counter 12
VO was a white American lady, super chill and very nicely spoken.
2 other counters were open.
Me: Good morning, Ma’am.
VO: Good morning.
VO: Please hold your passport through the screen this way (showed how to)
Me: Held the passport
VO: Can you please pass your I-20 from below the glass?
Me: Passed I-20
VO: When did you graduate?
Me: I graduated in 2017
VO: What did you do since then?
Me: I was working in XXX MNC for the past 3.5 years as an analyst.
VO: That’s nice. What are you going to pursue in this University?
Me: I am going to pursue Masters in Business Analytics.
VO (typed for 10 seconds): Why this course?
Me: Told
VO: How are you sponsoring?
Me: Told
VO typed for 10 seconds. Looked at me and typed for another 5-10 seconds.
VO: Take your I-20.
She didn’t speak anything for 5 seconds. I got scared for a while and was looking at her for her reply.
VO: Drop your VISA in the box there. I'm approving your visa.
Me: Thank you so much, Ma’am.
VO: Have a good stay at USA. Have fun.
Me: Thank you, Ma’am.
She was as excited as I was after approving. Very nicely replied.
@f1interviewreviews"
# """
# import re
# questions = []
# get_unv_name_from_text(txt)
import spacy
# Load English tokenizer, tagger, parser and NER
nlp = spacy.load('en')
doc = nlp(txt)
for entity in doc.ents:
print(entity.text, entity.label_)
txt = 'may 7th'
MONTHDAY = r"(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])"
MONTH = r"\b(?:jan(?:uary|uar)?|feb(?:ruary|ruar)?|m(?:a|ä)?r(?:ch|z)?|apr(?:il)?|ma(?:y|i)?|jun(?:e|i)?|jul(?:y)?|aug(?:ust)?|sep(?:tember)?|o(?:c|k)?t(?:ober)?|nov(?:ember)?|de(?:c|z)(?:ember)?)\b"
re.findall(MONTHDAY, txt)
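# Hedged spot checks (added for illustration only): the month name and the day number
# are matched independently by the two patterns.
assert re.findall(MONTH, 'may 7th') == ['may']
assert re.findall(MONTHDAY, 'may 7th') == ['7']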
###Output
_____no_output_____
###Markdown
Load files
###Code
dt = datetime.datetime.fromtimestamp(time.time())
logdir = os.path.join('./outputs/' ,dt.strftime('%Y-%m-%d_%H:%M:%S'))
print(f'Logging to {logdir}')
if not os.path.exists(logdir):
os.makedirs(logdir)
path_to_imagenet = '/scratch/users/saarimrahman/imagenet-testbed/outputs'
model_names = eda_utils.model_names
imagenet_dict = eda_utils.imagenet_dict
eval_settings = ['val', 'imagenetv2-matched-frequency']
ensembled_models = [
'top5_ensemble', 'random5_ensemble', 'top5_random5_ensemble',
'class_weighted_top5_ensemble', 'class_weighted_random5_ensemble', 'class_weighted_top5_random5_ensemble',
'acc_weighted_top5_ensemble', 'acc_weighted_random5_ensemble', 'acc_weighted_top5_random5_ensemble'
]
top_models = [
'efficientnet-l2-noisystudent', 'FixResNeXt101_32x48d_v2', 'FixResNeXt101_32x48d',
'instagram-resnext101_32x48d', 'efficientnet-b8-advprop-autoaug', 'BiT-M-R152x4-ILSVRC2012',
'efficientnet-b7-advprop-autoaug', 'instagram-resnext101_32x32d', 'BiT-M-R101x3-ILSVRC2012',
'efficientnet-b6-advprop-autoaug', 'efficientnet-b7-randaug', 'efficientnet-b7-autoaug',
'efficientnet-b5-advprop-autoaug', 'resnext101_32x8d_swsl', 'instagram-resnext101_32x16d',
'BiT-M-R50x3-ILSVRC2012', 'efficientnet-b6-autoaug', 'FixPNASNet',
'efficientnet-b5-autoaug', 'efficientnet-b5-randaug', 'resnext101_32x4d_swsl'
]
top5_models = top_models[:5]
print('top5_models', top5_models)
# TODO: random 5 from the top 15-20
random5_models = ['FixPNASNet', 'dpn68', 'fbresnet152', 'pnasnet5large', 'vgg19']
def load_logits_targets(models_to_load):
logits = defaultdict(dict)
targets = {}
output_folders = os.listdir(path_to_imagenet)
for model in tqdm(models_to_load, desc='load_logits_targets', leave=False):
for eval_setting in ['val', 'imagenetv2-matched-frequency']:
output_folder = model + '-' + eval_setting
if output_folder in output_folders:
model_targets = os.path.join(path_to_imagenet, output_folder, 'targets.pt')
model_logits = os.path.join(path_to_imagenet, output_folder, 'logits.pt')
if os.path.exists(model_logits):
logits[eval_setting][model] = torch.load(model_logits)
if eval_setting not in targets and os.path.exists(model_targets):
targets[eval_setting] = torch.load(model_targets)
return logits, targets
def find_missing_logits(models, eval_setting='val'):
print(f'Checking for missing {eval_setting} logits...')
on_disk, missing = [], []
output_folders = os.listdir(path_to_imagenet)
for model in models:
output_folder = model + '-' + eval_setting
if output_folder in output_folders:
model_logits = os.path.join(path_to_imagenet, output_folder, 'logits.pt')
if os.path.exists(model_logits):
on_disk.append(model)
else:
missing.append(model)
else:
missing.append(model)
print(len(missing), 'models missing:', missing)
return on_disk, missing
find_missing_logits(top5_models + random5_models, 'val')
find_missing_logits(top5_models + random5_models, 'imagenetv2-matched-frequency')
find_missing_logits(top_models[:5], 'val')
find_missing_logits(top_models[:5], 'imagenetv2-matched-frequency')
logits, targets = load_logits_targets(top5_models + random5_models)
###Output
Checking for missing val logits...
0 models missing: []
Checking for missing imagenetv2-matched-frequency logits...
0 models missing: []
Checking for missing val logits...
0 models missing: []
Checking for missing imagenetv2-matched-frequency logits...
0 models missing: []
###Markdown
Helper Functions
###Code
def accuracy_topk(logits, targets, topk=1):
batch_size = targets.size(0)
_, pred = logits.topk(topk, 1, True, True)
pred = pred.t()
correct = pred.eq(targets.view(1, -1).expand_as(pred))
correct_k = correct[:topk].view(-1).float().sum(0, keepdim=True)
return correct_k.mul_(100.0 / batch_size).item()
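# Hedged sanity check (added for illustration only): with 3 toy examples, the target of
# the last example is only the second-highest logit, so it counts for top-2 but not top-1.
_toy_logits = torch.tensor([[0.1, 0.9, 0.0, 0.0],
                            [0.8, 0.1, 0.0, 0.0],
                            [0.2, 0.3, 0.4, 0.1]])
_toy_targets = torch.tensor([1, 0, 1])
assert abs(accuracy_topk(_toy_logits, _toy_targets, topk=1) - 200.0 / 3) < 1e-4
assert abs(accuracy_topk(_toy_logits, _toy_targets, topk=2) - 100.0) < 1e-4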
def find_correct(logits, targets, topk=1):
"""Returns a boolean tensor showing correct predictions"""
batch_size = targets.size(0)
_, pred = logits.topk(topk, 1, True, True)
pred = pred.t()
return pred.eq(targets.view(1, -1).expand_as(pred))
def get_pred(logits, topk=1):
_, pred = logits.topk(topk, 1, True, True)
return pred.t()
def num_pairwise_errors(x_correct, y_correct):
"""Finds the number of shared elements incorrectly classified for x and y"""
assert x_correct.size() == y_correct.size(), 'x and y are not the same size'
x_error_idx = (x_correct == False).nonzero(as_tuple=True)[1]
y_error_idx = (y_correct == False).nonzero(as_tuple=True)[1]
return len(np.intersect1d(x_error_idx, y_error_idx))
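# Hedged toy check (added for illustration only): the two models share exactly one
# missed example (index 2), even though the second model also misses index 1.
_x_corr = torch.tensor([[True, True, False, True]])
_y_corr = torch.tensor([[True, False, False, True]])
assert num_pairwise_errors(_x_corr, _y_corr) == 1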
def pairwise_corrcoef(x_logits, y_logits):
    """Applies softmax to each row, flattens, then calculates the Pearson correlation.
    Note: logits are originally of shape torch.Size([50000, 1000])
    """
    softmax_x = torch.nn.functional.softmax(x_logits, dim=1).flatten().numpy()
    softmax_y = torch.nn.functional.softmax(y_logits, dim=1).flatten().numpy()
    return np.corrcoef(softmax_x, softmax_y)[0][1]
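# Hedged sanity check (added for illustration only): identical logits correlate perfectly.
_toy = torch.randn(5, 10)
assert abs(pairwise_corrcoef(_toy, _toy) - 1.0) < 1e-6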
def partition(data, eval_setting='val'):
return train_test_split(data, test_size=0.5, stratify=targets[eval_setting], random_state=42)
def view_image(index, eval_setting='val'):
datasets_path = '/scratch/users/saarimrahman/imagenet-testbed/s3_cache/datasets'
eval_data_path = join(datasets_path, eval_setting)
num_img_per_class = targets[eval_setting].size(0) // 1000
folder_idx = index // num_img_per_class
img_idx = index % num_img_per_class
folder = sorted(os.listdir(eval_data_path))[folder_idx]
folder_path = join(eval_data_path, folder)
file_name = sorted(os.listdir(folder_path))[img_idx]
img_path = join(folder_path, file_name)
print('true class:', imagenet_dict[index // num_img_per_class])
display(Image(filename=img_path))
def sort_dict(dic):
return dict(sorted(dic.items(), key=lambda item: item[1], reverse=True))
###Output
_____no_output_____
###Markdown
Pairwise Error Overlap
###Code
def create_pairwise_error_df(eval_setting):
pairwise_errors = defaultdict(dict)
eval_targets = targets[eval_setting]
for x_model, x_logits in tqdm(logits[eval_setting].items(), desc=eval_setting, leave=False):
x_correct = find_correct(x_logits, eval_targets)
x_correct_train, x_correct_test = partition(x_correct.flatten())
x_correct_train = x_correct_train.view(1, -1)
for y_model, y_logits in logits[eval_setting].items():
y_correct = find_correct(y_logits, eval_targets)
y_correct_train, y_correct_test = partition(y_correct.flatten())
y_correct_train = y_correct_train.view(1, -1)
# utilize symmetric property of pairwise matrix to reduce computation
if x_model != y_model and y_model in pairwise_errors and x_model in pairwise_errors[y_model]:
pairwise_errors[x_model][y_model] = pairwise_errors[y_model][x_model]
else:
pairwise_errors[x_model][y_model] = num_pairwise_errors(x_correct_train, y_correct_train)
df = pd.DataFrame(pairwise_errors)
styles = [dict(selector='caption', props=[('caption-side', 'top'), ("font-size", "150%")])]
df = df.style.set_table_styles(styles).set_caption(eval_setting)
return df.data
# df_val_pairwise_error = create_pairwise_error_df('val')
# sns.clustermap(df_val_pairwise_error)
# plt.title('Pairwise Error on val')
# plt.savefig(join(logdir, 'pairwise_error_val'))
# plt.show();
###Output
_____no_output_____
###Markdown
Pairwise Correlation Between Concatenated Predicted Probability Vectors
###Code
def create_pairwise_corr_df(eval_setting):
pairwise_corr = defaultdict(dict)
eval_targets = targets[eval_setting]
for x_model, x_logits in tqdm(logits[eval_setting].items(), desc=eval_setting, leave=False):
x_train, x_test = partition(x_logits)
for y_model, y_logits in logits[eval_setting].items():
y_train, y_test = partition(y_logits)
# utilize symmetric property of pairwise matrix to reduce computation
if x_model != y_model and y_model in pairwise_corr and x_model in pairwise_corr[y_model]:
pairwise_corr[x_model][y_model] = pairwise_corr[y_model][x_model]
else:
pairwise_corr[x_model][y_model] = pairwise_corrcoef(x_train, y_train)
df = pd.DataFrame(pairwise_corr)
styles = [dict(selector='caption', props=[('caption-side', 'top'), ("font-size", "150%")])]
df = df.style.set_table_styles(styles).set_caption(eval_setting)
return df.data
# df_val_pairwise_corr = create_pairwise_corr_df('val')
# sns.clustermap(df_val_pairwise_corr);
# plt.title('Pairwise Correlation on val')
# plt.savefig(join(logdir, 'pairwise_corr_val'))
# plt.show();
###Output
_____no_output_____
###Markdown
Ensembling Ideas
###Code
def get_ensemble_logits(softmax_pred):
"""Construct ensemble logits from a tensor containing all ensemble model's softmax predictions."""
ensemble_logits = []
for i in range(softmax_pred.size(1)): # 50000 examples
logit = torch.mean(softmax_pred[:, i, :], 0)
ensemble_logits.append(logit)
return torch.stack(ensemble_logits)
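# Hedged toy check (added for illustration only): soft voting averages the per-class
# softmax probabilities across models, so two "models" at [0.6, 0.4] and [0.2, 0.8]
# yield an ensemble prediction of [0.4, 0.6] for the single example.
_probs = torch.tensor([[[0.6, 0.4]],   # model A: 1 example, 2 classes
                       [[0.2, 0.8]]])  # model B
assert torch.allclose(get_ensemble_logits(_probs), torch.tensor([[0.4, 0.6]]))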
def ensemble_models(models, eval_setting='val'):
softmax_pred, pred = [], []
for model in models:
softmax_pred.append(softmax(logits[eval_setting][model], dim=1))
softmax_pred = torch.stack(softmax_pred)
return get_ensemble_logits(softmax_pred)
def majority_vote_models(models):
pred = get_pred(logits['val'][models[0]])
for model in models[1:]:
pred = torch.cat((pred, get_pred(logits['val'][model])), 0)
return torch.mode(pred, 0, keepdim=True)[0]
for eval_setting in ['val', 'imagenetv2-matched-frequency']:
logits[eval_setting]['top3_ensemble'] = ensemble_models(top5_models[:3], eval_setting)
logits[eval_setting]['top5_ensemble'] = ensemble_models(top5_models, eval_setting)
logits[eval_setting]['random5_ensemble'] = ensemble_models(random5_models, eval_setting)
logits[eval_setting]['top5_random5_ensemble'] = ensemble_models(top5_models + random5_models, eval_setting)
def class_weighted_ensemble(models, eval_setting='val'):
"""Weights each model's softmax predictions based on its relative accuracy in the class.
Uses half of the class images to calculate in class accuracy.
"""
w = []
for model in models:
class_acc = []
pred = get_pred(logits[eval_setting][model])
        correct = pred.eq(targets[eval_setting].view(1, -1).expand_as(pred)).float().flatten()
        total_images = targets[eval_setting].size(0)
        step_size = total_images // 1000
        i = 0
        while i < total_images:
            class_acc.append(correct[i:i+(step_size // 2)].sum().item() / (step_size // 2)) # in-class accuracy on the first half of the class block
i += step_size
w.append(torch.tensor(class_acc))
w = torch.stack(w)
# np.savetxt(models[0], w.numpy())
weighted_softmax_pred = []
for i, model in enumerate(models):
probs = softmax(logits[eval_setting][model], dim=1)
weighted_pred = torch.mul(probs, w[i])
weighted_softmax_pred.append(weighted_pred)
weighted_softmax_pred = torch.stack(weighted_softmax_pred)
return get_ensemble_logits(weighted_softmax_pred)
for eval_setting in ['val', 'imagenetv2-matched-frequency']:
logits[eval_setting]['class_weighted_top5_ensemble'] = class_weighted_ensemble(top5_models, eval_setting)
logits[eval_setting]['class_weighted_random5_ensemble'] = class_weighted_ensemble(random5_models, eval_setting)
logits[eval_setting]['class_weighted_top5_random5_ensemble'] = class_weighted_ensemble(top5_models + random5_models, eval_setting)
# class_weighted_ensemble(['vgg19'], 'val')
# class_weighted_ensemble(['efficientnet-l2-noisystudent'], 'val')
def acc_weighted_ensemble(models, eval_setting='val'):
"""Weights each model's softmax predictions based on its overall accuracy on the dataset.
Uses half of the dataset to calculate overall accuracy.
"""
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
weighted_softmax_pred = []
for model in models:
train_logit, _ = partition(logits[eval_setting][model], eval_setting)
acc = accuracy_topk(train_logit, eval_targets_train)
probs = softmax(logits[eval_setting][model], dim=1)
weighted_pred = torch.mul(probs, acc)
weighted_softmax_pred.append(weighted_pred)
weighted_softmax_pred = torch.stack(weighted_softmax_pred)
return get_ensemble_logits(weighted_softmax_pred)
for eval_setting in ['val', 'imagenetv2-matched-frequency']:
logits[eval_setting]['acc_weighted_top5_ensemble'] = acc_weighted_ensemble(top5_models, eval_setting)
logits[eval_setting]['acc_weighted_random5_ensemble'] = acc_weighted_ensemble(random5_models, eval_setting)
logits[eval_setting]['acc_weighted_top5_random5_ensemble'] = acc_weighted_ensemble(top5_models + random5_models, eval_setting)
###Output
_____no_output_____
###Markdown
Which examples do all the models miss?
###Code
def get_miss_freq(eval_setting, topk, model_pred):
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
train_indices, test_indices = partition(np.arange(0, targets[eval_setting].size(0)), eval_setting)
miss = [] # boolean tensor of correctness of model predictions for each model
for x_model, x_logits in logits[eval_setting].items():
pred = get_pred(logits[eval_setting][x_model], topk) # top k prediction
tmp = []
for i in range(pred.size(0)):
tmp.append(partition(pred[i], eval_setting)[0]) # training pred
pred = torch.stack(tmp)
model_pred[eval_setting][x_model] = pred
correct = pred.eq(eval_targets_train.view(1, -1).expand_as(pred))
corr = []
for i in range(correct.size(1)):
if correct[:,i].sum() > 0:
corr.append(True)
else:
corr.append(False)
miss.append(corr)
miss = torch.tensor(miss)
shared_miss = [] # identify examples that all models miss. has original indices
shared_miss_train = [] # has training indices
no_miss = [] # identify examples that no models miss. has original indices.
no_miss_train = [] # has training indices
for i in range(miss.size(1)): # iterate over every image
        if torch.sum(miss[:, i]) == 0: # every model incorrectly classified the image
shared_miss_train.append(i)
shared_miss.append(train_indices[i]) # map from shuffled index -> original index
elif torch.sum(miss[:, i]) == miss.size(0): # every model correctly classified the image
no_miss_train.append(i)
no_miss.append(train_indices[i])
print(f'total # examples that all models miss (top-{topk}; {eval_setting}):', len(shared_miss), '/', len(eval_targets_train))
miss_freq = defaultdict(int)
for i in shared_miss:
true_class = targets[eval_setting][i]
miss_freq[int(true_class)] += 1
no_miss_freq = defaultdict(int)
for i in no_miss:
true_class = targets[eval_setting][i]
no_miss_freq[int(true_class)] += 1
miss_freq = sort_dict(miss_freq)
no_miss_freq = sort_dict(no_miss_freq)
return miss_freq, no_miss_freq, shared_miss, shared_miss_train, no_miss, no_miss_train
def visualize_errors(classes, miss_dict, miss_dict_train, model_pred, eval_setting='val'):
if len(classes) == 0:
print('no classes to display')
return
classes_seen = []
for ctr, i in enumerate(miss_dict):
image_class = targets[eval_setting][i].item()
if image_class in classes and image_class not in classes_seen:
models_predicted = []
n = len(list(model_pred[eval_setting].values())[0])
for j in range(n):
models_predicted = [imagenet_dict[pred[j][miss_dict_train[ctr]].item()] for model, pred in model_pred[eval_setting].items()]
models_predicted = Counter(models_predicted)
print(f'top-{j+1} predictions:', models_predicted)
view_image(i, eval_setting)
classes_seen.append(image_class)
###Output
_____no_output_____
###Markdown
Imagenet V1 Top-1 Errors
###Code
model_pred = defaultdict(dict) # contains training top-1 model preds
miss_freq, no_miss_freq, shared_miss, shared_miss_train, no_miss, no_miss_train = get_miss_freq('val', 1, model_pred)
plt.hist(list(miss_freq.values()), bins=20)
plt.title('Distribution of # misclassified examples (top-1; val)')
plt.xlabel('# misclassified')
plt.ylabel('# classes')
plt.savefig(join(logdir, 'val_top1_classes_misclassified_dist'))
plt.show();
n = 20 # number of classes to display
plt.bar([imagenet_dict[i] for i in miss_freq.keys()][:n], list(miss_freq.values())[:n])
plt.xticks(rotation = 90)
plt.title(f'Top {n} classes misclassified by all models (top-1; val)')
plt.ylabel('# misclassifications')
plt.savefig(join(logdir, 'val_top1_top_classes_misclassified'))
plt.show();
top_worst_classes = list(miss_freq.keys())[:5]
visualize_errors(top_worst_classes, shared_miss, shared_miss_train, model_pred, 'val')
best_classes = list(no_miss_freq.keys())[-5:]
visualize_errors(best_classes, no_miss, no_miss_train, model_pred, 'val')
np.random.seed(42)
random_classes = np.random.permutation(list(miss_freq.keys()))[:10]
visualize_errors(random_classes, shared_miss, shared_miss_train, model_pred, 'val')
###Output
_____no_output_____
###Markdown
Top-3 Errors
###Code
top3_model_pred = defaultdict(dict) # contains training top-3 model pred
top3_miss_freq, top3_no_miss_freq, top3_shared_miss, top3_shared_miss_train, top3_no_miss, top3_no_miss_train = get_miss_freq('val', 3, top3_model_pred)
plt.hist(list(top3_miss_freq.values()), bins=20)
plt.title('Distribution of # misclassified examples (top-3; val)')
plt.xlabel('# misclassified')
plt.ylabel('# classes')
plt.savefig(join(logdir, 'val_top3_classes_misclassified_dist'))
plt.show();
n = 20 # number of classes to display
plt.bar([imagenet_dict[i] for i in top3_miss_freq.keys()][:n], list(top3_miss_freq.values())[:n])
plt.xticks(rotation = 90)
plt.title(f'Top {n} classes misclassified by all models (top-3; val)')
plt.ylabel('# misclassifications')
plt.savefig(join(logdir, 'val_top3_top_classes_misclassified'))
plt.show();
top_worst_classes = list(top3_miss_freq.keys())[:5]
visualize_errors(top_worst_classes, top3_shared_miss, top3_shared_miss_train, top3_model_pred)
best_classes = list(top3_no_miss_freq.keys())[-5:]
visualize_errors(best_classes, top3_no_miss, top3_no_miss_train, top3_model_pred)
np.random.seed(42)
random_classes = np.random.permutation(list(top3_miss_freq.keys()))[:10]
visualize_errors(random_classes, top3_shared_miss, top3_shared_miss_train, top3_model_pred)
###Output
_____no_output_____
###Markdown
Imagenet V2 Top-1 Errors
###Code
miss_freq, no_miss_freq, shared_miss, shared_miss_train, no_miss, no_miss_train = get_miss_freq('imagenetv2-matched-frequency', 1, model_pred)
plt.hist(list(miss_freq.values()), bins=20)
plt.title('Distribution of # misclassified examples (top-1; imagenetv2)')
plt.xlabel('# misclassified')
plt.ylabel('# classes')
plt.savefig(join(logdir, 'imagenetv2_top1_classes_misclassified_dist'))
plt.show();
n = 20 # number of classes to display
plt.bar([imagenet_dict[i] for i in miss_freq.keys()][:n], list(miss_freq.values())[:n])
plt.xticks(rotation = 90)
plt.title(f'Top {n} classes misclassified by all models (top-1; imagenetv2)')
plt.ylabel('# misclassifications')
plt.savefig(join(logdir, 'imagenetv2_top1_top_classes_misclassified'))
plt.show();
top_worst_classes = list(miss_freq.keys())[:5]
visualize_errors(top_worst_classes, shared_miss, shared_miss_train, model_pred, 'imagenetv2-matched-frequency')
best_classes = list(no_miss_freq.keys())[-5:]
visualize_errors(best_classes, no_miss, no_miss_train, model_pred, 'imagenetv2-matched-frequency')
np.random.seed(42)
random_classes = np.random.permutation(list(miss_freq.keys()))[:10]
visualize_errors(random_classes, shared_miss, shared_miss_train, model_pred, 'imagenetv2-matched-frequency')
###Output
_____no_output_____
###Markdown
Top-3 Errors
###Code
top3_miss_freq, top3_no_miss_freq, top3_shared_miss, top3_shared_miss_train, top3_no_miss, top3_no_miss_train = get_miss_freq('imagenetv2-matched-frequency', 3, top3_model_pred)
plt.hist(list(top3_miss_freq.values()), bins=5)
plt.title('Distribution of # misclassified examples (top-3; imagenetv2)')
plt.xlabel('# misclassified')
plt.ylabel('# classes')
plt.savefig(join(logdir, 'imagenetv2_top3_classes_misclassified_dist'))
plt.show();
n = 20 # number of classes to display
plt.bar([imagenet_dict[i] for i in top3_miss_freq.keys()][:n], list(top3_miss_freq.values())[:n])
plt.xticks(rotation = 90)
plt.title(f'Top {n} classes misclassified by all models (top-3; imagenetv2)')
plt.ylabel('# misclassifications')
plt.savefig(join(logdir, 'imagenetv2_top3_top_classes_misclassified'))
plt.show();
# TODO: fix no output from below
top_worst_classes = list(top3_miss_freq.keys())[:5]
visualize_errors(top_worst_classes, top3_shared_miss, top3_shared_miss_train, top3_model_pred)
best_classes = list(top3_no_miss_freq.keys())[-5:]
visualize_errors(best_classes, top3_no_miss, top3_no_miss_train, top3_model_pred)
np.random.seed(42)
random_classes = np.random.permutation(list(top3_miss_freq.keys()))[:10]
visualize_errors(random_classes, top3_shared_miss, top3_shared_miss_train, top3_model_pred)
###Output
_____no_output_____
###Markdown
Cross Class Accuracies
###Code
def plot_cross_class_acc(eval_setting='val'):
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
class_acc = {}
for model, pred in model_pred[eval_setting].items():
acc = {}
for i in np.arange(0, 1000):
mask = (eval_targets_train == i).expand_as(model_pred[eval_setting][model])
corr_class_pred = model_pred[eval_setting][model][mask] == i
acc[i] = int(torch.sum(corr_class_pred)) / len(corr_class_pred)
class_acc[model] = acc
fig, axs = plt.subplots(5, 4, figsize=(25,25), facecolor='w', edgecolor='k', sharey=True)
axs = axs.ravel()
i = 0
y_model = 'efficientnet-l2-noisystudent'
for model, acc in class_acc.items():
if model == y_model:
continue
axs[i].set_title(f'cross class accuracy \n ({eval_setting}; top-1)')
x_model_acc = list(acc.values())
y_model_acc = list(class_acc[y_model].values())
        if eval_setting == eval_settings[1]:
            # jitter the discrete per-class accuracies slightly so overlapping points remain visible
            x_model_acc = np.array(x_model_acc) + np.random.normal(0, 0.01, len(x_model_acc))
            y_model_acc = np.array(y_model_acc) + np.random.normal(0, 0.01, len(y_model_acc))
axs[i].scatter(x_model_acc, y_model_acc, s=10, alpha=0.2)
axs[i].plot([0, 1], [0, 1], '--', color='orange')
axs[i].set_xlabel(model)
axs[i].set_ylabel(y_model)
i += 1
plt.tight_layout()
plt.savefig(join(logdir, f'{eval_setting}_cross_class_accuracy'))
# plot_cross_class_acc('val')
plot_cross_class_acc('imagenetv2-matched-frequency')
def plot_v1v2_cross_class_acc(model):
eval_setting = eval_settings[0]
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
v1_class_acc = {}
pred = model_pred[eval_setting]
for i in np.arange(0, 1000):
mask = (eval_targets_train == i).expand_as(model_pred[eval_setting][model])
corr_class_pred = model_pred[eval_setting][model][mask] == i
v1_class_acc[i] = int(torch.sum(corr_class_pred)) / len(corr_class_pred)
eval_setting = eval_settings[1]
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
v2_class_acc = {}
pred = model_pred[eval_setting]
for i in np.arange(0, 1000):
mask = (eval_targets_train == i).expand_as(model_pred[eval_setting][model])
corr_class_pred = model_pred[eval_setting][model][mask] == i
v2_class_acc[i] = int(torch.sum(corr_class_pred)) / len(corr_class_pred) + np.random.normal(0, 0.01)
plt.scatter(list(v1_class_acc.values()), list(v2_class_acc.values()), alpha=0.2)
plt.plot([0, 1], [0, 1], '--', color='orange')
plt.title(f'{model} cross class accuracy (top-1)')
plt.xlabel('v1 class accuracy')
plt.ylabel('v2 class accuracy')
plt.show()
models = ['efficientnet-l2-noisystudent', 'top5_ensemble']
for model in models:
plot_v1v2_cross_class_acc(model)
###Output
_____no_output_____
###Markdown
Test Accuracies
###Code
def plot_topk_model_acc(topk, eval_setting='val', verbose=False):
model_acc = {}
for model, logit in logits[eval_setting].items():
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
train_logit, test_logit = partition(logit, eval_setting)
acc = accuracy_topk(train_logit, eval_targets_train, topk)
model_acc[model] = acc
model_acc = sort_dict(model_acc)
if verbose:
display(model_acc)
plt.xticks(rotation = 90)
plt.title(f'Model Accuracies (top-{topk}; {eval_setting})')
plt.ylabel('accuracy')
plt.scatter(model_acc.keys(), model_acc.values())
plt.savefig(join(logdir, f'model_acc_{eval_setting}_top{topk}'))
plt.show();
eval_setting = eval_settings[1]
logit = logits[eval_setting]['class_weighted_top5_ensemble']
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
train_logit, test_logit = partition(logit, eval_setting)
acc = accuracy_topk(train_logit, eval_targets_train, 1)
plot_topk_model_acc(1, 'val')
plot_topk_model_acc(3, 'val')
plot_topk_model_acc(1, 'imagenetv2-matched-frequency', verbose=True)
plot_topk_model_acc(3, 'imagenetv2-matched-frequency', verbose=True)
def plot_ensemble_acc(models, topk, title):
num_ensembled = np.arange(1, len(models))
ensemble_acc = {}
for i in tqdm(range(1, len(models)), leave=False, desc='ensemble'):
logit = ensemble_models(models[:i])
train_logit, test_logit = partition(logit)
acc = accuracy_topk(train_logit, eval_targets_train, topk)
ensemble_acc[i] = acc
plt.plot(list(ensemble_acc.keys()), list(ensemble_acc.values()));
plt.xlabel('Number of models ensembled')
plt.ylabel('Accuracy')
plt.title(f'{title} Ensemble Accuracies (top-{topk})')
plt.savefig(join(logdir, f'{title}_ensemble_acc_top{topk}'))
plt.show();
return ensemble_acc
# logits, targets = load_logits_targets(top_models)
# models_on_disk, _ = find_missing_logits(top_models)
# top_model_ens_acc_top1 = plot_ensemble_acc(models_on_disk, 1, 'top_models')
# top_model_ens_acc_top3 = plot_ensemble_acc(models_on_disk, 3, 'top_models')
# np.random.seed(42)
# random_models = np.random.permutation(model_names)[:10]
# logits, targets = load_logits_targets(random_models)
# models_on_disk, _ = find_missing_logits(random_models)
# rand_model_ens_acc_top1 = plot_ensemble_acc(models_on_disk, 1, 'random_models')
# rand_model_ens_acc_top3 = plot_ensemble_acc(models_on_disk, 3, 'random_models')
###Output
_____no_output_____
###Markdown
Calibration Curve
###Code
def plot_calibration_curve(model, eval_setting='val', topk=1):
"""Method #2 that Raaz described.
Checks the calibrtion of the max softmax scores normalized.
y_pred is the max softmax score for a given image
y_true is if the image was correctly classified or not
"""
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
x = logits[eval_setting][model]
if 'ensemble' not in model: # ensembled logits have already been through softmax
x = softmax(x, dim=1)
x[x != x] = 0 # set nan values to zero.
x = x / torch.sum(x, 1)[:, None] # normalize each image to total probability of 1
prob, pred = x.topk(topk, 1, True, True)
pred = pred.t()
train_pred = []
for i in range(pred.size(0)):
train_pred.append(partition(pred[i], eval_setting)[0]) # training pred
pred = torch.stack(train_pred)
prob = prob.sum(1) # sum normalized topk probabilities
correct = pred.eq(eval_targets_train.view(1, -1).expand_as(pred))
corr = []
for i in range(correct.size(1)):
if correct[:,i].sum() > 0: # one of topk classified correctly
corr.append(True)
else:
corr.append(False)
y_true = torch.tensor(corr).float()
y_prob = prob.flatten()
y_prob, _ = partition(y_prob, eval_setting)
normalize = False
    if y_prob.min() < 0 or y_prob.max() > 1: # scikit-learn complains if probabilities fall outside [0, 1]
normalize = True
fraction_of_positives, mean_predicted_value = calibration_curve(y_true, y_prob, n_bins=30, strategy='quantile', normalize=normalize)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs.ravel()
axs[0].set_title('Predicted Probability Distribution')
axs[0].hist(y_prob, density=True, bins=30, alpha=0.5)
axs[0].set_xlabel('probability')
axs[0].set_ylabel('# of elements')
axs[1].plot(mean_predicted_value, fraction_of_positives, linestyle='None', marker='D')
axs[1].set_ylabel('Fraction of positives')
axs[1].set_xlabel('Mean predicted value')
axs[1].plot([0, 1], [0, 1], '--')
axs[1].set_title(f'Calibration Curve for {model} \n (top-{topk}; {eval_setting})')
cutoff = 0.9 # predicted probability cutoff
zoom_idxs = np.argwhere(mean_predicted_value > cutoff).flatten()
zoom_mpv = mean_predicted_value[zoom_idxs]
zoom_fop = fraction_of_positives[zoom_idxs]
min_ax = min(min(zoom_mpv), min(zoom_fop)) # min value to start plot of y=x to get a square graph
axs[2].plot(zoom_mpv, zoom_fop, linestyle='None', marker='D')
axs[2].plot([min_ax, 1], [min_ax, 1], '--', color='orange')
axs[2].set_ylabel('Fraction of positives')
axs[2].set_xlabel('Mean predicted value')
axs[2].set_title('Zoomed in Calibration Curve')
plt.savefig(join(logdir, f'calibration_{model}_{eval_setting}_top{topk}'))
plt.show();
for eval_setting in eval_settings:
for model in ensembled_models + top5_models + random5_models:
for topk in [1, 3]:
plot_calibration_curve(model, eval_setting, topk)
print(f'***** Done with {eval_setting} *****')
def plot_class_calibration_curve(model, img_class):
"""Method #3 that Raaz described.
y_pred is the normalized softmax score for a given class
y_true is if the image's label was that class or not
"""
x = logits['val'][model]
if 'ensemble' not in model: # ensembled logits have already been through softmax
x = softmax(x, dim=1)
x = x / torch.sum(x, 1)[:, None] # normalize each image to total probability of 1
x = x[:, img_class]
y_true = (targets['val'] == img_class).float()
y_true, _ = partition(y_true)
y_prob, _ = partition(x)
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=25)
plt.plot(prob_true, prob_pred)
plt.plot([0, 1], [0, 1], '--')
plt.title(f'Calibration Curve: {model} // {imagenet_dict[img_class]}')
plt.savefig(join(logdir, f'class_calibration_{model}_{img_class}'))
plt.show();
# model = 'top5_ensemble'
# print('*'*15 + ' WORST CLASSES ' + 15* '*')
# for img_class in top_worst_classes:
# plot_class_calibration_curve(model, img_class)
# print('*'*15 + ' BEST CLASSES ' + 15* '*')
# for img_class in best_classes:
# plot_class_calibration_curve(model, img_class)
# print('*'*15 + ' RANDOM CLASSES ' + 15* '*')
# for img_class in random_classes:
# plot_class_calibration_curve(model, img_class)
###Output
_____no_output_____
###Markdown
Ensembling Analysis Cross Entropy Loss
###Code
def cross_entropy_loss(Y, Y_hat):
Y = Y.cpu().detach().numpy()
Y_hat = Y_hat.cpu().detach().numpy()
Y_hat += 1e-15
m = len(Y)
return -1/m * np.sum(Y * np.log(Y_hat))
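# Hedged sanity check (added for illustration only): a perfectly confident, correct
# prediction gives ~0 loss, while a uniform prediction over 1000 classes gives
# roughly log(1000) ≈ 6.9.
_y_true = F.one_hot(torch.tensor([3]), num_classes=1000)
assert cross_entropy_loss(_y_true, _y_true.float()) < 1e-6
assert abs(cross_entropy_loss(_y_true, torch.full((1, 1000), 1e-3)) - np.log(1000)) < 1e-2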
def plot_cross_entropy_loss(eval_setting, verbose=True):
Y = F.one_hot(targets[eval_setting], num_classes=1000)
Y, _ = partition(Y, eval_setting) # training Y
loss = {}
for model, logit in logits[eval_setting].items():
Y_hat, _ = partition(logit, eval_setting) # training Y_hat
if 'ensemble' not in model: # ensembled logits have already been through softmax
Y_hat = softmax(Y_hat, dim=1)
loss[model] = cross_entropy_loss(Y, Y_hat)
loss = sort_dict(loss)
if verbose:
display(loss)
plt.xticks(rotation = 90)
plt.title(f'Cross Entropy Loss- {eval_setting}')
plt.ylabel('loss')
plt.scatter(loss.keys(), loss.values())
plt.savefig(join(logdir, f'cross_entropy_loss_{eval_setting}'))
plt.show();
for eval_setting in eval_settings:
plot_cross_entropy_loss(eval_setting)
###Output
_____no_output_____
###Markdown
Multiclass ROC
###Code
def plot_multiclass_roc(model, eval_setting='val', topk=1):
""" https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
TODO: broken. fix
"""
x = logits[eval_setting][model]
if 'ensemble' not in model: # ensembled logits have already been through softmax
x = softmax(x, dim=1)
x[x != x] = 0 # set nan values to zero.
x = x / torch.sum(x, 1)[:, None] # normalize each image to total probability of 1
y = label_binarize(targets[eval_setting], classes=np.arange(1000))
n_classes = y.shape[1]
_, y_score = partition(x, eval_setting)
_, y_test = partition(y, eval_setting)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i].numpy())
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(np.ravel(y_test), np.ravel(y_score))
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
print('mean_tpr', mean_tpr)
print('all_fpr', all_fpr)
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
lw = 2
plt.figure()
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show();
# plot_multiclass_roc('dpn68')
def plot_roc_curve(models):
""" Plots ROC curve
y_pred is the max softmax score for a given image
y_true is if the image was correctly classified or not
"""
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
fig.subplots_adjust(top=0.95)
for model in models:
for ax_i, eval_setting in enumerate(eval_settings):
for ax_j, topk in enumerate([1, 3]):
eval_targets_train, eval_targets_test = partition(targets[eval_setting], eval_setting)
x = logits[eval_setting][model]
if 'ensemble' not in model: # ensembled logits have already been through softmax
x = softmax(x, dim=1)
x[x != x] = 0 # set nan values to zero.
x = x / torch.sum(x, 1)[:, None] # normalize each image to total probability of 1
prob, pred = x.topk(topk, 1, True, True)
pred = pred.t()
train_pred = []
for i in range(pred.size(0)):
train_pred.append(partition(pred[i], eval_setting)[0]) # training pred
pred = torch.stack(train_pred)
prob = prob.sum(1) # sum normalized topk probabilities
correct = pred.eq(eval_targets_train.view(1, -1).expand_as(pred))
corr = []
for i in range(correct.size(1)):
if correct[:,i].sum() > 0: # one of topk classified correctly
corr.append(True)
else:
corr.append(False)
y_true = torch.tensor(corr).float()
y_prob = prob.flatten()
y_prob, _ = partition(y_prob, eval_setting)
fpr, tpr, _ = roc_curve(y_true, y_prob)
roc_auc = auc(fpr, tpr)
axs[ax_i, ax_j].plot(fpr, tpr, lw=2, label=f'{model} (area = %0.4f)' % roc_auc)
axs[ax_i, ax_j].plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
axs[ax_i, ax_j].set_xlim([0.0, 1.0])
axs[ax_i, ax_j].set_ylim([0.0, 1.05])
axs[ax_i, ax_j].set_xlabel('False Positive Rate')
axs[ax_i, ax_j].set_ylabel('True Positive Rate')
axs[ax_i, ax_j].set_title(f'{eval_setting}, top-{topk}')
axs[ax_i, ax_j].legend(loc="best")
fig.suptitle(f'ROC Curves: {model}', fontweight='bold')
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
fig.savefig(join(logdir, f'ROC'))
plt.show();
plot_roc_curve(['top3_ensemble', 'top5_ensemble', 'random5_ensemble', 'efficientnet-l2-noisystudent', 'FixResNeXt101_32x48d_v2', 'vgg19'])
!conda install -n base -c conda-forge jupyterlab_widgets
!conda install -n py36 -c conda-forge ipywidgets
###Output
Collecting package metadata (current_repodata.json): done
Solving environment: /
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- conda-forge/linux-64::websockify==0.10.0=py38h497a2fe_0
failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: |
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
- conda-forge/linux-64::websockify==0.10.0=py38h497a2fe_0
failed with initial frozen solve. Retrying with flexible solve.
Solving environment: done
## Package Plan ##
environment location: /usr/local/linux/anaconda3.8
added / updated specs:
- jupyterlab_widgets
The following packages will be downloaded:
package | build
---------------------------|-----------------
certifi-2021.10.8 | py38h578d9bd_1 145 KB conda-forge
conda-4.10.3 | py38h578d9bd_3 3.1 MB conda-forge
jupyterlab_widgets-1.0.2 | pyhd8ed1ab_0 130 KB conda-forge
numpy-1.21.1 | py38h9894fe3_0 6.2 MB conda-forge
xeus-1.0.4 | h7d0c39e_0 947 KB conda-forge
xeus-python-0.12.5 | py38hcf90354_2 851 KB conda-forge
zeromq-4.3.4 | h9c3ff4c_0 352 KB conda-forge
------------------------------------------------------------
Total: 11.7 MB
The following NEW packages will be INSTALLED:
jupyterlab_widgets conda-forge/noarch::jupyterlab_widgets-1.0.2-pyhd8ed1ab_0
numpy conda-forge/linux-64::numpy-1.21.1-py38h9894fe3_0
The following packages will be UPDATED:
certifi 2021.10.8-py38h578d9bd_0 --> 2021.10.8-py38h578d9bd_1
conda 4.9.2-py38h578d9bd_0 --> 4.10.3-py38h578d9bd_3
The following packages will be DOWNGRADED:
libblas 3.9.0-11_linux64_openblas --> 3.9.0-8_openblas
libcblas 3.9.0-11_linux64_openblas --> 3.9.0-8_openblas
libgcc-ng 11.2.0-h1d223b6_8 --> 9.3.0-h2828fa1_18
libgomp 11.2.0-h1d223b6_8 --> 9.3.0-h2828fa1_18
liblapack 3.9.0-11_linux64_openblas --> 3.9.0-8_openblas
libopenblas 0.3.17-pthreads_h8fe5266_1 --> 0.3.12-pthreads_h4812303_1
libstdcxx-ng 11.2.0-he4da1e4_8 --> 9.3.0-h6de172a_18
openssl 1.1.1l-h7f98852_0 --> 1.1.1k-h7f98852_0
xeus 2.1.0-h7d0c39e_0 --> 1.0.4-h7d0c39e_0
xeus-python 0.13.0-py38hcf90354_2 --> 0.12.5-py38hcf90354_2
zeromq 4.3.4-h9c3ff4c_1 --> 4.3.4-h9c3ff4c_0
Proceed ([y]/n)?
###Markdown
Goal
The goal of this notebook is to explore the data provided by the US Census Bureau and translate the bulleted information in this article
to easy to understand charts.
###Code
import pandas as pd
import numpy as np
import re
def conv_non_digits(value):
    # Replace every non-digit character with '0' (e.g. suppressed values such as 'N')
    # so the string can be cast to an integer.
    result = int(re.sub("[^0-9]", "0", value))
    return result
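# Hedged spot checks (added for illustration only): every non-digit character becomes '0'
# before the cast, so suppressed markers turn into zeros.
assert conv_non_digits('N') == 0
assert conv_non_digits('12S4') == 1204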
cs_df = pd.read_csv("2018_data.csv")
cs_df_keys = pd.read_csv("2018_keys.csv")
cs_df_keys
cs_df = cs_df.drop(
cs_df.columns.difference(
["SEX_LABEL", "ETH_GROUP_LABEL",
"RACE_GROUP_LABEL","FIRMPDEMP",
"VET_GROUP_LABEL","EMPSZFI", "EMPSZFI_LABEL",
"EMP", "EMPSZFI_LABEL","RCPPDEMP", "RCPSZFI_LABEL",
]), axis = 1)
cs_df = cs_df.drop(0, axis = 0)
cs_df["RCPPDEMP"] = cs_df["RCPPDEMP"].apply(conv_non_digits)
cs_df["FIRMPDEMP"] = cs_df["FIRMPDEMP"].apply(conv_non_digits)
cs_df.head()
black_firms = cs_df[cs_df.RACE_GROUP_LABEL == "Black or African American"]
black_rev = black_firms["RCPPDEMP"].sum()
black_business = black_firms["FIRMPDEMP"].sum()
black_firms.iloc[0:45]
pacific_firms = cs_df[cs_df.RACE_GROUP_LABEL == "Native Hawaiian and Other Pacific Islander"]
pacific_rev = pacific_firms["RCPPDEMP"].sum()
pacific_business = pacific_firms["FIRMPDEMP"].sum()
print(pacific_rev)
print(pacific_business)
native_firms = cs_df[cs_df.RACE_GROUP_LABEL == "American Indian and Alaska Native"]
native_rev = native_firms["RCPPDEMP"].sum()
native_business = native_firms["FIRMPDEMP"].sum()
print(native_rev)
print(native_business)
asian_firms = cs_df[cs_df.RACE_GROUP_LABEL == "Asian"]
asian_rev = asian_firms["RCPPDEMP"].sum()
asian_business = asian_firms["FIRMPDEMP"].sum()
print(asian_rev)
print(asian_business)
minority_firms = cs_df[cs_df.RACE_GROUP_LABEL == "Minority"]
minority_rev = minority_firms["RCPPDEMP"].sum()
minority_business = minority_firms["FIRMPDEMP"].sum()
print(minority_rev)
print(minority_business)
non_minority_firms = cs_df[cs_df.RACE_GROUP_LABEL == "Nonminority"]
non_minority_rev = non_minority_firms["RCPPDEMP"].sum()
non_minority_business = non_minority_firms["FIRMPDEMP"].sum()
print(non_minority_rev)
print(non_minority_business)
hispanic_firms = cs_df[cs_df.ETH_GROUP_LABEL == "Hispanic"]
hispanic_rev = hispanic_firms["RCPPDEMP"].sum()
hispanic_business = hispanic_firms["FIRMPDEMP"].sum()
print(hispanic_rev)
print(hispanic_business)
veteran_firms = cs_df[cs_df.VET_GROUP_LABEL == "Veteran"]
vet_rev = veteran_firms["RCPPDEMP"].sum()
vet_business = veteran_firms["FIRMPDEMP"].sum()
print(vet_rev)
print(vet_business)
female_firms = cs_df[cs_df.SEX_LABEL == "Female"]
female_rev = female_firms["RCPPDEMP"].sum()
female_business = female_firms["FIRMPDEMP"].sum()
print(female_rev)
print(female_business)
male_firms = cs_df[cs_df.SEX_LABEL == "Male"]
male_rev = male_firms["RCPPDEMP"].sum()
male_business = male_firms["FIRMPDEMP"].sum()
print(male_rev)
print(male_business)
###Output
116714007742
43104886
###Markdown
What is the number/receipts for “all firms/business”
- Asian Owned Businesses and their Revenue
- Black Owned Businesses and their Revenue
- Hispanic Owned Businesses and their Revenue
- Native Hawaiian/Pacific Islander Owned Businesses and their Revenue
- Veteran Owned Businesses and their Revenue
- Woman Owned Businesses and their Revenue Visualize The Data
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as tick
import numpy as np
%matplotlib inline
columns = ['CL_LABEL', "REVENUE", "QTY_BUSINESSES"]
df_ = pd.DataFrame({
    "CL_LABEL": ["Black or African American", "Asian", "Hispanic",
                 "American Indian or Native Alaskan", "Native Hawaiian or Pacific Islander",
                 "Veteran", "Female", "Male", "Minority", "Non-Minority"],
    "REVENUE": [black_rev, asian_rev, hispanic_rev, native_rev, pacific_rev,
                vet_rev, female_rev, male_rev, minority_rev, non_minority_rev],
    "QTY_BUSINESSES": [black_business, asian_business, hispanic_business, native_business,
                       pacific_business, vet_business, female_business, male_business,
                       minority_business, non_minority_business],
}, columns=columns)
df_ = df_.sort_values(by= "REVENUE")
df_
sns.set(font_scale=1.4)
def reformat_large_tick_values(tick_val, pos):
"""
Turns large tick values (in the billions, millions and thousands)
such as 4500 into 4.5K and also appropriately turns 4000 into 4K (no zero after the decimal).
"""
if tick_val >= 1000000000:
val = round(tick_val/1000000000, 1)
new_tick_format = '{:}B'.format(val)
elif tick_val >= 1000000:
val = round(tick_val/1000000, 1)
new_tick_format = '{:}M'.format(val)
elif tick_val >= 1000:
val = round(tick_val/1000, 1)
new_tick_format = '{:}K'.format(val)
elif tick_val < 1000:
new_tick_format = round(tick_val, 1)
else:
new_tick_format = tick_val
# make new_tick_format into a string value
new_tick_format = str(new_tick_format)
# code below will keep 4.5M as is but change values such as 4.0M to 4M since that zero after the decimal isn't needed
index_of_decimal = new_tick_format.find(".")
if index_of_decimal != -1:
value_after_decimal = new_tick_format[index_of_decimal+1]
if value_after_decimal == "0":
# remove the 0 after the decimal point since it's not needed
new_tick_format = new_tick_format[0:index_of_decimal] + new_tick_format[index_of_decimal+2:]
return new_tick_format
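# Hedged spot checks (added for illustration only): thousands, millions and billions are
# abbreviated, and a trailing '.0' is dropped.
assert reformat_large_tick_values(4500, None) == '4.5K'
assert reformat_large_tick_values(4000, None) == '4K'
assert reformat_large_tick_values(2_300_000_000, None) == '2.3B'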
###Output
_____no_output_____
###Markdown
Revenue and Quantity for All Classifications
###Code
fig = plt.figure(figsize=(20,8))
plt.bar(df_.CL_LABEL, df_.REVENUE, width=.20,
edgecolor = "blue", label= ("revenue"))
plt.title("Total Revenue per Classification", fontweight='bold', color = 'blue', fontsize='18')
plt.ylabel("Revenue per $1000")
plt.xlabel("Classification Labels", fontweight="bold")
plt.xticks(df_.CL_LABEL, rotation= 70, fontsize="12")
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
for i, data in enumerate(df_.REVENUE):
plt.text(x = i, y = data, s = "$" + str(data), fontweight="bold", ha= "center", va= "bottom")
plt.legend(loc= "upper left")
plt.show()
fig = plt.figure(figsize=(12,8))
plt.bar(df_.CL_LABEL, df_.QTY_BUSINESSES, width=.35,
edgecolor = "blue", label= ("# of Businesses"))
plt.title("Quantity of Businesses per Classification", fontweight='bold', color = 'blue', fontsize='18')
plt.ylabel("Frequency of Businesses")
plt.xlabel("Classification Labels", fontweight="bold")
plt.xticks(df_.CL_LABEL, rotation=90, fontsize="12")
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
for i, data in enumerate(df_.QTY_BUSINESSES):
plt.text(x = i, y = data, s = str(data), fontweight="bold", ha= "center", va= "bottom")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Revenue and Quantity of Business for All Ethnicities
###Code
df_race = df_.copy()
df_race = df_race.drop([5,6,7,8,9], axis=0)
df_race = df_race.sort_values(by= "REVENUE")
fig = plt.figure(figsize=(12,8))
plt.bar(df_race.CL_LABEL, df_race.REVENUE, edgecolor = "orange", label= ("Revenue"))
plt.title("Revenue Generated By Ethnicity", fontweight="bold", color = "blue")
plt.xlabel("Ethnicity Classification", fontweight = "bold")
plt.ylabel("Revenue per $1000")
plt.xticks(df_race.CL_LABEL, rotation=90, fontsize="12")
for i, data in enumerate(df_race.REVENUE):
plt.text(x = i, y = data, s = "$" + str(data), fontweight="bold", ha= "center", va="bottom")
plt.legend()
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
plt.show()
fig = plt.figure(figsize=(12,5))
plt.bar(df_race.CL_LABEL, df_race.QTY_BUSINESSES, edgecolor = "orange", label= ("# of Businesses"))
plt.title("Quantity of Businesses By Ethnicity", fontweight="bold", color = "blue")
plt.xlabel("Ethnicity Classification", fontweight = "bold")
plt.ylabel("Frequency of Businesses")
plt.xticks(df_race.CL_LABEL, rotation=90, fontsize="12")
for i, data in enumerate(df_race.QTY_BUSINESSES):
plt.text(x = i, y = data, s =str(data), fontweight="bold", ha= "center", va="bottom")
plt.legend()
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
plt.show()
###Output
_____no_output_____
###Markdown
Revenue and Quantity for Male vs. Female
###Code
df_sex = df_.copy()
df_sex = df_sex.drop([0,1,2,3,4,5,8,9], axis=0).sort_values(by= "REVENUE")
df_sex
fig = plt.figure(figsize=(12,6))
plt.bar(df_sex.CL_LABEL, df_sex.REVENUE, edgecolor = "orange", label= ("Revenue"), width= .35)
plt.title("Revenue Generated By Gender", fontweight="bold", color = "blue")
plt.xlabel("Gender Classification", fontweight = "bold")
plt.ylabel("Revenue per $1000")
plt.xticks(df_sex.CL_LABEL, fontsize="12")
for i, data in enumerate(df_sex.REVENUE):
plt.text(x = i, y = data, s = "$" + str(data), fontweight="bold", ha= "center", va="bottom")
plt.legend()
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
plt.show()
fig = plt.figure(figsize=(12,5))
plt.bar(df_sex.CL_LABEL, df_sex.QTY_BUSINESSES, edgecolor = "orange", label= ("# of Businesses"), width= .35)
plt.title("Quantity of Businesses By Gender", fontweight="bold", color = "blue")
plt.xlabel("Gender Classification", fontweight = "bold")
plt.ylabel("Frequency of Businesses")
plt.xticks(df_sex.CL_LABEL, fontsize="12")
for i, data in enumerate(df_sex.QTY_BUSINESSES):
plt.text(x = i, y = data, s = str(data), fontweight="bold", ha= "center", va="bottom")
plt.legend()
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
plt.show()
###Output
_____no_output_____
###Markdown
Revenue and Quantity for Minority vs. Non-Minority
###Code
df_q = df_.copy()
df_q = df_q.drop([0,1,2,3,4,5,6,7], axis=0).sort_values(by= "REVENUE")
fig = plt.figure(figsize=(12,5))
plt.bar(df_q.CL_LABEL, df_q.REVENUE, edgecolor = "orange", label= ("Revenue"), width= .35)
plt.title("Revenue Generated Minority vs. Non-Minority", fontweight="bold", color = "blue")
plt.xlabel("Group", fontweight = "bold")
plt.ylabel("Revenue per $1000")
plt.xticks(df_q.CL_LABEL, fontsize="12")
for i, data in enumerate(df_q.REVENUE):
plt.text(x = i, y = data, s = "$" + str(data), fontweight="bold", ha= "center", va="bottom")
plt.legend()
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
plt.show()
fig = plt.figure(figsize=(12,5))
plt.bar(df_q.CL_LABEL, df_q.QTY_BUSINESSES, edgecolor = "orange", label= ("# of Businesses"), width= .35)
plt.title("Quantity of Businesses Minority vs. Non-Minority", fontweight="bold", color = "blue")
plt.xlabel("Group", fontweight = "bold")
plt.ylabel("Frequency of Businesses")
plt.xticks(df_q.CL_LABEL, fontsize="12")
for i, data in enumerate(df_q.QTY_BUSINESSES):
plt.text(x = i, y = data, s = str(data), fontweight="bold", ha= "center", va="bottom")
plt.legend()
ax = plt.gca()
ax.yaxis.set_major_formatter(tick.FuncFormatter(reformat_large_tick_values));
plt.show()
###Output
_____no_output_____
###Markdown
EDA
###Code
import pandas as pd
import matplotlib.pyplot as plt
import os
import PIL
import numpy as np
###Output
_____no_output_____
###Markdown
Check out the csv with data on the images
###Code
df = pd.read_csv('./data/train_ship_segmentations_v2.csv')
df.info()
(81723 - 39167) / 192556
###Output
_____no_output_____
###Markdown
Roughly 22% of the images have ships in them based on how many rows in the CSV contain encoded pixels
###Code
81723 - 39167
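# Sketch (assuming the mask column is named 'EncodedPixels', as in the raw CSV):
# the same ~22% figure computed from the dataframe rather than from hard-coded counts.
n_ship_images = df.dropna(subset=['EncodedPixels'])['ImageId'].nunique()
n_ship_images / df['ImageId'].nunique()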
df.head()
###Output
_____no_output_____
###Markdown
It looks like there can be multiple entries for the `ImageId` column, which is how multiple ships are encoded in a single image.
###Code
df.shape[0] - df.groupby('ImageId').sum().shape[0]
###Output
_____no_output_____
###Markdown
Of the 81723 rows with ships present, 39167 are extra rows belonging to images that contain more than one ship.
###Code
ships = df.dropna()
ships
###Output
_____no_output_____
###Markdown
What is the distribution of ships per image?
###Code
ships['ImageId'].value_counts()
(ships['ImageId'].value_counts() == 15).value_counts()[1]
mask = ships['ImageId'].value_counts() == 15  # same check via an explicit boolean mask
mask.value_counts()[1]
ship_counts = {}
for i in range(1, 16):
ship_counts[i] = (ships['ImageId'].value_counts() == i).value_counts()[1]
ship_counts = {'index': [i for i in range(1, 16)], 'count': [(ships['ImageId'].value_counts() == i).value_counts()[1] for i in range(1, 16)]}
ship_counts
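# A more direct tally of the same distribution: ships per image via a double value_counts
ships['ImageId'].value_counts().value_counts().sort_index()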
counts = pd.DataFrame.from_dict(ship_counts)
counts
counts.set_index('index', inplace=True)
counts.plot(kind='bar', title='Distribution of ship counts');
counts.plot(kind='bar', title='Log distribution of ship counts', logy=True);
###Output
_____no_output_____
###Markdown
Let's visualize some images
###Code
train = os.listdir('./data/train_v2/')
len(train)
train[:4]
PIL.Image.open(f'./data/train_v2/{train[0]}').resize((200, 200))
###Output
_____no_output_____
###Markdown
Image to numpy array
###Code
img = PIL.Image.open(f'./data/train_v2/{train[0]}').resize((200, 200))
img_array = np.array(img)
plt.imshow(img_array)
###Output
_____no_output_____
###Markdown
EDA
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option("display.max_columns", None)
###Output
_____no_output_____
###Markdown
Games data
###Code
all_games = pd.read_csv("data/games_with_features.csv", index_col="id")
all_games.head()
###Output
_____no_output_____
###Markdown
Using Data from 1979 to 2020
###Code
all_seasons = np.sort(all_games["season"].unique())
all_seasons
###Output
_____no_output_____
###Markdown
Historically, the home team wins 61% of the time
###Code
n_home_wins = all_games[all_games["home_team_score"].gt(all_games["visitor_team_score"])].shape[0] # number of games where home team won
n_games = all_games.shape[0] # number of games
home_win_pct = round(n_home_wins/n_games, 2)
print(n_home_wins, n_games, home_win_pct, sep="\n")
home_win_pcts = []
for season in all_seasons:
season_games = all_games[all_games["season"].eq(season)]
n_home_wins = season_games[season_games["home_team_score"].gt(season_games["visitor_team_score"])].shape[0] # number of games where home team won
n_games = season_games.shape[0] # number of games
home_win_pct = round(n_home_wins/n_games, 2)
home_win_pcts.append(home_win_pct)
print(season, n_home_wins, n_games, home_win_pct)
plt.figure()
plt.plot(all_seasons, home_win_pcts, c="darkorange")
plt.title("Home Team Win % by Year")
plt.grid()
plt.ylabel("Win %")
plt.ylim(.5, .7)
plt.yticks(ticks=plt.yticks()[0], labels=(plt.yticks()[0]*100).round(1))
plt.show()
###Output
_____no_output_____
###Markdown
Naturally it follows that teams score more points when they are playing at home
###Code
home_avg = all_games[["home_team.full_name", "season", "home_team_avg_score"]].groupby(["home_team.full_name", "season"]).mean().values
visiting_avg = all_games[["visitor_team.full_name", "season", "visitor_team_avg_score"]].groupby(["visitor_team.full_name", "season"]).mean().values
avg_score_by_team = all_games[["home_team.full_name", "season", "home_team_avg_score"]].groupby(["home_team.full_name", "season"]).mean()
avg_score_by_team.columns = ["avg_score_as_home"]
avg_score_by_team["avg_score_as_home"] = home_avg
avg_score_by_team["avg_score_as_visitor"] = visiting_avg
avg_score_by_team["avg_score_mean"] = (home_avg + visiting_avg) / 2
avg_score_by_team["avg_score_diff"] = (home_avg - visiting_avg)
avg_score_by_team.reset_index(inplace=True)
plt.figure()
plt.hist(avg_score_by_team["avg_score_as_home"], alpha=0.8, label="Home", bins=10)
plt.hist(avg_score_by_team["avg_score_as_visitor"], alpha=0.8, label="Away", bins=10)
plt.title("Points Scored By Home Teams vs Away Teams")
plt.xlabel("Points Scored")
plt.legend()
plt.show()
plt.figure()
plt.hist(all_games["home_team_score"], alpha=0.8, label="Home", bins=20)
plt.hist(all_games["visitor_team_score"], alpha=0.8, label="Visitor", bins=20)
plt.xlim(60,140)
plt.title("Points Scored By Home Teams vs Away Teams - All Years")
plt.xlabel("Points Scored")
plt.ylabel("# of Games")
plt.legend()
plt.show()
avg_score_by_team[avg_score_by_team["avg_score_as_home"].gt(120)]
plt.figure()
plt.hist(avg_score_by_team[avg_score_by_team["season"].isin([2019,2020])]["avg_score_as_home"], alpha=0.8, label="Home")
plt.hist(avg_score_by_team[avg_score_by_team["season"].isin([2019,2020])]["avg_score_as_visitor"], alpha=0.8, label="Away")
plt.title("Home and Away avg pts, 2019 & 2020")
plt.legend()
plt.show()
plt.figure()
plt.hist(all_games[all_games["season"].isin([2019,2020])]["home_team_score"], alpha=0.8, label="Home",
bins=[60,70,80,90,100,110,120,130,140,150,160])
plt.hist(all_games[all_games["season"].isin([2019,2020])]["visitor_team_score"], alpha=0.8, label="Away",
bins=[60,70,80,90,100,110,120,130,140,150,160])
plt.title("Points Scored By Home Teams vs Away Teams - 2019 & 2020")
plt.xlabel("Points Scored")
plt.ylabel("# of Games")
plt.legend()
plt.show()
avg_score_by_team["avg_score_diff"].mean()
print(all_games["home_team_avg_score"].gt(all_games["visitor_team_avg_score"]).value_counts())
print(round(43173 / 50460, 2))
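# Same ratio computed directly from the data instead of the hard-coded counts above
round(all_games["home_team_avg_score"].gt(all_games["visitor_team_avg_score"]).mean(), 2)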
avg_score_by_season = all_games[["season", "home_team_avg_score", "visitor_team_avg_score"]].groupby("season").mean()
avg_score_by_season["mean_avg_score"] = (avg_score_by_season["home_team_avg_score"] + avg_score_by_season["visitor_team_avg_score"]) / 2
avg_score_by_season["diff"] = avg_score_by_season["home_team_avg_score"] - avg_score_by_season["visitor_team_avg_score"]
plt.figure()
plt.title("Points Scored Per Game Average by Season")
plt.plot(avg_score_by_season.index, avg_score_by_season["mean_avg_score"], color="darkorange")
plt.show()
plt.figure()
plt.plot(avg_score_by_season.index, avg_score_by_season["diff"])
plt.title("Avg difference in pts scored at home vs away")
plt.show()
denver_home = all_games[all_games["home_team.full_name"].eq("Denver Nuggets")]
denver_away = all_games[all_games["visitor_team.full_name"].eq("Denver Nuggets")]
denver = pd.concat([denver_home, denver_away])
denver_home_win_pct = denver_home[["season", "winner"]].groupby("season").sum() / denver_home[["season", "winner"]].groupby("season").count()
denver_away_win_pct = 1 - denver_away[["season", "winner"]].groupby("season").sum() / denver_away[["season", "winner"]].groupby("season").count()
plt.figure()
plt.plot(denver_home_win_pct)
plt.plot(denver_away_win_pct)
plt.show()
plt.figure()
plt.bar(denver_home_win_pct.index, denver_home_win_pct.winner)
plt.bar(denver_home_win_pct.index, denver_away_win_pct.winner)
plt.show()
denver_home["winner"].value_counts(normalize=True)
denver_away["winner"].value_counts(normalize=True)
not_denver_home = all_games[all_games["home_team.full_name"].ne("Denver Nuggets")]
not_denver_away = all_games[all_games["visitor_team.full_name"].ne("Denver Nuggets")]
not_denver = pd.concat([not_denver_home, not_denver_away])
not_denver_home_win_pct = not_denver_home[["season", "winner"]].groupby("season").sum() / not_denver_home[["season", "winner"]].groupby("season").count()
not_denver_away_win_pct = 1 - not_denver_away[["season", "winner"]].groupby("season").sum() / not_denver_away[["season", "winner"]].groupby("season").count()
plt.figure()
plt.plot(not_denver_home_win_pct)
plt.plot(not_denver_away_win_pct)
plt.show()
plt.figure()
plt.bar(not_denver_home_win_pct.index, not_denver_home_win_pct.winner)
plt.bar(not_denver_home_win_pct.index, not_denver_away_win_pct.winner)
plt.show()
# 'all_home' was not defined above; assuming the intent was the league-wide home win % per season
all_games[["season", "winner"]].groupby("season").sum() / all_games[["season", "winner"]].groupby("season").count()
avg_score_by_season
###Output
_____no_output_____
###Markdown
Import Modules
###Code
# Core
import pandas as pd
import numpy as np
# Visualizations
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from scipy import stats
###Output
_____no_output_____
###Markdown
Import Data
###Code
df_click = pd.read_csv('data/eda_click_data.csv')
df_assess = pd.read_csv('data/eda_assess_data.csv')
###Output
_____no_output_____
###Markdown
Show all dataframe columns for analysis.
###Code
pd.options.display.max_columns = None
pd.options.display.max_rows = None
###Output
_____no_output_____
###Markdown
EDA & Data Cleaning
###Code
df_click.describe()
df_assess.describe()
df_click.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 37030 entries, 0 to 37029
Data columns (total 38 columns):
is_banked 37030 non-null int64
code_module 37030 non-null object
code_presentation 37030 non-null object
assessment_type 37030 non-null object
module_presentation_length 37030 non-null int64
gender 37030 non-null object
region 37030 non-null object
highest_education 37030 non-null object
imd_band 37030 non-null object
age_band 37030 non-null object
num_of_prev_attempts 37030 non-null int64
studied_credits 37030 non-null int64
disability 37030 non-null object
final_result 37030 non-null object
dataplus 37030 non-null float64
dualpane 37030 non-null float64
externalquiz 37030 non-null float64
folder 37030 non-null float64
forumng 37030 non-null float64
glossary 37030 non-null float64
homepage 37030 non-null float64
htmlactivity 37030 non-null float64
oucollaborate 37030 non-null float64
oucontent 37030 non-null float64
ouelluminate 37030 non-null float64
ouwiki 37030 non-null float64
page 37030 non-null float64
questionnaire 37030 non-null float64
quiz 37030 non-null float64
repeatactivity 37030 non-null float64
resource 37030 non-null float64
sharedsubpage 37030 non-null float64
subpage 37030 non-null float64
url 37030 non-null float64
assess_date 37030 non-null float64
length_no_cred_ratio 37030 non-null float64
date_registration 37030 non-null float64
score 37030 non-null float64
dtypes: float64(24), int64(4), object(10)
memory usage: 10.7+ MB
###Markdown
Some nulls in the 'assess_date' and 'score' variables in the assess dataset.
###Code
df_assess.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 153537 entries, 0 to 153536
Data columns (total 19 columns):
is_banked 153537 non-null int64
score 153436 non-null float64
code_module 153537 non-null object
code_presentation 153537 non-null object
assessment_type 153537 non-null object
weight 153537 non-null float64
module_presentation_length 153537 non-null int64
gender 153537 non-null object
region 153537 non-null object
highest_education 153537 non-null object
imd_band 153537 non-null object
age_band 153537 non-null object
num_of_prev_attempts 153537 non-null int64
studied_credits 153537 non-null int64
disability 153537 non-null object
final_result 153537 non-null object
date_registration 153537 non-null float64
assess_date 150888 non-null float64
length_no_cred_ratio 153537 non-null float64
dtypes: float64(5), int64(4), object(10)
memory usage: 22.3+ MB
###Markdown
Impute assess_date and score in df_assess with their medians.
###Code
df_assess['assess_date'] = df_assess['assess_date'].fillna(df_assess['assess_date'].median())
df_assess['score'] = df_assess['score'].fillna(df_assess['score'].median())
df_assess.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 153537 entries, 0 to 153536
Data columns (total 19 columns):
is_banked 153537 non-null int64
score 153537 non-null float64
code_module 153537 non-null object
code_presentation 153537 non-null object
assessment_type 153537 non-null object
weight 153537 non-null float64
module_presentation_length 153537 non-null int64
gender 153537 non-null object
region 153537 non-null object
highest_education 153537 non-null object
imd_band 153537 non-null object
age_band 153537 non-null object
num_of_prev_attempts 153537 non-null int64
studied_credits 153537 non-null int64
disability 153537 non-null object
final_result 153537 non-null object
date_registration 153537 non-null float64
assess_date 153537 non-null float64
length_no_cred_ratio 153537 non-null float64
dtypes: float64(5), int64(4), object(10)
memory usage: 22.3+ MB
###Markdown
Correlation Matrix to identify variables with strong correlations. Nothing is really correlated in the assess file. Code Source: https://stackoverflow.com/questions/51347398/need-to-save-pandas-correlation-highlighted-table-cmap-matplotlib-as-png-image
###Code
df_assess.corr(method='kendall').style.format("{:.2}").background_gradient(cmap=plt.get_cmap('coolwarm'), axis=1)
###Output
_____no_output_____
###Markdown
Highest correlation is between dataplus and questionnaire, 0.66. I do not see a strong enough correlation to remove anything in the click file.
###Code
df_click.corr(method='kendall').style.format("{:.2}").background_gradient(cmap=plt.get_cmap('coolwarm'), axis=1)
###Output
_____no_output_____
###Markdown
Look at score outliers. Outliers are valid values and represent variance. Not removing.
###Code
sns.boxplot(x=df_click['score']);
sns.boxplot(x=df_assess['score']);
###Output
_____no_output_____
###Markdown
Feature engineering - create average clicks for the clicks dataset. Score vs. average clicks may make a good scatter. Code source: https://stackoverflow.com/questions/25748683/pandas-sum-dataframe-rows-for-given-columns/25748826
###Code
# Build the list of click-activity columns by removing the non-click features;
# note that 'length_no_cred_ratio' is not removed and therefore also contributes to avg_click.
non_click_cols = [
    'is_banked', 'code_module', 'code_presentation', 'assessment_type',
    'module_presentation_length', 'gender', 'region', 'highest_education',
    'imd_band', 'age_band', 'num_of_prev_attempts', 'studied_credits',
    'disability', 'final_result', 'assess_date', 'date_registration', 'score'
]
col_list = [col for col in list(df_click) if col not in non_click_cols]
df_click['avg_click'] = df_click[col_list].mean(axis=1)
df_click.head(2)
###Output
_____no_output_____
###Markdown
Scatter - Average Clicks / Score.
###Code
fig = plt.figure()
ax = plt.gca()
ax.scatter(df_click['avg_click'] ,df_click['score'] , c='blue', alpha=0.05, edgecolors='none');
###Output
_____no_output_____
###Markdown
Disability Proportions Bar Plot - Click Dataset.
###Code
df_click['disability'].value_counts(normalize=True).plot.bar(color='Blue');
###Output
_____no_output_____
###Markdown
Age Band Proportion Bar Plot - Click Dataset.
###Code
df_click['age_band'].value_counts(normalize=True).plot.bar(color='Gray');
###Output
_____no_output_____
###Markdown
Highest Education Proportion Bar Plot - Click Dataset.
###Code
df_click['highest_education'].value_counts(normalize=True).plot.bar(color='Green');
###Output
_____no_output_____
###Markdown
Region Proportion Bar Plot - Click Dataset.
###Code
df_click['region'].value_counts(normalize=True).plot.bar(color='Purple');
###Output
_____no_output_____
###Markdown
Assessment Type Proportion Bar Plot - Click Dataset.
###Code
df_click['assessment_type'].value_counts(normalize=True).plot.bar(color='Orange');
###Output
_____no_output_____
###Markdown
Gender Proportion Bar Plot - Click Dataset.
###Code
df_click['gender'].value_counts(normalize=True).plot.bar(color='Maroon');
###Output
_____no_output_____
###Markdown
Final Result Proportion Bar Plot - Click Dataset.
###Code
df_click['final_result'].value_counts(normalize=True).plot.bar(color='Gray');
###Output
_____no_output_____
###Markdown
Boxplot Score/Highest Education - Click Dataset.
###Code
sns.boxplot(x=df_click['score'],y=df_click['highest_education']);
###Output
_____no_output_____
###Markdown
Boxplot Score/Assessment Type - Click Dataset.
###Code
sns.boxplot(x=df_click['score'],y=df_click['assessment_type']);
###Output
_____no_output_____
###Markdown
Boxplot Score/Region - Click Dataset.
###Code
sns.boxplot(x=df_click['score'],y=df_click['region']);
###Output
_____no_output_____
###Markdown
Boxplot Score/Highest Education - Code Module.
###Code
sns.boxplot(x=df_click['score'],y=df_click['code_module']);
###Output
_____no_output_____
###Markdown
Boxplot Score/Age Band - Click Dataset.
###Code
sns.boxplot(x=df_click['score'],y=df_click['age_band']);
###Output
_____no_output_____
###Markdown
Boxplot Score/Is Banked - Click Dataset.
###Code
df_click.boxplot('score','is_banked', rot=60);
###Output
_____no_output_____
###Markdown
Histogram & Normal Probability Plot on Score - Click Dataset. Code Source: https://www.kaggle.com/vikrishnan/house-sales-price-using-regression
###Code
sns.distplot(df_click['score'], hist=True);
fig = plt.figure()
res = stats.probplot(df_click['score'], plot=plt)
###Output
/Users/christiandavies/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Histogram & Normal Probability Plot on Score - Assessment Dataset.
###Code
sns.distplot(df_assess['score'], hist=True);
fig = plt.figure()
res = stats.probplot(df_assess['score'], plot=plt)
###Output
/Users/christiandavies/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Preprocessing for Modeling
Function that replaces zeros with NaNs. Code Source: https://stackoverflow.com/questions/49575897/cant-replace-0-to-nan-in-python-using-pandas
###Code
def zero_to_nan(df):
    # replace zeros with NaN in the click-activity columns (positions 14-33)
    for i in range(14,34):
        df.iloc[:, i] = df.iloc[:, i].replace(0, np.nan)
zero_to_nan(df_click)
###Output
_____no_output_____
###Markdown
Function that imputes median for NaN's.
###Code
def impute_median(df):
    # fill the NaNs introduced above with each column's median
    for i in range(14,34):
        df.iloc[:, i] = df.iloc[:, i].fillna(df.iloc[:, i].median())
impute_median(df_click)
###Output
_____no_output_____
###Markdown
Expand categorical variables to binary classifiers - for ml.Code Source: DataCamp - Machine learning with the experts school budgets course.
###Code
df_click = pd.get_dummies(df_click, prefix_sep='_', drop_first=True)
df_assess = pd.get_dummies(df_assess, prefix_sep='_', drop_first=True)
print(df_click.shape)
print(df_assess.shape)
df_click.head(2)
df_assess.head(2)
###Output
_____no_output_____
###Markdown
Writing it all out for the modeling notebooks.
###Code
df_click.to_csv(r'data/post_eda_click.csv',index=False)
df_assess.to_csv(r'data/post_eda_assess.csv',index=False)
###Output
_____no_output_____
###Markdown
Data Fields
datetime - hourly date + timestamp
season - 1 = spring, 2 = summer, 3 = fall, 4 = winter
holiday - whether the day is considered a holiday
workingday - whether the day is neither a weekend nor holiday
weather - 1: Clear, Few clouds, Partly cloudy, Partly cloudy; 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist; 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds; 4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog
temp - temperature in Celsius
atemp - "feels like" temperature in Celsius
humidity - relative humidity
windspeed - wind speed
casual - number of non-registered user rentals initiated
registered - number of registered user rentals initiated
count - number of total rentals

**Workflow**
1. Split: train, validation, test
2. EDA (exploratory data analysis): look at the features and their distributions, clean the data, fill missing values, look at correlations between features and the output and among the features themselves, etc.
3. Fit a very straightforward simple model as a baseline (e.g. a dummy regressor)
4. Use the train and validation data to iteratively improve/find the best model (feature engineering, hyperparameter tuning, ...)
5. Apply the best model to the test data to estimate how it will perform on new data (the test score should not vary too much from the best validation score obtained in step 4)
###Code
df.info()
#all non-null
df.head(10)
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
X = df.drop(['count'], axis=1)
y = df['count']
#df.drop(columns='col1') or df.drop('col1', axis=1)
#drop raw. df.drop(index=['row1', 'row2']), df.drop(['row1', 'row2'], axis=0) or df.drop(['row1', 'row2'])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
train_data = [X_train, y_train]
df = pd.concat(train_data, axis=1)
df
###Output
_____no_output_____
###Markdown
EDA
###Code
#datetime's type is object. We should convert it to datetime.
df.datetime = pd.to_datetime(df.datetime)
#create new columns from the 'datetime' column
df['year'] = df['datetime'].dt.year
df['month'] = df['datetime'].dt.month
df['day'] = df['datetime'].dt.day
df['hour'] = df['datetime'].dt.hour
df['weekday'] = df['datetime'].dt.day_name()
#dt.day_name, dt.day_name() difference
#pd.concat([df[['day']],pd.get_dummies(df[['year','month','weekday','hour']],columns=['year','month','weekday','hour'])],axis=1)
df['weekday']
#Change the category name for visualization
df["season"] = df.season.map({1: "Spring", 2 : "Summer", 3 : "Fall", 4 :"Winter" })
df["weather"] = df.weather.map({1: " Clear + Few clouds + Partly cloudy + Partly cloudy",\
2 : " Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist ", \
3 : " Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds", \
4 :" Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog " })
#df.season.map
df
dataTypeDf = pd.DataFrame(df.dtypes.value_counts()).reset_index().rename(columns={"index":"variableType",0:"count"})
fig,ax = plt.subplots()
fig.set_size_inches(12,5)
sns.barplot(data=dataTypeDf,x="variableType",y="count",ax=ax)
ax.set(xlabel='Variable Type', ylabel='Count',title="Variables DataType Count")
###Output
_____no_output_____
###Markdown
Outliers Analysis
###Code
fig, axes = plt.subplots(nrows=2,ncols=2)
fig.set_size_inches(20, 10)
sns.boxplot(data=df,y="count",orient="v",ax=axes[0][0])
sns.boxplot(data=df,y="count",x="season",orient="v",ax=axes[0][1])
sns.boxplot(data=df,y="count",x="hour",orient="v",ax=axes[1][0])
sns.boxplot(data=df,y="count",x="workingday",orient="v",ax=axes[1][1])
axes[0][0].set(ylabel='Count',title="Box Plot On Count")
axes[0][1].set(xlabel='Season', ylabel='Count',title="Box Plot On Count Across Season")
axes[1][0].set(xlabel='Hour Of The Day', ylabel='Count',title="Box Plot On Count Across Hour Of The Day")
axes[1][1].set(xlabel='Working Day', ylabel='Count',title="Box Plot On Count Across Working Day")
#remove outliers
dfWithoutOutliers = df[np.abs(df['count']-df['count'].mean())<=(2*df['count'].std())]
#68% of the data falls within one standard deviation of the mean.
#95% of the data falls within two standard deviations of the mean.
#99.7% of the data falls within three standard deviations of the mean.
#understand why np, np.abs later
df.shape, dfWithoutOutliers.shape
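# Quick comparison of how many rows each cutoff keeps (motivates the choice of 2 std below)
for k in (2, 3):
    kept = df[np.abs(df['count'] - df['count'].mean()) <= (k * df['count'].std())].shape[0]
    print(f"{k} std: kept {kept} of {df.shape[0]} rows")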
fig, axes = plt.subplots(nrows=2,ncols=2)
fig.set_size_inches(20, 10)
sns.boxplot(data=dfWithoutOutliers,y="count",orient="v",ax=axes[0][0])
sns.boxplot(data=dfWithoutOutliers,y="count",x="season",orient="v",ax=axes[0][1])
sns.boxplot(data=dfWithoutOutliers,y="count",x="hour",orient="v",ax=axes[1][0])
sns.boxplot(data=dfWithoutOutliers,y="count",x="workingday",orient="v",ax=axes[1][1])
axes[0][0].set(ylabel='Count',title="Box Plot On Count")
axes[0][1].set(xlabel='Season', ylabel='Count',title="Box Plot On Count Across Season")
axes[1][0].set(xlabel='Hour Of The Day', ylabel='Count',title="Box Plot On Count Across Hour Of The Day")
axes[1][1].set(xlabel='Working Day', ylabel='Count',title="Box Plot On Count Across Working Day")
###Output
_____no_output_____
###Markdown
I tried cutoffs of two and three standard deviations; the two-standard-deviation cutoff leaves far fewer outliers, so it is used here. Correlation Analysis
###Code
corrMat = dfWithoutOutliers[["temp","atemp","casual","registered","humidity","windspeed","count"]].corr()
mask = np.array(corrMat)
mask[np.tril_indices_from(mask)] = False
fig,ax= plt.subplots()
fig.set_size_inches(20,10)
sns.heatmap(corrMat, mask=mask,vmax=.8, square=True,annot=True)
#just visualize
fig,(ax1,ax2,ax3) = plt.subplots(ncols=3)
fig.set_size_inches(15, 5)
sns.regplot(x="temp", y="count", data=dfWithoutOutliers,ax=ax1)
sns.regplot(x="windspeed", y="count", data=dfWithoutOutliers,ax=ax2)
sns.regplot(x="humidity", y="count", data=dfWithoutOutliers,ax=ax3)
###Output
_____no_output_____
###Markdown
'casual' and 'registered' are not taken into account since they are leakage variables and are obviously highly correlated with 'count'. 'atemp' and 'temp' are strongly correlated with each other, which makes sense since the 'feels like' temperature should track the actual temperature. 'windspeed' is not strongly correlated with 'count'; however, it contains many zero values, and if those were treated as missing values the regression might show a clearer negative correlation. Intuitively, when the wind is strong, fewer people want to ride a bike. Visualizing Distribution Of Data (remove later)
###Code
fig,axes = plt.subplots(ncols=2,nrows=2)
fig.set_size_inches(12, 10)
sns.distplot(df['count'],ax=axes[0][0])
stats.probplot(df['count'], dist='norm', fit=True, plot=axes[0][1])
sns.distplot(np.log(dfWithoutOutliers['count']),ax=axes[1][0])
stats.probplot(np.log1p(dfWithoutOutliers['count']), dist='norm', fit=True, plot=axes[1][1])
dfWithoutOutliers
###Output
_____no_output_____
###Markdown
Count Vs Month, Season, Hour and Weekday
###Code
fig,(ax1,ax2,ax3,ax4)= plt.subplots(nrows=4)
fig.set_size_inches(20,20)
sortOrder = ["January","February","March","April","May","June","July","August","September","October","November","December"]
hueOrder = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
monthAggregated = pd.DataFrame(dfWithoutOutliers.groupby("month")["count"].mean()).reset_index()
monthSorted = monthAggregated.sort_values(by="count",ascending=False)
sns.pointplot(data=monthSorted,x="month",y="count",ax=ax1,order=sortOrder)
ax1.set(xlabel='Month', ylabel='Average Count',title="Average Count By Month")
hourAggregated = pd.DataFrame(dfWithoutOutliers.groupby(["hour","season"],sort=True)["count"].mean()).reset_index()
sns.pointplot(x=hourAggregated["hour"], y=hourAggregated["count"],hue=hourAggregated["season"], data=hourAggregated, join=True,ax=ax2)
ax2.set(xlabel='Hour Of The Day', ylabel='Users Count',title="Average Users Count By Hour Of The Day Across Season",label='big')
hourAggregated = pd.DataFrame(dfWithoutOutliers.groupby(["hour","weekday"],sort=True)["count"].mean()).reset_index()
sns.pointplot(x=hourAggregated["hour"], y=hourAggregated["count"],hue=hourAggregated["weekday"],hue_order=hueOrder, data=hourAggregated, join=True,ax=ax3)
ax3.set(xlabel='Hour Of The Day', ylabel='Users Count',title="Average Users Count By Hour Of The Day Across Weekdays",label='big')
hourTransformed = pd.melt(dfWithoutOutliers[["hour","casual","registered"]], id_vars=['hour'], value_vars=['casual', 'registered'])
hourAggregated = pd.DataFrame(hourTransformed.groupby(["hour","variable"],sort=True)["value"].mean()).reset_index()
sns.pointplot(x=hourAggregated["hour"], y=hourAggregated["value"],hue=hourAggregated["variable"],hue_order=["casual","registered"], data=hourAggregated, join=True,ax=ax4)
ax4.set(xlabel='Hour Of The Day', ylabel='Users Count',title="Average Users Count By Hour Of The Day Across User Type",label='big')
#why month average is not plotted? Likely because 'month' holds integers 1-12 while the
#'order' list passed to pointplot contains month names, so no categories match.
###Output
_____no_output_____
###Markdown
On weekdays more people tend to rent bicycles around 7AM-8AM and 5PM-6PM, which matches office and school hours. On Saturdays and Sundays, rentals instead peak between 10AM and 4PM. Registered users ride during office hours more than casual users do.
Dealing with 0 values of 'windspeed': replace the zero wind speeds with the mean value.
Creating pipeline
###Code
#columns of the previous df were modified during EDA, so re-create it from the raw csv
df = pd.read_csv('train.csv') #parse_dates=True
df.info()
X = df.drop(['count', 'casual', 'registered'], axis=1)
y = df['count']
#added 'casual', 'registered because test.csv doesn't have these columns
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=42)
#df = dfWithoutOutliers
#dfWithoutOutliers = df[np.abs(df['count']-df['count'].mean())<=(2*df['count'].std())]
def extract(df):
df.datetime = pd.to_datetime(df.datetime)
df['day'] = df.datetime.dt.day
df['hour'] = df.datetime.dt.hour
df['weekday'] = df.datetime.dt.weekday
df['year'] = df.datetime.dt.year
df['month'] = df.datetime.dt.month
return pd.concat([df[['day']],pd.get_dummies(df[['year','month','weekday','hour']],columns=['year','month','weekday','hour'])],axis=1)
# imports used by the preprocessing pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import FunctionTransformer, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression

preprocessor = ColumnTransformer([
    ('do_nothing', 'passthrough', ['holiday', 'workingday']),
    ('time_extract', FunctionTransformer(extract), ['datetime']),
    ('one_hot_encoding', OneHotEncoder(sparse = False), ['season','weather']),
    # treat a wind speed of 0 as missing and impute it with the mean (see markdown above)
    ('0_imputer', SimpleImputer(missing_values=0.0, strategy='mean'), ['windspeed']),
    #('bins',KBinsDiscretizer(n_bins= 7, encode = 'onehot-dense', strategy = 'quantile'),['temp','humidity','windspeed']) #R-squared 68 to 70 but RMSLE 9.15 to 9.39...
    ],
    remainder='drop') #dropping 'casual', 'registered', 'count', 'atemp', 'date', 'datetime'
# create the model pipeline
pipeline = make_pipeline(preprocessor, LinearRegression())
pipeline.fit(X_train, y_train)
X_train
X_train.info()
pipeline.score(X_train, y_train)
pipeline.score(X_test, y_test)
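# Baseline sketch (workflow step 3 above): a DummyRegressor that always predicts the mean
# count gives an R^2 floor close to zero to compare the pipeline's scores against.
from sklearn.dummy import DummyRegressor
baseline = DummyRegressor(strategy='mean').fit(X_train, y_train)
baseline.score(X_train, y_train), baseline.score(X_test, y_test)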
#Root Mean Squared Logarithmic Error (RMSLE)
def rmsle(y_pred, y, convertExp=True):
    if convertExp:
        y_pred = np.exp(y_pred)
        y = np.exp(y)
    log1 = np.nan_to_num(np.array([np.log(v + 1) for v in y_pred]))
    log2 = np.nan_to_num(np.array([np.log(v + 1) for v in y]))
    calc = (log1 - log2) ** 2
    return np.sqrt(np.mean(calc))
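# Alternative sketch: compute RMSLE directly on (clipped) predictions, avoiding the
# exp() overflow that convertExp=True can trigger on large raw values.
def rmsle_direct(y_pred, y):
    y_pred = np.clip(y_pred, 0, None)  # clip negatives so log1p stays defined
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y)) ** 2))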
y_pred = pipeline.predict(X_train)
rmsle(y_pred, y_train, convertExp=True)
###Output
/opt/anaconda3/lib/python3.8/site-packages/pandas/core/arraylike.py:364: RuntimeWarning: overflow encountered in exp
result = getattr(ufunc, method)(*inputs, **kwargs)
/var/folders/fs/bmr1lyws1dx3sp07l0gytvmc0000gn/T/ipykernel_81161/3908407482.py:8: RuntimeWarning: overflow encountered in square
calc = (log1 - log2) ** 2
###Markdown
Submission...
###Code
kaggle_data = pd.read_csv('test.csv')
kaggle_data
predictions = pipeline.predict(kaggle_data)
submission = pd.DataFrame({'datetime':kaggle_data['datetime'],'count':predictions})
submission
#why minus...
###Output
_____no_output_____
###Markdown
There are a few ways to handle the negative predictions:
1. Simply replace the negative predicted values with something else (e.g. zero), e.g. y_pred[y_pred < 0] = 0.0
2. Scale/transform the target column (bicycle count): predict the log of count, for example m.fit(Xtrain, np.log(ytrain)). Of course, do not forget to "un-log" (i.e. np.exp()) the predictions afterwards, otherwise the model reports the log of the demand, which is on a different scale. (A sketch of this option follows the submission cell below.)
3. Use a model that doesn't extrapolate into negative values, e.g. the RandomForestRegressor.
###Code
#replace negative count to 0
count_0 = submission['count'].where(submission['count'] >= 0, 0)
submission = pd.DataFrame({'datetime':kaggle_data['datetime'],'count':count_0})
submission
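# Sketch of option 2 above: fit on log1p(count) and invert with np.expm1 at predict time;
# 'log_pipeline' and 'log_predictions' are new names introduced here for illustration.
log_pipeline = make_pipeline(preprocessor, LinearRegression())
log_pipeline.fit(X_train, np.log1p(y_train))
log_predictions = np.expm1(log_pipeline.predict(kaggle_data))
log_predictions.min()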
#Convert DataFrame to a csv file that can be uploaded
filename = 'Bike Sharing Demand LR.csv'
submission.to_csv(filename,index=False)
print('Saved file: ' + filename)
###Output
Saved file: Bike Sharing Demand LR.csv
###Markdown
Memo
###Code
#hour
#day of week
#sin cos, anything has cycle(min, hour)
#how to deal with windspeed
#Outliers Analysis
#Correlation Analysis
#Feature selection(Lasso, RandomForestRegressor) (features are not so many though)
#Visualizing Count Vs (Month,Season,Hour,Weekday,Usertype)
#Future expansion for linear regression
#RandomForestRegressor for the model
#Hyperparameter optimization, cross validation, grid search
#Regularization
###Output
_____no_output_____
###Markdown
Data Cleaning using Python and Pandas Importing Required Packages
###Code
# import required packages
import pandas as pd
import re
import numpy as np
###Output
_____no_output_____
###Markdown
Importing the Dataset
###Code
# importing dataset
df = pd.read_csv('USA_cars_datasets.csv')
df.head()
###Output
_____no_output_____
###Markdown
Getting Data Overview
###Code
# data brief
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2499 entries, 0 to 2498
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 2499 non-null int64
1 price 2499 non-null int64
2 brand 2499 non-null object
3 model 2499 non-null object
4 year 2499 non-null int64
5 title_status 2499 non-null object
6 mileage 2499 non-null float64
7 color 2499 non-null object
8 vin 2499 non-null object
9 lot 2499 non-null int64
10 state 2499 non-null object
11 country 2499 non-null object
12 condition 2499 non-null object
dtypes: float64(1), int64(4), object(8)
memory usage: 253.9+ KB
###Markdown
Data Cleaning tasks
The following data cleaning tasks need to be performed -
1. Remove unnamed:0
2. Get rid of "color:" in color
3. Normalize condition
Dropping unrequired columns
###Code
df = df.drop(columns=['Unnamed: 0'])
###Output
_____no_output_____
###Markdown
Replacing Wrong values
###Code
df['color'] = df['color'].replace('color:', 'no_color')
###Output
_____no_output_____
###Markdown
Normalizing Columns
###Code
# extract number
df['days/hours'] = df['condition'].str.extract(r'(\d+)')
df.head()
# extracting days or hours from the "days"
# duplicating condition column
df['days'] = df['condition']
# remove "left" from "days" column
df['days'] = df['days'].str.replace('left','')
# replace number from "days" column
df['days'] = df['days'].str.replace(r'(\d+)','')
# converting number of days to number of hours
df['hours'] = df.apply(lambda x: int(x['days/hours']) * 24 if x['days'] == ' days ' else x['days/hours'], axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
# Global plot parameters (matplotlib rcParams)
plt.rcParams['axes.titlesize'] = 22
plt.rcParams['axes.labelsize'] = 12
nba = pd.read_csv('nba_cleaned.csv', index_col = 0)
nba.head()
###Output
_____no_output_____
###Markdown
Point Analysis Let's find the top 5 scoring players.
###Code
print(nba.groupby(['Player'])['PTS'].sum().sort_values(ascending = False).head(5))
top_5 = nba.groupby(['Player'])['PTS'].sum().sort_values(ascending = False).head(5).index.tolist()
nba_top_5 = nba[nba['Player'].isin(top_5)]
nba_top_5 = nba_top_5.sort_values('PTS', ascending=False).reset_index(drop=True)
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.histplot(data = nba_top_5, x = 'Player', weights = 'PTS',
multiple = 'stack', palette = 'rocket', shrink = 0.8,
hue = 'Tm')
ax.set_title('Top 5 Scoring NBA Players')
ax.set_ylabel('Points')
ax.get_legend().set_title('Team')
plt.xticks(rotation=30, ha='right')
sns.despine()
###Output
_____no_output_____
###Markdown
James Harden's statistics seem strange; this needs to be investigated further.
###Code
nba_top_5[nba_top_5['Player'] == 'James Harden']
###Output
_____no_output_____
###Markdown
After some research, it seems that the team name ___TOT___ refers to a player's season totals, aggregating the statistics from each team the player appeared for. With this in mind, we only need to consider James Harden's ___TOT___ entry, as this aggregates ___HOU___ and ___BRK___.
###Code
print(nba.groupby(['Player', 'Tm'])['PTS'].sum().sort_values(ascending = False).head(5))
top_5 = nba.groupby(['Player', 'Tm'])['PTS'].sum().sort_values(ascending = False).head(5).index.tolist()
nba_top_5 = nba[nba[('Player')].isin([player[0] for player in top_5])]
nba_top_5 = nba_top_5.sort_values('PTS', ascending=False).reset_index(drop=True)
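# Sketch of an alternative dedup: keep only the 'TOT' row for traded players and the single
# team row for everyone else, so season totals are not double counted ('nba_dedup' is a new name).
traded = nba.loc[nba['Tm'] == 'TOT', 'Player'].unique()
nba_dedup = nba[(nba['Tm'] == 'TOT') | (~nba['Player'].isin(traded))]
nba_dedup.shape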
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.histplot(data=nba_top_5, x='Player', weights='PTS',
multiple='stack', palette='rocket', shrink=0.8,
hue='Tm')
ax.set_title('Top 5 Scoring NBA Players')
ax.set_ylabel('Points')
plt.xticks(rotation=30, ha='right')
ax.get_legend().set_bbox_to_anchor((1, 1))
ax.get_legend().set_title('Team')
sns.despine()
# Insert chart labels
groupedvalues = nba_top_5.groupby('Player').sum().reset_index()
for index, row in groupedvalues.iterrows():
ax.text(row.Player, row.PTS/2, round(row.PTS,2), color='black', ha='center', bbox=dict(facecolor='white', alpha=0.2, boxstyle="round,pad=0.5"))
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.scatterplot(data = nba_top_5, x = '3P', y = '2P', size = 'PTS', hue = 'Player', alpha = 0.7, sizes = (800,2000))
ax.set_title('Two and Three Pointers of Top Scoring NBA Players')
# Setting the legend
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels[:6])
ax.set(ylim = (0, 400), xlim = (0,200))
ax.set_xlabel('3 Pointers')
ax.set_ylabel('2 Pointers')
sns.despine()
# Setting text labels
for index, row in groupedvalues.iterrows():
ax.text(row['3P'], row['2P'], round(row['PTS'],2), color='black', ha='center')
nba_top_5_points = pd.melt(nba_top_5, id_vars=['Player'], value_vars=['FT', '2P', '3P'])
nba_top_5_points
def map_function(x, y):
if x == '2P':
return y * 2
elif x == '3P':
return y * 3
else:
return y
nba_top_5_points['points'] = nba_top_5_points.apply(lambda x: map_function(x['variable'], x['value']), axis=1)
nba_top_5_points
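# Equivalent vectorized sketch of map_function: look up the shot value per type and multiply
# ('shot_values' and 'points_vectorized' are new names introduced here).
shot_values = {'FT': 1, '2P': 2, '3P': 3}
nba_top_5_points['points_vectorized'] = nba_top_5_points['variable'].map(shot_values) * nba_top_5_points['value']
nba_top_5_points.head()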
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.histplot(data = nba_top_5_points, x = 'Player', weights = 'points',
multiple = 'stack', palette = 'rocket', shrink = 0.8,
hue = 'variable')
ax.set_title('Top 5 Points Breakdown')
ax.set_ylabel('Points')
plt.legend(['Free Throws', 'Two Pointers', 'Three Pointers'])
ax.get_legend().set_title('Shot Types')
ax.get_legend().set_bbox_to_anchor((1, 0.9))
plt.xticks(rotation=30, ha='right')
sns.despine()
###Output
_____no_output_____
###Markdown
Interestingly, the makeup of the players' total scores with respect to two and three pointers was quite different for each of the top 5 players.
###Code
nba_three_pointers = nba[['PTS', '3P', '3PA', '3P%']]
fig = sns.pairplot(nba_three_pointers, palette = 'rocket', height = 4)
fig.fig.subplots_adjust(top = 0.9)
fig.fig.suptitle('Scatter Matrix of NBA 3 Pointers')
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.scatterplot(data=nba_three_pointers, x = 'PTS', y = '3P%')
axx2 = sns.regplot(data=nba_three_pointers, x = 'PTS', y = '3P%')
ax.set_title('Three Point Accuracy vs. Points Scored')
ax.set_xlabel('Points')
ax.set_ylabel('Accuracy')
sns.despine()
###Output
_____no_output_____
###Markdown
We can see that as a player shoots more accurately, they tend to score more points. Let's check why there seems to be a high number of players with a low or zero three-pointer percentage. Nothing seems awry; it might be related to their playing position.
###Code
nba[(nba['3P%'] == 0) & (nba['PTS'] > 300)]
###Output
_____no_output_____
###Markdown
It seems that most players have a three-point accuracy between 30% and 40%, with most of the top players achieving slightly above 40%. Most of the players with low three-point accuracy play Centre. Position Analysis
###Code
nba_positions = nba[['Player', 'PTS', 'Pos', 'Age', 'G', 'Tm']]
nba_position_average = nba_positions.groupby('Pos').mean()
nba_position_average = nba_position_average.reset_index(drop=False).sort_values('PTS', ascending=False)
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.barplot(data=nba_position_average, x='Pos', y='PTS', palette='rocket')
ax.set_title('Average Season Points by Position')
ax.set_xlabel('Position')
ax.set_ylabel('Points')
sns.despine()
###Output
_____no_output_____
###Markdown
Clearly, the point guard / shooting guard position has the highest average points, or there could be a small number of PG-SG players who scored well, skewing the results.
###Code
nba_position_count = nba_positions.groupby('Pos').count().reset_index()[['Pos', 'Player']]
nba_position_count = nba_position_count.sort_values('Player', ascending = False)
nba_position_count
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.barplot(data = nba_position_count, x = 'Pos', y = 'Player', palette = 'rocket')
ax.set_title('Number of Players by Position')
ax.set_xlabel('Position')
sns.despine()
###Output
_____no_output_____
###Markdown
This shows that PG-SG and SF-PF are skewing the data as there is only one player of each who played that position during this season. We can remove those two players in order to get a better idea of the shape of the top positions by points.
###Code
nba_position_average = nba_position_average[~nba_position_average['Pos'].isin(['SF-PF', 'PG-SG'])]
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.barplot(data=nba_position_average, x='Pos', y='PTS', palette='rocket')
ax.set_title('Average Season Points by Position')
ax.set_xlabel('Position')
ax.set_ylabel('Points')
sns.despine()
###Output
_____no_output_____
###Markdown
We can see that Point Guards scored the most points on average for this season, followed by Shooting Guards and Small Forwards. Power Forwards and Centres score a similar average of points, and were the lowest scoring position this season. Team Analysis
###Code
nba_team_sum = nba.groupby('Tm').sum()
nba_team_sum = nba_team_sum.reset_index(drop=False).sort_values('PTS', ascending=False)
nba_team_sum = nba_team_sum[~nba_team_sum['Tm'].isin(['TOT'])]
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.barplot(data = nba_team_sum, x = 'PTS', y = 'Tm', palette = 'rocket')
plt.yticks(fontsize = 9)
ax.set_title('Total Points by Team')
ax.set_xlabel('Points')
ax.set_ylabel('Team')
sns.despine()
###Output
_____no_output_____
###Markdown
Game Analysis
###Code
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.histplot(nba['G'], palette='rocket', bins = 40)
ax.set_title('Games Played')
ax.set_xlabel('Games')
sns.despine()
nba['G_bins'] = pd.cut(nba['G'], bins=[1,10,20,30,38], labels=['1-10', '11-20', '21-30', '31-38'])
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.stripplot(data=nba, x='Age', y='PTS', hue='G_bins', palette='rocket', jitter=0.3)
ax.set_title('Age vs. Points and Games Played')
ax.set_ylabel('Points')
plt.legend(title='Games Played')
sns.despine()
###Output
_____no_output_____
###Markdown
Part 1 | Data Cleaning
The features can be split into three types of data, namely: 1. Numerical, 2. Binary and 3. Categorical.
The index number is a unique key ID with not much relevance (recommended to remove).
Target Variable: final_test
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import itertools
from scipy.stats import norm
from scipy.special import boxcox1p
from scipy.stats import boxcox_normmax
from sklearn.preprocessing import StandardScaler
from scipy import stats
from scipy.stats import norm,skew
from matplotlib.pyplot import figure
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, log_loss
df.dtypes #check what type of features we are dealing with
percent_missing = df.isnull().sum() * 100 / len(df)
missing_value_df = pd.DataFrame({'column_name': df.columns,
'percent_missing': percent_missing})
missing_value_df.sort_values('percent_missing', inplace =True)
missing_value_df
###Output
_____no_output_____
###Markdown
We can see that both attendance_rate and final_test have missing values. To determine whether a mean or median fill should be used, histogram plots will show whether each feature is skewed or contains outliers.
###Code
fig =plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
sns.distplot(df['final_test'], fit=norm);
plt.legend(['Normal dist'],loc='best')
plt.ylabel('final_test Frequency')
plt.title('final_test Distribution')
plt.subplot(1,2,1)
fig =plt.figure(figsize=(15,5))
plt.subplot(1,2,2)
sns.distplot(df['attendance_rate'], fit=norm);
plt.legend(['Normal dist'],loc='best')
plt.ylabel('attendace_rate Frequency')
plt.title('attendance_rate Distribution')
###Output
<ipython-input-7-0667065ac35b>:7: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
plt.subplot(1,2,1)
###Markdown
There is no need to do any log transformation on the target feature as it is already normally distributed. Conclusion: use mean fill for final_test as the scores are normally distributed, and use median fill for attendance_rate as it is negatively skewed.
###Code
df['final_test']=df['final_test'].fillna(df['final_test'].mean())
df['attendance_rate']=df['attendance_rate'].fillna(df['attendance_rate'].median())
#check if any null left
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Check for duplicates. It is unlikely that there will be duplicates across every feature; should there be one, it will likely be an anomaly.
###Code
dupdf= df[df.duplicated()]
print(dupdf)
###Output
Empty DataFrame
Columns: [index, number_of_siblings, direct_admission, CCA, learning_style, student_id, gender, tuition, final_test, n_male, n_female, age, hours_per_week, attendance_rate, sleep_time, wake_time, mode_of_transport, bag_color]
Index: []
###Markdown
Part 2 | Feature visualisation + Engineering
Starting off with a heatmap of the numerical features, we will analyse each feature individually to see if there are any insights to be derived. In addition, new features will be engineered to improve the interpretability of the dataset and, optimistically, also the model performance. Feature creation: sleep time, total hours slept. Since sleep_time and wake_time are both objects, they have to be converted to datetimes and then to integers. Studies have shown that students who sleep longer tend to do better at school, so a new feature for total sleeping hours (total_sleep_hr) will be created.
###Code
df
from datetime import datetime, timedelta
df['col'] = pd.to_datetime(df['sleep_time'])
df['col2'] = pd.to_datetime(df['wake_time'])
df['datetime_sleep'] = df['col'].dt.strftime('%H:%M')
df['datetime_wake'] = df['col2'].dt.strftime('%H:%M')
#(df.fr-df.to).astype('timedelta64[h]')
df['col_hr'] = df.col.dt.hour
df['col_min'] = df.col.dt.minute
df['col2_hr'] = df.col2.dt.hour
df['col2_min'] = df.col2.dt.minute
#add 24 to wake hours
df['col2_hr']=df['col2_hr']+24
df['total_sleep_hr']= df["col2_hr"] - df["col_hr"]
# mins
df['total_sleep_mins']=df["col2_min"] - df["col_min"]
# correct the wrap-around: 24 was added to every wake hour above, so subtract it back
# for rows where the raw difference exceeds 20 hours
df['total_sleep_hr'] = np.where(df['total_sleep_hr'] > 20,
                                df['total_sleep_hr'] - 24,
                                df['total_sleep_hr'])
sns.distplot(df['total_sleep_hr']) # ensure that there are no abnomallies in total sleeping hours
df=df.drop(['col', 'col2','datetime_sleep','datetime_wake','col_min','col_hr','col2_hr','col2_min'], axis=1)
df
#df['sleep_time'] = df['sleep_time'].str.replace(r':', '') #remove all colon
#df['wake_time'] = df['wake_time'].str.replace(r':', '') #remove all colon
#df['sleep_time']=df['sleep_time'].astype(int) #convert to int
#df['wake_time']=df['wake_time'].astype(int) #convert to int
###Output
_____no_output_____
###Markdown
Feature visualisation: Numerical Features
###Code
numerical=['number_of_siblings','final_test','n_male','n_female','age','hours_per_week','attendance_rate','total_sleep_hr']
num_df=df[numerical]
plt.figure(figsize=(15,12))
sns.heatmap(num_df.corr(), cmap="coolwarm", annot=True)
###Output
_____no_output_____
###Markdown
Typically, if features are highly correlated with each other, it is wise to address the multicollinearity. In this case there is no need, as no features are strongly correlated.
###Code
#vars = df.columns
vars = df[numerical].columns
figures_per_time = 4
count = 0
for var in vars:
x = df[var]
# print(y.shape,x.shape)
plt.figure(count//figures_per_time,figsize=(25,5))
plt.subplot(1,figures_per_time,np.mod(count,4)+1)
sns.distplot(x);
plt.title('f model: T= {}'.format(var))
count+=1
###Output
C:\Users\tan_k\anaconda3\lib\site-packages\seaborn\distributions.py:369: UserWarning: Default bandwidth for data is 0; skipping density estimation.
warnings.warn(msg, UserWarning)
###Markdown
Age
There seem to be some anomalies present in age, with values such as 5, 6 and -4. Removing them makes the data frame cleaner.
###Code
df['age']=df['age'].astype(int)
df = df[df.age > 14]
###Output
<ipython-input-28-38ac8b17f933>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['age']=df['age'].astype(int)
###Markdown
Feature elimination: Index + Student ID
The index and student ID are unique key identifiers that should not provide much variance with respect to our target feature.
###Code
df = df.drop(['index', 'student_id'], axis=1)
###Output
_____no_output_____
###Markdown
Feature analysis: Categorical Visualisation Mode of Transport
###Code
ranks = df.groupby("mode_of_transport")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "mode_of_transport"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("mode_of_transport")["final_test"].median())
print(df.groupby("mode_of_transport")["final_test"].var())
###Output
_____no_output_____
###Markdown
Gender
###Code
ranks = df.groupby("gender")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "gender"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("gender")["final_test"].median())
print(df.groupby("gender")["final_test"].var())
###Output
_____no_output_____
###Markdown
CCA
The CCA feature has some issues with encoding.
###Code
df["CCA"].replace({"CLUBS": "Clubs", "SPORTS": "Sports", "ARTS":"Arts","NONE":"None"}, inplace=True)
ranks = df.groupby("CCA")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "CCA"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("CCA")["final_test"].median())
print(df.groupby("CCA")["final_test"].var())
###Output
_____no_output_____
###Markdown
The CCA feature is useful in providing information. While it would be reasonable to collapse it into 'has CCA' vs. 'no CCA', I opted not to do so since the individual categories carry useful context. Direct Admission
###Code
ranks = df.groupby("direct_admission")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "direct_admission"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("direct_admission")["final_test"].median())
print(df.groupby("direct_admission")["final_test"].var())
###Output
_____no_output_____
###Markdown
The direct admission feature is useful as it provides some variance in the final_test results. Tuition
There were some issues with the encoding, so some data cleaning is required: group 'No' and 'N' together and 'Yes' and 'Y' together.
###Code
df["tuition"].replace({"No": "N", "Yes": "Y"}, inplace=True)
ranks = df.groupby("tuition")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "tuition"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("tuition")["final_test"].median())
print(df.groupby("tuition")["final_test"].var())
###Output
_____no_output_____
###Markdown
Sleep Time
###Code
ranks = df.groupby("sleep_time")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "sleep_time"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("sleep_time")["final_test"].median())
print(df.groupby("sleep_time")["final_test"].var())
###Output
_____no_output_____
###Markdown
As there is a distinct difference in median test scores between the pre-midnight and post-midnight categories, I will group the sleep times into before midnight and after midnight.
###Code
df["sleep_time"].replace({"1:00": "Past_midnight", "3:00": "Past_midnight", "1:30":"Past_midnight","2:00":"Past_midnight","2:30":"Past_midnight","0:30":"Past_midnight","23:30": "Pre_midnight", "21:30": "Pre_midnight", "22:00":"Pre_midnight","21:00":"Pre_midnight","23:00":"Pre_midnight","0:00":"Pre_midnight","22:30":"Pre_midnight"}, inplace=True)
ranks = df.groupby("sleep_time")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "sleep_time"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("sleep_time")["final_test"].median())
print(df.groupby("sleep_time")["final_test"].var())
###Output
_____no_output_____
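###Markdown
For reference, the same before/after-midnight rule could also be expressed programmatically rather than through the literal mapping above. This is only an illustrative sketch, assuming the original sleep_time values are "H:MM" strings (the column has already been recoded at this point, so the last line is left commented out).
###Code
# hypothetical helper: times strictly after 00:00 and before 06:00 count as past midnight
def classify_sleep_time(t):
    hour, minute = (int(x) for x in str(t).split(":"))
    return "Past_midnight" if (hour, minute) > (0, 0) and hour < 6 else "Pre_midnight"
# df["sleep_time"] = df["sleep_time"].apply(classify_sleep_time)  # equivalent recoding
###Output
_____no_output_____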
###Markdown
We can see that this newly created feature shows a sharp distinction between the two categories. Wake Time
###Code
ranks = df.groupby("wake_time")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "wake_time"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("wake_time")["final_test"].median())
print(df.groupby("wake_time")["final_test"].var())
###Output
_____no_output_____
###Markdown
The different waking times do not contribute any large variance in the final score. Furthermore, we have already created a new feature utilizing waking time. Therefore, wake_time can be removed, as sketched below.
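A minimal sketch of that clean-up step (using only the df and wake_time names already defined above):
###Code
# drop wake_time: its information is already captured by the engineered sleep/wake feature
# and it adds little variance to final_test
df.drop(columns=["wake_time"], inplace=True)
###Output
_____no_output_____
###Markdown
Bag Color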
###Code
ranks = df.groupby("bag_color")["final_test"].median().sort_values(ascending=False)[::-1].index
var_name = "bag_color"
col_order = np.sort(df[var_name].unique()).tolist()
plt.figure(figsize=(12,6))
sns.boxplot(x=var_name, y='final_test', data=df, order=ranks)
plt.xlabel(var_name, fontsize=12)
plt.ylabel('final_test', fontsize=12)
plt.title("Distribution of final test variable with "+var_name, fontsize=15)
plt.show()
print(df.groupby("bag_color")["final_test"].median())
print(df.groupby("bag_color")["final_test"].var())
###Output
_____no_output_____ |
notebooks/usage-LinearForest.ipynb | ###Markdown
REGRESSION
###Code
# Imports assumed for the examples below: LinearForestRegressor / LinearForestClassifier
# come from the `lineartree` package; the data generators and Ridge come from scikit-learn.
import numpy as np
from sklearn.datasets import make_regression, make_classification
from sklearn.linear_model import Ridge
from lineartree import LinearForestRegressor, LinearForestClassifier

n_sample, n_features = 8000, 15
X, y = make_regression(n_samples=n_sample, n_features=n_features, n_targets=1,
n_informative=5, shuffle=True, random_state=33)
X.shape, y.shape
regr = LinearForestRegressor(Ridge())
regr.fit(X, y)
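# apply() returns, for each sample, the index of the leaf it falls in for every tree;
# decision_path() mirrors sklearn's forest API and returns a (node-indicator matrix,
# n_nodes_ptr) pair, hence the [-1] below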
regr.predict(X).shape, regr.apply(X).shape, regr.decision_path(X)[-1].shape, regr.score(X,y)
###Output
_____no_output_____
###Markdown
multi-target regression with weights
###Code
n_sample, n_features = 8000, 15
X, y = make_regression(n_samples=n_sample, n_features=n_features, n_targets=2,
n_informative=5, shuffle=True, random_state=33)
W = np.random.uniform(1,3, (n_sample,))
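# W: per-sample weights drawn uniformly from [1, 3); passed positionally to fit() below,
# presumably interpreted as sample_weight following the sklearn convention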
X.shape, y.shape
regr = LinearForestRegressor(Ridge())
regr.fit(X, y, W)
regr.predict(X).shape, regr.apply(X).shape, regr.decision_path(X)[-1].shape, regr.score(X,y)
###Output
_____no_output_____
###Markdown
BINARY CLASSIFICATION
###Code
n_sample, n_features = 8000, 15
X, y = make_classification(n_samples=n_sample, n_features=n_features, n_classes=2,
n_redundant=4, n_informative=5,
n_clusters_per_class=1,
shuffle=True, random_state=33)
X.shape, y.shape
###Output
_____no_output_____
###Markdown
default configuration
###Code
clf = LinearForestClassifier(Ridge())
clf.fit(X, y)
clf.predict(X).shape, clf.predict_proba(X).shape, clf.apply(X).shape, clf.decision_path(X)[-1].shape, clf.score(X,y)
###Output
_____no_output_____
###Markdown
MULTI-CLASS CLASSIFICATION
###Code
n_sample, n_features = 8000, 15
X, y = make_classification(n_samples=n_sample, n_features=n_features, n_classes=3,
n_redundant=4, n_informative=5,
n_clusters_per_class=1,
shuffle=True, random_state=33)
X.shape, y.shape
###Output
_____no_output_____
###Markdown
default configuration
###Code
from sklearn.multiclass import OneVsRestClassifier
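# LinearForestClassifier targets binary problems, so the 3-class task is (presumably)
# decomposed one-vs-rest via the sklearn wrapper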
clf = OneVsRestClassifier(LinearForestClassifier(Ridge()))
clf.fit(X, y)
clf.predict(X).shape, clf.predict_proba(X).shape, clf.score(X,y)
###Output
_____no_output_____ |
p2_continuous-control/Crawler.ipynb | ###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
###Output
_____no_output_____
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
_____no_output_____
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='Crawler.app')
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: CrawlerBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 129
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 20
Vector Action descriptions: , , , , , , , , , , , , , , , , , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 12
Size of each action: 20
There are 12 agents. Each observes a state with length: 129
The state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00
1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01
-1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01
-1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01
1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01
1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00]
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 0.18618646803467223
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
from tqdm.notebook import tqdm, trange
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='Crawler.app')
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: CrawlerBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 129
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 20
Vector Action descriptions: , , , , , , , , , , , , , , , , , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 12
Size of each action: 20
There are 12 agents. Each observes a state with length: 129
The state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00
1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01
-1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01
-1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01
1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01
1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00]
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
if False:
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
        env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
#env.close()
###Output
_____no_output_____
###Markdown
4. It's Your Turn!Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:```pythonenv_info = env.reset(train_mode=True)[brain_name]```
###Code
import torch
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("using device: ",device)
import random
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
from ddpg_agent import Agent
agent = Agent(state_size=state_size, action_size=action_size, random_seed=10)
###Output
_____no_output_____
###Markdown
Train the Agent with DDPG
###Code
def ddpg(n_episodes=300, max_t=1000, print_every=10):
scores_deque = deque(maxlen=100)
scores_total = []
for i_episode in trange(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
agent.reset()
for t in range(max_t):
actions = agent.act(states) # select an action (for each agent)
            env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
agent.step(t, states, actions, rewards, next_states, dones)
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if t % 100 == 0 and False:
print(f'Timestep {t}\tScore: {round(np.mean(scores),2)}\tmin: {round(np.min(scores),2)}\tmax: {round(np.max(scores),2)}')
if np.any(dones): # exit loop if episode finished
break
scores_deque.append(scores)
scores_total.append(scores)
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="")
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor_crawler.pth')
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic_crawler.pth')
if i_episode % print_every == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
        # Consider the environment solved once the average of the last 100 episode scores is at least 40
if np.mean(scores_deque) >= 40.0 and i_episode > 100:
print('Environment solved in {:d} episodes! Average score of {:.2f}'.format(i_episode, np.mean(scores_deque)))
break
return scores_total
scores = ddpg()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.savefig('DDQN.png')  # save before show(); with the inline backend, saving afterwards writes a blank figure
plt.show()
###Output
_____no_output_____
###Markdown
Watch a smart agent
###Code
if 1:
agent.actor_local.load_state_dict(torch.load('checkpoint_actor_crawler.pth'))
agent.critic_local.load_state_dict(torch.load('checkpoint_critic_crawler.pth'))
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
for t in range(300):
actions = agent.act(states, add_noise=False) # select an action (for each agent)
#print(actions.shape)
        env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
states = next_states # roll over states to next time step
scores += env_info.rewards # update the score (for each agent)
if np.any(dones): # exit loop if episode finished
break
print("Mean score in 300 steps:", scores.mean())
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
# env = UnityEnvironment(file_name='../../crawler/Crawler.app')
env = UnityEnvironment(file_name='./unity/Crawler_Linux_NoVis/Crawler.x86_64')
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: CrawlerBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 129
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 20
Vector Action descriptions: , , , , , , , , , , , , , , , , , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 12
Size of each action: 20
There are 12 agents. Each observes a state with length: 129
The state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00
1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01
-1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01
-1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01
1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01
1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00]
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 0.1988490386866033
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='Crawler.app')
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: CrawlerBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 129
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 20
Vector Action descriptions: , , , , , , , , , , , , , , , , , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 12
Size of each action: 20
There are 12 agents. Each observes a state with length: 129
The state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00
1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01
-1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01
-1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01
1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01
1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00]
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 2.633553469165539
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
4. It's Your Turn!Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:```pythonenv_info = env.reset(train_mode=True)[brain_name]```
###Code
from collections import deque
import matplotlib.pyplot as plt
import torch
%matplotlib inline
unity_env = env
from unity_env_wrapper import EnvMultipleWrapper
env = EnvMultipleWrapper(env=unity_env, train_mode=True)
print(f"env.action_size: {env.action_size}")
print(f"env.state_size: {env.state_size}")
print(f"env.num_agents: {env.num_agents}")
import progressbar as pb
def train(env, agent, episodes=2000, max_t=1000, print_every=50):
widget = ['training loop: ', pb.Percentage(), ' ', pb.Bar(), ' ', pb.ETA()]
timer = pb.ProgressBar(widgets=widget, maxval=episodes).start()
scores = []
scores_deque = deque(maxlen=100)
for i_episode in range(1, episodes+1):
states = env.reset()
agent.reset()
score = np.zeros(env.num_agents)
for t in range(max_t):
actions = agent.act(states)
next_states, rewards, dones = env.step(actions)
agent.step(states, actions, rewards, next_states, dones)
states = next_states
score += rewards
if np.any(dones):
break
scores_deque.append(np.mean(score))
scores.append(np.mean(score))
print(f"\rEpisode {i_episode}/{episodes}\
Average Score: {np.mean(scores_deque):.2f}\
Score: {np.mean(score):.2f}\
Max Score: {np.max(scores_deque):.2f}", end="")
if i_episode % print_every == 0:
timer.update(i_episode)
if (scores_deque[0]>30) and (np.mean(scores_deque) > 30):
print(f"\nEnvironment solved in {i_episode-100} episodes!\t Average Score: {np.mean(scores_deque):.2f}")
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
break
timer.finish()
return scores
import pandas as pd
import seaborn as sns
sns.set(style="whitegrid")
def plot_scores(scores):
episodes = np.arange(start=1, stop=len(scores)+1)
data = pd.DataFrame(data=scores, index=episodes, columns=["Score"])
fig = sns.lineplot(data=data)
fig.set_xlabel("Episode #")
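# Typical usage once training has finished (assuming `scores` is the list returned
# by train() further down in this cell):
# plot_scores(scores)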
from ddpg_agent import Agent
from network import Actor
from network import Critic
from replay_buffer import ReplayBuffer
from noise import OUNoise
buffer_size = int(1e5)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
learning_rate_actor = 1e-4
learning_rate_critic = 1e-3
batch_size = 128
discount = 0.99
seed = 2
action_size = env.action_size
state_size = env.state_size
num_agents = env.num_agents
def create_actor(state_dim, action_dim):
return Actor(
state_dim = state_dim,
action_dim = action_dim,
fc1_units = 400,
fc2_units = 300,
seed = seed)
def create_critic(state_dim, action_dim):
return Critic(
state_dim = state_dim,
action_dim = action_dim,
fc1_units = 400,
fc2_units = 300,
seed = seed)
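# Both networks use 400/300 hidden units, the layer sizes suggested in the original
# DDPG paper (Lillicrap et al., 2015); a fixed seed keeps repeated runs comparable.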
agent = Agent(
create_actor = create_actor,
create_critic = create_critic,
replay_buffer = ReplayBuffer(buffer_size = buffer_size, seed = seed),
noise = OUNoise(size = (num_agents, action_size), seed = seed),
state_dim = state_size,
action_dim = action_size,
seed = seed,
lr_actor = learning_rate_actor,
lr_critic = learning_rate_critic,
batch_size = 128,
discount = 0.99)
scores = train(env=env, agent=agent, episodes=500, print_every=20)
###Output
training loop: 0% | | ETA: --:--:--
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='Crawler_Windows_x86_64/Crawler.app')
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: CrawlerBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 129
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 20
Vector Action descriptions: , , , , , , , , , , , , , , , , , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 12
Size of each action: 20
There are 12 agents. Each observes a state with length: 129
The state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00
1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01
-1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01
-1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01
1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01
1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00]
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 0.46692298958078027
###Markdown
When finished, you can close the environment.
###Code
# env.close()
###Output
_____no_output_____
###Markdown
4. It's Your Turn!Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:```pythonenv_info = env.reset(train_mode=True)[brain_name]```
###Code
import torch
from agent import AgentDDPG, AgentA2C
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
agent= AgentA2C(brain.vector_observation_space_size, brain.vector_action_space_size, num_agents, 42)
from tqdm import tqdm
from collections import deque
import matplotlib.pyplot as plt
from agent import AgentA2C
import torch
env_info = env.reset(train_mode=True)[brain_name]
def compute_returns(next_value, rewards, masks, gamma=0.99):
R = next_value
returns = []
for step in reversed(range(len(rewards))):
R = rewards[step].unsqueeze(1) + gamma * R * masks[step]
returns.insert(0, R)
return returns
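# Illustrative example (not from the original notebook): with next_value = 0,
# rewards = [1, 1, 1] (as tensors) and masks all equal to 1, compute_returns yields
# roughly [2.97, 1.99, 1.00] for gamma = 0.99 -- each entry is the discounted sum of
# the rewards from that step onward, bootstrapped from next_value at the cut-off.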
def a2c(agent=agent, n_episodes=50000, eps_start=1.0, eps_end=0.01, eps_decay=0.995, max_t= 1000):
scores = []
    scores_deque = deque(maxlen=1000) # last 1000 scores
eps = eps_start # initialize epsilon
for i_episode in tqdm(range(1, n_episodes+1)):
log_probs = []
values = []
rewards = []
masks = []
entropy = 0
frame_idx=0
score = np.zeros(num_agents) # initialize the score (for each agent)
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
agent.reset()
for t_step in range(1, max_t):
action, dist, value = agent.act(states) # select an action (for each agent)
            env_info = env.step(action.detach().cpu().numpy())[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
reward = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
log_prob = dist.log_prob(action)
entropy += dist.entropy().mean()
log_probs.append(log_prob)
values.append(value)
rewards.append(torch.FloatTensor(reward).to(device))
            masks.append(1 - torch.FloatTensor(dones).unsqueeze(1).to(device))
states = next_states
score += np.array(reward)
# if i_episode % 100 == 0:
# test_rewards.append(np.mean([test_env() for _ in range(2)]))
# plot(frame_idx, test_rewards)
if np.any(dones):
break
scores_deque.append(np.mean(score))
scores.append(score)
next_states = torch.FloatTensor(next_states).to(device)
_, _, next_value = agent.act(next_states.cpu())
returns = compute_returns(next_value, rewards, masks)
log_probs = torch.cat(log_probs)
returns = torch.cat(returns).detach()
values = torch.cat(values)
advantage = returns - values
actor_loss = -(log_probs * advantage.detach()).mean()
critic_loss = advantage.pow(2).mean()
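        # Combined A2C objective: the policy-gradient (actor) term, the value-regression
        # (critic) term weighted by 0.5, and a small entropy bonus (weight 0.001) that
        # keeps the policy from collapsing to a deterministic one too early.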
loss = actor_loss + 0.5 * critic_loss - 0.001 * entropy
agent.optimizer.zero_grad()
loss.backward()
agent.optimizer.step()
if i_episode%500==0:
print('\rEpisode {}\tAverage Score: {:.2f}\tScore: {:.2f}'.format(i_episode, np.mean(scores_deque), np.mean(score)))
return scores
a2c()
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
action, dist, value = agent.act(states)
actions = np.clip(action.detach().cpu().numpy(), -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='Crawler_Linux_NoVis/Crawler.x86_64')
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: CrawlerBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 129
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 20
Vector Action descriptions: , , , , , , , , , , , , , , , , , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 12
Size of each action: 20
There are 12 agents. Each observes a state with length: 129
The state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00
1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01
-1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01
-1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01
1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01
1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00]
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 0.22467557546527436
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
###Output
_____no_output_____
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
_____no_output_____
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
###Output
_____no_output_____
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
_____no_output_____
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
###Output
_____no_output_____
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesIn this environment, each agent observes a state vector describing the position, rotation, velocity, and angular velocities of its body, and each action is a vector of continuous values, with every entry between `-1` and `1`.Run the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
_____no_output_____
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='Crawler.app')
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: CrawlerBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 129
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 20
Vector Action descriptions: , , , , , , , , , , , , , , , , , , ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
Number of agents: 12
Size of each action: 20
There are 12 agents. Each observes a state with length: 129
The state for the first agent looks like: [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.25000000e+00
1.00000000e+00 0.00000000e+00 1.78813934e-07 0.00000000e+00
1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093168e-01 -1.42857209e-01 -6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339906e+00 -1.42857209e-01
-1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093347e-01 -1.42857209e-01 -6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339953e+00 -1.42857209e-01
-1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
-6.06093168e-01 -1.42857209e-01 6.06078804e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.33339906e+00 -1.42857209e-01
1.33341408e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.06093347e-01 -1.42857209e-01 6.06078625e-01 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 1.33339953e+00 -1.42857209e-01
1.33341372e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00]
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
Total score (averaged over agents) this episode: 0.23180090104384968
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
###Output
_____no_output_____
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
_____no_output_____
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
###Output
_____no_output_____
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
_____no_output_____
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____
###Markdown
Continuous Control---Congratulations for completing the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program! In this notebook, you will learn how to control an agent in a more challenging environment, where the goal is to train a creature with four arms to walk forward. **Note that this exercise is optional!** 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Crawler.app"`- **Windows** (x86): `"path/to/Crawler_Windows_x86/Crawler.exe"`- **Windows** (x86_64): `"path/to/Crawler_Windows_x86_64/Crawler.exe"`- **Linux** (x86): `"path/to/Crawler_Linux/Crawler.x86"`- **Linux** (x86_64): `"path/to/Crawler_Linux/Crawler.x86_64"`- **Linux** (x86, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86"`- **Linux** (x86_64, headless): `"path/to/Crawler_Linux_NoVis/Crawler.x86_64"`For instance, if you are using a Mac, then you downloaded `Crawler.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Crawler.app")```
###Code
env = UnityEnvironment(file_name='../../crawler/Crawler.app')
###Output
_____no_output_____
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesIn this environment, each agent observes a state vector describing the position, rotation, velocity, and angular velocities of its body, and each action is a vector of continuous values, with every entry between `-1` and `1`.Run the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
###Output
_____no_output_____
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.Once this cell is executed, you will watch the agent's performance as it selects an action at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
###Code
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
env.close()
###Output
_____no_output_____ |
Assignments/Assignment_2/Q1/q1_Arch1_Line.ipynb | ###Markdown
Increasing number of filters to 64
###Code
import numpy as np
import keras
from keras.models import Sequential
from matplotlib import pyplot as plt
from keras.layers import Dense,Flatten
from keras.layers import Conv2D, MaxPooling2D,BatchNormalization
from keras.utils import np_utils
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, classification_report
class AccuracyHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.acc = []
self.loss = []
self.val_f1s = []
self.val_recalls = []
self.val_precisions = []
def on_epoch_end(self, batch, logs={}):
self.acc.append(logs.get('acc'))
self.loss.append(logs.get('loss'))
X_val, y_val = self.validation_data[0], self.validation_data[1]
y_predict = np.asarray(model.predict(X_val))
y_val = np.argmax(y_val, axis=1)
y_predict = np.argmax(y_predict, axis=1)
self.val_recalls.append(recall_score(y_val, y_predict, average=None))
self.val_precisions.append(precision_score(y_val, y_predict, average=None))
self.val_f1s.append(f1_score(y_val,y_predict, average=None))
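# The callback above records training accuracy/loss and, at the end of each epoch,
# per-class recall, precision and F1 computed on the validation split that Keras
# exposes through self.validation_data.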
data = np.load('/home/aj/assignments/assign2/outfile.npz')
X_train=data["X_train.npy"]
X_test=data["X_test.npy"]
y_train=data["y_train.npy"]
y_test=data["y_test.npy"]
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape(X_train.shape[0],28, 28,3).astype('float32')
X_test = X_test.reshape(X_test.shape[0],28, 28,3).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
num_classes = y_test.shape[1]
input_shape=(28,28,3)
history = AccuracyHistory()
def create_model(filters,filt1_size,conv_stride,pool_size,pool_stride,opt,loss):
model=Sequential()
model.add(Conv2D(filters, kernel_size=(filt1_size, filt1_size), strides=(conv_stride, conv_stride),activation='relu',input_shape=input_shape))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(pool_size, pool_size), strides=(pool_stride,pool_stride), padding='valid'))
model.add(Flatten())
model.add(Dense(1024,activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer=opt,loss=loss,metrics=['accuracy'])
return model
model = create_model(64,7,1,2,2,'adam','categorical_crossentropy')
print(model.summary())
def fit_model(epochs,batch_size):
model.fit(X_train, y_train,batch_size=batch_size,epochs=epochs,validation_split=0.05,callbacks=[history])
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
y_pred = model.predict_classes(X_test)
cnf_mat = confusion_matrix(np.argmax(y_test,axis=1), y_pred)
return cnf_mat,score,y_pred
epochs=15
batch_size = 512
cnf_mat,score,y_pred = fit_model(epochs,batch_size)
from keras.models import load_model
model.save('inc_filter_model_line.h5')
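# Optional sanity check (a sketch, not part of the original assignment): reload the file
# just saved using the `load_model` import above and confirm the test accuracy matches.
reloaded = load_model('inc_filter_model_line.h5')
print('Reloaded test accuracy:', reloaded.evaluate(X_test, y_test, verbose=0)[1])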
fscore=f1_score(np.argmax(y_test,axis=1), y_pred,average=None)
recall=recall_score(np.argmax(y_test,axis=1), y_pred,average=None)
prec=precision_score(np.argmax(y_test,axis=1), y_pred,average=None)
def plot(r1,r2,data,Info):
plt.plot(range(r1,r2),data)
plt.xlabel('Epochs')
plt.ylabel(Info)
plt.show()
plot(1,epochs+1,history.acc,'Accuracy')
plot(1,epochs+1,history.loss,'Loss')
plt.plot(recall,label='Recall')
plt.plot(prec,label='Precision')
plt.xlabel('Class')
plt.ylabel('F-score vs Recall vs Precision')
plt.plot(fscore,label='F-score')
plt.legend()
avg_fscore=np.mean(fscore)
print(avg_fscore)
avg_precision=np.mean(prec)
print(avg_precision)
avg_recall=np.mean(recall)
print(avg_recall)
cnf_mat = confusion_matrix(np.argmax(y_test,axis=1), y_pred)
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
conf = cnf_mat
fig, ax = plt.subplots(figsize=(30,30))
im = ax.imshow(conf,alpha=0.5)
# plt.show()
# We want to show all ticks...
ax.set_xticks(np.arange(cnf_mat.shape[0]))
ax.set_yticks(np.arange(cnf_mat.shape[1]))
# ... and label them with the respective list entries
ax.set_xticklabels(np.arange(0,96))
ax.set_yticklabels(np.arange(0,96))
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(cnf_mat.shape[0]):
for j in range(cnf_mat.shape[1]):
text = ax.text(j, i, conf[i, j],
ha="center", va="center",color="black",fontsize=10)
ax.set_title("Confusion matrix",fontsize=20)
fig.tight_layout()
# fig.savefig('plot1_cnf.png')
plt.show()
del model
###Output
_____no_output_____ |
jupyter/rank_classification_using_BERT_on_Amazon_Review.ipynb | ###Markdown
Rank Classification using BERT on Amazon Review dataset Introduction In this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBert model to train on the Amazon review dataset. About the dataset and model The [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of all different valid reviews from amazon.com. We will use the "Digital_software" category, which consists of 102k valid reviews. As for the pre-trained model, we use the DistilBERT[[1]](https://arxiv.org/abs/1910.01108) model. It's a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some more classification layers on top to output rankings (1 - 5). Amazon Review example: we will use the review body as our data input and the ranking as the label. Pre-requisites This tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with: 1. How to set up and run [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md) 2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting started Load the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.15.0
%maven ai.djl:basicdataset:0.15.0
%maven org.slf4j:slf4j-simple:1.7.32
%maven ai.djl.mxnet:mxnet-model-zoo:0.15.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.15.0
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare Dataset The first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped into indices based on the vocabulary. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to deal with the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
Vocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once that part is done, we can apply the `BertFeaturizer` to our Dataset. We take the `review_body` column and apply the Featurizer to it. We also pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data if it is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your model We will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embedding We will download our word embedding and load it into memory (this may take a while).
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layers Then let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following: 1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices) 2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start Training Finally, we can start building our training pipeline to train the model. Creating Training and Testing dataset First, we need to create a vocabulary that is used to map a token to an index, such as "hello" to 1121 (1121 is the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset based on the ratio. Note: we set the cut-off length to 64, which means only the first 64 tokens from each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
DefaultVocabulary vocabulary = DefaultVocabulary.builder()
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training config Then, we need to set up our trainer. We add the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Engine.getInstance().getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start training We will start our training process. Training on a GPU takes approximately 10 mins. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private Vocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return Batchifier.STACK; }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
predictor.predict(review)
###Output
_____no_output_____
###Markdown
Rank Classification using BERT on Amazon Review dataset Introduction In this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBert model to train on the Amazon review dataset. About the dataset and model The [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of all different valid reviews from amazon.com. We will use the "Digital_software" category, which consists of 102k valid reviews. As for the pre-trained model, we use the DistilBERT[[1]](https://arxiv.org/abs/1910.01108) model. It's a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some more classification layers on top to output rankings (1 - 5). Amazon Review example: we will use the review body as our data input and the ranking as the label. Pre-requisites This tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with: 1. How to set up and run [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md) 2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting started Load the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.13.0
%maven ai.djl:basicdataset:0.13.0
%maven org.slf4j:slf4j-api:1.7.32
%maven org.slf4j:slf4j-simple:1.7.32
// See https://github.com/deepjavalibrary/djl/blob/master/engines/mxnet/mxnet-engine/README.md
// MXNet
%maven ai.djl.mxnet:mxnet-model-zoo:0.13.0
%maven ai.djl.mxnet:mxnet-native-auto:1.8.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.13.0
// %maven ai.djl.pytorch:pytorch-native-auto:1.9.0
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare Dataset The first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped into indices based on the vocabulary. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to deal with the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
DefaultVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once that part is done, we can apply the `BertFeaturizer` to our Dataset. We take the `review_body` column and apply the Featurizer to it. We also pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data if it is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your model We will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embedding We will download our word embedding and load it into memory (this may take a while).
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layers Then let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following: 1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices) 2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start Training Finally, we can start building our training pipeline to train the model. Creating Training and Testing dataset First, we need to create a vocabulary that is used to map a token to an index, such as "hello" to 1121 (1121 is the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset based on the ratio. Note: we set the cut-off length to 64, which means only the first 64 tokens from each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
DefaultVocabulary vocabulary = DefaultVocabulary.builder()
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training config Then, we need to set up our trainer. We add the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Engine.getInstance().getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start training We will start our training process. Training on a GPU takes approximately 10 mins. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private DefaultVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
###Markdown
Rank Classification using BERT on Amazon Review dataset Introduction In this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBert model to train on the Amazon review dataset. About the dataset and model The [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of all different valid reviews from amazon.com. We will use the "Digital_software" category, which consists of 102k valid reviews. As for the pre-trained model, we use the DistilBERT[[1]](https://arxiv.org/abs/1910.01108) model. It's a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some more classification layers on top to output rankings (1 - 5). Amazon Review example: we will use the review body as our data input and the ranking as the label. Pre-requisites This tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with: 1. How to set up and run [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md) 2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting started Load the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.10.0
%maven ai.djl:basicdataset:0.10.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
// See https://github.com/deepjavalibrary/djl/blob/master/mxnet/mxnet-engine/README.md
// MXNet
%maven ai.djl.mxnet:mxnet-model-zoo:0.10.0
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-backport
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.10.0
// %maven ai.djl.pytorch:pytorch-native-auto:1.7.1
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.engine.Engine;
import ai.djl.basicdataset.tabular.CsvDataset;
import ai.djl.basicdataset.utils.DynamicBuffer;
import ai.djl.inference.Predictor;
import ai.djl.metric.Metrics;
import ai.djl.modality.Classifications;
import ai.djl.modality.nlp.SimpleVocabulary;
import ai.djl.modality.nlp.bert.BertFullTokenizer;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.Activation;
import ai.djl.nn.Block;
import ai.djl.nn.SequentialBlock;
import ai.djl.nn.core.Linear;
import ai.djl.nn.norm.Dropout;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.Batch;
import ai.djl.training.dataset.RandomAccessDataset;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.SaveModelTrainingListener;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;
import ai.djl.training.util.ProgressBar;
import ai.djl.translate.*;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare Dataset The first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped into indices based on the vocabulary. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to deal with the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
SimpleVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once that part is done, we can apply the `BertFeaturizer` to our Dataset. We take the `review_body` column and apply the Featurizer to it. We also pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data if it is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your model We will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embedding We will download our word embedding and load it into memory (this may take a while).
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = ModelZoo.loadModel(criteria);
###Output
_____no_output_____
###Markdown
Create classification layers Then let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following: 1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices) 2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start Training Finally, we can start building our training pipeline to train the model. Creating Training and Testing dataset First, we need to create a vocabulary that is used to map a token to an index, such as "hello" to 1121 (1121 is the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset based on the ratio. Note: we set the cut-off length to 64, which means only the first 64 tokens from each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
SimpleVocabulary vocabulary = SimpleVocabulary.builder()
.optMinFrequency(1)
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training config Then, we need to set up our trainer. We add the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start training We will start our training process. Training on a GPU takes approximately 10 mins. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private SimpleVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
###Markdown
Rank Classification using BERT on Amazon Review dataset Introduction In this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBert model to train on the Amazon review dataset. About the dataset and model The [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of all different valid reviews from amazon.com. We will use the "Digital_software" category, which consists of 102k valid reviews. As for the pre-trained model, we use the DistilBERT[[1]](https://arxiv.org/abs/1910.01108) model. It's a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some more classification layers on top to output rankings (1 - 5). Amazon Review example: we will use the review body as our data input and the ranking as the label. Pre-requisites This tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with: 1. How to set up and run [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md) 2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting started Load the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.14.0
%maven ai.djl:basicdataset:0.14.0
%maven org.slf4j:slf4j-api:1.7.32
%maven org.slf4j:slf4j-simple:1.7.32
%maven ai.djl.mxnet:mxnet-model-zoo:0.14.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.14.0
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare Dataset The first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped into indices based on the vocabulary. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to deal with the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
DefaultVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once that part is done, we can apply the `BertFeaturizer` to our Dataset. We take the `review_body` column and apply the Featurizer to it. We also pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data if it is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your model We will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embedding We will download our word embedding and load it into memory (this may take a while).
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layers Then let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following: 1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices) 2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start Training Finally, we can start building our training pipeline to train the model. Creating Training and Testing dataset First, we need to create a vocabulary that is used to map a token to an index, such as "hello" to 1121 (1121 is the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset based on the ratio. Note: we set the cut-off length to 64, which means only the first 64 tokens from each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
DefaultVocabulary vocabulary = DefaultVocabulary.builder()
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training config Then, we need to set up our trainer. We add the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Engine.getInstance().getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start training We will start our training process. Training on a GPU takes approximately 10 mins. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private DefaultVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
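###Markdown
As an additional, hedged example (not part of the original tutorial), we can score a few more made-up reviews. Each review is predicted individually here because the `StackBatchifier` used in `MyTranslator` can only batch inputs of equal token length.
###Code
// Illustrative only: score a few additional made-up reviews one at a time.
List<String> moreReviews = Arrays.asList(
        "Works exactly as advertised, five stars from me",
        "Installation failed twice and support never replied");
for (String r : moreReviews) {
    Classifications result = predictor.predict(r);
    System.out.println(r + " -> predicted rating " + result.best().getClassName());
}
###Output
_____no_output_____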
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of words. DistilBERT serves as the base layer, and we will add a few classification layers on top to output the rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the ranking as the label. Pre-requisitesThis tutorial assumes the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run the [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default; uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.11.0
%maven ai.djl:basicdataset:0.11.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
// See https://github.com/deepjavalibrary/djl/blob/master/mxnet/mxnet-engine/README.md
// MXNet
%maven ai.djl.mxnet:mxnet-model-zoo:0.11.0
%maven ai.djl.mxnet:mxnet-native-auto:1.8.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.11.0
// %maven ai.djl.pytorch:pytorch-native-auto:1.8.1
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.engine.Engine;
import ai.djl.basicdataset.tabular.CsvDataset;
import ai.djl.basicdataset.utils.DynamicBuffer;
import ai.djl.inference.Predictor;
import ai.djl.metric.Metrics;
import ai.djl.modality.Classifications;
import ai.djl.modality.nlp.SimpleVocabulary;
import ai.djl.modality.nlp.bert.BertFullTokenizer;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.Activation;
import ai.djl.nn.Block;
import ai.djl.nn.SequentialBlock;
import ai.djl.nn.core.Linear;
import ai.djl.nn.norm.Dropout;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.Batch;
import ai.djl.training.dataset.RandomAccessDataset;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.SaveModelTrainingListener;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;
import ai.djl.training.util.ProgressBar;
import ai.djl.translate.*;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped to vocabulary indices. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
SimpleVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
With this part done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data whenever it is shorter than the `maxLength` we defined; `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare it for classification. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with its pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = ModelZoo.loadModel(criteria);
###Output
_____no_output_____
###Markdown
Create classification layersThen let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset by the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
SimpleVocabulary vocabulary = SimpleVocabulary.builder()
.optMinFrequency(1)
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
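###Markdown
As a quick, optional sanity check (not part of the original tutorial), we can tokenize an arbitrary sample sentence and look up a few token indices to confirm that the tokenizer and vocabulary are wired together as expected. The sentence below is made up purely for illustration.
###Code
// Illustrative sanity check: tokenize a made-up sentence and map a few tokens to indices.
String sample = "the quick brown fox jumps over the lazy dog";
List<String> sampleTokens = tokenizer.tokenize(sample);
System.out.println("Tokens: " + sampleTokens);
for (String token : sampleTokens.subList(0, Math.min(4, sampleTokens.size()))) {
    System.out.println(token + " -> " + vocabulary.getIndex(token));
}
###Output
_____no_output_____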
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We add an accuracy evaluator and a loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will now start the training process. Training on a GPU takes approximately 10 minutes; on a CPU, it takes more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
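###Markdown
The following cell is an illustrative sketch (not part of the original tutorial) of how the saved parameters could be restored later. It assumes the same `classifier` block built earlier is still available and that the file prefix matches the one passed to `model.save` above.
###Code
// Illustrative only: restore the trained parameters into a fresh Model instance.
// The block must have the same architecture as the one that was trained.
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier);
restored.load(Paths.get("build/model"), "amazon-review.param");
System.out.println("Parameters restored from build/model");
###Output
_____no_output_____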
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private SimpleVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
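###Markdown
As an additional, hedged example (not part of the original tutorial), we can score a few more made-up reviews. Each review is predicted individually here because the `StackBatchifier` used in `MyTranslator` can only batch inputs of equal token length.
###Code
// Illustrative only: score a few additional made-up reviews one at a time.
List<String> moreReviews = Arrays.asList(
        "Works exactly as advertised, five stars from me",
        "Installation failed twice and support never replied");
for (String r : moreReviews) {
    Classifications result = predictor.predict(r);
    System.out.println(r + " -> predicted rating " + result.best().getClassName());
}
###Output
_____no_output_____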
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of words. DistilBERT serves as the base layer, and we will add a few classification layers on top to output the rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the ranking as the label. Pre-requisitesThis tutorial assumes the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run the [Java Kernel in Jupyter Notebook](https://github.com/awslabs/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/awslabs/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default; uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.10.0
%maven ai.djl:basicdataset:0.10.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md
// MXNet
%maven ai.djl.mxnet:mxnet-model-zoo:0.10.0
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-backport
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.10.0
// %maven ai.djl.pytorch:pytorch-native-auto:1.7.1
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.engine.Engine;
import ai.djl.basicdataset.tabular.CsvDataset;
import ai.djl.basicdataset.utils.DynamicBuffer;
import ai.djl.inference.Predictor;
import ai.djl.metric.Metrics;
import ai.djl.modality.Classifications;
import ai.djl.modality.nlp.SimpleVocabulary;
import ai.djl.modality.nlp.bert.BertFullTokenizer;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.Activation;
import ai.djl.nn.Block;
import ai.djl.nn.SequentialBlock;
import ai.djl.nn.core.Linear;
import ai.djl.nn.norm.Dropout;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.Batch;
import ai.djl.training.dataset.RandomAccessDataset;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.SaveModelTrainingListener;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;
import ai.djl.training.util.ProgressBar;
import ai.djl.translate.*;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped to vocabulary indices. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
SimpleVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
With this part done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data whenever it is shorter than the `maxLength` we defined; `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare it for classification. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with its pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = ModelZoo.loadModel(criteria);
###Output
_____no_output_____
###Markdown
Create classification layersThen let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset by the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
SimpleVocabulary vocabulary = SimpleVocabulary.builder()
.optMinFrequency(1)
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
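###Markdown
As a quick, optional sanity check (not part of the original tutorial), we can tokenize an arbitrary sample sentence and look up a few token indices to confirm that the tokenizer and vocabulary are wired together as expected. The sentence below is made up purely for illustration.
###Code
// Illustrative sanity check: tokenize a made-up sentence and map a few tokens to indices.
String sample = "the quick brown fox jumps over the lazy dog";
List<String> sampleTokens = tokenizer.tokenize(sample);
System.out.println("Tokens: " + sampleTokens);
for (String token : sampleTokens.subList(0, Math.min(4, sampleTokens.size()))) {
    System.out.println(token + " -> " + vocabulary.getIndex(token));
}
###Output
_____no_output_____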
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We add an accuracy evaluator and a loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will now start the training process. Training on a GPU takes approximately 10 minutes; on a CPU, it takes more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
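###Markdown
The following cell is an illustrative sketch (not part of the original tutorial) of how the saved parameters could be restored later. It assumes the same `classifier` block built earlier is still available and that the file prefix matches the one passed to `model.save` above.
###Code
// Illustrative only: restore the trained parameters into a fresh Model instance.
// The block must have the same architecture as the one that was trained.
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier);
restored.load(Paths.get("build/model"), "amazon-review.param");
System.out.println("Parameters restored from build/model");
###Output
_____no_output_____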
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private SimpleVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
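###Markdown
As an additional, hedged example (not part of the original tutorial), we can score a few more made-up reviews. Each review is predicted individually here because the `StackBatchifier` used in `MyTranslator` can only batch inputs of equal token length.
###Code
// Illustrative only: score a few additional made-up reviews one at a time.
List<String> moreReviews = Arrays.asList(
        "Works exactly as advertised, five stars from me",
        "Installation failed twice and support never replied");
for (String r : moreReviews) {
    Classifications result = predictor.predict(r);
    System.out.println(r + " -> predicted rating " + result.best().getClassName());
}
###Output
_____no_output_____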
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of words. DistilBERT serves as the base layer, and we will add a few classification layers on top to output the rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the ranking as the label. Pre-requisitesThis tutorial assumes the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run the [Java Kernel in Jupyter Notebook](https://github.com/awslabs/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/awslabs/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven:
###Code
%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.9.0-SNAPSHOT
%maven ai.djl:basicdataset:0.9.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-model-zoo:0.9.0-SNAPSHOT
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven net.java.dev.jna:jna:5.3.0
// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md
// for more MXNet library selection options
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-backport
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.Application;
import ai.djl.Device;
import ai.djl.MalformedModelException;
import ai.djl.Model;
import ai.djl.basicdataset.CsvDataset;
import ai.djl.basicdataset.utils.DynamicBuffer;
import ai.djl.inference.Predictor;
import ai.djl.metric.Metrics;
import ai.djl.modality.Classifications;
import ai.djl.modality.nlp.SimpleVocabulary;
import ai.djl.modality.nlp.bert.BertFullTokenizer;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.Activation;
import ai.djl.nn.Block;
import ai.djl.nn.SequentialBlock;
import ai.djl.nn.core.Linear;
import ai.djl.nn.norm.Dropout;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.Batch;
import ai.djl.training.dataset.RandomAccessDataset;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.CheckpointsTrainingListener;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;
import ai.djl.training.util.ProgressBar;
import ai.djl.translate.*;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped to vocabulary indices. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
SimpleVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens
List<String> tokens = tokenizer.tokenize(input);
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
With this part done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data whenever it is shorter than the `maxLength` we defined; `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addNumericLabel("star_rating") // set label
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare it for classification. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with its pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls("https://resources.djl.ai/test-models/distilbert.zip")
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = ModelZoo.loadModel(criteria);
###Output
_____no_output_____
###Markdown
Create classification layersThen let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
try {
return embedder.predict(
new NDList(data, data.getManager()
.full(new Shape(batchSize), maxLength)));
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset by the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
SimpleVocabulary vocabulary = SimpleVocabulary.builder()
.optMinFrequency(1)
.addFromTextFile(embedding.getArtifact("vocab.txt").getPath())
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
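###Markdown
As a quick, optional sanity check (not part of the original tutorial), we can tokenize an arbitrary sample sentence and look up a few token indices to confirm that the tokenizer and vocabulary are wired together as expected. The sentence below is made up purely for illustration.
###Code
// Illustrative sanity check: tokenize a made-up sentence and map a few tokens to indices.
String sample = "the quick brown fox jumps over the lazy dog";
List<String> sampleTokens = tokenizer.tokenize(sample);
System.out.println("Tokens: " + sampleTokens);
for (String token : sampleTokens.subList(0, Math.min(4, sampleTokens.size()))) {
    System.out.println(token + " -> " + vocabulary.getIndex(token));
}
###Output
_____no_output_____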
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We add an accuracy evaluator and a loss function. The model training logs will be saved to `build/model`.
###Code
CheckpointsTrainingListener listener = new CheckpointsTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will now start the training process. Training on a GPU takes approximately 10 minutes; on a CPU, it takes more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
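###Markdown
The following cell is an illustrative sketch (not part of the original tutorial) of how the saved parameters could be restored later. It assumes the same `classifier` block built earlier is still available and that the file prefix matches the one passed to `model.save` above.
###Code
// Illustrative only: restore the trained parameters into a fresh Model instance.
// The block must have the same architecture as the one that was trained.
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier);
restored.load(Paths.get("build/model"), "amazon-review.param");
System.out.println("Parameters restored from build/model");
###Output
_____no_output_____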
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private SimpleVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
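###Markdown
As an additional, hedged example (not part of the original tutorial), we can score a few more made-up reviews. Each review is predicted individually here because the `StackBatchifier` used in `MyTranslator` can only batch inputs of equal token length.
###Code
// Illustrative only: score a few additional made-up reviews one at a time.
List<String> moreReviews = Arrays.asList(
        "Works exactly as advertised, five stars from me",
        "Installation failed twice and support never replied");
for (String r : moreReviews) {
    Classifications result = predictor.predict(r);
    System.out.println(r + " -> predicted rating " + result.best().getClassName());
}
###Output
_____no_output_____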
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of words. DistilBERT serves as the base layer, and we will add a few classification layers on top to output the rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the ranking as the label. Pre-requisitesThis tutorial assumes the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run the [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default; uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.12.0
%maven ai.djl:basicdataset:0.12.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
// See https://github.com/deepjavalibrary/djl/blob/master/mxnet/mxnet-engine/README.md
// MXNet
%maven ai.djl.mxnet:mxnet-model-zoo:0.12.0
%maven ai.djl.mxnet:mxnet-native-auto:1.8.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.12.0
// %maven ai.djl.pytorch:pytorch-native-auto:1.8.1
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input data must be tokenized and mapped to vocabulary indices. In DJL, we defined an interface called Featurizer; it is designed to let users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle the customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
SimpleVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
With this part done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data whenever it is shorter than the `maxLength` we defined; `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare it for classification. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with its pre-trained weights. Since this model is built without a classification layer, we need to add one to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layersThen let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset by the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
SimpleVocabulary vocabulary = SimpleVocabulary.builder()
.optMinFrequency(1)
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
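###Markdown
As a quick, optional sanity check (not part of the original tutorial), we can tokenize an arbitrary sample sentence and look up a few token indices to confirm that the tokenizer and vocabulary are wired together as expected. The sentence below is made up purely for illustration.
###Code
// Illustrative sanity check: tokenize a made-up sentence and map a few tokens to indices.
String sample = "the quick brown fox jumps over the lazy dog";
List<String> sampleTokens = tokenizer.tokenize(sample);
System.out.println("Tokens: " + sampleTokens);
for (String token : sampleTokens.subList(0, Math.min(4, sampleTokens.size()))) {
    System.out.println(token + " -> " + vocabulary.getIndex(token));
}
###Output
_____no_output_____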
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We add an accuracy evaluator and a loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will now start the training process. Training on a GPU takes approximately 10 minutes; on a CPU, it takes more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
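###Markdown
The following cell is an illustrative sketch (not part of the original tutorial) of how the saved parameters could be restored later. It assumes the same `classifier` block built earlier is still available and that the file prefix matches the one passed to `model.save` above.
###Code
// Illustrative only: restore the trained parameters into a fresh Model instance.
// The block must have the same architecture as the one that was trained.
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier);
restored.load(Paths.get("build/model"), "amazon-review.param");
System.out.println("Parameters restored from build/model");
###Output
_____no_output_____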
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private SimpleVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
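###Markdown
As an additional, hedged example (not part of the original tutorial), we can score a few more made-up reviews. Each review is predicted individually here because the `StackBatchifier` used in `MyTranslator` can only batch inputs of equal token length.
###Code
// Illustrative only: score a few additional made-up reviews one at a time.
List<String> moreReviews = Arrays.asList(
        "Works exactly as advertised, five stars from me",
        "Installation failed twice and support never replied");
for (String r : moreReviews) {
    Classifications result = predictor.predict(r);
    System.out.println(r + " -> predicted rating " + result.best().getClassName());
}
###Output
_____no_output_____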
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of words. DistilBERT serves as the base layer, and we will add a few classification layers on top to output the rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the ranking as the label. Pre-requisitesThis tutorial assumes the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run the [Java Kernel in Jupyter Notebook](https://github.com/awslabs/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/awslabs/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default; uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.10.0-SNAPSHOT
%maven ai.djl:basicdataset:0.10.0-SNAPSHOT
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md
// MXNet
%maven ai.djl.mxnet:mxnet-model-zoo:0.10.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-backport
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.10.0-SNAPSHOT
// %maven ai.djl.pytorch:pytorch-native-auto:1.7.1
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.engine.Engine;
import ai.djl.basicdataset.tabular.CsvDataset;
import ai.djl.basicdataset.utils.DynamicBuffer;
import ai.djl.inference.Predictor;
import ai.djl.metric.Metrics;
import ai.djl.modality.Classifications;
import ai.djl.modality.nlp.SimpleVocabulary;
import ai.djl.modality.nlp.bert.BertFullTokenizer;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDList;
import ai.djl.ndarray.types.DataType;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.Activation;
import ai.djl.nn.Block;
import ai.djl.nn.SequentialBlock;
import ai.djl.nn.core.Linear;
import ai.djl.nn.norm.Dropout;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.Batch;
import ai.djl.training.dataset.RandomAccessDataset;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.SaveModelTrainingListener;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.Loss;
import ai.djl.training.util.ProgressBar;
import ai.djl.translate.*;
import java.io.IOException;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input text must be tokenized and mapped into indices. In DJL, we define an interface called Featurizer, which lets users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
SimpleVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once this part is done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer to it, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data when a sequence is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add a classification layer to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = ModelZoo.loadModel(criteria);
###Output
_____no_output_____
###Markdown
Create classification layersNext, let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we feed this vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we split the dataset into training and validation sets based on the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
SimpleVocabulary vocabulary = SimpleVocabulary.builder()
.optMinFrequency(1)
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We configure the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will start our training process. Training on a GPU takes approximately 10 minutes. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. First, we create a `Translator` for the model to handle preprocessing and post-processing. Similar to what we have done before, we need to tokenize the input sentence and produce the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private SimpleVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
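###Markdown
Reload the saved model (optional)The following cell is a minimal sketch that is not part of the original tutorial: assuming the `classifier` block, `tokenizer`, and `MyTranslator` defined above, it shows how the parameters saved to `build/model` could be loaded back into a fresh `Model` instance for later inference.
###Code
// Sketch only: rebuild the block, then load the weights saved by model.save() above
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier); // the same block structure must be set before loading parameters
restored.load(Paths.get("build/model"), "amazon-review.param"); // directory and prefix assumed to match the save step
Predictor<String, Classifications> restoredPredictor = restored.newPredictor(new MyTranslator(tokenizer));
System.out.println(restoredPredictor.predict(review)); // should reproduce the prediction above
###Output
_____no_output_____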
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_Software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some classification layers on top to output rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the star rating as the label. Pre-requisitesThis tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run a [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.17.0
%maven ai.djl:basicdataset:0.17.0
%maven org.slf4j:slf4j-simple:1.7.32
%maven ai.djl.mxnet:mxnet-model-zoo:0.17.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.17.0
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input text must be tokenized and mapped into indices. In DJL, we define an interface called Featurizer, which lets users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
Vocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once this part is done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer to it, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data when a sequence is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add a classification layer to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layersNext, let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we feed this vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we split the dataset into training and validation sets based on the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
DefaultVocabulary vocabulary = DefaultVocabulary.builder()
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We configure the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Engine.getInstance().getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will start our training process. Training on a GPU takes approximately 10 minutes. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. First, we create a `Translator` for the model to handle preprocessing and post-processing. Similar to what we have done before, we need to tokenize the input sentence and produce the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private Vocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return Batchifier.STACK; }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
predictor.predict(review)
###Output
_____no_output_____
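###Markdown
Reload the saved model (optional)The following cell is a minimal sketch that is not part of the original tutorial: assuming the `classifier` block, `tokenizer`, and `MyTranslator` defined above, it shows how the parameters saved to `build/model` could be loaded back into a fresh `Model` instance for later inference.
###Code
// Sketch only: rebuild the block, then load the weights saved by model.save() above
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier); // the same block structure must be set before loading parameters
restored.load(Paths.get("build/model"), "amazon-review.param"); // directory and prefix assumed to match the save step
Predictor<String, Classifications> restoredPredictor = restored.newPredictor(new MyTranslator(tokenizer));
restoredPredictor.predict(review) // should reproduce the prediction above
###Output
_____no_output_____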
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_Software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some classification layers on top to output rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the star rating as the label. Pre-requisitesThis tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run a [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.16.0
%maven ai.djl:basicdataset:0.16.0
%maven org.slf4j:slf4j-simple:1.7.32
%maven ai.djl.mxnet:mxnet-model-zoo:0.16.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.16.0
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input text must be tokenized and mapped into indices. In DJL, we define an interface called Featurizer, which lets users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
Vocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once this part is done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer to it, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data when a sequence is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add a classification layer to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layersNext, let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we feed this vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we split the dataset into training and validation sets based on the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
DefaultVocabulary vocabulary = DefaultVocabulary.builder()
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We configure the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Engine.getInstance().getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will start our training process. Training on a GPU takes approximately 10 minutes. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. First, we create a `Translator` for the model to handle preprocessing and post-processing. Similar to what we have done before, we need to tokenize the input sentence and produce the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private Vocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return Batchifier.STACK; }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
predictor.predict(review)
###Output
_____no_output_____
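###Markdown
Reload the saved model (optional)The following cell is a minimal sketch that is not part of the original tutorial: assuming the `classifier` block, `tokenizer`, and `MyTranslator` defined above, it shows how the parameters saved to `build/model` could be loaded back into a fresh `Model` instance for later inference.
###Code
// Sketch only: rebuild the block, then load the weights saved by model.save() above
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier); // the same block structure must be set before loading parameters
restored.load(Paths.get("build/model"), "amazon-review.param"); // directory and prefix assumed to match the save step
Predictor<String, Classifications> restoredPredictor = restored.newPredictor(new MyTranslator(tokenizer));
restoredPredictor.predict(review) // should reproduce the prediction above
###Output
_____no_output_____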
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_Software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some classification layers on top to output rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the star rating as the label. Pre-requisitesThis tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run a [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.12.0
%maven ai.djl:basicdataset:0.12.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
// See https://github.com/deepjavalibrary/djl/blob/master/engines/mxnet/mxnet-engine/README.md
// MXNet
%maven ai.djl.mxnet:mxnet-model-zoo:0.12.0
%maven ai.djl.mxnet:mxnet-native-auto:1.8.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.12.0
// %maven ai.djl.pytorch:pytorch-native-auto:1.8.1
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input text must be tokenized and mapped into indices. In DJL, we define an interface called Featurizer, which lets users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
DefaultVocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once this part is done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer to it, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data when a sequence is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare the classification head. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add a classification layer to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while)
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layersNext, let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding at the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate the embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirst, we need to create a vocabulary that maps each token to an index, such as "hello" to 1121 (1121 being the index of "hello" in the dictionary). Then we feed this vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we split the dataset into training and validation sets based on the given ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens of each review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
DefaultVocabulary vocabulary = DefaultVocabulary.builder()
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We configure the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Device.getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will start our training process. Training on a GPU takes approximately 10 minutes. On a CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. First, we create a `Translator` for the model to handle preprocessing and post-processing. Similar to what we have done before, we need to tokenize the input sentence and produce the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private DefaultVocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return new StackBatchifier(); }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
System.out.println(predictor.predict(review));
###Output
_____no_output_____
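###Markdown
Reload the saved model (optional)The following cell is a minimal sketch that is not part of the original tutorial: assuming the `classifier` block, `tokenizer`, and `MyTranslator` defined above, it shows how the parameters saved to `build/model` could be loaded back into a fresh `Model` instance for later inference.
###Code
// Sketch only: rebuild the block, then load the weights saved by model.save() above
Model restored = Model.newInstance("AmazonReviewRatingClassification");
restored.setBlock(classifier); // the same block structure must be set before loading parameters
restored.load(Paths.get("build/model"), "amazon-review.param"); // directory and prefix assumed to match the save step
Predictor<String, Classifications> restoredPredictor = restored.newPredictor(new MyTranslator(tokenizer));
System.out.println(restoredPredictor.predict(review)); // should reproduce the prediction above
###Output
_____no_output_____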
###Markdown
Rank Classification using BERT on Amazon Review dataset IntroductionIn this tutorial, you learn how to train a rank classification model using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning). We will use a pretrained DistilBERT model to train on the Amazon review dataset. About the dataset and modelThe [Amazon Customer Review dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) consists of valid customer reviews from amazon.com. We will use the "Digital_Software" category, which contains 102k valid reviews. As the pre-trained model, we use DistilBERT[[1]](https://arxiv.org/abs/1910.01108), a light-weight BERT model already trained on [Wikipedia text corpora](https://en.wikipedia.org/wiki/List_of_text_corpora), a much larger dataset consisting of millions of texts. DistilBERT serves as the base layer, and we will add some classification layers on top to output rankings (1 - 5).Amazon Review exampleWe will use the review body as our input data and the star rating as the label. Pre-requisitesThis tutorial assumes you have the following knowledge. Follow the READMEs and tutorials if you are not familiar with:1. How to set up and run a [Java Kernel in Jupyter Notebook](https://github.com/deepjavalibrary/djl/blob/master/jupyter/README.md)2. Basic components of Deep Java Library, and how to [train your first model](https://github.com/deepjavalibrary/djl/blob/master/jupyter/tutorial/02_train_your_first_model.ipynb). Getting startedLoad the Deep Java Library and its dependencies from Maven. Here, you can choose between MXNet and PyTorch. MXNet is enabled by default. You can uncomment the PyTorch dependencies and comment out the MXNet ones to switch to PyTorch.
###Code
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.14.0
%maven ai.djl:basicdataset:0.14.0
%maven org.slf4j:slf4j-api:1.7.32
%maven org.slf4j:slf4j-simple:1.7.32
%maven ai.djl.mxnet:mxnet-model-zoo:0.14.0
// PyTorch
// %maven ai.djl.pytorch:pytorch-model-zoo:0.14.0
###Output
_____no_output_____
###Markdown
Now let's import the necessary modules:
###Code
import ai.djl.*;
import ai.djl.basicdataset.tabular.*;
import ai.djl.basicdataset.utils.*;
import ai.djl.engine.*;
import ai.djl.inference.*;
import ai.djl.metric.*;
import ai.djl.modality.*;
import ai.djl.modality.nlp.*;
import ai.djl.modality.nlp.bert.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.nn.norm.*;
import ai.djl.repository.zoo.*;
import ai.djl.training.*;
import ai.djl.training.dataset.*;
import ai.djl.training.evaluator.*;
import ai.djl.training.listener.*;
import ai.djl.training.loss.*;
import ai.djl.training.util.*;
import ai.djl.translate.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
import org.apache.commons.csv.*;
System.out.println("You are using: " + Engine.getInstance().getEngineName() + " Engine");
###Output
_____no_output_____
###Markdown
Prepare DatasetThe first step is to prepare the dataset for training. Since the original data is in TSV format, we can use CsvDataset as the dataset container. We also need to specify how we want to preprocess the raw data. For a BERT model, the input text must be tokenized and mapped into indices. In DJL, we define an interface called Featurizer, which lets users customize the operation applied to each selected row/column of a dataset. In our case, we would like to clean and tokenize our sentences, so let's implement it to handle customer review sentences.
###Code
final class BertFeaturizer implements CsvDataset.Featurizer {
private final BertFullTokenizer tokenizer;
private final int maxLength; // the cut-off length
public BertFeaturizer(BertFullTokenizer tokenizer, int maxLength) {
this.tokenizer = tokenizer;
this.maxLength = maxLength;
}
/** {@inheritDoc} */
@Override
public void featurize(DynamicBuffer buf, String input) {
Vocabulary vocab = tokenizer.getVocabulary();
// convert sentence to tokens (toLowerCase for uncased model)
List<String> tokens = tokenizer.tokenize(input.toLowerCase());
// trim the tokens to maxLength
tokens = tokens.size() > maxLength ? tokens.subList(0, maxLength) : tokens;
// BERT embedding convention "[CLS] Your Sentence [SEP]"
buf.put(vocab.getIndex("[CLS]"));
tokens.forEach(token -> buf.put(vocab.getIndex(token)));
buf.put(vocab.getIndex("[SEP]"));
}
}
###Output
_____no_output_____
###Markdown
Once this part is done, we can apply the `BertFeaturizer` to our dataset. We take the `review_body` column and apply the Featurizer to it, and we pick `star_rating` as our label. Since we use batched input, we need to tell the dataset to pad our data when a sequence is shorter than the `maxLength` we defined. `PaddingStackBatchifier` will do that work for you.
###Code
CsvDataset getDataset(int batchSize, BertFullTokenizer tokenizer, int maxLength, int limit) {
String amazonReview =
"https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz";
float paddingToken = tokenizer.getVocabulary().getIndex("[PAD]");
return CsvDataset.builder()
.optCsvUrl(amazonReview) // load from Url
.setCsvFormat(CSVFormat.TDF.withQuote(null).withHeader()) // Setting TSV loading format
.setSampling(batchSize, true) // make sample size and random access
.optLimit(limit)
.addFeature(
new CsvDataset.Feature(
"review_body", new BertFeaturizer(tokenizer, maxLength)))
.addLabel(
new CsvDataset.Feature(
"star_rating", (buf, data) -> buf.put(Float.parseFloat(data) - 1.0f)))
.optDataBatchifier(
PaddingStackBatchifier.builder()
.optIncludeValidLengths(false)
.addPad(0, 0, (m) -> m.ones(new Shape(1)).mul(paddingToken))
.build()) // define how to pad dataset to a fix length
.build();
}
###Output
_____no_output_____
###Markdown
Construct your modelWe will load our pretrained model and prepare the classification. First construct the `criteria` to specify where to load the embedding (DistilBERT), then call `loadModel` to download that embedding with pre-trained weights. Since this model is built without a classification layer, we need to add a classification layer to the end of the model and train it. After you are done modifying the block, set it back on the model using `setBlock`. Load the word embeddingWe will download our word embedding and load it into memory (this may take a while).
###Code
// MXNet base model
String modelUrls = "https://resources.djl.ai/test-models/distilbert.zip";
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
modelUrls = "https://resources.djl.ai/test-models/traced_distilbert_wikipedia_uncased.zip";
}
Criteria<NDList, NDList> criteria = Criteria.builder()
.optApplication(Application.NLP.WORD_EMBEDDING)
.setTypes(NDList.class, NDList.class)
.optModelUrls(modelUrls)
.optProgress(new ProgressBar())
.build();
ZooModel<NDList, NDList> embedding = criteria.loadModel();
###Output
_____no_output_____
###Markdown
Create classification layersThen let's build a simple MLP to classify the ranks. We set the output of the last FullyConnected (Linear) layer to 5 to get predictions for stars 1 to 5. Then all we need to do is load the block into the model. Before applying the classification layers, we also need to add the text embedding to the front. In our case, we just create a Lambda function that does the following:1. batch_data (batch size, token indices) -> batch_data + max_length (size of the token indices)2. generate embedding
###Code
Predictor<NDList, NDList> embedder = embedding.newPredictor();
Block classifier = new SequentialBlock()
// text embedding layer
.add(
ndList -> {
NDArray data = ndList.singletonOrThrow();
NDList inputs = new NDList();
long batchSize = data.getShape().get(0);
float maxLength = data.getShape().get(1);
if ("PyTorch".equals(Engine.getInstance().getEngineName())) {
inputs.add(data.toType(DataType.INT64, false));
inputs.add(data.getManager().full(data.getShape(), 1, DataType.INT64));
inputs.add(data.getManager().arange(maxLength)
.toType(DataType.INT64, false)
.broadcast(data.getShape()));
} else {
inputs.add(data);
inputs.add(data.getManager().full(new Shape(batchSize), maxLength));
}
// run embedding
try {
return embedder.predict(inputs);
} catch (TranslateException e) {
throw new IllegalArgumentException("embedding error", e);
}
})
// classification layer
.add(Linear.builder().setUnits(768).build()) // pre classifier
.add(Activation::relu)
.add(Dropout.builder().optRate(0.2f).build())
.add(Linear.builder().setUnits(5).build()) // 5 star rating
.addSingleton(nd -> nd.get(":,0")); // Take [CLS] as the head
Model model = Model.newInstance("AmazonReviewRatingClassification");
model.setBlock(classifier);
###Output
_____no_output_____
###Markdown
Start TrainingFinally, we can start building our training pipeline to train the model. Creating Training and Testing datasetFirstly, we need to create a vocabulary that is used to map tokens to indices, such as "hello" to 1121 (1121 is the index of "hello" in the dictionary). Then we simply feed the vocabulary to the tokenizer that is used to tokenize the sentences. Finally, we just need to split the dataset with the chosen train/validation ratio.Note: we set the cut-off length to 64, which means only the first 64 tokens from the review will be used. You can increase this value to achieve better accuracy.
###Code
// Prepare the vocabulary
DefaultVocabulary vocabulary = DefaultVocabulary.builder()
.addFromTextFile(embedding.getArtifact("vocab.txt"))
.optUnknownToken("[UNK]")
.build();
// Prepare dataset
int maxTokenLength = 64; // cutoff tokens length
int batchSize = 8;
int limit = Integer.MAX_VALUE;
// int limit = 512; // uncomment for quick testing
BertFullTokenizer tokenizer = new BertFullTokenizer(vocabulary, true);
CsvDataset amazonReviewDataset = getDataset(batchSize, tokenizer, maxTokenLength, limit);
// split data with 7:3 train:valid ratio
RandomAccessDataset[] datasets = amazonReviewDataset.randomSplit(7, 3);
RandomAccessDataset trainingSet = datasets[0];
RandomAccessDataset validationSet = datasets[1];
###Output
_____no_output_____
###Markdown
Setup Trainer and training configThen, we need to set up our trainer. We set up the accuracy evaluator and the loss function. The model training logs will be saved to `build/model`.
###Code
SaveModelTrainingListener listener = new SaveModelTrainingListener("build/model");
listener.setSaveModelCallback(
trainer -> {
TrainingResult result = trainer.getTrainingResult();
Model model = trainer.getModel();
// track for accuracy and loss
float accuracy = result.getValidateEvaluation("Accuracy");
model.setProperty("Accuracy", String.format("%.5f", accuracy));
model.setProperty("Loss", String.format("%.5f", result.getValidateLoss()));
});
DefaultTrainingConfig config = new DefaultTrainingConfig(Loss.softmaxCrossEntropyLoss()) // loss type
.addEvaluator(new Accuracy())
.optDevices(Engine.getInstance().getDevices(1)) // train using single GPU
.addTrainingListeners(TrainingListener.Defaults.logging("build/model"))
.addTrainingListeners(listener);
###Output
_____no_output_____
###Markdown
Start trainingWe will start our training process. Training on GPU will take approximately 10 mins. On CPU, it will take more than 2 hours to finish.
###Code
int epoch = 2;
Trainer trainer = model.newTrainer(config);
trainer.setMetrics(new Metrics());
Shape encoderInputShape = new Shape(batchSize, maxTokenLength);
// initialize trainer with proper input shape
trainer.initialize(encoderInputShape);
EasyTrain.fit(trainer, epoch, trainingSet, validationSet);
System.out.println(trainer.getTrainingResult());
###Output
_____no_output_____
###Markdown
Save the model
###Code
model.save(Paths.get("build/model"), "amazon-review.param");
###Output
_____no_output_____
###Markdown
Verify the modelWe can create a predictor from the model to run inference on our customized dataset. Firstly, we can create a `Translator` for the model to do preprocessing and post processing. Similar to what we have done before, we need to tokenize the input sentence and get the output ranking.
###Code
class MyTranslator implements Translator<String, Classifications> {
private BertFullTokenizer tokenizer;
private Vocabulary vocab;
private List<String> ranks;
public MyTranslator(BertFullTokenizer tokenizer) {
this.tokenizer = tokenizer;
vocab = tokenizer.getVocabulary();
ranks = Arrays.asList("1", "2", "3", "4", "5");
}
@Override
public Batchifier getBatchifier() { return Batchifier.STACK; }
@Override
public NDList processInput(TranslatorContext ctx, String input) {
List<String> tokens = tokenizer.tokenize(input);
float[] indices = new float[tokens.size() + 2];
indices[0] = vocab.getIndex("[CLS]");
for (int i = 0; i < tokens.size(); i++) {
indices[i+1] = vocab.getIndex(tokens.get(i));
}
indices[indices.length - 1] = vocab.getIndex("[SEP]");
return new NDList(ctx.getNDManager().create(indices));
}
@Override
public Classifications processOutput(TranslatorContext ctx, NDList list) {
return new Classifications(ranks, list.singletonOrThrow().softmax(0));
}
}
###Output
_____no_output_____
###Markdown
Finally, we can create a `Predictor` to run the inference. Let's try with a random customer review:
###Code
String review = "It works great, but it takes too long to update itself and slows the system";
Predictor<String, Classifications> predictor = model.newPredictor(new MyTranslator(tokenizer));
predictor.predict(review)
###Output
_____no_output_____ |
quiz/m1_quant_basics/l3_market_mechanics/resample_data.ipynb | ###Markdown
Resample Data Pandas ResampleYou've learned about bucketing to different periods of time like Months. Let's see how it's done. We'll start with an example series of days.
###Code
import numpy as np
import pandas as pd
dates = pd.date_range('10/10/2018', periods=11, freq='D')
close_prices = np.arange(len(dates))
close = pd.Series(close_prices, dates)
close
###Output
_____no_output_____
###Markdown
Let's say we want to bucket these days into 3 day periods. To do that, we'll use the [DataFrame.resample](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.resample.html) function. The first parameter in this function is a string called `rule`, which is a representation of how to resample the data. This string representation is made using an offset alias. You can find a list of them [here](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases). To create 3 day periods, we'll set `rule` to "3D".
###Code
close.resample('3D')
###Output
_____no_output_____
###Markdown
This returns a `DatetimeIndexResampler` object. It's an intermediate object similar to the `GroupBy` object. Just like group by, it breaks the original data into groups. That means, we'll have to apply an operation to these groups. Let's make it simple and get the first element from each group.
###Code
close.resample('3D').first()
###Output
_____no_output_____
###Markdown
You might notice that this is the same as `.iloc[::3]`
###Code
close.iloc[::3]
###Output
_____no_output_____
###Markdown
So, why use the `resample` function instead of `.iloc[::3]` or the `groupby` function?The `resample` function shines when handling time and/or date specific tasks. In fact, you can't use this function if the index isn't a [time-related class](https://pandas.pydata.org/pandas-docs/version/0.21/timeseries.html#overview).
###Code
try:
# Attempt resample on a series without a time index
pd.Series(close_prices).resample('W')
except TypeError:
print('It threw a TypeError.')
else:
print('It worked.')
###Output
It threw a TypeError.
###Markdown
One of the resampling tasks it can help with is resampling on periods, like weeks. Let's resample `close` from its daily frequency to weeks. We'll use the "W" offset alias, which stands for Weeks.
###Code
pd.DataFrame({
'days': close,
'weeks': close.resample('W').first()})
###Output
_____no_output_____
###Markdown
The weeks offset considers the start of a week on a Monday. Since 2018-10-10 is a Wednesday, the first group only looks at the first 5 items. There are offsets that handle more complicated problems like filtering for Holidays. For now, we'll only worry about resampling for days, weeks, months, quarters, and years. The frequency you want the data to be in will depend on how often you'll be trading. If you're making trade decisions based on reports that come out at the end of the year, you might only care about a frequency of years or months. OHLCNow that you've seen how Pandas resamples time series data, we can apply this to Open, High, Low, and Close (OHLC). Pandas provides the [`Resampler.ohlc`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.ohlc.html#pandas.core.resample.Resampler.ohlc) function, which will convert any resampling frequency to OHLC data. Let's get the Weekly OHLC.
###Code
close.resample('W').ohlc()
###Output
_____no_output_____
###Markdown
Can you spot a potential problem with that? It has to do with resampling data that has already been resampled.We're getting the OHLC from close data. If we want OHLC data from already resampled data, we should resample the first price from the open data, resample the highest price from the high data, etc.To get the weekly closing prices from `close`, you can use the [`Resampler.last`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.last.html#pandas.core.resample.Resampler.last) function.
###Code
close.resample('W').last()
###Output
_____no_output_____
###Markdown
QuizImplement the `days_to_weeks` function to resample OHLC price data to weekly OHLC price data. You can find more Resampler functions [here](https://pandas.pydata.org/pandas-docs/version/0.21.0/api.html#id44) for calculating high and low prices.
###Code
import quiz_tests
def days_to_weeks(open_prices, high_prices, low_prices, close_prices):
"""Converts daily OHLC prices to weekly OHLC prices.
Parameters
----------
open_prices : DataFrame
Daily open prices for each ticker and date
high_prices : DataFrame
Daily high prices for each ticker and date
low_prices : DataFrame
Daily low prices for each ticker and date
close_prices : DataFrame
Daily close prices for each ticker and date
Returns
-------
open_prices_weekly : DataFrame
Weekly open prices for each ticker and date
high_prices_weekly : DataFrame
Weekly high prices for each ticker and date
low_prices_weekly : DataFrame
Weekly low prices for each ticker and date
close_prices_weekly : DataFrame
Weekly close prices for each ticker and date
"""
open_prices_weekly = open_prices.resample('W').first()
high_prices_weekly = high_prices.resample('W').max()
low_prices_weekly = low_prices.resample('W').min()
close_prices_weekly = close_prices.resample('W').last()
return open_prices_weekly, high_prices_weekly, low_prices_weekly, close_prices_weekly
quiz_tests.test_days_to_weeks(days_to_weeks)
###Output
Tests Passed
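###Markdown
As a quick, hedged usage sketch (not part of the quiz), the cell below builds made-up daily prices for two hypothetical tickers and runs them through `days_to_weeks`. The ticker names and price values are invented purely for illustration.
###Code
# Hedged usage sketch: synthetic daily prices for two hypothetical tickers.
tickers = ['ABC', 'XYZ']
daily_dates = pd.date_range('10/10/2018', periods=11, freq='D')
open_prices = pd.DataFrame(np.arange(22).reshape(11, 2), index=daily_dates, columns=tickers)
high_prices = open_prices + 1   # highs one above the open, for illustration only
low_prices = open_prices - 1    # lows one below the open
close_prices = open_prices      # close equal to open, purely to keep the example small
open_w, high_w, low_w, close_w = days_to_weeks(open_prices, high_prices, low_prices, close_prices)
print(open_w)
print(high_w)
###Output
_____no_output_____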
|
open-metadata-resources/open-metadata-labs/administration-labs/understanding-server-config.ipynb | ###Markdown
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png) Egeria Hands-On Lab Welcome to the Understanding Server Configuration Lab IntroductionEgeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.Egeria provides servers to manage the exchange of metadata between different technologies. These servers are configured using REST API calls to an Open Metadata and Governance (OMAG) Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.As each configuration call is made, the server platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.The configuration document is deployed to the server platform that is hosting the server. When a request is made to this server platform to start the server, it reads the configuration document and initializes the server with the appropriate services.In this hands-on lab you will learn about the contents of configuration documents. The scenario[Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).![Gary Geeke](https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png)Gary's userId is `garygeeke`.
###Code
adminUserId = "garygeeke"
###Output
_____no_output_____
###Markdown
In the [Egeria Server Configuration](../egeria-server-config.ipynb) lab, Gary configured servers for the Open Metadata and Governance (OMAG) Server Platforms shown in Figure 1:![Figure 1](../images/coco-pharmaceuticals-systems-omag-server-platforms.png)> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe following command checks that the platforms and servers are running.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----If the platform is not running, you will see a lot of red text. There are a number of choices on how to start it. Follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/).Once the platform is running you are ready to proceed.In this hands-on lab Gary is exploring the configuration document for the `cocoMDS1` server to understand how it is configured. The cocoMDS1 server runs on the Data Lake OMAG Server Platform.
###Code
mdrServerName = "cocoMDS1"
platformURLroot = dataLakePlatformURL
###Output
_____no_output_____
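###Markdown
Before retrieving the configuration document, it can be useful to confirm that the Data Lake platform itself is responding. The cell below is a minimal, hedged sketch that calls the platform services origin endpoint; the exact URL path follows the pattern used elsewhere in these labs and should be treated as an assumption if your Egeria version differs.
###Code
# Hedged liveness check - the origin endpoint path below is an assumption
# based on the platform services URL pattern used in these labs.
originURL = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/origin"
print ("GET " + originURL)
response = requests.get(originURL)
print ("Status code: " + str(response.status_code))
print ("Platform origin: " + response.text)
###Output
_____no_output_____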
###Markdown
----What follows are descriptions and coded requests to extract different parts of the configuration. Retrieve configuration for cocoMDS1 - Data Lake Operations metadata serverThe command below retrieves the configuration document for `cocoMDS1`. It's a big document, so we will not display its full contents at this time.
###Code
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId
print (" ")
print ("Retrieving stored configuration document for " + mdrServerName + " ...")
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/configuration'
print ("GET " + url)
response = requests.get(url)
if response.status_code == 200:
print("Server configuration for " + mdrServerName + " has been retrieved")
else:
print("Server configuration for " + mdrServerName + " is unavailable")
serverConfig=response.json().get('omagserverConfig')
###Output
_____no_output_____
###Markdown
----The configuration includes an audit trail that gives a high level overview of how the server has been configured. This is always a useful starting point to understand the content of the configuration document for the server.
###Code
auditTrail=serverConfig.get('auditTrail')
print (" ")
if auditTrail == None:
print ("Empty configuration - no audit trail - configure the server before continuing")
else:
print ("Audit Trail: ")
for x in range(len(auditTrail)):
print (auditTrail[x])
###Output
_____no_output_____
###Markdown
----The rest of the lab notebook extracts the different sections from the configuration document and explains what they mean and how they are used in the server. ---- Server names and identifiersA server has a unique name that is used on all REST calls that concern it. In addition, it is assigned a unique identifier (GUID) and an optional server type. It is also possible to set up the name of the organization that owns the server. These values are used in events to help locate the origin of metadata.
###Code
print (" ")
serverName=serverConfig.get('localServerName')
if serverName != None:
print ("Server name: " + serverName)
serverGUID=serverConfig.get('localServerId')
if serverGUID != None:
print ("Server GUID: " + serverGUID)
serverType=serverConfig.get('localServerType')
if serverType != None:
print ("Server Type: " + serverType)
organization=serverConfig.get('organizationName')
if organization != None:
print ("Organization: " + organization)
###Output
_____no_output_____
###Markdown
----In addition, if the server has a local repository then the collection of metadata stored in it has a unique identifier (GUID) and a name. These values are used to identify the origin of metadata instances since they are included in the audit header of any open metadata instance.
###Code
print (" ")
repositoryServicesConfig = serverConfig.get('repositoryServicesConfig')
if repositoryServicesConfig != None:
repositoryConfig = repositoryServicesConfig.get('localRepositoryConfig')
if repositoryConfig != None:
localMetadataCollectionId = repositoryConfig.get('metadataCollectionId')
if localMetadataCollectionId != None:
print ("Local metadata collection id: " + localMetadataCollectionId)
localMetadataCollectionName = repositoryConfig.get('metadataCollectionName')
if localMetadataCollectionName != None:
print ("Local metadata collection name: " + localMetadataCollectionName)
###Output
_____no_output_____
###Markdown
----Finally, a server with a repository that joins one or more cohorts needs to send out details of how a remote server should call this server during a federated query. This information is called the **local repository's remote connection**.By default, the network address that is defined in this connection begins with the value set in the **server URL root** property at the time the repository was configured. The server name is then added to the URL.The code below extracts the server URL root and the **full URL endpoint** sent to other servers in the same cohort(s) in the local repository's remote connection.
###Code
print (" ")
serverURLRoot=serverConfig.get('localServerURL')
if serverURLRoot != None:
print ("Server URL root: " + serverURLRoot)
if repositoryConfig != None:
localRepositoryRemoteConnection = repositoryConfig.get('localRepositoryRemoteConnection')
if localRepositoryRemoteConnection != None:
endpoint = localRepositoryRemoteConnection.get('endpoint')
if endpoint != None:
fullURLEndpoint = endpoint.get('address')
if fullURLEndpoint != None:
print ("Full URL endpoint: " + fullURLEndpoint)
print (" ")
###Output
_____no_output_____
###Markdown
You will notice that the platform's specific network address is used in both values.Using a specific network address is fine if the server is always going to run on this platform at this network address. If the server is likely to be moved to a different platform, or the platform to a different location, it is easier to set up the full URL endpoint to include a logical DNS name. This can be done by setting server URL root to this name before the local repository is configured, or updating the full URL endpoint in the local repository's remote connection. When the repository next registers with the cohort, it will send out its new full URL endpoint as part of the registration request.The complete local repository's remote connection is shown below. Notice the **connectorProviderClassName** towards the bottom of the definition. This is the factory class that creates the connector in the remote server.
###Code
print (" ")
prettyResponse = json.dumps(localRepositoryRemoteConnection, indent=4)
print ("localRepositoryRemoteConnection: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
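###Markdown
If you wanted the full URL endpoint above to advertise a logical DNS name rather than the platform-specific address, the server URL root would be set before the local repository is configured. The cell below is only a hedged sketch of the shape of that admin services call: the endpoint path, the query parameter and the DNS name are assumptions for illustration, so the request itself is left commented out.
###Code
# Hedged sketch only - endpoint path, query parameter and DNS name are assumptions.
logicalServerURL = "https://cocoMDS1.example.com:9443"   # hypothetical DNS-based URL root
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/server-url-root?url=' + logicalServerURL
print ("POST " + url)
# response = requests.post(url)   # uncomment to actually update the configuration
###Output
_____no_output_____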
###Markdown
----The repository services running in a metadata repository use a number of connectors to access the resources they need.The cocoMDS1 metadata server needs a local repository to store metadata about the data and processing occurring in the data lake.This is the **local repository's local connection**.ODPi Egeria supports 2 types of repositories. One is an in-memory repository that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the graph repository.The code below shows which type of local repository is in use. It also shows the destinations where audit log records are to be sent. A server can have a list of destinations. In this example, the server is just using a simple console log.
###Code
print (" ")
if repositoryServicesConfig != None:
auditLogConnections = repositoryServicesConfig.get('auditLogConnections')
enterpriseAccessConfig = repositoryServicesConfig.get('enterpriseAccessConfig')
cohortConfigList = repositoryServicesConfig.get('cohortConfigList')
if auditLogConnections != None:
print ("Audit Log Destinations: ")
for logDestCount in range(len(auditLogConnections)):
auditLogConnection = auditLogConnections[logDestCount]
if auditLogConnection != None:
connectorType = auditLogConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (str(logDestCount+1) + ". description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
if repositoryConfig != None:
localRepositoryLocalConnection = repositoryConfig.get('localRepositoryLocalConnection')
print (" ")
if localRepositoryLocalConnection != None:
print ("Local Repository's Local Connection: ")
connectorType = localRepositoryLocalConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Configuring securityThere are two levels of security to set up for an ODPi Egeria server: authentication and authorization. Authentication of servers and peopleODPi Egeria recommends that each server has its own identity and that is embedded with each request as part of the transport level security (TLS). The members of the cohort (and the event topic) then grant access to each other and no-one else.The identity of the calling user also flows with each request, but this time as a unique string value (typically userId) in the URL of the request. You can see examples of this in the configuration requests being issued during this hands-on lab as Gary's userId `garygeeke` appears on each request.The server configuration supports a userId and password for TLS. The userId is also used when the server is processing requests that originate from an event and so there is no calling user.
###Code
print (" ")
localServerUserId=serverConfig.get('localServerUserId')
if localServerUserId != None:
print ("local Server UserId: " + localServerUserId)
localServerPassword=serverConfig.get('localServerPassword')
if localServerPassword != None:
print ("local Server Password: " + localServerPassword)
###Output
_____no_output_____
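###Markdown
For completeness, the cell below sketches roughly how the server user id and password shown above would have been set during configuration. It is a hedged sketch only: the endpoint names and the credential values are assumptions, and the requests are commented out so that this lab does not modify the server.
###Code
# Hedged sketch only - endpoint names and credential values are assumptions.
newServerUserId = "cocoMDS1npa"          # hypothetical non-personal account for the server
newServerPassword = "cocoMDS1passw0rd"   # hypothetical password
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/server-user-id?id=' + newServerUserId
print ("POST " + url)
# requests.post(url)   # uncomment to apply
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/server-password?password=' + newServerPassword
print ("POST " + url)
# requests.post(url)   # uncomment to apply
###Output
_____no_output_____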
###Markdown
---- Authorization of metadata requestsODPi Egeria servers also support a metadata security connector that plugs into the server and is called to provide authorization decisions as part of every request.This connector is configured in the configuration document by passing the **Connection** object that provides the properties needed to create the connector. The following call shows the security connection configured for this server ...
###Code
print (" ")
serverSecurityConnection=serverConfig.get('serverSecurityConnection')
if serverSecurityConnection != None:
print ("Server's Security Connection:")
prettyResponse = json.dumps(serverSecurityConnection, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
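###Markdown
The configuration call itself is not shown in this lab, so the cell below sketches its shape: a **Connection** object naming the connector provider class is POSTed to the server's security/connection endpoint. Both the endpoint path and the connector provider class name are assumptions drawn from the Coco Pharmaceuticals samples, and the request is commented out.
###Code
# Hedged sketch only - endpoint path and connector provider class name are assumptions.
securityConnectionBody = {
    "class": "Connection",
    "connectorType": {
        "class": "ConnectorType",
        "connectorProviderClassName": "org.odpi.openmetadata.metadatasecurity.samples.CocoPharmaServerSecurityProvider"
    }
}
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/security/connection'
print ("POST " + url)
# requests.post(url, json=securityConnectionBody)   # uncomment to apply
###Output
_____no_output_____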
###Markdown
---- Setting up the event busThe server needs to define the event bus it will use to exchange events about metadata. This event bus configuration is used to connect to the cohorts and to provide the in / out topics for each of the Open Metadata Access Services (OMASs) - more later.The event bus configuration for cocoMDS1 provides the network address that the event bus (Apache Kafka) is using.
###Code
print (" ")
eventBusConfig=serverConfig.get('eventBusConfig')
if eventBusConfig != None:
print ("Event Bus Configuration:")
prettyResponse = json.dumps(eventBusConfig, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Extracting the descriptions of the open metadata repository cohorts for the serverAn open metadata repository cohort defines the servers that will share metadata. A server can join multiple cohorts. ForCoco Pharmaceuticals, cocoMDS1 is a member of the core `cocoCohort`.![Figure 2](../images/coco-pharmaceuticals-systems-cohorts.png)> **Figure 2:** Membership of Coco Pharmaceuticals' cohortsYou can see this in the configuration below.
###Code
print (" ")
if cohortConfigList != None:
print ("Cohort(s) that this server is a member of: ")
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
cohortName = cohortConfig.get('cohortName')
print (str(cohortCount+1) + ". name: " + cohortName)
cohortRegistryConnection = cohortConfig.get('cohortRegistryConnection')
if cohortRegistryConnection != None:
print (" Cohort Registry Connection: ")
connectorType = cohortRegistryConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
print (" Cohort Topic Connection: ")
connectorType = topicConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Reviewing the configured access servicesOpen Metadata Access Services (OMASs) provide the specialized APIs and events for specific tools and personas. ODPi Egeria provides an initial set of access services, and additional services can be plugged into the server platform.To query the choice of access services available in the platform, use the following command:
###Code
print (" ")
print ("Retrieving the registered access services ...")
url = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/registered-services/access-services"
print ("GET " + url)
response = requests.get(url)
prettyResponse = json.dumps(response.json(), indent=4)
print ("Response: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The `cocoMDS1` server is for data lake operations. It needs the access services to support the onboarding and decommissioning of assets, along with the access services that support the different engines that maintain the data lake.
###Code
print (" ")
accessServiceConfig=serverConfig.get('accessServicesConfig')
if accessServiceConfig != None:
print ("Configured Access Services: ")
print (" ")
for accessServiceCount in range(len(accessServiceConfig)):
accessServiceDefinition = accessServiceConfig[accessServiceCount]
if accessServiceDefinition != None:
accessServiceName = accessServiceDefinition.get('accessServiceName')
accessServiceOptions = accessServiceDefinition.get('accessServiceOptions')
if accessServiceName != None:
print (" " + accessServiceName + " options: " + json.dumps(accessServiceOptions, indent=4))
print (" ")
###Output
_____no_output_____
###Markdown
---- Listing the topics used by a serverBoth the cohorts and the access services make extensive use of the event bus. The code below extracts the names of all of the event bus topics used by this server.
###Code
print (" ")
print ("List of Topics used by " + mdrServerName)
if cohortConfigList != None:
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
embeddedConnections = topicConnection.get('embeddedConnections')
if embeddedConnections != None:
for connCount in range(len(embeddedConnections)):
embeddedConnection = embeddedConnections[connCount]
if embeddedConnection != None:
eventBusConnection = embeddedConnection.get('embeddedConnection')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
if accessServiceConfig != None:
for accessServiceCount in range(len(accessServiceConfig)):
accessService = accessServiceConfig[accessServiceCount]
if accessService != None:
eventBusConnection = accessService.get('accessServiceInTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
eventBusConnection = accessService.get('accessServiceOutTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
print (" ")
###Output
_____no_output_____
###Markdown
---- Controlling the volume of metadata exchange in a single REST callTo ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The code below shows the maximum page size that is set in this server's configuration document.
###Code
print (" ")
maxPageSize=serverConfig.get('maxPageSize')
if maxPageSize != None:
print ("Maximum records return on a REST call: " + str(maxPageSize))
###Output
_____no_output_____
###Markdown
----Finally, here is the configuration document in total
###Code
print (" ")
prettyResponse = json.dumps(serverConfig, indent=4)
print ("Configuration for server: " + mdrServerName)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png) ODPi Egeria Hands-On Lab Welcome to the Understanding Server Configuration Lab IntroductionODPi Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.Egeria provides servers to manage the exchange of metadata between different technologies. These servers are configured using REST API calls to an Open Metadata and Governance (OMAG) Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.As each configuration call is made, the server platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.The configuration document is deployed to the server platform that is hosting the server. When a request is made to this server platform to start the server, it reads the configuration document and initializes the server with the appropriate services.In this hands-on lab you will learn about the contents of configuration documents. The scenario[Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).![Gary Geeke](https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png)Gary's userId is `garygeeke`.
###Code
adminUserId = "garygeeke"
###Output
_____no_output_____
###Markdown
In the [Egeria Server Configuration](../egeria-server-config.ipynb) lab, Gary configured servers for the Open Metadata and Governance (OMAG) Server Platforms shown in Figure 1:![Figure 1](../images/coco-pharmaceuticals-systems-omag-server-platforms.png)> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe following command checks that the platforms and servers are running.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----If the platform is not running, you will see a lot of red text. There are a number of choices on how to start it. Follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/).Once the platform is running you are ready to proceed.In this hands-on lab Gary is exploring the configuration document for the `cocoMDS1` server to understand how it is configured. The cocoMDS1 server runs on the Data Lake OMAG Server Platform.
###Code
mdrServerName = "cocoMDS1"
platformURLroot = dataLakePlatformURL
###Output
_____no_output_____
###Markdown
----What follows are descriptions and coded requests to extract different parts of the configuration. Retrieve configuration for cocoMDS1 - Data Lake Operations metadata serverThe command below retrieves the configuration document for `cocoMDS1`. It's a big document, so we will not display its full contents at this time.
###Code
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId
print (" ")
print ("Retrieving stored configuration document for " + mdrServerName + " ...")
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/configuration'
print ("GET " + url)
response = requests.get(url)
if response.status_code == 200:
print("Server configuration for " + mdrServerName + " has been retrieved")
else:
print("Server configuration for " + mdrServerName + " is unavailable")
serverConfig=response.json().get('omagserverConfig')
###Output
_____no_output_____
###Markdown
----The configuration includes an audit trail that gives a high level overview of how the server has been configured. This is always a useful starting point to understand the content of the configuration document for the server.
###Code
auditTrail=serverConfig.get('auditTrail')
print (" ")
if auditTrail == None:
print ("Empty configuration - no audit trail - configure the server before continuing")
else:
print ("Audit Trail: ")
for x in range(len(auditTrail)):
print (auditTrail[x])
###Output
_____no_output_____
###Markdown
----The rest of the lab notebook extracts the different sections from the configuration document and explains what they mean and how they are used in the server. ---- Server names and identifiersA server has a unique name that is used on all REST calls that concern it. In addition, it is assigned a unique identifier (GUID) and an optional server type. It is also possible to set up the name of the organization that owns the server. These values are used in events to help locate the origin of metadata.
###Code
print (" ")
serverName=serverConfig.get('localServerName')
if serverName != None:
print ("Server name: " + serverName)
serverGUID=serverConfig.get('localServerId')
if serverGUID != None:
print ("Server GUID: " + serverGUID)
serverType=serverConfig.get('localServerType')
if serverType != None:
print ("Server Type: " + serverType)
organization=serverConfig.get('organizationName')
if organization != None:
print ("Organization: " + organization)
###Output
_____no_output_____
###Markdown
----In addition, if the server has a local repository then the collection of metadata stored in it has a unique identifier (GUID) and a name. These values are used to identify the origin of metadata instances since they are included in the audit header of any open metadata instance.
###Code
print (" ")
repositoryServicesConfig = serverConfig.get('repositoryServicesConfig')
if repositoryServicesConfig != None:
repositoryConfig = repositoryServicesConfig.get('localRepositoryConfig')
if repositoryConfig != None:
localMetadataCollectionId = repositoryConfig.get('metadataCollectionId')
if localMetadataCollectionId != None:
print ("Local metadata collection id: " + localMetadataCollectionId)
localMetadataCollectionName = repositoryConfig.get('metadataCollectionName')
if localMetadataCollectionName != None:
print ("Local metadata collection name: " + localMetadataCollectionName)
###Output
_____no_output_____
###Markdown
----Finally, a server with a repository that joins one or more cohorts needs to send out details of how a remote server should call this server during a federated query. This information is called the **local repository's remote connection**.By default, the network address that is defined in this connection begins with the value set in the **server URL root** property at the time the repository was configured. The server name is then added to the URL.The code below extracts the server URL root and the **full URL endpoint** sent to other servers in the same cohort(s) in the local repository's remote connection.
###Code
print (" ")
serverURLRoot=serverConfig.get('localServerURL')
if serverURLRoot != None:
print ("Server URL root: " + serverURLRoot)
if repositoryConfig != None:
localRepositoryRemoteConnection = repositoryConfig.get('localRepositoryRemoteConnection')
if localRepositoryRemoteConnection != None:
endpoint = localRepositoryRemoteConnection.get('endpoint')
if endpoint != None:
fullURLEndpoint = endpoint.get('address')
if fullURLEndpoint != None:
print ("Full URL endpoint: " + fullURLEndpoint)
print (" ")
###Output
_____no_output_____
###Markdown
You will notice that the platform's specific network address is used in both values.Using a specific network address is fine if the server is always going to run on this platform at this network address. If the server is likely to be moved to a different platform, or the platform to a different location, it is easier to set up the full URL endpoint to include a logical DNS name. This can be done by setting server URL root to this name before the local repository is configured, or updating the full URL endpoint in the local repository's remote connection. When the repository next registers with the cohort, it will send out its new full URL endpoint as part of the registration request.The complete local repository's remote connection is shown below. Notice the **connectorProviderClassName** towards the bottom of the definition. This is the factory class that creates the connector in the remote server.
###Code
print (" ")
prettyResponse = json.dumps(localRepositoryRemoteConnection, indent=4)
print ("localRepositoryRemoteConnection: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The repository services running in a metadata repository use a number of connectors to access the resources they need.The cocoMDS1 metadata server needs a local repository to store metadata about the data and processing occurring in the data lake.This is the **local repository's local connection**.ODPi Egeria supports 2 types of repositories. One is an in-memory repository that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the graph repository.The code below shows which type of local repository is in use. It also shows the destinations where audit log records are to be sent. A server can have a list of destinations. In this example, the server is just using a simple console log.
###Code
print (" ")
if repositoryServicesConfig != None:
auditLogConnections = repositoryServicesConfig.get('auditLogConnections')
enterpriseAccessConfig = repositoryServicesConfig.get('enterpriseAccessConfig')
cohortConfigList = repositoryServicesConfig.get('cohortConfigList')
if auditLogConnections != None:
print ("Audit Log Destinations: ")
for logDestCount in range(len(auditLogConnections)):
auditLogConnection = auditLogConnections[logDestCount]
if auditLogConnection != None:
connectorType = auditLogConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (str(logDestCount+1) + ". description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
if repositoryConfig != None:
localRepositoryLocalConnection = repositoryConfig.get('localRepositoryLocalConnection')
print (" ")
if localRepositoryLocalConnection != None:
print ("Local Repository's Local Connection: ")
connectorType = localRepositoryLocalConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Configuring securityThere are two levels of security to set up for an ODPi Egeria server: authentication and authorization. Authentication of servers and peopleODPi Egeria recommends that each server has its own identity and that is embedded with each request as part of the transport level security (TLS). The members of the cohort (and the event topic) then grant access to each other and no-one else.The identity of the calling user also flows with each request, but this time as a unique string value (typically userId) in the URL of the request. You can see examples of this in the configuration requests being issued during this hands-on lab as Gary's userId `garygeeke` appears on each request.The server configuration supports a userId and password for TLS. The userId is also used when the server is processing requests that originate from an event and so there is no calling user.
###Code
print (" ")
localServerUserId=serverConfig.get('localServerUserId')
if localServerUserId != None:
print ("local Server UserId: " + localServerUserId)
localServerPassword=serverConfig.get('localServerPassword')
if localServerPassword != None:
print ("local Server Password: " + localServerPassword)
###Output
_____no_output_____
###Markdown
---- Authorization of metadata requestsODPi Egeria servers also support a metadata security connector that plugs into the server and is called to provide authorization decisions as part of every request.This connector is configured in the configuration document by passing the **Connection** object that provides the properties needed to create the connector. The following call shows the security connection configured for this server ...
###Code
print (" ")
serverSecurityConnection=serverConfig.get('serverSecurityConnection')
if serverSecurityConnection != None:
print ("Server's Security Connection:")
prettyResponse = json.dumps(serverSecurityConnection, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Setting up the event busThe server needs to define the event bus it will use to exchange events about metadata. This event bus configuration is used to connect to the cohorts and to provide the in / out topics for each of the Open Metadata Access Services (OMASs) - more later.The event bus configuration for cocoMDS1 provides the network address that the event bus (Apache Kafka) is using.
###Code
print (" ")
eventBusConfig=serverConfig.get('eventBusConfig')
if eventBusConfig != None:
print ("Event Bus Configuration:")
prettyResponse = json.dumps(eventBusConfig, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Extracting the descriptions of the open metadata repository cohorts for the serverAn open metadata repository cohort defines the servers that will share metadata. A server can join multiple cohorts. ForCoco Pharmaceuticals, cocoMDS1 is a member of the core `cocoCohort`.![Figure 2](../images/coco-pharmaceuticals-systems-metadata-servers.png)> **Figure 2:** Membership of Coco Pharmaceuticals' cohortsYou can see this in the configuration below.
###Code
print (" ")
if cohortConfigList != None:
print ("Cohort(s) that this server is a member of: ")
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
cohortName = cohortConfig.get('cohortName')
print (str(cohortCount+1) + ". name: " + cohortName)
cohortRegistryConnection = cohortConfig.get('cohortRegistryConnection')
if cohortRegistryConnection != None:
print (" Cohort Registry Connection: ")
connectorType = cohortRegistryConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
print (" Cohort Topic Connection: ")
connectorType = topicConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Reviewing the configured access servicesOpen Metadata Access Services (OMASs) provide the specialized APIs and events for specific tools and personas. ODPi Egeria provides an initial set of access services, and additional services can be plugged into the server platform.To query the choice of access services available in the platform, use the following command:
###Code
print (" ")
print ("Retrieving the registered access services ...")
url = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/registered-services/access-services"
print ("GET " + url)
response = requests.get(url)
prettyResponse = json.dumps(response.json(), indent=4)
print ("Response: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The `cocoMDS1` server is for data lake operations. It needs the access services to support the onboarding and decommissioning of assets, along with the access services that support the different engines that maintain the data lake.
###Code
print (" ")
accessServiceConfig=serverConfig.get('accessServicesConfig')
if accessServiceConfig != None:
print ("Configured Access Services: ")
print (" ")
for accessServiceCount in range(len(accessServiceConfig)):
accessServiceDefinition = accessServiceConfig[accessServiceCount]
if accessServiceDefinition != None:
accessServiceName = accessServiceDefinition.get('accessServiceName')
accessServiceOptions = accessServiceDefinition.get('accessServiceOptions')
if accessServiceName != None:
print (" " + accessServiceName + " options: " + json.dumps(accessServiceOptions, indent=4))
print (" ")
###Output
_____no_output_____
###Markdown
---- Listing the topics used by a serverBoth the cohorts and the access services make extensive use of the event bus. The code below extracts the names of all of the event bus topics used by this server.
###Code
print (" ")
print ("List of Topics used by " + mdrServerName)
if cohortConfigList != None:
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
embeddedConnections = topicConnection.get('embeddedConnections')
if embeddedConnections != None:
for connCount in range(len(embeddedConnections)):
embeddedConnection = embeddedConnections[connCount]
if embeddedConnection != None:
eventBusConnection = embeddedConnection.get('embeddedConnection')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
if accessServiceConfig != None:
for accessServiceCount in range(len(accessServiceConfig)):
accessService = accessServiceConfig[accessServiceCount]
if accessService != None:
eventBusConnection = accessService.get('accessServiceInTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
eventBusConnection = accessService.get('accessServiceOutTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
print (" ")
###Output
_____no_output_____
###Markdown
---- Controlling the volume of metadata exchange in a single REST callTo ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The code below shows the maximum page size that is set in this server's configuration document.
###Code
print (" ")
maxPageSize=serverConfig.get('maxPageSize')
if maxPageSize != None:
print ("Maximum records return on a REST call: " + str(maxPageSize))
###Output
_____no_output_____
###Markdown
----Finally, here is the configuration document in total
###Code
print (" ")
prettyResponse = json.dumps(serverConfig, indent=4)
print ("Configuration for server: " + mdrServerName)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png) Egeria Hands-On Lab Welcome to the Understanding Server Configuration Lab IntroductionEgeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.Egeria provides servers to manage the exchange of metadata between different technologies. These servers are configured using REST API calls to an Open Metadata and Governance (OMAG) Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.As each configuration call is made, the server platform builds up a [configuration document](https://egeria-project.org/concepts/configuration-document) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.The configuration document is deployed to the server platform that is hosting the server. When a request is made to this server platform to start the server, it reads the configuration document and initializes the server with the appropriate services.In this hands-on lab you will learn about the contents of configuration documents. The scenario[Gary Geeke](https://egeria-project.org/practices/coco-pharmaceuticals/personas/gary-geeke/) is the IT Infrastructure leader at [Coco Pharmaceuticals](https://egeria-project.org/practices/coco-pharmaceuticals/).![Gary Geeke](https://raw.githubusercontent.com/odpi/egeria-docs/main/site/docs/practices/coco-pharmaceuticals/personas/gary-geeke.png)Gary's userId is `garygeeke`.
###Code
adminUserId = "garygeeke"
###Output
_____no_output_____
###Markdown
In the [Egeria Server Configuration](../egeria-server-config.ipynb) lab, Gary configured servers for the Open Metadata and Governance (OMAG) Server Platforms shown in Figure 1:![Figure 1](../images/coco-pharmaceuticals-systems-omag-server-platforms.png)> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe following command checks that the platforms and servers are running.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----If the platform is not running, you will see a lot of red text. There are a number of choices on how to start it. Follow [this link to set up and run the platform](https://egeria-project.org/education/open-metadata-labs/overview/).Once the platform is running you are ready to proceed.In this hands-on lab Gary is exploring the configuration document for the `cocoMDS1` server to understand how it is configured. The cocoMDS1 server runs on the Data Lake OMAG Server Platform.
###Code
mdrServerName = "cocoMDS1"
platformURLroot = dataLakePlatformURL
###Output
_____no_output_____
###Markdown
----What follows are descriptions and coded requests to extract different parts of the configuration. Retrieve configuration for cocoMDS1 - Data Lake Operations metadata serverThe command below retrieves the configuration document for `cocoMDS1`. It's a big document, so we will not display its full contents at this time.
###Code
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId
print (" ")
print ("Retrieving stored configuration document for " + mdrServerName + " ...")
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/configuration'
print ("GET " + url)
response = requests.get(url)
if response.status_code == 200:
print("Server configuration for " + mdrServerName + " has been retrieved")
else:
print("Server configuration for " + mdrServerName + " is unavailable")
serverConfig=response.json().get('omagserverConfig')
###Output
_____no_output_____
###Markdown
----The configuration includes an audit trail that gives a high level overview of how the server has been configured. This is always a useful starting point to understand the content of the configuration document for the server.
###Code
auditTrail=serverConfig.get('auditTrail')
print (" ")
if auditTrail == None:
print ("Empty configuration - no audit trail - configure the server before continuing")
else:
print ("Audit Trail: ")
for x in range(len(auditTrail)):
print (auditTrail[x])
###Output
_____no_output_____
###Markdown
----The rest of the lab notebook extracts the different sections from the configuration document and explains what they mean and how they are used in the server. ---- Server names and identifiersA server has a unique name that is used on all REST calls that concern it. In addition, it is assigned a unique identifier (GUID) and an optional server type. It is also possible to set up the name of the organization that owns the server. These values are used in events to help locate the origin of metadata.
###Code
print (" ")
serverName=serverConfig.get('localServerName')
if serverName != None:
print ("Server name: " + serverName)
serverGUID=serverConfig.get('localServerId')
if serverGUID != None:
print ("Server GUID: " + serverGUID)
serverType=serverConfig.get('localServerType')
if serverType != None:
print ("Server Type: " + serverType)
organization=serverConfig.get('organizationName')
if organization != None:
print ("Organization: " + organization)
###Output
_____no_output_____
###Markdown
----In addition, if the server has a local repository then the collection of metadata stored in it has a unique identifier (GUID) and a name. These values are used to identify the origin of metadata instances since they are included in the audit header of any open metadata instance.
###Code
print (" ")
repositoryServicesConfig = serverConfig.get('repositoryServicesConfig')
if repositoryServicesConfig != None:
repositoryConfig = repositoryServicesConfig.get('localRepositoryConfig')
if repositoryConfig != None:
localMetadataCollectionId = repositoryConfig.get('metadataCollectionId')
if localMetadataCollectionId != None:
print ("Local metadata collection id: " + localMetadataCollectionId)
localMetadataCollectionName = repositoryConfig.get('metadataCollectionName')
if localMetadataCollectionName != None:
print ("Local metadata collection name: " + localMetadataCollectionName)
###Output
_____no_output_____
###Markdown
----Finally, a server with a repository that joins one or more cohorts needs to send out details of how a remote server should call this server during a federated query. This information is called the **local repository's remote connection**.By default, the network address that is defined in this connection begins with the value set in the **server URL root** property at the time the repository was configured. The server name is then added to the URL.The code below extracts the server URL root and the **full URL endpoint** sent to other servers in the same cohort(s) in the local repository's remote connection.
###Code
print (" ")
serverURLRoot=serverConfig.get('localServerURL')
if serverURLRoot != None:
print ("Server URL root: " + serverURLRoot)
if repositoryConfig != None:
localRepositoryRemoteConnection = repositoryConfig.get('localRepositoryRemoteConnection')
if localRepositoryRemoteConnection != None:
endpoint = localRepositoryRemoteConnection.get('endpoint')
if endpoint != None:
fullURLEndpoint = endpoint.get('address')
if fullURLEndpoint != None:
print ("Full URL endpoint: " + fullURLEndpoint)
print (" ")
###Output
_____no_output_____
###Markdown
You will notice that the platform's specific network address is used in both values.Using a specific network address is fine if the server is always going to run on this platform at this network address. If the server is likely to be moved to a different platform, or the platform to a different location, it is easier to set up the full URL endpoint to include a logical DNS name. This can be done by setting server URL root to this name before the local repository is configured, or updating the full URL endpoint in the local repository's remote connection. When the repository next registers with the cohort, it will send out its new full URL endpoint as part of the registration request.The complete local repository's remote connection is shown below. Notice the **connectorProviderClassName** towards the bottom of the definition. This is the factory class that creates the connector in the remote server.
###Code
print (" ")
prettyResponse = json.dumps(localRepositoryRemoteConnection, indent=4)
print ("localRepositoryRemoteConnection: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
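###Markdown
----If you decide to use a logical DNS name as described above, the server URL root can be updated through the platform's admin services before the local repository is configured. The sketch below is illustrative only: the admin-services path (`.../server-url-root?url=...`) and the DNS name `https://cocomds1.coco-pharmaceuticals.net:9443` are assumptions - check the Egeria Server Configuration lab for the exact call supported by your release of Egeria. The request itself is left commented out so that running this cell does not change the stored configuration.
###Code
# Illustrative sketch only - the admin-services path and the DNS name below are assumptions;
# verify them against the Egeria Server Configuration lab before issuing the request.
newServerURLRoot = "https://cocomds1.coco-pharmaceuticals.net:9443"   # hypothetical logical DNS name

updateURL = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/server-url-root?url=' + newServerURLRoot
print ("POST " + updateURL)

# Uncomment to issue the request - it updates the stored configuration document.
# response = requests.post(updateURL)
# print (response.status_code)
###Output
_____no_output_____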
###Markdown
----The repository services running in a metadata repository use a number of connectors to access the resources they need.The cocoMDS1 metadata server needs a local repository to store metadata about the data and processing occurring in the data lake.The connection shown above is the **local repository's remote connection**.ODPi Egeria supports two types of repositories. One is an in-memory repository that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the graph repository.The code below shows which type of local repository is in use. It also shows the destinations where audit log records are to be sent. A server can have a list of destinations. In this example, the server is just using a simple console log.
###Code
print (" ")
if repositoryServicesConfig != None:
auditLogConnections = repositoryServicesConfig.get('auditLogConnections')
enterpriseAccessConfig = repositoryServicesConfig.get('enterpriseAccessConfig')
cohortConfigList = repositoryServicesConfig.get('cohortConfigList')
if auditLogConnections != None:
print ("Audit Log Destinations: ")
for logDestCount in range(len(auditLogConnections)):
auditLogConnection = auditLogConnections[logDestCount]
if auditLogConnection != None:
connectorType = auditLogConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (str(logDestCount+1) + ". description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
if repositoryConfig != None:
localRepositoryLocalConnection = repositoryConfig.get('localRepositoryLocalConnection')
print (" ")
if localRepositoryLocalConnection != None:
print ("Local Repository's Local Connection: ")
connectorType = localRepositoryLocalConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Configuring securityThere are two levels of security to set up for an ODPi Egeria server: authentication and authorization. Authentication of servers and peopleODPi Egeria recommends that each server has its own identity and that it is embedded with each request as part of the transport layer security (TLS). The members of the cohort (and the event topic) then grant access to each other and no-one else.The identity of the calling user also flows with each request, but this time as a unique string value (typically userId) in the URL of the request. You can see examples of this in the configuration requests being issued during this hands-on lab as Gary's userId `garygeeke` appears on each request.The server configuration supports a userId and password for TLS. The userId is also used when the server is processing requests that originate from an event and so there is no calling user.
###Code
print (" ")
localServerUserId=serverConfig.get('localServerUserId')
if localServerUserId != None:
print ("local Server UserId: " + localServerUserId)
localServerPassword=serverConfig.get('localServerPassword')
if localServerPassword != None:
print ("local Server Password: " + localServerPassword)
###Output
_____no_output_____
###Markdown
---- Authorization of metadata requestsODPi Egeria servers also support a metadata security connector that plugs into the server and is called to provide authorization decisions as part of every request.This connector is configured in the configuration document by passing the **Connection** object that provides the properties needed to create the connector on the following call ...
###Code
print (" ")
serverSecurityConnection=serverConfig.get('serverSecurityConnection')
if serverSecurityConnection != None:
print ("Server's Security Connection:")
prettyResponse = json.dumps(serverSecurityConnection, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
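###Markdown
----For reference, a **Connection** for a server security connector is a JSON structure whose most important property is the connector provider class name. The sketch below builds a minimal example in Python so you can see the shape; the provider class name is a placeholder of ours, not the class used by cocoMDS1 - the real one appears in the output above.
###Code
# Minimal illustration of the shape of a Connection object for a server security connector.
# The connectorProviderClassName is a placeholder - use the class shown in the server's
# actual configuration (printed above) rather than this value.
exampleSecurityConnection = {
    "class": "Connection",
    "connectorType": {
        "class": "ConnectorType",
        "connectorProviderClassName": "org.example.security.PlaceholderServerSecurityProvider"
    }
}
print (json.dumps(exampleSecurityConnection, indent=4))
###Output
_____no_output_____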
###Markdown
---- Setting up the event busThe server needs to define the event bus it will use to exchange events about metadata. This event bus configuration is used to connect to the cohorts and to provide the in / out topics for each of the Open Metadata Access Services (OMASs) - more later.The event bus configuration for cocoMDS1 provides the network address that the event bus (Apache Kafka) is using.
###Code
print (" ")
eventBusConfig=serverConfig.get('eventBusConfig')
if eventBusConfig != None:
print ("Event Bus Configuration:")
prettyResponse = json.dumps(eventBusConfig, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Extracting the descriptions of the open metadata repository cohorts for the serverAn open metadata repository cohort defines the servers that will share metadata. A server can join multiple cohorts. For Coco Pharmaceuticals, cocoMDS1 is a member of the core `cocoCohort`.![Figure 2](../images/coco-pharmaceuticals-systems-cohorts.png)> **Figure 2:** Membership of Coco Pharmaceuticals' cohortsYou can see this in the configuration below.
###Code
print (" ")
if cohortConfigList != None:
print ("Cohort(s) that this server is a member of: ")
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
cohortName = cohortConfig.get('cohortName')
print (str(cohortCount+1) + ". name: " + cohortName)
cohortRegistryConnection = cohortConfig.get('cohortRegistryConnection')
if cohortRegistryConnection != None:
print (" Cohort Registry Connection: ")
connectorType = cohortRegistryConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
print (" Cohort Topic Connection: ")
connectorType = topicConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Reviewing the configured access servicesOpen Metadata Access Services (OMASs) provide the specialized APIs and events for specific tools and personas. ODPi Egeria provides an initial set of access services, and additional services can be plugged into the server platform.To query the choice of access services available in the platform, use the following command:
###Code
print (" ")
print ("Retrieving the registered access services ...")
url = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/registered-services/access-services"
print ("GET " + url)
response = requests.get(url)
prettyResponse = json.dumps(response.json(), indent=4)
print ("Response: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The `cocoMDS1` server is for the data lake operations. It needs the access services to support the onboarding and decommissioning of assets along with the access services that support the different engines that maintain the data lake.
###Code
print (" ")
accessServiceConfig=serverConfig.get('accessServicesConfig')
if accessServiceConfig != None:
print ("Configured Access Services: ")
print (" ")
for accessServiceCount in range(len(accessServiceConfig)):
accessServiceDefinition = accessServiceConfig[accessServiceCount]
if accessServiceDefinition != None:
accessServiceName = accessServiceDefinition.get('accessServiceName')
accessServiceOptions = accessServiceDefinition.get('accessServiceOptions')
if accessServiceName != None:
print (" " + accessServiceName + " options: " + json.dumps(accessServiceOptions, indent=4))
print (" ")
###Output
_____no_output_____
###Markdown
---- Listing the topics used by a serverBoth the cohorts and the access services make extensive use of the event bus. The code below extracts the names of all of the event bus topics used by this server.
###Code
print (" ")
print ("List of Topics used by " + mdrServerName)
if cohortConfigList != None:
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
embeddedConnections = topicConnection.get('embeddedConnections')
if embeddedConnections != None:
for connCount in range(len(embeddedConnections)):
embeddedConnection = embeddedConnections[connCount]
if embeddedConnection != None:
eventBusConnection = embeddedConnection.get('embeddedConnection')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
if accessServiceConfig != None:
for accessServiceCount in range(len(accessServiceConfig)):
accessService = accessServiceConfig[accessServiceCount]
if accessService != None:
eventBusConnection = accessService.get('accessServiceInTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
eventBusConnection = accessService.get('accessServiceOutTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
print (" ")
###Output
_____no_output_____
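###Markdown
----The same traversal can be written to collect the topic names into a Python set instead of printing them as it goes, which makes the list easier to reuse (for example, to compare the topics used by two servers). The sketch below is a small refactoring of the code above and relies only on the fields already shown.
###Code
# Gather every event bus topic name used by this server into a set (duplicates are removed).
topicNames = set()

def addTopicFromConnection(eventBusConnection):
    # Pull the topic name out of an event bus connection's endpoint, if present.
    if eventBusConnection:
        endpoint = eventBusConnection.get('endpoint')
        if endpoint and endpoint.get('address'):
            topicNames.add(endpoint.get('address'))

if cohortConfigList:
    for cohortConfig in cohortConfigList:
        topicConnection = cohortConfig.get('cohortOMRSTopicConnection') if cohortConfig else None
        if topicConnection:
            for embeddedConnection in (topicConnection.get('embeddedConnections') or []):
                if embeddedConnection:
                    addTopicFromConnection(embeddedConnection.get('embeddedConnection'))

if accessServiceConfig:
    for accessService in accessServiceConfig:
        if accessService:
            addTopicFromConnection(accessService.get('accessServiceInTopic'))
            addTopicFromConnection(accessService.get('accessServiceOutTopic'))

for topicName in sorted(topicNames):
    print (" " + topicName)
###Output
_____no_output_____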
###Markdown
---- Controlling the volume of metadata exchange in a single REST callTo ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The code below extracts the maximum page size from the server's configuration document.
###Code
print (" ")
maxPageSize=serverConfig.get('maxPageSize')
if maxPageSize != None:
    print ("Maximum records returned on a REST call: " + str(maxPageSize))
###Output
_____no_output_____
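###Markdown
----To see what the maximum page size means in practice, the sketch below shows the arithmetic a caller would use to break a large request into pages that respect the server's limit. It is a generic illustration of paging - the item count and variable names are ours, and no Egeria service is called.
###Code
# Generic illustration of paging within the server's maximum page size.
requestedItems = 250                                  # hypothetical number of items the caller wants
pageSize = maxPageSize if maxPageSize else 100        # fall back to a guess if no limit is configured

startFrom = 0
pageNumber = 1
while startFrom < requestedItems:
    itemsThisPage = min(pageSize, requestedItems - startFrom)
    print ("Page " + str(pageNumber) + ": startFrom=" + str(startFrom) + " pageSize=" + str(itemsThisPage))
    startFrom += itemsThisPage
    pageNumber += 1
###Output
_____no_output_____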
###Markdown
----Finally, here is the configuration document in full.
###Code
print (" ")
prettyResponse = json.dumps(serverConfig, indent=4)
print ("Configuration for server: " + mdrServerName)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png) ODPi Egeria Hands-On Lab Welcome to the Understanding Server Configuration Lab IntroductionODPi Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.Egeria provides servers to manage the exchange of metadata between different technologies. These servers are configured using REST API calls to an Open Metadata and Governance (OMAG) Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.As each configuration call is made, the server platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.The configuration document is deployed to the server platform that is hosting the server. When a request is made to this server platform to start the server, it reads the configuration document and initializes the server with the appropriate services.In this hands-on lab you will learn about the contents of configuration documents. The scenario[Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).![Gary Geeke](https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png)Gary's userId is `garygeeke`.
###Code
adminUserId = "garygeeke"
###Output
_____no_output_____
###Markdown
In the [Egeria Server Configuration](../egeria-server-config.ipynb) lab, Gary configured servers for the Open Metadata and Governance (OMAG) Server Platforms shown in Figure 1:![Figure 1](../images/coco-pharmaceuticals-systems-omag-server-platforms.png)> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe following command checks that the platforms and servers are running.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----If the platform is not running, you will see a lot of red text. There are a number of choices on how to start it. Follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/).Once the platform is running you are ready to proceed.In this hands-on lab Gary is exploring the configuration document for the `cocoMDS1` server to understand how it is configured. The cocoMDS1 server runs on the Data Lake OMAG Server Platform.
###Code
mdrServerName = "cocoMDS1"
platformURLroot = dataLakePlatformURL
###Output
_____no_output_____
###Markdown
----What follows are descriptions and coded requests to extract different parts of the configuration. Retrieve configuration for cocoMDS1 - Data Lake Operations metadata serverThe command below retrieves the configuration document for `cocoMDS1`. It's a big document so we will not display its full contents at this time.
###Code
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId
print (" ")
print ("Retrieving stored configuration document for " + mdrServerName + " ...")
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/configuration'
print ("GET " + url)
response = requests.get(url)
if response.status_code == 200:
print("Server configuration for " + mdrServerName + " has been retrieved")
else:
print("Server configuration for " + mdrServerName + " is unavailable")
serverConfig=response.json().get('omagserverConfig')
###Output
_____no_output_____
###Markdown
----The configuration includes an audit trail that gives a high level overview of how the server has been configured. This is always a useful starting point to understand the content of the configuration document for the server.
###Code
auditTrail=serverConfig.get('auditTrail')
print (" ")
if auditTrail == None:
print ("Empty configuration - no audit trail - configure the server before continuing")
else:
print ("Audit Trail: ")
for x in range(len(auditTrail)):
print (auditTrail[x])
###Output
_____no_output_____
###Markdown
----The rest of the lab notebook extracts the different sections from the configuration document and explains what they mean and how they are used in the server. ---- Server names and identifiersA server has a unique name that is used on all REST calls that concern it. In addition, it is assigned a unique identifier (GUID) and an optional server type. It is also possible to set up the name of the organization that owns the server. These values are used in events to help locate the origin of metadata.
###Code
print (" ")
serverName=serverConfig.get('localServerName')
if serverName != None:
print ("Server name: " + serverName)
serverGUID=serverConfig.get('localServerId')
if serverGUID != None:
print ("Server GUID: " + serverGUID)
serverType=serverConfig.get('localServerType')
if serverType != None:
print ("Server Type: " + serverType)
organization=serverConfig.get('organizationName')
if organization != None:
print ("Organization: " + organization)
###Output
_____no_output_____
###Markdown
----In addition, if the server has a local repository then the collection of metadata stored in it has a unique identifier (GUID) and a name. These values are used to identify the origin of metadata instances since they are included in the audit header of any open metadata instance.
###Code
print (" ")
repositoryServicesConfig = serverConfig.get('repositoryServicesConfig')
if repositoryServicesConfig != None:
repositoryConfig = repositoryServicesConfig.get('localRepositoryConfig')
if repositoryConfig != None:
localMetadataCollectionId = repositoryConfig.get('metadataCollectionId')
if localMetadataCollectionId != None:
print ("Local metadata collection id: " + localMetadataCollectionId)
localMetadataCollectionName = repositoryConfig.get('metadataCollectionName')
if localMetadataCollectionName != None:
print ("Local metadata collection name: " + localMetadataCollectionName)
###Output
_____no_output_____
###Markdown
----Finally, a server with a repository that joins one or more cohorts needs to send out details of how a remote server should call this server during a federated query. This information is called the **local repository's remote connection**.By default, the network address that is defined in this connection begins with the value set in the **server URL root** property at the time the repository was configured. The server name is then added to the URL.The code below extracts the server URL root and the **full URL endpoint** sent to other servers in the same cohort(s) in the local repository's remote connection.
###Code
print (" ")
serverURLRoot=serverConfig.get('localServerURL')
if serverURLRoot != None:
print ("Server URL root: " + serverURLRoot)
if repositoryConfig != None:
localRepositoryRemoteConnection = repositoryConfig.get('localRepositoryRemoteConnection')
if localRepositoryRemoteConnection != None:
endpoint = localRepositoryRemoteConnection.get('endpoint')
if endpoint != None:
fullURLEndpoint = endpoint.get('address')
if fullURLEndpoint != None:
print ("Full URL endpoint: " + fullURLEndpoint)
print (" ")
###Output
_____no_output_____
###Markdown
You will notice that the platform's specific network address is used in both values.Using a specific network address is fine if the server is always going to run on this platform at this network address. If the server is likely to be moved to a different platform, or the platform to a different location, it is easier to set up the full URL endpoint to include a logical DNS name. This can be done by setting server URL root to this name before the local repository is configured, or updating the full URL endpoint in the local repository's remote connection. When the repository next registers with the cohort, it will send out its new full URL endpoint as part of the registration request.The complete local repository's remote connection is shown below. Notice the **connectorProviderClassName** towards the bottom of the definition. This is the factory class that creates the connector in the remote server.
###Code
print (" ")
prettyResponse = json.dumps(localRepositoryRemoteConnection, indent=4)
print ("localRepositoryRemoteConnection: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The repository services running in a metadata repository use a number of connectors to access the resources they need.The cocoMDS1 metadata server needs a local repository to store metadata about the data and processing occurring in the data lake.The connection shown above is the **local repository's remote connection**.ODPi Egeria supports two types of repositories. One is an in-memory repository that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the graph repository.The code below shows which type of local repository is in use. It also shows the destinations where audit log records are to be sent. A server can have a list of destinations. In this example, the server is just using a simple console log.
###Code
print (" ")
if repositoryServicesConfig != None:
auditLogConnections = repositoryServicesConfig.get('auditLogConnections')
enterpriseAccessConfig = repositoryServicesConfig.get('enterpriseAccessConfig')
cohortConfigList = repositoryServicesConfig.get('cohortConfigList')
if auditLogConnections != None:
print ("Audit Log Destinations: ")
for logDestCount in range(len(auditLogConnections)):
auditLogConnection = auditLogConnections[logDestCount]
if auditLogConnection != None:
connectorType = auditLogConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (str(logDestCount+1) + ". description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
if repositoryConfig != None:
localRepositoryLocalConnection = repositoryConfig.get('localRepositoryLocalConnection')
print (" ")
if localRepositoryLocalConnection != None:
print ("Local Repository's Local Connection: ")
connectorType = localRepositoryLocalConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Configuring securityThere are two levels of security to set up for an ODPi Egeria server: authentication and authorization. Authentication of servers and peopleODPi Egeria recommends that each server has its own identity and that it is embedded with each request as part of the transport layer security (TLS). The members of the cohort (and the event topic) then grant access to each other and no-one else.The identity of the calling user also flows with each request, but this time as a unique string value (typically userId) in the URL of the request. You can see examples of this in the configuration requests being issued during this hands-on lab as Gary's userId `garygeeke` appears on each request.The server configuration supports a userId and password for TLS. The userId is also used when the server is processing requests that originate from an event and so there is no calling user.
###Code
print (" ")
localServerUserId=serverConfig.get('localServerUserId')
if localServerUserId != None:
print ("local Server UserId: " + localServerUserId)
localServerPassword=serverConfig.get('localServerPassword')
if localServerPassword != None:
print ("local Server Password: " + localServerPassword)
###Output
_____no_output_____
###Markdown
---- Authorization of metadata requestsODPi Egeria servers also support a metadata security connector that plugs into the server and is called to provide authorization decisions as part of every request.This connector is configured in the configuration document by passing the **Connection** object that provides the properties needed to create the connector on the following call ...
###Code
print (" ")
serverSecurityConnection=serverConfig.get('serverSecurityConnection')
if serverSecurityConnection != None:
print ("Server's Security Connection:")
prettyResponse = json.dumps(serverSecurityConnection, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Setting up the event busThe server needs to define the event bus it will use to exchange events about metadata. This event bus configuration is used to connect to the cohorts and to provide the in / out topics for each of the Open Metadata Access Services (OMASs) - more later.The event bus configuration for cocoMDS1 provides the network address that the event bus (Apache Kafka) is using.
###Code
print (" ")
eventBusConfig=serverConfig.get('eventBusConfig')
if eventBusConfig != None:
print ("Event Bus Configuration:")
prettyResponse = json.dumps(eventBusConfig, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
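###Markdown
----As a follow-on, the sketch below digs into the event bus configuration shown above to pull out the Kafka bootstrap address, if it is present. The nested field names (`configurationProperties`, `producer`, `bootstrap.servers`) are assumptions about how this lab environment configures Kafka; the code simply prints nothing if they are absent.
###Code
# Attempt to extract the Kafka bootstrap address from the event bus configuration.
# The nested field names used here are assumptions about this lab's set-up;
# nothing is printed if they are not present.
if eventBusConfig:
    configurationProperties = eventBusConfig.get('configurationProperties')
    if configurationProperties:
        producerProperties = configurationProperties.get('producer')
        if producerProperties:
            bootstrapServers = producerProperties.get('bootstrap.servers')
            if bootstrapServers:
                print ("Kafka bootstrap servers: " + bootstrapServers)
###Output
_____no_output_____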
###Markdown
---- Extracting the descriptions of the open metadata repository cohorts for the serverAn open metadata repository cohort defines the servers that will share metadata. A server can join multiple cohorts. For Coco Pharmaceuticals, cocoMDS1 is a member of the core `cocoCohort`.![Figure 2](../images/coco-pharmaceuticals-systems-metadata-servers.png)> **Figure 2:** Membership of Coco Pharmaceuticals' cohortsYou can see this in the configuration below.
###Code
print (" ")
if cohortConfigList != None:
print ("Cohort(s) that this server is a member of: ")
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
cohortName = cohortConfig.get('cohortName')
print (str(cohortCount+1) + ". name: " + cohortName)
cohortRegistryConnection = cohortConfig.get('cohortRegistryConnection')
if cohortRegistryConnection != None:
print (" Cohort Registry Connection: ")
connectorType = cohortRegistryConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
print (" Cohort Topic Connection: ")
connectorType = topicConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
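###Markdown
----As a small worked example of using this structure, the sketch below checks whether the server is a member of a particular cohort by name. It relies only on the `cohortName` field already used above; the helper function name is ours.
###Code
# Check membership of a named cohort using the configuration retrieved above.
def isMemberOfCohort(cohortConfigList, requestedCohortName):
    if cohortConfigList:
        for cohortConfig in cohortConfigList:
            if cohortConfig and cohortConfig.get('cohortName') == requestedCohortName:
                return True
    return False

print (mdrServerName + " member of cocoCohort: " + str(isMemberOfCohort(cohortConfigList, "cocoCohort")))
###Output
_____no_output_____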
###Markdown
---- Reviewing the configured access servicesOpen Metadata Access Services (OMASs) provide the specialized APIs and events for specific tools and personas. ODPi Egeria provides an initial set of access services, and additional services can be plugged into the server platform.To query the choice of access services available in the platform, use the following command:
###Code
print (" ")
print ("Retrieving the registered access services ...")
url = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/registered-services/access-services"
print ("GET " + url)
response = requests.get(url)
prettyResponse = json.dumps(response.json(), indent=4)
print ("Response: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The `cocoMDS1` server is for the data lake operations. It needs the access services to support the onboarding and decommissioning of assets along with the access services that support the different engines that maintain the data lake.
###Code
print (" ")
accessServiceConfig=serverConfig.get('accessServicesConfig')
if accessServiceConfig != None:
print ("Configured Access Services: ")
print (" ")
for accessServiceCount in range(len(accessServiceConfig)):
accessServiceDefinition = accessServiceConfig[accessServiceCount]
if accessServiceDefinition != None:
accessServiceName = accessServiceDefinition.get('accessServiceName')
accessServiceOptions = accessServiceDefinition.get('accessServiceOptions')
if accessServiceName != None:
print (" " + accessServiceName + " options: " + json.dumps(accessServiceOptions, indent=4))
print (" ")
###Output
_____no_output_____
###Markdown
---- Listing the topics used by a serverBoth the cohorts and the access services make extensive use of the event bus. The code below extracts the names of all of the event bus topics used by this server.
###Code
print (" ")
print ("List of Topics used by " + mdrServerName)
if cohortConfigList != None:
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
embeddedConnections = topicConnection.get('embeddedConnections')
if embeddedConnections != None:
for connCount in range(len(embeddedConnections)):
embeddedConnection = embeddedConnections[connCount]
if embeddedConnection != None:
eventBusConnection = embeddedConnection.get('embeddedConnection')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
if accessServiceConfig != None:
for accessServiceCount in range(len(accessServiceConfig)):
accessService = accessServiceConfig[accessServiceCount]
if accessService != None:
eventBusConnection = accessService.get('accessServiceInTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
eventBusConnection = accessService.get('accessServiceOutTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
print (" ")
###Output
_____no_output_____
###Markdown
---- Controlling the volume of metadata exchange in a single REST callTo ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The code below extracts the maximum page size from the server's configuration document.
###Code
print (" ")
maxPageSize=serverConfig.get('maxPageSize')
if maxPageSize != None:
    print ("Maximum records returned on a REST call: " + str(maxPageSize))
###Output
_____no_output_____
###Markdown
----Finally, here is the configuration document in full.
###Code
print (" ")
prettyResponse = json.dumps(serverConfig, indent=4)
print ("Configuration for server: " + mdrServerName)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png) ODPi Egeria Hands-On Lab Welcome to the Understanding Server Configuration Lab IntroductionODPi Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.Egeria provides servers to manage the exchange of metadata between different technologies. These servers are configured using REST API calls to an Open Metadata and Governance (OMAG) Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.As each configuration call is made, the server platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.The configuration document is deployed to the server platform that is hosting the server. When a request is made to this server platform to start the server, it reads the configuration document and initializes the server with the appropriate services.In this hands-on lab you will learn about the contents of configuration documents. The scenario[Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).![Gary Geeke](https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png)Gary's userId is `garygeeke`.
###Code
adminUserId = "garygeeke"
###Output
_____no_output_____
###Markdown
In the **Egeria Server Configuration (../egeria-server-config.ipynb)** lab, Gary configured servers for the Open Metadata and Governance (OMAG) Server Platforms shown in Figure 1:![Figure 1](../images/coco-pharmaceuticals-systems-omag-server-platforms.png)> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsBelow are the host name and port number for the core, data lake and development platforms.
###Code
import os
corePlatformURL = os.environ.get('corePlatformURL','http://localhost:8080')
dataLakePlatformURL = os.environ.get('dataLakePlatformURL','http://localhost:8081')
devPlatformURL = os.environ.get('devPlatformURL','http://localhost:8082')
###Output
_____no_output_____
###Markdown
In this hands-on lab Gary is exploring the configuration document for the `cocoMDS1` server to understand how it is configured. The cocoMDS1 server runs on the Data Lake OMAG Server Platform.
###Code
mdrServerName = "cocoMDS1"
platformURLroot = dataLakePlatformURL
###Output
_____no_output_____
###Markdown
Checking that the Data Lake OMAG Server Platform is runningThe OMAG Server Platform is a single executable (application) that can be started from the command line or a script or as part of a pre-built container environment such as `docker-compose` or `kubernetes`.If you are running this notebook as part of an Egeria hands on lab then the server platforms you need are already started. Run the following command to check that the data lake platform is running.
###Code
import pprint
import json
import requests
isServerPlatformActiveURL = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/origin/"
print (" ")
print ("GET " + isServerPlatformActiveURL)
print (" ")
response = requests.get(isServerPlatformActiveURL)
print ("Returns:")
print (response.text)
if response.status_code == 200:
print("Server Platform " + platformURLroot + " is active - ready to begin")
else:
print("Server Platform " + platformURLroot + " is down - start it before proceeding")
print (" ")
###Output
_____no_output_____
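###Markdown
----The same check can be wrapped in a small helper so that it can be reused for the core and development platforms as well. The sketch below simply repackages the origin request used above; the helper name `checkPlatform` is ours.
###Code
# Reusable wrapper around the platform origin check shown above.
def checkPlatform(platformURL, userId):
    originURL = platformURL + "/open-metadata/platform-services/users/" + userId + "/server-platform/origin/"
    try:
        response = requests.get(originURL)
        return response.status_code == 200
    except requests.exceptions.RequestException:
        return False

for candidatePlatformURL in [corePlatformURL, dataLakePlatformURL, devPlatformURL]:
    status = "active" if checkPlatform(candidatePlatformURL, adminUserId) else "down"
    print ("Server Platform " + candidatePlatformURL + " is " + status)
###Output
_____no_output_____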
###Markdown
----If the platform is not running, you will see a lot of red text. There are a number of choices on how to start it. Follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/).Once the platform is running you are ready to proceed.What follows are descriptions and coded requests to extract different parts of the configuration. Retrieve configuration for cocoMDS1 - Data Lake Operations metadata serverThe command below retrieves the configuration document for `cocoMDS1`. It's a big document so we will not display its full contents at this time.
###Code
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId
print (" ")
print ("Retrieving stored configuration document for " + mdrServerName + " ...")
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/configuration'
print ("GET " + url)
response = requests.get(url)
if response.status_code == 200:
print("Server configuration for " + mdrServerName + " has been retrieved")
else:
print("Server configuration for " + mdrServerName + " is unavailable")
serverConfig=response.json().get('omagserverConfig')
###Output
_____no_output_____
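###Markdown
----The retrieval above can also be wrapped in a helper that returns the configuration document, or `None` when it cannot be retrieved, which is convenient when looking at several servers. The helper name is ours; the endpoint is exactly the one used above.
###Code
# Reusable wrapper around the configuration retrieval shown above.
def getServerConfig(platformURL, userId, serverName):
    configURL = platformURL + "/open-metadata/admin-services/users/" + userId + "/servers/" + serverName + "/configuration"
    response = requests.get(configURL)
    if response.status_code == 200:
        return response.json().get('omagserverConfig')
    return None

retrievedConfig = getServerConfig(platformURLroot, adminUserId, mdrServerName)
if retrievedConfig != None:
    print ("Retrieved configuration for " + mdrServerName)
else:
    print ("Could not retrieve configuration for " + mdrServerName)
###Output
_____no_output_____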
###Markdown
----The configuration includes an audit trail that gives a high level overview of how the server has been configured. This is always a useful starting point to understand the content of the configuration document for the server.
###Code
auditTrail=serverConfig.get('auditTrail')
print (" ")
if auditTrail == None:
print ("Empty configuration - no audit trail - configure the server before continuing")
else:
print ("Audit Trail: ")
for x in range(len(auditTrail)):
print (auditTrail[x])
###Output
_____no_output_____
###Markdown
----The rest of the lab notebook extracts the different sections from the configuration document and explains what they mean and how they are used in the server. ---- Server names and identifiersA server has a unique name that is used on all REST calls that concern it. In addition, it is assigned a unique identifier (GUID) and an optional server type. It is also possible to set up the name of the organization that owns the server. These values are used in events to help locate the origin of metadata.
###Code
print (" ")
serverName=serverConfig.get('localServerName')
if serverName != None:
print ("Server name: " + serverName)
serverGUID=serverConfig.get('localServerId')
if serverGUID != None:
print ("Server GUID: " + serverGUID)
serverType=serverConfig.get('localServerType')
if serverType != None:
print ("Server Type: " + serverType)
organization=serverConfig.get('organizationName')
if organization != None:
print ("Organization: " + organization)
###Output
_____no_output_____
###Markdown
----In addition, if the server has a local repository then the collection of metadata stored in it has a unique identifier (GUID) and a name. These values are used to identify the origin of metadata instances since they are included in the audit header of any open metadata instance.
###Code
print (" ")
repositoryServicesConfig = serverConfig.get('repositoryServicesConfig')
if repositoryServicesConfig != None:
repositoryConfig = repositoryServicesConfig.get('localRepositoryConfig')
if repositoryConfig != None:
localMetadataCollectionId = repositoryConfig.get('metadataCollectionId')
if localMetadataCollectionId != None:
print ("Local metadata collection id: " + localMetadataCollectionId)
localMetadataCollectionName = repositoryConfig.get('metadataCollectionName')
if localMetadataCollectionName != None:
print ("Local metadata collection name: " + localMetadataCollectionName)
###Output
_____no_output_____
###Markdown
----Finally, a server with a repository that joins one or more cohorts needs to send out details of how a remote server should call this server during a federated query. This information is called the **local repository's remote connection**.By default, the network address that is defined in this connection begins with the value set in the **server URL root** property at the time the repository was configured. The server name is then added to the URL.The code below extracts the server URL root and the **full URL endpoint** sent to other servers in the same cohort(s) in the local repository's remote connection.
###Code
print (" ")
serverURLRoot=serverConfig.get('localServerURL')
if serverURLRoot != None:
print ("Server URL root: " + serverURLRoot)
if repositoryConfig != None:
localRepositoryRemoteConnection = repositoryConfig.get('localRepositoryRemoteConnection')
if localRepositoryRemoteConnection != None:
endpoint = localRepositoryRemoteConnection.get('endpoint')
if endpoint != None:
fullURLEndpoint = endpoint.get('address')
if fullURLEndpoint != None:
print ("Full URL endpoint: " + fullURLEndpoint)
print (" ")
###Output
_____no_output_____
###Markdown
You will notice that the platform's specific network address is used in both values.Using a specific network address is fine if the server is always going to run on this platform at this network address. If the server is likely to be moved to a different platform, or the platform to a different location, it is easier to set up the full URL endpoint to include a logical DNS name. This can be done by setting server URL root to this name before the local repository is configured, or updating the full URL endpoint in the local repository's remote connection. When the repository next registers with the cohort, it will send out its new full URL endpoint as part of the registration request.The complete local repository's remote connection is shown below. Notice the **connectorProviderClassName** towards the bottom of the definition. This is the factory class that creates the connector in the remote server.
###Code
print (" ")
prettyResponse = json.dumps(localRepositoryRemoteConnection, indent=4)
print ("localRepositoryRemoteConnection: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The repository services running in a metadata repository use a number of connectors to access the resources they need.The cocoMDS1 metadata server needs a local repository to store metadata about the data and processing occurring in the data lake.The connection shown above is the **local repository's remote connection**.ODPi Egeria supports two types of repositories. One is an in-memory repository that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the graph repository.The code below shows which type of local repository is in use. It also shows the destinations where audit log records are to be sent. A server can have a list of destinations. In this example, the server is just using a simple console log.
###Code
print (" ")
if repositoryServicesConfig != None:
auditLogConnections = repositoryServicesConfig.get('auditLogConnections')
enterpriseAccessConfig = repositoryServicesConfig.get('enterpriseAccessConfig')
cohortConfigList = repositoryServicesConfig.get('cohortConfigList')
if auditLogConnections != None:
print ("Audit Log Destinations: ")
for logDestCount in range(len(auditLogConnections)):
auditLogConnection = auditLogConnections[logDestCount]
if auditLogConnection != None:
connectorType = auditLogConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (str(logDestCount+1) + ". description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
if repositoryConfig != None:
localRepositoryLocalConnection = repositoryConfig.get('localRepositoryLocalConnection')
print (" ")
if localRepositoryLocalConnection != None:
print ("Local Repository's Local Connection: ")
connectorType = localRepositoryLocalConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Configuring securityThere are two levels of security to set up for an ODPi Egeria server: authentication and authorization. Authentication of servers and peopleODPi Egeria recommends that each server has its own identity and that it is embedded with each request as part of the transport layer security (TLS). The members of the cohort (and the event topic) then grant access to each other and no-one else.The identity of the calling user also flows with each request, but this time as a unique string value (typically userId) in the URL of the request. You can see examples of this in the configuration requests being issued during this hands-on lab as Gary's userId `garygeeke` appears on each request.The server configuration supports a userId and password for TLS. The userId is also used when the server is processing requests that originate from an event and so there is no calling user.
###Code
print (" ")
localServerUserId=serverConfig.get('localServerUserId')
if localServerUserId != None:
print ("local Server UserId: " + localServerUserId)
localServerPassword=serverConfig.get('localServerPassword')
if localServerPassword != None:
print ("local Server Password: " + localServerPassword)
###Output
_____no_output_____
###Markdown
---- Authorization of metadata requestsODPi Egeria servers also support a metadata security connector that plugs into the server and is called to provide authorization decisions as part of every request.This connector is configured in the configuration document by passing the **Connection** object that provides the properties needed to create the connector on the following call ...
###Code
print (" ")
serverSecurityConnection=serverConfig.get('serverSecurityConnection')
if serverSecurityConnection != None:
print ("Server's Security Connection:")
prettyResponse = json.dumps(serverSecurityConnection, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Setting up the event busThe server needs to define the event bus it will use to exchange events about metadata. This event bus configuration is used to connect to the cohorts and to provide the in / out topics for each of the Open Metadata Access Services (OMASs) - more later.The event bus configuration for cocoMDS1 provides the network address that the event bus (Apache Kafka) is using.
###Code
print (" ")
eventBusConfig=serverConfig.get('eventBusConfig')
if eventBusConfig != None:
print ("Event Bus Configuration:")
prettyResponse = json.dumps(eventBusConfig, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Extracting the descriptions of the open metadata repository cohorts for the serverAn open metadata repository cohort defines the servers that will share metadata. A server can join multiple cohorts. For Coco Pharmaceuticals, cocoMDS1 is a member of the core `cocoCohort`.![Figure 2](../images/coco-pharmaceuticals-systems-metadata-servers.png)> **Figure 2:** Membership of Coco Pharmaceuticals' cohortsYou can see this in the configuration below.
###Code
print (" ")
if cohortConfigList != None:
print ("Cohort(s) that this server is a member of: ")
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
cohortName = cohortConfig.get('cohortName')
print (str(cohortCount+1) + ". name: " + cohortName)
cohortRegistryConnection = cohortConfig.get('cohortRegistryConnection')
if cohortRegistryConnection != None:
print (" Cohort Registry Connection: ")
connectorType = cohortRegistryConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
print (" Cohort Topic Connection: ")
connectorType = topicConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Reviewing the configured access servicesOpen Metadata Access Services (OMASs) provide the specialized APIs and events for specific tools and personas. ODPi Egeria provides an initial set of access services, and additional services can be plugged into the server platform.To query the choice of access services available in the platform, use the following command:
###Code
print (" ")
print ("Retrieving the registered access services ...")
url = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/registered-services/access-services"
print ("GET " + url)
response = requests.get(url)
prettyResponse = json.dumps(response.json(), indent=4)
print ("Response: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The `cocoMDS1` server is for the data lake operations. It needs the access services to support the onboarding and decommissioning of assets along with the access services that support the different engines that maintain the data lake.
###Code
print (" ")
accessServiceConfig=serverConfig.get('accessServicesConfig')
if accessServiceConfig != None:
print ("Configured Access Services: ")
print (" ")
for accessServiceCount in range(len(accessServiceConfig)):
accessServiceDefinition = accessServiceConfig[accessServiceCount]
if accessServiceDefinition != None:
accessServiceName = accessServiceDefinition.get('accessServiceName')
accessServiceOptions = accessServiceDefinition.get('accessServiceOptions')
if accessServiceName != None:
print (" " + accessServiceName + " options: " + json.dumps(accessServiceOptions, indent=4))
print (" ")
###Output
_____no_output_____
###Markdown
---- Listing the topics used by a serverBoth the cohorts and the access services make extensive use of the event bus. The code below extracts the names of all of the event bus topics used by this server.
###Code
print (" ")
print ("List of Topics used by " + mdrServerName)
if cohortConfigList != None:
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
embeddedConnections = topicConnection.get('embeddedConnections')
if embeddedConnections != None:
for connCount in range(len(embeddedConnections)):
embeddedConnection = embeddedConnections[connCount]
if embeddedConnection != None:
eventBusConnection = embeddedConnection.get('embeddedConnection')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
if accessServiceConfig != None:
for accessServiceCount in range(len(accessServiceConfig)):
accessService = accessServiceConfig[accessServiceCount]
if accessService != None:
eventBusConnection = accessService.get('accessServiceInTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
eventBusConnection = accessService.get('accessServiceOutTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
print (" ")
###Output
_____no_output_____
###Markdown
---- Controlling the volume of metadata exchange in a single REST callTo ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The code below extracts the maximum page size from the server's configuration document.
###Code
print (" ")
maxPageSize=serverConfig.get('maxPageSize')
if maxPageSize != None:
    print ("Maximum records returned on a REST call: " + str(maxPageSize))
###Output
_____no_output_____
###Markdown
----Finally, here is the configuration document in full.
###Code
print (" ")
prettyResponse = json.dumps(serverConfig, indent=4)
print ("Configuration for server: " + mdrServerName)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
![Egeria Logo](https://raw.githubusercontent.com/odpi/egeria/master/assets/img/ODPi_Egeria_Logo_color.png) ODPi Egeria Hands-On Lab Welcome to the Understanding Server Configuration Lab IntroductionODPi Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information about data and technology. This information is called metadata.Egeria provides servers to manage the exchange of metadata between different technologies. These servers are configured using REST API calls to an Open Metadata and Governance (OMAG) Server Platform. Each call either defines a default value or configures a service that must run within the server when it is started.As each configuration call is made, the server platform builds up a [configuration document](https://egeria.odpi.org/open-metadata-implementation/admin-services/docs/concepts/configuration-document.html) with the values passed. When the configuration is finished, the configuration document will have all of the information needed to start the server.The configuration document is deployed to the server platform that is hosting the server. When a request is made to this server platform to start the server, it reads the configuration document and initializes the server with the appropriate services.In this hands-on lab you will learn about the contents of configuration documents. The scenario[Gary Geeke](https://opengovernance.odpi.org/coco-pharmaceuticals/personas/gary-geeke.html) is the IT Infrastructure leader at [Coco Pharmaceuticals](https://opengovernance.odpi.org/coco-pharmaceuticals/).![Gary Geeke](https://raw.githubusercontent.com/odpi/data-governance/master/docs/coco-pharmaceuticals/personas/gary-geeke.png)Gary's userId is `garygeeke`.
###Code
adminUserId = "garygeeke"
###Output
_____no_output_____
###Markdown
In the **Egeria Server Configuration (../egeria-server-config.ipynb)** lab, Gary configured servers for the Open Metadata and Governance (OMAG) Server Platforms shown in Figure 1:![Figure 1](../images/coco-pharmaceuticals-systems-omag-server-platforms.png)> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsBelow are the host name and port number for the core, data lake and development platforms.
###Code
import os
corePlatformURL = os.environ.get('corePlatformURL','http://localhost:8080')
dataLakePlatformURL = os.environ.get('dataLakePlatformURL','http://localhost:8081')
devPlatformURL = os.environ.get('devPlatformURL','http://localhost:8082')
###Output
_____no_output_____
###Markdown
In this hands-on lab Gary is exploring the configuration document for the `cocoMDS1` server to understand how it is configured. The cocoMDS1 server runs on the Data Lake OMAG Server Platform.
###Code
mdrServerName = "cocoMDS1"
platformURLroot = dataLakePlatformURL
###Output
_____no_output_____
###Markdown
Checking that the Data Lake OMAG Server Platform is runningThe OMAG Server Platform is a single executable (application) that can be started from the command line or a script or as part of a pre-built container environment such as `docker-compose` or `kubernetes`.If you are running this notebook as part of an Egeria hands on lab then the server platforms you need are already started. Run the following command to check that the data lake platform is running.
###Code
import pprint
import json
import requests
isServerPlatformActiveURL = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/origin/"
print (" ")
print ("GET " + isServerPlatformActiveURL)
print (" ")
response = requests.get(isServerPlatformActiveURL)
print ("Returns:")
print (response.text)
if response.status_code == 200:
print("Server Platform " + platformURLroot + " is active - ready to begin")
else:
print("Server Platform " + platformURLroot + " is down - start it before proceeding")
print (" ")
###Output
_____no_output_____
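###Markdown
----The same origin check can be applied to the other platforms defined earlier. The short sketch below is an illustration only; it reuses the same origin URL pattern as the request above and loops over the core, data lake and development platform URLs.
###Code
# Illustrative sketch: check the origin endpoint of each platform in turn.
for platformName, platformURL in [("Core", corePlatformURL), ("Data Lake", dataLakePlatformURL), ("Development", devPlatformURL)]:
    originURL = platformURL + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/origin/"
    try:
        platformResponse = requests.get(originURL)
        print (platformName + " platform at " + platformURL + " returned status " + str(platformResponse.status_code))
    except requests.exceptions.ConnectionError:
        print (platformName + " platform at " + platformURL + " is not reachable")
###Output
_____no_output_____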
###Markdown
----If the platform is not running, you will see a lot of red text. There are a number of choices on how to start it. Follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/).Once the platform is running you are ready to proceed.What follows are descriptions and coded requests to extract different parts of the configuration. Retrieve configuration for cocoMDS1 - Data Lake Operations metadata serverThe command below retrieves the configuration document for `cocoMDS1`. It's a big document so we will not display its full contents at this time.
###Code
operationalServicesURLcore = "/open-metadata/admin-services/users/" + adminUserId
print (" ")
print ("Retrieving stored configuration document for " + mdrServerName + " ...")
url = platformURLroot + operationalServicesURLcore + '/servers/' + mdrServerName + '/configuration'
print ("GET " + url)
response = requests.get(url)
if response.status_code == 200:
print("Server configuration for " + mdrServerName + " has been retrieved")
else:
print("Server configuration for " + mdrServerName + " is unavailable")
serverConfig=response.json().get('omagserverConfig')
###Output
_____no_output_____
###Markdown
----The configuration includes an audit trail that gives a high level overview of how the server has been configured. This is always a useful starting point to understand the content of the configuration document for the server.
###Code
auditTrail=serverConfig.get('auditTrail')
print (" ")
if auditTrail == None:
print ("Empty configuration - no audit trail - configure the server before continuing")
else:
print ("Audit Trail: ")
for x in range(len(auditTrail)):
print (auditTrail[x])
###Output
_____no_output_____
###Markdown
----The rest of the lab notebook extracts the different sections from the configuration document and explains what they mean and how they are used in the server. ---- Server names and identifiersA server has a unique name that is used on all REST calls that concern it. In addition, it is assigned a unique identifier (GUID) and an optional server type. It is also possible to set up the name of the organization that owns the server. These values are used in events that help locate the origin of metadata.
###Code
print (" ")
serverName=serverConfig.get('localServerName')
if serverName != None:
print ("Server name: " + serverName)
serverGUID=serverConfig.get('localServerId')
if serverGUID != None:
print ("Server GUID: " + serverGUID)
serverType=serverConfig.get('localServerType')
if serverType != None:
print ("Server Type: " + serverType)
organization=serverConfig.get('organizationName')
if organization != None:
print ("Organization: " + organization)
###Output
_____no_output_____
###Markdown
----In addition, if the server has a local repository then the collection of metadata stored in it has a unique identifier (GUID) and a name. These values are used to identify the origin of metadata instances since they are included in the audit header of any open metadata instance.
###Code
print (" ")
repositoryServicesConfig = serverConfig.get('repositoryServicesConfig')
if repositoryServicesConfig != None:
repositoryConfig = repositoryServicesConfig.get('localRepositoryConfig')
if repositoryConfig != None:
localMetadataCollectionId = repositoryConfig.get('metadataCollectionId')
if localMetadataCollectionId != None:
print ("Local metadata collection id: " + localMetadataCollectionId)
localMetadataCollectionName = repositoryConfig.get('metadataCollectionName')
if localMetadataCollectionName != None:
print ("Local metadata collection name: " + localMetadataCollectionName)
###Output
_____no_output_____
###Markdown
----Finally, a server with a repository that joins one or more cohorts needs to send out details of how a remote server should call this server during a federated query. This information is called the **local repository's remote connection**.By default, the network address that is defined in this connection begins with the value set in the **server URL root** property at the time the repository was configured. The server name is then added to the URL.The code below extracts the server URL root and the **full URL endpoint** sent to other servers in the same cohort(s) in the local repository's remote connection.
###Code
print (" ")
serverURLRoot=serverConfig.get('localServerURL')
if serverURLRoot != None:
print ("Server URL root: " + serverURLRoot)
if repositoryConfig != None:
localRepositoryRemoteConnection = repositoryConfig.get('localRepositoryRemoteConnection')
if localRepositoryRemoteConnection != None:
endpoint = localRepositoryRemoteConnection.get('endpoint')
if endpoint != None:
fullURLEndpoint = endpoint.get('address')
if fullURLEndpoint != None:
print ("Full URL endpoint: " + fullURLEndpoint)
print (" ")
###Output
_____no_output_____
###Markdown
You will notice that the platform's specific network address is used in both values.Using a specific network address is fine if the server is always going to run on this platform at this network address. If the server is likely to be moved to a different platform, or the platform to a different location, it is easier to set up the full URL endpoint to include a logical DNS name. This can be done by setting server URL root to this name before the local repository is configured, or updating the full URL endpoint in the local repository's remote connection. When the repository next registers with the cohort, it will send out its new full URL endpoint as part of the registration request.The complete local repository's remote connection is shown below. Notice the **connectorProviderClassName** towards the bottom of the definition. This is the factory class that creates the connector in the remote server.
###Code
print (" ")
prettyResponse = json.dumps(localRepositoryRemoteConnection, indent=4)
print ("localRepositoryRemoteConnection: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
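###Markdown
----The pattern described above can be checked directly. The short sketch below is an illustration only; it assumes the endpoint follows the `{server URL root}/servers/{server name}` pattern described earlier, rebuilds that value from the properties already retrieved, and compares it with the endpoint stored in the local repository's remote connection.
###Code
# Illustrative sketch: rebuild the expected full URL endpoint from the
# server URL root plus the server name and compare it with the stored value.
if serverURLRoot != None and fullURLEndpoint != None:
    expectedEndpoint = serverURLRoot + "/servers/" + mdrServerName
    print ("Expected endpoint: " + expectedEndpoint)
    print ("Stored endpoint:   " + fullURLEndpoint)
    if expectedEndpoint == fullURLEndpoint:
        print ("The stored endpoint still follows the server URL root")
    else:
        print ("The stored endpoint has been overridden, for example with a logical DNS name")
###Output
_____no_output_____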
###Markdown
----The repository services running in a metadata repository use a number of connectors to access the resources they need.The cocoMDS1 metadata server needs a local repository to store metadata about the data and processing occurring in the data lake.The connector for this repository is defined in the **local repository's local connection**.ODPi Egeria supports 2 types of repositories. One is an in-memory repository that stores metadata in hash maps. It is useful for demos and testing because a restart of the server results in an empty metadata repository. However, if you need metadata to persist from one run of the server to the next, you should use the graph repository.The code below shows which type of local repository is in use. It also shows the destinations where audit log records are to be sent. A server can have a list of destinations. In this example, the server is just using a simple console log.
###Code
print (" ")
if repositoryServicesConfig != None:
auditLogConnections = repositoryServicesConfig.get('auditLogConnections')
enterpriseAccessConfig = repositoryServicesConfig.get('enterpriseAccessConfig')
cohortConfigList = repositoryServicesConfig.get('cohortConfigList')
if auditLogConnections != None:
print ("Audit Log Destinations: ")
for logDestCount in range(len(auditLogConnections)):
auditLogConnection = auditLogConnections[logDestCount]
if auditLogConnection != None:
connectorType = auditLogConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (str(logDestCount+1) + ". description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
if repositoryConfig != None:
localRepositoryLocalConnection = repositoryConfig.get('localRepositoryLocalConnection')
print (" ")
if localRepositoryLocalConnection != None:
print ("Local Repository's Local Connection: ")
connectorType = localRepositoryLocalConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Configuring securityThere are two levels of security to set up for an ODPi Egeria server: authentication and authorization. Authentication of servers and peopleODPi Egeria recommends that each server has its own identity and that it is embedded with each request as part of transport-level security (TLS). The members of the cohort (and the event topic) then grant access to each other and no one else.The identity of the calling user also flows with each request, but this time as a unique string value (typically userId) in the URL of the request. You can see examples of this in the configuration requests being issued during this hands-on lab as Gary's userId `garygeeke` appears on each request.The server configuration supports a userId and password for TLS. The userId is also used when the server is processing requests that originate from an event and so there is no calling user.
###Code
print (" ")
localServerUserId=serverConfig.get('localServerUserId')
if localServerUserId != None:
print ("local Server UserId: " + localServerUserId)
localServerPassword=serverConfig.get('localServerPassword')
if localServerPassword != None:
print ("local Server Password: " + localServerPassword)
###Output
_____no_output_____
###Markdown
---- Authorization of metadata requestsODPi Egeria servers also support a metadata security connector that plugs into the server and is called to provide authorization decisions as part of every request.This connector is configured in the configuration document by passing the **Connection** object that provides the properties needed to create the connector. The connection configured for this server is shown below.
###Code
print (" ")
serverSecurityConnection=serverConfig.get('serverSecurityConnection')
if serverSecurityConnection != None:
print ("Server's Security Connection:")
prettyResponse = json.dumps(serverSecurityConnection, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Setting up the event busThe server needs to define the event bus it will use to exchange events about metadata. This event bus configuration is used to connect to the cohorts and to provide the in / out topics for each of the Open Metadata Access Services (OMASs) - more later.The event bus configuration for cocoMDS1 provides the network address that the event bus (Apache Kafka) is using.
###Code
print (" ")
eventBusConfig=serverConfig.get('eventBusConfig')
if eventBusConfig != None:
print ("Event Bus Configuration:")
prettyResponse = json.dumps(eventBusConfig, indent=4)
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
---- Extracting the descriptions of the open metadata repository cohorts for the serverAn open metadata repository cohort defines the servers that will share metadata. A server can join multiple cohorts. For Coco Pharmaceuticals, cocoMDS1 is a member of the core `cocoCohort`.![Figure 2](../images/coco-pharmaceuticals-systems-metadata-servers.png)> **Figure 2:** Membership of Coco Pharmaceuticals' cohortsYou can see this in the configuration below.
###Code
print (" ")
if cohortConfigList != None:
print ("Cohort(s) that this server is a member of: ")
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
cohortName = cohortConfig.get('cohortName')
print (str(cohortCount+1) + ". name: " + cohortName)
cohortRegistryConnection = cohortConfig.get('cohortRegistryConnection')
if cohortRegistryConnection != None:
print (" Cohort Registry Connection: ")
connectorType = cohortRegistryConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
print (" Cohort Topic Connection: ")
connectorType = topicConnection.get('connectorType')
if connectorType != None:
description = connectorType.get('description')
if description != None:
print (" description: " + description)
connectorProviderClassName = connectorType.get('connectorProviderClassName')
if connectorProviderClassName != None:
print (" className: " + connectorProviderClassName)
###Output
_____no_output_____
###Markdown
---- Reviewing the configured access servicesOpen Metadata Access Services (OMASs) provide the specialized APIs and events for specific tools and personas. ODPi Egeria provides an initial set of access services, and additional services can be plugged into the server platform.To query the choice of access services available in the platform, use the following command:
###Code
print (" ")
print ("Retrieving the registered access services ...")
url = platformURLroot + "/open-metadata/platform-services/users/" + adminUserId + "/server-platform/registered-services/access-services"
print ("GET " + url)
response = requests.get(url)
prettyResponse = json.dumps(response.json(), indent=4)
print ("Response: ")
print (prettyResponse)
print (" ")
###Output
_____no_output_____
###Markdown
----The `cocoMDS1` server is for the data lake operations. It needs the access services to support the onboarding and decommissioning of assets along with the access services that support the different engines that maintain the data lake.
###Code
print (" ")
accessServiceConfig=serverConfig.get('accessServicesConfig')
if accessServiceConfig != None:
print ("Configured Access Services: ")
print (" ")
for accessServiceCount in range(len(accessServiceConfig)):
accessServiceDefinition = accessServiceConfig[accessServiceCount]
if accessServiceDefinition != None:
accessServiceName = accessServiceDefinition.get('accessServiceName')
accessServiceOptions = accessServiceDefinition.get('accessServiceOptions')
if accessServiceName != None:
print (" " + accessServiceName + " options: " + json.dumps(accessServiceOptions, indent=4))
print (" ")
###Output
_____no_output_____
###Markdown
---- Listing the topics used by a serverBoth the cohorts and the access services make extensive use of the event bus. The code below extracts the names of all of the event bus topics used by this server.
###Code
print (" ")
print ("List of Topics used by " + mdrServerName)
if cohortConfigList != None:
for cohortCount in range(len(cohortConfigList)):
cohortConfig = cohortConfigList[cohortCount]
if cohortConfig != None:
topicConnection = cohortConfig.get('cohortOMRSTopicConnection')
if topicConnection != None:
embeddedConnections = topicConnection.get('embeddedConnections')
if embeddedConnections != None:
for connCount in range(len(embeddedConnections)):
embeddedConnection = embeddedConnections[connCount]
if embeddedConnection != None:
eventBusConnection = embeddedConnection.get('embeddedConnection')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
if accessServiceConfig != None:
for accessServiceCount in range(len(accessServiceConfig)):
accessService = accessServiceConfig[accessServiceCount]
if accessService != None:
eventBusConnection = accessService.get('accessServiceInTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
eventBusConnection = accessService.get('accessServiceOutTopic')
if eventBusConnection != None:
endpoint = eventBusConnection.get('endpoint')
if endpoint != None:
topicName = endpoint.get('address')
if topicName != None:
print (" " + topicName)
print (" ")
###Output
_____no_output_____
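###Markdown
----The nested lookups above repeat the same endpoint-address pattern for each connection. As an illustrative sketch (the helper name `getTopicName` is an invention for this example, not part of the lab), the pattern can be captured in a small function that returns the topic name of an event bus connection, or `None` if any part of the structure is missing.
###Code
# Illustrative helper: extract the topic name (endpoint address) from an
# event bus connection structure, tolerating missing pieces.
def getTopicName(eventBusConnection):
    if eventBusConnection != None:
        endpoint = eventBusConnection.get('endpoint')
        if endpoint != None:
            return endpoint.get('address')
    return None
# Example use with the first configured access service, if there is one.
if accessServiceConfig != None and len(accessServiceConfig) > 0:
    print (getTopicName(accessServiceConfig[0].get('accessServiceInTopic')))
###Output
_____no_output_____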
###Markdown
---- Controlling the volume of metadata exchange in a single REST callTo ensure that a caller cannot request too much metadata in a single request, it is possible to set a maximum page size for requests that return a list of items. The maximum page size puts a limit on the number of items that can be requested. The code below shows the maximum page size value stored in this server's configuration document.
###Code
print (" ")
maxPageSize=serverConfig.get('maxPageSize')
if maxPageSize != None:
print ("Maximum records return on a REST call: " + str(maxPageSize))
###Output
_____no_output_____
###Markdown
----Finally, here is the configuration document in total
###Code
print (" ")
prettyResponse = json.dumps(serverConfig, indent=4)
print ("Configuration for server: " + mdrServerName)
print (prettyResponse)
print (" ")
###Output
_____no_output_____ |
CNN Concatenated data .ipynb | ###Markdown
Necessary modules
###Code
import numpy as np
import matplotlib.pyplot as plt
import random
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Get the data
###Code
background = np.load("data/background_rf_LH_normalized.npy")
drone = np.load("data/drone_rf_LH_normalized.npy")
print(background.shape)
print(drone.shape)
num = random.randint(0, len(background)-1)
channel = 1
plt.plot(background[num][channel], label="background")
plt.plot(drone[num][channel],label="drone")
plt.legend(loc='upper right')
###Output
_____no_output_____
###Markdown
Train/test split and data formatting
###Code
Y = np.array([0 for i in enumerate(background)] + [1 for i in enumerate(drone)])
X = np.append(background,drone,axis=0)
Y = Y.reshape(-1,1)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
def join_rf(x_data):
low_high = []
for x in x_data:
low_high.append(x.flatten().reshape(-1,1).astype(np.float16))
low_high = np.array(low_high)
return low_high
x_train = join_rf(x_train)
x_test = join_rf(x_test)
# num = 11
# plt.plot(x_train[num])
# print(y_train[num])
x_train.shape
###Output
_____no_output_____
###Markdown
Model Specification
###Code
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, concatenate, Conv1D, MaxPooling1D, Dropout, Flatten
from tensorflow.keras.layers import Input
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(20000000,1)))
# model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=1000))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # sigmoid for a single-unit binary output; softmax over one unit is always 1.0
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size =1
epochs = 10
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
###Output
WARNING:tensorflow:From C:\Users\nihad\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\ops\math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Train on 56 samples, validate on 24 samples
|
Project workbooks/Crash_Feature_Selection_cycles.ipynb | ###Markdown
Selecting features using Pearson's chi-squared test This notebook is the only one I did in Python. It will select the variables most correlated with fatal crashes out of the more than 100 categorical variables in my dataset. I added helmet data to the set, too.I also ran this test using only crashes since 2004 to see if that affected the helmet data (helmetless v. helmeted crashes weren't well documented before 2004), but using only post-2004 data did not change the variables selected.
###Code
#load libraries to use in the notebook
import os, sys
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn.feature_selection import f_regression
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import RFE
from sklearn.base import clone
# Dataset location
DATASET = 'Datasets/cycle_flag.csv'
assert os.path.exists(DATASET)
# # Load and shuffle
dataset = pd.read_csv(DATASET, sep=',').sample(frac = 1).reset_index(drop=True)
dataset.drop(['Unnamed: 0', 'CRN', 'FATAL_OR_MAJ_INJ','CRASH_YEAR','COUNTY','MUNICIPALITY','COUNTY_YEAR','MOTORCYCLE_COUNT',
'FATAL_COUNT','MCYCLE_DEATH_COUNT','DEC_LAT','DEC_LONG','PSP_REPORTED','MC_DVR_HLMT_TYPE','MC_PAS_HLMT_TYPE','MC_PAS_HLMTON_IND'], axis=1, inplace=True)
#explore variable types. The chi-squared test only works on numeric variables
g = dataset.columns.to_series().groupby(dataset.dtypes).groups
g
###Output
_____no_output_____
###Markdown
Below I'm one-hot encoding the helmet variable to make it into separate binary columns. That allows me to work with it like the other binary variables in the dataset.
###Code
dataset = pd.get_dummies(dataset, columns=["MC_DVR_HLMTON_IND"])
#now that the helmet variable has been broken into new columns (get_dummies already dropped the original), remove some other unnecessary columns
dataset.drop(['MC_PASSNGR_IND', 'MC_DVR_HLMTDOT_IND', 'MC_PAS_HLMTDOT_IND','MINOR_INJURY','MODERATE_INJURY','MAJOR_INJURY'], axis=1, inplace=True)
#look over the data to check that the one hot columns look ok
dataset.describe()
# # View some metadata of the dataset and see if that makes sense
print('dataset.shape', dataset.shape)
#split the dataset into x and y with x being all the data except fatalities and y being my target variable 'FATAL'
X = np.array(dataset.loc[:, dataset.columns != 'FATAL'])
y = np.array(dataset.FATAL)
#print the size and shape of selected data
print('X', X.shape, 'y', y.shape)
print('Label distribution:', {i: np.sum(y==i) for i in np.unique(dataset.FATAL)})
#run Pearson's chi-squared test. The selected indices at the bottom are the variables the test has chosen
selector = SelectKBest(chi2, k=5)
selector.fit(X, y)
print('χ² statistic', selector.scores_)
print('Selected indices', selector.get_support(True))
#Get the variable names of the selected indices
#(index into the feature columns, i.e. dataset.columns without 'FATAL', so the names line up with the columns of X)
X_selected = selector.transform(X)
feature_names = dataset.columns[dataset.columns != 'FATAL']
[feature_names[i] for i in selector.get_support(True)]
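# A further illustrative step (an addition to the original analysis): rank every
# feature by its chi-squared score so the selected variables can be seen in the
# context of the full ranking.
feature_names = dataset.columns[dataset.columns != 'FATAL']
feature_scores = pd.DataFrame({'feature': feature_names, 'chi2_score': selector.scores_})
feature_scores.sort_values('chi2_score', ascending=False).head(10)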
###Output
_____no_output_____ |
samples/core/dataflow/dataflow.ipynb | ###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | GCPProjectID | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/a8d3b6977df26a89701cd229f01c1840a8475521/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
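###Markdown
As an alternative to submitting the run directly in the next cell, the pipeline function can also be compiled into a package that can be uploaded through the Kubeflow Pipelines UI. Below is a minimal sketch that assumes the compiler API of the KFP SDK installed above; the package file name is arbitrary.
###Code
# Sketch: compile the pipeline function into a deployable package file.
import kfp.compiler
kfp.compiler.Compiler().compile(pipeline, 'dataflow_launch_python_pipeline.zip')
###Output
_____no_output_____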
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
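###Markdown
The call above returns as soon as the run has been created, while the underlying Cloud Dataflow job may still be running. Below is a minimal sketch of an alternative submission that blocks until the run finishes before the output is inspected; it assumes the `wait_for_run_completion` helper of the KFP client and uses an arbitrary one-hour timeout.
###Code
# Sketch: submit the run and wait for it to complete before reading the output.
client = kfp.Client()
run_result = client.create_run_from_pipeline_func(pipeline, arguments={})
client.wait_for_run_completion(run_result.run_id, timeout=3600)
###Output
_____no_output_____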
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0-rc.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample In this sample, we run wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
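# `args` must be a JSON-serialized list of strings; per the runtime-arguments
# table above, these strings are passed to the Python file as its command-line
# arguments.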
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
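`create_run_from_pipeline_func` compiles the pipeline and submits a run in one call. The empty `arguments` dict keeps the defaults declared in `pipeline(...)`; any of those parameters could be overridden at submission time instead, for example (the value below is a hypothetical override, not part of the sample):
```python
kfp.Client().create_run_from_pipeline_func(
    pipeline,
    arguments={'wait_interval': 60})  # hypothetical: poll the job status every 60 s
```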
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | GCPProjectID | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.2.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample In this sample, we run wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
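# The op also exposes the component's `job_id` output (see the table above).
# If a later step needed it, the task handle could be kept and wired through,
# e.g. (sketch only; `notify_op` is a hypothetical downstream component):
#   dataflow_task = dataflow_python_op(...)
#   notify_op(job_id=dataflow_task.outputs['job_id'])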
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | GCPProjectID | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
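`load_component_from_url` pins the component definition to a specific revision of the kubeflow/pipelines repository. If the `component.yaml` has already been downloaded, the equivalent call is `load_component_from_file`; the local path below is hypothetical:
```python
import kfp.components as comp

# Equivalent local load; the path is hypothetical and must point to a copy of
# the same component.yaml.
dataflow_python_op = comp.load_component_from_file(
    'components/gcp/dataflow/launch_python/component.yaml')
```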
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample In this sample, we run wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
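# For example (hypothetical values; `output` is used below both as the Beam
# staging directory and as the prefix of the result file, so a full gs:// path
# is expected):
#   project = 'my-project-id'
#   region = 'us-central1'
#   output = 'gs://my-bucket/dataflow-sample'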
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample In this sample, we run wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | GCPProjectID | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
experiment_name = 'Dataflow - Launch Python'
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e598176c02f45371336ccaa819409e8ec83743df/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample In this sample, we run wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
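# The compiled archive is submitted directly below, but it could also be
# registered as a reusable pipeline, e.g. (the display name is hypothetical):
#   kfp.Client().upload_pipeline(pipeline_filename, pipeline_name='dataflow-launch-python')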
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(experiment_name)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
Inspect the output
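The cell below uses `gsutil`. If a reasonably recent `google-cloud-storage` client library is available in the notebook environment, the same check can be done in Python. This is only a sketch: it assumes `output` (and therefore `output_file`) is a full `gs://` path, and it lists by prefix because `WriteToText` may shard the result into files such as `wordcount.out-00000-of-00001`.
```python
from google.cloud import storage

# Sketch: read the wordcount result via the GCS client instead of gsutil.
bucket_name, prefix = output_file[len('gs://'):].split('/', 1)
client = storage.Client()
for blob in client.list_blobs(bucket_name, prefix=prefix):
    print(blob.name)
    print(blob.download_as_text()[:200])  # print the first few lines of each shard
```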
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | GCPProjectID | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/06401ecc8f1561509ef095901a70b3543c2ca30f/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample In this sample, we run wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
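###Markdown
As an alternative to submitting the pipeline straight away, the pipeline function can first be compiled into a package that can be uploaded through the Kubeflow Pipelines UI or a `kfp.Client`. A minimal sketch using the KFP v1 SDK compiler; the package file name is arbitrary.
###Code
import kfp.compiler as compiler

# Compile the pipeline function defined above into a reusable package.
compiler.Compiler().compile(pipeline, 'dataflow_launch_python_pipeline.zip')
###Output
_____no_output_____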
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | String | | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job. | | String | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
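###Markdown
The placeholders above must be replaced with real values before the rest of the notebook is run. The optional check below is a small sketch that assumes `output` is given as a full `gs://` path, since it is later used to build `output_file` and `staging_dir`.
###Code
# Optional sanity checks on the setup values (illustrative only).
assert project != 'Input your PROJECT ID', 'Set `project` to your GCP project ID.'
assert output.startswith('gs://'), 'Set `output` to a GCS path such as gs://your-bucket.'
assert not output.endswith('/'), '`output` must not end with a trailing slash.'
###Output
_____no_output_____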
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.2/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
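###Markdown
The same component can also be loaded from a local copy of its specification rather than the GitHub URL, which is handy for offline or pinned setups. A sketch, assuming `curl` is available in the notebook environment; the local file name is arbitrary.
###Code
# Hypothetical alternative: pin the component by downloading its specification
# and loading it from a local file (same version as the URL used above).
!curl -sSL -o component.yaml https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.2/components/gcp/dataflow/launch_python/component.yaml
dataflow_python_op_local = comp.load_component_from_file('component.yaml')
help(dataflow_python_op_local)
###Output
_____no_output_____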
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | String | | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job. | | String | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-alpha.2/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
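###Markdown
The component's `args` parameter is a JSON-serialized list of strings, so additional Beam or Dataflow options can be appended to the same list before serializing it. A sketch; the extra `--job_name` flag is illustrative and not required by the sample.
###Code
import json

# Extra pipeline options are just more strings in the serialized list
# (values here are illustrative).
extra_args = json.dumps([
    '--output', output_file,
    '--job_name', 'wordcount-sample-run',
])
print(extra_args)
###Output
_____no_output_____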
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
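###Markdown
`WriteToText` may shard its output into several objects (for example `wordcount.out-00000-of-00003`), so the exact name used above is not guaranteed to exist. A sketch of listing and concatenating all shards with a wildcard:
###Code
!gsutil ls $output_file*
!gsutil cat $output_file*
###Output
_____no_output_____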
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.0-alpha.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
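###Markdown
The component also exposes a `job_id` output (see the component description above). The sketch below shows one way to pass it to a hypothetical downstream step in the same pipeline; the `echo-job-id` container and its image are illustrative only, not part of the sample.
###Code
@dsl.pipeline(
    name='Dataflow launch python with downstream step',
    description='Sketch: consume the job_id output of the Dataflow component'
)
def pipeline_with_downstream(
    python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
    project_id = project,
    staging_dir = output,
    args = json.dumps(['--output', output_file]),
    wait_interval = 30
):
    dataflow_task = dataflow_python_op(
        python_file_path = python_file_path,
        project_id = project_id,
        staging_dir = staging_dir,
        args = args,
        wait_interval = wait_interval)
    # Hypothetical downstream container that just prints the Dataflow job id.
    dsl.ContainerOp(
        name='echo-job-id',
        image='alpine',
        command=['echo', dataflow_task.outputs['job_id']],
    )
###Output
_____no_output_____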
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
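###Markdown
`create_run_from_pipeline_func` also accepts an `arguments` dictionary that overrides the pipeline function's default parameters, keyed by parameter name. A sketch; the overridden value is arbitrary.
###Code
# Override individual pipeline parameters at submission time (illustrative value).
kfp.Client().create_run_from_pipeline_func(
    pipeline,
    arguments={'wait_interval': 60})
###Output
_____no_output_____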
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2-rc.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
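###Markdown
Before launching the job on Dataflow, the same script can be smoke-tested locally with Beam's default DirectRunner. This is an optional sketch: it assumes `apache-beam[gcp]` can be installed in the notebook environment and that the environment has read access to the public input bucket; the local output prefix is arbitrary.
###Code
# Copy the sample script locally and run it with the DirectRunner (default runner).
!gsutil cp gs://ml-pipeline-playground/samples/dataflow/wc/wc.py .
!python3 -m pip install 'apache-beam[gcp]' --quiet
!python3 wc.py --output ./local_wordcount_out
###Output
_____no_output_____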
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/caa2dc56f29b0dce5216bec390b1685fc0cdc4b7/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
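###Markdown
When several steps in a pipeline need the same GCP credentials, the secret can be applied to every op at once with an op transformer instead of calling `.apply(...)` on each task. A minimal sketch using the KFP v1 SDK and the same `user-gcp-sa` secret as above:
###Code
@dsl.pipeline(
    name='Dataflow launch python pipeline (pipeline-wide secret)',
    description='Sketch: apply the GCP secret to every op via an op transformer'
)
def pipeline_with_secret_transformer(
    python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
    project_id = project,
    staging_dir = output,
    args = json.dumps(['--output', output_file]),
    wait_interval = 30
):
    # Every op added to this pipeline gets the secret mounted automatically.
    dsl.get_pipeline_conf().add_op_transformer(gcp.use_gcp_secret('user-gcp-sa'))
    dataflow_python_op(
        python_file_path = python_file_path,
        project_id = project_id,
        staging_dir = staging_dir,
        args = args,
        wait_interval = wait_interval)
###Output
_____no_output_____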
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/02c991dd265054b040265b3dfa1903d5b49df859/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
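###Markdown
Note on the `args` input: as the signature above shows, the component receives `args` as a JSON-encoded list of strings (the sample pipeline below builds it with `json.dumps`). A small standalone illustration, independent of the sample and using only the standard library:
###Code
import json
# The Beam program's command line arguments are serialized into a single JSON string.
beam_args = json.dumps(['--output', 'gs://YOUR_BUCKET/wc/wordcount.out'])
print(beam_args)  # -> ["--output", "gs://YOUR_BUCKET/wc/wordcount.out"]
###Output
_____no_output_____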
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
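###Markdown
Before wiring this file into a pipeline, it can help to see what the `Split` / `PairWithOne` / `GroupAndSum` chain computes. The following plain-Python sketch (not part of the sample, and not using Beam) mimics that logic with a `Counter`:
###Code
import re
from collections import Counter
# Mimic the Beam transforms: split lines into words, pair each with 1, sum per key.
sample_lines = ['to be or not to be', 'that is the question']
word_counts = Counter(
    word
    for line in sample_lines
    for word in re.findall(r"[A-Za-z']+", line)
)
for word, count in word_counts.items():
    print('%s: %s' % (word, count))
###Output
_____no_output_____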
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
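###Markdown
The `arguments` dictionary passed to `create_run_from_pipeline_func` can also override the pipeline parameters defined above. A sketch, assuming the same cluster connection as in the previous cell; the overridden values are arbitrary examples:
###Code
kfp.Client().create_run_from_pipeline_func(
    pipeline,
    arguments={
        'wait_interval': 60,  # poll the Dataflow job less frequently
        'args': json.dumps(['--output', output_file]),
    },
)
###Output
_____no_output_____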
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use
Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments
Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed to the Beam code as the command line arguments `staging_location` and `temp_location`. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema
Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which lists the dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info-level logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output
Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements
To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a Kubeflow user service account, provided through a secret, in a Kubeflow Pipelines cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description
The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/38771da09094640cd2786a4b5130b26ea140f864/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
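###Markdown
The wordcount file above already satisfies the component's compatibility requirements: it forwards the unrecognized command line arguments (including `--project`, `--temp_location` and `--staging_location`) to `PipelineOptions` via `parse_known_args`, and it enables info-level logging before running. A stripped-down skeleton of just those compatibility pieces, offered as a rough sketch rather than a complete Beam program, could look like this:
```
import argparse
import logging
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run(argv=None):
    # Keep only the arguments this program defines; forward the rest
    # (--project, --temp_location, --staging_location, ...) to Beam.
    parser = argparse.ArgumentParser()
    parser.add_argument('--output', required=True)
    known_args, pipeline_args = parser.parse_known_args(argv)
    with beam.Pipeline(options=PipelineOptions(pipeline_args)) as p:
        pass  # build the actual transforms here

if __name__ == '__main__':
    # Info-level logging lets the component extract the Dataflow job id from the logs.
    logging.getLogger().setLevel(logging.INFO)
    run()
```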
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use
Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments
Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed to the Beam code as the command line arguments `staging_location` and `temp_location`. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema
Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which lists the dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info-level logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output
Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements
To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a Kubeflow user service account, provided through a secret, in a Kubeflow Pipelines cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description
The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
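###Markdown
The component's `job_id` output can also be consumed by a later step in the same pipeline. The following is a hypothetical sketch, not part of the original sample: the `print_job_id` helper is made up here with `func_to_container_op`, and it assumes the declared output is surfaced as `outputs['job_id']`.
###Code
from kfp.components import func_to_container_op
@func_to_container_op
def print_job_id(job_id: str):
    print('Dataflow job id:', job_id)
@dsl.pipeline(
    name='Dataflow launch python with downstream step',
    description='Variant of the pipeline above that also prints the job id'
)
def pipeline_with_job_id(
    python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
    project_id = project,
    staging_dir = output
):
    launch = dataflow_python_op(
        python_file_path = python_file_path,
        project_id = project_id,
        staging_dir = staging_dir)
    # Assumption: the component's single output is exposed under the name 'job_id'.
    print_job_id(launch.outputs['job_id'])
###Output
_____no_output_____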
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use
Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments
Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed to the Beam code as the command line arguments `staging_location` and `temp_location`. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema
Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which lists the dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info-level logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output
Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements
To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a Kubeflow user service account, provided through a secret, in a Kubeflow Pipelines cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description
The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
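###Markdown
As an alternative to submitting directly from the notebook, the pipeline defined above can be compiled into a package and uploaded through the Kubeflow Pipelines UI. A small sketch; the package file name is arbitrary:
###Code
import kfp.compiler
# Compile the pipeline function into a deployable package file.
kfp.compiler.Compiler().compile(pipeline, 'dataflow_launch_python_pipeline.zip')
###Output
_____no_output_____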
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use
Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments
Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed to the Beam code as the command line arguments `staging_location` and `temp_location`. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema
Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which lists the dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info-level logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output
Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements
To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a Kubeflow user service account, provided through a secret, in a Kubeflow Pipelines cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description
The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e4d9e2b67cf39c5f12b9c1477cae11feb1a74dc7/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
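###Markdown
`create_run_from_pipeline_func` also accepts optional `experiment_name` and `run_name` arguments, which make repeated runs of this sample easier to find in the UI. A sketch, assuming the same cluster connection as above; the names are arbitrary examples:
###Code
kfp.Client().create_run_from_pipeline_func(
    pipeline,
    arguments={},
    run_name='dataflow-launch-python-sample',
    experiment_name='dataflow-samples',
)
###Output
_____no_output_____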
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use
Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments
Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed to the Beam code as the command line arguments `staging_location` and `temp_location`. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema
Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which lists the dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info-level logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output
Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements
To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a Kubeflow user service account, provided through a secret, in a Kubeflow Pipelines cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description
The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/3f4b80127f35e40760eeb1813ce1d3f641502222/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
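###Markdown
The component also declares a `job_id` output (the id of the launched Cloud Dataflow job). The cell below is a minimal sketch that is not part of the original sample: it assumes the loaded `dataflow_python_op` exposes that output and wires it into a hypothetical downstream `echo-job-id` step, so the job id shows up in that step's logs.
###Code
# A hedged sketch: consume the component's `job_id` output in a downstream step.
# The echo step (alpine image) is purely illustrative.
@dsl.pipeline(
    name='dataflow-launch-python-then-echo',
    description='Launch the Dataflow job, then echo its job id'
)
def pipeline_with_echo(
    python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
    project_id = project,
    staging_dir = output,
    requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
    args = json.dumps([
        '--output', output_file
    ]),
    wait_interval = 30
):
    dataflow_task = dataflow_python_op(
        python_file_path = python_file_path,
        project_id = project_id,
        staging_dir = staging_dir,
        requirements_file_path = requirements_file_path,
        args = args,
        wait_interval = wait_interval)
    # Pass the Dataflow job id produced by the launch step to a trivial follow-up step.
    dsl.ContainerOp(
        name='echo-job-id',
        image='alpine',
        command=['echo', dataflow_task.outputs['job_id']])
###Output
_____no_output_____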
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
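###Markdown
As an alternative to submitting the run directly from the notebook, the same pipeline function can be compiled into a package and uploaded through the Kubeflow Pipelines UI or `kfp.Client().upload_pipeline`. This is only a sketch of that alternative workflow; the package file name is arbitrary.
###Code
import kfp.compiler

# Compile the pipeline function into a package that can be uploaded to Kubeflow Pipelines.
kfp.compiler.Compiler().compile(pipeline, 'dataflow_launch_python_pipeline.zip')
###Output
_____no_output_____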
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/master/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
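###Markdown
Before launching on Cloud Dataflow, the same Beam program can be smoke-tested locally with the DirectRunner. The cell below is a hedged sketch: it assumes `gsutil` is available, that `apache-beam[gcp]` is installed in the notebook environment (needed for the runner and for reading the `gs://` input), and that the environment has access to the public sample bucket. Output shard names depend on the runner.
###Code
# Optional local smoke test of the Beam program with the default DirectRunner.
# !python3 -m pip install 'apache-beam[gcp]' --quiet  # uncomment if Beam is not installed locally
!gsutil cp gs://ml-pipeline-playground/samples/dataflow/wc/wc.py /tmp/wc.py
!python3 /tmp/wc.py --output /tmp/wordcount/out
!head -n 5 /tmp/wordcount/out*
###Output
_____no_output_____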
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
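###Markdown
Pipeline parameters can also be overridden at submission time instead of editing the pipeline function. The call below is a sketch: it resubmits the same pipeline with a longer polling interval and groups the run under a named experiment; both values are placeholders.
###Code
# Override selected pipeline parameters and group the run under a named experiment.
kfp.Client().create_run_from_pipeline_func(
    pipeline,
    arguments={'wait_interval': 60},
    experiment_name='dataflow-launch-python-samples')
###Output
_____no_output_____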
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
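###Markdown
Note that `WriteToText` shards its output by default (for example `wordcount.out-00000-of-00001`), so the single-object `cat` above may not match the object that was actually written. A hedged alternative is to list and concatenate everything under the output prefix; the wildcard below assumes the default Beam shard naming.
###Code
# List all objects under the output prefix, then concatenate them.
!gsutil ls $output_file*
!gsutil cat $output_file*
###Output
_____no_output_____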
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/v1.7.0-alpha.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
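###Markdown
The `args` pipeline parameter is a JSON-encoded list that the component forwards verbatim to the Beam program, so any flag `wc.py` understands can be included. The sketch below also overrides `--input`; the input path is a placeholder, not a file from the original sample.
###Code
# Build a JSON-encoded argument list for the Beam program; every element must be a string.
extra_args = json.dumps([
    '--input', 'gs://YOUR_BUCKET/your_input.txt',  # placeholder input path
    '--output', output_file,
])
print(extra_args)
# This string could be passed as the `args` pipeline parameter, e.g. arguments={'args': extra_args}.
###Output
_____no_output_____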
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
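###Markdown
The launched Dataflow job can also be checked outside of the pipeline UI. As a sketch, assuming the Cloud SDK is installed and authenticated in this environment, `gcloud` can list recent Dataflow jobs for the project and region used above.
###Code
# List recent Dataflow jobs for the configured project and region (requires the Cloud SDK).
!gcloud dataflow jobs list --project=$project --region=$region --limit=5
###Output
_____no_output_____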
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | GCPProjectID | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/4e7e6e866c1256e641b0c3effc55438e6e4b30f6/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run the wordcount sample in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
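###Markdown
The component stores the Cloud Dataflow job information under `staging_dir` so that a failed or retried step can resume the same job. It can be useful to see what was written there. This is only a sketch: the layout of the staged files is an internal detail of the component, and it assumes `output` holds a full `gs://` path as used elsewhere in this sample.
###Code
# List the top level of the staging location; the component creates a generated subdirectory here.
!gsutil ls $output/
###Output
_____no_output_____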
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
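The call in the next cell returns as soon as the run is created. If you want the notebook to block until the Dataflow step finishes, something along the lines of the sketch below should work with the KFP v1 SDK; the return type and its methods may differ between SDK versions, so treat this as an assumption to verify against your installed `kfp`.

```
# Sketch: submit the pipeline and wait for it to finish (KFP v1 SDK assumption).
import kfp

client = kfp.Client()
result = client.create_run_from_pipeline_func(pipeline, arguments={})
print('Run ID:', result.run_id)                # identifier of the created run
result.wait_for_run_completion(timeout=3600)   # block for up to an hour
```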
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | String | | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job. | | String | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).
- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipelines cluster. For example:

```
component_op(...)
```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
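The next cell loads the component definition from a pinned release URL on GitHub. If you prefer to vendor the YAML alongside your code (for example, to avoid a network dependency when building the pipeline), `kfp.components` can also load it from a local file; the path below is hypothetical.

```
# Sketch: loading the same component from a local copy of component.yaml.
# 'components/dataflow_launch_python.yaml' is a hypothetical local path.
import kfp.components as comp

dataflow_python_op = comp.load_component_from_file(
    'components/dataflow_launch_python.yaml')
```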
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.2/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
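The component's `args` parameter is a JSON-encoded list of strings that is forwarded to the Beam program. The pipeline in the next cell only sets `--output`; other arguments accepted by `wc.py`, such as `--input`, can be appended in the same way. A small sketch (the variable name `extra_args` is only illustrative, and `output_file` is defined in the pipeline cell below):

```
# Sketch: forwarding an extra argument (--input) to wc.py alongside --output.
import json

extra_args = json.dumps([
    '--output', output_file,
    '--input', 'gs://dataflow-samples/shakespeare/kinglear.txt',  # wc.py's documented default
])
```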
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).
- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipelines cluster. For example:

```
component_op(...)
```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.0.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
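The pipeline in the next cell contains only the Dataflow step. If a later step should consume the `job_id` output documented above, the KFP v1 DSL exposes it through the task's `outputs` dictionary; the downstream container in the sketch below (its image and command) is purely hypothetical.

```
# Sketch: passing the Dataflow job id to a hypothetical downstream step.
# Assumes dataflow_python_op, project and output are defined as in this notebook.
import json
import kfp.dsl as dsl

@dsl.pipeline(name='Dataflow with downstream step')
def pipeline_with_downstream_step():
    dataflow_task = dataflow_python_op(
        python_file_path='gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
        project_id=project,
        staging_dir=output,
        args=json.dumps(['--output', '{}/wc/wordcount.out'.format(output)]),
    )
    # 'report-job-id' just echoes the job id; image and command are hypothetical.
    dsl.ContainerOp(
        name='report-job-id',
        image='alpine',
        command=['echo'],
        arguments=[dataflow_task.outputs['job_id']],
    )
```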
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).
- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipelines cluster. For example:

```
component_op(...)
```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.1-beta.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
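Instead of submitting the pipeline function directly (next cell), it can also be compiled into a reusable package first, for example to upload it through the Kubeflow Pipelines UI. A minimal sketch with the KFP v1 compiler is shown below; the output filename is arbitrary.

```
# Sketch: compile the pipeline function into a package (KFP v1 SDK).
import kfp.compiler

kfp.compiler.Compiler().compile(pipeline, 'dataflow_launch_python.zip')
```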
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | String | | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job. | | String | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).
- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipelines cluster. For example:

```
component_op(...)
```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.6.0-rc.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).
- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipelines cluster. For example:

```
component_op(...)
```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/0e794e8a0eff6f81ddc857946ee8311c7c431ec2/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample

In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
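Before launching the job on Dataflow, the sample file printed in the next cell can also be smoke-tested locally with the DirectRunner, as its CHANGE comments suggest. A hedged sketch is below; it assumes `apache-beam` is installed in this notebook environment, the paths are illustrative, and Beam writes the result as sharded files (for example `wordcount.out-00000-of-00001`).

```
# Sketch: smoke-testing wc.py locally with the DirectRunner before using Dataflow.
!gsutil cp gs://ml-pipeline-playground/samples/dataflow/wc/wc.py .
!python3 wc.py --runner DirectRunner --output /tmp/wordcount.out
```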
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
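# Descriptive note: the task created by dataflow_python_op exposes the component's
# `job_id` output (the id of the Cloud Dataflow job that is created). A downstream
# step could consume it by capturing the call above in a variable and passing
# task.outputs['job_id'] -- an illustrative hint only.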
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
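# A sketch of an alternative (assuming the KFP v1 SDK imported above): compile the same
# pipeline into a local package, which can then be uploaded through the Kubeflow
# Pipelines UI or with kfp.Client().upload_pipeline(). Compiling is local and does not
# start a run; the package file name below is illustrative only.
import kfp.compiler
kfp.compiler.Compiler().compile(pipeline, 'dataflow_launch_python_pipeline.zip')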
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
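# The file printed above contains one "word: count" pair per line (the format produced
# by format_result in wc.py). A minimal sketch for ranking the counts locally, assuming
# gsutil is authenticated in this environment; the helper name is illustrative only.
import subprocess

def top_words(gcs_uri, n=10):
    """Return the n most frequent (word, count) pairs from the wordcount output."""
    text = subprocess.run(['gsutil', 'cat', gcs_uri],
                          capture_output=True, text=True, check=True).stdout
    pairs = []
    for line in text.splitlines():
        word, sep, count = line.partition(': ')
        if sep and count.strip().isdigit():
            pairs.append((word, int(count)))
    return sorted(pairs, key=lambda pair: pair[1], reverse=True)[:n]

# top_words(output_file)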
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default
:--- | :---------- | :------- | :-------- | :-------------- | :------
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, and `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
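# A small sanity check (a sketch, assuming gsutil is already authenticated for the
# project set above and that `output` is a full gs:// path, as the pipeline below
# expects when it builds output_file):
# !gsutil ls $output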
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/a8d3b6977df26a89701cd229f01c1840a8475521/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default
:--- | :---------- | :------- | :-------- | :-------------- | :------
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, and `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default
:--- | :---------- | :------- | :-------- | :-------------- | :------
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | String | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job. | | String | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--region`, `--temp_location`, and `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
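# A hedged note on the call above: comp.load_component_from_url() fetches component.yaml
# over the network. In an offline environment the same definition could be loaded from a
# local copy instead (assuming the file has been downloaded beforehand):
# dataflow_python_op = comp.load_component_from_file('component.yaml')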
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default
:--- | :---------- | :------- | :-------- | :-------------- | :------
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, and `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
experiment_name = 'Dataflow - Launch Python'
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e598176c02f45371336ccaa819409e8ec83743df/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
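# Note: .apply(gcp.use_gcp_secret('user-gcp-sa')) mounts the credentials of the Kubeflow
# user service account described above; the 'user-gcp-sa' secret must already exist in
# the cluster for this step to authenticate against Cloud Dataflow and Cloud Storage.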
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={}, experiment_name=experiment_name)
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default
:--- | :---------- | :------- | :-------- | :-------------- | :------
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information, so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, and `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables info logging before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:
- The Cloud Dataflow API is enabled.
- The component runs under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...)
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced by the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/2df775a28045bda15372d6dd4644f71dcfe41bfe/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sample
In this sample, we run the wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # For example, 'gs://your-bucket'; no ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
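The `arguments` mapping accepted by `create_run_from_pipeline_func` can override any of the pipeline parameters declared above. A hypothetical example (the bucket path and interval are placeholders, not values used by this sample):

```python
kfp.Client().create_run_from_pipeline_func(
    pipeline,
    arguments={
        'staging_dir': 'gs://your-bucket/staging',  # placeholder
        'wait_interval': 60,
    })
```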
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | String | | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job. | | String | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory is created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables `INFO` logging in the Python code before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example: ```component_op(...)```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # For example, 'gs://your-bucket'; no ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.deprecated.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path: str, project_id: str, region: str, staging_dir: 'GCSPath' = '', requirements_file_path: 'GCSPath' = '', args: list = '[]', wait_interval: int = '30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline/sample-pipeline/word-count/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
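The component's `args` input is a list that is passed around as a JSON-encoded string, which is why the pipeline below wraps the Beam flags in `json.dumps`. A hypothetical sketch of building such a string with an extra flag (the output bucket is a placeholder):

```python
import json

# wc.py already defines --input and --output; both simply end up on its command line.
extra_args = json.dumps([
    '--input', 'gs://dataflow-samples/shakespeare/kinglear.txt',
    '--output', 'gs://YOUR_BUCKET_NAME/wc/wordcount.out',
])
```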
###Code
import kfp.deprecated as kfp
from kfp.deprecated import dsl, Client
import json
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = json.dumps(['--output', f'{staging_dir}/wc/wordcount.out']),
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output/wc/wordcount.out
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory is created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables `INFO` logging in the Python code before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example: ```component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # For example, 'gs://your-bucket'; no ending slash
experiment_name = 'Dataflow - Launch Python'
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/a97f1d0ad0e7b92203f35c5b0b9af3a314952e05/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
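Recent kfp v1 SDKs can also submit the compiled package directly, without creating an experiment by hand. Whether this shortcut is available depends on your SDK version, so treat the following as a sketch rather than part of the original sample:

```python
import kfp

# Submits the .zip produced by the compile step above in a single call.
kfp.Client().create_run_from_pipeline_package(pipeline_filename, arguments={})
```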
###Code
# Specify pipeline argument values
arguments = {}
# Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(experiment_name)
# Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | String | | |
region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job. | | String | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory is created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables `INFO` logging in the Python code before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example: ```component_op(...)```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # For example, 'gs://your-bucket'; no ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
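If the pipelines repository has been checked out locally (for example on an air-gapped cluster), the same component definition can be loaded from a file instead of the GitHub URL; the local path below is a placeholder:

```python
import kfp.components as comp

# Hypothetical local checkout of the same component.yaml.
dataflow_python_op = comp.load_component_from_file(
    'components/gcp/dataflow/launch_python/component.yaml')
```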
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-alpha.1/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory is created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables `INFO` logging in the Python code before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example: ```component_op(...)```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # For example, 'gs://your-bucket'; no ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/ff116b6f1a0f0cdaafb64fcd04214c169045e6fc/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
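
# --- Hedged sketch, not part of the original sample -----------------------------
# The component declares a single `job_id` output, which a downstream step can
# consume through `.outputs['job_id']`. The pipeline name and the alpine image
# below are illustrative assumptions.
@dsl.pipeline(
    name='Dataflow launch python pipeline with downstream step',
    description='Echoes the Dataflow job id produced by the launch step'
)
def pipeline_with_job_id(
    python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
    project_id = project,
    staging_dir = output,
    requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
    args = json.dumps([
        '--output', output_file
    ]),
    wait_interval = 30
):
    dataflow_task = dataflow_python_op(
        python_file_path = python_file_path,
        project_id = project_id,
        staging_dir = staging_dir,
        requirements_file_path = requirements_file_path,
        args = args,
        wait_interval = wait_interval)
    dsl.ContainerOp(
        name='echo-job-id',
        image='alpine',
        command=['echo', dataflow_task.outputs['job_id']])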
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component Sample

A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner.

Intended use

Use this component to run Python Beam code that submits a Cloud Dataflow job as a step of a Kubeflow pipeline.

Runtime arguments

Name | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :------- | :-------- | :-------------- | :------ |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job. | | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory is created under the staging directory to keep the job information, so that the job can be resumed in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

Input data schema

Before you use the component, the following files must be ready in a Cloud Storage bucket:

- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.

The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:

- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables `INFO` logging in the Python code before the start of the Cloud Dataflow job. This is important to allow the component to track the status and ID of the job that is created, for example by calling `logging.getLogger().setLevel(logging.INFO)` before any other code.

Output

Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.

Cautions & requirements

To use the component, the following requirements must be met:

- Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example: ```component_op(...)```

The Kubeflow user service account is a member of:

- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

Detailed description

The component does several things during the execution:

- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.

Setup
###Code
project = 'Input your PROJECT ID'
output = 'Input your GCS bucket name' # For example, 'gs://your-bucket'; no ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/0ad0b368802eca8ca73b40fe08adb6d97af6a62f/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:'GCPProjectID', staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/5: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/5: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/5: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/5: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 5/5: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = project,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
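To make the service-account requirement above concrete, the snippet below is a minimal sketch of how the op returned by `dataflow_python_op` (loaded earlier in this notebook) could be granted credentials on a KFP v1 cluster; it assumes the cluster exposes the commonly used `user-gcp-sa` secret, and the project ID and bucket are placeholders, not values from this sample.
```python
# A minimal sketch, assuming the KFP v1 SDK and a cluster that provides the
# `user-gcp-sa` secret; `dataflow_python_op` is the component loaded earlier,
# and the project ID / bucket below are placeholders.
import kfp.dsl as dsl
from kfp.gcp import use_gcp_secret

@dsl.pipeline(name='dataflow-launch-python-with-secret')
def secured_pipeline():
    dataflow_python_op(
        python_file_path='gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
        project_id='my-project-id',            # placeholder
        staging_dir='gs://my-bucket/staging',  # placeholder
        args='[]',
        wait_interval=30,
    ).apply(use_gcp_secret('user-gcp-sa'))     # mount the service-account secret
```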
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.6.0/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path:str, project_id:str, region:str, staging_dir:'GCSPath'='', requirements_file_path:'GCSPath'='', args:list='[]', wait_interval:int='30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
output_file = '{}/wc/wordcount.out'.format(output)
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
args = json.dumps([
'--output', output_file
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output_file
###Output
_____no_output_____
###Markdown
GCP Dataflow Component SampleA Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner. Intended useUse this component to run a Python Beam code to submit a Cloud Dataflow job as a step of a Kubeflow pipeline. Runtime argumentsName | Description | Optional | Data type| Accepted values | Default |:--- | :----------| :----------| :----------| :----------| :---------- |python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | String | | |region | The Google Cloud Platform (GCP) region to run the Cloud Dataflow job.| | String | | |staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information.This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schemaBefore you use the component, the following files must be ready in a Cloud Storage bucket:- A Beam Python code file.- A `requirements.txt` file which includes a list of dependent packages.The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:- It accepts the command line arguments `--project`, `--region`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-paramssetting-other-cloud-pipeline-options).- It enables `info logging` before the start of a Cloud Dataflow job in the Python code. This is important to allow the component to track the status and ID of the job that is created. For example, calling `logging.getLogger().setLevel(logging.INFO)` before any other code. OutputName | Description:--- | :----------job_id | The id of the Cloud Dataflow job that is created. Cautions & requirementsTo use the components, the following requirements must be met:- Cloud Dataflow API is enabled.- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:```component_op(...)```The Kubeflow user service account is a member of:- `roles/dataflow.developer` role of the project.- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`. 
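The two compatibility requirements listed above (forwarding the standard Dataflow options and enabling INFO logging before the pipeline starts) can be distilled into a much smaller file than the wordcount sample; the sketch below is only an illustration of that contract, with a placeholder output path supplied through `args`.
```python
# A minimal, self-executing Beam file that follows the component's contract:
# unrecognized flags such as --project, --region, --staging_location and
# --temp_location are forwarded to Beam via PipelineOptions, and INFO logging
# is enabled before the pipeline starts so the Dataflow job ID can be tracked.
import argparse
import logging

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--output', required=True, help='GCS prefix to write to.')
    known_args, pipeline_args = parser.parse_known_args(argv)

    options = PipelineOptions(pipeline_args)
    with beam.Pipeline(options=options) as p:
        (p
         | 'Create' >> beam.Create(['hello', 'dataflow'])
         | 'Write' >> beam.io.WriteToText(known_args.output))


if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)  # required for job tracking
    run()
```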
Detailed descriptionThe component does several things during the execution:- Downloads `python_file_path` and `requirements_file_path` to local files.- Starts a subprocess to launch the Python program.- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.- Waits for the job to finish. Setup
###Code
project = 'Input your PROJECT ID'
region = 'Input GCP region' # For example, 'us-central1'
output = 'Input your GCS bucket name' # No ending slash
###Output
_____no_output_____
###Markdown
Install Pipeline SDK
###Code
!python3 -m pip install 'kfp>=0.1.31' --quiet
###Output
_____no_output_____
###Markdown
Load the component using KFP SDK
###Code
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
###Output
Help on function Launch Python:
Launch Python(python_file_path: str, project_id: str, region: str, staging_dir: 'GCSPath' = '', requirements_file_path: 'GCSPath' = '', args: list = '[]', wait_interval: int = '30')
Launch Python
Launch a self-executing beam python file.
###Markdown
Use the wordcount python sampleIn this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
###Code
!gsutil cat gs://ml-pipeline/sample-pipeline/word-count/wc.py
###Output
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""A minimalist word-counting workflow that counts words in Shakespeare.
This is the first in a series of successively more detailed 'word count'
examples.
Next, see the wordcount pipeline, then the wordcount_debugging pipeline, for
more detailed examples that introduce additional concepts.
Concepts:
1. Reading data from text files
2. Specifying 'inline' transforms
3. Counting a PCollection
4. Writing data to Cloud Storage as text files
To execute this pipeline locally, first edit the code to specify the output
location. Output location could be a local file path or an output prefix
on GCS. (Only update the output location marked with the first CHANGE comment.)
To execute this pipeline remotely, first edit the code to set your project ID,
runner type, the staging location, the temp location, and the output location.
The specified GCS bucket(s) must already exist. (Update all the places marked
with a CHANGE comment.)
Then, run the pipeline as described in the README. It will be deployed and run
using the Google Cloud Dataflow Service. No args are required to run the
pipeline. You can see the results in your output bucket in the GCS browser.
"""
from __future__ import absolute_import
import argparse
import logging
import re
from past.builtins import unicode
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
def run(argv=None):
"""Main entry point; defines and runs the wordcount pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://dataflow-samples/shakespeare/kinglear.txt',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
# CHANGE 1/6: The Google Cloud Storage path is required
# for outputting the results.
default='gs://YOUR_OUTPUT_BUCKET/AND_OUTPUT_PREFIX',
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
# pipeline_args.extend([
# # CHANGE 2/6: (OPTIONAL) Change this to DataflowRunner to
# # run your pipeline on the Google Cloud Dataflow Service.
# '--runner=DirectRunner',
# # CHANGE 3/6: Your project ID is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--project=SET_YOUR_PROJECT_ID_HERE',
# # CHANGE 4/6: A GCP region is required in order to run your pipeline on
# # the Google Cloud Dataflow Service.
# '--region=SET_GCP_REGION_HERE',
# # CHANGE 5/6: Your Google Cloud Storage path is required for staging local
# # files.
# '--staging_location=gs://YOUR_BUCKET_NAME/AND_STAGING_DIRECTORY',
# # CHANGE 6/6: Your Google Cloud Storage path is required for temporary
# # files.
# '--temp_location=gs://YOUR_BUCKET_NAME/AND_TEMP_DIRECTORY',
# '--job_name=your-wordcount-job',
# ])
# We use the save_main_session option because one or more DoFn's in this
# workflow rely on global context (e.g., a module imported at module level).
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
with beam.Pipeline(options=pipeline_options) as p:
# Read the text file[pattern] into a PCollection.
lines = p | ReadFromText(known_args.input)
# Count the occurrences of each word.
counts = (
lines
| 'Split' >> (beam.FlatMap(lambda x: re.findall(r'[A-Za-z\']+', x))
.with_output_types(unicode))
| 'PairWithOne' >> beam.Map(lambda x: (x, 1))
| 'GroupAndSum' >> beam.CombinePerKey(sum))
# Format the counts into a PCollection of strings.
def format_result(word_count):
(word, count) = word_count
return '%s: %s' % (word, count)
output = counts | 'Format' >> beam.Map(format_result)
# Write the output using a "Write" transform that has side effects.
# pylint: disable=expression-not-assigned
output | WriteToText(known_args.output)
if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
###Markdown
Example pipeline that uses the component
###Code
import kfp
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='dataflow-launch-python-pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
project_id = project,
region = region,
staging_dir = output,
requirements_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/requirements.txt',
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
region = region,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = json.dumps(['--output', f'{staging_dir}/wc/wordcount.out']),
wait_interval = wait_interval)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
###Output
_____no_output_____
###Markdown
Inspect the output
###Code
!gsutil cat $output/wc/wordcount.out
###Output
_____no_output_____ |
tutorials/nlp/Token_Classification_Named_Entity_Recognition.ipynb | ###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
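Before downloading the preprocessed GMB data below, the short sketch here (not part of NeMo) shows what writing the two-file format described above looks like for the toy sentence from the example; the output file names are illustrative only.
```python
# A toy sketch of the text.txt / labels.txt format; the sentence and tags come
# from the example above, and the output file names are illustrative only.
sentences = [
    (['Jennifer', 'is', 'from', 'New', 'York', 'City', '.'],
     ['B-PER', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']),
]

with open('text_train.txt', 'w') as text_f, open('labels_train.txt', 'w') as labels_f:
    for words, labels in sentences:
        assert len(words) == len(labels), 'each word needs exactly one label'
        text_f.write(' '.join(words) + '\n')
        labels_f.write(' '.join(labels) + '\n')
```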
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
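Besides editing attributes directly (as done later in this tutorial), the `model` and `trainer` sections can also be overridden in bulk with OmegaConf; the sketch below assumes the config has already been loaded as in the next cells, and the data path is a placeholder.
```python
# A small sketch of bulk overrides with OmegaConf (shipped with NeMo); the keys
# mirror ones set later in this tutorial, and the data path is a placeholder.
from omegaconf import OmegaConf

overrides = OmegaConf.from_dotlist([
    'model.dataset.data_dir=/path/to/data',
    'trainer.max_steps=32',
])
# config = OmegaConf.merge(config, overrides)  # apply after loading the YAML below
```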
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# check if a GPU is available and use it
accelerator = 'gpu' if torch.cuda.is_available() else 'cpu'
config.trainer.devices = 1
config.trainer.accelerator = accelerator
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.strategy = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment¶NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to set up the data dir to compute class weight statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# set up train and validation PyTorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we set up the loss; use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(devices=1, accelerator='gpu', fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
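Because text.txt and labels.txt must stay aligned word for word, a quick sanity check such as the sketch below (not a NeMo utility; the paths in the commented example are placeholders for the files downloaded next) can catch mismatched lines before training.
```python
# A sanity-check sketch: report lines where the number of words in text.txt does
# not match the number of labels in labels.txt. Paths are supplied by the caller.
def check_alignment(text_path, labels_path):
    with open(text_path) as text_f, open(labels_path) as labels_f:
        for i, (text_line, label_line) in enumerate(zip(text_f, labels_f), start=1):
            n_words, n_labels = len(text_line.split()), len(label_line.split())
            if n_words != n_labels:
                print(f'line {i}: {n_words} words vs {n_labels} labels')

# Example (placeholder paths, matching the download below):
# check_alignment('DATA_DIR/gmb_v_2.2.0_clean/text_train.txt',
#                 'DATA_DIR/gmb_v_2.2.0_clean/labels_train.txt')
```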
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# check if a GPU is available and use it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment¶NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to set up the data dir to compute class weight statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# set up train and validation PyTorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we set up the loss; use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
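The note above about folding ART, EVE and NAT into MISC can be verified with a simple label-frequency count; the sketch below is not part of NeMo, and the commented path is a placeholder for the file downloaded next.
```python
# A sketch that counts tag frequencies in a labels file, useful for spotting
# rare classes such as the ones merged into MISC. The path is a placeholder.
from collections import Counter

def label_counts(labels_path):
    counts = Counter()
    with open(labels_path) as f:
        for line in f:
            counts.update(line.split())
    return counts

# Example (placeholder path, matching the download below):
# print(label_counts('DATA_DIR/gmb_v_2.2.0_clean/labels_train.txt').most_common())
```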
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
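If you want to double-check that no mandatory `???` values remain unset before training, OmegaConf can report missing mandatory fields. A small sketch, assuming the `config` object edited above:

```python
from omegaconf import OmegaConf

# '???' entries are mandatory-missing values in OmegaConf; data_dir should no longer be one of them
assert not OmegaConf.is_missing(config.model.dataset, 'data_dir'), 'model.dataset.data_dir is still unset'
print('all required data paths are set')
```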
###Markdown
Building the PyTorch Lightning Trainer. NeMo models are primarily PyTorch Lightning modules and are therefore entirely compatible with the PyTorch Lightning ecosystem. Let's first instantiate a Trainer object.
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# let's modify some trainer configs
# check if a GPU is available and use it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to 'O1'):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment. NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
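During training, the experiment manager writes logs and checkpoints under this directory; you can inspect it at any point with standard tools. A minimal sketch:

```python
import os

# see what the experiment manager has created so far (more files appear once training starts)
print(os.listdir(exp_dir))
```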
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
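The list printed above is fairly long; if you only care about a particular family of encoders, you can filter it with ordinary Python. A small sketch reusing the same helper:

```python
# narrow the list down to BERT-style checkpoints to make the choice easier
all_models = nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True)
bert_like = [name for name in all_models if 'bert' in name.lower()]
print(bert_like[:10])
```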
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. The pretrained BERT model will also be downloaded; note that this can take up to a few minutes, depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progress. Optionally, you can create a TensorBoard visualization to monitor training progress.
###Code
try:
    from google import colab
    COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
    COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
    %load_ext tensorboard
    %tensorboard --logdir {exp_dir}
else:
    print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this:
```
label                    precision  recall   f1      support
O (label_id: 0)              99.14   99.19   99.17    131141
B-GPE (label_id: 1)          95.86   94.03   94.93      2362
B-LOC (label_id: 2)          83.99   90.31   87.04      5346
B-MISC (label_id: 3)         39.82   34.62   37.04       130
B-ORG (label_id: 4)          78.33   67.82   72.70      2980
B-PER (label_id: 5)          84.36   84.32   84.34      2577
B-TIME (label_id: 6)         91.94   91.23   91.58      2975
I-GPE (label_id: 7)          88.89   34.78   50.00        23
I-LOC (label_id: 8)          77.18   79.13   78.14      1030
I-MISC (label_id: 9)         28.57   24.00   26.09        75
I-ORG (label_id: 10)         78.67   75.67   77.14      2384
I-PER (label_id: 11)         86.69   90.17   88.40      2687
I-TIME (label_id: 12)        83.21   83.48   83.34       938
-------------------
micro avg                    96.95   96.95   96.95    154648
macro avg                    78.20   72.98   74.61    154648
weighted avg                 96.92   96.95   96.92    154648
```
Inference: Generate Predictions. To see how the model performs, we can generate predictions the same way we did earlier, or we can use the model to generate predictions for a dataset from a file, for example to perform a final evaluation or to do error analysis. Below, we use a subset of the dev set, but it could be any text file as long as it follows the data format described above. The labels file is optional here; if provided, it will be used to compute metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file. If a labels file is also specified, the model will evaluate the predictions and plot a confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
    text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
    labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
    output_dir=exp_dir,
)
###Output
_____no_output_____
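Besides file-based evaluation, the model trained from scratch exposes the same `add_predictions` method we used with the pretrained model, so you can also query it directly. A short sketch reusing the `queries` defined earlier (quality will be limited after only a few training steps):

```python
# ad-hoc inference with the model we just trained
for query, result in zip(queries, model_from_scratch.add_predictions(queries)):
    print(f'Query : {query}')
    print(f'Result: {result.strip()}\n')
```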
###Markdown
Training Script. If you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py). To run the training script, use: `python token_classification.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning the model with your data. When we were training from scratch, the datasets were prepared during model initialization. When we are using a pretrained NER model, we need to set up the training and evaluation data before training.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss(class_balancing='weighted_loss')
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
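Once finetuning looks reasonable, you will usually want to persist the result. NeMo models can be saved to a single `.nemo` archive and restored later; a minimal sketch (the file name here is only an example):

```python
# save the finetuned model to a single archive and load it back (the path is illustrative)
pretrained_ner_model.save_to('ner_gmb_finetuned.nemo')
restored_model = nemo_nlp.models.TokenClassificationModel.restore_from('ner_gmb_finetuned.nemo')
print(type(restored_model).__name__)
```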
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/main/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print ('config file is already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enought to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.distributed_backend = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment¶NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models, for the complete list of HugginFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders we'll be prepared for training and evaluation.Also, the pretrained BERT model will be downloaded, note it can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py).To run training script, use:`python token_classification.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss(class_balancing='weighted_loss')
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=[1], fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print ('config file is already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enought to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment¶NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models, for the complete list of HugginFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders we'll be prepared for training and evaluation.Also, the pretrained BERT model will be downloaded, note it can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print ('config file is already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enought to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment¶NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models, for the complete list of HugginFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders we'll be prepared for training and evaluation.Also, the pretrained BERT model will be downloaded, note it can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print ('config file is already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enought to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo Experiment¶NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models, for the complete list of HugginFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify BERT-like model, you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders we'll be prepared for training and evaluation.Also, the pretrained BERT model will be downloaded, note it can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
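###Markdown
After finetuning, the same inference API shown earlier can be used to spot-check the model. The query below is illustrative:
###Code
# run the finetuned model on a new example (query is illustrative)
finetune_queries = ['John lives in Berlin and works at Siemens.']
print(pretrained_ner_model.add_predictions(finetune_queries))
###Output
_____no_output_____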
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to 'O1'):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this:
```
    label                 precision    recall       f1    support
    O (label_id: 0)           99.14     99.19    99.17     131141
    B-GPE (label_id: 1)       95.86     94.03    94.93       2362
    B-LOC (label_id: 2)       83.99     90.31    87.04       5346
    B-MISC (label_id: 3)      39.82     34.62    37.04        130
    B-ORG (label_id: 4)       78.33     67.82    72.70       2980
    B-PER (label_id: 5)       84.36     84.32    84.34       2577
    B-TIME (label_id: 6)      91.94     91.23    91.58       2975
    I-GPE (label_id: 7)       88.89     34.78    50.00         23
    I-LOC (label_id: 8)       77.18     79.13    78.14       1030
    I-MISC (label_id: 9)      28.57     24.00    26.09         75
    I-ORG (label_id: 10)      78.67     75.67    77.14       2384
    I-PER (label_id: 11)      86.69     90.17    88.40       2687
    I-TIME (label_id: 12)     83.21     83.48    83.34        938
    -------------------
    micro avg                 96.95     96.95    96.95     154648
    macro avg                 78.20     72.98    74.61     154648
    weighted avg              96.92     96.95    96.92     154648
```
Inference
To see how the model performs, we can generate predictions the same way we did it earlier.
Generate Predictions
We can also use the model to generate predictions for a dataset from a file, for example, to perform a final evaluation or to do error analysis. Below, we are using a subset of the dev set, but it could be any text file as long as it follows the data format described above. The labels file is optional here; if provided, it will be used to compute metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
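###Markdown
The head output above follows the two-file format described earlier. To make the format concrete, here is a small illustrative sketch (file names and sentences are made up) that writes a matching text/labels pair, one whitespace-tokenized sentence per line:
###Code
# toy example of the NeMo token classification data format
sample_text = [
    "Jennifer is from New York City .",
    "She visited Santa Clara yesterday .",
]
sample_labels = [
    "B-PER O O B-LOC I-LOC I-LOC O",
    "O O B-LOC I-LOC B-TIME O",
]
with open("toy_text.txt", "w") as f_text, open("toy_labels.txt", "w") as f_labels:
    f_text.write("\n".join(sample_text) + "\n")
    f_labels.write("\n".join(sample_labels) + "\n")
###Output
_____no_output_____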
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
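###Markdown
Printing the whole config can be long. You can also inspect a single sub-section and check which required fields are still unset; a small sketch using standard OmegaConf helpers:
###Code
# inspect only the dataset portion of the config
print(OmegaConf.to_yaml(config.model.dataset))
# check whether a required field is still missing (shown as ??? above)
print(OmegaConf.is_missing(config.model.dataset, "data_dir"))
###Output
_____no_output_____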
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
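###Markdown
As a quick sanity check, you can compare the subset size against the full training set (a small illustrative snippet):
###Code
# how many training sentences are available vs. how many we will actually use
with open(os.path.join(DATA_DIR, 'text_train.txt')) as f:
    n_train = sum(1 for _ in f)
print(f"training sentences available: {n_train}, used for this demo: {NUM_SAMPLES}")
###Output
_____no_output_____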
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to 'O1'):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this:
```
    label                 precision    recall       f1    support
    O (label_id: 0)           99.14     99.19    99.17     131141
    B-GPE (label_id: 1)       95.86     94.03    94.93       2362
    B-LOC (label_id: 2)       83.99     90.31    87.04       5346
    B-MISC (label_id: 3)      39.82     34.62    37.04        130
    B-ORG (label_id: 4)       78.33     67.82    72.70       2980
    B-PER (label_id: 5)       84.36     84.32    84.34       2577
    B-TIME (label_id: 6)      91.94     91.23    91.58       2975
    I-GPE (label_id: 7)       88.89     34.78    50.00         23
    I-LOC (label_id: 8)       77.18     79.13    78.14       1030
    I-MISC (label_id: 9)      28.57     24.00    26.09         75
    I-ORG (label_id: 10)      78.67     75.67    77.14       2384
    I-PER (label_id: 11)      86.69     90.17    88.40       2687
    I-TIME (label_id: 12)     83.21     83.48    83.34        938
    -------------------
    micro avg                 96.95     96.95    96.95     154648
    macro avg                 78.20     72.98    74.61     154648
    weighted avg              96.92     96.95    96.92     154648
```
Inference
To see how the model performs, we can generate predictions the same way we did it earlier.
Generate Predictions
We can also use the model to generate predictions for a dataset from a file, for example, to perform a final evaluation or to do error analysis. Below, we are using a subset of the dev set, but it could be any text file as long as it follows the data format described above. The labels file is optional here; if provided, it will be used to compute metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to 'O1'):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
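###Markdown
The small `max_steps` value above is only for this demo. For a real run you would raise or remove the step cap before instantiating the Trainer; the values below are illustrative:
###Code
# illustrative settings for a full (non-demo) run; uncomment and re-create the Trainer to use them
# config.trainer.max_steps = 100000   # large enough that max_epochs decides when training stops
# config.trainer.max_epochs = 5
# trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____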
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this:
```
    label                 precision    recall       f1    support
    O (label_id: 0)           99.14     99.19    99.17     131141
    B-GPE (label_id: 1)       95.86     94.03    94.93       2362
    B-LOC (label_id: 2)       83.99     90.31    87.04       5346
    B-MISC (label_id: 3)      39.82     34.62    37.04        130
    B-ORG (label_id: 4)       78.33     67.82    72.70       2980
    B-PER (label_id: 5)       84.36     84.32    84.34       2577
    B-TIME (label_id: 6)      91.94     91.23    91.58       2975
    I-GPE (label_id: 7)       88.89     34.78    50.00         23
    I-LOC (label_id: 8)       77.18     79.13    78.14       1030
    I-MISC (label_id: 9)      28.57     24.00    26.09         75
    I-ORG (label_id: 10)      78.67     75.67    77.14       2384
    I-PER (label_id: 11)      86.69     90.17    88.40       2687
    I-TIME (label_id: 12)     83.21     83.48    83.34        938
    -------------------
    micro avg                 96.95     96.95    96.95     154648
    macro avg                 78.20     72.98    74.61     154648
    weighted avg              96.92     96.95    96.92     154648
```
Inference
To see how the model performs, we can generate predictions the same way we did it earlier.
Generate Predictions
We can also use the model to generate predictions for a dataset from a file, for example, to perform a final evaluation or to do error analysis. Below, we are using a subset of the dev set, but it could be any text file as long as it follows the data format described above. The labels file is optional here; if provided, it will be used to compute metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
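###Markdown
Let's quickly confirm the sampled files look as expected (same `head` trick as before):
###Code
print('Sampled text:')
! head -n 3 {DATA_DIR}/sample_text_dev.txt
print('\nSampled labels:')
! head -n 3 {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____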
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py).To run training script, use:`python token_classification.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
    print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to 'O1'):
# config.trainer.amp_level = 'O1'
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of Hugging Face models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the model parameters specified above to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this:
```
    label                 precision    recall       f1    support
    O (label_id: 0)           99.14     99.19    99.17     131141
    B-GPE (label_id: 1)       95.86     94.03    94.93       2362
    B-LOC (label_id: 2)       83.99     90.31    87.04       5346
    B-MISC (label_id: 3)      39.82     34.62    37.04        130
    B-ORG (label_id: 4)       78.33     67.82    72.70       2980
    B-PER (label_id: 5)       84.36     84.32    84.34       2577
    B-TIME (label_id: 6)      91.94     91.23    91.58       2975
    I-GPE (label_id: 7)       88.89     34.78    50.00         23
    I-LOC (label_id: 8)       77.18     79.13    78.14       1030
    I-MISC (label_id: 9)      28.57     24.00    26.09         75
    I-ORG (label_id: 10)      78.67     75.67    77.14       2384
    I-PER (label_id: 11)      86.69     90.17    88.40       2687
    I-TIME (label_id: 12)     83.21     83.48    83.34        938
    -------------------
    micro avg                 96.95     96.95    96.95     154648
    macro avg                 78.20     72.98    74.61     154648
    weighted avg              96.92     96.95    96.92     154648
```
Inference
To see how the model performs, we can generate predictions the same way we did it earlier.
Generate Predictions
We can also use the model to generate predictions for a dataset from a file, for example, to perform a final evaluation or to do error analysis. Below, we are using a subset of the dev set, but it could be any text file as long as it follows the data format described above. The labels file is optional here; if provided, it will be used to compute metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py).To run training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, before training, we need to setup training and evaluation data.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/stable/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
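# as an extra (optional) sanity check, we can verify in Python that the text and label files
# line up: same number of lines, and the same number of space-separated tokens on each line.
# `check_alignment` is just a small helper defined here, not part of NeMo:
def check_alignment(text_file, labels_file):
    with open(text_file) as f_text, open(labels_file) as f_labels:
        text_lines = f_text.read().splitlines()
        label_lines = f_labels.read().splitlines()
    assert len(text_lines) == len(label_lines), 'files have a different number of lines'
    for i, (words, labels) in enumerate(zip(text_lines, label_lines)):
        if len(words.split()) != len(labels.split()):
            print(f'line {i}: {len(words.split())} words vs {len(labels.split())} labels')

check_alignment(os.path.join(DATA_DIR, 'text_train.txt'), os.path.join(DATA_DIR, 'labels_train.txt'))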
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="ner_en_bert")
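# (optional) other pretrained token classification checkpoints may be available on NGC;
# NeMo models expose a helper to list them (the exact set of names depends on your NeMo version):
# print(nemo_nlp.models.TokenClassificationModel.list_available_models())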
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model consists of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer. The model is defined in a config file that declares multiple important sections:- **model**: all arguments related to the model - language model, token classifier, optimizer and schedulers, datasets, and any other related information- **trainer**: any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are the configurations used to set up the Dataset and DataLoaders of the corresponding config. We assume that both training and evaluation files are located in the same directory and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, as we are going to do below. Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths; this means that values for these fields are required to be specified by the user. Let's now add the data directory path to the config.
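If you are unsure which mandatory fields are still unfilled, OmegaConf can report that directly; the snippet below is a minimal sketch showing how to check the `data_dir` field before we set it (it should print `True` at this point and `False` after the next cell):

```python
from omegaconf import OmegaConf

# True while the field still holds the mandatory-value marker `???`
print(OmegaConf.is_missing(config.model.dataset, 'data_dir'))
```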
###Code
# in this tutorial, the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
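# (optional) other fields of train_ds / validation_ds, such as the batch size, can be
# overridden in the same way; check the printed config above for the exact field names, e.g.:
# config.model.train_ds.batch_size = 32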
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
accelerator = 'gpu' if torch.cuda.is_available() else 'cpu'
config.trainer.devices = 1
config.trainer.accelerator = accelerator
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.strategy = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
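# alternatively, you could limit training by epochs rather than steps, e.g.:
# config.trainer.max_epochs = 1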
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
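# exp_manager creates this directory and, by default, writes TensorBoard logs and model
# checkpoints there as training progresses (this behaviour is configurable via config.exp_manager)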
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of HuggingFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this:
```
label                  precision  recall  f1     support
O (label_id: 0)        99.14      99.19   99.17  131141
B-GPE (label_id: 1)    95.86      94.03   94.93  2362
B-LOC (label_id: 2)    83.99      90.31   87.04  5346
B-MISC (label_id: 3)   39.82      34.62   37.04  130
B-ORG (label_id: 4)    78.33      67.82   72.70  2980
B-PER (label_id: 5)    84.36      84.32   84.34  2577
B-TIME (label_id: 6)   91.94      91.23   91.58  2975
I-GPE (label_id: 7)    88.89      34.78   50.00  23
I-LOC (label_id: 8)    77.18      79.13   78.14  1030
I-MISC (label_id: 9)   28.57      24.00   26.09  75
I-ORG (label_id: 10)   78.67      75.67   77.14  2384
I-PER (label_id: 11)   86.69      90.17   88.40  2687
I-TIME (label_id: 12)  83.21      83.48   83.34  938
-------------------
micro avg              96.95      96.95   96.95  154648
macro avg              78.20      72.98   74.61  154648
weighted avg           96.92      96.95   96.92  154648
```
InferenceTo see how the model performs, we can generate predictions similar to the way we did earlier. Generate PredictionsTo see how the model performs, we can generate predictions the same way we did earlier, or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis. Below, we are using a subset of the dev set, but it could be any text file as long as it follows the data format described above. The labels file is optional here; if provided, it will be used to compute metrics.
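The next cell uses the shell `head` utility to create the subset. If `head` is not available in your environment, the following pure-Python sketch does the same thing (the `take_first_lines` helper is just a convenience defined here, not part of NeMo):

```python
import itertools
import os

def take_first_lines(src, dst, n=100):
    # copy the first n lines of src into dst
    with open(src) as fin, open(dst, 'w') as fout:
        fout.writelines(itertools.islice(fin, n))

# take_first_lines(os.path.join(DATA_DIR, 'text_dev.txt'), os.path.join(DATA_DIR, 'sample_text_dev.txt'))
# take_first_lines(os.path.join(DATA_DIR, 'labels_dev.txt'), os.path.join(DATA_DIR, 'sample_labels_dev.txt'))
```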
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/token_classification/token_classification_train.py).To run the training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning the model with your dataWhen we were training from scratch, the datasets were prepared for training during model initialization. When we are using a pretrained NER model, we need to set up the training and evaluation data before training.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('ner_en_bert')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(devices=1, accelerator='gpu', fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial, the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of HuggingFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification_train.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification_train.py).To run the training script, use:`python token_classification_train.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning the model with your dataWhen we were training from scratch, the datasets were prepared for training during model initialization. When we are using a pretrained NER model, we need to set up the training and evaluation data before training.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial, the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of HuggingFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this:
```
label                  precision  recall  f1     support
O (label_id: 0)        99.14      99.19   99.17  131141
B-GPE (label_id: 1)    95.86      94.03   94.93  2362
B-LOC (label_id: 2)    83.99      90.31   87.04  5346
B-MISC (label_id: 3)   39.82      34.62   37.04  130
B-ORG (label_id: 4)    78.33      67.82   72.70  2980
B-PER (label_id: 5)    84.36      84.32   84.34  2577
B-TIME (label_id: 6)   91.94      91.23   91.58  2975
I-GPE (label_id: 7)    88.89      34.78   50.00  23
I-LOC (label_id: 8)    77.18      79.13   78.14  1030
I-MISC (label_id: 9)   28.57      24.00   26.09  75
I-ORG (label_id: 10)   78.67      75.67   77.14  2384
I-PER (label_id: 11)   86.69      90.17   88.40  2687
I-TIME (label_id: 12)  83.21      83.48   83.34  938
-------------------
micro avg              96.95      96.95   96.95  154648
macro avg              78.20      72.98   74.61  154648
weighted avg           96.92      96.95   96.92  154648
```
InferenceTo see how the model performs, we can generate predictions similar to the way we did earlier. Deployment with ONNXHere is an example of generating a single .onnx file from the pre-trained model and delivering the same output. If you don't have ONNX Runtime, you can install it like this:
###Code
# note: each `!` command runs in its own shell, so a bare `! cd ort` would not persist;
# chain the commands so the clone, build, and install all run inside the `ort` directory
! mkdir -p ort
! cd ort && git clone --depth 1 --branch v1.5.1 https://github.com/microsoft/onnxruntime.git .
! cd ort && ./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel
! cd ort && pip install ./build/Linux/Release/dist/onnxruntime_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl
###Output
_____no_output_____
###Markdown
Then run
###Code
import onnxruntime
import torch
from nemo.collections import nlp as nemo_nlp
from nemo.collections.nlp.data.token_classification.token_classification_dataset import BertTokenClassificationInferDataset
from nemo.collections.nlp.modules.common.tokenizer_utils import get_tokenizer
from nemo.collections.nlp.parts.utils_funcs import tensor2list
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
# results = pretrained_ner_model.add_predictions(queries)
#
# for query, result in zip(queries, results):
# print()
# print(f'Query : {query}')
# print(f'Result: {result.strip()}\n')
pretrained_ner_model.export("NER.onnx")
tokenizer = get_tokenizer(tokenizer_name="bert-base-uncased")
dataset = BertTokenClassificationInferDataset(tokenizer=tokenizer, queries=queries, max_seq_length=-1)
infer_datalayer = torch.utils.data.DataLoader(
dataset=dataset,
collate_fn=dataset.collate_fn,
batch_size=32,
shuffle=False,
num_workers=2,
pin_memory=False,
drop_last=False,
)
ort_session = onnxruntime.InferenceSession("NER.onnx")
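# (optional) quick sanity checks on the exported model, assuming the `onnx` package is installed:
# import onnx
# onnx.checker.check_model(onnx.load("NER.onnx"))
# print([i.name for i in ort_session.get_inputs()], [o.name for o in ort_session.get_outputs()])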
label_ids = {'O': 0, 'B-GPE': 1, 'B-LOC': 2, 'B-MISC': 3, 'B-ORG': 4, 'B-PER': 5, 'B-TIME': 6,
'I-GPE': 7, 'I-LOC': 8, 'I-MISC': 9, 'I-ORG': 10, 'I-PER': 11, 'I-TIME': 12}
pad_label = 'O'
results = []
all_preds = []
for batch in infer_datalayer:
input_ids, input_type_ids, input_mask, subtokens_mask = batch
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input_ids),
ort_session.get_inputs()[1].name: to_numpy(input_mask),
ort_session.get_inputs()[2].name: to_numpy(input_type_ids),}
ort_logits = ort_session.run(None, ort_inputs)
logits = torch.from_numpy(ort_logits[0])
subtokens_mask = subtokens_mask > 0.5
preds = tensor2list(logits.argmax(dim=-1)[subtokens_mask])
all_preds.extend(preds)
queries = [q.strip().split() for q in queries]
num_words = [len(q) for q in queries]
if sum(num_words) != len(all_preds):
raise ValueError('Pred and words must have the same length')
ids_to_labels = {v: k for k, v in label_ids.items()}
start_idx = 0
end_idx = 0
for query in queries:
end_idx += len(query)
# extract predictions for the current query from the list of all predictions
preds = all_preds[start_idx:end_idx]
start_idx = end_idx
query_with_entities = ''
for j, word in enumerate(query):
# strip out the punctuation to attach the entity tag to the word not to a punctuation mark
# that follows the word
if word[-1].isalpha():
punct = ''
else:
punct = word[-1]
word = word[:-1]
query_with_entities += word
label = ids_to_labels[preds[j]]
if label != pad_label:
query_with_entities += '[' + label + ']'
query_with_entities += punct + ' '
results.append(query_with_entities.strip())
for query, result in zip(queries, results):
print()
print(f'Query : {" ".join(query)}')  # queries were split into word lists above; join them back for readable output
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py).To run the training script, use:`python token_classification.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning the model with your dataWhen we were training from scratch, the datasets were prepared for training during model initialization. When we are using a pretrained NER model, we need to set up the training and evaluation data before training.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss(class_balancing='weighted_loss')
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)  # use a single GPU; gpus=[1] would require a second GPU (device index 1)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('Config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial, the train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models; for the complete list of HuggingFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation. Also, the pretrained BERT model will be downloaded; note that this can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e.all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run generate prediction similar to the way we did it earlier Generate PredictionsTo see how the model performs, we can generate prediction the same way we did it earlier or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of dev set, but it could be any text file as long as it follows the data format described above.Labels_file is optional here, and if provided will be used to get metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py).To run the training script, use:`python token_classification.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning the model with your dataWhen we were training from scratch, the datasets were prepared for training during model initialization. When we are using a pretrained NER model, we need to set up the training and evaluation data before training.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss(class_balancing='weighted_loss')
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)  # use a single GPU; gpus=[1] would require a second GPU (device index 1)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____
###Markdown
Task Description**Named entity recognition (NER)**, also referred to as entity chunking, identification or extraction, is the task of detecting and classifying key information (entities) in text.For example, in a sentence: `Mary lives in Santa Clara and works at NVIDIA`, we should detect that `Mary` is a person, `Santa Clara` is a location and `NVIDIA` is a company. DatasetIn this tutorial we going to use [GMB(Groningen Meaning Bank)](http://www.let.rug.nl/bjerva/gmb/about.php) corpus for entity recognition. GMB is a fairly large corpus with a lot of annotations. Note, that GMB is not completely human annotated and it’s not considered 100% correct. The data is labeled using the [IOB format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (short for inside, outside, beginning). The following classes appear in the dataset:* LOC = Geographical Entity* ORG = Organization* PER = Person* GPE = Geopolitical Entity* TIME = Time indicator* ART = Artifact* EVE = Event* NAT = Natural PhenomenonFor this tutorial, classes ART, EVE, and NAT were combined into a MISC class due to small number of examples for these classes. NeMo Token Classification Data Format[TokenClassification Model](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/token_classification/token_classification_model.py) in NeMo supports NER and other token level classification tasks, as long as the data follows the format specified below. Token Classification Model requires the data to be split into 2 files: * text.txt and * labels.txt. Each line of the **text.txt** file contains text sequences, where words are separated with spaces, i.e.: [WORD] [SPACE] [WORD] [SPACE] [WORD].The **labels.txt** file contains corresponding labels for each word in text.txt, the labels are separated with spaces, i.e.:[LABEL] [SPACE] [LABEL] [SPACE] [LABEL].Example of a text.txt file:```Jennifer is from New York City .She likes ......```Corresponding labels.txt file:```B-PER O O B-LOC I-LOC I-LOC OO O ......``` To convert an IOB format data to the format required for training, run [examples/nlp/token_classification/data/import_from_iob_format.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/data/import_from_iob_format.py) on your train and dev files, as follows:```python examples/nlp/token_classification/data/import_from_iob_format.py --data_file PATH_TO_IOB_FORMAT_DATAFILE```For this tutorial, we are going to use the preprocessed GMB dataset. Download and preprocess the data¶
###Code
DATA_DIR = "DATA_DIR"
WORK_DIR = "WORK_DIR"
MODEL_CONFIG = "token_classification_config.yaml"
# download preprocessed data
os.makedirs(WORK_DIR, exist_ok=True)
os.makedirs(DATA_DIR, exist_ok=True)
print('Downloading GMB data...')
wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/gmb_v_2.2.0_clean.zip', DATA_DIR)
###Output
_____no_output_____
###Markdown
Let's extract files from the .zip file:
###Code
! unzip {DATA_DIR}/gmb_v_2.2.0_clean.zip -d {DATA_DIR}
DATA_DIR = os.path.join(DATA_DIR, 'gmb_v_2.2.0_clean')
###Output
_____no_output_____
###Markdown
Now, the data folder should contain 4 files: * labels_dev.txt* labels_train.txt* text_dev.txt* text_train.txt
###Code
! ls -l {DATA_DIR}
# let's take a look at the data
print('Text:')
! head -n 5 {DATA_DIR}/text_train.txt
print('\nLabels:')
! head -n 5 {DATA_DIR}/labels_train.txt
###Output
_____no_output_____
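###Markdown
As a side note, the two files line up word-for-word. Below is a tiny sketch (with made-up tokens, not taken from the GMB data) of how one IOB-tagged sentence would be written into this two-file format; for real conversions, use the import_from_iob_format.py script mentioned above.
###Code
# hypothetical single-sentence example of the text.txt / labels.txt format
tokens = ['Jennifer', 'is', 'from', 'New', 'York', 'City', '.']
tags = ['B-PER', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
with open('example_text.txt', 'w') as text_f, open('example_labels.txt', 'w') as labels_f:
    text_f.write(' '.join(tokens) + '\n')    # one sentence per line, words separated by spaces
    labels_f.write(' '.join(tags) + '\n')    # one label per word, same order, same line
###Output
_____no_output_____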
###Markdown
Model Configuration Using an Out-of-the-Box ModelTo use a pretrained NER model, run:
###Code
# this line will download pre-trained NER model from NVIDIA's NGC cloud and instantiate it for you
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained(model_name="NERModel")
###Output
_____no_output_____
###Markdown
To see how the model performs, let’s get model's predictions for a few examples:
###Code
# define the list of queries for inference
queries = [
'we bought four shirts from the nvidia gear store in santa clara.',
'Nvidia is a company.',
'The Adventures of Tom Sawyer by Mark Twain is an 1876 novel about a young boy growing '
+ 'up along the Mississippi River.',
]
results = pretrained_ner_model.add_predictions(queries)
for query, result in zip(queries, results):
print()
print(f'Query : {query}')
print(f'Result: {result.strip()}\n')
###Output
_____no_output_____
###Markdown
Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. Model configurationOur Named Entity Recognition model is comprised of the pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model followed by a Token Classification layer.The model is defined in a config file which declares multiple important sections. They are:- **model**: All arguments that are related to the Model - language model, token classifier, optimizer and schedulers, datasets and any other related information- **trainer**: Any argument to be passed to PyTorch Lightning
###Code
# download the model's configuration file
config_dir = WORK_DIR + '/configs/'
os.makedirs(config_dir, exist_ok=True)
if not os.path.exists(config_dir + MODEL_CONFIG):
print('Downloading config file...')
wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/token_classification/conf/' + MODEL_CONFIG, config_dir)
else:
print('config file already exists')
# this line will print the entire config of the model
config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}'
print(config_path)
config = OmegaConf.load(config_path)
print(OmegaConf.to_yaml(config))
###Output
_____no_output_____
###Markdown
Model Training From Scratch Setting up Data within the configAmong other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.We assume that both training and evaluation files are located in the same directory, and use the default names mentioned during the data download step. So, to start model training, we simply need to specify `model.dataset.data_dir`, like we are going to do below.Also notice that some config lines, including `model.dataset.data_dir`, have `???` in place of paths, this means that values for these fields are required to be specified by the user.Let's now add the data directory path to the config.
###Code
# in this tutorial train and dev datasets are located in the same folder, so it is enough to add the path of the data directory to the config
config.model.dataset.data_dir = DATA_DIR
# if you want to use the full dataset, set NUM_SAMPLES to -1
NUM_SAMPLES = 1000
config.model.train_ds.num_samples = NUM_SAMPLES
config.model.validation_ds.num_samples = NUM_SAMPLES
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem.Let's first instantiate a Trainer object
###Code
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# lets modify some trainer configs
# checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
config.trainer.precision = 16 if torch.cuda.is_available() else 32
# for mixed precision training, uncomment the line below (precision should be set to 16 and amp_level to O1):
# config.trainer.amp_level = O1
# remove distributed training flags
config.trainer.accelerator = None
# setup max number of steps to reduce training time for demonstration purposes of this tutorial
config.trainer.max_steps = 32
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it:
###Code
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# the exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model:
###Code
# get the list of supported BERT-like models, for the complete list of HuggingFace models, see https://huggingface.co/models
print(nemo_nlp.modules.get_pretrained_lm_models_list(include_external=True))
# specify the BERT-like model you want to use
PRETRAINED_BERT_MODEL = "bert-base-uncased"
# add the specified above model parameters to the config
config.model.language_model.pretrained_model_name = PRETRAINED_BERT_MODEL
###Output
_____no_output_____
###Markdown
Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will be prepared for training and evaluation.Also, the pretrained BERT model will be downloaded; note that it can take up to a few minutes depending on the size of the chosen BERT model.
###Code
model_from_scratch = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Monitoring training progressOptionally, you can create a Tensorboard visualization to monitor training progress.
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# start model training
trainer.fit(model_from_scratch)
###Output
_____no_output_____
###Markdown
After training for 5 epochs, with the default config and NUM_SAMPLES = -1 (i.e. all data is used), your model performance should look similar to this: ``` label precision recall f1 support O (label_id: 0) 99.14 99.19 99.17 131141 B-GPE (label_id: 1) 95.86 94.03 94.93 2362 B-LOC (label_id: 2) 83.99 90.31 87.04 5346 B-MISC (label_id: 3) 39.82 34.62 37.04 130 B-ORG (label_id: 4) 78.33 67.82 72.70 2980 B-PER (label_id: 5) 84.36 84.32 84.34 2577 B-TIME (label_id: 6) 91.94 91.23 91.58 2975 I-GPE (label_id: 7) 88.89 34.78 50.00 23 I-LOC (label_id: 8) 77.18 79.13 78.14 1030 I-MISC (label_id: 9) 28.57 24.00 26.09 75 I-ORG (label_id: 10) 78.67 75.67 77.14 2384 I-PER (label_id: 11) 86.69 90.17 88.40 2687 I-TIME (label_id: 12) 83.21 83.48 83.34 938 ------------------- micro avg 96.95 96.95 96.95 154648 macro avg 78.20 72.98 74.61 154648 weighted avg 96.92 96.95 96.92 154648``` InferenceTo see how the model performs, we can run inference similar to the way we did it earlier. Generate PredictionsTo see how the model performs, we can generate predictions the same way we did earlier, or we can use our model to generate predictions for a dataset from a file, for example, to perform final evaluation or to do error analysis.Below, we are using a subset of the dev set, but it could be any text file as long as it follows the data format described above.The labels file is optional here, and if provided it will be used to compute metrics.
###Code
# let's first create a subset of our dev data
! head -n 100 {DATA_DIR}/text_dev.txt > {DATA_DIR}/sample_text_dev.txt
! head -n 100 {DATA_DIR}/labels_dev.txt > {DATA_DIR}/sample_labels_dev.txt
###Output
_____no_output_____
###Markdown
Now, let's generate predictions for the provided text file.If labels file is also specified, the model will evaluate the predictions and plot confusion matrix.
###Code
model_from_scratch.evaluate_from_file(
text_file=os.path.join(DATA_DIR, 'sample_text_dev.txt'),
labels_file=os.path.join(DATA_DIR, 'sample_labels_dev.txt'),
output_dir=exp_dir,
)
###Output
_____no_output_____
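###Markdown
Not part of the original tutorial, but worth keeping in mind: NeMo models can typically be serialized to a single `.nemo` archive via `save_to`, which is convenient once you are happy with the trained model. A minimal sketch (the file name is arbitrary; check your NeMo version's API if unsure):
###Code
# save the trained model to a single archive file for later reuse
model_from_scratch.save_to('trained_ner_model.nemo')
###Output
_____no_output_____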
###Markdown
Training ScriptIf you have NeMo installed locally, you can also train the model with [nlp/token_classification/token_classification.py](https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/token_classification.py).To run the training script, use:`python token_classification.py model.dataset.data_dir=PATH_TO_DATA_DIR` Finetuning the model with your dataWhen we were training from scratch, the datasets were prepared for training during the model initialization. When we are using a pretrained NER model, we need to set up training and evaluation data before training.
###Code
# let's reload our pretrained NER model
pretrained_ner_model = nemo_nlp.models.TokenClassificationModel.from_pretrained('NERModel')
# then we need to setup the data dir to get class weights statistics
pretrained_ner_model.update_data_dir(DATA_DIR)
# setup train and validation Pytorch DataLoaders
pretrained_ner_model.setup_training_data()
pretrained_ner_model.setup_validation_data()
# then we're setting up loss, use class_balancing='weighted_loss' if you want to add class weights to the CrossEntropyLoss
pretrained_ner_model.setup_loss(class_balancing='weighted_loss')
# and now we can create a PyTorch Lightning trainer and call `fit` again
# for this tutorial we are setting fast_dev_run to True, and the trainer will run 1 training batch and 1 validation batch
# for actual model training, disable the flag
fast_dev_run = True
trainer = pl.Trainer(gpus=1, fast_dev_run=fast_dev_run)
trainer.fit(pretrained_ner_model)
###Output
_____no_output_____ |
examples/SpecifyAColormap.ipynb | ###Markdown
To change the colormap from the widget user interface, select the desired colormap using the dropdown above the transfer function editor.We can also probe the current value of the colormap with the `cmap` viewer traitlet.
###Code
viewer.cmap
###Output
_____no_output_____
###Markdown
Or, change the value of the colormap by assigning the `cmap` property to the desired *itkwidgets* colormap string identifier.
###Code
viewer.cmap = 'gist_earth'
###Output
_____no_output_____
###Markdown
Or, specify a custom colormap with an *Nx3* NumPy array.The colormap is specified with a series of `[red, green, blue]` values ranging from 0.0 to 1.0.For example, to manually create a grayscale colormap:
###Code
colormap = np.array([[0.0, 0.0, 0.0],
[1.0, 1.0, 1.0]])
viewer.cmap = colormap
###Output
_____no_output_____
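###Markdown
The same Nx3 array format can also hold more than two rows of color stops, for example a black-to-red-to-yellow ramp (the specific colors here are arbitrary, chosen only for illustration):
###Code
custom_colormap = np.array([[0.0, 0.0, 0.0],   # black
                            [1.0, 0.0, 0.0],   # red
                            [1.0, 1.0, 0.0]])  # yellow
viewer.cmap = custom_colormap
###Output
_____no_output_____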
###Markdown
Or, specify a custom [`matplotlib.colors.LinearSegmentedColormap`](https://matplotlib.org/api/_as_gen/matplotlib.colors.LinearSegmentedColormap.htmlmatplotlib.colors.LinearSegmentedColormap) from [matplotlib](https://matplotlib.org/tutorials/colors/colormaps.htmlsphx-glr-tutorials-colors-colormaps-py).
###Code
print(type(matplotlib.cm.autumn))
viewer.cmap = matplotlib.cm.autumn
###Output
<class 'matplotlib.colors.LinearSegmentedColormap'>
###Markdown
More colormaps in matplotlib format are available from the [colorcet](https://colorcet.pyviz.org/user_guide/Continuous.html) and [palettable](https://jiffyclub.github.io/palettable/) packages.
###Code
viewer.cmap = colorcet.cm.isolum
viewer.cmap = palettable.scientific.sequential.Acton_14.mpl_colormap
###Output
_____no_output_____
###Markdown
It is also possible to set the desired colormap when creating the viewer with the `cmap` keyword argument. Variables for common preset colormaps are available at `itkwidgets.cm.*`.
###Code
view(image, gradient_opacity=0.5, cmap=itkwidgets.cm.bone, annotations=False, ui_collapsed=True)
###Output
_____no_output_____
###Markdown
To change the colormap from the widget user interface, select the desired colormap using the dropdown above the transfer function editor.
###Code
# Probe the current value of the colormap
viewer.cmap
# Or, change the value of the colormap by assigning the `cmap` property to the desired string
viewer.cmap = 'gist_earth'
# It is also possible to set the desired colormap when creating the viewer with the `cmap` keyword argument
# Variables for common colormaps are available at itkwidgets.cm.*
view(image, gradient_opacity=0.5, cmap=itkwidgets.cm.bone, annotations=False, ui_collapsed=True)
###Output
_____no_output_____
###Markdown
To change the colormap from the widget user interface, select the desired colormap using the dropdown above the transfer function editor. We change the value of the colormap by assigning the `cmap` property to the desired *itkwidgets* colormap string identifier. This is a `list` where elements in the list are colormaps for image components / channels.
###Code
viewer.cmap = ['gist_earth',]
###Output
_____no_output_____
###Markdown
Or, specify a custom colormap with an *Nx3* NumPy array.The colormap is specified with a series of `[red, green, blue]` values ranging from 0.0 to 1.0.For example, to manually create a grayscale colormap:
###Code
colormap = np.array([[0.0, 0.0, 0.0],
[1.0, 1.0, 1.0]])
viewer.cmap = [colormap,]
###Output
_____no_output_____
###Markdown
Or, specify a custom [`matplotlib.colors.LinearSegmentedColormap`](https://matplotlib.org/api/_as_gen/matplotlib.colors.LinearSegmentedColormap.htmlmatplotlib.colors.LinearSegmentedColormap) from [matplotlib](https://matplotlib.org/tutorials/colors/colormaps.htmlsphx-glr-tutorials-colors-colormaps-py).
###Code
print(type(matplotlib.cm.autumn))
viewer.cmap = [matplotlib.cm.autumn,]
###Output
<class 'matplotlib.colors.LinearSegmentedColormap'>
###Markdown
More colormaps in matplotlib format are available from the [colorcet](https://colorcet.pyviz.org/user_guide/Continuous.html) and [palettable](https://jiffyclub.github.io/palettable/) packages.
###Code
viewer.cmap = [colorcet.cm.isolum]
viewer.cmap = [palettable.scientific.sequential.Acton_14.mpl_colormap]
###Output
_____no_output_____
###Markdown
It is also possible to set the desired colormap when creating the viewer with the `cmap` keyword argument. Variables for common preset colormaps are available at `itkwidgets.cm.*`.
###Code
view(image, gradient_opacity=0.5, cmap=itkwidgets.cm.bone, annotations=False, ui_collapsed=True)
###Output
_____no_output_____ |
Chapter_4/Chapter_4-1_mac.ipynb | ###Markdown
Chapter 4: Let's Build a Practical Application Note: this notebook is for Mac only. Windows users should refer to "Chapter_4-1.ipynb". Part of the code from the book has been modified so that it runs on a Mac. 4.1 Let's Build an Application Launcher (1) 4.1.1 Saving and Loading a Configuration File Note: the code below is an example that uses TextEdit.app, Safari.app, and Preview.app, which ship with macOS.
###Code
# Listing 4.1.1: Generating the configuration file
# Import configparser
import configparser
# Create a ConfigParser instance
config = configparser.ConfigParser()
# Contents of the configuration file
config["Run1"] = {
"app1": "/Applications/TextEdit.app",
"app2": "/Applications/Safari.app"
}
# Write to the configuration file
with open("config.ini", "w+") as file:
config.write(file)
# Listing 4.1.2: The resulting configuration file (config.ini), shown here as comments since it is not Python code
# [Run1]
# app1 = /Applications/TextEdit.app
# app2 = /Applications/Safari.app
# Listing 4.1.3: Example of adding sections and variables
# Contents of the configuration file
config["Run1"] = {
"app1": "/Applications/TextEdit.app",
"app2": "/Applications/Safari.app"
}
config["Run2"] = {
"app1": "/Applications/TextEdit.app",
"app2": "/Applications/Safari.app",
"app3": "/Applications/Preview.app"
}
# Listing 4.1.4: Loading the configuration
# Specify the configuration file to read
config.read("config.ini")
# Get a value from the configuration file
read_base = config["Run1"]
print(read_base.get("app1"))
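# Beyond Listing 4.1.4 (my own sketch, not from the book): once the paths have been read back
# from config.ini, the registered apps can be launched on macOS with the `open` command.
import subprocess
for _, app_path in config["Run1"].items():
    subprocess.run(["open", app_path])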
###Output
_____no_output_____ |
12_pytorch/02_variables_gradient.ipynb | ###Markdown
2.1 Variable A Variable wraps a Tensor and allows accumulating gradients. (...What?) To create a Variable with PyTorch, we need an extra import:
###Code
import torch
from torch.autograd import Variable
a = Variable(torch.ones(2, 2), requires_grad=True)
a
b = Variable(torch.ones(2, 2), requires_grad=True)
print(a + b)
print(torch.add(a, b))
print(a * b)
print(torch.mul(a, b))
###Output
tensor([[1., 1.],
[1., 1.]], grad_fn=<ThMulBackward>)
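###Markdown
Side note (not in the original notebook): since PyTorch 0.4, `Variable` has been merged into `Tensor`, so the same behaviour can be obtained without the extra import by passing `requires_grad=True` directly. A small sketch:
###Code
# modern equivalent of the Variable-based code above
a = torch.ones(2, 2, requires_grad=True)
b = torch.ones(2, 2, requires_grad=True)
print(a + b)
###Output
_____no_output_____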
###Markdown
2.2 GradientAccumulating gradients...? Let's calculate this equation in PyTorch:$$y_i = 5(x_i + 1)^2$$
###Code
x = Variable(torch.ones(2), requires_grad=True)
x
###Output
_____no_output_____
###Markdown
$$y_i | _{x_i=1} = 5(1+1)^2 = 5(2)^2 = 5(4) = 20$$
###Code
y = 5 * (x + 1) ** 2
y
###Output
_____no_output_____
###Markdown
To calculate gradients, backward should be called only on a scalar. This means we need to reduce the output we have to a single value (a 1-element tensor). $$o = \frac{1}{2} \sum y_i$$
###Code
o = (1/2) * torch.sum(y)
o
###Output
_____no_output_____
###Markdown
Recap`y` equation : $y_i = 5(x_i + 1)^2$ `o` equation : $o = \frac{1}{2} \sum y_i$**Rewrite `o` equation ...**$$o = \frac{1}{2} \sum 5(x_i + 1)^2$$$$\frac{\partial o}{\partial x_i} = \frac{1}{2}[10(x_i+1)]$$$$\frac{\partial o}{\partial x_i}\Big| _{x_i=1} = \frac{1}{2}[10(1+1)] = \frac{20}{2} = 10$$
###Code
# execute backward
o.backward()
# calculate gradient
x.grad
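# expected: tensor([10., 10.]), one entry of 10 per element of x, matching the hand derivation above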
###Output
_____no_output_____ |
segmentation_UNET_COVID_19.ipynb | ###Markdown
###Code
import os
import PIL
import numpy as np
from keras.models import Sequential
import matplotlib.pyplot as plt
from keras.layers import Flatten
from keras.layers import Input, Dense, Dropout, Activation
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D, GlobalAveragePooling2D, Flatten, UpSampling2D, concatenate,Reshape, Permute
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.utils import np_utils, plot_model, to_categorical
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
# /content/drive/My Drive/COVID_19_CNN/dataset
image = PIL.Image.open("/content/drive/My Drive/COVID_19_CNN/dataset/train/COVID/Covid (2).png")
image = image.convert('RGB')
image = np.array(image)
print(image.shape)
#Image_shape= X_train[0].shape
from google.colab import drive
drive.mount('/content/drive')
"""
classifier = Sequential()
classifier.add(Convolution2D(32,4,4,input_shape = (256,256,3), activation = 'relu'))
classifier.add (MaxPooling2D(pool_size=(2,2)))
classifier.add(Convolution2D(64, 3, 3, activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
classifier.add(Dense(output_dim = 128, activation = 'relu'))
classifier.add(Dense(output_dim = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.summary()
"""
inputs = Input((512, 512, 1), name= 'inputs')
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal', name= 'conv1_1')(inputs)
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal', name= 'conv1_2')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2), name= 'conv1_3')(conv1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
drop4 = Dropout(0.5)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
drop5 = Dropout(0.5)(conv5)
up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
merge6 = concatenate([drop4,up6], axis = 3)
conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)
up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
merge7 = concatenate([conv3,up7], axis = 3)
conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)
up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
merge8 = concatenate([conv2,up8], axis = 3)
conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)
up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
merge9 = concatenate([conv1,up9], axis = 3)
conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
#conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv10 = Conv2D(5, (1,1), activation = 'softmax',)(conv9)
model = Model(inputs=inputs, outputs=conv10)
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
plot_model(model, to_file='model.png')
data_gen_args = dict(rotation_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
mask_types=[0, 85, 127, 170, 255]
def fix_mask(mask, batch_size=2):
for i in range(batch_size):
temp= np.zeros((512,512))
for j in range(len(mask_types)):
if j==0: continue
img= (mask[i,:,:,0]==mask_types[j])*mask_types[j]
temp = temp+img
mask[i,:,:,0] = temp
return mask
def adjustData(img,mask,flag_multi_class, num_class):
if(flag_multi_class):
"""
img = img / 255
mask = mask[:,:,:,0] if(len(mask.shape) == 4) else mask[:,:,0]
new_mask = np.zeros(mask.shape + (num_class,))
for i in range(num_class):
new_mask[mask == i,i] = 1
new_mask = np.reshape(new_mask,(new_mask.shape[0],new_mask.shape[1]*new_mask.shape[2],new_mask.shape[3])) if flag_multi_class else np.reshape(new_mask,(new_mask.shape[0]*new_mask.shape[1],new_mask.shape[2]))
mask = new_mask
"""
img = img / 255
one_hot= np.zeros((mask.shape[0],512,512, num_class))
#print(one_hot.shape)
for k in range(mask.shape[0]):
data= (mask[k,:,:,0]!=0)*1
for i in range(num_class):
if i==0: continue
data= data+ (mask[k,:,:,0]>mask_types[i])*1
#print(np.unique(data))
#print(data.shape)
for i in range(num_class):
one_hot[k,:,:,i]= (data==i)*1
mask= one_hot
elif(np.max(img) > 1):
img = img / 255
mask = mask /255
mask[mask > 0.5] = 1
mask[mask <= 0.5] = 0
return (img,mask)
def trainGenerator(batch_size,train_path,image_folder,mask_folder,aug_dict,image_color_mode = "grayscale",
mask_color_mode = "grayscale",image_save_prefix = "image",mask_save_prefix = "mask",
flag_multi_class = False,num_class = 2,save_to_dir = None,target_size = (512,512),seed = 1):
'''
can generate image and mask at the same time
use the same seed for image_datagen and mask_datagen to ensure the transformation for image and mask is the same
if you want to visualize the results of generator, set save_to_dir = "your path"
'''
image_datagen = ImageDataGenerator(**aug_dict)
mask_datagen = ImageDataGenerator(**aug_dict)
image_generator = image_datagen.flow_from_directory(
train_path,
classes = [image_folder],
class_mode = None,
color_mode = image_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = image_save_prefix,
seed = seed)
mask_generator = mask_datagen.flow_from_directory(
train_path,
classes = [mask_folder],
class_mode = None,
color_mode = mask_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = mask_save_prefix,
seed = seed)
train_generator = zip(image_generator, mask_generator)
for (img,mask) in train_generator:
mask = fix_mask(mask, batch_size=2)
img,mask = adjustData(img,mask,flag_multi_class,num_class)
yield (img, mask)
myGene = trainGenerator(2,'/content/drive/My Drive/CT_SCAN_SARS-COV_2_datasets/dataset/medical_segmentation/part1',
'training_image','Training_mask',data_gen_args,save_to_dir = None,
flag_multi_class=True, num_class=len(mask_types))
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
image_generator = image_datagen.flow_from_directory(
'/content/drive/My Drive/CT_SCAN_SARS-COV_2_datasets/dataset/medical_segmentation/part1',
classes = ['training_image'],
class_mode = None,
color_mode = "grayscale",
target_size = (512,512),
batch_size = 2,
save_to_dir = None,
save_prefix = "image",
seed = 1)
mask_generator = mask_datagen.flow_from_directory(
'/content/drive/My Drive/CT_SCAN_SARS-COV_2_datasets/dataset/medical_segmentation/part1',
classes = ['Training_mask'],
class_mode = None,
color_mode = "grayscale",
target_size = (512,512),
batch_size = 2,
save_to_dir = None,
save_prefix = "mask",
seed = 1)
train_generator = zip(image_generator, mask_generator)
ii=0
c=[]
for (img,mask) in train_generator:
#print(img.shape[0])
#print(mask.shape)
#plt.imshow(mask[0,:,:,0])
#plt.show()
#plt.imshow(img[1,:,:,0])
mask = fix_mask(mask, batch_size=2)
#print(np.unique(mask[0,:,:,0]))
#img,mask = adjustData(img,mask,flag_multi_class= True,num_class=len(mask_types))
img = img / 255
one_hot= np.zeros((mask.shape[0],512,512, len(mask_types)))
print(one_hot.shape)
for k in range(mask.shape[0]):
data= (mask[k,:,:,0]!=0)*1
for i in range(len(mask_types)):
if i==0: continue
data= data+ (mask[k,:,:,0]>mask_types[i])*1
print(np.unique(data))
print(data.shape)
for i in range(len(mask_types)):
one_hot[k,:,:,i]= (data==i)*1
mask= one_hot
#mask= one_hot
ii=ii+1
if ii==1: break
def fix_mask(mask, batch_size=2):
for i in range(batch_size):
temp= np.zeros((512,512))
for j in range(len(mask_types)):
if j==0: continue
img= (mask[i,:,:,0]==mask_types[j])*mask_types[j]
temp = temp+img
mask[i,:,:,0] = temp
return mask
#plt.imshow(mask[0,:,:,0])
#maskkk= mask
#maskkk= fix_mask(maskkk, batch_size=2)
#for i in range(100,200):
#print(mask[0,200,i,0])
#c.shape[0]
# keep the returned History object so the curves can be plotted below
# (note: the val_accuracy / val_loss curves require validation_data to be passed to fit_generator)
history = model.fit_generator(myGene, steps_per_epoch=200, epochs=10)
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
plt.imshow(Y[0,:,:])
mask= Y[1,:,:] # 0 85 127 170 255
#for i in range(50, 80):
print(np.unique(mask))
plt.imshow(mask)
plt.imshow(mask)
data= (mask!=0)*1
mask_types=[0, 85, 127, 170, 255]
for i in range(len(mask_types)):
if i==0: continue
data= data+ (mask>mask_types[i])*1
plt.imshow(data)
data[400,400]
print(np.unique(data))
one_hot= np.zeros((512,512, len(mask_types)))
print(one_hot.shape)
for i in range(len(mask_types)):
one_hot[:,:,i]= (data==i)*1
plt.imshow(one_hot[:,:,0])
one_hot[2,2,0]
print(one_hot.shape)
import model_unet
model = model_unet.unet(input_size = (512,512,1))
###Output
_____no_output_____ |
tradingStrategy.ipynb | ###Markdown
CS7641 Machine Learning*Application of Machine Learning in Pairs Trading*
###Code
import pandas as pd
import numpy as np
import os
import datetime
import math
import sklearn
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
###Output
_____no_output_____
###Markdown
Price History TableHere is the price table we used for this function. I used the top 3 stocks as an example; I'll change the data once the actual pairs are available.
###Code
# Import training dataset
training_set = pd.read_csv("training_data.csv")
# Remove all the data except the pairs we choose
pairs_list = [[83186, 89003],
[81294, 82581],
[53640, 83597],
[43350, 82651],
[12781, 48531],
[44644, 90458],
[21742, 76639],
[51633, 58819],
[24969, 24985],
[81294, 83186],
[42585, 83621],
[10395, 53640],
[23931, 48531],
[60186, 81095],
[13856, 48531],
[16548, 81577]
]
for i in range(len(pairs_list)):
if i==0:
pairs_training_set = \
training_set.loc[training_set['PERMNO']==pairs_list[0][0]]
pairs_training_set = pd.concat([pairs_training_set,
training_set.loc[training_set['PERMNO']==pairs_list[0][1]]])
else:
pairs_training_set = pd.concat([pairs_training_set,
training_set.loc[training_set['PERMNO']==pairs_list[i][0]]])
pairs_training_set = pd.concat([pairs_training_set,
training_set.loc[training_set['PERMNO']==pairs_list[i][1]]])
training_set = pairs_training_set
training_set.head(3)
# Filtering the table only for the price history
filter_col = ['PERMNO']
filter_col2 = [col for col in training_set if col.startswith('price_')]
filter_col.extend(filter_col2)
training_set_price = training_set[filter_col]
training_set_price.head(3)
###Output
_____no_output_____
###Markdown
Create the spread function (price pair's relation)We will create the spread function here. The basic spread is defined as below:$Spread = \log(a) - n\log(b)$where 'a' and 'b' are the prices of stocks A and B respectively and 'n' is the hedge ratio. Our target is finding the dynamics of the spread using machine learning. We will use supervised machine learning to implement this part, and the possible candidates are 'linear regression' and 'support vector machine (SVM)'
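In the code below, the spread is additionally normalized into a z-score by dividing by a standard deviation estimated on the training window, $z = Spread/\sigma$ (this corresponds to the `z_score = spread/w_std` and `z_score = spread/spread_std` lines in the implementation); the z-score is what the strategy later uses to decide when to buy or sell.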
###Code
def create_spread_function(a, b, start_t, end_t, alg='log'):
"""
Apply the supervised machine learning to find the dynamics of spread
Args:
a, b: Stock A and B's price history
start_t, end_t: start/end time of the analysis on the data.
They use the same unit with the data. For example, 0 means the
first data of the a and b.
(Analyze the data from a[start_t], b[start_t] to a[end_t],
b[end_t])
alg: Type of algorithm. The 'log' means the log normalization
Return:
spread_func: The function of spread.
Output of this function is spread and z_score.
"""
def log_spread_func(a, b):
"""
Calculate the spread and z-score based on the log spread function.
Args:
a, b: Current stocks' prices
Return:
spread: The relation between a and b
z-score: Normalized relation between a and b
"""
spread = math.log(b) - w_avg * math.log(a)
z_score = spread/w_std
return (spread, z_score)
def lr_spread_func(a, b):
"""
Calculate the spread and z-score based on the linear regression.
Args:
a, b: Current stocks' prices
Return:
spread: The relation between a and b
z-score: Normalized relation between a and b
"""
# Change the a to polynomial form
a, b = np.log(a), np.log(b)
a = a * np.ones((1, 1))
poly = PolynomialFeatures(degree = degree)
a = poly.fit_transform(a)
# Calculate the spread & z_score
spread = b - regr.predict(a)
z_score = spread/spread_std
return (spread, z_score)
# Slice the date
target_a = a[start_t:end_t]
target_b = b[start_t:end_t]
# use the log function
target_a = np.log(target_a)
target_b = np.log(target_b)
total_date = end_t-start_t
# Find the coefficient of the log normalization
if alg == 'log':
# Calculate the weight
w_list = target_b/target_a
w_avg = np.average(w_list)
# Calculate the standard deviation for the z-score calculation
w_std = np.std(w_list)
return log_spread_func
# Find the coefficient of the linear regression
elif alg == 'lr':
# Change the data from 1-D to 2-D
target_a = target_a[:,np.newaxis]
# Change the data to the polonomial
degree = 4
poly = PolynomialFeatures(degree = degree)
target_a = poly.fit_transform(target_a)
# Train the data using linear regression
regr = linear_model.LinearRegression()
regr.fit(target_a, target_b)
# Calculate the standard deviation of spread for the z-score calculation
b_pred = regr.predict(target_a)
spread = target_b-b_pred
spread_std = np.std(spread)
return lr_spread_func
print("Check the algorithm. Input was " + alg)
pass
###Output
_____no_output_____
###Markdown
How to useHere, we will see how to use the spread function.Right now, the results are poor because stocks a and b are randomly chosen and do not have any relation.
###Code
for i in range(len(pairs_list)):
print("pairs = (" + str(pairs_list[i]) + ")\n")
# Generate input for the test
a = training_set_price.iloc[2*i].to_numpy()[1:]
b = training_set_price.iloc[2*i+1].to_numpy()[1:]
# Check the function based on the log normalization
spread_func = create_spread_function(a, b, 0, 1000, 'log')
(spread, z_score) = spread_func(a[0], b[0])
# Generate the graph about log based z_score
x = np.arange(1000)
z_score_history = np.zeros((1000))
for i in range(1000):
(spread, z_score_history[i]) = spread_func(a[i], b[i])
plt.plot(x, z_score_history)
plt.show()
# Check the function based on the linear regression
spread_func = create_spread_function(a, b, 0, 1000, 'lr')
(spread, z_score) = spread_func(a[0], b[0])
# Generate the graph about log based z_score
for i in range(1000):
(spread, z_score_history[i]) = spread_func(a[i], b[i])
plt.plot(x, z_score_history)
plt.show()
print("======================================================")
###Output
pairs = ([83186, 89003])
###Markdown
Generate the z-score history listWe will generate the z-score list.The spread function we use is fit on a fixed number of days of previous price history (we will call this 'window_width').We will update the spread function every 'update_period' days, like a moving window.The z-score history will be used by our strategy to decide when to buy/sell the stocks.
###Code
def gen_z_score_history(price_a, price_b, windows_width=700, spread_func_update_period=30):
"""
Generate the z-scores history
Args:
price_a, price_b: stock's present price history dataset (T)
windows_width: Width training data (day)
spread_func_update_period: The period of spread function update (day)
Return:
z_score_list: z-score history (T-windows_width)
"""
# Initialization
T = len(price_a)
z_score_list = np.zeros((T-windows_width))
a = price_a
b = price_b
# Calculate the z_score one by one.
for t in range(T-windows_width):
# Generate the spread_function for every update_period
if t % spread_func_update_period==0:
spread_func = create_spread_function(
a, b, t, t + windows_width, 'lr')
# Generate the z-score with spread_function
spread, z_score = spread_func(a[t], b[t])
z_score_list[t] = z_score
return z_score_list
###Output
_____no_output_____
###Markdown
How to run this FunctionGenerate one z-score history and plot the graph
###Code
# Run the function with one stock pair.
price_history = training_set_price.to_numpy()[:2,1:]
z_score_history = gen_z_score_history(price_history[0], price_history[1])
# Plot the graph.
x = np.arange(len(z_score_history))
plt.plot(x, z_score_history)
plt.show()
###Output
_____no_output_____
###Markdown
Run the z_score_history generation function for all the pairs
###Code
z_score_history_list = np.zeros((1, 1))
# Run the z_score_history generation function for each pair.
for pair in pairs_list:
# Generate the price_history table for each pair.
price_history_a = training_set_price.loc[
training_set_price['PERMNO'] == pair[0]].drop(columns=['PERMNO'])
price_history_a = price_history_a.to_numpy()[0]
price_history_b = training_set_price.loc[
training_set_price['PERMNO'] == pair[1]].drop(columns=['PERMNO'])
price_history_b = price_history_b.to_numpy()[0]
# Run the z-score history generation function for each pair.
try:
z_score_history = gen_z_score_history(
price_history_a, price_history_b)[np.newaxis]
z_score_history_list = np.append(
z_score_history_list, z_score_history, axis=0)
except:
z_score_history_list = gen_z_score_history(
price_history_a, price_history_b)[np.newaxis]
print(z_score_history_list.shape)
with open('z_score_history.npy', 'wb') as outfile:
np.save(outfile, z_score_history_list)
###Output
(16, 1566)
###Markdown
CS7641 Machine Learning*Application of Machine Learning in Pairs Trading*
###Code
import pandas as pd
import numpy as np
import os
import datetime
import math
import sklearn
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Price History TableHere is the price table we used for this function.
###Code
# Pairs list from the clutering
pairs_list = [[43350, 82651],
[44644, 90458],
[24969, 24985],
[42585, 83621],
[60186, 81095],
[16548, 81577]]
# Merge the training data and testing data, because we use the previous 700 days
# data as a training data.
training_set = pd.read_csv("training_data.csv")
# Filtering the table only for the price history
filter_col = ['PERMNO']
filter_col2 = [col for col in training_set if col.startswith('price_')]
filter_col.extend(filter_col2)
training_set = training_set[filter_col]
for i in range(len(pairs_list)):
if i==0:
training_pair_set = \
training_set.loc[training_set['PERMNO']==pairs_list[0][0]]
training_pair_set = pd.concat([training_pair_set,
training_set.loc[training_set['PERMNO']==pairs_list[0][1]]])
else:
training_pair_set = pd.concat([training_pair_set,
training_set.loc[training_set['PERMNO']==pairs_list[i][0]]])
training_pair_set = pd.concat([training_pair_set,
training_set.loc[training_set['PERMNO']==pairs_list[i][1]]])
training_pair_set = training_pair_set.reset_index()
training_pair_set = training_pair_set.drop(columns=['index'])
#print(training_pair_set)
training_pair_set.head(3)
testing_set = pd.read_csv("testing_data.csv")
for i in range(len(pairs_list)):
if i==0:
testing_pair_set = \
testing_set.loc[testing_set['PERMNO']==pairs_list[0][0]]
testing_pair_set = pd.concat([testing_pair_set,
testing_set.loc[testing_set['PERMNO']==pairs_list[0][1]]])
else:
testing_pair_set = pd.concat([testing_pair_set,
testing_set.loc[testing_set['PERMNO']==pairs_list[i][0]]])
testing_pair_set = pd.concat([testing_pair_set,
testing_set.loc[testing_set['PERMNO']==pairs_list[i][1]]])
testing_pair_set = testing_pair_set.reset_index().drop(columns=['index'])
#print(testing_pair_set)
testing_pair_set.head(3)
testing_pair_set = testing_pair_set.drop(columns=['PERMNO'])
total_set = pd.concat([training_pair_set, testing_pair_set], axis=1)
#print(total_set)
total_set.head(3)
# Remove all the data except the pairs we choose
total_set.to_csv('pairs_total_stock.csv', index=False)
###Output
_____no_output_____
###Markdown
Create the spread function (price pair's relation)We will create the spread function here. The basic spread is defined as below:$Spread = \log(a) - n\log(b)$where 'a' and 'b' are the prices of stocks A and B respectively and 'n' is the hedge ratio. Our target is finding the dynamics of the spread using machine learning. We will use supervised machine learning to implement this part, and the possible candidates are 'linear regression' and 'support vector machine (SVM)'
###Code
def create_spread_function(a, b, start_t, end_t, alg='log'):
"""
Apply the supervised machine learning to find the dynamics of spread
Args:
a, b: Stock A and B's price history
start_t, end_t: start/end time of the analysis on the data.
They use the same unit with the data. For example, 0 means the
first data of the a and b.
(Analyze the data from a[start_t], b[start_t] to a[end_t],
b[end_t])
alg: Type of algorithm. The 'log' means the log normalization
Return:
spread_func: The function of spread.
Output of this function is spread and z_score.
"""
def log_spread_func(a, b):
"""
Calculate the spread and z-score based on the log spread function.
Args:
a, b: Current stocks' prices
Return:
spread: The relation between a and b
z-score: Normalized relation between a and b
"""
spread = math.log(b) - w_avg * math.log(a)
z_score = spread/w_std
return (spread, z_score)
def lr_spread_func(a, b):
"""
Calculate the spread and z-score based on the linear regression.
Args:
a, b: Current stocks' prices
Return:
spread: The relation between a and b
z-score: Normalized relation between a and b
"""
# Change the a to polynomial form
a, b = np.log(a), np.log(b)
a = a * np.ones((1, 1))
poly = PolynomialFeatures(degree = best_degree)
a = poly.fit_transform(a)
# Calculate the spread & z_score
spread = b - regr.predict(a)
z_score = spread/spread_std
return (spread, z_score)
# Slice the date
target_a = a[start_t:end_t]
target_b = b[start_t:end_t]
# use the log function
target_a = np.log(target_a)
target_b = np.log(target_b)
total_date = end_t-start_t
# Find the coefficient of the log normalization
if alg == 'log':
# Calculate the weight
w_list = target_b/target_a
w_avg = np.average(w_list)
# Calculate the standard deviation for the z-score calculation
w_std = np.std(w_list)
return log_spread_func
# Find the coefficient of the linear regression
elif alg == 'lr':
# Initialization
min_cv_n = float("inf")
best_degree = 0
total_len = target_a.size
# Permute the a and b for training dataset & validation dataset
permute_order = np.random.permutation(total_len)
target_a = target_a[permute_order]
target_b = target_b[permute_order]
# Divide to train and validation datasets
train_num = int(target_a.size/3*2)
train_a = target_a[:train_num]
train_b = target_b[:train_num]
valid_a = target_a[train_num:]
valid_b = target_b[train_num:]
# Change the datasets from 1-D to 2-D
train_a = train_a[:, np.newaxis]
valid_a = valid_a[:, np.newaxis]
# Find the best degree
for degree in range(1, 10, 1):
# Change the train datasets to polynomial form
poly = PolynomialFeatures(degree = degree)
poly_train_a = poly.fit_transform(train_a)
poly_valid_a = poly.fit_transform(valid_a)
# Train the model with Lasso linear regression
# We used the Lasso instead fo Ridge because it's better
# https://hackernoon.com/practical-machine-learning-ridge-regression-vs-lasso-a00326371ece
regr = linear_model.LassoCV(cv=5)
regr.fit(poly_train_a, train_b)
# Calculate the error
cv_n = np.average((valid_b - regr.predict(poly_valid_a))**2)
# Check the best degree
if cv_n < min_cv_n:
best_degree = degree
min_cv_n = cv_n
if best_degree == 0:
print("Cross-validation error")
# Train again with the best degree
poly = PolynomialFeatures(degree=best_degree)
poly_train_a = poly.fit_transform(train_a)
regr = linear_model.LassoCV(cv=5)
regr.fit(poly_train_a, train_b)
# Calculate the standard deviation of spread for the z-score calculation
b_pred = regr.predict(poly_train_a)
spread = train_b - b_pred
spread_std = np.std(spread)
return lr_spread_func
print("Check the algorithm. Input was " + alg)
pass
###Output
_____no_output_____
###Markdown
How to useHere, we will see how to use the spread function.Right now, the results are poor because stocks a and b are randomly chosen and do not have any relation.
###Code
training_set_price = total_set.drop(columns=['PERMNO'])
for i in range(len(pairs_list)):
print("pairs = (" + str(pairs_list[i]) + ")\n")
# Generate input for the test
a = training_set_price.iloc[2*i].to_numpy()[1:]
b = training_set_price.iloc[2*i+1].to_numpy()[1:]
# Check the log based function on the log normalization
spread_func = create_spread_function(a, b, 0, 1000, 'log')
(spread, z_score) = spread_func(a[0], b[0])
# Generate the graph about log based z_score
x = np.arange(1000)
z_score_history = np.zeros((1000))
for i in range(1000):
(spread, z_score_history[i]) = spread_func(a[i], b[i])
plt.title('log based z-score')
plt.plot(x, z_score_history)
plt.show()
# Check the Lasso based function on the linear regression
spread_func = create_spread_function(a, b, 0, 1000, 'lr')
(spread, z_score) = spread_func(a[0], b[0])
# Generate the graph about log based z_score
for i in range(1000):
(spread, z_score_history[i]) = spread_func(a[i], b[i])
plt.title('Lasso based z-score')
plt.plot(x, z_score_history)
plt.show()
print("======================================================")
###Output
pairs = ([43350, 82651])
###Markdown
Generate the z-score history listWe will generate the z-score list.The spread function we use is fit on a fixed number of days of previous price history (we will call this 'window_width').We will update the spread function every 'update_period' days, like a moving window.The z-score history will be used by our strategy to decide when to buy/sell the stocks.
###Code
def gen_z_score_history(price_a, price_b, windows_width=700, spread_func_update_period=30):
"""
Generate the z-scores history
Args:
price_a, price_b: stock's present price history dataset (T)
windows_width: Width training data (day)
spread_func_update_period: The period of spread function update (day)
Return:
z_score_list: z-score history (T-windows_width)
"""
# Initialization
T = len(price_a)
z_score_list = np.zeros((T-windows_width))
a = price_a
b = price_b
# Calculate the z_score one by one.
for t in range(T-windows_width):
# Generate the spread_function for every update_period
if t % spread_func_update_period==0:
spread_func = create_spread_function(
a, b, t, t + windows_width, 'lr')
# Generate the z-score with spread_function
spread, z_score = spread_func(a[t], b[t])
z_score_list[t] = z_score
return z_score_list
###Output
_____no_output_____
###Markdown
How to run this FunctionGenerate one z-score history and plot the graph
###Code
# Run the function with one stock pair.
price_history = training_set_price.to_numpy()[:2,1:]
z_score_history = gen_z_score_history(price_history[0], price_history[1])
# Plot the graph.
x = np.arange(len(z_score_history))
plt.plot(x, z_score_history)
plt.show()
###Output
_____no_output_____
###Markdown
Run the z_score_history generation function for all the pairs
###Code
z_score_history_list = np.zeros((1, 1))
pairs_training_set = total_set
# Run the z_score_history generation function for each pair.
for pair in pairs_list:
# Generate the price_history table for each pair.
price_history_a = pairs_training_set.loc[
pairs_training_set['PERMNO'] == pair[0]].drop(columns=['PERMNO'])
price_history_a = price_history_a.to_numpy()[0]
price_history_b = pairs_training_set.loc[
pairs_training_set['PERMNO'] == pair[1]].drop(columns=['PERMNO'])
price_history_b = price_history_b.to_numpy()[0]
# Run the z-score history generation function for each pair.
try:
z_score_history = gen_z_score_history(
price_history_a, price_history_b)[np.newaxis]
z_score_history_list = np.append(
z_score_history_list, z_score_history, axis=0)
except:
z_score_history_list = gen_z_score_history(
price_history_a, price_history_b)[np.newaxis]
plt.figure(figsize=(12, 7))
plt.title('Each Pair\'s z-score')
plt.ylabel('z-score')
plt.xlabel('t (day)')
x = np.arange(z_score_history_list[0].shape[0])
for i, pair in enumerate(pairs_list):
plt.plot(x, z_score_history_list[i], label=str(pair))
plt.legend(framealpha=1, frameon=True, loc=1)
plt.savefig('each_pair_z_score.png')
plt.show()
with open('z_score_history.npy', 'wb') as outfile:
np.save(outfile, z_score_history_list)
###Output
_____no_output_____ |
notebooks/cnn_mnist_simple.ipynb | ###Markdown
SIMPLE CONVOLUTIONAL NEURAL NETWORK
###Code
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
print ("PACKAGES LOADED")
###Output
PACKAGES LOADED
###Markdown
LOAD MNIST
###Code
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print ("MNIST ready")
###Output
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
MNIST ready
###Markdown
SELECT DEVICE TO BE USED
###Code
device_type = "/gpu:1"
###Output
_____no_output_____
###Markdown
DEFINE CNN
###Code
with tf.device(device_type): # <= This is optional
n_input = 784
n_output = 10
weights = {
'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1)),
'wd1': tf.Variable(tf.random_normal([14*14*64, n_output], stddev=0.1))
}
biases = {
'bc1': tf.Variable(tf.random_normal([64], stddev=0.1)),
'bd1': tf.Variable(tf.random_normal([n_output], stddev=0.1))
}
def conv_simple(_input, _w, _b):
# Reshape input
_input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])
# Convolution
_conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
# Add-bias
_conv2 = tf.nn.bias_add(_conv1, _b['bc1'])
# Pass ReLu
_conv3 = tf.nn.relu(_conv2)
# Max-pooling
_pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Vectorize
_dense = tf.reshape(_pool, [-1, _w['wd1'].get_shape().as_list()[0]])
# Fully-connected layer
_out = tf.add(tf.matmul(_dense, _w['wd1']), _b['bd1'])
# Return everything
out = {
'input_r': _input_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3
, 'pool': _pool, 'dense': _dense, 'out': _out
}
return out
print ("CNN ready")
###Output
CNN ready
###Markdown
DEFINE COMPUTATIONAL GRAPH
###Code
# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
# Parameters
learning_rate = 0.001
training_epochs = 10
batch_size = 100
display_step = 1
# Functions!
with tf.device(device_type): # <= This is optional
_pred = conv_simple(x, weights, biases)['out']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=_pred, labels=y))
optm = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
_corr = tf.equal(tf.argmax(_pred,1), tf.argmax(y,1)) # Count corrects
accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) # Accuracy
init = tf.initialize_all_variables()
# Saver
save_step = 1;
savedir = "nets/"
saver = tf.train.Saver(max_to_keep=3)
print ("Network Ready to Go!")
###Output
Network Ready to Go!
###Markdown
OPTIMIZE DO TRAIN OR NOT
###Code
do_train = 1
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
sess.run(init)
if do_train == 1:
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# Fit training using batch data
sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
# Compute average loss
avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})/total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
train_acc = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
print (" Training accuracy: %.3f" % (train_acc))
test_acc = sess.run(accr, feed_dict={x: testimg, y: testlabel})
print (" Test accuracy: %.3f" % (test_acc))
# Save Net
if epoch % save_step == 0:
saver.save(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
print ("Optimization Finished.")
###Output
Epoch: 000/010 cost: 0.326192696
Training accuracy: 0.980
Test accuracy: 0.959
Epoch: 001/010 cost: 0.105607550
Training accuracy: 0.960
Test accuracy: 0.975
Epoch: 002/010 cost: 0.072013733
Training accuracy: 0.960
Test accuracy: 0.979
Epoch: 003/010 cost: 0.056868095
Training accuracy: 0.990
Test accuracy: 0.980
Epoch: 004/010 cost: 0.047069814
Training accuracy: 0.990
Test accuracy: 0.983
Epoch: 005/010 cost: 0.040124569
Training accuracy: 0.980
Test accuracy: 0.983
Epoch: 006/010 cost: 0.035343169
Training accuracy: 0.990
Test accuracy: 0.983
Epoch: 007/010 cost: 0.030736405
Training accuracy: 1.000
Test accuracy: 0.984
Epoch: 008/010 cost: 0.026192359
Training accuracy: 0.990
Test accuracy: 0.983
Epoch: 009/010 cost: 0.024165640
Training accuracy: 1.000
Test accuracy: 0.983
Optimization Finished.
###Markdown
RESTORE
###Code
if do_train == 0:
epoch = training_epochs-1
saver.restore(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
print ("NETWORK RESTORED")
###Output
_____no_output_____
###Markdown
LET'S SEE HOW CNN WORKS
###Code
with tf.device(device_type):
conv_out = conv_simple(x, weights, biases)
input_r = sess.run(conv_out['input_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(conv_out['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(conv_out['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(conv_out['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(conv_out['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(conv_out['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(conv_out['out'], feed_dict={x: trainimg[0:1, :]})
###Output
_____no_output_____
###Markdown
Input
###Code
# Let's see 'input_r'
print ("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print ("Label is %d" % (label))
# Plot !
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
###Output
Size of 'input_r' is (1, 28, 28, 1)
Label is 7
###Markdown
Conv1 (convolution)
###Code
# Let's see 'conv1'
print ("Size of 'conv1' is %s" % (conv1.shape,))
# Plot !
for i in range(3):
plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv1")
plt.colorbar()
plt.show()
###Output
Size of 'conv1' is (1, 28, 28, 64)
###Markdown
Conv2 (+bias)
###Code
# Let's see 'conv2'
print ("Size of 'conv2' is %s" % (conv2.shape,))
# Plot !
for i in range(3):
plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv2")
plt.colorbar()
plt.show()
###Output
Size of 'conv2' is (1, 28, 28, 64)
###Markdown
Conv3 (ReLU)
###Code
# Let's see 'conv3'
print ("Size of 'conv3' is %s" % (conv3.shape,))
# Plot !
for i in range(3):
plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv3")
plt.colorbar()
plt.show()
###Output
Size of 'conv3' is (1, 28, 28, 64)
###Markdown
Pool (max_pool)
###Code
# Let's see 'pool'
print ("Size of 'pool' is %s" % (pool.shape,))
# Plot !
for i in range(3):
plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th pool")
plt.colorbar()
plt.show()
###Output
Size of 'pool' is (1, 14, 14, 64)
###Markdown
Dense
###Code
# Let's see 'dense'
print ("Size of 'dense' is %s" % (dense.shape,))
# Let's see 'out'
print ("Size of 'out' is %s" % (out.shape,))
###Output
Size of 'dense' is (1, 12544)
Size of 'out' is (1, 10)
###Markdown
Convolution filters
###Code
# Let's see weight!
wc1 = sess.run(weights['wc1'])
print ("Size of 'wc1' is %s" % (wc1.shape,))
# Plot !
for i in range(3):
plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv filter")
plt.colorbar()
plt.show()
###Output
Size of 'wc1' is (3, 3, 1, 64)
###Markdown
SIMPLE CONVOLUTIONAL NEURAL NETWORK
###Code
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
print ("PACKAGES LOADED")
import gc
gc.collect()
###Output
PACKAGES LOADED
###Markdown
LOAD MNIST
###Code
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print ("MNIST ready")
len(mnist.train.images[0,:])
###Output
Extracting data/train-images-idx3-ubyte.gz
###Markdown
SELECT DEVICE TO BE USED
###Code
device_type = "/gpu:1"
###Output
_____no_output_____
###Markdown
DEFINE CNN
###Code
with tf.device(device_type): # <= This is optional
n_input = 784
n_output = 10
weights = {
'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1)),##[filter_height, filter_width, in_channels, out_channels]
'wd1': tf.Variable(tf.random_normal([14*14*64, n_output], stddev=0.1))
}
biases = {
'bc1': tf.Variable(tf.random_normal([64], stddev=0.1)),
'bd1': tf.Variable(tf.random_normal([n_output], stddev=0.1))
}
def conv_simple(_input, _w, _b):
# Reshape input
_input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])##[batch, in_height, in_width, in_channels]
# Convolution
_conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
        # The stride argument is a 1-D vector of length 4, in the form [a, x, y, z]:
        # [batch stride, horizontal stride, vertical stride, channel stride].
        # In TensorFlow it is usually [1, x, y, 1]:
        # the leading 1 means no sample in the batch is skipped,
        # x and y are the horizontal and vertical strides of the convolution kernel,
        # and the trailing 1 means no colour channel is skipped.
# Add-bias
_conv2 = tf.nn.bias_add(_conv1, _b['bc1'])
# Pass ReLu
_conv3 = tf.nn.relu(_conv2)
# Max-pooling
        _pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')  ## ksize is the pooling window, a 4-D vector, usually [1, height, width, 1]; we do not pool over the batch or channel dimensions, so those are set to 1
        ## as with convolution, strides is the window step in each dimension, usually [1, stride, stride, 1]
# Vectorize
_dense = tf.reshape(_pool, [-1, _w['wd1'].get_shape().as_list()[0]])
# Fully-connected layer
_out = tf.add(tf.matmul(_dense, _w['wd1']), _b['bd1'])
# Return everything
out = {
'input_r': _input_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3
, 'pool': _pool, 'dense': _dense, 'out': _out
}
return out
print ("CNN ready")
tmp = mnist.train.images[0,:]
###Output
_____no_output_____
###Markdown
DEFINE COMPUTATIONAL GRAPH
###Code
# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
# Parameters
learning_rate = 0.001
training_epochs = 10
batch_size = 100
display_step = 1
# Functions!
with tf.device(device_type): # <= This is optional
_pred = conv_simple(x, weights, biases)['out']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( logits = _pred,labels = y))
optm = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
_corr = tf.equal(tf.argmax(_pred,1), tf.argmax(y,1)) # Count corrects
accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) # Accuracy
init = tf.initialize_all_variables()
# Saver
save_step = 1;
savedir = "nets/"
saver = tf.train.Saver(max_to_keep=3)  ## keep only the three most recent checkpoints
print ("Network Ready to Go!")
###Output
ERROR:tensorflow:==================================
Object was never used (type <class 'tensorflow.python.framework.ops.Operation'>):
<tf.Operation 'init_4' type=NoOp>
If you want to mark it as used call its "mark_used()" method.
It was originally created here:
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\ipykernel\ipkernel.py", line 344, in do_execute
return reply_content File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\IPython\core\interactiveshell.py", line 2667, in run_cell
return result File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\IPython\core\interactiveshell.py", line 2801, in _run_cell
return result File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\IPython\core\interactiveshell.py", line 2929, in run_ast_nodes
return False File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\IPython\core\interactiveshell.py", line 2981, in run_code
return outflag File "<ipython-input-13-b347a2daf818>", line 16, in <module>
init = tf.initialize_all_variables() File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\tf_should_use.py", line 189, in wrapped
return _add_should_use_warning(fn(*args, **kwargs))
==================================
###Markdown
OPTIMIZE DO TRAIN OR NOT
###Code
do_train = 1
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))  ## setting log_device_placement=True in tf.ConfigProto() would print which device (CPU or GPU) each operation and Tensor is placed on
## "with tf.device('/cpu:0'):" pins operations to a device manually; if that device is missing or unavailable the program waits or raises an error, so allow_soft_placement=True lets TensorFlow pick an existing, available device instead
sess.run(init)
if do_train == 1:
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)  ## fetch the next mini-batch
# Fit training using batch data
sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
# Compute average loss
avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})/total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
train_acc = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
print (" Training accuracy: %.3f" % (train_acc))
test_acc = sess.run(accr, feed_dict={x: testimg, y: testlabel})
print (" Test accuracy: %.3f" % (test_acc))
# Save Net
if epoch % save_step == 0:
saver.save(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
print ("Optimization Finished.")
###Output
Epoch: 000/010 cost: 0.299619817
Training accuracy: 0.960
###Markdown
RESTORE
###Code
if do_train == 0:
epoch = training_epochs-1
saver.restore(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
print ("NETWORK RESTORED")
###Output
_____no_output_____
###Markdown
LET'S SEE HOW CNN WORKS
###Code
with tf.device(device_type):
conv_out = conv_simple(x, weights, biases)
input_r = sess.run(conv_out['input_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(conv_out['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(conv_out['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(conv_out['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(conv_out['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(conv_out['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(conv_out['out'], feed_dict={x: trainimg[0:1, :]})
###Output
_____no_output_____
###Markdown
Input
###Code
# Let's see 'input_r'
print ("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print ("Label is %d" % (label))
# Plot !
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
###Output
Size of 'input_r' is (1, 28, 28, 1)
Label is 7
###Markdown
Conv1 (convolution)
###Code
# Let's see 'conv1'
print ("Size of 'conv1' is %s" % (conv1.shape,))
# Plot !
for i in range(3):
plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv1")
plt.colorbar()
plt.show()
###Output
Size of 'conv1' is (1, 28, 28, 64)
###Markdown
Conv2 (+bias)
###Code
# Let's see 'conv2'
print ("Size of 'conv2' is %s" % (conv2.shape,))
# Plot !
for i in range(3):
plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv2")
plt.colorbar()
plt.show()
###Output
Size of 'conv2' is (1, 28, 28, 64)
###Markdown
Conv3 (ReLU)
###Code
# Let's see 'conv3'
print ("Size of 'conv3' is %s" % (conv3.shape,))
# Plot !
for i in range(3):
plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv3")
plt.colorbar()
plt.show()
###Output
Size of 'conv3' is (1, 28, 28, 64)
###Markdown
Pool (max_pool)
###Code
# Let's see 'pool'
print ("Size of 'pool' is %s" % (pool.shape,))
# Plot !
for i in range(3):
plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th pool")
plt.colorbar()
plt.show()
###Output
Size of 'pool' is (1, 14, 14, 64)
###Markdown
Dense
###Code
# Let's see 'dense'
print ("Size of 'dense' is %s" % (dense.shape,))
# Let's see 'out'
print ("Size of 'out' is %s" % (out.shape,))
###Output
Size of 'dense' is (1, 12544)
Size of 'out' is (1, 10)
###Markdown
Convolution filters
###Code
# Let's see weight!
wc1 = sess.run(weights['wc1'])
print ("Size of 'wc1' is %s" % (wc1.shape,))
# Plot !
for i in range(3):
plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv filter")
plt.colorbar()
plt.show()
###Output
Size of 'wc1' is (3, 3, 1, 64)
###Markdown
SIMPLE CONVOLUTIONAL NEURAL NETWORK
###Code
import numpy as np
# import tensorflow as tf
import tensorflow.compat.v1 as tf
import matplotlib.pyplot as plt
# from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
print ("PACKAGES LOADED")
###Output
PACKAGES LOADED
###Markdown
LOAD MNIST
###Code
def OnehotEncoding(target):
from sklearn.preprocessing import OneHotEncoder
target_re = target.reshape(-1,1)
enc = OneHotEncoder()
enc.fit(target_re)
return enc.transform(target_re).toarray()
def SuffleWithNumpy(data_x, data_y):
idx = np.random.permutation(len(data_x))
x,y = data_x[idx], data_y[idx]
return x,y
# mnist = input_data.read_data_sets('data/', one_hot=True)
# trainimg = mnist.train.images
# trainlabel = mnist.train.labels
# testimg = mnist.test.images
# testlabel = mnist.test.labels
# print ("MNIST ready")
print ("Download and Extract MNIST dataset")
# mnist = input_data.read_data_sets('data/', one_hot=True)
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
print()
print (" type of 'mnist' is %s" % (type(mnist)))
print (" number of train data is %d" % (len(x_train)))
print (" number of test data is %d" % (len(x_test)))
num_train_data = len(x_train)
trainimg = x_train
trainimg = trainimg.reshape(len(trainimg),784)
trainlabel = OnehotEncoding(y_train)
testimg = x_test
testimg = testimg.reshape(len(testimg),784)
testlabel = OnehotEncoding(y_test)
print ("MNIST loaded")
tf.disable_eager_execution()
###Output
Download and Extract MNIST dataset
tpye of 'mnist' is <class 'tensorflow.python.util.module_wrapper.TFModuleWrapper'>
number of train data is 60000
number of test data is 10000
MNIST loaded
###Markdown
SELECT DEVICE TO BE USED
###Code
device_type = "/gpu:1"
###Output
_____no_output_____
###Markdown
DEFINE CNN
###Code
with tf.device(device_type): # <= This is optional
n_input = 784
n_output = 10
weights = {
'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64], stddev=0.1)),
'wd1': tf.Variable(tf.random_normal([14*14*64, n_output], stddev=0.1))
}
biases = {
'bc1': tf.Variable(tf.random_normal([64], stddev=0.1)),
'bd1': tf.Variable(tf.random_normal([n_output], stddev=0.1))
}
def conv_simple(_input, _w, _b):
# Reshape input
_input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])
# Convolution
_conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
# Add-bias
_conv2 = tf.nn.bias_add(_conv1, _b['bc1'])
# Pass ReLu
_conv3 = tf.nn.relu(_conv2)
# Max-pooling
_pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Vectorize
_dense = tf.reshape(_pool, [-1, _w['wd1'].get_shape().as_list()[0]])
# Fully-connected layer
_out = tf.add(tf.matmul(_dense, _w['wd1']), _b['bd1'])
# Return everything
out = {
'input_r': _input_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3
, 'pool': _pool, 'dense': _dense, 'out': _out
}
return out
print ("CNN ready")
###Output
CNN ready
###Markdown
DEFINE COMPUTATIONAL GRAPH
###Code
# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
# Parameters
learning_rate = 0.001
training_epochs = 10
batch_size = 10
display_step = 1
# Functions!
with tf.device(device_type): # <= This is optional
_pred = conv_simple(x, weights, biases)['out']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=_pred))
optm = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
_corr = tf.equal(tf.argmax(_pred,1), tf.argmax(y,1)) # Count corrects
accr = tf.reduce_mean(tf.cast(_corr, tf.float32)) # Accuracy
init = tf.global_variables_initializer()
# Saver
save_step = 1;
savedir = "nets/"
saver = tf.train.Saver(max_to_keep=3)
print ("Network Ready to Go!")
###Output
WARNING:tensorflow:From d:\program\python_3_8_5\lib\site-packages\tensorflow\python\util\dispatch.py:201: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See `tf.nn.softmax_cross_entropy_with_logits_v2`.
Network Ready to Go!
###Markdown
OPTIMIZE DO TRAIN OR NOT
###Code
do_train = 1
# check operation gpu or cpu
# sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
config.allow_soft_placement=True
sess = tf.Session(config=config)
sess.run(init)
len(testimg)
if do_train == 1:
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(num_train_data/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs=trainimg[i*batch_size:(i+1)*batch_size]
batch_ys=trainlabel[i*batch_size:(i+1)*batch_size]
# Fit training using batch data
sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
# Compute average loss
avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})/total_batch
# Display logs per epoch step
if (epoch +1)% display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch+1, training_epochs, avg_cost))
total_batch = int(num_train_data/batch_size)
train_acc=0
for i in range(total_batch):
batch_xs=trainimg[i*batch_size:(i+1)*batch_size]
batch_ys=trainlabel[i*batch_size:(i+1)*batch_size]
train_acc = train_acc + sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
print (" Training accuracy: %.3f" % (train_acc/total_batch))
# randidx = np.random.randint(len(testimg), size=1000)
# batch_test_xs = testimg[randidx, :]
# batch_test_ys = testlabel[randidx, :]
# test_acc = sess.run(accr, feed_dict={x: batch_test_xs, y: batch_test_ys})
total_batch = int(len(testimg)/batch_size)
test_acc=0
for i in range(total_batch):
batch_xs=testimg[i*batch_size:(i+1)*batch_size]
batch_ys=testlabel[i*batch_size:(i+1)*batch_size]
test_acc = test_acc + sess.run(accr, feed_dict={x: batch_xs, y: batch_ys})
print (" Test accuracy: %.3f" % (test_acc/total_batch))
# Save Net
if epoch % save_step == 0:
saver.save(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
trainimg,trainlabel = SuffleWithNumpy(trainimg,trainlabel)
print ("Optimization Finished.")
###Output
Epoch: 000/010 cost: 0.035667057
Training accuracy: 0.986
Test accuracy: 0.980
Epoch: 001/010 cost: 0.027397077
Training accuracy: 0.994
Test accuracy: 0.985
Epoch: 002/010 cost: 0.018888790
Training accuracy: 0.997
Test accuracy: 0.985
Epoch: 003/010 cost: 0.013064439
Training accuracy: 0.994
Test accuracy: 0.984
WARNING:tensorflow:From d:\program\python_3_8_5\lib\site-packages\tensorflow\python\training\saver.py:969: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
Epoch: 004/010 cost: 0.009839889
Training accuracy: 0.998
Test accuracy: 0.985
Epoch: 005/010 cost: 0.006764985
Training accuracy: 0.998
Test accuracy: 0.983
Epoch: 006/010 cost: 0.004653526
Training accuracy: 0.996
Test accuracy: 0.983
Epoch: 007/010 cost: 0.003468891
Training accuracy: 0.999
Test accuracy: 0.983
Epoch: 008/010 cost: 0.002509124
Training accuracy: 0.998
Test accuracy: 0.984
Epoch: 009/010 cost: 0.001840721
Training accuracy: 0.998
Test accuracy: 0.983
Optimization Finished.
###Markdown
RESTORE
###Code
do_train = 0
if do_train == 0:
epoch = training_epochs-1
# epoch = 3
saver.restore(sess, "nets/cnn_mnist_simple.ckpt-" + str(epoch))
print ("NETWORK RESTORED")
###Output
INFO:tensorflow:Restoring parameters from nets/cnn_mnist_simple.ckpt-9
NETWORK RESTORED
###Markdown
LET'S SEE HOW CNN WORKS
###Code
with tf.device(device_type):
conv_out = conv_simple(x, weights, biases)
input_r = sess.run(conv_out['input_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(conv_out['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(conv_out['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(conv_out['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(conv_out['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(conv_out['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(conv_out['out'], feed_dict={x: trainimg[0:1, :]})
###Output
_____no_output_____
###Markdown
Input
###Code
# Let's see 'input_r'
print ("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print ("Label is %d" % (label))
# Plot !
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
###Output
Size of 'input_r' is (1, 28, 28, 1)
Label is 1
###Markdown
Conv1 (convolution)
###Code
# Let's see 'conv1'
print ("Size of 'conv1' is %s" % (conv1.shape,))
# Plot !
for i in range(3):
plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv1")
plt.colorbar()
plt.show()
###Output
Size of 'conv1' is (1, 28, 28, 64)
###Markdown
Conv2 (+bias)
###Code
# Let's see 'conv2'
print ("Size of 'conv2' is %s" % (conv2.shape,))
# Plot !
for i in range(3):
plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv2")
plt.colorbar()
plt.show()
###Output
Size of 'conv2' is (1, 28, 28, 64)
###Markdown
Conv3 (ReLU)
###Code
# Let's see 'conv3'
print ("Size of 'conv3' is %s" % (conv3.shape,))
# Plot !
for i in range(3):
plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv3")
plt.colorbar()
plt.show()
###Output
Size of 'conv3' is (1, 28, 28, 64)
###Markdown
Pool (max_pool)
###Code
# Let's see 'pool'
print ("Size of 'pool' is %s" % (pool.shape,))
# Plot !
for i in range(3):
plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th pool")
plt.colorbar()
plt.show()
###Output
Size of 'pool' is (1, 14, 14, 64)
###Markdown
Dense
###Code
# Let's see 'dense'
print ("Size of 'dense' is %s" % (dense.shape,))
# Let's see 'out'
print ("Size of 'out' is %s" % (out.shape,))
plt.matshow(out, cmap=plt.get_cmap('gray'))
plt.title("out")
plt.colorbar()
plt.show()
###Output
Size of 'dense' is (1, 12544)
Size of 'out' is (1, 10)
###Markdown
Convolution filters
###Code
# Let's see weight!
wc1 = sess.run(weights['wc1'])
print ("Size of 'wc1' is %s" % (wc1.shape,))
# Plot !
for i in range(3):
plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
plt.title(str(i) + "th conv filter")
plt.colorbar()
plt.show()
###Output
Size of 'wc1' is (3, 3, 1, 64)
|
njsmith-async-concurrency-for-mere-mortals/2018-05-11-pycon-notebook.ipynb | ###Markdown
Checklist* Make sure to have a clock visible* Check network connectivity* Displays mirrored* Slides up* This notebook * ~170% zoom * Ideally using 3.7-pre because it has better error messages: demo-env/bin/jupyter notebook pycon-notebook.ipynb * Full screened (F11) * Hide header and toolbar * Turn on line numbers * Kernel → Restart and clear output* Examples: * getaddrinfo: on vorpus.org or blank * clear the async/await example and the happy eyeballs (maybe leaving the function prototype to seed things)* Two terminals ([tilix](https://gnunn1.github.io/tilix-web/)) with large font and * `nc -l -p 12345` * `nc -l -p 54321` * (For the `nc` included with MacOS, you leave out the `-p`, for example: `nc -l 12345`.)* No other windows on the same desktop* scrolled down to the getaddrinfo example `getaddrinfo` example
###Code
import socket
socket.getaddrinfo("debian.org", "https", type=socket.SOCK_STREAM)
###Output
_____no_output_____
###Markdown
Demo: bidirectional proxy
###Code
import trio
async def proxy_one_way(source, sink):
while True:
data = await source.receive_some(1024)
if not data:
await sink.send_eof()
break
await sink.send_all(data)
async def proxy_two_way(a, b):
async with trio.open_nursery() as nursery:
nursery.start_soon(proxy_one_way, a, b)
nursery.start_soon(proxy_one_way, b, a)
async def main():
with trio.move_on_after(10): # 10 second time limit
a = await trio.open_tcp_stream("localhost", 12345)
b = await trio.open_tcp_stream("localhost", 54321)
async with a, b:
await proxy_two_way(a, b)
print("all done!")
trio.run(main)
async def sleepy():
print("going to sleep")
await trio.sleep(1)
print("woke up")
async def sleepy_twice():
await sleepy()
await sleepy()
trio.run(sleepy_twice)
###Output
going to sleep
woke up
going to sleep
woke up
###Markdown
Happy Eyeballs!
###Code
async def open_tcp_socket(hostname, port, *, max_wait_time=0.250):
targets = await trio.socket.getaddrinfo(
hostname, port, type=trio.socket.SOCK_STREAM)
failed_attempts = [trio.Event() for _ in targets]
winning_socket = None
async def attempt(target_idx, nursery):
# wait for previous one to finish, or timeout to expire
if target_idx > 0:
with trio.move_on_after(max_wait_time):
await failed_attempts[target_idx - 1].wait()
# start next attempt
if target_idx + 1 < len(targets):
nursery.start_soon(attempt, target_idx + 1, nursery)
# try to connect to our target
try:
*socket_config, _, target = targets[target_idx]
socket = trio.socket.socket(*socket_config)
await socket.connect(target)
# if fails, tell next attempt to go ahead
except OSError:
failed_attempts[target_idx].set()
else:
            # if it succeeds, save the winning socket
nonlocal winning_socket
winning_socket = socket
# and cancel other attempts
nursery.cancel_scope.cancel()
async with trio.open_nursery() as nursery:
nursery.start_soon(attempt, 0, nursery)
if winning_socket is None:
raise OSError("ruh-oh")
else:
return winning_socket
# Let's try it out:
async def main():
print(await open_tcp_socket("debian.org", "https"))
trio.run(main)
###Output
<trio.socket.socket fd=45, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('10.12.141.79', 51108), raddr=('130.89.148.14', 443)>
###Markdown
Happy eyeballs (pre-prepared for timing emergencies)
###Code
async def open_connection(hostname, port, *, max_wait_time=0.250):
targets = await trio.socket.getaddrinfo(
hostname, port, type=trio.socket.SOCK_STREAM)
attempt_failed = [trio.Event() for _ in targets]
winning_socket = None
async def attempt_one(target_idx, nursery):
# wait for previous attempt to fail, or timeout
        if target_idx > 0:
with trio.move_on_after(max_wait_time):
await attempt_failed[target_idx - 1].wait()
# kick off next attempt
if target_idx + 1 < len(targets):
nursery.start_soon(attempt_one, target_idx + 1, nursery)
# try to connect to our target
*socket_config, _, target = targets[target_idx]
try:
sock = trio.socket.socket(*socket_config)
await sock.connect(target)
# if fail, tell next attempt to go ahead
except OSError:
attempt_failed[target_idx].set()
# if succeed, cancel other attempts and save winning socket
else:
nursery.cancel_scope.cancel()
nonlocal winning_socket
winning_socket = sock
async with trio.open_nursery() as nursery:
nursery.start_soon(attempt_one, 0, nursery)
if winning_socket is None:
raise OSError("failed")
else:
return winning_socket
trio.run(open_connection, "debian.org", "https")
###Output
_____no_output_____
###Markdown
async/await demo cheat sheet
###Code
async def sleep_one():
print("I'm tired")
await trio.sleep(1)
print("slept!")
async def sleep_twice():
await sleep_one()
await sleep_one()
trio.run(sleep_twice)
###Output
_____no_output_____
###Markdown
`trio.Event` example
###Code
async def sleeper(event):
print("sleeper: going to sleep!")
await trio.sleep(5)
print("sleeper: woke up! let's tell everyone")
event.set()
async def waiter(event, i):
print(f"waiter {i}: waiting for the sleeper")
await event.wait()
print(f"waiter {i}: received notification!")
async def main():
async with trio.open_nursery() as nursery:
event = trio.Event()
nursery.start_soon(sleeper, event)
nursery.start_soon(waiter, event, 1)
nursery.start_soon(waiter, event, 2)
trio.run(main)
###Output
_____no_output_____ |
Basics/Code.ipynb | ###Markdown
Logistic Regression
###Code
# imports needed throughout this notebook
import numpy as np
import matplotlib.pyplot as plt

def sigma(z):
    return(1 / (1 + np.exp(-z)))
def tanh(z):
return((np.exp(z)-np.exp(-z))/(np.exp(z)+np.exp(-z)))
def relu(z):
return(max(0,z))
def leaky_relu(z):
return(max(0.01*z,z))
###Output
_____no_output_____
###Markdown
**For one sample tuple**
###Code
def LogRegCompute(x_1, x_2, w_1, w_2, b, alpha,y):
def compute_da(y,a):
da = -(y/a)+(1-y)/(1-a)
return da
def compute_dz(da,a):
dz = da*a*(1-a)
return dz
def compute_d(dz, x=1):
d = dz * x
return d
z = w_1*x_1 + w_2*x_2 + b
a = sigma(z)
    da = compute_da(y,a)
dz = compute_dz(da,a)
dw1 = compute_d(dz, x_1)
dw2 = compute_d(dz, x_2)
db = compute_d(dz)
    # gradient descent step: move against the gradient
    w_1 = w_1 - alpha*dw1
    w_2 = w_2 - alpha*dw2
    b = b - alpha*db
return(w_1,w_2,b)
LogRegCompute(1,2,0,0,1,0.01,2)
###Output
_____no_output_____
###Markdown
**For m samples, single step**
###Code
m = 1000
J_array, b = np.zeros((m,1)), 0
alpha = 0.01
np.random.seed(197)
w = np.zeros((1,2))
x_1 = np.random.randint(10, size = m).reshape(-1,m)
x_2 = np.random.randint(low = 25, high = 50, size = m).reshape(-1,m)
x = np.array([x_1,x_2]).reshape(2,m)
y = np.where(((x[1]<37.5) & (x[0]>5)), 1, 0)
for i in range(1000):
z = np.zeros(m)
a = np.zeros(m)
z = np.dot(w,x) + b
a = sigma(z)
J = (-(y * np.log(a) + (1-y)* np.log(1-a))).mean()
dz = a - y
dw = (np.dot(x,dz.T).reshape(-1,2))/m
db = dz.mean()
w = w - alpha * dw
b = b - alpha * db
J_array[i] = J
plt.plot(J_array)
plt.title("Cost function over 1000 iterations")
###Output
_____no_output_____
###Markdown
Accuracy
###Code
z = np.dot(w,x) + b
a = sigma(z)
res = np.mean(np.where(a>0.5,1,0)==y)
print(f'Accuracy = {res*100}%')
###Output
Accuracy = 93.5%
###Markdown
Neural Network Init network
###Code
def init_network():
hidden_layer_nodes = []
if(input("Load default network layer config[2 input, 1 hidden layer(4 nodes), 1 output]: (y/n): ").lower()!='y'):
input_layer_node = int(input("Enter number of nodes in input layer 0: "))
h_layer = int(input("Enter number of hidden layers in network: "))
n_layer = h_layer + 1
for i in range(1,n_layer):
hidden_layer_nodes.append(int(input(f"Enter no. of nodes in hidden layer {i} (layer {i})")))
output_layer_node = int(input(f"Enter number of nodes in output layer (layer {n_layer}): "))
else:
n_layer = 2
input_layer_node = 2
output_layer_node = 1
hidden_layer_nodes = [4]
n_per_layer = [input_layer_node] + hidden_layer_nodes + [output_layer_node]
hidden_layers = {}
for i in range(n_layer+1):
hidden_layers[f'Layer {i}'] = {'a':np.zeros(shape=(n_per_layer[i],1))}
if(i != 0):
hidden_layers[f'Layer {i}']['w'] = np.random.randn(n_per_layer[i],n_per_layer[i-1]) * 0.01
hidden_layers[f'Layer {i}']['b'] = np.zeros(shape=(n_per_layer[i],1))
return(hidden_layers)
###Output
_____no_output_____
###Markdown
Display net
###Code
def display_net(net, status = "original"):
print(f'\nStatus: {status}\n')
for key,value in net.items():
print(f'{key}:')
for key,value in value.items():
print(f'{key}: \n{value}')
###Output
_____no_output_____
###Markdown
Forward prop
###Code
def forward_prop():
n_layer = f'Layer {len(net)-1}'
for key in net:
if(key!='Layer 0'):
w = net[key]['w']
b = net[key]['b']
z = np.dot(w,a_prev) + b
if(key == n_layer):
a = sigma(z)
else:
a = np.tanh(z)
net[key]['a'] = a
a_prev = net[key]['a']
###Output
_____no_output_____
###Markdown
Back prop
###Code
def back_prop():
n_layer = f'Layer {len(net)-1}'
J = (-(y * np.log(net[n_layer]['a']) + (1-y)* np.log(1-net[n_layer]['a']))).mean()
_net = list(net.items())
dz = []
for j in range(len(list(net.items()))-1,0,-1):
a_2 = _net[j][1]['a']
a_1 = _net[j-1][1]['a']
if(j == int(n_layer[-1])):
_dz = a_2 - y
else:
_dz = np.dot(_net[j+1][1]['w'].T,dz[-1]) * (1 - np.power(a_2, 2))
dw = np.dot(_dz,a_1.T)/m
db = (_dz.sum(axis = 1, keepdims = True))/m
_net[j][1]['w'] = _net[j][1]['w'] - alpha * dw
_net[j][1]['b'] = _net[j][1]['b'] - alpha * db
dz.append(_dz)
return(J)
###Output
_____no_output_____
###Markdown
Initialize
###Code
m = 1000
alpha = 0.004
np.random.seed(197)
x_1 = np.random.randint(10, size = m).reshape(-1,m)
x_2 = np.random.randint(low = 25, high = 50, size = m).reshape(-1,m)
x = np.array([x_1,x_2]).reshape(2,m)
y = np.where(((x[1]<37.5) & (x[0]>5)), 1, 0).reshape(1,1000)
###Output
_____no_output_____
###Markdown
Iteration
###Code
net = init_network()
net['Layer 0']['a'] = x
J_array = []
for i in range(10000):
forward_prop()
J = back_prop()
J_array.append(J)
if(len(J_array)%1000==0):
print(f'Cost after {len(J_array)} iterations: {J_array[-1]}')
# display_net(net, "Output")
print("Complete!")
plt.plot(J_array)
plt.title("Cost function over 10000 iterations")
###Output
_____no_output_____
###Markdown
Accuracy
###Code
n_layer = f'Layer {len(net)-1}'
for key,value in net.items():
if(key!='Layer 0'):
w = value['w']
a = value['a']
b = value['b']
if(key == n_layer):
z = np.dot(w,a_prev) + b
a = sigma(z)
else:
z = np.dot(w,a_prev) + b
a = np.vectorize(relu)(z)
value['z'] = z
value['a'] = a
a_prev = value['a']
res = np.mean(np.where(net[n_layer]['a']>0.5,1,0)==y)
print(f'Accuracy = {res*100}%')
###Output
Accuracy = 88.1%
###Markdown
Comparing against accuracy of sklearn's MLPClassifier
###Code
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(solver='lbfgs', alpha=0.001, hidden_layer_sizes=(4), random_state=197)
x_temp = x.T
y_temp = y.reshape(1000,)
clf.fit(x_temp, y_temp)
res = np.mean(clf.predict(x_temp)==y)
print(f'Accuracy = {res*100}%')
###Output
Accuracy = 94.1%
|
Logistic Regression/Logistic regression.ipynb | ###Markdown
Logistic regression. The purpose of this notebook is to fit a logistic regression model to the given dataset by implementing the Newton-Raphson method and the gradient descent method to minimize the cost function. Next, we want to compare the implemented models with the *scikit-learn* model. Initial data analysis and visualization. First off, let's import the required libraries and display the first few rows of the dataset:
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
file_name = os.path.join(os.getcwd(), 'data/Social_Network_Ads.csv')
df = pd.read_csv(file_name, engine='python')
df.head()
###Output
_____no_output_____
###Markdown
Removing columns **User ID** and **Gender**, as they don't have any relevance:
###Code
df = df.drop(columns=['User ID', 'Gender'])
df.head()
###Output
_____no_output_____
###Markdown
Checking if there are any null values in dataset:
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Visualizing the target variable **Purchased** and exploring the data:
###Code
fig = plt.figure()
ax = fig.add_subplot(111)
counts = df['Purchased'].value_counts().plot(kind='bar', rot=0)
ax.set_xlabel('Purchased')
ax.set_ylabel('Counts')
plt.show()
df.groupby('Purchased').mean()
###Output
_____no_output_____
###Markdown
We can observe that people who purchased a product are generally older than people who did not. As we could expect, the average estimated salary is also higher for people who purchased a product. Visualization of the complete dataset:
###Code
pos = df.loc[df['Purchased'] == 1]
neg = df.loc[df['Purchased'] == 0]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(pos['Age'], pos['EstimatedSalary'], c='b', marker='x')
ax.scatter(neg['Age'], neg['EstimatedSalary'], c='r', marker='o')
ax.set_xlabel('Age')
ax.set_ylabel('Estimated Salary')
plt.show()
###Output
_____no_output_____
###Markdown
Now, we split given data into training set (80%) and testing set (20%):
###Code
X = df[['Age', 'EstimatedSalary']]
y = df['Purchased']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
In our dataset, we have big numeric values for **EstimatedSalary** field, so we have to apply feature scaling:
###Code
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
In order to include the intercept term, we need to add a column of ones to the standardized features:
###Code
X_train_intercept = np.ones((len(X_train), 1))
X_test_intercept = np.ones((len(X_test), 1))
X_train = np.append(X_train_intercept, X_train, 1)
X_test = np.append(X_test_intercept, X_test, 1)
###Output
_____no_output_____
###Markdown
We convert the target variable into a 2-dimensional array for easier data analysis:
###Code
y_train = y_train.values.reshape((-1, 1))
y_test = y_test.values.reshape((-1, 1))
###Output
_____no_output_____
###Markdown
Newton-Raphson algorithm. Before we implement the algorithm, we need to define the sigmoid function and the cost function:
###Code
def sigmoid(z):
return 1/(1+np.exp(-z))
def cost(X, y, theta):
m = len(y)
hypothesis = sigmoid(X@theta.T)
term_1 = np.dot(y.T, np.log(hypothesis))
term_2 = np.dot((1-y).T, np.log(1-hypothesis))
J = -(term_1 + term_2)/m
    return float(J)
###Output
_____no_output_____
###Markdown
The update rule for the generalized Newton-Raphson method is given by:\begin{equation}\theta := \theta - H^{-1}\nabla_{\theta}J(\theta),\end{equation}where:\begin{equation}H_{ij}=\frac{\partial^{2}J(\theta)}{\partial \theta_{i} \partial \theta_{j}}.\end{equation}
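For the logistic cost defined above, the gradient and Hessian have a closed form (stated here for reference; the common $1/m$ factor is dropped because it cancels inside $H^{-1}\nabla_{\theta}J(\theta)$, which is also what the implementation below relies on):\begin{equation}\nabla_{\theta}J(\theta) = X^{T}\left(\sigma(X\theta)-y\right), \qquad H = X^{T}WX, \qquad W = \mathrm{diag}\big(\sigma(X\theta)(1-\sigma(X\theta))\big).\end{equation}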
###Code
def newton(X, y, theta, tolerance=1e-5):
J = cost(X, y, theta)
d_J = np.Infinity
while abs(d_J) > tolerance:
weights = sigmoid(X@theta.T)*(1-sigmoid(X@theta.T))
weights = np.diag(weights[:, 0])
hessian = X.T@weights@X
gradient = X.T@(sigmoid(X@theta.T)-y)
theta = theta-(np.linalg.inv(hessian)@gradient).T
J_new = cost(X, y, theta)
d_J = J-J_new
J = J_new
return theta
###Output
_____no_output_____
###Markdown
Now, we will train our model and make predictions for the test dataset:
###Code
initial_theta = np.array([[0, 0, 0]])
theta = newton(X_train, y_train, initial_theta)
prediction = sigmoid(X_test@theta.T)
for i in range(len(prediction)):
prediction[i, 0] = 1 if prediction[i, 0] >= 0.5 else 0
###Output
_____no_output_____
###Markdown
To describe the performance of our model, we will construct a confusion matrix and compute metrics such as accuracy, precision and recall:
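For reference, with $TP$, $TN$, $FP$ and $FN$ denoting true/false positives and negatives, the reported metrics are\begin{equation}\text{accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\qquad \text{precision}=\frac{TP}{TP+FP},\qquad \text{recall}=\frac{TP}{TP+FN},\end{equation}where precision and recall are macro-averaged over the two classes (the `average='macro'` argument below).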
###Code
def evaluate_metrics(y_true, y_pred):
cnf = metrics.confusion_matrix(y_true, y_pred)
accuracy = metrics.accuracy_score(y_true, y_pred)
precision = metrics.precision_score(y_true, y_pred, average='macro')
recall = metrics.recall_score(y_true, y_pred, average='macro')
print(f'Accuracy: {round(accuracy*100, 2)}%')
print(f'Precision: {round(precision*100, 2)}%')
print(f'Recall: {round(recall*100, 2)}%')
return cnf
def create_heatmap(cnf):
heatmap = plt.imshow(cnf)
ax = plt.gca()
ax.set_xticks(np.arange(0, 2, 1))
ax.set_yticks(np.arange(0, 2, 1))
ax.set_xticklabels(['positive', 'negative'])
ax.set_yticklabels(['positive', 'negative'])
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion matrix')
for i in range(np.shape(cnf)[0]):
for j in range(np.shape(cnf)[1]):
text = ax.text(j, i, cnf[i, j], ha='center', va='center')
plt.setp(ax.get_yticklabels(), rotation=90, ha='center', rotation_mode='anchor')
plt.colorbar(heatmap)
plt.show()
create_heatmap(evaluate_metrics(y_test, prediction))
###Output
Accuracy: 91.25%
Precision: 90.64%
Recall: 86.91%
###Markdown
Gradient Descent Algorithm. The update rule for the gradient descent algorithm is given by:\begin{equation}\theta := \theta - \alpha \nabla_{\theta}J(\theta)\end{equation}
###Code
def gradient_descent(X, y, theta, alpha, tolerance=1e-5):
J = cost(X, y, theta)
d_J = np.Infinity
while abs(d_J) > tolerance:
gradient = X.T@(sigmoid(X@theta.T)-y)
theta = theta-alpha*gradient.T
J_new = cost(X, y, theta)
d_J = J-J_new
J = J_new
return theta
###Output
_____no_output_____
###Markdown
Training of our model and making the predictions:
###Code
initial_theta = np.array([[0, 0, 0]])
alpha = 1e-3
theta = gradient_descent(X_train, y_train, initial_theta, alpha)
prediction = sigmoid(X_test@theta.T)
for i in range(len(prediction)):
prediction[i, 0] = 1 if prediction[i, 0] >= 0.5 else 0
###Output
_____no_output_____
###Markdown
Checking the efficiency of our model:
###Code
create_heatmap(evaluate_metrics(y_test, prediction))
###Output
Accuracy: 91.25%
Precision: 90.64%
Recall: 86.91%
###Markdown
As we can see, the efficiency of the model created with the gradient descent method is the same as that of the model created with the Newton-Raphson method. Now, we will check the *scikit-learn* logistic regression model. *Scikit-learn* model. Instantiating and training the Logistic Regression model:
###Code
logreg = LogisticRegression()
logreg.fit(X_train, y_train.reshape(-1))
###Output
_____no_output_____
###Markdown
Making the predictions:
###Code
prediction = logreg.predict(X_test)
###Output
_____no_output_____
###Markdown
Model evaluation using confusion matrix:
###Code
create_heatmap(evaluate_metrics(y_test, prediction))
###Output
Accuracy: 91.25%
Precision: 90.64%
Recall: 86.91%
###Markdown
We can see the *scikit-learn* model offers the same performance as the implemented Newton-Raphson and gradient descent algorithms, but it is much easier to use. It also does not require the data to be standardized. Now, we can plot the Receiver Operating Characteristic (ROC) curve and check the Area Under the Curve (AUC) to quantify how good our classifier is (a score of 1 represents a perfect classifier, while 0.5 means it is no better than random guessing):
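As a reminder (standard definitions, not specific to this notebook): the ROC curve plots the true positive rate against the false positive rate as the classification threshold varies,\begin{equation}TPR=\frac{TP}{TP+FN},\qquad FPR=\frac{FP}{FP+TN},\end{equation}and the AUC is the area under that curve.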
###Code
pred_probability = logreg.predict_proba(X_test)[:,1].reshape(-1, 1)
fpr, tpr, _ = metrics.roc_curve(y_test, pred_probability)
auc = metrics.roc_auc_score(y_test, pred_probability)
plt.plot(fpr, tpr, label="ROC, AUC="+str(auc))
plt.legend()
plt.show()
###Output
_____no_output_____ |
session-six/subject_questions/politics_session_5_6_solutions.ipynb | ###Markdown
Politics and Social Sciences - Session 5 and 6 In this notebook we are going to look into the results of the US presidential election and test Benford's law.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#Read the data
url = 'https://raw.githubusercontent.com/warwickdatasciencesociety/beginners-python/master/session-six/subject_questions/data/president_county_candidate.csv'
votes_df = pd.read_csv(url)
votes_df.head()
###Output
_____no_output_____
###Markdown
The above table (technically a dataframe) contains the results of the US presidential election grouped by state, county and candidate. From this data set we extract two lists of numbers: `biden_votes` - a list of vote totals for Biden, where each number represents the total number of votes for Biden in a county; `trump_votes` - a list of vote totals for Trump, where each number represents the total number of votes for Trump in a county.
###Code
biden_votes = votes_df[votes_df['candidate'] == 'Joe Biden'].total_votes.to_list()
trump_votes = votes_df[votes_df['candidate'] == 'Donald Trump'].total_votes.to_list()
###Output
_____no_output_____
###Markdown
Benford's law The law of anomalous numbers, or the first-digit law, is an observation about the frequency distribution of leading digits in many real-life sets of numerical data. The law states that in many naturally occurring collections of numbers, the leading digit is likely to be small. In sets that obey the law, the number 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time. We would like to test whether the 2020 election data follows Benford's distribution. The first step is to write a function which, given a number, returns its first digit. Define this function as `get_first_digit()`
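The quoted frequencies come directly from the Benford probability $P(d)=\log_{10}(1+1/d)$. As a quick sanity check (the same formula is used further below to build the theoretical distribution for the plots):

```python
from math import log10

# expected share of each leading digit 1..9 under Benford's law, in percent
benford_pc = [log10(1 + 1/d) * 100 for d in range(1, 10)]
print(round(benford_pc[0], 1), round(benford_pc[-1], 1))  # ~30.1 for digit 1, ~4.6 for digit 9
```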
###Code
def get_first_digit(x):
return int(str(x)[0])
###Output
_____no_output_____
###Markdown
Now we need to write another function `count_first_digits()` which will calculate the distribution of first digits. The input for this function is a list of integers $[x_1, x_2, \ldots, x_n]$. The function should return a new list $[y_0, y_1, \ldots, y_9]$ such that for each $i\in\{0, 1, \ldots, 9\}$, $y_i$ is the count of $x$'s whose first digit is equal to $i$. Example input: $ x = [123, 2343, 6535, 123, 456, 678]$. Expected output: $ y = [0, 2, 1, 0, 1, 0, 2, 0, 0, 0]$. In the input list there are 2 numbers whose first digit is 6, therefore $y[6] = 2$. **HINT**: define a counter list of length 10 with every entry initially set to 0. Iterate through the input list and for each number in this list find its first digit and then increase the corresponding value in the counter list by one.
###Code
def count_first_digits(votes_ls):
digit_counter = [0 for i in range(0,10)]
for x in votes_ls:
first_digit = get_first_digit(x)
digit_counter[first_digit] += 1
return digit_counter
###Output
_____no_output_____
###Markdown
Use the `count_first_digits()` function to calculate the distribution of first digits for Biden and Trump votes. Benford's law does not take 0's into consideration; hence, truncate the lists to delete the first entry (which corresponds to counties with 0 votes for a candidate).
###Code
biden_1digits_count = count_first_digits(biden_votes)[1:]
trump_1digits_count = count_first_digits(trump_votes)[1:]
###Output
_____no_output_____
###Markdown
Create a function `calculate_percentages` which given a list of numbers returns a new list whose entries are the values of the input list divided by the total sum of the input list's entries and multiplied by 100. Apply this function to the `biden_1digits_count` and `trump_1digits_count`.
###Code
def calculate_percentages(ls):
sum_ls = sum(ls)
percentage_ls = []
for i in range(0,len(ls)):
percentage_ls.append(ls[i]/sum_ls * 100)
return percentage_ls
biden_1digits_pc = calculate_percentages(biden_1digits_count)
trump_1digits_pc = calculate_percentages(trump_1digits_count)
###Output
_____no_output_____
###Markdown
Run the cell below to generate the plots of the distribution of first digits of Biden's and Trump's votes and compare them against the theoretical Benford's law distribution.
###Code
from math import log10
# generate theoretical Benfords distribution
benford = [log10(1 + 1/i)*100 for i in range(1, 10)]
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize = (20,10))
ax1.bar(x = list(range(1,10)), height = biden_1digits_pc, color = 'C0')
ax2.bar(x = list(range(1,10)), height = trump_1digits_pc, color = 'C3')
ax3.bar(x = list(range(1,10)), height = benford, color = 'C2')
ax1.set_title("Distribution of counts of first digits \n for Biden's votes per county")
ax2.set_title("Distribution of counts of first digits \n for Trump's votes per county")
ax3.set_title("Theoretical distribution of first digits \n according to Benford's law")
ax1.set_xticks(list(range(1,10)))
ax2.set_xticks(list(range(1,10)))
ax3.set_xticks(list(range(1,10)))
fig.show()
###Output
_____no_output_____
###Markdown
By visual inspection of the distribution plots we could suspect that the first-digit law applies. (To make this statement more rigorous we should run statistical tests to reject or confirm our hypothesis.) Second-digit Benford's law Walter Mebane, a political scientist and statistician at the University of Michigan, was the first to apply the **second-digit** Benford's law test in election forensics. Such analyses are considered a simple, though not foolproof, method of identifying irregularities in election results and helping to detect electoral fraud. In analogy to the previous exercise, we would now like to inspect the distribution of second digits in the election results. Start by writing a function which, given a number (you may assume that it has more than 1 digit), returns its second digit. Define this function as `get_second_digit()`
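The theoretical second-digit frequencies used in the plotting cell further below follow from Benford's law by summing over all possible first digits. A minimal sketch (it reproduces, up to rounding, the hard-coded `benford_2` list used there):

```python
from math import log10

# P(second digit = d) = sum over first digits k = 1..9 of log10(1 + 1/(10*k + d))
benford_2digit_pc = [sum(log10(1 + 1/(10*k + d)) for k in range(1, 10)) * 100 for d in range(10)]
print([round(p, 1) for p in benford_2digit_pc])  # ~[12.0, 11.4, 10.9, 10.4, 10.0, 9.7, 9.3, 9.0, 8.8, 8.5]
```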
###Code
def get_second_digit(x):
return int(str(x)[1])
###Output
_____no_output_____
###Markdown
Similarly as before, define a function `count_second_digits()`. **HINT**: before applying the `get_second_digit()` function you need to make sure that the number currently under consideration is at least 10. If not, then this number should be omitted from the calculations. (Make use of the control flow statements.)
###Code
def count_second_digits(votes_ls):
digit_counter = [0 for i in range(0,10)]
for x in votes_ls:
if x < 10:
continue
else:
second_digit = get_second_digit(x)
digit_counter[second_digit] += 1
return digit_counter
###Output
_____no_output_____
###Markdown
Use the `count_second_digits()` function to calculate the distribution of second digits for Biden and Trump votes. (There is no need to disregard 0's in the case of second digits.) Next, apply the `calculate_percentages` function to the newly created lists.
###Code
trump_2digits_count = count_second_digits(trump_votes)
biden_2digits_count = count_second_digits(biden_votes)
biden_2digits_pc = calculate_percentages(biden_2digits_count)
trump_2digits_pc = calculate_percentages(trump_2digits_count)
###Output
_____no_output_____
###Markdown
Run the cell below to generate the plots for distribution of second digits for Biden's and Trump's votes.
###Code
#theoretical distribution of Benford second digits
benford_2 = [12, 11.4, 10.9, 10.4, 10.0, 9.7, 9.3, 9.0, 8.8, 8.5]
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize = (20,10))
ax1.bar(x = list(range(0,10)), height = biden_2digits_pc, color = 'C0')
ax2.bar(x = list(range(0,10)), height = trump_2digits_pc, color = 'C3')
ax3.bar(x = list(range(0,10)), height = benford_2, color = 'C2')
ax1.set_title("Distribution of counts of second digits \n for Biden's votes per county")
ax2.set_title("Distribution of counts of second digits \n for Trump's votes per county")
ax3.set_title("Theoretical distribution of second digits \n according to Benford's law")
ax1.set_xticks(list(range(0,10)))
ax2.set_xticks(list(range(0,10)))
ax3.set_xticks(list(range(0,10)))
fig.show()
###Output
_____no_output_____ |
Inauguralproject/sev-Inauguralproject.ipynb | ###Markdown
$$\LARGE\text{PROJECT 0: Inaugural project}$$ $ \underline{\text{QUESTION 1} \hspace{0.5cm} \text{Solve household problem}}$ The first step in solving this problem is to import the optimize module. The parameter values are listed in 1.1. The objective function I want to maximize is named "value_of_choice" and is defined in 1.2. A multi-dimensional constrained solver is used. We are given three constraints in the problem, which I have reduced down to one constraint by substituting. The result is$$m = c + \tau(p_h,\tilde p_h) = c + rp_h + \tau^g h\epsilon +\tau^p \cdot \max\{h\epsilon - \bar p, 0\},$$which is added to the code in 1.3 where I define the constraint.
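To make the substitution explicit (a short sketch using the relations implied by the code below, i.e. $p_h = h$ and $\tilde p_h = h\epsilon$): the budget constraint and the tax function are$$c + \tau(p_h,\tilde p_h) = m, \qquad \tau(p_h,\tilde p_h) = r p_h + \tau^g \tilde p_h + \tau^p \cdot \max\{\tilde p_h - \bar p,\, 0\},$$and substituting the second into the first yields the single constraint above, which is exactly the inequality passed to the solver in 1.3.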
###Code
from scipy import optimize
#1.1 settings
phi = 0.3
eps = 0.5
r = 0.03
tau_g = 0.012
tau_p = 0.004
pbar = 3
m = 0.5
#1.2 Define objective function
def value_of_choice(x, phi, eps, r, tau_g, tau_p, pbar, m):
    h = x[0]
    c = x[1]
    # the budget constraint is passed to the solver separately (see 1.3); here we only return the negative utility
    u_func = c**(1-phi)*h**phi
    return -u_func
#1.3 Constraint
constraints = ({"type": "ineq", "fun": lambda x: m - (x[1]+ r*x[0] + tau_g*x[0]*eps + tau_p*max(x[0]*eps-pbar,0))})
#1.4 Call solver
initial_guess = [1, 1]
sol = optimize.minimize(value_of_choice, initial_guess, args=(phi, eps, r, tau_g, tau_p, pbar, m), method="SLSQP", constraints=constraints)
h = sol.x[0]
c = sol.x[1]
u = c**(1-phi)*h**phi
#1.5 print solution
check = m - c - (r*h + tau_g*h*eps + tau_p*max(h*eps-pbar,0))
print(f'h = {h:.2f}, c = {c:.3f} --> u = {u:.3f}')
print(f'check = {check:.9f}')
print("")
print("Comments:")
print(f'The optimal choice of housing is {h:.2f} and optimal choice of other consumption is {c:.3f}. This gives an utility of {u:.3f}. Since output of "check" is zero, it can be concluded that the solution is quite precise (all income (cash-on-hand) is spent on both goods.')
###Output
h = 4.17, c = 0.350 --> u = 0.736
check = -0.000000000
Comments:
The optimal choice of housing is 4.17 and optimal choice of other consumption is 0.350. This gives an utility of 0.736. Since output of "check" is zero, it can be concluded that the solution is quite precise (all income (cash-on-hand) is spent on both goods.
###Markdown
$ \underline{\text{QUESTION 2} \hspace{0.5cm} \text{Plot optimal values as functions of} \hspace{0.15cm}\textit{m}}$ This problem illustrates the relationsship between optimal choices of housing and consumption at different levels of cash-on-hand. Two graphs are set up; first is the relationsship between housing and cash-on-hand, second is the relationsship between consumption and cash-on-hand.In order to work with numerical data and report the results in figures, it is required that numpy and matplotlib modules are imported. This is done in 2.1.
###Code
#2.1 import modules
import numpy as np
import matplotlib.pyplot as plt
#2.2 settings
N = 1000 #number of elements
m_min = 0.4 #minimum value of m
m_max = 2.5 #maximum value of m
# allocate numpy arrays (grids)
m_vec = np.linspace(m_min, m_max, N)
h_vec = np.empty(N) #empty grid since it is h (housing) I am solving for
c_vec = np.empty(N) #same her, but for c (other consumption)
#2.3 Define opt problem
def solution_(phi, eps, r, tau_g, tau_p, pbar, m):
constraints = ({"type": "ineq", "fun": lambda x: m - (x[1]+ r*x[0] + tau_g*x[0]*eps + tau_p*max(x[0]*eps-pbar,0))})
sol = optimize.minimize(value_of_choice, initial_guess,
args=(phi, eps, r, tau_g, tau_p,
pbar, m), method="SLSQP",
constraints=constraints)
h = sol.x[0]
c = sol.x[1]
u = c**(1-phi)*h**phi
return h, c, u
for i,m in enumerate(m_vec): #loop over m_vec
hey = solution_(phi, eps, r, tau_g, tau_p, pbar, m)
h_vec[i] = hey[0]
c_vec[i] = hey[1]
#2.2 Plot the curves
fig = plt.figure(figsize=(12,5))
plt.style.use('fivethirtyeight')
#b. Figure 1
ax_fig1 = fig.add_subplot(1,2,1)
ax_fig1.plot(m_vec,c_vec)
ax_fig1.set_title('Figure 1: $c^*$ as a function of m', fontsize=15)
ax_fig1.set_xlabel('Cash-on-hand, $m$')
ax_fig1.set_ylabel('Consumption, $c$')
ax_fig1.grid(True)
#c. Figure 2
ax_fig2 = fig.add_subplot(1,2,2)
ax_fig2.plot(m_vec,h_vec)
ax_fig2.set_title('Figure 2: $h^*$ as a function of m', fontsize=15)
ax_fig2.set_xlabel('Cash-on-hand, $m$')
ax_fig2.set_ylabel('Housing, $h$')
ax_fig2.grid(True)
print("")
print("Comments:")
print("Both optimal housing and consumption are increasing in m (cash-on-hand).")
print("")
###Output
Comments:
Both optimal housing and consumption are increasing in m (cash-on-hand).
###Markdown
$ \underline{\text{QUESTION 3} \hspace{0.5cm} \text{Average tax burden per household}}$ I start off by defining the vector of random cash-on-hand levels followed by a function named "Total_tax" to calculate the total tax burden for all 10.000 households. Total taxes is calculated by inserting a "for loop" that calculates the optimal level of housing for every household $i$ with cash on hand level $m_i$. Average tax burden is calculated by dividing the total tax with total number of households.
###Code
#3.1 parameters
N = 10000
phi = 0.3
eps = 0.5
r = 0.03
tau_g = 0.012
tau_p = 0.004
pbar = 3
np.random.seed(1) #seed number is set to 1
m_i = np.random.lognormal(-0.4, 0.35, size=N)
#3.2 Defining total tax function
def Tot_tax(m_i, phi, eps, r, tau_g, tau_p, pbar):
N=len(m_i)
tax_i = np.zeros((N))
for i,m in enumerate(m_i):
OPT = solution_(phi, eps, r, tau_g, tau_p, pbar, m)
h_vec = OPT[0]
#individual tax payment
tax_i[i] = tau_g*eps*h_vec+tau_p*max(eps*h_vec-pbar,0) #tax for household i
Total_taxes = sum(tax_i)
return Total_taxes/N
#c. Calculate average
Average = Tot_tax(m_i, phi, eps, r, tau_g, tau_p, pbar)
print(f'Average tax burden per household is {Average:.5f}.')
###Output
Average tax burden per household is 0.03632.
###Markdown
$ \underline{\text{QUESTION 4} \hspace{0.5cm} \text{Average tax burden per household}}$ This problem is solved similar to question 3. The only difference is that the parameters for the tax system on housing are changed. The new parametervalues are coded in 4.1.
###Code
#4.1 new parametervalues
eps = 0.8
tau_g = 0.01
tau_p = 0.009
pbar = 8
#a. Defining vector of random m
np.random.seed(1)
m_i = np.random.lognormal(-0.4, 0.35, size=10000)
#3.2 Defining total tax function
def Tot_tax1(m_i, phi, eps, r, tau_g, tau_p, pbar):
N=len(m_i)
tax_i1 = np.zeros((N))
for i,m in enumerate(m_i):
OPT = solution_(phi, eps, r, tau_g, tau_p, pbar, m)
h_vec = OPT[0]
#individual tax
tax_i1[i] = tau_g*eps*h_vec+tau_p*max(eps*h_vec-pbar,0) #this is the tax for household i
Average_taxes = sum(tax_i1)/N
return Average_taxes
#c. Calculate average
Average1 = Tot_tax1(m_i, phi, eps, r, tau_g, tau_p, pbar)
print(f'Average tax burden per household is {Average1:.5f}')
print("The reform of the tax system on housing increases tax burden per household.")
###Output
Average tax burden per household is 0.04503
The reform of the tax system on housing increases tax burden per household.
###Markdown
$ \underline{\text{QUESTION 5} \hspace{0.5cm} \text{Change in reform}}$ Here I look for the value of $\tau^g$ that makes the reformed tax system revenue neutral, i.e. the value for which the average tax burden per household equals the pre-reform level found in question 3. This is set up as a root-finding problem and solved with `optimize.root` below.
###Code
phi = 0.3
eps = 0.8
r = 0.03
tau_g = 0.01
tau_p = 0.009
pbar = 8
def function(tau_g, Tax_target, phi, eps, r, tau_p, pbar , m_i):
new_taxes = Tot_tax(m_i, phi, eps, r, tau_g, tau_p, pbar)
    return Tax_target - new_taxes
Tax_target = Average
tau_g0 = 0.005
tax_reform_sol = optimize.root(function, x0= tau_g0, args=(Tax_target, phi, eps, r, tau_p, pbar , m_i))
taug_opt = tax_reform_sol.x[0]
print(f'Average tax payments are unchanged from before the reform when tau_g is {taug_opt:2.8f}')
###Output
Average tax payments are unchanged from before the reform when tau_g is 0.00767081
|
ML Pipeline Preparation.ipynb | ###Markdown
ML Pipeline Preparation. Follow the instructions below to help you create your ML pipeline. My general notes: keep in mind that we work on a multi-class, multi-output text classification task which assigns to each message sample a set of target category classes. The messages are short and the class distribution is imbalanced. The dataset has 19634 data points with 40 variables, 36 of which are target categories. During the processing of the disaster messages, the English text is tokenized, lower-cased and lemmatized, and contractions are expanded. Additionally, extra spaces, punctuation and English stop words are removed. 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and y
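As an illustration of the text processing described above, a tokenizer along these lines could look as follows. This is only a sketch using the NLTK helpers imported in the next cell; the name `tokenize_sketch` is illustrative, the tokenization actually used for the pipeline may differ, and contraction expansion is omitted here:

```python
def tokenize_sketch(text):
    """Lower-case, strip punctuation, drop English stop words and lemmatize a message (sketch only)."""
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())   # keep letters/digits only, lower-case
    tokens = word_tokenize(text)                        # split into word tokens
    stop_words = set(stopwords.words("english"))
    lemmatizer = WordNetLemmatizer()
    return [lemmatizer.lemmatize(tok).strip() for tok in tokens if tok not in stop_words]
```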
###Code
#
# import libraries
#
# download necessary NLTK data
#%pip install nltk
import nltk
nltk.download(['punkt', 'wordnet', 'stopwords'])
import random as rn
import numpy as np
import pandas as pd
import string
import pickle
from sqlalchemy import create_engine
from collections import Counter
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
#%pip install bs4
from bs4 import BeautifulSoup
import sklearn.neighbors
from sklearn.utils import resample
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.metrics import accuracy_score, classification_report
from skmultilearn.model_selection import IterativeStratification
# from imblearn.combine import SMOTETomek - resampling not possible because of having a multi-class, multi-output task
from imblearn.ensemble import BalancedRandomForestClassifier
# show each unique warning only once (filter setting, not an emitted warning)
import warnings
warnings.filterwarnings("once")
###Output
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\Ilona\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package wordnet to
[nltk_data] C:\Users\Ilona\AppData\Roaming\nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Ilona\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Using TensorFlow backend.
C:\anaconda\anaconda3\lib\site-packages\ipykernel_launcher.py:47: UserWarning: once
###Markdown
Make the code reproducible ...
###Code
FIXED_SEED = 42
# The below is necessary for starting NumPy generated random numbers in a well-defined initial state.
np.random.seed(FIXED_SEED)
# The below is necessary for starting core Python generated random numbers in a well-defined state.
rn.seed(FIXED_SEED)
# load data from database
try:
engine = create_engine('sqlite:///Disaster_Messages_engine.db')
df = pd.read_sql_table('Messages_Categories_table', engine)
# success
print("The dataset has {} data points with {} variables each.".format(*df.shape))
except:
print("The database 'Disaster_Messages_engine.db' could not be loaded. No ML pipeline activities possible.")
df.head()
# create input (X) and output (y) samples, we know that related is always one ...
# as input we have to take care about the messages
# the categories are the targets of the multi-class, multi-output classification
X = df['message']
y = df[df.columns[4:]]
TARGET_NAMES = y.columns
print("X datatype: {}".format(type(X)))
print("y datatype: {}".format(type(y)))
X.head(2)
y.head()
y.iloc[0:5,:].values
# for creation of train and test datasets it is important that no column includes only 0 values
# stratification will not work properly (errors are thrown)
for group in y.columns:
print("'{}' includes {} x value 1.".format(group, y[group].sum()))
###Output
'related' includes 19634 x value 1.
'request' includes 4374 x value 1.
'offer' includes 117 x value 1.
'aid_related' includes 10729 x value 1.
'medical_help' includes 2066 x value 1.
'medical_products' includes 1297 x value 1.
'search_and_rescue' includes 718 x value 1.
'security' includes 467 x value 1.
'military' includes 857 x value 1.
'child_alone' includes 19 x value 1.
'water' includes 1650 x value 1.
'food' includes 2885 x value 1.
'shelter' includes 2281 x value 1.
'clothing' includes 401 x value 1.
'money' includes 598 x value 1.
'missing_people' includes 297 x value 1.
'refugees' includes 872 x value 1.
'death' includes 1187 x value 1.
'other_aid' includes 3392 x value 1.
'infrastructure_related' includes 1688 x value 1.
'transport' includes 1197 x value 1.
'buildings' includes 1313 x value 1.
'electricity' includes 528 x value 1.
'tools' includes 158 x value 1.
'hospitals' includes 283 x value 1.
'shops' includes 118 x value 1.
'aid_centers' includes 308 x value 1.
'other_infrastructure' includes 1136 x value 1.
'weather_related' includes 7212 x value 1.
'floods' includes 2130 x value 1.
'storm' includes 2420 x value 1.
'fire' includes 282 x value 1.
'earthquake' includes 2422 x value 1.
'cold' includes 528 x value 1.
'other_weather' includes 1366 x value 1.
'direct_report' includes 4965 x value 1.
###Markdown
2. Write a tokenization function to process your text data. During ETL pipeline activities we realised that there are messages which are not useful (e.g. 'nonsense' character sequences, HTML characters) and that web links are probably included. We have to deal with this in the tokenize() function.
###Code
CONTRACTION_MAP = {
"ain't": "is not",
"aren't": "are not",
"can't": "cannot",
"can't've": "cannot have",
"'cause": "because",
"could've": "could have",
"couldn't": "could not",
"couldn't've": "could not have",
"didn't": "did not",
"doesn't": "does not",
"don't": "do not",
"hadn't": "had not",
"hadn't've": "had not have",
"hasn't": "has not",
"haven't": "have not",
"he'd": "he would",
"he'd've": "he would have",
"he'll": "he will",
"he'll've": "he he will have",
"he's": "he is",
"how'd": "how did",
"how'd'y": "how do you",
"how'll": "how will",
"how's": "how is",
"I'd": "I would",
"I'd've": "I would have",
"I'll": "I will",
"I'll've": "I will have",
"I'm": "I am",
"I've": "I have",
"i'd": "i would",
"i'd've": "i would have",
"i'll": "i will",
"i'll've": "i will have",
"i'm": "i am",
"i've": "i have",
"isn't": "is not",
"it'd": "it would",
"it'd've": "it would have",
"it'll": "it will",
"it'll've": "it will have",
"it's": "it is",
"let's": "let us",
"ma'am": "madam",
"mayn't": "may not",
"might've": "might have",
"mightn't": "might not",
"mightn't've": "might not have",
"must've": "must have",
"mustn't": "must not",
"mustn't've": "must not have",
"needn't": "need not",
"needn't've": "need not have",
"o'clock": "of the clock",
"oughtn't": "ought not",
"oughtn't've": "ought not have",
"shan't": "shall not",
"sha'n't": "shall not",
"shan't've": "shall not have",
"she'd": "she would",
"she'd've": "she would have",
"she'll": "she will",
"she'll've": "she will have",
"she's": "she is",
"should've": "should have",
"shouldn't": "should not",
"shouldn't've": "should not have",
"so've": "so have",
"so's": "so as",
"that'd": "that would",
"that'd've": "that would have",
"that's": "that is",
"there'd": "there would",
"there'd've": "there would have",
"there's": "there is",
"they'd": "they would",
"they'd've": "they would have",
"they'll": "they will",
"they'll've": "they will have",
"they're": "they are",
"they've": "they have",
"to've": "to have",
"wasn't": "was not",
"we'd": "we would",
"we'd've": "we would have",
"we'll": "we will",
"we'll've": "we will have",
"we're": "we are",
"we've": "we have",
"weren't": "were not",
"what'll": "what will",
"what'll've": "what will have",
"what're": "what are",
"what's": "what is",
"what've": "what have",
"when's": "when is",
"when've": "when have",
"where'd": "where did",
"where's": "where is",
"where've": "where have",
"who'll": "who will",
"who'll've": "who will have",
"who's": "who is",
"who've": "who have",
"why's": "why is",
"why've": "why have",
"will've": "will have",
"won't": "will not",
"won't've": "will not have",
"would've": "would have",
"wouldn't": "would not",
"wouldn't've": "would not have",
"y'all": "you all",
"y'all'd": "you all would",
"y'all'd've": "you all would have",
"y'all're": "you all are",
"y'all've": "you all have",
"you'd": "you would",
"you'd've": "you would have",
"you'll": "you will",
"you'll've": "you will have",
"you're": "you are",
"you've": "you have"
}
# function from Dipanjan's repository:
# https://github.com/dipanjanS/practical-machine-learning-with-python/blob/master/bonus%\
# 20content/nlp%20proven%20approach/NLP%20Strategy%20I%20-%20Processing%20and%20Understanding%20Text.ipynb
def expand_contractions(text, contraction_mapping):
contractions_pattern = re.compile('({})'.format('|'.join(contraction_mapping.keys())),
flags=re.IGNORECASE|re.DOTALL)
def expand_match(contraction):
match = contraction.group(0)
first_char = match[0]
expanded_contraction = contraction_mapping.get(match)\
if contraction_mapping.get(match)\
else contraction_mapping.get(match.lower())
expanded_contraction = first_char+expanded_contraction[1:]
return expanded_contraction
expanded_text = contractions_pattern.sub(expand_match, text)
expanded_text = re.sub("'", "", expanded_text)
return expanded_text
stop_words = set(stopwords.words('english'))
stop_words.remove('no')
stop_words.remove('not')
stop_words.add('please')
stop_words.add('would')
stop_words.add('should')
stop_words.add('could')
def tokenize(text):
# have in mind that we use this for a web app adding new messages;
# if still html, xml or other undefined parts in the existing messages:
# first remove such metatext from English messages
# see: https://docs.python.org/3.7/library/codecs.html#encodings-and-unicode
# "To be able to detect the endianness of a UTF-16 or UTF-32 byte sequence,
# there’s the so called BOM (“Byte Order Mark”). [...]
# In UTF-8, the use of the BOM is discouraged and should generally be avoided."
# specific ones are e.g. notepad signatures from Microsoft as part of the messages which should be avoided;
# other undefined characters have the coding of the 'replacement character' unicode u"\ufffd"
soup = BeautifulSoup(text, 'html')
souped = soup.get_text()
try:
bom_removed = souped.decode("utf-8-sig").replace(u"\ufffd", "?")
except:
bom_removed = souped
    url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
    detected_urls = re.findall(url_regex, bom_removed)
    # continue with the cleaned text and replace every detected URL with a placeholder
    text = bom_removed
    for url in detected_urls:
        text = text.replace(url, "urlplaceholder")
# change the negation wordings like don't to do not, won't to will not
# or other contractions like I'd to I would, I'll to I will etc. via dictionary
text = expand_contractions(text, CONTRACTION_MAP)
# remove punctuation [!”#$%&’()*+,-./:;<=>?@[\]^_`{|}~]
text = text.translate(str.maketrans('','', string.punctuation))
# remove numbers
letters_only = re.sub("[^a-zA-Z]", " ", text)
# during ETL pipeline we have reduced the dataset on English messages ('en' language coding,
# but there can be some wrong codings
tokens = word_tokenize(letters_only, language='english')
lemmatizer = WordNetLemmatizer() # for the lexical correctly found word stem (root)
clean_tokens = []
for tok in tokens:
# use only lower cases, remove leading and ending spaces
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
# remember: there have been nonsense sentences, so, now some strings could be empty
# toDo: what is the correct length number to use now? Small ones are probably no relevant words ...
# remove English stop words
if (len(clean_tok) > 2) & (clean_tok not in stop_words):
clean_tokens.append(clean_tok)
return clean_tokens
# example for unit test to remove punctuation [!”#$%&’()*+,-./:;<=>?@[\]^_`{|}~]
example_str = 'This [is an] example? {of} string. with.? some &punctuation &signs!!??!!'
result = example_str.translate(str.maketrans('','', string.punctuation))
print(result)
# output shall be: This is an example of string with some punctuation signs
# test tokenize
for message in X[:10]:
tokens = tokenize(message)
print(message)
print(tokens, '\n')
###Output
Weather update - a cold front from Cuba that could pass over Haiti
['weather', 'update', 'cold', 'front', 'cuba', 'could', 'pas', 'haiti']
Is the Hurricane over or is it not over
['hurricane', 'not']
UN reports Leogane 80-90 destroyed. Only Hospital St. Croix functioning. Needs supplies desperately.
['un', 'report', 'leogane', 'destroyed', 'hospital', 'st', 'croix', 'functioning', 'needs', 'supply', 'desperately']
says: west side of Haiti, rest of the country today and tonight
['say', 'west', 'side', 'haiti', 'rest', 'country', 'today', 'tonight']
Storm at sacred heart of jesus
['storm', 'sacred', 'heart', 'jesus']
Please, we need tents and water. We are in Silo, Thank you!
['please', 'need', 'tent', 'water', 'silo', 'thank']
I am in Croix-des-Bouquets. We have health issues. They ( workers ) are in Santo 15. ( an area in Croix-des-Bouquets )
['croixdesbouquets', 'health', 'issue', 'worker', 'santo', 'area', 'croixdesbouquets']
There's nothing to eat and water, we starving and thirsty.
['nothing', 'eat', 'water', 'starving', 'thirsty']
I am in Thomassin number 32, in the area named Pyron. I would like to have some water. Thank God we are fine, but we desperately need water. Thanks
['thomassin', 'number', 'area', 'named', 'pyron', 'would', 'like', 'water', 'thank', 'god', 'fine', 'desperately', 'need', 'water', 'thanks']
Let's do it together, need food in Delma 75, in didine area
['let', 'together', 'need', 'food', 'delma', 'didine', 'area']
###Markdown
3. Build a machine learning pipeline. Notes: - Regarding the class default parameters, for this Python implementation scikit-learn version 0.21.2 and scikit-multilearn version 0.2.0 are used. - We use np.random.seed() in addition to the random_state/random_seed parameters ([reason](https://stackoverflow.com/questions/47923258/random-seed-on-svm-sklearn-produces-different-results)). - For the pipeline workflow a `FeatureUnion` instance concatenates the results of multiple transformer objects. Remember, we are dealing with an imbalanced dataset, therefore not all models can be used. A machine learning classifier can be biased towards the majority class, causing bad classification of the minority class compared to other model types. Therefore we have to take care and evaluate several of them. This machine learning pipeline should take the `message` column as input and output classification results on the remaining target categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables. According to the scikit-learn [documentation](https://scikit-learn.org/stable/modules/multiclass.html) only specific classifiers can be used with this meta-estimator. We start with `RandomForestClassifier`. Its default parameter values are: RandomForestClassifier(n_estimators=100, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None). For our classification task, the most important parameters are n_estimators and max_features. As stated in the scikit-learn documentation, "using a random subset of size sqrt(n_features) for classification tasks (where n_features is the number of features in the data)" is in general best for the prediction results. This is the case with max_features='auto', therefore we will not change this parameter. n_jobs=1 is used because all other values throw errors and the training task crashed.
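As a minimal, self-contained sketch (toy random data, not the disaster dataset) of what the meta-estimator does, `MultiOutputClassifier` simply fits one copy of the base classifier per target column:
###Code
# Toy illustration only: MultiOutputClassifier fits one RandomForestClassifier
# per target column, so estimators_ contains one fitted forest per output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.RandomState(0)
X_toy = rng.rand(20, 5)                # 20 samples, 5 numeric features
y_toy = rng.randint(0, 2, (20, 3))     # 3 binary target columns

toy_clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=10, random_state=0))
toy_clf.fit(X_toy, y_toy)
print(len(toy_clf.estimators_))           # 3 -> one fitted estimator per target column
print(toy_clf.predict(X_toy[:2]).shape)   # (2, 3) -> one prediction per target column
###Output
_____no_output_____
###Markdown
The actual pipeline for the disaster messages wraps the text features and the RandomForestClassifier in exactly this way: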
###Code
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, ngram_range=(1,2))),
('tfidf', TfidfTransformer(sublinear_tf=True)),
]))
])),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators=100, class_weight='balanced',
n_jobs=1, random_state=FIXED_SEED)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline: - Split data into train and test sets - Train pipeline
###Code
# shuffle is by default set on True,
# usage of stratify param leads to stratify split technique for this imbalanced dataset,
# having both would be a StratifiedShuffleSplit algorithm in the background,
# but
# stratify=y leads to a ValueError: The least populated class in y has only 1 member, which is too few.
# The minimum number of groups for any class cannot be less than 2.
# ToDo: clarify why => solution, must be: stratify=y.iloc[:,:] but that throws errors;
# wrong coding with y.iloc[:,1] for getting the rest to run (wrong results with and after training)
#X_train, X_test, y_train, y_test = train_test_split(X.values, y.values, stratify=y.iloc[:,1],
# test_size=0.2, random_state=FIXED_SEED)
# therefore: creation of X and y with scikit-multilearn iterative stratifier,
# works only because 'child_alone' target class has been mapped to some messages
# if this would be still 0 on all rows ValueError would be thrown
test_size = 0.2
stratifier = IterativeStratification(n_splits=2, order=1,
sample_distribution_per_fold=[test_size, 1.0-test_size],
random_state=FIXED_SEED)
train_indexes, test_indexes = next(stratifier.split(X, y))
# y slicing with iloc because y is a dataframe, X is a series;
# by adding values to X and y we create numpy arrays
X_train, y_train = X[train_indexes].values, y.iloc[train_indexes, :].values
X_test, y_test = X[test_indexes].values, y.iloc[test_indexes, :].values
X_train.shape
y_train.shape
print("X_train datatype: {}".format(type(X_train)))
print("y_train datatype: {}".format(type(y_train)))
for i in range(y_train.shape[1]):
print("{}. numpy.ndarray element is: {}".format(i, y_train[i]))
print(set(y_train[i]))
###Output
0. numpy.ndarray element is: [1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0]
{0, 1}
1. numpy.ndarray element is: [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
2. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
3. numpy.ndarray element is: [1 1 0 1 1 1 0 0 0 0 1 1 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 1 1 0 0 0 0 0 1]
{0, 1}
4. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
5. numpy.ndarray element is: [1 1 0 1 0 1 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
6. numpy.ndarray element is: [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0]
{0, 1}
7. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
8. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
9. numpy.ndarray element is: [1 1 0 1 1 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
10. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
11. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
12. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
13. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
14. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
15. numpy.ndarray element is: [1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
16. numpy.ndarray element is: [1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0]
{0, 1}
17. numpy.ndarray element is: [1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
18. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
19. numpy.ndarray element is: [1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
20. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
21. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
22. numpy.ndarray element is: [1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 0 1 0 0 0 1 0 1 0 0 0 0 0]
{0, 1}
23. numpy.ndarray element is: [1 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
24. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
25. numpy.ndarray element is: [1 1 0 1 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
26. numpy.ndarray element is: [1 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
27. numpy.ndarray element is: [1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
28. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
29. numpy.ndarray element is: [1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0]
{0, 1}
30. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
31. numpy.ndarray element is: [1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
32. numpy.ndarray element is: [1 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
33. numpy.ndarray element is: [1 1 0 1 1 1 0 0 0 0 0 1 0 0 0 0 0 0 1 1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 1 1]
{0, 1}
34. numpy.ndarray element is: [1 1 0 1 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
{0, 1}
35. numpy.ndarray element is: [1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{0, 1}
###Markdown
**Note:** As we already know, the dataset is imbalanced, which leads to over-emphasising the majority target classes. We want to get a more balanced distribution by duplicating minority class instances of the training set. With this **oversampling** approach some overfitting may appear.
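A minimal sketch of what such per-label duplication could look like for a single minority target column (illustrative only: the chosen column index and the number of extra samples are arbitrary assumptions, and the cell below actually uses a simpler bootstrap of whole training rows instead):
###Code
# Illustrative sketch only (not the resampling used below): duplicate the rows
# where one minority target column (arbitrarily column 2, 'offer') equals 1.
import numpy as np
from sklearn.utils import resample

minority_mask = y_train[:, 2] == 1                   # rows with this label set
X_min, y_min = X_train[minority_mask], y_train[minority_mask]

# draw additional copies of these rows with replacement (200 is an arbitrary choice)
X_extra, y_extra = resample(X_min, y_min, replace=True,
                            n_samples=200, random_state=FIXED_SEED)

X_train_demo = np.concatenate([X_train, X_extra])
y_train_demo = np.vstack([y_train, y_extra])
print(X_train_demo.shape, y_train_demo.shape)
###Output
_____no_output_____
###Markdown
The resampling actually applied below is a plain bootstrap of 7000 whole training rows: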
###Code
TARGET_NAMES
# datatypes are class 'numpy.ndarray'
print('Before resampling, shape of X_train: {}'.format(X_train.shape))
print('Before resampling, shape of y_train: {} \n'.format(y_train.shape))
print("Before resampling, label counts '1': {}".format(sum(y_train==1)))
print("Before resampling, label counts '0': {} \n".format(sum(y_train==0)))
# resampling with scikit-learn utils package
X_train_res, y_train_res = resample(X_train, y_train, n_samples=7000, random_state=FIXED_SEED)
print('After resampling, shape of X_train_res: {}'.format(X_train_res.shape))
print('After resampling, shape of y_train_res: {} \n'.format(y_train_res.shape))
print("After resampling, label counts '1': {}".format(sum(y_train_res==1)))
print("After resampling, label counts '0': {}".format(sum(y_train_res==0)))
###Output
After resampling, shape of X_train_res: (7000,)
After resampling, shape of y_train_res: (7000, 36)
After resampling, label counts '1': [7000 1137 36 3860 812 490 304 177 353 2 555 940 791 118
249 130 361 468 1180 653 489 539 225 62 103 35 128 456
2670 875 941 116 769 199 568 1487]
After resampling, label counts '0': [ 0 5863 6964 3140 6188 6510 6696 6823 6647 6998 6445 6060 6209 6882
6751 6870 6639 6532 5820 6347 6511 6461 6775 6938 6897 6965 6872 6544
4330 6125 6059 6884 6231 6801 6432 5513]
###Markdown
Now, we train the pipeline, first with the original training set, afterwards with the resampled one ...
###Code
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
And calculate the model prediction for our test data ...
###Code
y_rfc_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Now, we do the same thing with the resampled dataset ...
###Code
pipeline.fit(X_train_res, y_train_res)
y_rfc_pred_res = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your model. For evaluation: report accuracy score, F1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each, where: TP = TruePositive; FP = FalsePositive; TN = TrueNegative; FN = FalseNegative. **Accuracy Score** is a classification score. It is the number of correct predictions made divided by the total number of predictions made. In a multilabel classification task it computes subset accuracy. Furthermore, besides accuracy, we add additional metrics to compare the model performance on an originally imbalanced dataset. Accuracy would focus too much on the majority classes; because of this overfitting of the majority classes, its value would look too good and therefore be misleading. **Precision** quantifies the binary precision, in other words a measure of a classifier's exactness. It is the ratio of true positives (messages correctly classified to their categories) to all positives (all messages classified to categories, irrespective of whether that was the correct classification), i.e. the ratio TP / (TP + FP). **Recall** tells us what proportion of the messages that actually belong to specific categories were classified by us as those categories, i.e. a measure of a classifier's completeness. It is the ratio of true positives to all messages that actually belong to the category, i.e. the ratio TP / (TP + FN). A model's ability to precisely predict correctly categorised disaster messages is more important than the model's ability to recall those messages. We can use the **F-beta score** as a metric that considers both precision and recall. According to scikit-learn, the F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0: $F_\beta = (1 + \beta^2)\,\frac{\text{precision}\cdot\text{recall}}{\beta^2\cdot\text{precision} + \text{recall}}$. The F-measure with β=1 is simply the harmonic mean of precision and recall. In particular, when β=0.5 more emphasis is placed on precision, and when β=1.0 recall and precision are equally important. According to scikit-learn: "The **F1 score** ... reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score are equal. The formula for the F1 score is: F1 = 2 * (precision * recall) / (precision + recall). In the multi-class and multi-label case, this is the average of the F1 score of each class with weighting depending on the average parameter." From the scikit-learn documentation for the classification report: the classification_report() function returns an additional value, **Support** - the number of occurrences of each label in y_true. The reported averages include macro average (averaging the unweighted mean per label), weighted average (averaging the support-weighted mean per label), samples average (only for multilabel classification) and micro average (averaging the total true positives, false negatives and false positives); the latter is only shown for multi-label or multi-class with a subset of classes, because it equals accuracy otherwise.
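A tiny worked example (toy labels, unrelated to the disaster data) may help to see how these scores relate to each other:
###Code
# Toy example only: 1 = message belongs to a category, 0 = it does not.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, fbeta_score

y_true_toy = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred_toy = [1, 0, 1, 1, 0, 1, 0, 1]   # TP=3, FP=2, FN=1, TN=2

print("accuracy :", accuracy_score(y_true_toy, y_pred_toy))        # (3+2)/8 = 0.625
print("precision:", precision_score(y_true_toy, y_pred_toy))       # 3/(3+2) = 0.6
print("recall   :", recall_score(y_true_toy, y_pred_toy))          # 3/(3+1) = 0.75
print("F1       :", f1_score(y_true_toy, y_pred_toy))              # ~0.667
print("F0.5     :", fbeta_score(y_true_toy, y_pred_toy, beta=0.5)) # 0.625, closer to precision
###Output
_____no_output_____
###Markdown
The helper below reports these metrics for every target class via `classification_report`: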
###Code
def display_results(target_names, y_test, y_pred, cv=None, parameters=None):
# text summary of the overall accuracy, precision, recall, F1 score for each class
print("\nFirst: overall accuracy score: {:5f}".format(accuracy_score(y_test, y_pred)))
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html
# shows F1_score, precision and recall
class_report = classification_report(y_test, y_pred, target_names=target_names)
print("Classification Report for each target class:\n", class_report)
    if cv is not None:
print("\n\n---- Best Parameters: ----\n")
print("Best score: {:3f}".format(cv.best_score_))
print("Best estimators parameters set:")
best_parameters = cv.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t {}: {}".format(param_name, best_parameters[param_name]))
###Output
_____no_output_____
###Markdown
What are the metric results for our original data without resampling?
###Code
display_results(TARGET_NAMES, y_test, y_rfc_pred, None, None)
###Output
First: overall accuracy score: 0.065190
Classification Report for each target class:
precision recall f1-score support
related 1.00 1.00 1.00 3927
request 0.56 0.40 0.46 1810
offer 0.00 0.00 0.00 22
aid_related 0.59 0.71 0.64 2236
medical_help 0.07 0.00 0.01 291
medical_products 0.20 0.01 0.02 242
search_and_rescue 0.00 0.00 0.00 123
security 0.00 0.00 0.00 67
military 0.00 0.00 0.00 32
child_alone 0.00 0.00 0.00 9
water 0.25 0.01 0.01 421
food 0.31 0.02 0.03 891
shelter 0.06 0.00 0.01 554
clothing 0.29 0.02 0.03 133
money 0.14 0.01 0.02 81
missing_people 0.00 0.00 0.00 57
refugees 0.00 0.00 0.00 94
death 0.00 0.00 0.00 159
other_aid 0.16 0.01 0.02 796
infrastructure_related 0.00 0.00 0.00 172
transport 0.00 0.00 0.00 113
buildings 0.00 0.00 0.00 184
electricity 0.00 0.00 0.00 67
tools 0.00 0.00 0.00 22
hospitals 0.00 0.00 0.00 36
shops 0.00 0.00 0.00 19
aid_centers 0.00 0.00 0.00 38
other_infrastructure 0.00 0.00 0.00 93
weather_related 0.58 0.26 0.36 1090
floods 0.00 0.00 0.00 117
storm 0.21 0.03 0.05 234
fire 0.00 0.00 0.00 27
earthquake 0.77 0.26 0.39 695
cold 0.00 0.00 0.00 38
other_weather 0.10 0.01 0.02 115
direct_report 0.54 0.40 0.46 1813
micro avg 0.73 0.44 0.55 16818
macro avg 0.16 0.09 0.10 16818
weighted avg 0.54 0.44 0.46 16818
samples avg 0.78 0.59 0.57 16818
###Markdown
This kind of behaviour was expected: the dataset is imbalanced and, in the output vector of each message, most of the target label values are 0 and only a few are 1, so the vector is not a dense one. The accuracy metric is not an appropriate measure to evaluate model performance on such a dataset. A model could classify all instances as part of the majority class and treat the minority class targets as noise. Accuracy alone cannot evaluate the model performance of a multi-class dataset with multi-output vectors. Additionally, in this classification report the metrics are often not reliable because they are set to 0.0 when a class has no (correctly) predicted samples. Where values are available, precision is often higher than recall; in other words, we have a high rate of false negatives (items wrongly classified as not being part of the specific target classes). A huge amount of the token inputs are noise features, not associated with the target response classes. Mainly for support values >1000 appropriate F1-score values (>10%) exist; earthquake is an exception with a lower support but a good score. This applies to the following target features: request, aid_related, weather_related, earthquake and direct_report. And as we know from the ETL pipeline, some target features are correlated. In other words, we start to improve the model by using cross-validated hyperparameters. What are the metric results for our resampled data?
###Code
display_results(TARGET_NAMES, y_test, y_rfc_pred_res, None, None)
###Output
First: overall accuracy score: 0.050675
Classification Report for each target class:
precision recall f1-score support
related 1.00 1.00 1.00 3927
request 0.57 0.36 0.44 1810
offer 0.00 0.00 0.00 22
aid_related 0.58 0.77 0.66 2236
medical_help 0.33 0.00 0.01 291
medical_products 0.00 0.00 0.00 242
search_and_rescue 0.00 0.00 0.00 123
security 0.00 0.00 0.00 67
military 0.00 0.00 0.00 32
child_alone 0.00 0.00 0.00 9
water 0.30 0.01 0.01 421
food 0.26 0.01 0.02 891
shelter 0.40 0.00 0.01 554
clothing 0.00 0.00 0.00 133
money 0.00 0.00 0.00 81
missing_people 0.00 0.00 0.00 57
refugees 0.00 0.00 0.00 94
death 0.00 0.00 0.00 159
other_aid 0.23 0.01 0.03 796
infrastructure_related 0.00 0.00 0.00 172
transport 0.00 0.00 0.00 113
buildings 0.00 0.00 0.00 184
electricity 0.00 0.00 0.00 67
tools 0.00 0.00 0.00 22
hospitals 0.00 0.00 0.00 36
shops 0.00 0.00 0.00 19
aid_centers 0.00 0.00 0.00 38
other_infrastructure 0.00 0.00 0.00 93
weather_related 0.62 0.25 0.35 1090
floods 0.00 0.00 0.00 117
storm 0.20 0.00 0.01 234
fire 0.00 0.00 0.00 27
earthquake 0.79 0.27 0.41 695
cold 0.00 0.00 0.00 38
other_weather 0.00 0.00 0.00 115
direct_report 0.52 0.43 0.47 1813
micro avg 0.73 0.45 0.56 16818
macro avg 0.16 0.09 0.09 16818
weighted avg 0.55 0.45 0.46 16818
samples avg 0.77 0.59 0.57 16818
###Markdown
Comparing the F1 score values for each class of both trained models leads to the conclusion that for this dataset the calculated oversampling is no improvement. After resampling, the label counts of 0 and 1 for each target class still look imbalanced and there are still target features with a very low score. The idea behind resampling was that a hybrid method (resampling first and then using an ensemble classification model) would be less prone to imbalanced data and would lead to better prediction results. So, this particular oversampling calculation is not good, but there are better resampling methods that could achieve the desired result; we use some of them in the next chapters. 6. Improve your model. We use grid search to find better parameters for our model.
###Code
pipeline.get_params()
# specify parameters for grid search
rfc_param_grid = {
'features__text_pipeline__vect__ngram_range': [(1,2), (1,3)],
'clf__estimator__n_estimators': [200, 500, 1000],
'clf__estimator__max_depth': [10, 20],
'clf__estimator__class_weight': ['balanced']
}
# create grid search object
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV
# cv not higher than 5 buckets, training needs days with cv=10 if e.g. amazon AWS EC2 service is not available
# n_jobs=-1 uses all cores; set n_jobs=1 if the cloud service throws a TerminatedWorkerError
# for scoring and refit see: https://stackoverflow.com/questions/57591311/combination-of-gridsearchcvs-refit-and-scorer-unclear
#scoring = {'f1': make_scorer(f1_score, average="samples"), 'Accuracy': make_scorer(accuracy_score)}
grid_cv = GridSearchCV(pipeline, param_grid=rfc_param_grid, n_jobs=-1, cv=5,
return_train_score=True, verbose=2)# scoring = scoring, refit='f1', return_train_score=True, verbose=2)
###Output
_____no_output_____
###Markdown
7. Test your model. Show the accuracy, precision, recall and F-score of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# model = cv
grid_cv.fit(X_train, y_train)
y_rfc_pred2 = grid_cv.predict(X_test)
y_rfc_pred2
print("CV results:")
sorted(grid_cv.cv_results_.keys())
for param_name, param_value in zip(grid_cv.cv_results_.keys(), grid_cv.cv_results_.values()):
print(param_name, "=", param_value, "\n")
type(grid_cv.best_estimator_)
print("Evaluation results for the 5 buckets cross validation tuned 'RandomForestClassifier' estimator:")
display_results(TARGET_NAMES, y_test, y_rfc_pred2, grid_cv, rfc_param_grid)
###Output
Evaluation results for the 5 buckets cross validation tuned 'RandomForestClassifier' estimator:
First: overall accuracy score: 0.035905
Classification Report for each target class:
precision recall f1-score support
related 1.00 1.00 1.00 3927
request 0.51 0.89 0.65 1810
offer 0.00 0.00 0.00 22
aid_related 0.60 0.70 0.65 2236
medical_help 0.00 0.00 0.00 291
medical_products 1.00 0.01 0.02 242
search_and_rescue 0.00 0.00 0.00 123
security 0.00 0.00 0.00 67
military 0.00 0.00 0.00 32
child_alone 0.00 0.00 0.00 9
water 0.16 0.08 0.11 421
food 0.26 0.59 0.36 891
shelter 0.16 0.18 0.17 554
clothing 1.00 0.02 0.03 133
money 0.03 0.01 0.02 81
missing_people 0.00 0.00 0.00 57
refugees 0.00 0.00 0.00 94
death 0.00 0.00 0.00 159
other_aid 0.21 0.38 0.28 796
infrastructure_related 0.00 0.00 0.00 172
transport 0.00 0.00 0.00 113
buildings 0.00 0.00 0.00 184
electricity 0.00 0.00 0.00 67
tools 0.00 0.00 0.00 22
hospitals 0.00 0.00 0.00 36
shops 0.00 0.00 0.00 19
aid_centers 0.00 0.00 0.00 38
other_infrastructure 0.00 0.00 0.00 93
weather_related 0.60 0.28 0.38 1090
floods 1.00 0.01 0.02 117
storm 0.23 0.09 0.13 234
fire 0.00 0.00 0.00 27
earthquake 0.57 0.39 0.46 695
cold 0.00 0.00 0.00 38
other_weather 0.00 0.00 0.00 115
direct_report 0.49 0.92 0.64 1813
micro avg 0.56 0.62 0.59 16818
macro avg 0.22 0.15 0.14 16818
weighted avg 0.55 0.62 0.55 16818
samples avg 0.59 0.72 0.57 16818
---- Best Parameters: ----
Best score: 0.057045
Best estimators parameters set:
clf__estimator__class_weight: balanced
clf__estimator__max_depth: 20
clf__estimator__n_estimators: 1000
features__text_pipeline__vect__ngram_range: (1, 3)
###Markdown
The evaluation result of the RandomForestClassifier with tuned hyperparameters is better, even though there are still a lot of categories set to 0.0. Some recall values of specific target features are better. If the recall of the minority target classes is very low, it shows that the model is still biased towards the majority classes. This issue is reduced as well, but this is still not the best model. With this approach, target features with support values of roughly >400 reach appropriate F1-score values (>10%). This applies to the following target features: request, direct_report, aid_related, earthquake, weather_related, food, other_aid, shelter, storm and water. Additionally, the weighted avg F1 value is now 55%, while the samples avg F1 value is still the same (57%). Furthermore, keep in mind that some target features are not disaster related but document type related, like 'direct_report' or 'request'. Other target features deliver no value for the prediction task: 'related' is always set to 1 for a disaster message, and 'child_alone' was originally set to 0 for all rows (no message had been labelled with this target) and the existing training examples were changed manually during the ETL pipeline activities. Nevertheless, there are not enough data points for this target to make a good prediction. 8. Try improving your model further. Here are a few ideas: * try other machine learning algorithms * add other features besides the TF-IDF. **First**, we try out other machine learning algorithms, tuned by cross validation, to compare their prediction results. Other estimator models for the requested `MultiOutputClassifier` are: - `KNeighborsClassifier` with its default parameters: (n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs). According to [KNN with TF-IDF Based Framework for Text Categorization](https://core.ac.uk/download/pdf/82438337.pdf) by Bruno Trstenjak, Sasa Mikac and Dzenana Donko in '24th DAAAM International Symposium on Intelligent Manufacturing and Automation, 2013', "The algorithm assumes that it is possible to classify documents in the Euclidean space as points. Euclidean distance is the distance between two points in Euclidean space." But in [Effects of Distance Measure Choice on KNN Classifier Performance - A Review](https://arxiv.org/pdf/1708.04321.pdf) by V. B. Surya Prasath et al., 29 Sept. 2019, chapter '2.1. Brief overview of KNN classifier' mentions 4 disadvantages of KNN. Determining a proper distance metric is one of them. Because a particular distance metric is problem and dataset dependent, we first try the Euclidean default of the KNN classifier and afterwards other ones. - `AdaBoostClassifier` default values are: class sklearn.ensemble.AdaBoostClassifier(base_estimator=None, n_estimators=50, learning_rate=1.0, algorithm='SAMME.R', random_state=None). As stated in the scikit-learn [documentation](https://scikit-learn.org/stable/modules/neighbors.html#classification), "scikit-learn implements two different nearest neighbors classifiers." One of them is the KNeighborsClassifier, which implements learning based on the k nearest neighbors of each query point, where k is an integer value specified by the user. **Second**, because it is an imbalanced dataset we could do a balancing before classification. The category classes with low numbers of observations are outnumbered. So, the dataset is highly skewed.
To create a balanced dataset several strategies exist: - Undersampling the majority classes - Oversampling the minority classes - Combining over- and under-sampling - Creating ensemble balanced sets. But keep in mind that minority class oversampling, done before cross-validation, can result in overfitting problems. Therefore we tried to use the 'imbalanced-learn' package to make our dataset more balanced. Note: for the balancing activities the specific scikit package 'imbalanced-learn' is imported. For combining the strategies we implement a naive random oversampling of the minority classes. For undersampling the package can be used as well, creating the pipeline with `PipelineImb`. The pipeline itself includes the class `RandomUnderSampler` directly before the MultiOutputClassifier to equalize the number of samples in all classes before the training. Another possible approach is using the `SMOTETomek` class directly on the training dataset before classification. But using this package throws the following ValueError: 'Imbalanced-learn currently supports binary, multiclass and binarized encoded multiclasss targets. Multilabel and multioutput targets are not supported.' So, the associated package classes do not support the multi-target classification with multiple outputs that we need for our project. Therefore this coding was removed after the experiment. Another resampling technique is `cross-validation`, a method that repeatedly creates additional training samples from the original training dataset to obtain additional fit information about the selected model. It creates an additional model validation set. The prediction model is fitted on the remaining training set and afterwards makes its predictions on the validation set. The calculated validation error rate is an estimate of the dataset's test error rate. Specific cross validation strategies exist; we use `k-fold cross-validation`, which divides the training set into k non-overlapping groups, called folds. One of these folds acts as the validation set and the rest is used for training. This process is repeated k times, each time with a different fold selected as validation set. The k-fold cross-validation estimate is calculated by averaging the k individual estimates (a minimal k-fold sketch follows after this discussion). For k we use 5 instead of 10 because of the time-consuming calculations. According to the [paper](https://arxiv.org/ftp/arxiv/papers/1810/1810.11612.pdf) Handling Imbalanced Dataset in Multi-label Text Categorization using Bagging and Adaptive Boosting of 27 October 2018 by Genta Indra Winata and Masayu Leylia Khodra, regarding new data, it is more appropriate to balance the dataset on the algorithm level instead of the data level to avoid overfitting. The algorithm "approach modifies algorithm by adjusting weight or cost of various classes." So, the `AdaBoostClassifier` is an ensemble method using a boosting process to optimise weights. We will try this estimator as well for the MultiOutputClassifier. The AdaBoostClassifier uses the DecisionTreeClassifier as its own base estimator. The tree parameters are changed in the parameter grid to improve the imbalanced data situation. Weak learners are boosted to be stronger learners and the results are aggregated at the end. If the usage of the mentioned specific library is not possible for our task, what could we do instead to get an appropriate input for the classifier model?
We do feature engineering. Another option is a `feature selection` approach which can be applied after the feature extraction of the `TfidfVectorizer`, which creates [feature vectors](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer). Additionally, scikit-learn offers the package [feature decomposition](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition) to reduce the complexity of the features. With its help a subsampling is added: - For the sparse matrix delivered by the `TfidfVectorizer` instance we use the 3000 most frequent text features, each feature token has to appear at least 2 times, and the n-gram range is tuned during the grid search. The importance of a token increases proportionally to the number of times it appears in the disaster messages. - The feature relationships of the sparse matrix are handled with `TruncatedSVD` for latent semantic analysis (LSA). There, the number of components is evaluated via grid search hyperparameter tuning. Afterwards we have to normalise again.
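As a minimal sketch of the k-fold idea (tiny toy index array, illustration only):
###Code
# Toy illustration of 5-fold cross-validation: every sample is used for
# validation exactly once, the remaining folds are used for training.
import numpy as np
from sklearn.model_selection import KFold

toy_samples = np.arange(10)
kfold = KFold(n_splits=5, shuffle=True, random_state=FIXED_SEED)
for fold_no, (train_idx, val_idx) in enumerate(kfold.split(toy_samples)):
    print("fold", fold_no, "train indices:", train_idx, "validation indices:", val_idx)
###Output
_____no_output_____
###Markdown
The commented-out cell below documents the `SMOTETomek` attempt that is not applicable to multi-output targets: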
###Code
# This resampling with imbalanced package is not possible:
# The following ValueError is thrown:
# Imbalanced-learn currently supports binary, multiclass and binarized encoded multiclasss targets.
# Multilabel and multioutput targets are not supported.
# smote_tomek = SMOTETomek(random_state=FIXED_SEED)
# X_train_res, y_train_res = smote_tomek.fit_sample(X_train, y_train)
#print('After resampling, shape of train_X: {}'.format(X_train_res.shape))
#print('After resampling, shape of train_y: {} \n'.format(y_train_res.shape))
#print("After resampling, label counts '1': {}".format(sum(y_train_res==1)))
#print("After resampling, label counts '0': {}".format(sum(y_train_res==0)))
def build_model(model_type, params):
'''
input:
model_type - the estimator model used for the MultiOutputClassifier
params - the estimator model parameter grid used for the GridSearchCV
'''
# TfidfVectorizer, by default: use_idf=True, norm=’l2’
    # TruncatedSVD: for LSA an n_components of 100 is recommended, but it is stated:
# Desired dimensionality of output data. Must be strictly less than the number of features.
# We have 36 target categories. Some of them are 'useless'. We want to know the prio list of all.
    # The max features are 3000 tokens, so we use a smaller value for n_components for LSA.
# A token is part of the result if it appears at least 2 times
#
# For RandomizedSearchCV: for RandomForestClassifier, we have 8 parameters => n_iter=8
pipeline2 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('tfidf', TfidfVectorizer(tokenizer=tokenize, sublinear_tf=True,
max_features=3000, min_df=2)),
('best', TruncatedSVD(random_state=FIXED_SEED)),
('normalizer', Normalizer(copy=False))
]))
])),
('clf', MultiOutputClassifier(model_type))
])
# the higher the verbose number the more information is thrown
cv = GridSearchCV(pipeline2, param_grid=params, return_train_score=True, n_jobs=1, cv=5, verbose=2)
return cv
def build_model_randomcv(model_type, params, cv_iter):
'''
input:
model_type - the estimator model used for the MultiOutputClassifier
params - the estimator model parameter grid used for the GridSearchCV
'''
# TfidfVectorizer, by default: use_idf=True, norm=’l2’
    # TruncatedSVD: for LSA an n_components of 100 is recommended, but it is stated:
# Desired dimensionality of output data. Must be strictly less than the number of features.
# We have 36 target categories. Some of them are 'useless'. We want to know the prio list of all.
    # The max features are 3000 tokens, so we use a smaller value for n_components for LSA.
# A token is part of the result if it appears at least 2 times
#
# For RandomizedSearchCV:
# RandomForestClassifier: we have 8 parameters => n_iter=8
# AdaBoostClassifier: we have parameters => n_iter=
pipeline2 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('tfidf', TfidfVectorizer(tokenizer=tokenize, sublinear_tf=True,
max_features=3000, min_df=2)),
('best', TruncatedSVD(random_state=FIXED_SEED)),
('normalizer', Normalizer(copy=False))
]))
])),
('clf', MultiOutputClassifier(model_type))
])
# the higher the verbose number the more information is thrown
cv = RandomizedSearchCV(pipeline2, param_distributions=params, n_jobs=1, cv=5, n_iter=cv_iter,
return_train_score=True, verbose=2, random_state=FIXED_SEED)
return cv
###Output
_____no_output_____
###Markdown
We try this new pipeline, including feature selection and decomposition, first with the other mentioned classifiers and afterwards with an additionally tuned RandomForestClassifier. This simple `KNN` parameter grid needs a long time for calculation, i.e. the computational time cost is high. As stated in the mentioned KNN paper from Sept. 2019, Euclidean distance is not an appropriate metric if the feature dimension is high. This is the case with a high n_components value of >=1000. So, we try 'best' n_components=100 and 500 instead of 1000 or higher (note: in the scikit-learn documentation 100 is proposed for LSA tasks) and do other parameter modifications.
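A small toy illustration (hand-made vectors, not the real TF-IDF matrix) of why the Euclidean and cosine metrics can rank neighbours differently:
###Code
# Toy vectors: b has the same orientation as a but a larger magnitude,
# c has a different orientation but a magnitude similar to a.
import numpy as np
from sklearn.metrics.pairwise import cosine_distances, euclidean_distances

a = np.array([[1.0, 1.0, 0.0]])
b = np.array([[3.0, 3.0, 0.0]])   # same direction as a
c = np.array([[1.0, 0.0, 1.0]])   # different direction

print("euclidean a-b:", euclidean_distances(a, b)[0, 0])  # ~2.83, penalises the larger magnitude
print("euclidean a-c:", euclidean_distances(a, c)[0, 0])  # ~1.41, c looks 'closer'
print("cosine    a-b:", cosine_distances(a, b)[0, 0])     # 0.0, same direction
print("cosine    a-c:", cosine_distances(a, c)[0, 0])     # 0.5, different direction
###Output
_____no_output_____
###Markdown
The metrics supported by the brute-force KNN implementation can be listed directly: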
###Code
sorted(sklearn.neighbors.VALID_METRICS['brute'])
# create param grids for the models
# KNeighborsClassifier
# according http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.5135&rep=rep1&type=pdf
# cosine distance metric is commonly used,
# compared are the cosine angles between two documents/vectors
# (the term frequencies in different documents collected as metrics).
# This particular metric is used when the magnitude between vectors does not matter but the orientation.
#
# The hamming distance tells us about the differences of compared strings of equal length.
# It is defined as the amount of positions having different characters or symbols.
knn_param_grid = {
'features__text_pipeline__tfidf__ngram_range': [(1, 2), (1,3)],
'features__text_pipeline__best__n_components':[100, 500],
'clf__estimator__n_neighbors': [1, 3],
'clf__estimator__metric': ['euclidean', 'cosine', 'hamming'],
'clf__estimator__weights': ['uniform', 'distance']
}
# according scikitlearn: we have a sparse matrix therefore use algorithm 'brute'
print("\n----- KNeighborsClassifier with feature engineering -----")
print("Build best model: ...")
cv_knn_model = build_model(KNeighborsClassifier(n_jobs=1, algorithm='brute'), knn_param_grid)
print("Train model: ...")
cv_knn_model.fit(X_train, y_train)
y_knn_pred = cv_knn_model.predict(X_test)
type(cv_knn_model.estimator)
type(cv_knn_model.estimator['features'])
type(cv_knn_model.estimator['features'].get_params()['transformer_list'][0])
type(cv_knn_model.estimator['features'].get_params()['transformer_list'][0][1])
cv_knn_model.estimator['features'].get_params()['transformer_list'][0][1]
print("Best score: %0.3f" % cv_knn_model.best_score_)
print("Best parameters set:")
best_parameters = cv_knn_model.best_estimator_.get_params()
for param_name in sorted(knn_param_grid.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))
print("\nModel evaluation on tuned KNeighborsClassifier ...")
display_results(TARGET_NAMES, y_test, y_knn_pred, cv_knn_model, knn_param_grid)
###Output
Model evaluation on tuned KNeighborsClassifier ...
First: overall accuracy score: 0.089127
Classification Report for each target class:
precision recall f1-score support
related 1.00 1.00 1.00 3927
request 0.53 0.43 0.47 1810
offer 0.00 0.00 0.00 22
aid_related 0.58 0.60 0.59 2236
medical_help 0.10 0.03 0.04 291
medical_products 0.15 0.04 0.06 242
search_and_rescue 0.00 0.00 0.00 123
security 0.00 0.00 0.00 67
military 0.00 0.00 0.00 32
child_alone 0.00 0.00 0.00 9
water 0.11 0.03 0.05 421
food 0.24 0.12 0.16 891
shelter 0.13 0.05 0.07 554
clothing 0.07 0.01 0.01 133
money 0.06 0.02 0.04 81
missing_people 0.00 0.00 0.00 57
refugees 0.00 0.00 0.00 94
death 0.06 0.01 0.02 159
other_aid 0.19 0.12 0.15 796
infrastructure_related 0.04 0.01 0.02 172
transport 0.04 0.01 0.01 113
buildings 0.02 0.01 0.01 184
electricity 0.00 0.00 0.00 67
tools 0.00 0.00 0.00 22
hospitals 0.00 0.00 0.00 36
shops 0.00 0.00 0.00 19
aid_centers 0.00 0.00 0.00 38
other_infrastructure 0.00 0.00 0.00 93
weather_related 0.43 0.35 0.38 1090
floods 0.03 0.01 0.01 117
storm 0.15 0.09 0.11 234
fire 0.00 0.00 0.00 27
earthquake 0.53 0.34 0.41 695
cold 0.00 0.00 0.00 38
other_weather 0.04 0.01 0.01 115
direct_report 0.50 0.41 0.45 1813
micro avg 0.62 0.46 0.53 16818
macro avg 0.14 0.10 0.11 16818
weighted avg 0.51 0.46 0.48 16818
samples avg 0.71 0.60 0.54 16818
---- Best Parameters: ----
Best score: 0.076208
Best estimators parameters set:
clf__estimator__metric: euclidean
clf__estimator__n_neighbors: 3
clf__estimator__weights: distance
features__text_pipeline__best__n_components: 100
features__text_pipeline__tfidf__ngram_range: (1, 2)
###Markdown
Can we improve the hyperparameter settings for the KNN classifier? By default, with p=2 the Euclidean metric is used.
###Code
better_knn_param_grid = {
'features__text_pipeline__tfidf__ngram_range': [(1, 2)],
'features__text_pipeline__best__n_components':[35, 50, 100],
'clf__estimator__n_neighbors': [5, 7],
'clf__estimator__weights': ['distance', 'uniform']
}
# according scikitlearn: we have a sparse matrix therefore use algorithm 'brute'
print("\n----- KNeighborsClassifier with feature engineering, better param grid -----")
print("Build best model: ...")
better_cv_knn_model = build_model(KNeighborsClassifier(n_jobs=1, algorithm='brute'), better_knn_param_grid)
print("Train model: ...")
better_cv_knn_model.fit(X_train, y_train)
better_y_knn_pred = better_cv_knn_model.predict(X_test)
print("\nModel evaluation on second better tuned KNeighborsClassifier ...")
display_results(TARGET_NAMES, y_test, better_y_knn_pred, better_cv_knn_model, better_knn_param_grid)
###Output
Model evaluation on second better tuned KNeighborsClassifier ...
First: overall accuracy score: 0.082506
Classification Report for each target class:
precision recall f1-score support
related 1.00 1.00 1.00 3927
request 0.54 0.42 0.47 1810
offer 0.00 0.00 0.00 22
aid_related 0.58 0.64 0.61 2236
medical_help 0.20 0.01 0.03 291
medical_products 0.18 0.01 0.02 242
search_and_rescue 0.00 0.00 0.00 123
security 0.00 0.00 0.00 67
military 0.00 0.00 0.00 32
child_alone 0.00 0.00 0.00 9
water 0.14 0.01 0.01 421
food 0.24 0.07 0.11 891
shelter 0.10 0.01 0.03 554
clothing 0.00 0.00 0.00 133
money 0.00 0.00 0.00 81
missing_people 0.00 0.00 0.00 57
refugees 0.00 0.00 0.00 94
death 0.22 0.01 0.02 159
other_aid 0.19 0.06 0.09 796
infrastructure_related 0.00 0.00 0.00 172
transport 0.00 0.00 0.00 113
buildings 0.00 0.00 0.00 184
electricity 0.00 0.00 0.00 67
tools 0.00 0.00 0.00 22
hospitals 0.00 0.00 0.00 36
shops 0.00 0.00 0.00 19
aid_centers 0.00 0.00 0.00 38
other_infrastructure 0.00 0.00 0.00 93
weather_related 0.46 0.32 0.38 1090
floods 0.00 0.00 0.00 117
storm 0.17 0.06 0.08 234
fire 0.00 0.00 0.00 27
earthquake 0.67 0.32 0.44 695
cold 0.00 0.00 0.00 38
other_weather 0.00 0.00 0.00 115
direct_report 0.51 0.39 0.44 1813
micro avg 0.68 0.45 0.54 16818
macro avg 0.14 0.09 0.10 16818
weighted avg 0.52 0.45 0.47 16818
samples avg 0.75 0.59 0.56 16818
---- Best Parameters: ----
Best score: 0.076781
Best estimators parameters set:
clf__estimator__n_neighbors: 5
clf__estimator__weights: uniform
features__text_pipeline__best__n_components: 100
features__text_pipeline__tfidf__ngram_range: (1, 2)
###Markdown
The result of this KNN training and prediction is still not good for the single categories. Only the categories with the highest number of samples are predicted properly. **Now**, we try the other ensemble model for prediction, the `AdaBoostClassifier`. AdaBoost is an iterative ensemble method: it builds a strong classifier by combining multiple poorly performing classifiers, assigning each a weight and re-weighting the training samples in each iteration to minimise the training error. Therefore it deals with imbalanced datasets more appropriately than e.g. KNN, so we expect better prediction results.
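A minimal toy sketch (single-label synthetic data, not the disaster messages) of how AdaBoost combines weak learners and assigns each of them a weight:
###Code
# Toy single-label example only: AdaBoost boosts shallow decision trees and
# gives each weak learner a weight; the weighted vote forms the final prediction.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X_toy, y_toy = make_classification(n_samples=200, n_features=10,
                                   weights=[0.9, 0.1],   # imbalanced toy labels
                                   random_state=FIXED_SEED)
ada_toy = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
                             n_estimators=10, random_state=FIXED_SEED)
ada_toy.fit(X_toy, y_toy)
print("number of weak learners :", len(ada_toy.estimators_))
print("first estimator weights :", ada_toy.estimator_weights_[:3])
###Output
_____no_output_____
###Markdown
The grid below tunes this classifier inside the multi-output pipeline: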
###Code
# ensemble model AdaBoostClassifier
# class sklearn.ensemble.AdaBoostClassifier(base_estimator=None, n_estimators=50, learning_rate=1.0,
# algorithm='SAMME.R', random_state=None)
# base_estimator is by default DecisionTreeClassifier(max_depth=1), changed it
ada_param_grid = {
'features__text_pipeline__tfidf__ngram_range': [(1,2), (1,3)],
'features__text_pipeline__best__n_components':[35, 50, 100],
'clf__estimator__base_estimator__max_depth': [1, 3],
'clf__estimator__n_estimators': [50, 100]
}
print("\n----- AdaBoostClassifier with feature engineering -----")
print("Build best model: ...")
cv_ada_model = build_model_randomcv(model_type=AdaBoostClassifier(
base_estimator=DecisionTreeClassifier(class_weight='balanced',
random_state=FIXED_SEED),
random_state=FIXED_SEED),
params=ada_param_grid, cv_iter=24)
print("Train model: ...")
cv_ada_model.fit(X_train, y_train)
y_ada_pred = cv_ada_model.predict(X_test)
print("\nModel evaluation on tuned AdaBoostClassifier ...")
display_results(TARGET_NAMES, y_test, y_ada_pred, cv_ada_model, ada_param_grid)
for param_name, param_value in zip(cv_ada_model.cv_results_.keys(), cv_ada_model.cv_results_.values()):
print(param_name, "=", param_value, "\n")
###Output
mean_fit_time = [ 500.81215472 518.36387935 661.96901188 678.1040729 1236.76648321
1201.19012208 827.04902592 939.91503806 1260.20814085 1284.69999075
2389.56403565 2387.29037957 1182.47269912 1201.14438825 1652.31215382
1686.02609415 3106.19282417 3212.97365623 2355.07406769 2338.73415484
3277.80501637 3249.80948391 6335.19864755 6429.43141322]
std_fit_time = [ 8.47003324 2.52748568 42.09369518 9.42296901 6.65352007
48.74900499 139.98581277 6.2688438 52.92003108 5.07312582
54.70592671 42.07549276 43.46590703 16.46598529 59.09347479
13.28271674 177.09868614 80.95115605 35.47786403 72.78984239
74.35818212 95.04932708 279.46561469 69.63976451]
mean_score_time = [21.3091783 18.82001257 19.09625058 18.72874808 24.22245455 24.28735037
23.99989381 25.65693498 26.947825 25.40314484 35.97902799 37.59176879
18.39714007 21.13625941 22.5374536 18.29763508 22.71581182 24.37856922
26.40464034 25.89849601 23.85839944 26.32019887 31.92602406 38.29589491]
std_score_time = [1.89919869 3.62561075 3.16507513 3.57525825 4.2688432 2.29873204
4.25342429 3.03839586 2.05738499 4.84414961 2.48898048 2.43463747
3.71156766 1.54657689 1.51780125 3.81152132 6.14831158 4.9139368
2.48049165 1.64306652 7.65718654 2.87478328 9.67506405 3.04638813]
param_features__text_pipeline__tfidf__ngram_range = [(1, 2) (1, 3) (1, 2) (1, 3) (1, 2) (1, 3) (1, 2) (1, 3) (1, 2) (1, 3)
(1, 2) (1, 3) (1, 2) (1, 3) (1, 2) (1, 3) (1, 2) (1, 3) (1, 2) (1, 3)
(1, 2) (1, 3) (1, 2) (1, 3)]
param_features__text_pipeline__best__n_components = [35 35 50 50 100 100 35 35 50 50 100 100 35 35 50 50 100 100 35 35 50 50
100 100]
param_clf__estimator__n_estimators = [50 50 50 50 50 50 100 100 100 100 100 100 50 50 50 50 50 50 100 100 100
100 100 100]
param_clf__estimator__base_estimator__max_depth = [1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 3 3 3]
params = [{'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 1}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 50, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 
'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 2), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 3}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 100, 'clf__estimator__base_estimator__max_depth': 3}]
split0_test_score = [0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.00031827 0. 0.00031827 0.00063654 0.00063654
0.00095481 0.00095481 0.00413749 0.00381922 0.0050923 0.00318269]
split1_test_score = [0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.00031827 0. 0. 0.00095481 0.00063654 0.00095481
0.00159134 0.00031827 0.00222788 0.00254615 0.00318269 0.00413749]
split2_test_score = [0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.00031837 0. 0. 0. 0.00031837
0.00095511 0.00191022 0.00286533 0.00286533 0.00159185 0.00286533]
split3_test_score = [0. 0. 0. 0. 0.00031837 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.00127348 0.00031837 0.00095511 0.00159185
0.00254696 0.00222859 0.00095511 0.00413881 0.0031837 0.00477555]
split4_test_score = [0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.00063674 0. 0.00095511 0.00063674 0.00063674 0.00063674
0.00350207 0.00095511 0.00254696 0.0031837 0.00509392 0.0063674 ]
mean_test_score = [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
6.36658815e-05 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
1.90997644e-04 1.27331763e-04 4.45661170e-04 4.45661170e-04
5.72992933e-04 8.27656459e-04 1.90997644e-03 1.27331763e-03
2.54663526e-03 3.31062584e-03 3.62895524e-03 4.26561406e-03]
std_test_score = [0. 0. 0. 0. 0.00012734 0.
0. 0. 0. 0. 0. 0.
0.00025468 0.00015594 0.00055508 0.00032459 0.0003119 0.00043183
0.00098649 0.00069757 0.00102636 0.0005905 0.00132923 0.00125115]
rank_test_score = [14 14 14 14 13 14 14 14 14 14 14 14 11 12 9 9 8 7 5 6 4 3 2 1]
split0_train_score = [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
7.95861520e-05 0.00000000e+00 7.95861520e-05 7.95861520e-05
7.95861520e-05 0.00000000e+00 7.95861520e-05 1.59172304e-04
2.22841226e-03 1.91006765e-03 4.45682451e-03 3.02427378e-03
7.56068444e-03 7.71985674e-03 1.80660565e-02 2.10903303e-02
3.26303223e-02 3.28690808e-02 6.17588540e-02 6.42260247e-02]
split1_train_score = [7.95861520e-05 0.00000000e+00 1.59172304e-04 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 7.95861520e-05
0.00000000e+00 1.59172304e-04 0.00000000e+00 7.95861520e-05
2.78551532e-03 2.46717071e-03 4.13847990e-03 4.05889375e-03
8.03820135e-03 8.19737366e-03 2.14086749e-02 2.14882610e-02
3.14365300e-02 3.19936331e-02 6.28730601e-02 5.99283725e-02]
split2_train_score = [0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
1.59159637e-04 0.00000000e+00 7.95798186e-05 0.00000000e+00
7.95798186e-05 7.95798186e-05 7.95798186e-05 7.95798186e-05
3.02403311e-03 3.26277256e-03 4.21773038e-03 4.21773038e-03
1.00270571e-02 1.07432755e-02 2.77733567e-02 2.77733567e-02
3.39805825e-02 3.50151202e-02 6.85182238e-02 7.25767945e-02]
split3_train_score = [1.59159637e-04 0.00000000e+00 7.95798186e-05 0.00000000e+00
0.00000000e+00 1.59159637e-04 1.59159637e-04 7.95798186e-05
0.00000000e+00 3.18319274e-04 7.95798186e-04 3.18319274e-04
4.05857075e-03 3.81983129e-03 3.58109184e-03 6.04806621e-03
9.78831768e-03 9.94747732e-03 2.35556263e-02 2.52268025e-02
3.11157091e-02 3.01607512e-02 6.60512494e-02 6.85182238e-02]
split4_train_score = [0.00000000e+00 0.00000000e+00 0.00000000e+00 1.59159637e-04
7.95798186e-05 0.00000000e+00 1.59159637e-04 7.95798186e-05
7.95798186e-05 1.59159637e-04 5.57058730e-04 3.97899093e-04
3.58109184e-03 3.66067165e-03 5.80932675e-03 5.80932675e-03
1.11411746e-02 1.06636957e-02 2.12478116e-02 2.42718447e-02
3.53334394e-02 3.56517587e-02 7.19401560e-02 6.62104090e-02]
mean_train_score = [4.77491578e-05 0.00000000e+00 4.77504245e-05 3.18319274e-05
6.36651215e-05 3.18319274e-05 9.54970490e-05 6.36663882e-05
4.77491578e-05 1.43246207e-04 3.02404577e-04 2.06911328e-04
3.13552465e-03 3.02410277e-03 4.44069068e-03 4.63165818e-03
9.31108704e-03 9.45433578e-03 2.24103052e-02 2.39701190e-02
3.28993167e-02 3.31380688e-02 6.62283086e-02 6.62919649e-02]
std_train_score = [6.36644882e-05 0.00000000e+00 6.36682883e-05 6.36638548e-05
5.95524218e-05 6.36638548e-05 5.95517447e-05 3.18331942e-05
3.89870242e-05 1.05574942e-04 3.15921966e-04 1.29299718e-04
6.33771884e-04 7.27545828e-04 7.42075175e-04 1.13808567e-03
1.32466826e-03 1.26138629e-03 3.20316197e-03 2.47336655e-03
1.58034479e-03 2.00441843e-03 3.71843998e-03 4.22434329e-03]
###Markdown
Looking at the evaluation results, the KNeighborsClassifier model is not acceptable when judged on the individual target categories: the Hamming distance metric added no value, and the Euclidean metric remained the best choice. Compared to KNN, the AdaBoostClassifier handles the imbalanced dataset much better and yields clearly better metric values for the individual target categories; so far it is the best model we have evaluated. Would feature selection and decomposition also improve the RandomForestClassifier? Because of the long computation times we use RandomizedSearchCV, accepting that it may perform slightly worse than an exhaustive grid search.
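###Markdown
The helper `build_model_randomcv` is defined earlier in the notebook and not shown in this excerpt. As a hedged sketch of what such a helper plausibly does - a `RandomizedSearchCV` wrapped around the feature-union/classifier pipeline whose parameter keys appear in the grids above - the following outline may help; `build_pipeline` is a hypothetical factory function and the keyword values are assumptions inferred from the printed `cv_results_` (five splits, train scores included):
###Code
# Hedged sketch; the real build_model_randomcv defined earlier may differ.
from sklearn.model_selection import RandomizedSearchCV

def build_model_randomcv_sketch(model_type, params, cv_iter):
    # build_pipeline is hypothetical: it would assemble the FeatureUnion /
    # MultiOutputClassifier pipeline whose parameter names
    # ('features__text_pipeline__...', 'clf__estimator__...') are used above.
    pipeline = build_pipeline(model_type)
    return RandomizedSearchCV(
        pipeline,
        param_distributions=params,   # the same dictionaries as the grids above
        n_iter=cv_iter,               # number of sampled settings; if it equals the
                                      # full grid size (e.g. 24 combinations for the
                                      # AdaBoost grid), the whole grid is covered
        cv=5,                         # assumed: five split*_test_score columns appear
        random_state=FIXED_SEED,      # FIXED_SEED is defined earlier in the notebook
        return_train_score=True)      # train scores appear in cv_results_ above
###Output
_____no_output_____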
###Code
# for the other models, n_components=100 was the best hyperparameter for TruncatedSVD
better_rfc_param_grid = {
'features__text_pipeline__tfidf__ngram_range': [(1,3)],
'features__text_pipeline__best__n_components':[35, 50, 100],
'clf__estimator__n_estimators': [200, 600, 800],
'clf__estimator__max_depth': [20],
'clf__estimator__class_weight': ['balanced']
}
print("\n----- RandomForestClassifier with feature engineering and modified param grid -----")
print("Build best model: ...")
cv_better_rfc_model = build_model_randomcv(model_type=RandomForestClassifier(n_jobs=1, random_state=FIXED_SEED),
params=better_rfc_param_grid, cv_iter=8)
print("Train model: ...")
cv_better_rfc_model.fit(X_train, y_train)
y_better_rfc_pred = cv_better_rfc_model.predict(X_test)
print("\nModel evaluation on tuned RandomForestClassifier with feature engineering...")
display_results(TARGET_NAMES, y_test, y_better_rfc_pred, cv_better_rfc_model, better_rfc_param_grid)
for param_name, param_value in zip(cv_better_rfc_model.cv_results_.keys(),
cv_better_rfc_model.cv_results_.values()):
print(param_name, "=", param_value, "\n")
###Output
mean_fit_time = [ 6766.61998363 1934.81381388 7879.77318654 1270.81696072
10694.54701014 2624.38108959 5573.72268753 4256.54027619]
std_fit_time = [1137.62940298 23.27127936 121.31311604 182.72848141 192.71065327
72.68445393 338.75292779 144.00183303]
mean_score_time = [406.60862017 35.02166605 136.52052999 33.98980808 278.84143052
38.55520611 130.07929959 133.5517818 ]
std_score_time = [315.68765054 4.12343799 6.40957199 8.13886705 99.33203965
1.16232684 9.0522394 14.06957846]
param_features__text_pipeline__tfidf__ngram_range = [(1, 3) (1, 3) (1, 3) (1, 3) (1, 3) (1, 3) (1, 3) (1, 3)]
param_features__text_pipeline__best__n_components = [50 50 100 35 100 100 50 35]
param_clf__estimator__n_estimators = [800 200 600 200 800 200 600 600]
param_clf__estimator__max_depth = [20 20 20 20 20 20 20 20]
param_clf__estimator__class_weight = ['balanced' 'balanced' 'balanced' 'balanced' 'balanced' 'balanced'
'balanced' 'balanced']
params = [{'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 800, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 200, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 600, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 200, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 800, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 100, 'clf__estimator__n_estimators': 200, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 50, 'clf__estimator__n_estimators': 600, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}, {'features__text_pipeline__tfidf__ngram_range': (1, 3), 'features__text_pipeline__best__n_components': 35, 'clf__estimator__n_estimators': 600, 'clf__estimator__max_depth': 20, 'clf__estimator__class_weight': 'balanced'}]
split0_test_score = [0.06015277 0.07129217 0.06142584 0.06015277 0.06078931 0.06747295
0.06015277 0.06269892]
split1_test_score = [0.02991725 0.02864418 0.02259707 0.03723743 0.0222788 0.02705283
0.02991725 0.03150859]
split2_test_score = [0.02451449 0.0286533 0.0222859 0.03724928 0.02069405 0.02769819
0.0254696 0.03438395]
split3_test_score = [0.04266157 0.04393505 0.04106972 0.05380452 0.03884113 0.05380452
0.04202483 0.04839223]
split4_test_score = [0.05666985 0.06049029 0.05762496 0.06844954 0.0582617 0.06494747
0.055078 0.06208214]
mean_test_score = [0.04278347 0.04660343 0.04100083 0.05137837 0.04017317 0.04819507
0.04252881 0.04781308]
std_test_score = [0.01409863 0.01705493 0.01662865 0.01244096 0.01705105 0.01761207
0.01355316 0.01320418]
rank_test_score = [5 4 7 1 8 2 6 3]
split0_train_score = [0.42793474 0.42976522 0.49454835 0.39450856 0.4959809 0.4908078
0.42785515 0.39570235]
split1_train_score = [0.43828094 0.44027059 0.49279745 0.40031834 0.49502587 0.49208118
0.43836053 0.3965778 ]
split2_train_score = [0.46227917 0.45941429 0.54424638 0.44111093 0.54512176 0.53485596
0.46347286 0.44190673]
split3_train_score = [0.47039631 0.466099 0.5441668 0.43577909 0.54392806 0.53915327
0.46928219 0.43530161]
split4_train_score = [0.45551488 0.45384371 0.54432596 0.43466497 0.54384848 0.54050613
0.45877765 0.43522203]
mean_train_score = [0.45088121 0.44987856 0.52401699 0.42127638 0.52478101 0.51948087
0.45154968 0.4209421 ]
std_train_score = [0.01560468 0.01316522 0.02478208 0.01969136 0.02391125 0.02297105
0.01577487 0.02039751]
###Markdown
**Note**: For the RandomForestClassifier, adding feature selection and decomposition improves the prediction results for the individual target categories, and the model is much less biased towards the majority classes. Nevertheless, the `AdaBoostClassifier` still handles the imbalanced dataset much better than all the other model types we tried, so we store it as our pickle file. 9. Export your model as a pickle file Finally, having found the best model from our model selection list, we save it with its best parameters as a pickle file. Pickle is the standard way of serialising objects in Python; with the pickle file we can later deserialise the model and use it to make new predictions.
###Code
def save_model(model, model_filepath):
pickle.dump(model, open(model_filepath, "wb" ))
# see train_classifier.py file
model_filepath = "classifier.pkl"
model = cv_ada_model
print('Saving model...\n MODEL: {}'.format(model_filepath))
save_model(model, model_filepath)
print('Best trained model saved!')
###Output
Saving model...
MODEL: classifier.pkl
Best trained model saved!
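###Markdown
For completeness, a small hedged sketch of the reverse step: loading the pickled model back and using it for a new prediction (this assumes `classifier.pkl` is in the working directory and that the same library versions are available when loading).
###Code
# Hedged sketch: deserialise the stored model and classify a made-up message.
import pickle

with open("classifier.pkl", "rb") as f:
    loaded_model = pickle.load(f)

sample_messages = ["We need water and medical supplies after the storm"]
print(loaded_model.predict(sample_messages))
###Output
_____no_output_____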
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# Import and download nltk package
import nltk
nltk.download('punkt')
nltk.download('wordnet')
# Import libraries
import pandas as pd
from collections import defaultdict
import pickle
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report, fbeta_score, precision_score, recall_score
from sklearn.tree import DecisionTreeClassifier
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sqlalchemy import create_engine
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
# Get table names
table = engine.table_names()
# Read in the sqlite table
df = pd.read_sql('SELECT * FROM {}'.format(table[0]), con=engine)
X = df['message']
Y = df[df.columns[4:]]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
# Create function that tokenizes text input
def tokenize(text):
'''
    Function splitting messages into word tokens and converting them to lower-case lemmas (punctuation tokens are kept as-is)
Args: text = message in form of string
Return: clean_tokens = list of cleaned tokens
'''
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for token in tokens:
clean_token = lemmatizer.lemmatize(token).lower().strip()
clean_tokens.append(clean_token)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# Create pipeline that uses CountVectorizer, a TfidfTransformer and then classifies the message via RandomForestClassifier
# The predictor is supplemented by the MultiOutputClassifier to ensure that multiple target variables are predicted
pipeline_rf_not_opt = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('randomf', MultiOutputClassifier(RandomForestClassifier(n_estimators=10)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# Splitting data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y)
# Training the machine learning pipeline
pipeline_rf_not_opt.fit(X_train, y_train)
y_pred_rf_not_opt = pipeline_rf_not_opt.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each. I decided to display the performance metrics for each model in plot format to make it easier to determine which model has the best scores.
###Code
# Create function to store performance metrics
def performance_metrics(y_pred):
'''
Function to compare the performance metrics of different ML pipelines
Args: y_pred = list of predicted labels
Returns: dictionary with precision, recall and f1_score for each target category
'''
# Convert y_pred from array to dataframe
y_pred = pd.DataFrame(y_pred, columns=df.columns[4:])
# Create dictionary where keys are the target categories and values is a list of the performance metrics
score_dict = defaultdict()
for col in y_pred.columns:
precision = precision_score(y_test[col], y_pred[col])
recall = recall_score(y_test[col], y_pred[col])
f1_score = fbeta_score(y_test[col], y_pred[col], beta=1)
score_dict[col] = [float(precision), float(recall), float(f1_score)]
return score_dict
# Get the performance metrics of random forest classifier whose parameters have not been tuned
random_forest_not_opt = performance_metrics(y_pred_rf_not_opt)
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples.
'precision', 'predicted', average, warn_for)
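###Markdown
A hedged side note on the warnings above: they appear because some categories receive no positive predictions at all, so precision and F-score are undefined and scikit-learn falls back to 0.0 for them. On scikit-learn 0.22 or newer this fallback can be requested explicitly, e.g. `precision_score(y_test[col], y_pred[col], zero_division=0)`; on the older version used here the warnings are expected and the scores remain comparable across models.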
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# Get list of all parameters that can be used for GridSearchCV
pipeline_rf_not_opt.get_params().keys()
# Define parameters to be used for grid search
parameters = {
'vect__max_df': [0.5, 0.7, 0.9, 1],
'tfidf__use_idf': [True, False],
'randomf__estimator__n_estimators': [5, 20, 30],
'randomf__estimator__max_depth': [5, 7, 9, 11],
}
# Instantiate GridSearchCV
cv = GridSearchCV(pipeline_rf_not_opt, param_grid=parameters, cv=3)
# Fit the grid search model and return the predictions for the optimal parameter combination
cv.fit(X_train, y_train)
y_pred_rf_opt = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Get the performance metrics of random forest classifier whose parameters have been tuned
random_forest_opt = performance_metrics(y_pred_rf_opt)
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples.
'precision', 'predicted', average, warn_for)
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# Create alternative pipeline with different machine learning algorithm
pipeline_adaboost = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('adaboost', MultiOutputClassifier(AdaBoostClassifier()))
])
# Training the machine learning pipeline
pipeline_adaboost.fit(X_train, y_train)
y_pred_ada = pipeline_adaboost.predict(X_test)
# Iterate through columns of y_pred and y_test and calculate precision, recall and f1_score for Adaboost classifier
adaboost = performance_metrics(y_pred_ada)
# Create alternative pipeline with onehot encoding
pipeline_binary = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, binary=True)),
('tfidf', TfidfTransformer()),
('adaboost', MultiOutputClassifier(AdaBoostClassifier()))
])
# Training the machine learning pipeline
pipeline_binary.fit(X_train, y_train)
y_pred_binary = pipeline_binary.predict(X_test)
# Iterate through columns of y_pred and y_test and calculate precision, recall and f1_score for onehot encoding
binary = performance_metrics(y_pred_binary)
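# Hedged illustration (not part of the original notebook): what binary=True changes.
# CountVectorizer(binary=True) records only the presence/absence of each term,
# whereas the default records raw term counts.
demo_docs = ["water water food"]
print(CountVectorizer().fit_transform(demo_docs).toarray())             # [[1 2]] -> counts for (food, water)
print(CountVectorizer(binary=True).fit_transform(demo_docs).toarray())  # [[1 1]] -> presence only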
# Define parameters to be used for grid search
parameters_ada = {
'vect__max_df': [0.7, 0.9, 1],
'vect__binary': [True, False], # setting this to True will introduce onehot encoding on the vectorization level
'tfidf__use_idf': [True, False],
'adaboost__estimator__n_estimators': [10, 20, 50],
'adaboost__estimator__learning_rate': [.5, 1],
}
# Instantiate GridSearchCV
cv_ada = GridSearchCV(pipeline_adaboost, param_grid=parameters_ada, cv=3)
# Fit the grid search model and return the predictions for the optimal parameter combination
cv_ada.fit(X_train, y_train)
y_pred_ada_opt = cv_ada.predict(X_test)
# Iterate through columns of y_pred and y_test and calculate precision, recall and f1_score for optimized Adaboost classifier
adaboost_opt = performance_metrics(y_pred_ada_opt)
list_all_pred = [adaboost, random_forest_not_opt, random_forest_opt, binary, adaboost_opt]
list_all_pred_names = ['AdaBoost', 'Random Forest not optimized', 'Random Forest optimized', 'Binarization', 'AdaBoost optimized']
# Create a function that will take a list of the performance scores of all models as an input
# and returns a combined dataframe
def transform_predictions(list_all_pred, names):
'''
Function to transform the prediction outputs of each model and combine them in one dataframe
Args: list_all_pred = list with all prediction dictionaries
names = list of model names
Returns: df_melt = dataframe of all performance scores
'''
# Names of the scores
score = ['precision', 'recall', 'f1_score']
# Convert arrays into dataframes
df_all_models = pd.DataFrame()
for i in range(len(list_all_pred)):
df_temp = pd.DataFrame(list_all_pred[i])
df_temp['score'] = score
df_temp['model'] = names[i]
df_all_models = df_all_models.append(df_temp)
# Melt the dataframe to get a structure that will allow plotting the results in a barchart
df_melt = pd.melt(df_all_models, id_vars=['model', 'score'], var_name='cat_type', value_name='value')
return df_melt
# Create the melted dataframe that will serve as an input to the plotting function below
df_melt = transform_predictions(list_all_pred, list_all_pred_names)
df_melt
# Plotting the performance scores of all models and for all target labels
sns.factorplot(x='score', y='value', hue='model', col='cat_type', data=df_melt, sharey=False, col_wrap=5, kind='bar');
###Output
_____no_output_____
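###Markdown
Compatibility note (hedged): `factorplot` was renamed to `catplot` in seaborn 0.9, so on newer seaborn versions the same figure would be produced with an equivalent call like the one sketched below.
###Code
# Hedged sketch for seaborn >= 0.9 only; the factorplot call above is what was
# actually run in this environment, so this line is left commented out.
# sns.catplot(x='score', y='value', hue='model', col='cat_type', data=df_melt,
#             sharey=False, col_wrap=5, kind='bar')
###Output
_____no_output_____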
###Markdown
In almost all of these cases, the default AdaBoost model has better performance scores than all other models I tested. Thus, I will use AdaBoost with the default parameters as the predictor in my ML pipeline. 9. Export your model as a pickle file
###Code
# Filename of the pickle file
filename = 'adaboost_ml_pipeline'
pickle.dump(pipeline_adaboost, open(filename, 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt', 'wordnet'])
nltk.download('averaged_perceptron_tagger')
nltk.download('omw')
nltk.download('stopwords')
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet,stopwords
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier,AdaBoostClassifier
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.metrics import f1_score,accuracy_score,precision_score,recall_score,make_scorer,classification_report
import re
import pickle
# load data from database
engine = create_engine('sqlite:///data/cleaned_data.db')
df = pd.read_sql('SELECT * FROM message', engine)
df_tmp = df.drop(['id','message','original','genre'],axis=1)
count_per_category = df_tmp[df_tmp!=0].sum()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def replace_URLs_with_placeholder(text):
    # Regular expression to detect http and https URLs (does not cater for uppercase HTTP/S or other protocols)
    url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
    # detect all URLs in a text message
    url_list = re.findall(url_regex, text)
    # replace every detected URL with a placeholder token
    for url in url_list:
        text = text.replace(url, "urlplaceholder")
    return text
def tokenize_sentences_by_words(text):
# this will make every sentence a token by itself
sentence_list = nltk.sent_tokenize(text)
    # iterate through the sentences and turn each one into an array of tokens separately
array_of_tokenized_sentences = []
for sentence in sentence_list:
word_tokenized_sentence = word_tokenize(sentence.lower())
array_of_tokenized_sentences.append(word_tokenized_sentence)
return array_of_tokenized_sentences
def tag_POS_for_sentence_tokens(array_of_tokenized_sentences):
    # take the array of tokens of each sentence separately and get its POS tags
array_of_tagged_sentence_tokens = []
for sentence_tokens in array_of_tokenized_sentences:
pos_tags = nltk.pos_tag(sentence_tokens)
array_of_tagged_sentence_tokens.append(pos_tags)
return array_of_tagged_sentence_tokens
def lemmatize_tokens_based_on_POS_tags(array_of_tagged_sentence_tokens):
    # map the first letter of the Penn Treebank POS tag to the wordnet tag understood by the lemmatizer
    tag_dict = {"J": wordnet.ADJ, "N": wordnet.NOUN, "V": wordnet.VERB, "R": wordnet.ADV}
    lemmatizer = WordNetLemmatizer()
    stop_words = set(stopwords.words('english'))
    lemmatized_tokens = []
    for sentence_tokens in array_of_tagged_sentence_tokens:
        for token_pair in sentence_tokens:
            token = token_pair[0]
            if (token not in stop_words) and token.isalpha():
                # POS tags such as 'NN' or 'VBZ' are reduced to their first letter before the lookup
                old_tag = token_pair[1][0].upper()
                new_tag = tag_dict.get(old_tag, wordnet.NOUN)
                # lemmatize based on the POS tag for better accuracy of lemmatization
                new_token = lemmatizer.lemmatize(token, new_tag)
                lemmatized_tokens.append(new_token)
    return lemmatized_tokens
def tokenize(text):
text = replace_URLs_with_placeholder(text)
array_of_tokenized_sentences = tokenize_sentences_by_words(text)
array_of_tagged_sentence_tokens = tag_POS_for_sentence_tokens(array_of_tokenized_sentences)
lemmatized_tokens = lemmatize_tokens_based_on_POS_tags(array_of_tagged_sentence_tokens)
return lemmatized_tokens
###Output
_____no_output_____
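###Markdown
A small hedged usage sketch of the tokenizer above (the message and URL are made up, and the exact lemmas depend on the installed NLTK data, so no specific output is asserted):
###Code
# Hedged usage sketch: URL replacement, sentence/word tokenization, POS tagging
# and POS-aware lemmatization applied to a made-up message.
example_message = "Floods destroyed the bridges. Please send water to http://example.org quickly!"
print(tokenize(example_message))
###Output
_____no_output_____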
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
def model_pipeline():
pipeline = Pipeline(
[
('text_pipeline', Pipeline(
[
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
]
)),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators=10,n_jobs=12)))
]
)
return pipeline
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X = df['message']
y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
def train_valid_test_split(X,y):
# split the dataset to training, validation, and testing sets
X_others, X_test, y_others, y_test = train_test_split(X, y,test_size=0.1, random_state = 42)
X_train, X_valid, y_train, y_valid = train_test_split(X_others, y_others,test_size=0.05, random_state = 42)
return X_train,X_valid,X_test,y_train,y_valid,y_test
# validation sets will be used to quickly test the fitting function for code errors
X_train,X_valid,X_test,y_train,y_valid,y_test = train_valid_test_split(X,y)
model = model_pipeline()
model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_test_pred = model.predict(X_test)
def print_model_metrics(y_pred,y_target,categories):
y_target = pd.DataFrame(y_target,columns=categories)
y_pred = pd.DataFrame(y_pred,columns=categories)
for category in categories:
print("Scores for Category '"+category+"'")
temp = classification_report(y_target[category],y_pred[category])
print(temp)
print_model_metrics(y_test_pred,y_test,y_test.columns.values)
###Output
Scores for Category 'related'
precision recall f1-score support
0 0.57 0.27 0.36 646
1 0.79 0.93 0.85 1951
2 0.00 0.00 0.00 21
accuracy 0.76 2618
macro avg 0.45 0.40 0.41 2618
weighted avg 0.73 0.76 0.73 2618
Scores for Category 'request'
precision recall f1-score support
0 0.85 0.98 0.91 2142
1 0.73 0.20 0.32 476
accuracy 0.84 2618
macro avg 0.79 0.59 0.61 2618
weighted avg 0.83 0.84 0.80 2618
Scores for Category 'offer'
precision recall f1-score support
0 0.99 1.00 1.00 2601
1 0.00 0.00 0.00 17
accuracy 0.99 2618
macro avg 0.50 0.50 0.50 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'aid_related'
precision recall f1-score support
0 0.73 0.82 0.77 1539
1 0.68 0.56 0.61 1079
accuracy 0.71 2618
macro avg 0.71 0.69 0.69 2618
weighted avg 0.71 0.71 0.71 2618
Scores for Category 'medical_help'
precision recall f1-score support
0 0.92 1.00 0.96 2396
1 0.68 0.09 0.15 222
accuracy 0.92 2618
macro avg 0.80 0.54 0.55 2618
weighted avg 0.90 0.92 0.89 2618
Scores for Category 'medical_products'
precision recall f1-score support
0 0.95 1.00 0.97 2481
1 0.00 0.00 0.00 137
accuracy 0.95 2618
macro avg 0.47 0.50 0.49 2618
weighted avg 0.90 0.95 0.92 2618
Scores for Category 'search_and_rescue'
precision recall f1-score support
0 0.97 1.00 0.98 2536
1 0.00 0.00 0.00 82
accuracy 0.97 2618
macro avg 0.48 0.50 0.49 2618
weighted avg 0.94 0.97 0.95 2618
Scores for Category 'security'
precision recall f1-score support
0 0.98 1.00 0.99 2574
1 0.00 0.00 0.00 44
accuracy 0.98 2618
macro avg 0.49 0.50 0.50 2618
weighted avg 0.97 0.98 0.97 2618
Scores for Category 'military'
precision recall f1-score support
0 0.97 1.00 0.99 2544
1 0.00 0.00 0.00 74
accuracy 0.97 2618
macro avg 0.49 0.50 0.49 2618
weighted avg 0.94 0.97 0.96 2618
Scores for Category 'child_alone'
precision recall f1-score support
0 1.00 1.00 1.00 2618
accuracy 1.00 2618
macro avg 1.00 1.00 1.00 2618
weighted avg 1.00 1.00 1.00 2618
Scores for Category 'water'
precision recall f1-score support
0 0.95 1.00 0.97 2456
1 0.85 0.17 0.29 162
accuracy 0.95 2618
macro avg 0.90 0.59 0.63 2618
weighted avg 0.94 0.95 0.93 2618
Scores for Category 'food'
precision recall f1-score support
0 0.93 0.99 0.96 2339
1 0.81 0.41 0.55 279
accuracy 0.93 2618
macro avg 0.87 0.70 0.75 2618
weighted avg 0.92 0.93 0.92 2618
Scores for Category 'shelter'
precision recall f1-score support
0 0.92 1.00 0.96 2378
1 0.83 0.18 0.30 240
accuracy 0.92 2618
macro avg 0.88 0.59 0.63 2618
weighted avg 0.92 0.92 0.90 2618
Scores for Category 'clothing'
precision recall f1-score support
0 0.99 1.00 0.99 2588
1 0.80 0.13 0.23 30
accuracy 0.99 2618
macro avg 0.90 0.57 0.61 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'money'
precision recall f1-score support
0 0.98 1.00 0.99 2573
1 0.00 0.00 0.00 45
accuracy 0.98 2618
macro avg 0.49 0.50 0.50 2618
weighted avg 0.97 0.98 0.97 2618
Scores for Category 'missing_people'
precision recall f1-score support
0 0.99 1.00 0.99 2587
1 0.00 0.00 0.00 31
accuracy 0.99 2618
macro avg 0.49 0.50 0.50 2618
weighted avg 0.98 0.99 0.98 2618
Scores for Category 'refugees'
precision recall f1-score support
0 0.97 1.00 0.98 2533
1 0.57 0.05 0.09 85
accuracy 0.97 2618
macro avg 0.77 0.52 0.54 2618
weighted avg 0.96 0.97 0.95 2618
Scores for Category 'death'
precision recall f1-score support
0 0.96 1.00 0.98 2506
1 1.00 0.01 0.02 112
accuracy 0.96 2618
macro avg 0.98 0.50 0.50 2618
weighted avg 0.96 0.96 0.94 2618
Scores for Category 'other_aid'
precision recall f1-score support
0 0.87 0.99 0.93 2268
1 0.30 0.02 0.04 350
accuracy 0.86 2618
macro avg 0.59 0.51 0.48 2618
weighted avg 0.79 0.86 0.81 2618
Scores for Category 'infrastructure_related'
precision recall f1-score support
0 0.93 1.00 0.97 2446
1 0.00 0.00 0.00 172
accuracy 0.93 2618
macro avg 0.47 0.50 0.48 2618
weighted avg 0.87 0.93 0.90 2618
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
model = model_pipeline()
RandomForest_parameters = {
'clf__estimator__n_estimators': list(range(50,151,25)),
'clf__estimator__max_features': ["sqrt","log2"]
}
# 12 jobs are used to utilize the multiple cores of the CPU.
# If it fails to execute try changing the number of jobs and run again.
# If it keeps failing, remove the n_jobs parameter to run the optimization on a single core
cv_random_forest = GridSearchCV(estimator=model, param_grid=RandomForest_parameters, verbose=3,n_jobs=12)
cv_random_forest.fit(X_train, y_train)
###Output
Fitting 3 folds for each of 10 candidates, totalling 30 fits
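###Markdown
As a quick sanity check on that log line: `list(range(50, 151, 25))` yields five values for `n_estimators` (50, 75, 100, 125, 150) and there are two options for `max_features`, so the grid has 5 × 2 = 10 candidates; with the 3-fold cross-validation reported in the log (the default in this scikit-learn version) that gives 10 × 3 = 30 fits, matching the output above.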
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_test_pred = cv_random_forest.predict(X_test)
print_model_metrics(y_test_pred,y_test,y_test.columns.values)
pickle.dump(cv_random_forest, open('RandomForestModel.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
def model_pipeline2():
pipeline = Pipeline(
[
('text_pipeline', Pipeline(
[
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
]
)),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
]
)
return pipeline
model2 = model_pipeline2()
parameters_AdaBoost = {
'clf__estimator__n_estimators' : list(range(50,151,25)),
'clf__estimator__learning_rate': [0.01,0.05,0.1,0.25]
}
cv_AdaBoost = GridSearchCV(estimator=model2, param_grid=parameters_AdaBoost,verbose=3,n_jobs=12)
cv_AdaBoost.fit(X_train, y_train)
y_test_pred = cv_AdaBoost.predict(X_test)
print_model_metrics(y_test_pred,y_test,y_test.columns.values)
###Output
Scores for Category 'related'
precision recall f1-score support
0 0.71 0.05 0.09 646
1 0.75 0.99 0.86 1951
2 1.00 0.05 0.09 21
accuracy 0.75 2618
macro avg 0.82 0.36 0.35 2618
weighted avg 0.75 0.75 0.66 2618
Scores for Category 'request'
precision recall f1-score support
0 0.88 0.98 0.93 2142
1 0.80 0.42 0.55 476
accuracy 0.87 2618
macro avg 0.84 0.70 0.74 2618
weighted avg 0.87 0.87 0.86 2618
Scores for Category 'offer'
precision recall f1-score support
0 0.99 1.00 1.00 2601
1 0.00 0.00 0.00 17
accuracy 0.99 2618
macro avg 0.50 0.50 0.50 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'aid_related'
precision recall f1-score support
0 0.74 0.89 0.81 1539
1 0.78 0.56 0.65 1079
accuracy 0.75 2618
macro avg 0.76 0.73 0.73 2618
weighted avg 0.76 0.75 0.75 2618
Scores for Category 'medical_help'
precision recall f1-score support
0 0.93 0.99 0.96 2396
1 0.68 0.18 0.28 222
accuracy 0.92 2618
macro avg 0.80 0.59 0.62 2618
weighted avg 0.91 0.92 0.90 2618
Scores for Category 'medical_products'
precision recall f1-score support
0 0.96 1.00 0.98 2481
1 0.74 0.20 0.32 137
accuracy 0.95 2618
macro avg 0.85 0.60 0.65 2618
weighted avg 0.95 0.95 0.94 2618
Scores for Category 'search_and_rescue'
precision recall f1-score support
0 0.97 1.00 0.98 2536
1 0.62 0.10 0.17 82
accuracy 0.97 2618
macro avg 0.79 0.55 0.58 2618
weighted avg 0.96 0.97 0.96 2618
Scores for Category 'security'
precision recall f1-score support
0 0.98 1.00 0.99 2574
1 1.00 0.02 0.04 44
accuracy 0.98 2618
macro avg 0.99 0.51 0.52 2618
weighted avg 0.98 0.98 0.98 2618
Scores for Category 'military'
precision recall f1-score support
0 0.98 1.00 0.99 2544
1 0.68 0.20 0.31 74
accuracy 0.97 2618
macro avg 0.83 0.60 0.65 2618
weighted avg 0.97 0.97 0.97 2618
Scores for Category 'child_alone'
precision recall f1-score support
0 1.00 1.00 1.00 2618
accuracy 1.00 2618
macro avg 1.00 1.00 1.00 2618
weighted avg 1.00 1.00 1.00 2618
Scores for Category 'water'
precision recall f1-score support
0 0.98 0.99 0.98 2456
1 0.76 0.65 0.70 162
accuracy 0.97 2618
macro avg 0.87 0.82 0.84 2618
weighted avg 0.96 0.97 0.96 2618
Scores for Category 'food'
precision recall f1-score support
0 0.97 0.98 0.97 2339
1 0.79 0.73 0.76 279
accuracy 0.95 2618
macro avg 0.88 0.85 0.86 2618
weighted avg 0.95 0.95 0.95 2618
Scores for Category 'shelter'
precision recall f1-score support
0 0.95 0.99 0.97 2378
1 0.82 0.51 0.63 240
accuracy 0.94 2618
macro avg 0.89 0.75 0.80 2618
weighted avg 0.94 0.94 0.94 2618
Scores for Category 'clothing'
precision recall f1-score support
0 0.99 1.00 1.00 2588
1 0.73 0.27 0.39 30
accuracy 0.99 2618
macro avg 0.86 0.63 0.69 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'money'
precision recall f1-score support
0 0.99 1.00 0.99 2573
1 0.50 0.16 0.24 45
accuracy 0.98 2618
macro avg 0.74 0.58 0.61 2618
weighted avg 0.98 0.98 0.98 2618
Scores for Category 'missing_people'
precision recall f1-score support
0 0.99 1.00 0.99 2587
1 0.62 0.16 0.26 31
accuracy 0.99 2618
macro avg 0.81 0.58 0.63 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'refugees'
precision recall f1-score support
0 0.97 1.00 0.98 2533
1 0.53 0.12 0.19 85
accuracy 0.97 2618
macro avg 0.75 0.56 0.59 2618
weighted avg 0.96 0.97 0.96 2618
Scores for Category 'death'
precision recall f1-score support
0 0.97 0.99 0.98 2506
1 0.76 0.38 0.50 112
accuracy 0.97 2618
macro avg 0.87 0.68 0.74 2618
weighted avg 0.96 0.97 0.96 2618
Scores for Category 'other_aid'
precision recall f1-score support
0 0.87 0.99 0.93 2268
1 0.63 0.07 0.12 350
accuracy 0.87 2618
macro avg 0.75 0.53 0.53 2618
weighted avg 0.84 0.87 0.82 2618
Scores for Category 'infrastructure_related'
precision recall f1-score support
0 0.94 1.00 0.97 2446
1 0.50 0.02 0.04 172
accuracy 0.93 2618
macro avg 0.72 0.51 0.51 2618
weighted avg 0.91 0.93 0.91 2618
Scores for Category 'transport'
precision recall f1-score support
0 0.96 1.00 0.98 2504
1 0.65 0.15 0.24 114
accuracy 0.96 2618
macro avg 0.81 0.57 0.61 2618
weighted avg 0.95 0.96 0.95 2618
Scores for Category 'buildings'
precision recall f1-score support
0 0.96 0.99 0.98 2482
1 0.67 0.25 0.36 136
accuracy 0.95 2618
macro avg 0.81 0.62 0.67 2618
weighted avg 0.95 0.95 0.94 2618
Scores for Category 'electricity'
precision recall f1-score support
0 0.98 1.00 0.99 2571
1 0.40 0.09 0.14 47
accuracy 0.98 2618
macro avg 0.69 0.54 0.57 2618
weighted avg 0.97 0.98 0.98 2618
Scores for Category 'tools'
precision recall f1-score support
0 0.99 1.00 1.00 2601
1 0.00 0.00 0.00 17
accuracy 0.99 2618
macro avg 0.50 0.50 0.50 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'hospitals'
precision recall f1-score support
0 0.99 1.00 0.99 2591
1 0.25 0.04 0.06 27
accuracy 0.99 2618
macro avg 0.62 0.52 0.53 2618
weighted avg 0.98 0.99 0.98 2618
Scores for Category 'shops'
precision recall f1-score support
0 1.00 1.00 1.00 2605
1 0.00 0.00 0.00 13
accuracy 1.00 2618
macro avg 0.50 0.50 0.50 2618
weighted avg 0.99 1.00 0.99 2618
Scores for Category 'aid_centers'
precision recall f1-score support
0 0.99 1.00 1.00 2591
1 0.75 0.11 0.19 27
accuracy 0.99 2618
macro avg 0.87 0.56 0.59 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'other_infrastructure'
precision recall f1-score support
0 0.95 1.00 0.98 2498
1 0.00 0.00 0.00 120
accuracy 0.95 2618
macro avg 0.48 0.50 0.49 2618
weighted avg 0.91 0.95 0.93 2618
Scores for Category 'weather_related'
precision recall f1-score support
0 0.86 0.97 0.91 1940
1 0.87 0.55 0.68 678
accuracy 0.86 2618
macro avg 0.87 0.76 0.79 2618
weighted avg 0.86 0.86 0.85 2618
Scores for Category 'floods'
precision recall f1-score support
0 0.96 1.00 0.98 2408
1 0.90 0.47 0.62 210
accuracy 0.95 2618
macro avg 0.93 0.73 0.80 2618
weighted avg 0.95 0.95 0.95 2618
Scores for Category 'storm'
precision recall f1-score support
0 0.94 0.99 0.97 2398
1 0.74 0.37 0.49 220
accuracy 0.94 2618
macro avg 0.84 0.68 0.73 2618
weighted avg 0.93 0.94 0.93 2618
Scores for Category 'fire'
precision recall f1-score support
0 0.99 1.00 0.99 2592
1 0.38 0.12 0.18 26
accuracy 0.99 2618
macro avg 0.68 0.56 0.59 2618
weighted avg 0.99 0.99 0.99 2618
Scores for Category 'earthquake'
precision recall f1-score support
0 0.98 0.99 0.99 2407
1 0.86 0.81 0.83 211
accuracy 0.97 2618
macro avg 0.92 0.90 0.91 2618
weighted avg 0.97 0.97 0.97 2618
Scores for Category 'cold'
precision recall f1-score support
0 0.99 1.00 0.99 2576
1 0.44 0.17 0.24 42
accuracy 0.98 2618
macro avg 0.71 0.58 0.62 2618
weighted avg 0.98 0.98 0.98 2618
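###Markdown
Before exporting, a small hedged sketch of how the winning configuration of this grid search could be inspected (standard `GridSearchCV` attributes; the actual values are not shown in this excerpt):
###Code
# Hedged sketch: inspect the tuned AdaBoost search results.
print(cv_AdaBoost.best_params_)   # best n_estimators / learning_rate combination
print(cv_AdaBoost.best_score_)    # mean cross-validated score of that combination
best_adaboost_model = cv_AdaBoost.best_estimator_  # refitted pipeline with the best parameters
###Output
_____no_output_____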
###Markdown
9. Export your model as a pickle file
###Code
pickle.dump(cv_AdaBoost, open('AdaBoost_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
import os
# import libraries
from sqlalchemy import create_engine
import pandas as pd
# nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
# scikit-learn
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
# pickle
import pickle
# load data from database
path = os.path.abspath(os.getcwd())
#print(path)
#tmp_str = 'sqlite:///{}'.format(path + database_filepath[7:])
engine = create_engine('sqlite:///{}'.format(path+'/DisasterResponse.db'))
df = pd.read_sql('SELECT * FROM {}'.format('DisasterResponse'), engine)
X = df.message
Y = df.drop(columns=['id','message','original','genre'])
category_names = Y.columns
X.head(10)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# tokenize text
tokens = word_tokenize(text)
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok.lower().strip())
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf',MultiOutputClassifier(LogisticRegression(random_state=42, max_iter = 500)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state=42)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# predict on test data
y_pred = pipeline.predict(X_test)
for idx, col in enumerate(category_names):
print('For category {}:'.format(col))
print(classification_report(y_test[col], y_pred[:,idx]))
###Output
For category related:
precision recall f1-score support
0 0.70 0.45 0.55 1563
1 0.84 0.94 0.89 4944
2 0.00 0.00 0.00 47
accuracy 0.82 6554
macro avg 0.51 0.46 0.48 6554
weighted avg 0.80 0.82 0.80 6554
For category request:
precision recall f1-score support
0 0.91 0.98 0.95 5443
1 0.84 0.55 0.67 1111
accuracy 0.91 6554
macro avg 0.88 0.77 0.81 6554
weighted avg 0.90 0.91 0.90 6554
For category offer:
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
accuracy 0.99 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 0.99 0.99 6554
For category aid_related:
precision recall f1-score support
0 0.80 0.85 0.82 3884
1 0.76 0.68 0.72 2670
accuracy 0.78 6554
macro avg 0.78 0.77 0.77 6554
weighted avg 0.78 0.78 0.78 6554
For category medical_help:
precision recall f1-score support
0 0.93 0.99 0.96 6019
1 0.66 0.15 0.25 535
accuracy 0.92 6554
macro avg 0.79 0.57 0.60 6554
weighted avg 0.91 0.92 0.90 6554
For category medical_products:
precision recall f1-score support
0 0.96 1.00 0.98 6210
1 0.86 0.17 0.29 344
accuracy 0.96 6554
macro avg 0.91 0.59 0.63 6554
weighted avg 0.95 0.96 0.94 6554
For category search_and_rescue:
precision recall f1-score support
0 0.98 1.00 0.99 6395
1 1.00 0.04 0.08 159
accuracy 0.98 6554
macro avg 0.99 0.52 0.54 6554
weighted avg 0.98 0.98 0.97 6554
For category security:
precision recall f1-score support
0 0.98 1.00 0.99 6438
1 0.00 0.00 0.00 116
accuracy 0.98 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.96 0.98 0.97 6554
For category military:
precision recall f1-score support
0 0.97 1.00 0.98 6354
1 0.62 0.07 0.13 200
accuracy 0.97 6554
macro avg 0.80 0.54 0.56 6554
weighted avg 0.96 0.97 0.96 6554
For category water:
precision recall f1-score support
0 0.96 0.99 0.98 6136
1 0.80 0.46 0.58 418
accuracy 0.96 6554
macro avg 0.88 0.72 0.78 6554
weighted avg 0.95 0.96 0.95 6554
For category food:
precision recall f1-score support
0 0.95 0.99 0.97 5809
1 0.88 0.58 0.70 745
accuracy 0.94 6554
macro avg 0.91 0.78 0.83 6554
weighted avg 0.94 0.94 0.94 6554
For category shelter:
precision recall f1-score support
0 0.95 0.99 0.97 5973
1 0.84 0.47 0.61 581
accuracy 0.95 6554
macro avg 0.90 0.73 0.79 6554
weighted avg 0.94 0.95 0.94 6554
For category clothing:
precision recall f1-score support
0 0.99 1.00 0.99 6456
1 0.88 0.14 0.25 98
accuracy 0.99 6554
macro avg 0.93 0.57 0.62 6554
weighted avg 0.99 0.99 0.98 6554
For category money:
precision recall f1-score support
0 0.98 1.00 0.99 6421
1 0.67 0.09 0.16 133
accuracy 0.98 6554
macro avg 0.82 0.54 0.57 6554
weighted avg 0.98 0.98 0.97 6554
For category missing_people:
precision recall f1-score support
0 0.99 1.00 0.99 6481
1 1.00 0.01 0.03 73
accuracy 0.99 6554
macro avg 0.99 0.51 0.51 6554
weighted avg 0.99 0.99 0.98 6554
For category refugees:
precision recall f1-score support
0 0.97 1.00 0.98 6339
1 0.63 0.06 0.10 215
accuracy 0.97 6554
macro avg 0.80 0.53 0.54 6554
weighted avg 0.96 0.97 0.95 6554
For category death:
precision recall f1-score support
0 0.97 1.00 0.98 6257
1 0.92 0.24 0.38 297
accuracy 0.96 6554
macro avg 0.94 0.62 0.68 6554
weighted avg 0.96 0.96 0.95 6554
For category other_aid:
precision recall f1-score support
0 0.88 0.99 0.93 5690
1 0.59 0.11 0.19 864
accuracy 0.87 6554
macro avg 0.74 0.55 0.56 6554
weighted avg 0.84 0.87 0.83 6554
For category infrastructure_related:
precision recall f1-score support
0 0.94 1.00 0.97 6143
1 0.48 0.02 0.05 411
accuracy 0.94 6554
macro avg 0.71 0.51 0.51 6554
weighted avg 0.91 0.94 0.91 6554
For category transport:
precision recall f1-score support
0 0.96 1.00 0.98 6251
1 0.78 0.08 0.15 303
accuracy 0.96 6554
macro avg 0.87 0.54 0.56 6554
weighted avg 0.95 0.96 0.94 6554
For category buildings:
precision recall f1-score support
0 0.96 1.00 0.98 6231
1 0.92 0.21 0.35 323
accuracy 0.96 6554
macro avg 0.94 0.61 0.66 6554
weighted avg 0.96 0.96 0.95 6554
For category electricity:
precision recall f1-score support
0 0.98 1.00 0.99 6407
1 0.80 0.08 0.15 147
accuracy 0.98 6554
macro avg 0.89 0.54 0.57 6554
weighted avg 0.98 0.98 0.97 6554
For category tools:
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# specify parameters for grid search
parameters = {'vect__ngram_range' : [(1,1), (1,2)],
'tfidf__use_idf': [True, False]
}
# create grid search object
pipeline_cv = GridSearchCV(pipeline, parameters, n_jobs=-1, cv=3, verbose=1)
pipeline_cv.fit(X_train, y_train)
print(pipeline_cv.best_params_)
###Output
{'tfidf__use_idf': True, 'vect__ngram_range': (1, 1)}
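###Markdown
A brief observation on this result: the grid (two values for `vect__ngram_range` times two for `tfidf__use_idf`, i.e. 4 candidates, each evaluated with 3-fold cross-validation for 12 fits) selected exactly the pipeline's default settings, so the per-category scores reported in the next step are expected to match those from step 5.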
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# predict on test data
y_pred = pipeline_cv.predict(X_test)
for idx,col in enumerate(category_names):
print('For category {}:'.format(col))
print(classification_report(y_test[col], y_pred[:,idx]))
###Output
For category related:
precision recall f1-score support
0 0.70 0.45 0.55 1563
1 0.84 0.94 0.89 4944
2 0.00 0.00 0.00 47
accuracy 0.82 6554
macro avg 0.51 0.46 0.48 6554
weighted avg 0.80 0.82 0.80 6554
For category request:
precision recall f1-score support
0 0.91 0.98 0.95 5443
1 0.84 0.55 0.67 1111
accuracy 0.91 6554
macro avg 0.88 0.77 0.81 6554
weighted avg 0.90 0.91 0.90 6554
For category offer:
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
accuracy 0.99 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 0.99 0.99 6554
For category aid_related:
precision recall f1-score support
0 0.80 0.85 0.82 3884
1 0.76 0.68 0.72 2670
accuracy 0.78 6554
macro avg 0.78 0.77 0.77 6554
weighted avg 0.78 0.78 0.78 6554
For category medical_help:
precision recall f1-score support
0 0.93 0.99 0.96 6019
1 0.66 0.15 0.25 535
accuracy 0.92 6554
macro avg 0.79 0.57 0.60 6554
weighted avg 0.91 0.92 0.90 6554
For category medical_products:
precision recall f1-score support
0 0.96 1.00 0.98 6210
1 0.86 0.17 0.29 344
accuracy 0.96 6554
macro avg 0.91 0.59 0.63 6554
weighted avg 0.95 0.96 0.94 6554
For category search_and_rescue:
precision recall f1-score support
0 0.98 1.00 0.99 6395
1 1.00 0.04 0.08 159
accuracy 0.98 6554
macro avg 0.99 0.52 0.54 6554
weighted avg 0.98 0.98 0.97 6554
For category security:
precision recall f1-score support
0 0.98 1.00 0.99 6438
1 0.00 0.00 0.00 116
accuracy 0.98 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.96 0.98 0.97 6554
For category military:
precision recall f1-score support
0 0.97 1.00 0.98 6354
1 0.62 0.07 0.13 200
accuracy 0.97 6554
macro avg 0.80 0.54 0.56 6554
weighted avg 0.96 0.97 0.96 6554
For category water:
precision recall f1-score support
0 0.96 0.99 0.98 6136
1 0.80 0.46 0.58 418
accuracy 0.96 6554
macro avg 0.88 0.72 0.78 6554
weighted avg 0.95 0.96 0.95 6554
For category food:
precision recall f1-score support
0 0.95 0.99 0.97 5809
1 0.88 0.58 0.70 745
accuracy 0.94 6554
macro avg 0.91 0.78 0.83 6554
weighted avg 0.94 0.94 0.94 6554
For category shelter:
precision recall f1-score support
0 0.95 0.99 0.97 5973
1 0.84 0.47 0.61 581
accuracy 0.95 6554
macro avg 0.90 0.73 0.79 6554
weighted avg 0.94 0.95 0.94 6554
For category clothing:
precision recall f1-score support
0 0.99 1.00 0.99 6456
1 0.88 0.14 0.25 98
accuracy 0.99 6554
macro avg 0.93 0.57 0.62 6554
weighted avg 0.99 0.99 0.98 6554
For category money:
precision recall f1-score support
0 0.98 1.00 0.99 6421
1 0.67 0.09 0.16 133
accuracy 0.98 6554
macro avg 0.82 0.54 0.57 6554
weighted avg 0.98 0.98 0.97 6554
For category missing_people:
precision recall f1-score support
0 0.99 1.00 0.99 6481
1 1.00 0.01 0.03 73
accuracy 0.99 6554
macro avg 0.99 0.51 0.51 6554
weighted avg 0.99 0.99 0.98 6554
For category refugees:
precision recall f1-score support
0 0.97 1.00 0.98 6339
1 0.63 0.06 0.10 215
accuracy 0.97 6554
macro avg 0.80 0.53 0.54 6554
weighted avg 0.96 0.97 0.95 6554
For category death:
precision recall f1-score support
0 0.97 1.00 0.98 6257
1 0.92 0.24 0.38 297
accuracy 0.96 6554
macro avg 0.94 0.62 0.68 6554
weighted avg 0.96 0.96 0.95 6554
For category other_aid:
precision recall f1-score support
0 0.88 0.99 0.93 5690
1 0.59 0.11 0.19 864
accuracy 0.87 6554
macro avg 0.74 0.55 0.56 6554
weighted avg 0.84 0.87 0.83 6554
For category infrastructure_related:
precision recall f1-score support
0 0.94 1.00 0.97 6143
1 0.48 0.02 0.05 411
accuracy 0.94 6554
macro avg 0.71 0.51 0.51 6554
weighted avg 0.91 0.94 0.91 6554
For category transport:
precision recall f1-score support
0 0.96 1.00 0.98 6251
1 0.78 0.08 0.15 303
accuracy 0.96 6554
macro avg 0.87 0.54 0.56 6554
weighted avg 0.95 0.96 0.94 6554
For category buildings:
precision recall f1-score support
0 0.96 1.00 0.98 6231
1 0.92 0.21 0.35 323
accuracy 0.96 6554
macro avg 0.94 0.61 0.66 6554
weighted avg 0.96 0.96 0.95 6554
For category electricity:
precision recall f1-score support
0 0.98 1.00 0.99 6407
1 0.80 0.08 0.15 147
accuracy 0.98 6554
macro avg 0.89 0.54 0.57 6554
weighted avg 0.98 0.98 0.97 6554
For category tools:
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# done via trial and error above
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
model_filepath = './models/classifier_notebook.pkl'
outfile = open(model_filepath,'wb')
pickle.dump(pipeline, outfile)
outfile.close()
###Output
_____no_output_____
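###Markdown
For later use, a short sketch of how the saved pipeline could be loaded back and applied to a new message; the example text is made up for illustration:
###Code
# load the pickled pipeline and classify a new, unseen message
with open(model_filepath, 'rb') as infile:
    loaded_model = pickle.load(infile)
loaded_model.predict(['We need water and medical supplies after the flood'])
###Output
_____no_output_____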
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import pandas as pd
import numpy as np
import pickle
import re
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.tokenize import word_tokenize
from nltk.stem.porter import PorterStemmer
from nltk.corpus import stopwords
from sklearn.metrics import precision_score, recall_score, f1_score, make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
from joblib import parallel_backend
from imblearn.ensemble import BalancedRandomForestClassifier
import warnings
warnings.simplefilter('ignore')
# load data from database
engine = create_engine(r'sqlite:///data/DisasterResponse.db', pool_pre_ping=True)
df = pd.read_sql_table('CleanData', engine)
X = df.message
Y = df[df.columns[4:]]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Normalize and tokenize message strings.
Args:
text: String - message text to process
Returns:
clean_tokens: list of strings - list of tokens from the message
"""
# normalize case and remove punctuation
    text = re.sub(r'\W', ' ', text.lower())
tokens = word_tokenize(text)
stop_words = stopwords.words("english")
# Reduce words to their stems
clean_tokens = [PorterStemmer().stem(tok).strip() for tok in tokens if tok not in stop_words]
return clean_tokens
###Output
_____no_output_____
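###Markdown
A quick sanity check of the tokenizer on a made-up example message (the exact tokens depend on the stop-word list and the Porter stemmer):
###Code
# illustrative call only; the message text below is invented for this check
tokenize('We need food and clean water after the storm, please help!')
###Output
_____no_output_____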
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize) ),
('tfidf', TfidfTransformer() ),
('clf', MultiOutputClassifier(RandomForestClassifier(n_jobs=-1)) )
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def report_results(Y_test, Y_pred):
"""Report precision, recall and f1_score for the Machine Learning Model."""
results = pd.DataFrame(columns= ['category', 'precision', 'recall', 'f1-score'])
for i, category in enumerate(Y_test.columns):
y_true = Y_test.iloc[:,i].values
y_pred = Y_pred[:,i]
row = {'category':category,
'precision':precision_score(y_true, y_pred, zero_division=0, average='macro'),
'recall':recall_score(y_true, y_pred, zero_division=0, average='macro'),
'f1-score':f1_score(y_true, y_pred, zero_division=0, average='macro')}
results = results.append(row, ignore_index=True)
median_values = {'category':'median_values',
'precision':results['precision'].median(),
'recall':results['recall'].median(),
'f1-score':results['f1-score'].median()}
results = results.append(median_values, ignore_index=True)
return results
pipeline.fit(X_train, Y_train)
Y_pred = pipeline.predict(X_test)
print('Writing results to DB in table "Pipeline".')
report_results(Y_test, Y_pred).to_sql('Pipeline', engine, index=False, if_exists='replace')
###Output
_____no_output_____
###Markdown
Because this code is executed remotely, we will later transfer this notebook into a plain Python script and write the performance results into the existing SQL database. Once the database is transferred back to our local machine, we can read out the tables and compare the models. 6. Improve your modelUse grid search to find better parameters.
###Code
def f1_scorer(y_true, y_pred):
"""
Calculate median F1-Score to measure model performance.
Args:
y_true: DataFrame containing the actual labels
y_pred: Array containing the predicted labels
Returns:
f1_score: Float representing the median F1-Score for the model.
"""
scores = []
for i in range(y_pred.shape[1]):
scores.append(f1_score(np.array(y_true)[:,i], y_pred[:,i], zero_division=0, average='macro'))
score = np.median(scores)
return score
parameters = {
'vect__ngram_range': [(1,1), (1,2), (1,4)],
'clf__estimator__min_samples_leaf':[1, 5],
'clf__estimator__class_weight': [None, 'balanced'],
'clf__estimator__n_estimators': [50, 100, 200]
}
scorer = make_scorer(f1_scorer)
cv = GridSearchCV(pipeline, param_grid=parameters, scoring=scorer, verbose=3)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, Y_train)
# Get results of grid search
data = {'parameter': list(cv.best_params_.keys()),
'value': [str(value) for value in cv.best_params_.values()]}
cv_results = pd.DataFrame(data)
cv_results = cv_results.append(
{'parameter': 'median f1-score','value': np.max(cv.cv_results_['mean_test_score'])},
ignore_index=True)
print('Writing results of GridSearch.fit to DB in table "GsFit".')
cv_results.to_sql('GsFit', engine, index=False, if_exists='replace')
Y_pred = cv.predict(X_test)
print('Writing results of GridSearch.predict to DB in table "GsPredict".')
report_results(Y_test, Y_pred).to_sql('GsPredict', engine, index=False, if_exists='replace')
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
balanced_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize) ),
('tfidf', TfidfTransformer() ),
('clf', MultiOutputClassifier(BalancedRandomForestClassifier(n_jobs=-1) ))
])
keys = ['vect__ngram_range',
'clf__estimator__min_samples_leaf',
'clf__estimator__class_weight',
'clf__estimator__n_estimators']
# pull the tuned parameter values found by the grid search
values = [cv.best_params_[key] for key in keys]
tuning_params = dict(zip(keys, values))
balanced_pipeline.set_params(
vect__ngram_range = tuning_params['vect__ngram_range'],
clf__estimator__min_samples_leaf = tuning_params['clf__estimator__min_samples_leaf'],
clf__estimator__class_weight = tuning_params['clf__estimator__class_weight'],
clf__estimator__n_estimators = tuning_params['clf__estimator__n_estimators']
)
balanced_pipeline.fit(X_train, Y_train)
Y_pred = balanced_pipeline.predict(X_test)
print('Writing results of BalancedPipeline to DB in table "BalancedPipeline".')
report_results(Y_test, Y_pred).to_sql('BalancedPipeline', engine, index=False, if_exists='replace')
###Output
_____no_output_____
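###Markdown
As noted earlier, these tables are written to the database so they can be read back on the local machine. A minimal sketch of that comparison step, assuming access to the same `data/DisasterResponse.db` file and the table names used above:
###Code
# read the three result tables back and compare their median scores side by side
comparison = {
    name: pd.read_sql_table(name, engine).set_index('category').loc['median_values']
    for name in ['Pipeline', 'GsPredict', 'BalancedPipeline']
}
pd.DataFrame(comparison).T
###Output
_____no_output_____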
###Markdown
9. Export your model as a pickle file
###Code
print('Saving models in pickle files.')
pickle.dump(cv, open('disaster_model.pkl', 'wb'))
pickle.dump(balanced_pipeline, open('balanced_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
import warnings
warnings.filterwarnings('ignore')
# import libraries
import re
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import pickle
import nltk
#nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords', 'ignore'])
nltk.download(['punkt', 'wordnet', 'stopwords', 'averaged_perceptron_tagger'])
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.base import BaseEstimator, TransformerMixin
# load data from database with read_sql_table
engine = create_engine('sqlite:///DRP_Messages.db')
df = pd.read_sql_table('DRP_Messages', con = engine)
df.head()
# take a look at the data
df.describe()
# drop child_alone since it is all zeros
df = df.drop('child_alone', axis = 1)
# replace the 2's in related with 1's - assuming these are errors
#df['related'] = df['related'].map(lambda x: 1 if x==2 else x)
df['related'] = df['related'].replace(2, 1)
#Define feature and target variables X and Y
X = df['message']
#y = df.iloc[:,4:]
y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
y.head()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Clean and tokenize the text data
Input: text - text data that needs to be cleaned and tokenized
Output: clean_tokens - list of tokens extracted from the text data
"""
#regular expression to detect a url
    url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
# get list of all urls using regex
detected_urls = re.findall(url_regex, text)
# replace each url in text string with placeholder
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# normalize text
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
tokens = [t for t in tokens if t not in stopwords.words("english")]
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok).strip()
clean_tokens.append(clean_tok)
return clean_tokens
# test out function
for message in X[:5]:
tokens = tokenize(message)
print(message)
print(tokens, '\n')
###Output
Weather update - a cold front from Cuba that could pass over Haiti
['weather', 'update', 'cold', 'front', 'cuba', 'could', 'pas', 'haiti']
Is the Hurricane over or is it not over
['hurricane']
Looking for someone but no name
['looking', 'someone', 'name']
UN reports Leogane 80-90 destroyed. Only Hospital St. Croix functioning. Needs supplies desperately.
['un', 'report', 'leogane', '80', '90', 'destroyed', 'hospital', 'st', 'croix', 'functioning', 'need', 'supply', 'desperately']
says: west side of Haiti, rest of the country today and tonight
['say', 'west', 'side', 'haiti', 'rest', 'country', 'today', 'tonight']
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Evaluation metrics for training data
y_pred_train = pipeline.predict(X_train)
# Print the report on f1 score, precision and recall for each output category
print(classification_report(y_train.values, y_pred_train, target_names = y.columns.values))
# Evaluation metrics for test data
y_pred_test = pipeline.predict(X_test)
# Print the report on f1 score, precision and recall for each output category
print(classification_report(y_test.values, y_pred_test, target_names = y.columns.values))
###Output
precision recall f1-score support
related 0.85 0.92 0.88 5045
request 0.80 0.43 0.56 1103
offer 0.00 0.00 0.00 31
aid_related 0.75 0.60 0.67 2707
medical_help 0.55 0.08 0.15 511
medical_products 0.72 0.07 0.13 327
search_and_rescue 0.50 0.04 0.07 188
security 0.67 0.02 0.03 119
military 0.45 0.11 0.17 206
water 0.87 0.32 0.47 425
food 0.79 0.60 0.69 711
shelter 0.77 0.28 0.41 573
clothing 0.64 0.10 0.17 89
money 0.50 0.03 0.06 145
missing_people 0.00 0.00 0.00 86
refugees 0.57 0.04 0.07 219
death 0.73 0.14 0.24 304
other_aid 0.54 0.05 0.09 867
infrastructure_related 0.43 0.01 0.03 443
transport 0.79 0.09 0.16 313
buildings 0.74 0.10 0.17 348
electricity 0.67 0.04 0.08 139
tools 0.00 0.00 0.00 32
hospitals 0.00 0.00 0.00 73
shops 0.00 0.00 0.00 26
aid_centers 0.00 0.00 0.00 75
other_infrastructure 0.00 0.00 0.00 310
weather_related 0.84 0.61 0.70 1886
floods 0.88 0.38 0.53 545
storm 0.73 0.44 0.55 626
fire 1.00 0.03 0.05 75
earthquake 0.88 0.73 0.80 622
cold 0.69 0.07 0.13 128
other_weather 0.38 0.02 0.04 374
direct_report 0.73 0.30 0.42 1302
avg / total 0.73 0.49 0.54 20973
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# See parameters of pipeline
pipeline.get_params()
# Running grid search can take a while, especially if you are searching over a lot of parameters!
# Therefore I have limited the number of parameters in my grid search
#specify parameters for grid search
parameters = {'clf__estimator__min_samples_split': [3, 4],
'clf__estimator__n_estimators': [20, 40]}
# create a grid search object
cv = GridSearchCV(pipeline, param_grid = parameters)
cv.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Retesting the model using the grid search results
y_pred_test = cv.predict(X_test)
# Print the report on f1 score, precision and recall for each output category
print(classification_report(y_test.values, y_pred_test, target_names = y.columns.values))
###Output
precision recall f1-score support
related 0.84 0.95 0.89 5045
request 0.81 0.48 0.60 1103
offer 0.00 0.00 0.00 31
aid_related 0.74 0.71 0.72 2707
medical_help 0.65 0.09 0.15 511
medical_products 0.87 0.12 0.21 327
search_and_rescue 1.00 0.03 0.05 188
security 1.00 0.02 0.03 119
military 0.68 0.11 0.19 206
water 0.91 0.39 0.54 425
food 0.83 0.65 0.73 711
shelter 0.81 0.39 0.53 573
clothing 0.86 0.07 0.12 89
money 0.80 0.03 0.05 145
missing_people 0.00 0.00 0.00 86
refugees 0.55 0.03 0.05 219
death 0.84 0.18 0.29 304
other_aid 0.61 0.04 0.07 867
infrastructure_related 0.00 0.00 0.00 443
transport 0.82 0.07 0.13 313
buildings 0.83 0.13 0.22 348
electricity 1.00 0.04 0.07 139
tools 0.00 0.00 0.00 32
hospitals 0.00 0.00 0.00 73
shops 0.00 0.00 0.00 26
aid_centers 0.00 0.00 0.00 75
other_infrastructure 0.00 0.00 0.00 310
weather_related 0.84 0.70 0.76 1886
floods 0.87 0.46 0.60 545
storm 0.77 0.54 0.63 626
fire 0.00 0.00 0.00 75
earthquake 0.89 0.81 0.85 622
cold 0.81 0.16 0.27 128
other_weather 0.53 0.05 0.08 374
direct_report 0.76 0.36 0.49 1302
avg / total 0.75 0.54 0.58 20973
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# Change the classifier to AdaBoostClassifier
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline2.fit(X_train, y_train)
y_updated_pred_test = pipeline2.predict(X_test)
# Print the report on f1 score, precision and recall for each output category
print(classification_report(y_test.values, y_updated_pred_test, target_names = y.columns.values))
###Output
precision recall f1-score support
related 0.80 0.97 0.88 5045
request 0.74 0.50 0.60 1103
offer 0.00 0.00 0.00 31
aid_related 0.76 0.61 0.68 2707
medical_help 0.57 0.28 0.38 511
medical_products 0.67 0.30 0.42 327
search_and_rescue 0.60 0.18 0.28 188
security 0.22 0.04 0.07 119
military 0.57 0.31 0.40 206
water 0.74 0.65 0.69 425
food 0.80 0.64 0.71 711
shelter 0.75 0.54 0.62 573
clothing 0.67 0.35 0.46 89
money 0.54 0.26 0.35 145
missing_people 0.47 0.09 0.16 86
refugees 0.50 0.22 0.30 219
death 0.72 0.44 0.55 304
other_aid 0.57 0.15 0.24 867
infrastructure_related 0.39 0.11 0.17 443
transport 0.70 0.20 0.32 313
buildings 0.70 0.44 0.54 348
electricity 0.68 0.24 0.36 139
tools 0.12 0.03 0.05 32
hospitals 0.32 0.10 0.15 73
shops 0.17 0.04 0.06 26
aid_centers 0.27 0.05 0.09 75
other_infrastructure 0.34 0.10 0.15 310
weather_related 0.86 0.68 0.76 1886
floods 0.85 0.56 0.67 545
storm 0.75 0.50 0.60 626
fire 0.44 0.09 0.15 75
earthquake 0.88 0.82 0.85 622
cold 0.64 0.32 0.43 128
other_weather 0.48 0.14 0.21 374
direct_report 0.67 0.40 0.50 1302
avg / total 0.72 0.58 0.62 20973
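###Markdown
Swapping the classifier to AdaBoost covers the first idea above. For the second idea (features besides TF-IDF), a minimal sketch of one possible extension follows; the message-length feature and the `TextLengthExtractor` name are illustrative assumptions, not part of the original pipeline:
###Code
# hypothetical extra feature: message length in characters, stacked next to TF-IDF
class TextLengthExtractor(BaseEstimator, TransformerMixin):
    """Return the character length of each message as a single numeric column."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        # reshape to 2-D so FeatureUnion can stack it with the TF-IDF matrix
        return pd.Series(X).str.len().values.reshape(-1, 1)
pipeline3 = Pipeline([
    ('features', FeatureUnion([
        ('text_pipeline', Pipeline([
            ('vect', CountVectorizer(tokenizer = tokenize)),
            ('tfidf', TfidfTransformer())
        ])),
        ('text_length', TextLengthExtractor())
    ])),
    ('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
# pipeline3.fit(X_train, y_train) would train it the same way as the pipelines above
###Output
_____no_output_____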
###Markdown
9. Export your model as a pickle file
###Code
# export the model as a pickle file
# the model from the grid search seems to do the best
pickle.dump(cv, open('my_final_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import numpy as np
import re
import pickle
import pandas as pd
from sqlalchemy import create_engine
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.base import BaseEstimator,TransformerMixin
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
nltk.download(['wordnet', 'punkt', 'stopwords'])
# load data from database
engine = create_engine('sqlite:///disaster_response.db')
df = pd.read_sql_table('tb_disaster_messages',engine)
# message Column
X = df['message']
# classification label
Y = df.iloc[:, 4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Function:
split text into words and return the root form of the words
Args:
text(str): the message
Return:
clean_tokens(list of str): a list of the root form of the message words
"""
# tokenize text
tokens = word_tokenize(text)
# lemmatization
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# pipeline: Random Forest Classifier
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def gen_classifrep(model, X_test, y_test):
'''
Function to generate classification report on the model
Input:
model, test set: X_test & y_test
Output: Prints the classification report
'''
y_pred = model.predict(X_test)
for i, col in enumerate(y_test):
print(col)
print(classification_report(y_test[col], y_pred[:, i]))
gen_classifrep(pipeline, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.64 0.36 0.47 1533
1 0.82 0.94 0.88 4975
2 0.37 0.22 0.27 46
avg / total 0.78 0.80 0.78 6554
request
precision recall f1-score support
0 0.89 0.98 0.93 5445
1 0.81 0.40 0.53 1109
avg / total 0.88 0.88 0.87 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6527
1 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.72 0.88 0.79 3850
1 0.75 0.52 0.61 2704
avg / total 0.73 0.73 0.72 6554
medical_help
precision recall f1-score support
0 0.93 1.00 0.96 6030
1 0.64 0.10 0.18 524
avg / total 0.90 0.92 0.90 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 6220
1 0.72 0.08 0.14 334
avg / total 0.94 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6380
1 0.44 0.02 0.04 174
avg / total 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 0.00 0.00 0.00 127
avg / total 0.96 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6336
1 0.69 0.04 0.08 218
avg / total 0.96 0.97 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.96 0.99 0.98 6146
1 0.80 0.32 0.45 408
avg / total 0.95 0.95 0.94 6554
food
precision recall f1-score support
0 0.94 0.99 0.96 5801
1 0.81 0.48 0.60 753
avg / total 0.92 0.93 0.92 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 6004
1 0.80 0.14 0.24 550
avg / total 0.92 0.92 0.90 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6453
1 0.67 0.10 0.17 101
avg / total 0.98 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6415
1 0.71 0.07 0.13 139
avg / total 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.50 0.01 0.03 76
avg / total 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6340
1 0.57 0.06 0.10 214
avg / total 0.96 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6229
1 0.80 0.15 0.25 325
avg / total 0.95 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5675
1 0.51 0.03 0.05 879
avg / total 0.82 0.87 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6129
1 0.20 0.00 0.00 425
avg / total 0.89 0.93 0.90 6554
transport
precision recall f1-score support
0 0.96 1.00 0.98 6267
1 0.72 0.10 0.18 287
avg / total 0.95 0.96 0.94 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.98 6211
1 0.79 0.13 0.22 343
avg / total 0.95 0.95 0.94 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.55 0.04 0.08 142
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.00 0.00 0.00 30
avg / total 0.99 1.00 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 1.00 6496
1 1.00 0.02 0.03 58
avg / total 0.99 0.99 0.99 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.00 0.00 0.00 30
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6468
1 0.00 0.00 0.00 86
avg / total 0.97 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6266
1 0.33 0.00 0.01 288
avg / total 0.93 0.96 0.93 6554
weather_related
precision recall f1-score support
0 0.83 0.97 0.89 4665
1 0.86 0.51 0.64 1889
avg / total 0.84 0.83 0.82 6554
floods
precision recall f1-score support
0 0.93 1.00 0.96 6014
1 0.94 0.18 0.30 540
avg / total 0.93 0.93 0.91 6554
storm
precision recall f1-score support
0 0.93 0.99 0.96 5923
1 0.76 0.31 0.44 631
avg / total 0.91 0.92 0.91 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6477
1 0.00 0.00 0.00 77
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.95 0.99 0.97 5894
1 0.89 0.52 0.66 660
avg / total 0.94 0.95 0.94 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6428
1 0.76 0.17 0.28 126
avg / total 0.98 0.98 0.98 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6230
1 0.40 0.01 0.02 324
avg / total 0.92 0.95 0.93 6554
direct_report
precision recall f1-score support
0 0.86 0.97 0.92 5305
1 0.76 0.34 0.47 1249
avg / total 0.84 0.85 0.83 6554
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# Using grid search
# Create Grid search parameters for Random Forest Classifier
parameters = {
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [10, 20]
}
cv = GridSearchCV(pipeline, param_grid=parameters)
cv
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
gen_classifrep(cv, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.69 0.31 0.43 1533
1 0.81 0.95 0.88 4975
2 0.50 0.30 0.38 46
avg / total 0.78 0.80 0.77 6554
request
precision recall f1-score support
0 0.89 0.98 0.94 5445
1 0.84 0.41 0.55 1109
avg / total 0.88 0.89 0.87 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6527
1 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.74 0.88 0.80 3850
1 0.76 0.57 0.65 2704
avg / total 0.75 0.75 0.74 6554
medical_help
precision recall f1-score support
0 0.93 1.00 0.96 6030
1 0.69 0.07 0.13 524
avg / total 0.91 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6220
1 0.75 0.05 0.10 334
avg / total 0.94 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6380
1 0.67 0.03 0.07 174
avg / total 0.97 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 0.00 0.00 0.00 127
avg / total 0.96 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6336
1 0.47 0.04 0.07 218
avg / total 0.95 0.97 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.95 1.00 0.98 6146
1 0.89 0.29 0.44 408
avg / total 0.95 0.95 0.94 6554
food
precision recall f1-score support
0 0.93 0.99 0.96 5801
1 0.87 0.41 0.56 753
avg / total 0.92 0.93 0.91 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 6004
1 0.87 0.23 0.37 550
avg / total 0.93 0.93 0.91 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6453
1 0.64 0.09 0.16 101
avg / total 0.98 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6415
1 0.89 0.06 0.11 139
avg / total 0.98 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.00 0.00 0.00 76
avg / total 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6340
1 0.60 0.04 0.08 214
avg / total 0.96 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6229
1 0.81 0.14 0.24 325
avg / total 0.95 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5675
1 0.59 0.02 0.04 879
avg / total 0.83 0.87 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6129
1 0.25 0.00 0.00 425
avg / total 0.89 0.93 0.90 6554
transport
precision recall f1-score support
0 0.96 1.00 0.98 6267
1 0.71 0.06 0.11 287
avg / total 0.95 0.96 0.94 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.97 6211
1 0.67 0.06 0.12 343
avg / total 0.94 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.67 0.01 0.03 142
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.00 0.00 0.00 30
avg / total 0.99 1.00 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 1.00 6496
1 1.00 0.02 0.03 58
avg / total 0.99 0.99 0.99 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.00 0.00 0.00 30
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6468
1 0.00 0.00 0.00 86
avg / total 0.97 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6266
1 0.00 0.00 0.00 288
avg / total 0.91 0.96 0.93 6554
weather_related
precision recall f1-score support
0 0.84 0.97 0.90 4665
1 0.88 0.54 0.67 1889
avg / total 0.85 0.85 0.83 6554
floods
precision recall f1-score support
0 0.94 1.00 0.97 6014
1 0.95 0.29 0.44 540
avg / total 0.94 0.94 0.92 6554
storm
precision recall f1-score support
0 0.94 0.99 0.96 5923
1 0.77 0.43 0.55 631
avg / total 0.93 0.93 0.92 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6477
1 0.00 0.00 0.00 77
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.96 0.99 0.98 5894
1 0.90 0.63 0.74 660
avg / total 0.95 0.96 0.95 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6428
1 0.83 0.04 0.08 126
avg / total 0.98 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6230
1 0.60 0.02 0.04 324
avg / total 0.93 0.95 0.93 6554
direct_report
precision recall f1-score support
0 0.87 0.98 0.92 5305
1 0.81 0.35 0.49 1249
avg / total 0.85 0.86 0.84 6554
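###Markdown
The tuned parameter values themselves are not printed above; they can be read directly from the fitted grid search object via its standard attributes:
###Code
# best parameter combination found and its mean cross-validated score
print(cv.best_params_)
print(cv.best_score_)
###Output
_____no_output_____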
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# pipeline: Ada Booster Classifier
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline2.fit(X_train, y_train)
gen_classifrep(pipeline2, X_test, y_test)
# Using grid search
# Create Grid search parameters for Ada Booster Classifier
parameters2 = {
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [10, 20]
}
cv2 = GridSearchCV(pipeline2, param_grid=parameters2)
cv2
cv2.fit(X_train, y_train)
gen_classifrep(cv2, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.55 0.17 0.26 1533
1 0.78 0.96 0.86 4975
2 0.00 0.00 0.00 46
avg / total 0.72 0.77 0.71 6554
request
precision recall f1-score support
0 0.91 0.97 0.94 5445
1 0.75 0.51 0.61 1109
avg / total 0.88 0.89 0.88 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6527
1 0.00 0.00 0.00 27
avg / total 0.99 0.99 0.99 6554
aid_related
precision recall f1-score support
0 0.74 0.86 0.79 3850
1 0.73 0.57 0.64 2704
avg / total 0.74 0.74 0.73 6554
medical_help
precision recall f1-score support
0 0.94 0.99 0.96 6030
1 0.63 0.23 0.34 524
avg / total 0.91 0.93 0.91 6554
medical_products
precision recall f1-score support
0 0.96 0.99 0.98 6220
1 0.68 0.27 0.39 334
avg / total 0.95 0.96 0.95 6554
search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6380
1 0.65 0.16 0.26 174
avg / total 0.97 0.98 0.97 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 0.33 0.02 0.04 127
avg / total 0.97 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 0.99 0.98 6336
1 0.48 0.24 0.32 218
avg / total 0.96 0.97 0.96 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.97 0.99 0.98 6146
1 0.79 0.61 0.69 408
avg / total 0.96 0.97 0.96 6554
food
precision recall f1-score support
0 0.96 0.98 0.97 5801
1 0.81 0.67 0.74 753
avg / total 0.94 0.94 0.94 6554
shelter
precision recall f1-score support
0 0.95 0.99 0.97 6004
1 0.76 0.48 0.59 550
avg / total 0.94 0.94 0.94 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6453
1 0.69 0.51 0.59 101
avg / total 0.99 0.99 0.99 6554
money
precision recall f1-score support
0 0.99 0.99 0.99 6415
1 0.51 0.32 0.39 139
avg / total 0.98 0.98 0.98 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.64 0.24 0.35 76
avg / total 0.99 0.99 0.99 6554
refugees
precision recall f1-score support
0 0.97 0.99 0.98 6340
1 0.52 0.21 0.29 214
avg / total 0.96 0.97 0.96 6554
death
precision recall f1-score support
0 0.97 1.00 0.98 6229
1 0.85 0.38 0.52 325
avg / total 0.96 0.97 0.96 6554
other_aid
precision recall f1-score support
0 0.88 0.98 0.93 5675
1 0.52 0.11 0.18 879
avg / total 0.83 0.87 0.83 6554
infrastructure_related
precision recall f1-score support
0 0.94 0.99 0.97 6129
1 0.47 0.09 0.15 425
avg / total 0.91 0.93 0.91 6554
transport
precision recall f1-score support
0 0.97 0.99 0.98 6267
1 0.56 0.26 0.35 287
avg / total 0.95 0.96 0.95 6554
buildings
precision recall f1-score support
0 0.96 0.99 0.98 6211
1 0.72 0.33 0.46 343
avg / total 0.95 0.96 0.95 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.61 0.22 0.32 142
avg / total 0.97 0.98 0.98 6554
tools
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.50 0.07 0.12 30
avg / total 0.99 1.00 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 1.00 6496
1 0.33 0.09 0.14 58
avg / total 0.99 0.99 0.99 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.00 0.00 0.00 30
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6468
1 0.83 0.06 0.11 86
avg / total 0.99 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6266
1 0.42 0.07 0.11 288
avg / total 0.94 0.95 0.94 6554
weather_related
precision recall f1-score support
0 0.87 0.96 0.91 4665
1 0.87 0.64 0.74 1889
avg / total 0.87 0.87 0.86 6554
floods
precision recall f1-score support
0 0.96 0.99 0.98 6014
1 0.89 0.53 0.66 540
avg / total 0.95 0.96 0.95 6554
storm
precision recall f1-score support
0 0.93 0.99 0.96 5923
1 0.76 0.33 0.46 631
avg / total 0.92 0.93 0.91 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6477
1 0.65 0.22 0.33 77
avg / total 0.99 0.99 0.99 6554
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 5894
1 0.89 0.76 0.82 660
avg / total 0.97 0.97 0.97 6554
cold
precision recall f1-score support
0 0.99 1.00 0.99 6428
1 0.76 0.28 0.41 126
avg / total 0.98 0.98 0.98 6554
other_weather
precision recall f1-score support
0 0.96 0.99 0.97 6230
1 0.48 0.13 0.20 324
avg / total 0.93 0.95 0.94 6554
direct_report
precision recall f1-score support
0 0.88 0.95 0.92 5305
1 0.70 0.47 0.56 1249
avg / total 0.85 0.86 0.85 6554
###Markdown
9. Export your model as a pickle file
###Code
file_name = 'classifier.pkl'
with open (file_name, 'wb') as f:
pickle.dump(cv2, f)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import sys
import re
import nltk
import pandas as pd
import pickle
from sqlalchemy import create_engine
import matplotlib.pyplot as plt
%matplotlib inline
# import statements
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report
import warnings
warnings.filterwarnings("ignore")
nltk.download(['punkt', 'wordnet', 'stopwords'])
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('InsertTableName', con=engine)
X = df['message'].values
Y = df.iloc[:,4:].values
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# normalize text
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
# Remove stopwords
tokens = [t for t in tokens if t not in stopwords.words('english')]
# initiate lemmatizer and lemmatize
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
categories = df.columns[4:]
range(0, len(categories))
def evaluate_model(model, X_test, y_test):
'''
Evaluate model's performance on test data
Input: Model, test data set
Output: the Classification report
'''
y_pred = model.predict(X_test)
for i in range(0, len(categories)):
print(categories[i])
print(classification_report(y_test[:, i], y_pred[:, i]))
evaluate_model(pipeline, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.64 0.47 0.54 1511
1 0.85 0.92 0.88 4999
2 0.37 0.36 0.37 44
avg / total 0.80 0.81 0.80 6554
request
precision recall f1-score support
0 0.89 0.98 0.93 5400
1 0.79 0.43 0.55 1154
avg / total 0.87 0.88 0.86 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6531
1 0.00 0.00 0.00 23
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.75 0.86 0.80 3776
1 0.77 0.60 0.67 2778
avg / total 0.75 0.75 0.75 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6017
1 0.61 0.07 0.13 537
avg / total 0.90 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 6232
1 0.72 0.09 0.16 322
avg / total 0.94 0.95 0.94 6554
search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6362
1 0.75 0.19 0.30 192
avg / total 0.97 0.97 0.97 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6434
1 0.00 0.00 0.00 120
avg / total 0.96 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.99 6369
1 0.57 0.09 0.15 185
avg / total 0.96 0.97 0.96 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.95 1.00 0.97 6112
1 0.83 0.32 0.46 442
avg / total 0.94 0.95 0.94 6554
food
precision recall f1-score support
0 0.94 0.99 0.96 5812
1 0.85 0.50 0.63 742
avg / total 0.93 0.93 0.93 6554
shelter
precision recall f1-score support
0 0.93 0.99 0.96 5989
1 0.81 0.26 0.40 565
avg / total 0.92 0.93 0.91 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6450
1 0.67 0.13 0.22 104
avg / total 0.98 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6418
1 0.56 0.04 0.07 136
avg / total 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6480
1 1.00 0.04 0.08 74
avg / total 0.99 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6332
1 0.52 0.06 0.11 222
avg / total 0.95 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6249
1 0.81 0.17 0.28 305
avg / total 0.95 0.96 0.95 6554
other_aid
precision recall f1-score support
0 0.87 0.99 0.93 5666
1 0.57 0.06 0.12 888
avg / total 0.83 0.87 0.82 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6146
1 0.18 0.00 0.01 408
avg / total 0.89 0.94 0.91 6554
transport
precision recall f1-score support
0 0.95 1.00 0.97 6230
1 0.56 0.06 0.11 324
avg / total 0.93 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.97 6190
1 0.93 0.10 0.19 364
avg / total 0.95 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6422
1 0.50 0.02 0.03 132
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6507
1 0.00 0.00 0.00 47
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 1.00 6502
1 1.00 0.02 0.04 52
avg / total 0.99 0.99 0.99 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6523
1 0.00 0.00 0.00 31
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6484
1 0.00 0.00 0.00 70
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6272
1 0.00 0.00 0.00 282
avg / total 0.92 0.96 0.94 6554
weather_related
precision recall f1-score support
0 0.86 0.96 0.91 4668
1 0.85 0.62 0.72 1886
avg / total 0.86 0.86 0.85 6554
floods
precision recall f1-score support
0 0.94 1.00 0.97 5995
1 0.91 0.33 0.48 559
avg / total 0.94 0.94 0.93 6554
storm
precision recall f1-score support
0 0.95 0.98 0.96 5915
1 0.75 0.48 0.59 639
avg / total 0.93 0.93 0.93 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6482
1 1.00 0.04 0.08 72
avg / total 0.99 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 5940
1 0.90 0.74 0.81 614
avg / total 0.97 0.97 0.97 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6435
1 0.85 0.09 0.17 119
avg / total 0.98 0.98 0.98 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6192
1 0.45 0.02 0.05 362
avg / total 0.92 0.94 0.92 6554
direct_report
precision recall f1-score support
0 0.85 0.97 0.91 5210
1 0.75 0.32 0.44 1344
avg / total 0.83 0.84 0.81 6554
###Markdown
6. Improve your modelUse grid search to find better parameters. Two candidate parameter grids were kept here as notes; only a subset of them is searched in the code cell below: `parameters = {'clf__estimator__min_samples_split': [2, 4], 'clf__estimator__max_features': [None, 'log2', 'sqrt'], 'clf__estimator__criterion': ['gini', 'entropy'], 'clf__estimator__max_depth': [25, 100, 200]}` and `parameters = {'vect__max_df': (0.5, 0.75, 1.0), 'vect__max_features': (None, 5000, 10000, 50000), 'vect__ngram_range': ((1, 1), (1, 2)),  # unigrams or bigrams, 'tfidf__use_idf': (True, False), 'tfidf__norm': ('l1', 'l2'), 'clf__max_iter': (20,), 'clf__alpha': (0.00001, 0.000001), 'clf__penalty': ('l2', 'elasticnet'), 'clf__max_iter': (10, 50, 80)}`.
###Code
parameters = {
#'vect__max_df': (0.5, 0.75, 1.0),
# 'vect__max_features': (None, 5000, 10000, 50000),
'vect__ngram_range': ((1, 1), (1, 2)), # unigrams or bigrams
'tfidf__use_idf': (True, False),
# 'tfidf__norm': ('l1', 'l2'),
#'clf__alpha': (0.00001, 0.000001),
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__min_samples_split': [2, 4],
# 'clf__max_iter': (10, 50, 80),
}
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs=-1, verbose=5)
cv.get_params().keys()
cv
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
evaluate_model(cv, X_test, y_test)
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
###Code
with open('model_cv.pkl', 'wb') as file:
pickle.dump(cv, file)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import sqlite3
import pandas as pd
from sqlalchemy import create_engine
import nltk
#nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import pickle
import warnings
import re
import numpy as np
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import accuracy_score, precision_score
import os
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report, make_scorer, f1_score
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
from nltk.corpus import stopwords
print(os.getcwd())
# load data from database
engine = create_engine('sqlite:////Users/davideffiong/Documents/Disaster-Response-Pipeline/data/DisasterResponse.db')
df = pd.read_sql("SELECT * FROM disaster_table", engine)
X = df['message'] #feature variable
Y = df.iloc[:,4:] #target variable
# return X, Y
df['related'].value_counts()
# dataframe head
df.head()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
#print text data in row 15
print(X[15])
#instantiate lemmatizer
stop_words = stopwords.words("english")
lemmatizer = WordNetLemmatizer()
lemmatizer
# a function to tokenize and lemmatize the text
def tokenize(text):
# normalize case and remove punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
    # lemmatize and remove stop words
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return tokens
# call the tokenize function and print the result
print(tokenize(X[15]))
###Output
['comitee', 'delmas', '19', 'rue', 'street', 'janvier', 'impasse', 'charite', '2', '500', 'people', 'temporary', 'shelter', 'dire', 'need', 'water', 'food', 'medication', 'tent', 'clothes', 'please', 'stop', 'see', 'u']
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables. Build a multi output classifier on Random Forest and Ada Boost classifier
###Code
#Random Forest Classifier pipeline
pipeline_rfc = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
# Adaboost Classifier pipeline
pipeline_ada = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
#split data into training and test data set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 1)
#fit ada classifier
pipeline_ada.fit(X_train, Y_train)
#fit random forest classifier
pipeline_rfc.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
from sklearn.metrics import classification_report, make_scorer, f1_score
from sklearn.metrics import precision_recall_fscore_support
#function plot_scores to test the model and print the classification report
def plot_scores(Y_test, Y_pred):
i = 0
for col in Y_test:
print('Feature {}: {}'.format(i+1, col))
print(classification_report(Y_test[col], Y_pred[:, i]))
i = i + 1
accuracy = (Y_pred == Y_test.values).mean()
print('The model accuracy is {:.3f}'.format(accuracy))
# Prediction: the Random Forest Classifier
Y_pred = pipeline_rfc.predict(X_test)
plot_scores(Y_test, Y_pred)
# Prediction: the ADA Classifier
Y_pred = pipeline_ada.predict(X_test)
plot_scores(Y_test, Y_pred)
# overall fraction of correctly predicted labels for the AdaBoost pipeline
np.mean(Y_pred == Y_test.values)
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# Show parameters for random forest pipline
pipeline_rfc.get_params()
# Show parameters for ada boost classifier pipline
pipeline_ada.get_params()
# Grid search parameters for Random Forest Classifier
parameters_rfc = {
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [10, 20]
}
cv_rfc = GridSearchCV(pipeline_rfc, param_grid = parameters_rfc)
cv_rfc
# Create Grid search parameters for Ada boost classisfier
parameters_ada = {
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 60]
}
cv_ada = GridSearchCV(pipeline_ada, param_grid = parameters_ada)
cv_ada
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Fit the first tuned model
cv_rfc.fit(X_train, Y_train)
# Fit the second tuned model
cv_ada.fit(X_train, Y_train)
# Predicting using the first tuned model
Y_pred = cv_rfc.predict(X_test)
plot_scores(Y_test, Y_pred)
# Predicting using the second tuned model
Y_pred_ada = cv_ada.predict(X_test)
plot_scores(Y_test, Y_pred_ada)
###Output
Feature 1: related
precision recall f1-score support
0 0.68 0.45 0.54 1550
1 0.84 0.93 0.88 4951
2 0.88 0.13 0.23 53
accuracy 0.81 6554
macro avg 0.80 0.50 0.55 6554
weighted avg 0.80 0.81 0.80 6554
Feature 2: request
precision recall f1-score support
0 0.90 0.97 0.94 5415
1 0.80 0.49 0.61 1139
accuracy 0.89 6554
macro avg 0.85 0.73 0.77 6554
weighted avg 0.88 0.89 0.88 6554
Feature 3: offer
precision recall f1-score support
0 1.00 1.00 1.00 6528
1 0.00 0.00 0.00 26
accuracy 1.00 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 1.00 0.99 6554
Feature 4: aid_related
precision recall f1-score support
0 0.77 0.85 0.81 3815
1 0.75 0.65 0.70 2739
accuracy 0.76 6554
macro avg 0.76 0.75 0.75 6554
weighted avg 0.76 0.76 0.76 6554
Feature 5: medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6001
1 0.60 0.08 0.14 553
accuracy 0.92 6554
macro avg 0.76 0.54 0.55 6554
weighted avg 0.89 0.92 0.89 6554
Feature 6: medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6205
1 0.73 0.10 0.18 349
accuracy 0.95 6554
macro avg 0.84 0.55 0.58 6554
weighted avg 0.94 0.95 0.93 6554
Feature 7: search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6387
1 0.62 0.06 0.11 167
accuracy 0.98 6554
macro avg 0.80 0.53 0.55 6554
weighted avg 0.97 0.98 0.97 6554
Feature 8: security
precision recall f1-score support
0 0.98 1.00 0.99 6442
1 0.00 0.00 0.00 112
accuracy 0.98 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.97 0.98 0.97 6554
Feature 9: military
precision recall f1-score support
0 0.97 1.00 0.99 6364
1 0.59 0.07 0.12 190
accuracy 0.97 6554
macro avg 0.78 0.53 0.55 6554
weighted avg 0.96 0.97 0.96 6554
Feature 10: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
accuracy 1.00 6554
macro avg 1.00 1.00 1.00 6554
weighted avg 1.00 1.00 1.00 6554
Feature 11: water
precision recall f1-score support
0 0.96 1.00 0.98 6124
1 0.86 0.42 0.57 430
accuracy 0.96 6554
macro avg 0.91 0.71 0.77 6554
weighted avg 0.95 0.96 0.95 6554
Feature 12: food
precision recall f1-score support
0 0.94 0.99 0.96 5800
1 0.84 0.51 0.64 754
accuracy 0.93 6554
macro avg 0.89 0.75 0.80 6554
weighted avg 0.93 0.93 0.93 6554
Feature 13: shelter
precision recall f1-score support
0 0.93 0.99 0.96 5978
1 0.81 0.26 0.40 576
accuracy 0.93 6554
macro avg 0.87 0.63 0.68 6554
weighted avg 0.92 0.93 0.91 6554
Feature 14: clothing
precision recall f1-score support
0 0.99 1.00 0.99 6455
1 0.90 0.18 0.30 99
accuracy 0.99 6554
macro avg 0.94 0.59 0.65 6554
weighted avg 0.99 0.99 0.98 6554
Feature 15: money
precision recall f1-score support
0 0.98 1.00 0.99 6413
1 0.75 0.02 0.04 141
accuracy 0.98 6554
macro avg 0.86 0.51 0.52 6554
weighted avg 0.97 0.98 0.97 6554
Feature 16: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6474
1 1.00 0.04 0.07 80
accuracy 0.99 6554
macro avg 0.99 0.52 0.53 6554
weighted avg 0.99 0.99 0.98 6554
Feature 17: refugees
precision recall f1-score support
0 0.97 1.00 0.98 6338
1 0.68 0.06 0.11 216
accuracy 0.97 6554
macro avg 0.83 0.53 0.55 6554
weighted avg 0.96 0.97 0.95 6554
Feature 18: death
precision recall f1-score support
0 0.96 1.00 0.98 6267
1 0.77 0.15 0.25 287
accuracy 0.96 6554
macro avg 0.87 0.57 0.62 6554
weighted avg 0.95 0.96 0.95 6554
Feature 19: other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5665
1 0.60 0.05 0.09 889
accuracy 0.87 6554
macro avg 0.74 0.52 0.51 6554
weighted avg 0.83 0.87 0.81 6554
Feature 20: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6133
1 0.33 0.00 0.01 421
accuracy 0.94 6554
macro avg 0.63 0.50 0.49 6554
weighted avg 0.90 0.94 0.91 6554
Feature 21: transport
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
###Code
# Create a pickle file for the model
file_name = 'classifier.pkl'
with open (file_name, 'wb') as f:
pickle.dump(cv_ada, f)
###Output
_____no_output_____
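###Markdown
For completeness, a sketch of loading the exported classifier back and reusing it for inference; it assumes the `classifier.pkl` file written above and the same `X_test` prepared earlier in this notebook.
###Code
# Reload the pickled model and reuse it for prediction (sketch)
with open('classifier.pkl', 'rb') as f:
    loaded_model = pickle.load(f)
print(loaded_model.predict(X_test)[:5])
###Output
_____no_output_____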
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
import pickle
from sqlalchemy import create_engine
import re
import nltk
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from nltk.tokenize import word_tokenize
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from nltk.corpus import stopwords
from sklearn.metrics import classification_report
nltk.download('stopwords')
nltk.download(['punkt', 'wordnet'])
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('InsertTableName', con=engine)
X = df['message']
Y = df[df.columns[4:]]
###Output
_____no_output_____
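###Markdown
A quick sanity check (added sketch, not part of the original template): confirm the shapes of the feature and target objects and list the 36 category columns that the reports below iterate over.
###Code
# Sanity check of the loaded data (assumes df, X and Y from the cell above)
print(df.shape)
print(X.shape, Y.shape)
print(list(Y.columns))
###Output
_____no_output_____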
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
inputs:
messages
Returns:
list of words into numbers of same meaning
"""
rx = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
d_urls = re.findall(rx, text)
for i in d_urls:
text = text.replace(i, "urlplaceholder")
# tokenize
tokens = word_tokenize(text)
stop_words = stopwords.words("english")
# stemming
stem = [PorterStemmer().stem(tok) for tok in tokens]
# lemmatizing
lem = [WordNetLemmatizer().lemmatize(tok) for tok in stem if tok not in stop_words]
return lem
###Output
_____no_output_____
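###Markdown
A small usage check of `tokenize` (illustrative sketch; the message below is made up): the URL is replaced with a placeholder, stop words are dropped, and the remaining tokens are stemmed and lemmatized.
###Code
# Illustrative example only; the message text is a made-up sample
sample_message = "We need water and food in the camp, see https://example.org/report"
print(tokenize(sample_message))
###Output
_____no_output_____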
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# Creating pipeline with classifier
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
###Output
_____no_output_____
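###Markdown
Before tuning (step 6), it is useful to list the hyperparameter names the pipeline exposes; these double-underscore keys (e.g. `clf__estimator__max_depth`) are what `GridSearchCV` expects. A quick sketch:
###Code
# List the tunable parameter keys exposed by the pipeline
sorted(pipeline.get_params().keys())
###Output
_____no_output_____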
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# split data, train
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state = 42)
pipeline.fit(X_train, y_train)
# predict
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each. Evaluate the metrics of the ML pipeline with the helper function below, which prints a classification report for each category (input: pipe, y_pred & y_test; output: printed scores).
###Code
def cal_score(pipe, y_pred, y_test):
'''
Function to generate classification report on the model
Input: pipe, y_pred & y_test
Output: Prints the scores
'''
for i, col in enumerate(y_test):
print(col)
print(classification_report(y_test[col], y_pred[:, i]))
# calculating and displaying scores
cal_score(pipeline, y_pred, y_test)
###Output
related
precision recall f1-score support
0 0.61 0.38 0.47 1563
1 0.82 0.92 0.87 4944
2 0.60 0.19 0.29 47
avg / total 0.77 0.79 0.77 6554
request
precision recall f1-score support
0 0.89 0.98 0.93 5443
1 0.80 0.40 0.53 1111
avg / total 0.87 0.88 0.86 6554
offer
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6554
aid_related
precision recall f1-score support
0 0.75 0.86 0.80 3884
1 0.74 0.59 0.65 2670
avg / total 0.75 0.75 0.74 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6019
1 0.62 0.09 0.15 535
avg / total 0.90 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6210
1 0.64 0.04 0.08 344
avg / total 0.93 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6395
1 0.67 0.08 0.14 159
avg / total 0.97 0.98 0.97 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6438
1 0.33 0.01 0.02 116
avg / total 0.97 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6354
1 0.55 0.09 0.15 200
avg / total 0.96 0.97 0.96 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.95 1.00 0.97 6136
1 0.83 0.25 0.38 418
avg / total 0.94 0.95 0.94 6554
food
precision recall f1-score support
0 0.93 0.99 0.96 5809
1 0.86 0.40 0.55 745
avg / total 0.92 0.92 0.91 6554
shelter
precision recall f1-score support
0 0.93 0.99 0.96 5973
1 0.76 0.19 0.31 581
avg / total 0.91 0.92 0.90 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6456
1 0.72 0.13 0.22 98
avg / total 0.98 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6421
1 0.81 0.10 0.17 133
avg / total 0.98 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6481
1 0.00 0.00 0.00 73
avg / total 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6339
1 0.33 0.02 0.04 215
avg / total 0.95 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6257
1 0.78 0.13 0.23 297
avg / total 0.95 0.96 0.95 6554
other_aid
precision recall f1-score support
0 0.87 0.99 0.93 5690
1 0.55 0.05 0.09 864
avg / total 0.83 0.87 0.82 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6143
1 0.33 0.01 0.01 411
avg / total 0.90 0.94 0.91 6554
transport
precision recall f1-score support
0 0.96 1.00 0.98 6251
1 0.64 0.05 0.10 303
avg / total 0.94 0.95 0.94 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.98 6231
1 0.68 0.09 0.16 323
avg / total 0.94 0.95 0.94 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6407
1 0.75 0.04 0.08 147
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6511
1 0.00 0.00 0.00 43
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 1.00 6498
1 0.00 0.00 0.00 56
avg / total 0.98 0.99 0.99 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6530
1 0.00 0.00 0.00 24
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6473
1 0.00 0.00 0.00 81
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6271
1 0.00 0.00 0.00 283
avg / total 0.92 0.96 0.94 6554
weather_related
precision recall f1-score support
0 0.86 0.96 0.91 4781
1 0.83 0.58 0.69 1773
avg / total 0.85 0.86 0.85 6554
floods
precision recall f1-score support
0 0.94 0.99 0.97 6035
1 0.83 0.31 0.46 519
avg / total 0.93 0.94 0.93 6554
storm
precision recall f1-score support
0 0.94 0.98 0.96 5949
1 0.73 0.41 0.52 605
avg / total 0.92 0.93 0.92 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6488
1 0.00 0.00 0.00 66
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 5964
1 0.90 0.64 0.75 590
avg / total 0.96 0.96 0.96 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6413
1 0.75 0.11 0.19 141
avg / total 0.98 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6219
1 0.57 0.01 0.02 335
avg / total 0.93 0.95 0.93 6554
direct_report
precision recall f1-score support
0 0.85 0.97 0.91 5282
1 0.72 0.28 0.40 1272
avg / total 0.82 0.84 0.81 6554
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# parameter for cross valuation
parameters = {'clf__estimator__max_depth': [10, 50, None],
'clf__estimator__min_samples_leaf':[2, 5, 10]}
# grid search
cv = GridSearchCV(pipeline, parameters)
cv
# training model
cv.fit(X_train, y_train)
y_pred1 = cv.predict(X_test)
###Output
_____no_output_____
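###Markdown
After the search has finished, the chosen hyperparameters and the mean cross-validated score can be inspected (sketch; assumes `cv` was fitted in the cell above).
###Code
# Inspect the best hyperparameter combination and its cross-validation score
print(cv.best_params_)
print(cv.best_score_)
###Output
_____no_output_____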
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Testing model
cal_score(cv, y_pred1, y_test)
###Output
related
precision recall f1-score support
0 0.70 0.29 0.41 1563
1 0.81 0.96 0.88 4944
2 0.67 0.04 0.08 47
avg / total 0.78 0.79 0.76 6554
request
precision recall f1-score support
0 0.89 0.98 0.93 5443
1 0.83 0.38 0.52 1111
avg / total 0.88 0.88 0.86 6554
offer
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6554
aid_related
precision recall f1-score support
0 0.79 0.82 0.81 3884
1 0.72 0.69 0.71 2670
avg / total 0.76 0.77 0.76 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6019
1 0.70 0.09 0.16 535
avg / total 0.91 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6210
1 0.67 0.03 0.07 344
avg / total 0.93 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6395
1 1.00 0.04 0.07 159
avg / total 0.98 0.98 0.97 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6438
1 0.25 0.01 0.02 116
avg / total 0.97 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6354
1 0.56 0.05 0.09 200
avg / total 0.96 0.97 0.96 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.94 1.00 0.97 6136
1 0.71 0.08 0.14 418
avg / total 0.93 0.94 0.92 6554
food
precision recall f1-score support
0 0.93 0.99 0.96 5809
1 0.84 0.45 0.59 745
avg / total 0.92 0.93 0.92 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 5973
1 0.84 0.18 0.30 581
avg / total 0.92 0.92 0.90 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6456
1 0.78 0.07 0.13 98
avg / total 0.98 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6421
1 1.00 0.01 0.01 133
avg / total 0.98 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6481
1 0.00 0.00 0.00 73
avg / total 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6339
1 0.00 0.00 0.00 215
avg / total 0.94 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6257
1 0.93 0.04 0.08 297
avg / total 0.96 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5690
1 0.53 0.01 0.02 864
avg / total 0.82 0.87 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6143
1 1.00 0.00 0.01 411
avg / total 0.94 0.94 0.91 6554
transport
precision recall f1-score support
0 0.95 1.00 0.98 6251
1 0.64 0.02 0.04 303
avg / total 0.94 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.98 6231
1 0.67 0.03 0.06 323
avg / total 0.94 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6407
1 0.00 0.00 0.00 147
avg / total 0.96 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6511
1 0.00 0.00 0.00 43
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 1.00 6498
1 0.00 0.00 0.00 56
avg / total 0.98 0.99 0.99 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6530
1 0.00 0.00 0.00 24
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6473
1 0.00 0.00 0.00 81
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6271
1 0.00 0.00 0.00 283
avg / total 0.92 0.96 0.94 6554
weather_related
precision recall f1-score support
0 0.88 0.94 0.91 4781
1 0.80 0.64 0.71 1773
avg / total 0.86 0.86 0.86 6554
floods
precision recall f1-score support
0 0.95 1.00 0.97 6035
1 0.87 0.35 0.50 519
avg / total 0.94 0.94 0.93 6554
storm
precision recall f1-score support
0 0.95 0.98 0.97 5949
1 0.74 0.47 0.57 605
avg / total 0.93 0.94 0.93 6554
fire
precision recall f1-score support
0 0.99 1.00 1.00 6488
1 1.00 0.03 0.06 66
avg / total 0.99 0.99 0.99 6554
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 5964
1 0.88 0.74 0.81 590
avg / total 0.97 0.97 0.97 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6413
1 0.78 0.05 0.09 141
avg / total 0.98 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6219
1 0.50 0.01 0.03 335
avg / total 0.93 0.95 0.93 6554
direct_report
precision recall f1-score support
0 0.84 0.98 0.91 5282
1 0.78 0.24 0.36 1272
avg / total 0.83 0.84 0.80 6554
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# testing a pure decision tree classifier
new_model = MultiOutputClassifier(DecisionTreeClassifier())
pipeline1 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', new_model)
])
# training the model
pipeline1.fit(X_train, y_train)
# predicting
y_new_pred = pipeline1.predict(X_test)
# testing the new model
cal_score(pipeline1, y_new_pred, y_test)
###Output
related
precision recall f1-score support
0 0.50 0.46 0.48 1563
1 0.83 0.85 0.84 4944
2 0.42 0.17 0.24 47
avg / total 0.75 0.75 0.75 6554
request
precision recall f1-score support
0 0.91 0.91 0.91 5443
1 0.56 0.55 0.55 1111
avg / total 0.85 0.85 0.85 6554
offer
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6554
aid_related
precision recall f1-score support
0 0.76 0.74 0.75 3884
1 0.63 0.66 0.65 2670
avg / total 0.71 0.71 0.71 6554
medical_help
precision recall f1-score support
0 0.94 0.95 0.95 6019
1 0.38 0.36 0.37 535
avg / total 0.90 0.90 0.90 6554
medical_products
precision recall f1-score support
0 0.97 0.97 0.97 6210
1 0.43 0.36 0.39 344
avg / total 0.94 0.94 0.94 6554
search_and_rescue
precision recall f1-score support
0 0.98 0.98 0.98 6395
1 0.18 0.17 0.17 159
avg / total 0.96 0.96 0.96 6554
security
precision recall f1-score support
0 0.98 0.99 0.98 6438
1 0.08 0.07 0.07 116
avg / total 0.97 0.97 0.97 6554
military
precision recall f1-score support
0 0.98 0.98 0.98 6354
1 0.35 0.37 0.36 200
avg / total 0.96 0.96 0.96 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.98 0.97 0.98 6136
1 0.63 0.65 0.64 418
avg / total 0.95 0.95 0.95 6554
food
precision recall f1-score support
0 0.96 0.97 0.97 5809
1 0.74 0.72 0.73 745
avg / total 0.94 0.94 0.94 6554
shelter
precision recall f1-score support
0 0.96 0.96 0.96 5973
1 0.60 0.59 0.60 581
avg / total 0.93 0.93 0.93 6554
clothing
precision recall f1-score support
0 0.99 0.99 0.99 6456
1 0.54 0.43 0.48 98
avg / total 0.98 0.99 0.99 6554
money
precision recall f1-score support
0 0.99 0.99 0.99 6421
1 0.35 0.35 0.35 133
avg / total 0.97 0.97 0.97 6554
missing_people
precision recall f1-score support
0 0.99 0.99 0.99 6481
1 0.30 0.25 0.27 73
avg / total 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.98 0.98 0.98 6339
1 0.30 0.26 0.28 215
avg / total 0.95 0.96 0.95 6554
death
precision recall f1-score support
0 0.98 0.98 0.98 6257
1 0.54 0.54 0.54 297
avg / total 0.96 0.96 0.96 6554
other_aid
precision recall f1-score support
0 0.89 0.90 0.90 5690
1 0.29 0.28 0.28 864
avg / total 0.81 0.82 0.82 6554
infrastructure_related
precision recall f1-score support
0 0.94 0.95 0.95 6143
1 0.17 0.15 0.16 411
avg / total 0.90 0.90 0.90 6554
transport
precision recall f1-score support
0 0.96 0.97 0.97 6251
1 0.30 0.26 0.28 303
avg / total 0.93 0.94 0.94 6554
buildings
precision recall f1-score support
0 0.97 0.97 0.97 6231
1 0.46 0.42 0.44 323
avg / total 0.95 0.95 0.95 6554
electricity
precision recall f1-score support
0 0.98 0.99 0.99 6407
1 0.38 0.26 0.31 147
avg / total 0.97 0.97 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6511
1 0.05 0.02 0.03 43
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 0.99 0.99 6498
1 0.08 0.11 0.09 56
avg / total 0.98 0.98 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6530
1 0.00 0.00 0.00 24
avg / total 0.99 0.99 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 0.99 0.99 6473
1 0.11 0.07 0.09 81
avg / total 0.98 0.98 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 0.96 0.96 6271
1 0.12 0.11 0.11 283
avg / total 0.92 0.93 0.93 6554
weather_related
precision recall f1-score support
0 0.90 0.90 0.90 4781
1 0.73 0.73 0.73 1773
avg / total 0.85 0.85 0.85 6554
floods
precision recall f1-score support
0 0.97 0.96 0.97 6035
1 0.60 0.61 0.60 519
avg / total 0.94 0.94 0.94 6554
storm
precision recall f1-score support
0 0.97 0.96 0.97 5949
1 0.66 0.68 0.67 605
avg / total 0.94 0.94 0.94 6554
fire
precision recall f1-score support
0 0.99 0.99 0.99 6488
1 0.32 0.27 0.30 66
avg / total 0.99 0.99 0.99 6554
earthquake
precision recall f1-score support
0 0.98 0.98 0.98 5964
1 0.76 0.79 0.78 590
avg / total 0.96 0.96 0.96 6554
cold
precision recall f1-score support
0 0.99 0.99 0.99 6413
1 0.49 0.45 0.47 141
avg / total 0.98 0.98 0.98 6554
other_weather
precision recall f1-score support
0 0.96 0.97 0.96 6219
1 0.26 0.21 0.24 335
avg / total 0.92 0.93 0.93 6554
direct_report
precision recall f1-score support
0 0.87 0.88 0.87 5282
1 0.47 0.45 0.46 1272
avg / total 0.79 0.80 0.79 6554
###Markdown
9. Export your model as a pickle file
###Code
# saving the model as a pickle file
pickle.dump(cv, open('model.pkl', 'wb'))
###Output
_____no_output_____
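###Markdown
A possible refinement (sketch): instead of pickling the whole `GridSearchCV` object, persist only `cv.best_estimator_`, i.e. the refitted best pipeline, which keeps the file smaller and avoids storing the search bookkeeping. The file name below is illustrative.
###Code
# Optional: persist only the refitted best pipeline (file name is illustrative)
pickle.dump(cv.best_estimator_, open('best_model.pkl', 'wb'))
###Output
_____no_output_____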
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sqlalchemy import create_engine
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.preprocessing import StandardScaler
# load data from database
engine = create_engine('sqlite:///Messages.db')
df = pd.read_sql_table('Msg_table',engine)
X = df.message.values
Y = df[df.columns[4:]].values.astype('int64')
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
    """Tokenize a message and return lowercased, stripped lemmas."""
    tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y,random_state=3)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
y_pred = pipeline.predict(X_test)
f1_scores=np.zeros((36,1))
for i,col in enumerate(df.columns[4:]):
print(f"{col} performance metrics")
print(classification_report(y_test[:,i], y_pred[:,i]))
f1_scores[i,]=f1_score(y_test[:,i], y_pred[:,i])
print("##########################################################")
print(f"Average f1-score for this model is {np.mean(f1_scores)}")
###Output
related performance metrics
precision recall f1-score support
0 0.68 0.62 0.65 879
1 0.80 0.84 0.82 1631
avg / total 0.76 0.76 0.76 2510
##########################################################
request performance metrics
precision recall f1-score support
0 0.82 0.92 0.87 1621
1 0.81 0.63 0.71 889
avg / total 0.81 0.82 0.81 2510
##########################################################
offer performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2505
1 0.00 0.00 0.00 5
avg / total 1.00 1.00 1.00 2510
##########################################################
aid_related performance metrics
precision recall f1-score support
0 0.77 0.90 0.83 1547
1 0.78 0.58 0.66 963
avg / total 0.78 0.78 0.77 2510
##########################################################
medical_help performance metrics
precision recall f1-score support
0 0.95 1.00 0.97 2385
1 0.43 0.02 0.05 125
avg / total 0.93 0.95 0.93 2510
##########################################################
medical_products performance metrics
precision recall f1-score support
0 0.97 1.00 0.98 2423
1 0.50 0.03 0.06 87
avg / total 0.95 0.97 0.95 2510
##########################################################
search_and_rescue performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2465
1 0.50 0.02 0.04 45
avg / total 0.97 0.98 0.97 2510
##########################################################
security performance metrics
precision recall f1-score support
0 0.99 1.00 1.00 2488
1 0.00 0.00 0.00 22
avg / total 0.98 0.99 0.99 2510
##########################################################
military performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2500
1 0.00 0.00 0.00 10
avg / total 0.99 1.00 0.99 2510
##########################################################
child_alone performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2510
avg / total 1.00 1.00 1.00 2510
##########################################################
water performance metrics
precision recall f1-score support
0 0.97 1.00 0.99 2310
1 0.96 0.69 0.81 200
avg / total 0.97 0.97 0.97 2510
##########################################################
food performance metrics
precision recall f1-score support
0 0.93 0.99 0.96 2142
1 0.90 0.56 0.69 368
avg / total 0.93 0.93 0.92 2510
##########################################################
shelter performance metrics
precision recall f1-score support
0 0.92 1.00 0.96 2241
1 0.91 0.27 0.42 269
avg / total 0.92 0.92 0.90 2510
##########################################################
clothing performance metrics
precision recall f1-score support
0 0.99 1.00 1.00 2488
1 0.00 0.00 0.00 22
avg / total 0.98 0.99 0.99 2510
##########################################################
money performance metrics
precision recall f1-score support
0 0.99 1.00 0.99 2475
1 0.00 0.00 0.00 35
avg / total 0.97 0.99 0.98 2510
##########################################################
missing_people performance metrics
precision recall f1-score support
0 0.99 1.00 1.00 2492
1 0.00 0.00 0.00 18
avg / total 0.99 0.99 0.99 2510
##########################################################
refugees performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2472
1 0.00 0.00 0.00 38
avg / total 0.97 0.98 0.98 2510
##########################################################
death performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2453
1 0.00 0.00 0.00 57
avg / total 0.96 0.98 0.97 2510
##########################################################
other_aid performance metrics
precision recall f1-score support
0 0.86 0.99 0.92 2135
1 0.61 0.08 0.14 375
avg / total 0.82 0.85 0.80 2510
##########################################################
infrastructure_related performance metrics
precision recall f1-score support
0 0.97 1.00 0.98 2432
1 0.00 0.00 0.00 78
avg / total 0.94 0.97 0.95 2510
##########################################################
transport performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2466
1 0.00 0.00 0.00 44
avg / total 0.97 0.98 0.97 2510
##########################################################
buildings performance metrics
precision recall f1-score support
0 0.97 1.00 0.98 2419
1 0.75 0.13 0.22 91
avg / total 0.96 0.97 0.96 2510
##########################################################
electricity performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2499
1 0.00 0.00 0.00 11
avg / total 0.99 1.00 0.99 2510
##########################################################
tools performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2506
1 0.00 0.00 0.00 4
avg / total 1.00 1.00 1.00 2510
##########################################################
hospitals performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2502
1 0.00 0.00 0.00 8
avg / total 0.99 1.00 1.00 2510
##########################################################
shops performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2502
1 0.00 0.00 0.00 8
avg / total 0.99 1.00 1.00 2510
##########################################################
aid_centers performance metrics
precision recall f1-score support
0 0.99 1.00 0.99 2484
1 0.00 0.00 0.00 26
avg / total 0.98 0.99 0.98 2510
##########################################################
other_infrastructure performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2465
1 0.00 0.00 0.00 45
avg / total 0.96 0.98 0.97 2510
##########################################################
weather_related performance metrics
precision recall f1-score support
0 0.91 0.99 0.95 2160
1 0.85 0.40 0.54 350
avg / total 0.90 0.91 0.89 2510
##########################################################
floods performance metrics
precision recall f1-score support
0 0.97 1.00 0.99 2438
1 1.00 0.08 0.15 72
avg / total 0.97 0.97 0.96 2510
##########################################################
storm performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2445
1 0.57 0.06 0.11 65
avg / total 0.97 0.97 0.96 2510
##########################################################
fire performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2500
1 0.00 0.00 0.00 10
avg / total 0.99 1.00 0.99 2510
##########################################################
earthquake performance metrics
precision recall f1-score support
0 0.97 0.99 0.98 2332
1 0.87 0.53 0.66 178
avg / total 0.96 0.96 0.96 2510
##########################################################
cold performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2500
1 0.00 0.00 0.00 10
avg / total 0.99 1.00 0.99 2510
##########################################################
other_weather performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2457
1 1.00 0.02 0.04 53
avg / total 0.98 0.98 0.97 2510
##########################################################
direct_report performance metrics
precision recall f1-score support
0 0.80 0.93 0.86 1653
1 0.81 0.55 0.66 857
avg / total 0.80 0.80 0.79 2510
##########################################################
Average f1-score for this model is 0.1885226934365356
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'vect__max_df': ( 0.75,1.00),
#'vect__max_features': (7500,10000),
'clf__estimator__n_estimators': [100,200],}
#'clf__estimator__min_samples_split': [3, 4],
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train,y_train)
y_pred = cv.predict(X_test)
from sklearn.metrics import classification_report
f1_scores_cv=np.zeros((36,1))
for i,col in enumerate(df.columns[4:]):
print(f"{col} performance metrics")
print(classification_report(y_test[:,i], y_pred[:,i]))
f1_scores_cv[i,]=f1_score(y_test[:,i], y_pred[:,i])
print("##########################################################")
print(f"Average f1-score for this model is {np.mean(f1_scores_cv)}")
###Output
related performance metrics
precision recall f1-score support
0 0.76 0.54 0.63 879
1 0.79 0.91 0.84 1631
avg / total 0.78 0.78 0.77 2510
##########################################################
request performance metrics
precision recall f1-score support
0 0.85 0.93 0.89 1621
1 0.84 0.69 0.76 889
avg / total 0.84 0.84 0.84 2510
##########################################################
offer performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2505
1 0.00 0.00 0.00 5
avg / total 1.00 1.00 1.00 2510
##########################################################
aid_related performance metrics
precision recall f1-score support
0 0.84 0.91 0.87 1547
1 0.83 0.72 0.77 963
avg / total 0.84 0.84 0.83 2510
##########################################################
medical_help performance metrics
precision recall f1-score support
0 0.95 1.00 0.97 2385
1 0.57 0.03 0.06 125
avg / total 0.93 0.95 0.93 2510
##########################################################
medical_products performance metrics
precision recall f1-score support
0 0.97 1.00 0.98 2423
1 1.00 0.03 0.07 87
avg / total 0.97 0.97 0.95 2510
##########################################################
search_and_rescue performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2465
1 0.50 0.02 0.04 45
avg / total 0.97 0.98 0.97 2510
##########################################################
security performance metrics
precision recall f1-score support
0 0.99 1.00 1.00 2488
1 0.00 0.00 0.00 22
avg / total 0.98 0.99 0.99 2510
##########################################################
military performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2500
1 0.00 0.00 0.00 10
avg / total 0.99 1.00 0.99 2510
##########################################################
child_alone performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2510
avg / total 1.00 1.00 1.00 2510
##########################################################
water performance metrics
precision recall f1-score support
0 0.96 1.00 0.98 2310
1 0.98 0.52 0.68 200
avg / total 0.96 0.96 0.95 2510
##########################################################
food performance metrics
precision recall f1-score support
0 0.95 0.99 0.97 2142
1 0.95 0.67 0.78 368
avg / total 0.95 0.95 0.94 2510
##########################################################
shelter performance metrics
precision recall f1-score support
0 0.92 1.00 0.96 2241
1 0.95 0.32 0.48 269
avg / total 0.93 0.93 0.91 2510
##########################################################
clothing performance metrics
precision recall f1-score support
0 0.99 1.00 1.00 2488
1 0.00 0.00 0.00 22
avg / total 0.98 0.99 0.99 2510
##########################################################
money performance metrics
precision recall f1-score support
0 0.99 1.00 0.99 2475
1 0.00 0.00 0.00 35
avg / total 0.97 0.99 0.98 2510
##########################################################
missing_people performance metrics
precision recall f1-score support
0 0.99 1.00 1.00 2492
1 0.00 0.00 0.00 18
avg / total 0.99 0.99 0.99 2510
##########################################################
refugees performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2472
1 0.00 0.00 0.00 38
avg / total 0.97 0.98 0.98 2510
##########################################################
death performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2453
1 1.00 0.05 0.10 57
avg / total 0.98 0.98 0.97 2510
##########################################################
other_aid performance metrics
precision recall f1-score support
0 0.85 1.00 0.92 2135
1 0.56 0.01 0.03 375
avg / total 0.81 0.85 0.79 2510
##########################################################
infrastructure_related performance metrics
precision recall f1-score support
0 0.97 1.00 0.98 2432
1 0.00 0.00 0.00 78
avg / total 0.94 0.97 0.95 2510
##########################################################
transport performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2466
1 0.00 0.00 0.00 44
avg / total 0.97 0.98 0.97 2510
##########################################################
buildings performance metrics
precision recall f1-score support
0 0.97 1.00 0.98 2419
1 0.90 0.10 0.18 91
avg / total 0.96 0.97 0.95 2510
##########################################################
electricity performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2499
1 0.00 0.00 0.00 11
avg / total 0.99 1.00 0.99 2510
##########################################################
tools performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2506
1 0.00 0.00 0.00 4
avg / total 1.00 1.00 1.00 2510
##########################################################
hospitals performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2502
1 0.00 0.00 0.00 8
avg / total 0.99 1.00 1.00 2510
##########################################################
shops performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2502
1 0.00 0.00 0.00 8
avg / total 0.99 1.00 1.00 2510
##########################################################
aid_centers performance metrics
precision recall f1-score support
0 0.99 1.00 0.99 2484
1 0.00 0.00 0.00 26
avg / total 0.98 0.99 0.98 2510
##########################################################
other_infrastructure performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2465
1 0.00 0.00 0.00 45
avg / total 0.96 0.98 0.97 2510
##########################################################
weather_related performance metrics
precision recall f1-score support
0 0.92 0.99 0.96 2160
1 0.90 0.50 0.64 350
avg / total 0.92 0.92 0.91 2510
##########################################################
floods performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2438
1 0.91 0.28 0.43 72
avg / total 0.98 0.98 0.97 2510
##########################################################
storm performance metrics
precision recall f1-score support
0 0.97 1.00 0.99 2445
1 0.25 0.02 0.03 65
avg / total 0.96 0.97 0.96 2510
##########################################################
fire performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2500
1 0.00 0.00 0.00 10
avg / total 0.99 1.00 0.99 2510
##########################################################
earthquake performance metrics
precision recall f1-score support
0 0.98 0.99 0.98 2332
1 0.87 0.68 0.76 178
avg / total 0.97 0.97 0.97 2510
##########################################################
cold performance metrics
precision recall f1-score support
0 1.00 1.00 1.00 2500
1 0.00 0.00 0.00 10
avg / total 0.99 1.00 0.99 2510
##########################################################
other_weather performance metrics
precision recall f1-score support
0 0.98 1.00 0.99 2457
1 1.00 0.02 0.04 53
avg / total 0.98 0.98 0.97 2510
##########################################################
direct_report performance metrics
precision recall f1-score support
0 0.84 0.94 0.89 1653
1 0.84 0.65 0.74 857
avg / total 0.84 0.84 0.83 2510
##########################################################
Average f1-score for this model is 0.2061420230428141
###Markdown
As evident, the tuned model is better: the average F1-score improved from roughly 0.19 to 0.21 (see the printed averages above). 8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF Based on the above model, we first identify which classes have low F1-scores and how many training data points are available for each of them.
###Code
for i,col in enumerate(df.columns[4:]):
print(f1_scores_cv[i],col,y_train[:,i].sum())
###Output
[ 0.8428246] related 5012
[ 0.7595561] request 2725
[ 0.] offer 5
[ 0.77111111] aid_related 2975
[ 0.06060606] medical_help 449
[ 0.06666667] medical_products 255
[ 0.04255319] search_and_rescue 161
[ 0.] security 107
[ 0.] military 34
[ 0.] child_alone 0
[ 0.67540984] water 591
[ 0.7827476] food 1155
[ 0.48199446] shelter 822
[ 0.] clothing 78
[ 0.] money 90
[ 0.] missing_people 65
[ 0.] refugees 129
[ 0.1] death 193
[ 0.02604167] other_aid 1085
[ 0.] infrastructure_related 235
[ 0.] transport 148
[ 0.17821782] buildings 290
[ 0.] electricity 55
[ 0.] tools 24
[ 0.] hospitals 45
[ 0.] shops 23
[ 0.] aid_centers 48
[ 0.] other_infrastructure 133
[ 0.64220183] weather_related 1097
[ 0.42553191] floods 211
[ 0.02898551] storm 211
[ 0.] fire 28
[ 0.76340694] earthquake 612
[ 0.] cold 49
[ 0.03703704] other_weather 141
[ 0.73622047] direct_report 2616
###Markdown
Class "storm" has very low F1-score (0.0289) even though the training samples available for it are comparable to compared to class "floods" with F1-score 0.426. Addressing tweets that seek information about imminent storm (cyclone, hurricane, etc) are important to prevent loss of life. After going through some of the tweets with "storm" as one of the classes, following associated words were identified.* rain* raining* storm* cycloneWe will implement an estimator class to extract word count of "storm" words. The code for the same is adapted from this repository https://github.com/rajatsharma369007/udacity-mentorship-repository/blob/master/nlp/custom_text_transformers.pyTo further improve model on another classes too, we will apply a new algorithm from scikit learn called ExtraTreesClassifier
###Code
class StormWordCounter(BaseEstimator, TransformerMixin):
'''
Customized transformer for counting number of storm words in text.
'''
# Adding 'activate' parameter to activate the transformer or not:
def __init__(self, activate = True):
self.activate = activate
# Defining fit method:
def fit(self, X, y = None):
return self
# Defining transform method:
def transform(self, X):
        '''
        Receives an array of text messages and counts the number of
        storm-related words in each message.
        Input:
            X: array of text messages
        Output:
            st_words_arr: array with the number of storm words for each
            message.
        '''
        # If activate parameter is set to True:
        if self.activate:
            st_words_count = list()
            st_list = ['rain', 'raining', 'storm', 'cyclone']
            # Counting storm words in each message:
            for text in X:
                # Reset the count for this message:
                st_words = 0
                tokens = word_tokenize(text.lower())
                for word in tokens:
                    if word in st_list:
                        st_words += 1
                st_words_count.append(st_words)
            # Transforming the list into a column array:
            st_words_arr = np.array(st_words_count)
            st_words_arr = st_words_arr.reshape((len(st_words_arr), 1))
            return st_words_arr
        # If activate parameter is set to False (not used in this notebook):
        else:
            pass
pipeline3 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('storm_count', StormWordCounter())
])),
('scale', StandardScaler(with_mean=False)),
('clf', MultiOutputClassifier(ExtraTreesClassifier()))
])
parameters = {
'features__text_pipeline__vect__max_df': (0.5, 0.75),
#'features__text_pipeline__vect__max_features': (None, 5000, 10000),
'clf__estimator__n_estimators': [100,200],}
#'clf__estimator__min_samples_split': [ 3, 4],}
cv_new = GridSearchCV(pipeline3, param_grid=parameters,cv=3)
#X_train, X_test, y_train, y_test = train_test_split(X, Y)
cv_new.fit(X_train, y_train)
y_pred = cv_new.predict(X_test)
f1_scores_nalg=np.zeros((36,1))
for i,col in enumerate(df.columns[4:]):
print(f"{col} performance metrics")
print(classification_report(y_test[:,i], y_pred[:,i]))
f1_scores_nalg[i,]=f1_score(y_test[:,i], y_pred[:,i])
print("##########################################################")
print(f"Average f1-score for this model is {np.mean(f1_scores_nalg)}")
for i,col in enumerate(df.columns[4:]):
print(f1_scores_nalg[i],col,y_train[:,i].sum())
###Output
[ 0.84724588] related 5012
[ 0.75883069] request 2725
[ 0.] offer 5
[ 0.75198188] aid_related 2975
[ 0.05882353] medical_help 449
[ 0.04444444] medical_products 255
[ 0.08163265] search_and_rescue 161
[ 0.] security 107
[ 0.] military 34
[ 0.] child_alone 0
[ 0.53090909] water 591
[ 0.69830508] food 1155
[ 0.4863388] shelter 822
[ 0.] clothing 78
[ 0.05555556] money 90
[ 0.] missing_people 65
[ 0.] refugees 129
[ 0.1] death 193
[ 0.04639175] other_aid 1085
[ 0.] infrastructure_related 235
[ 0.] transport 148
[ 0.26168224] buildings 290
[ 0.] electricity 55
[ 0.] tools 24
[ 0.] hospitals 45
[ 0.] shops 23
[ 0.] aid_centers 48
[ 0.] other_infrastructure 133
[ 0.61737523] weather_related 1097
[ 0.33707865] floods 211
[ 0.35714286] storm 211
[ 0.] fire 28
[ 0.67595819] earthquake 612
[ 0.] cold 49
[ 0.03703704] other_weather 141
[ 0.73649967] direct_report 2616
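###Markdown
As mentioned above, a quick standalone check of the custom transformer (illustrative sketch; both messages are made up): the first message contains two storm-related words, the second contains none.
###Code
# Illustrative check of StormWordCounter on two made-up messages
storm_counter = StormWordCounter()
demo_messages = ["Heavy rain and a cyclone are expected tonight",
                 "We need food and water in the camp"]
print(storm_counter.transform(demo_messages))
###Output
_____no_output_____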
###Markdown
9. Export your model as a pickle file
###Code
import pickle
pickle.dump(cv_new.best_estimator_, open("bestmodel.pkl", 'wb'))
pickle.dump(cv_new, open("whole_gs_config.pkl", 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import re
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger','stopwords'])
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report, precision_recall_fscore_support
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
import pickle
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
# check table names in the database
print(engine.table_names())
# read table from database
df = pd.read_sql_table('DisasterResponse', engine)
# close the connection to the database
conn = engine.raw_connection()
conn.close()
df.head(2)
X = df['message']
Y = df.drop(['id', 'message','original', 'genre'], axis = 1)
# Check how many messages contain web links
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
X.str.contains(url_regex).sum()
# 'related' category has value 2, but no larger than 2
print('Counts of Y=2: {}; counts of Y>2: {}'.format(Y[Y==2].count().sum(), Y[Y>2].count().sum()) )
Y[Y==2].count()
###Output
Counts of Y=2: 188; counts of Y>2: 0
###Markdown
Check the labels of messages which have "2" in 'related'.
###Code
Y[Y['related']==2]
df.iloc[117,:]
###Output
_____no_output_____
###Markdown
Message labeled as "2" in "related" has 0 in all other categories. Since there are only 188 "2" in the "related" category, it makes sense to replace all "2" with "0" to simplify the features.
###Code
Y[Y>=2]=0
print('Counts of Y>=2: {}'.format(Y[Y>=2].count().sum()))
# Check the "1" count in each category
Y[Y>0].count()
###Output
_____no_output_____
###Markdown
Note that "child_alone" category has only "0". It is an unbalanced category.
###Code
# Check count in each category
print('Total messages: {}'.format(Y.shape[0]))
fig, ax = plt.subplots(figsize=(8, 8))
Y[Y>0].sum().sort_values().plot.barh(ax=ax,title='Category Counts',fontsize=10);
# Check histogram of number of labels each message has
fig, ax = plt.subplots(figsize=(8, 6))
# Y.sum(axis=1).hist(ax=ax,title='Category Counts',fontsize=10);
Y.sum(axis=1).hist(ax=ax);
plt.title('Histogram of label Counts');
plt.xlabel('Number of labels');
plt.ylabel('Counts');
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
    """Replace URLs with a placeholder, keep alphanumeric non-stopword tokens, and lemmatize them."""
    url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# Only consider word which is alphanumeric and not in stopwords
tokens = [ w for w in word_tokenize(text) if (w.isalnum()) &
(w not in nltk.corpus.stopwords.words("english"))
]
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
# Get list of word tokens and counts
vect = CountVectorizer(tokenizer=tokenize)
X_vectorized = vect.fit_transform(X)
word_list = vect.get_feature_names();
count_list = X_vectorized.toarray().sum(axis=0)
top10_index=np.argsort(count_list)[::-1][:10]
top10_word=[word_list[i] for i in top10_index]
top10_count=[count_list[i] for i in top10_index]
top10_token=pd.DataFrame.from_dict(dict(zip(top10_word,top10_count)),orient='index')
top10_token.columns=['count']
top10_token
top10_token.sort_values(by='count',ascending=True,inplace=True)
ax=top10_token.plot.barh()
plt.title('Top 10 word counts in {} messages'.format(X.shape[0]));
plt.ylabel('count');
plt.xlabel('word');
plt.barh(top10_word,top10_count);
plt.title('Top 10 word counts in {} messages'.format(X.shape[0]));
plt.xlabel('count');
plt.ylabel('word');
# Note: when plain lists are passed, the bars are only sorted by the first list (alphabetical order), not by count
vocabulary=pd.DataFrame.from_dict(vect.vocabulary_, orient='index')
vocabulary.columns=['count']
vocabulary_first20=vocabulary['count'].iloc[0:20]
fig, ax = plt.subplots(figsize=(8, 8))
vocabulary_first20.sort_values().plot.barh(ax=ax,title='Counts of first 20 words in vocabulary',fontsize=10);
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# Split data into train and test sets
X_train, X_test, y_train, y_test=train_test_split(X,Y,test_size=0.3, random_state=4)
# train classifier
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# predict on test data
y_pred=pipeline.predict(X_test)
def display_results_0(y_test, y_pred):
Category_name=y_test.columns.values
# Accumulate the test score for each category
score=[]
for i in range(y_pred.shape[1]):
score.append(precision_recall_fscore_support(y_test.iloc[:,i], y_pred[:,i], average='macro')[0:3])
        # Print out the report for each category
print('Category: {} \n'.format(Category_name[i]))
print(classification_report(y_test.iloc[:,i], y_pred[:,i]))
    # Calculate a support-weighted score for each category and sum them up as an average score
weights=(y_test[y_test>0].count())/(y_test[y_test>0].count().sum())
score_weight=[]
for i in range(len(score)):
score_weight.append(pd.DataFrame(score).iloc[i,:].apply(lambda x: x*weights[i]).values)
score_Avg=sum(score_weight)
# print out average score
print('Model Average Score [precision, recall, f1-score]={}'.format(score_Avg))
display_results_0(y_test, y_pred)
###Output
Category: related
precision recall f1-score support
0 0.63 0.45 0.52 1874
1 0.84 0.91 0.87 5941
2 0.21 0.32 0.25 50
avg / total 0.79 0.80 0.79 7865
Category: request
precision recall f1-score support
0 0.89 0.97 0.93 6511
1 0.78 0.44 0.57 1354
avg / total 0.87 0.88 0.87 7865
Category: offer
precision recall f1-score support
0 1.00 1.00 1.00 7827
1 0.00 0.00 0.00 38
avg / total 0.99 1.00 0.99 7865
Category: aid_related
precision recall f1-score support
0 0.76 0.84 0.80 4634
1 0.73 0.62 0.67 3231
avg / total 0.75 0.75 0.75 7865
Category: medical_help
precision recall f1-score support
0 0.93 0.99 0.96 7263
1 0.60 0.14 0.23 602
avg / total 0.91 0.93 0.91 7865
Category: medical_products
precision recall f1-score support
0 0.96 1.00 0.98 7482
1 0.68 0.09 0.16 383
avg / total 0.94 0.95 0.94 7865
Category: search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 7651
1 0.42 0.02 0.04 214
avg / total 0.96 0.97 0.96 7865
Category: security
precision recall f1-score support
0 0.98 1.00 0.99 7729
1 0.33 0.01 0.03 136
avg / total 0.97 0.98 0.97 7865
Category: military
precision recall f1-score support
0 0.97 1.00 0.98 7587
1 0.49 0.07 0.12 278
avg / total 0.95 0.96 0.95 7865
Category: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 7865
avg / total 1.00 1.00 1.00 7865
Category: water
precision recall f1-score support
0 0.97 0.99 0.98 7366
1 0.84 0.48 0.61 499
avg / total 0.96 0.96 0.96 7865
Category: food
precision recall f1-score support
0 0.94 0.99 0.96 6960
1 0.82 0.48 0.61 905
avg / total 0.92 0.93 0.92 7865
Category: shelter
precision recall f1-score support
0 0.93 0.99 0.96 7182
1 0.79 0.24 0.37 683
avg / total 0.92 0.93 0.91 7865
Category: clothing
precision recall f1-score support
0 0.99 1.00 0.99 7733
1 0.82 0.17 0.29 132
avg / total 0.98 0.99 0.98 7865
Category: money
precision recall f1-score support
0 0.98 1.00 0.99 7695
1 0.67 0.02 0.05 170
avg / total 0.97 0.98 0.97 7865
Category: missing_people
precision recall f1-score support
0 0.99 1.00 1.00 7786
1 1.00 0.01 0.02 79
avg / total 0.99 0.99 0.99 7865
Category: refugees
precision recall f1-score support
0 0.97 1.00 0.98 7601
1 0.65 0.06 0.10 264
avg / total 0.96 0.97 0.95 7865
Category: death
precision recall f1-score support
0 0.96 1.00 0.98 7544
1 0.70 0.12 0.20 321
avg / total 0.95 0.96 0.95 7865
Category: other_aid
precision recall f1-score support
0 0.88 0.99 0.93 6854
1 0.49 0.05 0.10 1011
avg / total 0.83 0.87 0.82 7865
Category: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 7369
1 0.20 0.01 0.02 496
avg / total 0.89 0.94 0.91 7865
Category: transport
precision recall f1-score support
0 0.96 1.00 0.98 7495
1 0.60 0.08 0.14 370
avg / total 0.94 0.95 0.94 7865
Category: buildings
precision recall f1-score support
0 0.96 1.00 0.98 7473
1 0.77 0.12 0.21 392
avg / total 0.95 0.95 0.94 7865
Category: electricity
precision recall f1-score support
0 0.98 1.00 0.99 7703
1 0.62 0.05 0.09 162
avg / total 0.97 0.98 0.97 7865
Category: tools
precision recall f1-score support
0 0.99 1.00 1.00 7825
1 0.00 0.00 0.00 40
avg / total 0.99 0.99 0.99 7865
Category: hospitals
precision recall f1-score support
0 0.99 1.00 0.99 7776
1 0.00 0.00 0.00 89
avg / total 0.98 0.99 0.98 7865
Category: shops
precision recall f1-score support
0 0.99 1.00 1.00 7823
1 0.00 0.00 0.00 42
avg / total 0.99 0.99 0.99 7865
Category: aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 7775
1 0.00 0.00 0.00 90
avg / total 0.98 0.99 0.98 7865
Category: other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 7541
1 0.12 0.00 0.01 324
avg / total 0.92 0.96 0.94 7865
Category: weather_related
precision recall f1-score support
0 0.86 0.96 0.91 5715
1 0.84 0.60 0.70 2150
avg / total 0.86 0.86 0.85 7865
Category: floods
precision recall f1-score support
0 0.95 0.99 0.97 7222
1 0.87 0.40 0.55 643
avg / total 0.94 0.95 0.94 7865
Category: storm
precision recall f1-score support
0 0.94 0.99 0.96 7127
1 0.76 0.36 0.49 738
avg / total 0.92 0.93 0.92 7865
Category: fire
precision recall f1-score support
0 0.99 1.00 0.99 7778
1 0.40 0.02 0.04 87
avg / total 0.98 0.99 0.98 7865
Category: earthquake
precision recall f1-score support
0 0.98 0.99 0.98 7158
1 0.89 0.76 0.82 707
avg / total 0.97 0.97 0.97 7865
Category: cold
precision recall f1-score support
0 0.98 1.00 0.99 7714
1 0.72 0.09 0.15 151
avg / total 0.98 0.98 0.97 7865
Category: other_weather
precision recall f1-score support
0 0.95 1.00 0.97 7475
1 0.47 0.06 0.10 390
avg / total 0.93 0.95 0.93 7865
Category: direct_report
precision recall f1-score support
0 0.86 0.96 0.91 6330
1 0.69 0.37 0.49 1535
avg / total 0.83 0.84 0.83 7865
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline2 = Pipeline([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
parameters = {
'clf__estimator__n_estimators': [20, 50]
}
cv = GridSearchCV(pipeline2, param_grid=parameters,scoring='f1_macro',cv=3)
cv.fit(X_train, y_train)
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
###Markdown
7. Test your model. Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_pred = cv.predict(X_test)
def display_results(y_test, y_pred, cv):
Category_name=y_test.columns.values
# Accumulate the test score for each category
score=[]
for i in range(y_pred.shape[1]):
score.append(precision_recall_fscore_support(y_test.iloc[:,i], y_pred[:,i], average='macro')[0:3])
# Print out score of first category
if i==0:
print('Category: {} \n'.format(Category_name[i]))
print(classification_report(y_test.iloc[:,i], y_pred[:,i]))
# Calculate weighted score for each category and sum them up as an average score
weights=(y_test[y_test>0].count())/(y_test[y_test>0].count().sum())
score_weight=[]
for i in range(len(score)):
score_weight.append(pd.DataFrame(score).iloc[i,:].apply(lambda x: x*weights[i]).values)
score_Avg=sum(score_weight)
# Print out the best parameters found by GridSearch
print("\nBest Parameters:", cv.best_params_)
# print out average score
print('Model Average Score [precision, recall, f1-score]={}'.format(score_Avg))
display_results(y_test, y_pred,cv)
###Output
Category: related
precision recall f1-score support
0 0.69 0.37 0.48 1924
1 0.82 0.95 0.88 5941
avg / total 0.79 0.81 0.78 7865
Best Parameters: {'clf__estimator__n_estimators': 20}
Model Average Score [precision, recall, f1-score]=[ 0.78704026 0.66223147 0.68529651]
###Markdown
8. Try improving your model further. Here are a few ideas: try other machine learning algorithms; add other features besides the TF-IDF.
###Code
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return 1
return 0
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
def load_data():
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
# read table from database
df = pd.read_sql_table('DisasterResponse', engine)
# close the connection to the database
conn = engine.raw_connection()
conn.close()
X = df['message']
Y = df.drop(['id', 'message','original', 'genre'], axis = 1)
Y[Y>=2]=0
return X, Y
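# Note on the load step above: `Y[Y>=2] = 0` maps the handful of labels valued 2
# (presumably in the 'related' column) back to 0 so that every target column is binary.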
def build_model():
pipeline = Pipeline([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('clf', MultiOutputClassifier(LinearSVC()))
])
parameters = {
'clf__estimator__C': [10, 50.0]
}
# create grid search object
cv = GridSearchCV(pipeline,param_grid=parameters,scoring='f1_macro',cv=3)
return cv
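# Usage sketch (not executed here; as noted further below, LinearSVC cannot fit the
# single-class 'child_alone' column, so this particular grid search would fail):
# model = build_model()
# model.fit(X_train, y_train)
# y_pred = model.predict(X_test)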
###Output
_____no_output_____
###Markdown
Check whether my pipeline still works after adding the "StartingVerbExtractor" feature. I found that I have to modify StartingVerbExtractor() to make sure a sentence actually yields at least one token before its POS tags are unpacked.
###Code
X,Y=load_data();
vect = CountVectorizer(tokenizer=tokenize)
X_vectorized = vect.fit_transform(X)
X_vectorized.shape
transformer = TfidfTransformer(smooth_idf=False)
tfidf = transformer.fit_transform(X_vectorized)
tfidf.shape
transformer_tag=StartingVerbExtractor()
X_tag = {}  # initialise the container before the loop
for i in range(X.shape[0]):
try:
tag=(transformer_tag.transform(X.iloc[i]))
X_tag[i]=tag
except:
print(i)
print(X.iloc[i])
sentence_list = nltk.sent_tokenize(X.iloc[10487])
sentence_list
pos_tags = nltk.pos_tag(tokenize(sentence_list[2]))
pos_tags
class StartingVerbExtractor_v2(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence)) # part-of-speech tagging
if pos_tags != []:
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return 1
return 0
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
transformer_tag=StartingVerbExtractor_v2()
X_tag=transformer_tag.transform(X)
X_tag.shape
###Output
_____no_output_____
###Markdown
Build pipeline3 with parallel "StartingVerbExtractor" feature and "LinearSVC" estimator
###Code
X, y = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=4)
pipeline3 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor_v2())
])),
('clf', MultiOutputClassifier(LinearSVC()))
])
# This will not work since LinearSVC() can't handle the label "child_alone", which has only 1 class
# train classifier
pipeline3.fit(X_train, y_train)
# Predict for test data
y_pred = pipeline3.predict(X_test)
display_results(y_test, y_pred, model)
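# Possible workaround (sketch, not run): drop target columns that contain only one class
# (here 'child_alone') before fitting an SVC-style estimator, which needs >= 2 classes per output.
# single_class_cols = [c for c in y_train.columns if y_train[c].nunique() < 2]
# pipeline3.fit(X_train, y_train.drop(columns=single_class_cols))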
###Output
_____no_output_____
###Markdown
Now, build model_v2 with pipeline3 (parallel "StartingVerbExtractor" feature and "LinearSVC" estimator) and GridSearchCV
###Code
def build_model_v2():
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor_v2())
])),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
parameters = {
'features__text_pipeline__vect__max_df': (0.5, 1.0),
'clf__estimator__n_estimators': [20, 50]
}
# create grid search object
cv = GridSearchCV(pipeline,param_grid=parameters,scoring='f1_macro',cv=3)
return cv
def main():
X, y = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=4)
model = build_model_v2()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
display_results(y_test, y_pred, model)
main()
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/opt/conda/lib/python3.6/site-packages/sklearn/metrics/classification.py:1137: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no true samples.
'recall', 'true', average, warn_for)
###Markdown
9. Export your model as a pickle file
###Code
filename = 'model.pkl'
pickle.dump(model, open(filename, 'wb'))
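# Reload sketch: the pickled estimator can later be restored and reused, e.g.
# with open(filename, 'rb') as f:
#     loaded_model = pickle.load(f)
# loaded_model.predict(X_test[:5])
# (Here `model` is assumed to be the fitted GridSearchCV/estimator from the steps above.)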
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation. Follow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import re
from nltk.tokenize import word_tokenize
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report
import nltk
import pickle
nltk.download(['punkt','stopwords','wordnet'])
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('InsertTableName', con = engine.connect())
feature_cols = list(df.columns)[4:]
X = df['message'] # Message Column
y = df[feature_cols] # Classification label
y.head()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Split text into words and return the root form of the words
Args:
text (str): the message
Return:
lemm (list of str): a list of the root form of the message words
"""
# Normalize text
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# Tokenize text
words = word_tokenize(text)
# Remove stop words
stop = stopwords.words("english")
words = [t for t in words if t not in stop]
# Lemmatization
lemm = [WordNetLemmatizer().lemmatize(w) for w in words]
return lemm
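# Quick sanity check on a made-up message (expected result, roughly):
# tokenize("We need water and food in Jacmel!") -> ['need', 'water', 'food', 'jacmel']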
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline. This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# Pipeline: Random Forest Classifier
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier(random_state = 42))
])
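# Note: RandomForestClassifier supports multi-label (multi-output) targets natively,
# so it can be used here directly without a MultiOutputClassifier wrapper.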
###Output
_____no_output_____
###Markdown
4. Train pipeline: split data into train and test sets, then train the pipeline.
###Code
# Splitting data
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Fit the Random Forest Classifier
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Prediction: Random Forest Classifier
y_pred = pipeline.predict(X_test)
# Plot classification reports(f1-score, precision, recall) for each feature
for idx, col in enumerate(y_test):
print('Feature: {}'.format(col))
print(classification_report(y_test[col], y_pred[:, idx]))
# compute and plot model accuracy
accuracy = (y_test.values == y_pred).mean()
print('Model accuracy: {}'.format(accuracy))
###Output
Feature: request
precision recall f1-score support
0 0.90 0.98 0.94 5458
1 0.80 0.45 0.57 1096
avg / total 0.88 0.89 0.88 6554
Feature: offer
precision recall f1-score support
0 0.99 1.00 1.00 6520
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 6554
Feature: aid_related
precision recall f1-score support
0 0.71 0.90 0.79 3871
1 0.77 0.47 0.58 2683
avg / total 0.73 0.72 0.71 6554
Feature: medical_help
precision recall f1-score support
0 0.93 1.00 0.96 6060
1 0.57 0.02 0.05 494
avg / total 0.90 0.93 0.89 6554
Feature: medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6223
1 0.65 0.05 0.10 331
avg / total 0.94 0.95 0.93 6554
Feature: search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6379
1 0.36 0.02 0.04 175
avg / total 0.96 0.97 0.96 6554
Feature: security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 0.00 0.00 0.00 127
avg / total 0.96 0.98 0.97 6554
Feature: military
precision recall f1-score support
0 0.97 1.00 0.98 6347
1 0.58 0.05 0.10 207
avg / total 0.96 0.97 0.96 6554
Feature: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
Feature: water
precision recall f1-score support
0 0.95 1.00 0.97 6142
1 0.83 0.22 0.35 412
avg / total 0.94 0.95 0.93 6554
Feature: food
precision recall f1-score support
0 0.93 0.99 0.96 5805
1 0.85 0.40 0.54 749
avg / total 0.92 0.92 0.91 6554
Feature: shelter
precision recall f1-score support
0 0.93 0.99 0.96 5985
1 0.82 0.26 0.39 569
avg / total 0.92 0.93 0.91 6554
Feature: clothing
precision recall f1-score support
0 0.98 1.00 0.99 6445
1 0.75 0.06 0.10 109
avg / total 0.98 0.98 0.98 6554
Feature: money
precision recall f1-score support
0 0.98 1.00 0.99 6402
1 0.86 0.04 0.08 152
avg / total 0.97 0.98 0.97 6554
Feature: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6471
1 0.00 0.00 0.00 83
avg / total 0.97 0.99 0.98 6554
Feature: refugees
precision recall f1-score support
0 0.96 1.00 0.98 6315
1 0.50 0.02 0.04 239
avg / total 0.95 0.96 0.95 6554
Feature: death
precision recall f1-score support
0 0.96 1.00 0.98 6258
1 0.70 0.05 0.10 296
avg / total 0.95 0.96 0.94 6554
Feature: other_aid
precision recall f1-score support
0 0.88 0.99 0.93 5728
1 0.62 0.06 0.11 826
avg / total 0.85 0.88 0.83 6554
Feature: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6158
1 0.14 0.00 0.00 396
avg / total 0.89 0.94 0.91 6554
Feature: transport
precision recall f1-score support
0 0.95 1.00 0.98 6246
1 0.60 0.03 0.06 308
avg / total 0.94 0.95 0.93 6554
Feature: buildings
precision recall f1-score support
0 0.95 1.00 0.97 6223
1 0.68 0.06 0.11 331
avg / total 0.94 0.95 0.93 6554
Feature: electricity
precision recall f1-score support
0 0.98 1.00 0.99 6420
1 0.57 0.03 0.06 134
avg / total 0.97 0.98 0.97 6554
Feature: tools
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6554
Feature: hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6483
1 0.00 0.00 0.00 71
avg / total 0.98 0.99 0.98 6554
Feature: shops
precision recall f1-score support
0 1.00 1.00 1.00 6522
1 0.00 0.00 0.00 32
avg / total 0.99 1.00 0.99 6554
Feature: aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6483
1 0.00 0.00 0.00 71
avg / total 0.98 0.99 0.98 6554
Feature: other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6292
1 0.00 0.00 0.00 262
avg / total 0.92 0.96 0.94 6554
Feature: weather_related
precision recall f1-score support
0 0.85 0.97 0.90 4765
1 0.86 0.55 0.67 1789
avg / total 0.85 0.85 0.84 6554
Feature: floods
precision recall f1-score support
0 0.94 1.00 0.97 6050
1 0.87 0.26 0.40 504
avg / total 0.94 0.94 0.92 6554
Feature: storm
precision recall f1-score support
0 0.94 0.99 0.96 5952
1 0.73 0.39 0.51 602
avg / total 0.92 0.93 0.92 6554
Feature: fire
precision recall f1-score support
0 0.99 1.00 1.00 6495
1 0.00 0.00 0.00 59
avg / total 0.98 0.99 0.99 6554
Feature: earthquake
precision recall f1-score support
0 0.96 0.99 0.98 5940
1 0.89 0.63 0.74 614
avg / total 0.96 0.96 0.96 6554
Feature: cold
precision recall f1-score support
0 0.98 1.00 0.99 6411
1 0.50 0.04 0.08 143
avg / total 0.97 0.98 0.97 6554
Feature: other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6207
1 0.42 0.02 0.04 347
avg / total 0.92 0.95 0.92 6554
Feature: direct_report
precision recall f1-score support
0 0.86 0.97 0.91 5289
1 0.75 0.35 0.47 1265
avg / total 0.84 0.85 0.83 6554
Model accuracy: 0.9471598587558306
###Markdown
6. Improve your model. Use grid search to find better parameters.
###Code
# Show parameters for the pipeline
pipeline.get_params()
# Create Grid search parameters for Random Forest Classifier
parameters = {
'tfidf__use_idf': (True, False),
'clf__n_estimators': [1, 10, 20]
}
gridsearch = GridSearchCV(pipeline, param_grid = parameters)
gridsearch
# Fit the Random Forest Classifier using GridSearch
gridsearch.fit(X_train, y_train)
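# The selected parameter combination can then be inspected with:
# gridsearch.best_params_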
###Output
_____no_output_____
###Markdown
7. Test your model. Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Prediction with GridSearch: Random Forest Classifier
y_pred = gridsearch.predict(X_test)
# Plot classification reports(f1-score, precision, recall) for each feature
for idx, col in enumerate(y_test):
print('Feature: {}'.format(col))
print(classification_report(y_test[col], y_pred[:, idx]))
# compute and plot model accuracy
accuracy = (y_test.values == y_pred).mean()
print('Model accuracy: {}'.format(accuracy))
###Output
Feature: request
precision recall f1-score support
0 0.90 0.98 0.94 5458
1 0.81 0.48 0.60 1096
avg / total 0.89 0.89 0.88 6554
Feature: offer
precision recall f1-score support
0 0.99 1.00 1.00 6520
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 6554
Feature: aid_related
precision recall f1-score support
0 0.73 0.90 0.81 3871
1 0.78 0.52 0.62 2683
avg / total 0.75 0.74 0.73 6554
Feature: medical_help
precision recall f1-score support
0 0.93 1.00 0.96 6060
1 0.58 0.02 0.04 494
avg / total 0.90 0.93 0.89 6554
Feature: medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6223
1 0.80 0.05 0.09 331
avg / total 0.94 0.95 0.93 6554
Feature: search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6379
1 0.55 0.03 0.06 175
avg / total 0.96 0.97 0.96 6554
Feature: security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 0.00 0.00 0.00 127
avg / total 0.96 0.98 0.97 6554
Feature: military
precision recall f1-score support
0 0.97 1.00 0.98 6347
1 0.53 0.04 0.07 207
avg / total 0.96 0.97 0.96 6554
Feature: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
Feature: water
precision recall f1-score support
0 0.95 1.00 0.97 6142
1 0.84 0.28 0.42 412
avg / total 0.95 0.95 0.94 6554
Feature: food
precision recall f1-score support
0 0.93 0.99 0.96 5805
1 0.85 0.46 0.60 749
avg / total 0.92 0.93 0.92 6554
Feature: shelter
precision recall f1-score support
0 0.93 0.99 0.96 5985
1 0.81 0.26 0.40 569
avg / total 0.92 0.93 0.91 6554
Feature: clothing
precision recall f1-score support
0 0.98 1.00 0.99 6445
1 0.90 0.08 0.15 109
avg / total 0.98 0.98 0.98 6554
Feature: money
precision recall f1-score support
0 0.98 1.00 0.99 6402
1 0.86 0.04 0.08 152
avg / total 0.97 0.98 0.97 6554
Feature: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6471
1 0.50 0.01 0.02 83
avg / total 0.98 0.99 0.98 6554
Feature: refugees
precision recall f1-score support
0 0.96 1.00 0.98 6315
1 0.53 0.03 0.06 239
avg / total 0.95 0.96 0.95 6554
Feature: death
precision recall f1-score support
0 0.96 1.00 0.98 6258
1 0.78 0.07 0.13 296
avg / total 0.95 0.96 0.94 6554
Feature: other_aid
precision recall f1-score support
0 0.88 1.00 0.94 5728
1 0.72 0.07 0.12 826
avg / total 0.86 0.88 0.83 6554
Feature: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6158
1 0.40 0.01 0.01 396
avg / total 0.91 0.94 0.91 6554
Feature: transport
precision recall f1-score support
0 0.95 1.00 0.98 6246
1 0.73 0.03 0.05 308
avg / total 0.94 0.95 0.93 6554
Feature: buildings
precision recall f1-score support
0 0.95 1.00 0.98 6223
1 0.85 0.07 0.12 331
avg / total 0.95 0.95 0.93 6554
Feature: electricity
precision recall f1-score support
0 0.98 1.00 0.99 6420
1 1.00 0.01 0.01 134
avg / total 0.98 0.98 0.97 6554
Feature: tools
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6554
Feature: hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6483
1 0.00 0.00 0.00 71
avg / total 0.98 0.99 0.98 6554
Feature: shops
precision recall f1-score support
0 1.00 1.00 1.00 6522
1 0.00 0.00 0.00 32
avg / total 0.99 1.00 0.99 6554
Feature: aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6483
1 0.00 0.00 0.00 71
avg / total 0.98 0.99 0.98 6554
Feature: other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6292
1 0.00 0.00 0.00 262
avg / total 0.92 0.96 0.94 6554
Feature: weather_related
precision recall f1-score support
0 0.85 0.97 0.91 4765
1 0.87 0.54 0.67 1789
avg / total 0.86 0.85 0.84 6554
Feature: floods
precision recall f1-score support
0 0.94 1.00 0.97 6050
1 0.89 0.25 0.39 504
avg / total 0.94 0.94 0.92 6554
Feature: storm
precision recall f1-score support
0 0.94 0.99 0.96 5952
1 0.79 0.36 0.49 602
avg / total 0.92 0.93 0.92 6554
Feature: fire
precision recall f1-score support
0 0.99 1.00 1.00 6495
1 0.00 0.00 0.00 59
avg / total 0.98 0.99 0.99 6554
Feature: earthquake
precision recall f1-score support
0 0.96 0.99 0.98 5940
1 0.89 0.62 0.73 614
avg / total 0.95 0.96 0.95 6554
Feature: cold
precision recall f1-score support
0 0.98 1.00 0.99 6411
1 0.67 0.03 0.05 143
avg / total 0.97 0.98 0.97 6554
Feature: other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6207
1 0.33 0.01 0.02 347
avg / total 0.91 0.95 0.92 6554
Feature: direct_report
precision recall f1-score support
0 0.86 0.97 0.91 5289
1 0.77 0.35 0.48 1265
avg / total 0.84 0.85 0.83 6554
Model accuracy: 0.9484851126901783
###Markdown
8. Try improving your model further. Here are a few ideas: try other machine learning algorithms; add other features besides the TF-IDF. 8.1 AdaBoost Ensemble Classifier
###Code
# Create pipeline with Adaboost Classifier
pipeline_ada = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
# Fit the model
pipeline_ada.fit(X_train, y_train)
# Prediction with Adaboost Classifier
y_pred = pipeline_ada.predict(X_test)
# Plot classification reports(f1-score, precision, recall) for each feature
for idx, col in enumerate(y_test):
print('Feature: {}'.format(col))
print(classification_report(y_test[col], y_pred[:, idx]))
# compute and plot model accuracy
accuracy = (y_test.values == y_pred).mean()
print('Model accuracy: {}'.format(accuracy))
# Show parameters for the pipeline
pipeline_ada.get_params()
# Create Grid search parameters for Adaboost Classifier
parameters_ada = {
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [10,20,50],
}
gridsearch_ada = GridSearchCV(pipeline_ada, param_grid = parameters_ada)
gridsearch_ada
# Fit GridSearchCV for Adaboost model
gridsearch_ada.fit(X_train, y_train)
# Prediction with Adaboost Classifier
y_pred = gridsearch_ada.predict(X_test)
# Plot classification reports(f1-score, precision, recall) for each feature
for idx, col in enumerate(y_test):
print('Feature: {}'.format(col))
print(classification_report(y_test[col], y_pred[:, idx]))
# compute and plot model accuracy
accuracy = (y_test.values == y_pred).mean()
print('Model accuracy: {}'.format(accuracy))
###Output
Feature: request
precision recall f1-score support
0 0.90 0.96 0.93 5422
1 0.73 0.51 0.60 1132
avg / total 0.87 0.88 0.87 6554
Feature: offer
precision recall f1-score support
0 1.00 1.00 1.00 6527
1 0.00 0.00 0.00 27
avg / total 0.99 0.99 0.99 6554
Feature: aid_related
precision recall f1-score support
0 0.73 0.87 0.79 3785
1 0.75 0.56 0.65 2769
avg / total 0.74 0.74 0.73 6554
Feature: medical_help
precision recall f1-score support
0 0.94 0.99 0.96 6035
1 0.58 0.21 0.30 519
avg / total 0.91 0.93 0.91 6554
Feature: medical_products
precision recall f1-score support
0 0.97 0.99 0.98 6233
1 0.69 0.35 0.46 321
avg / total 0.95 0.96 0.95 6554
Feature: search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6381
1 0.62 0.23 0.34 173
avg / total 0.97 0.98 0.97 6554
Feature: security
precision recall f1-score support
0 0.98 1.00 0.99 6436
1 0.19 0.03 0.04 118
avg / total 0.97 0.98 0.97 6554
Feature: military
precision recall f1-score support
0 0.97 0.99 0.98 6334
1 0.58 0.26 0.36 220
avg / total 0.96 0.97 0.96 6554
Feature: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
Feature: water
precision recall f1-score support
0 0.97 0.99 0.98 6124
1 0.78 0.61 0.68 430
avg / total 0.96 0.96 0.96 6554
Feature: food
precision recall f1-score support
0 0.97 0.97 0.97 5812
1 0.77 0.76 0.77 742
avg / total 0.95 0.95 0.95 6554
Feature: shelter
precision recall f1-score support
0 0.95 0.99 0.97 5993
1 0.78 0.48 0.59 561
avg / total 0.94 0.94 0.94 6554
Feature: clothing
precision recall f1-score support
0 0.99 1.00 1.00 6447
1 0.85 0.52 0.65 107
avg / total 0.99 0.99 0.99 6554
Feature: money
precision recall f1-score support
0 0.98 1.00 0.99 6393
1 0.55 0.22 0.31 161
avg / total 0.97 0.98 0.97 6554
Feature: missing_people
precision recall f1-score support
0 0.99 1.00 1.00 6484
1 0.65 0.24 0.35 70
avg / total 0.99 0.99 0.99 6554
Feature: refugees
precision recall f1-score support
0 0.98 1.00 0.99 6351
1 0.61 0.24 0.35 203
avg / total 0.96 0.97 0.97 6554
Feature: death
precision recall f1-score support
0 0.97 0.99 0.98 6243
1 0.80 0.45 0.57 311
avg / total 0.96 0.97 0.96 6554
Feature: other_aid
precision recall f1-score support
0 0.88 0.98 0.93 5716
1 0.47 0.11 0.18 838
avg / total 0.83 0.87 0.83 6554
Feature: infrastructure_related
precision recall f1-score support
0 0.94 0.99 0.97 6124
1 0.55 0.11 0.18 430
avg / total 0.92 0.94 0.92 6554
Feature: transport
precision recall f1-score support
0 0.96 1.00 0.98 6285
1 0.66 0.14 0.24 269
avg / total 0.95 0.96 0.95 6554
Feature: buildings
precision recall f1-score support
0 0.97 0.99 0.98 6217
1 0.75 0.37 0.49 337
avg / total 0.96 0.96 0.95 6554
Feature: electricity
precision recall f1-score support
0 0.98 1.00 0.99 6425
1 0.60 0.19 0.28 129
avg / total 0.98 0.98 0.98 6554
Feature: tools
precision recall f1-score support
0 0.99 1.00 1.00 6506
1 0.33 0.02 0.04 48
avg / total 0.99 0.99 0.99 6554
Feature: hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6493
1 0.27 0.10 0.14 61
avg / total 0.98 0.99 0.99 6554
Feature: shops
precision recall f1-score support
0 0.99 1.00 1.00 6520
1 0.17 0.03 0.05 34
avg / total 0.99 0.99 0.99 6554
Feature: aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6468
1 0.25 0.06 0.09 86
avg / total 0.98 0.99 0.98 6554
Feature: other_infrastructure
precision recall f1-score support
0 0.96 0.99 0.98 6265
1 0.40 0.09 0.15 289
avg / total 0.94 0.95 0.94 6554
Feature: weather_related
precision recall f1-score support
0 0.87 0.96 0.91 4731
1 0.86 0.64 0.73 1823
avg / total 0.87 0.87 0.86 6554
Feature: floods
precision recall f1-score support
0 0.96 1.00 0.98 6014
1 0.92 0.54 0.68 540
avg / total 0.96 0.96 0.95 6554
Feature: storm
precision recall f1-score support
0 0.95 0.99 0.97 5926
1 0.79 0.47 0.59 628
avg / total 0.93 0.94 0.93 6554
Feature: fire
precision recall f1-score support
0 0.99 1.00 0.99 6477
1 0.61 0.25 0.35 77
avg / total 0.99 0.99 0.99 6554
Feature: earthquake
precision recall f1-score support
0 0.98 0.99 0.98 5934
1 0.89 0.79 0.83 620
avg / total 0.97 0.97 0.97 6554
Feature: cold
precision recall f1-score support
0 0.98 1.00 0.99 6428
1 0.62 0.18 0.28 126
avg / total 0.98 0.98 0.98 6554
Feature: other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6191
1 0.47 0.07 0.13 363
avg / total 0.92 0.94 0.92 6554
Feature: direct_report
precision recall f1-score support
0 0.86 0.96 0.91 5262
1 0.68 0.35 0.47 1292
avg / total 0.82 0.84 0.82 6554
Model accuracy: 0.9513099960765509
###Markdown
9. Export your model as a pickle file
###Code
# Export model as pickle file
file_name = 'classifier.pkl'
with open (file_name, 'wb') as file:
pickle.dump(gridsearch_ada, file)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation. Follow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import pandas as pd
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql('SELECT * FROM message', engine)
X = df.message
y = df.iloc[:, 4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
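# Note: this tokenizer lemmatizes and lower-cases each token but does not remove
# stop words or punctuation tokens; the TF-IDF step later down-weights very common terms.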
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline. This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# First choose the KNN classifier, which is suitable for this situation
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier())),
])
###Output
_____no_output_____
###Markdown
4. Train pipeline: split data into train and test sets, then train the pipeline.
###Code
# split the train and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size =.3,random_state = 42)
# train the model and make predictions
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your model. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# build a function to see if the prediction matches the y_test data
from sklearn.metrics import classification_report
'''
A helper function to get the average precision, recall and f1 scores across all columns
'''
def get_scores(y_pred,y_test):
result = []
for i in range(y_test.shape[1]):
test_value = y_test.iloc[:, i]
pred_value = [a[i] for a in y_pred]
result.append(list(classification_report(test_value, pred_value,output_dict = True)['0'].values())[:3])
return pd.DataFrame(result,columns=['precision','recall','f1_score']).mean()
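# Explanatory note: classification_report(...)['0'] is the row for label 0 - the majority
# class of each column - so these averaged scores look high even when the rare positive
# class is predicted poorly.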
KNeighborsClassifier_score = get_scores(y_pred,y_test)
KNeighborsClassifier_score
###Output
_____no_output_____
###Markdown
6. Improve your model. Use grid search to find better parameters.
###Code
# use GridSearchCV to test several parameter combinations
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier())),
])
# first get the parameters of the pipeline
pipeline.get_params()
# set 2*3*3 = 18 parameter combinations
from sklearn.model_selection import GridSearchCV
parameters = {
'vect__max_df':[0.5,1.0],
'clf__estimator__n_neighbors':[3,5,7],
'clf__estimator__leaf_size':[20,30,40],
}
# to run quicker, use 2 cross-validation folds and no limit on parallel jobs (n_jobs=-1)
cv = GridSearchCV(pipeline, param_grid=parameters, cv = 2, n_jobs = -1)
###Output
_____no_output_____
###Markdown
7. Test your model. Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# fit the train data
cv.fit(X_train, y_train)
# get the pred value
y_pred = cv.predict(X_test)
# check the finest parameter combinations
cv.best_params_
# get the three scores of the best combination
get_scores(y_pred,y_test)
###Output
_____no_output_____
###Markdown
We can see that after tuning 3 parameters the average f1 score increased by only about 0.0017 (from 0.9505 to 0.9522), which is not a big improvement. 8. Try improving your model further. Here are a few ideas: try other machine learning algorithms; add other features besides the TF-IDF.
###Code
#first, try to add a text length feature and a starting verb feature
import nltk
from sklearn.pipeline import FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
class TextLengthExtractor(BaseEstimator, TransformerMixin):
def textlength(self, text):
return len(tokenize(text))
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.textlength)
return pd.DataFrame(X_tagged)
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
# add the above two features to the pipeline
pipeline_union = Pipeline([
('features', FeatureUnion([
('nlp_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize,max_df =0.5)),
('tfidf', TfidfTransformer())
])),
('txt_len', TextLengthExtractor()),
('start_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(KNeighborsClassifier(leaf_size =20)))
])
# fit the data, and make predictions
pipeline_union.fit(X_train, y_train)
y_pred_union = pipeline_union.predict(X_test)
# get the score
get_scores(y_pred_union,y_test)
###Output
_____no_output_____
###Markdown
After adding the new features, the f1 score did not improve. Next, I will try another classifier.
###Code
# try randomforestclassifier
from sklearn.ensemble import RandomForestClassifier
pipeline_rf = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
# use default settings and run the fitting and prediction
pipeline_rf.fit(X_train, y_train)
y_pred_rf = pipeline_rf.predict(X_test)
# get the scores
get_scores(y_pred_rf,y_test)
###Output
_____no_output_____
###Markdown
The average f1_score increased by about 0.003, to 0.9551. Next, I will try a Naive Bayes classifier.
###Code
# try naive bayes classifier
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import FunctionTransformer
pipeline_nb = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('todense',FunctionTransformer(lambda x: x.todense(), accept_sparse=True)),
('clf', MultiOutputClassifier(GaussianNB())),
])
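# The 'todense' step converts the sparse TF-IDF matrix to a dense array,
# which GaussianNB requires (it does not accept sparse input).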
pipeline_nb.fit(X_train, y_train)
y_pred_nb = pipeline_nb.predict(X_test)
get_scores(y_pred_nb,y_test)
###Output
_____no_output_____
###Markdown
It seems GaussianNB is not as good as the two classifiers above. RandomForestClassifier has the highest f1 score. Finally, I will use GridSearchCV to tune several parameters to improve the RandomForestClassifier.
###Code
# set the pipeline
pipeline_randomfo = Pipeline([
('features', FeatureUnion([
('nlp_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, max_df = 0.5)),
('tfidf', TfidfTransformer())
])),
('txt_len', TextLengthExtractor()),
('start_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier(min_samples_leaf =1,n_estimators = 1000)))
])
# get parameters
pipeline_randomfo.get_params()
# set several parameter combinations.
# Because RandomForestClassifier is very time-consuming, I can't try too many parameters
parameters = {
'features__nlp_pipeline__vect__max_features': (5000, 10000),
'features__nlp_pipeline__tfidf__use_idf': (True, False),
'clf__estimator__min_samples_split': [2, 4],
}
# set the GridSearchCV, in order to run quicker, set cv = 2
cv_randomfo = GridSearchCV(pipeline_randomfo, param_grid=parameters, cv = 2, n_jobs = -1)
# fit the train data
cv_randomfo.fit(X_train, y_train)
# get the pred value
y_pred_randomfo = cv_randomfo.predict(X_test)
cv_randomfo.best_params_
get_scores(y_pred_randomfo,y_test)
# set the parameters and rerun the ML process using randomforest
pipeline_randomforest = Pipeline([
('features', FeatureUnion([
('nlp_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize,
max_df = 0.5,
max_features = 5000)),
('tfidf', TfidfTransformer(use_idf = True))
])),
('txt_len', TextLengthExtractor()),
('start_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier(min_samples_leaf =1,
n_estimators = 1000,
min_samples_split = 4)))
])
pipeline_randomforest.fit(X_train, y_train)
y_pred_randomforest = pipeline_randomforest.predict(X_test)
get_scores(y_pred_randomforest,y_test)
###Output
/Users/xuhao3/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
###Markdown
After tuning several parameters, the f1 score increased by another 0.0015. I will stop improving my model here because: 1. the f1 score is already high, so a big improvement is difficult; 2. every single RandomForestClassifier run on this dataset costs about an hour. In order to save the model to a pickle file, I will rerun the ML process again.
###Code
category_names = list(y.columns)
def evaluate_model(model, X_test, y_test, category_names):
y_pred = model.predict(X_test)
result = []
for i in range(y_test.shape[1]):
test_value = y_test.iloc[:, i]
pred_value = [a[i] for a in y_pred]
result.append(list(classification_report(test_value, pred_value,output_dict = True)['0'].values())[:3])
df = pd.DataFrame(result,columns=['precision','recall','f1_score'])
df['indicator'] = pd.Series(category_names)
print(df)
print('The average precision, recall and f1_score are {},{},{}'.
format(df.precision.mean(),df.recall.mean(),df.f1_score.mean()))
category_names = list(y.columns)
result = []
for i in range(y_test.shape[1]):
test_value = y_test.iloc[:, i]
pred_value = [a[i] for a in y_pred_randomforest]
result.append(list(classification_report(test_value, pred_value,output_dict = True)['0'].values())[:3])
df3 = pd.DataFrame(result,columns=['precision','recall','f1_score'])
df3['indicator'] = category_names
df3
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
import pickle
pickle.dump(pipeline_randomforest, open('model_randomforest.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`. Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
###Code
%%file train_classifier.py
import sys
from sqlalchemy import create_engine
import pandas as pd
import pickle
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report
from sklearn.base import BaseEstimator, TransformerMixin
def load_data(database_filepath):
'''
Load dataframe from a database
'''
engine = create_engine('sqlite:///'+database_filepath)
df = pd.read_sql('SELECT * FROM message', engine)
X = df.message
y = df.iloc[:, 4:]
category_names = list(y.columns)
return X, y, category_names
def tokenize(text):
'''
Tokenize and lemmatize the text
'''
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
class TextLengthExtractor(BaseEstimator, TransformerMixin):
'''
A class to get the length of each tokenized text, and apply the function to all cells
'''
def textlength(self, text):
return len(tokenize(text))
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.textlength)
return pd.DataFrame(X_tagged)
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
'''
A class to see if the first letter is a verb, and apply the function to all cells
'''
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
def build_model():
'''
Build the model
'''
pipeline_randomforest = Pipeline([
('features', FeatureUnion([
('nlp_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize,
max_df = 0.5,
max_features = 5000)),
('tfidf', TfidfTransformer(use_idf = True))
])),
('txt_len', TextLengthExtractor()),
('start_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier(min_samples_leaf =1,
n_estimators = 1000,
min_samples_split = 4)))
])
return pipeline_randomforest
def evaluate_model(model, X_test, y_test, category_names):
'''
use the model to make prediction, and print out every column's precision, recall and fi scores
'''
y_pred = model.predict(X_test)
result = []
for i in range(y_test.shape[1]):
test_value = y_test.iloc[:, i]
pred_value = [a[i] for a in y_pred]
result.append(list(classification_report(test_value, pred_value,output_dict = True)['0'].values())[:3])
df = pd.DataFrame(result,columns=['precision','recall','f1_score'])
df['indicator'] = pd.Series(category_names)
print(df)
print('The average precision, recall and f1_score are {},{},{}'.
format(df.precision.mean(),df.recall.mean(),df.f1_score.mean()))
def save_model(model, model_filepath):
'''
Save the model to a .pkl file
'''
pickle.dump(model, open(model_filepath, 'wb'))
def main():
if len(sys.argv) == 3:
database_filepath, model_filepath = sys.argv[1:]
print('Loading data...\n DATABASE: {}'.format(database_filepath))
X, y, category_names = load_data(database_filepath)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print('Building model...')
model = build_model()
print('Training model...')
model.fit(X_train, y_train)
print('Evaluating model...')
evaluate_model(model, X_test, y_test, category_names)
print('Saving model...\n MODEL: {}'.format(model_filepath))
save_model(model, model_filepath)
print('Trained model saved!')
else:
print('Please provide the filepath of the disaster messages database '\
'as the first argument and the filepath of the pickle file to '\
'save the model to as the second argument. \n\nExample: python '\
'train_classifier.py ../data/DisasterResponse.db classifier.pkl')
if __name__ == '__main__':
main()
!python train_classifier.py DisasterResponse.db model_randomforest.pkl
###Output
Loading data...
DATABASE: DisasterResponse.db
Building model...
Training model...
Evaluating model...
/Users/xuhao3/opt/anaconda3/lib/python3.7/site-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
precision recall f1_score indicator
0 0.718941 0.286526 0.409750 related
1 0.903472 0.980699 0.940502 request
2 0.995423 1.000000 0.997706 offer
3 0.784237 0.850227 0.815900 aid_related
4 0.929373 0.995028 0.961081 medical_help
5 0.956890 0.998594 0.977297 medical_products
6 0.974506 0.998037 0.986131 search_and_rescue
7 0.981298 0.999417 0.990274 security
8 0.972616 0.999213 0.985735 military
9 1.000000 1.000000 1.000000 child_alone
10 0.959103 0.996934 0.977653 water
11 0.959992 0.981908 0.970826 food
12 0.949529 0.991634 0.970125 shelter
13 0.986417 0.999225 0.992779 clothing
14 0.978626 1.000000 0.989198 money
15 0.990084 1.000000 0.995017 missing_people
16 0.972430 0.998624 0.985353 refugees
17 0.969979 0.995824 0.982732 death
18 0.874713 0.998251 0.932408 other_aid
19 0.933435 0.999796 0.965476 infrastructure_related
20 0.961014 0.999401 0.979832 transport
21 0.954160 0.996981 0.975101 buildings
22 0.978431 1.000000 0.989098 electricity
23 0.992372 1.000000 0.996172 tools
24 0.988940 1.000000 0.994439 hospitals
25 0.994088 1.000000 0.997035 shops
26 0.985698 1.000000 0.992797 aid_centers
27 0.957276 0.999801 0.978077 other_infrastructure
28 0.900577 0.950450 0.924842 weather_related
29 0.957882 0.995622 0.976388 floods
30 0.961610 0.981049 0.971232 storm
31 0.989508 1.000000 0.994726 fire
32 0.980869 0.990550 0.985686 earthquake
33 0.983519 0.998638 0.991021 cold
34 0.951597 0.998795 0.974625 other_weather
35 0.872712 0.979457 0.923008 direct_report
The average precision, recall and f1_score are 0.95003660154207,0.9711300242218263,0.9575006509041314
Saving model...
MODEL: model_randomforest.pkl
Trained model saved!
###Markdown
ML Pipeline Preparation. Follow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and Y
###Code
# import libraries
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
# download necessary NLTK data
import nltk
nltk.download(['punkt', 'wordnet', 'stopwords'])
# import statements
import re
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
# import ML modules
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report, precision_recall_fscore_support
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
import pickle
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# load data from database
engine = create_engine('sqlite:///DisasterResponseYT.db')
df = pd.read_sql_table('DisasterResponseMaster', engine)
X = df.message
Y = df.iloc[:,4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
# url regular expression and english stop words
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
stop_words = stopwords.words("english")
def tokenize(text):
# get list of all urls using regex
detected_urls = re.findall(url_regex, text)
# replace each url in text string with placeholder
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# remove punctuation characters
text = re.sub(r"[^a-zA-Z0-9]", " ", text)
# tokenize text
tokens = word_tokenize(text)
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
if tok not in stop_words:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
for message in X[:5]:
tokens = tokenize(message)
print(message)
print(tokens, '\n')
###Output
Weather update - a cold front from Cuba that could pass over Haiti
['weather', 'update', 'cold', 'front', 'cuba', 'could', 'pas', 'haiti']
Is the Hurricane over or is it not over
['is', 'hurricane']
Looking for someone but no name
['looking', 'someone', 'name']
UN reports Leogane 80-90 destroyed. Only Hospital St. Croix functioning. Needs supplies desperately.
['un', 'report', 'leogane', '80', '90', 'destroyed', 'only', 'hospital', 'st', 'croix', 'functioning', 'needs', 'supply', 'desperately']
says: west side of Haiti, rest of the country today and tonight
['say', 'west', 'side', 'haiti', 'rest', 'country', 'today', 'tonight']
###Markdown
3. Build a machine learning pipeline. This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline: split data into train and test sets, then train the pipeline.
###Code
# train test split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 23)
# train classifier
pipeline.fit(X_train, Y_train)
# predict on test data
Y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your model. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
#accuracy = (Y_pred == Y_test).mean()
accuracy = (Y_pred == Y_test).values.mean()
print (accuracy)
print(classification_report(Y_test.values, Y_pred, target_names=Y.columns.values))
###Output
precision recall f1-score support
related 0.85 0.92 0.88 4979
request 0.80 0.45 0.57 1116
offer 0.00 0.00 0.00 34
aid_related 0.73 0.61 0.66 2696
medical_help 0.59 0.09 0.16 509
medical_products 0.74 0.10 0.18 314
search_and_rescue 0.67 0.05 0.09 171
security 0.00 0.00 0.00 105
military 0.61 0.14 0.23 208
child_alone 0.00 0.00 0.00 0
water 0.81 0.39 0.52 409
food 0.85 0.59 0.69 736
shelter 0.83 0.26 0.39 575
clothing 1.00 0.16 0.27 76
money 0.80 0.03 0.05 154
missing_people 0.33 0.01 0.02 88
refugees 0.50 0.04 0.07 216
death 0.76 0.17 0.28 330
other_aid 0.58 0.06 0.11 858
infrastructure_related 0.23 0.01 0.01 408
transport 0.71 0.09 0.15 316
buildings 0.77 0.10 0.18 338
electricity 0.71 0.08 0.15 121
tools 0.00 0.00 0.00 32
hospitals 0.00 0.00 0.00 56
shops 0.00 0.00 0.00 27
aid_centers 0.00 0.00 0.00 68
other_infrastructure 0.33 0.01 0.01 292
weather_related 0.83 0.61 0.70 1789
floods 0.88 0.36 0.51 543
storm 0.74 0.44 0.55 579
fire 0.67 0.03 0.06 64
earthquake 0.88 0.75 0.81 603
cold 0.61 0.10 0.17 114
other_weather 0.44 0.02 0.04 351
direct_report 0.73 0.35 0.47 1230
micro avg 0.81 0.50 0.62 20505
macro avg 0.56 0.19 0.25 20505
weighted avg 0.74 0.50 0.55 20505
samples avg 0.64 0.45 0.48 20505
###Markdown
6. Improve your model. Use grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__min_samples_split': [2, 3],
}
cv = GridSearchCV(pipeline, param_grid = parameters, n_jobs = 4, verbose = 2)
# train classifier
cv.fit(X_train, Y_train)
# find the best model
optimised_model = cv.best_estimator_
# predict on test data using the best model
Y_pred = optimised_model.predict(X_test)
###Output
_____no_output_____
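###Markdown
Note that this grid search relies on the default scorer (the classifier's own `score` method). A hedged variant that optimises micro-averaged F1 instead, which arguably suits the imbalanced labels better, could look like the sketch below (`cv_f1` is a hypothetical name; the sketch is not run here).
###Code
# Sketch: score the same grid search on micro-averaged F1 rather than the default score
cv_f1 = GridSearchCV(pipeline, param_grid=parameters, scoring='f1_micro', n_jobs=4, verbose=2)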
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
accuracy = (Y_pred == Y_test).values.mean()
print (accuracy)
print(classification_report(Y_test.values, Y_pred, target_names=Y.columns.values))
###Output
precision recall f1-score support
related 0.84 0.95 0.89 4979
request 0.84 0.50 0.63 1116
offer 0.00 0.00 0.00 34
aid_related 0.75 0.68 0.72 2696
medical_help 0.75 0.08 0.15 509
medical_products 0.76 0.11 0.19 314
search_and_rescue 0.86 0.04 0.07 171
security 0.00 0.00 0.00 105
military 0.80 0.06 0.11 208
child_alone 0.00 0.00 0.00 0
water 0.88 0.38 0.53 409
food 0.85 0.64 0.73 736
shelter 0.86 0.36 0.51 575
clothing 0.89 0.11 0.19 76
money 1.00 0.04 0.08 154
missing_people 0.00 0.00 0.00 88
refugees 0.82 0.04 0.08 216
death 0.75 0.08 0.15 330
other_aid 0.57 0.03 0.06 858
infrastructure_related 0.00 0.00 0.00 408
transport 0.79 0.07 0.13 316
buildings 0.78 0.11 0.20 338
electricity 0.56 0.04 0.08 121
tools 0.00 0.00 0.00 32
hospitals 1.00 0.02 0.04 56
shops 0.00 0.00 0.00 27
aid_centers 0.00 0.00 0.00 68
other_infrastructure 0.50 0.00 0.01 292
weather_related 0.84 0.69 0.76 1789
floods 0.90 0.44 0.59 543
storm 0.76 0.56 0.64 579
fire 1.00 0.05 0.09 64
earthquake 0.88 0.78 0.83 603
cold 0.72 0.11 0.20 114
other_weather 0.60 0.03 0.05 351
direct_report 0.81 0.39 0.52 1230
micro avg 0.82 0.53 0.65 20505
macro avg 0.62 0.21 0.26 20505
weighted avg 0.77 0.53 0.58 20505
samples avg 0.67 0.48 0.51 20505
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
# train classifier
pipeline2.fit(X_train, Y_train)
# predict on test data
Y_pred = pipeline2.predict(X_test)
#accuracy = (Y_pred == Y_test).mean()
accuracy = (Y_pred == Y_test).values.mean()
print (accuracy)
print(classification_report(Y_test.values, Y_pred, target_names=Y.columns.values))
###Output
precision recall f1-score support
related 0.80 0.97 0.88 4979
request 0.77 0.52 0.62 1116
offer 0.00 0.00 0.00 34
aid_related 0.76 0.63 0.69 2696
medical_help 0.61 0.26 0.36 509
medical_products 0.68 0.37 0.48 314
search_and_rescue 0.62 0.20 0.30 171
security 0.21 0.04 0.06 105
military 0.61 0.34 0.43 208
child_alone 0.00 0.00 0.00 0
water 0.70 0.62 0.66 409
food 0.81 0.72 0.76 736
shelter 0.80 0.58 0.67 575
clothing 0.71 0.36 0.47 76
money 0.67 0.33 0.44 154
missing_people 0.63 0.14 0.22 88
refugees 0.57 0.28 0.37 216
death 0.76 0.43 0.55 330
other_aid 0.57 0.16 0.25 858
infrastructure_related 0.40 0.10 0.16 408
transport 0.65 0.22 0.33 316
buildings 0.67 0.38 0.48 338
electricity 0.60 0.28 0.38 121
tools 0.09 0.03 0.05 32
hospitals 0.19 0.09 0.12 56
shops 0.33 0.04 0.07 27
aid_centers 0.17 0.04 0.07 68
other_infrastructure 0.44 0.09 0.14 292
weather_related 0.84 0.67 0.75 1789
floods 0.87 0.56 0.68 543
storm 0.74 0.48 0.58 579
fire 0.61 0.27 0.37 64
earthquake 0.87 0.77 0.82 603
cold 0.56 0.31 0.40 114
other_weather 0.40 0.13 0.19 351
direct_report 0.72 0.43 0.54 1230
micro avg 0.76 0.60 0.67 20505
macro avg 0.57 0.33 0.40 20505
weighted avg 0.73 0.60 0.63 20505
samples avg 0.66 0.53 0.54 20505
###Markdown
9. Export your model as a pickle file
###Code
with open('MLclassifier.pkl', 'wb') as file:
pickle.dump(optimised_model, file)
###Output
_____no_output_____
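###Markdown
For completeness, a hedged sketch of how the exported pickle could be loaded back and used for prediction; the file name is taken from the cell above, while the sample message is made up for illustration.
###Code
# Sketch: reload the saved model and classify a new message
with open('MLclassifier.pkl', 'rb') as file:
    loaded_model = pickle.load(file)
# predict() expects an iterable of raw message strings, as in the pipeline above
print(loaded_model.predict(['We need water and food after the storm']))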
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# download necessary NLTK data
import nltk
nltk.download(['punkt', 'wordnet','stopwords'])
import sqlite3
# import libraries
import re
import numpy as np
import pandas as pd
import pickle
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.neighbors import KNeighborsClassifier
# load data from database
def load_data(database_filepath):
'''
Function that takes a table from database and returns array of messages and categories
Input: database_filepath: The path of sql database
Output: X: Messages, y: Categories, category_names: Labels for categories
'''
conn = sqlite3.connect(database_filepath)
df = pd.read_sql("SELECT * from tidy_dataset", conn)
conn.close()
# define features and label arrays
X = df.iloc[:,1].values
y = df.iloc[:,3:].values
category_names = list(df.iloc[:,3:].columns)
return X, y, category_names
# Function from ETL script
def clean_data():
'''
Function that reads messages and categories files, merges and cleans the data and loads it to sql database
Input: -
Output: loads tidy data in database
'''
# read in file
messages = pd.read_csv('messages.csv')
categories = pd.read_csv('categories.csv')
# merge datasets
df = messages.merge(categories, how='outer',on='id')
# create a dataframe of the 36 individual category columns
categories = df['categories'].str.split(pat=';',expand=True)
# select the first row of the categories dataframe
row = categories.iloc[0,]
# extract a list of new column names for categories.
category_colnames = list(map(lambda x: x[:-2], row))
# rename the columns of `categories`
categories.columns = category_colnames
# set each value to be the last character of the string
for column in categories:
categories[column] = categories[column].apply(lambda x: x[-1])
# convert column from string to numeric
categories[column] = pd.to_numeric(categories[column])
# drop the original, categories column from `df`
df.drop(labels=['categories','original'], axis=1, inplace=True)
# concatenate the original dataframe with the new `categories` dataframe
df = pd.concat([df,categories], axis=1)
# Converting category columns to numeric
df.iloc[:,3:] = df.iloc[:,3:].apply(pd.to_numeric)
# drop duplicates
df.drop_duplicates(inplace=True)
# removing rows labelled as 2
df.drop(df[df['related']==2].index, inplace=True)
# load to database
engine = create_engine('sqlite:///disaster_response.db')
df.to_sql('tidy_dataset', engine, if_exists='replace', index=False)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text): # This function will be used in the pipeline
'''
Function that takes a text, cleans and lemmatizes it and returns clean tokens
Input: text: array of messages
Output: clean tokens : clean and lemmatized tokens
'''
# Remove punctuation
text = re.sub(r"[^a-zA-Z0-9]"," ",text)
# tokenize text
tokens = word_tokenize(text)
# initiate stop words
stop_words = stopwords.words("english")
# remove stop words
tokens = [t for t in tokens if t not in stop_words]
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
# testing load_data and tokenize functions
test_X, test_y, test_labels = load_data('disaster_response.db')
for message in test_X[:5]:
tokens = tokenize(message)
print(message)
print(tokens, '\n')
###Output
Weather update - a cold front from Cuba that could pass over Haiti
['weather', 'update', 'cold', 'front', 'cuba', 'could', 'pas', 'haiti']
Is the Hurricane over or is it not over
['is', 'hurricane']
Looking for someone but no name
['looking', 'someone', 'name']
UN reports Leogane 80-90 destroyed. Only Hospital St. Croix functioning. Needs supplies desperately.
['un', 'report', 'leogane', '80', '90', 'destroyed', 'only', 'hospital', 'st', 'croix', 'functioning', 'needs', 'supply', 'desperately']
says: west side of Haiti, rest of the country today and tonight
['say', 'west', 'side', 'haiti', 'rest', 'country', 'today', 'tonight']
###Markdown
3. Build a machine learning pipelineThis machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# Build a pipeline, noted classes are imbalanced, used n_jobs = -1 to improve processing speeds
pipeline = Pipeline([('vect', CountVectorizer(tokenizer=tokenize)),('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier(class_weight='balanced',n_jobs=-1)))])
###Output
_____no_output_____
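###Markdown
Because the category labels are heavily imbalanced (which is what motivates `class_weight='balanced'` above), here is a quick sketch of how the positive-label counts per category could be inspected, reusing the `test_y`/`test_labels` arrays loaded earlier (an assumption of this sketch, not part of the original run).
###Code
# Sketch: count positive labels per category to see the imbalance
label_counts = pd.Series(test_y.sum(axis=0), index=test_labels).sort_values()
print(label_counts.head(10))  # rarest categories
print(label_counts.tail(5))   # most frequent categories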
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# Getting X, y from load_data()
X,y,category_names = load_data('disaster_response.db')
# Perform train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2)
# Train classifier
pipeline.fit(X_train,y_train)
# Predict on test data
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Evaluating the model
class_report = classification_report(y_test, y_pred, target_names=category_names)
print(class_report)
###Output
precision recall f1-score support
related 0.86 0.94 0.90 4004
request 0.80 0.51 0.63 878
offer 0.00 0.00 0.00 23
aid_related 0.75 0.72 0.74 2164
medical_help 0.77 0.10 0.17 424
medical_products 0.78 0.09 0.15 244
search_and_rescue 1.00 0.01 0.03 140
security 0.00 0.00 0.00 95
military 0.33 0.02 0.05 163
child_alone 0.00 0.00 0.00 0
water 0.77 0.34 0.47 323
food 0.85 0.51 0.64 556
shelter 0.87 0.28 0.42 452
clothing 0.86 0.07 0.13 83
money 0.80 0.03 0.06 125
missing_people 1.00 0.04 0.07 57
refugees 0.00 0.00 0.00 164
death 0.88 0.15 0.25 248
other_aid 0.77 0.07 0.12 692
infrastructure_related 0.33 0.00 0.01 333
transport 1.00 0.03 0.07 233
buildings 0.77 0.09 0.17 244
electricity 0.50 0.01 0.02 104
tools 0.00 0.00 0.00 31
hospitals 0.00 0.00 0.00 62
shops 0.00 0.00 0.00 18
aid_centers 0.00 0.00 0.00 56
other_infrastructure 0.00 0.00 0.00 226
weather_related 0.86 0.70 0.77 1458
floods 0.92 0.27 0.42 439
storm 0.82 0.40 0.54 495
fire 0.00 0.00 0.00 57
earthquake 0.90 0.70 0.78 490
cold 0.20 0.01 0.02 103
other_weather 0.00 0.00 0.00 267
direct_report 0.77 0.40 0.53 1012
micro avg 0.83 0.52 0.64 16463
macro avg 0.53 0.18 0.23 16463
weighted avg 0.76 0.52 0.56 16463
samples avg 0.66 0.47 0.51 16463
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# Checking pipeline hyperparameters
pipeline.get_params().keys()
# hyperparameter tuning, using f1 score as scoring method
parameters = {'clf__estimator__max_depth': [3,4,5],
'clf__estimator__min_samples_split': [3,5,7]}
cv = GridSearchCV(pipeline, param_grid=parameters,scoring = 'f1_micro')
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Train grid search classifier
cv.fit(X_train,y_train)
# Predict on test data
y_pred = cv.predict(X_test)
# Evaluate the model
class_report = classification_report(y_test, y_pred, target_names=category_names)
print(class_report)
###Output
precision recall f1-score support
related 0.92 0.66 0.77 4004
request 0.54 0.72 0.61 878
offer 0.00 0.00 0.00 23
aid_related 0.72 0.63 0.67 2164
medical_help 0.38 0.54 0.44 424
medical_products 0.25 0.56 0.35 244
search_and_rescue 0.20 0.44 0.27 140
security 0.09 0.21 0.12 95
military 0.33 0.71 0.45 163
child_alone 0.00 0.00 0.00 0
water 0.39 0.78 0.52 323
food 0.53 0.77 0.62 556
shelter 0.39 0.72 0.50 452
clothing 0.24 0.48 0.32 83
money 0.22 0.55 0.31 125
missing_people 0.20 0.35 0.25 57
refugees 0.20 0.49 0.29 164
death 0.38 0.64 0.47 248
other_aid 0.33 0.50 0.40 692
infrastructure_related 0.19 0.49 0.28 333
transport 0.19 0.44 0.27 233
buildings 0.31 0.61 0.41 244
electricity 0.21 0.49 0.30 104
tools 0.12 0.10 0.11 31
hospitals 0.23 0.40 0.29 62
shops 0.17 0.06 0.08 18
aid_centers 0.14 0.25 0.18 56
other_infrastructure 0.17 0.52 0.25 226
weather_related 0.68 0.66 0.67 1458
floods 0.40 0.65 0.50 439
storm 0.52 0.68 0.59 495
fire 0.11 0.16 0.13 57
earthquake 0.65 0.67 0.66 490
cold 0.31 0.50 0.39 103
other_weather 0.19 0.53 0.28 267
direct_report 0.49 0.65 0.56 1012
micro avg 0.48 0.63 0.55 16463
macro avg 0.32 0.49 0.37 16463
weighted avg 0.58 0.63 0.58 16463
samples avg 0.37 0.45 0.36 16463
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# Build a pipeline using KNN classifier
pipeline = Pipeline([('vect', CountVectorizer(tokenizer=tokenize)),('tfidf',TfidfTransformer()),
('knn',MultiOutputClassifier(KNeighborsClassifier(n_jobs=-1)))])
# Checking pipeline hyperparameters
pipeline.get_params().keys()
# Using grid search to find better parameters
parameters = {'knn__estimator__n_neighbors': [3,5,7],
'knn__estimator__p': [1,2]}
cv = GridSearchCV(pipeline, param_grid=parameters,scoring = 'f1_micro')
# Train grid search classifier
cv.fit(X_train,y_train)
# Predict on test data
y_pred = cv.predict(X_test)
# Evaluating the model
class_report = classification_report(y_test, y_pred, target_names=category_names)
print(class_report)
###Output
precision recall f1-score support
related 0.83 0.93 0.88 4004
request 0.74 0.46 0.56 878
offer 0.00 0.00 0.00 23
aid_related 0.73 0.46 0.56 2164
medical_help 0.62 0.07 0.12 424
medical_products 0.69 0.11 0.19 244
search_and_rescue 0.62 0.04 0.07 140
security 0.00 0.00 0.00 95
military 0.77 0.10 0.18 163
child_alone 0.00 0.00 0.00 0
water 0.67 0.19 0.29 323
food 0.72 0.29 0.42 556
shelter 0.70 0.18 0.28 452
clothing 0.71 0.14 0.24 83
money 0.71 0.04 0.08 125
missing_people 1.00 0.02 0.03 57
refugees 0.44 0.02 0.05 164
death 0.94 0.14 0.24 248
other_aid 0.54 0.05 0.10 692
infrastructure_related 0.29 0.01 0.01 333
transport 0.92 0.05 0.10 233
buildings 0.72 0.09 0.15 244
electricity 0.80 0.08 0.14 104
tools 0.00 0.00 0.00 31
hospitals 0.00 0.00 0.00 62
shops 0.00 0.00 0.00 18
aid_centers 1.00 0.02 0.04 56
other_infrastructure 0.00 0.00 0.00 226
weather_related 0.77 0.44 0.56 1458
floods 0.82 0.16 0.26 439
storm 0.74 0.22 0.34 495
fire 0.50 0.04 0.07 57
earthquake 0.79 0.47 0.59 490
cold 0.88 0.07 0.13 103
other_weather 0.44 0.03 0.06 267
direct_report 0.68 0.32 0.43 1012
micro avg 0.78 0.43 0.55 16463
macro avg 0.58 0.14 0.20 16463
weighted avg 0.72 0.43 0.48 16463
samples avg 0.66 0.42 0.46 16463
###Markdown
9. Export your model as a pickle file
###Code
# save model
pickled_filename = 'trained_model.pkl'
pickle.dump(cv, open(pickled_filename, 'wb'))
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
###Code
# Load data and split into train/test sets
X, y, test_labels = load_data('disaster_response.db')
def build_model():
'''
Function that uses a ML pipeline and grid search to return the best model
Input: -
Output: model: best classification model
'''
# Build a pipeline, Note: classes are imbalanced
pipeline = Pipeline([('vect', CountVectorizer(tokenizer=tokenize)),('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier(class_weight='balanced',n_jobs=-1)))])
# Using grid search to find better parameters
parameters = {'clf__estimator__max_depth': [3,4,5],
'clf__estimator__min_samples_split': [3,5,7]}
# Create grid search object
model = GridSearchCV(pipeline, param_grid=parameters ,scoring='f1_micro')
return model
def evaluate_model(model, X_test, y_test, category_names):
'''
Function that takes the model, X_test, y_test, and category names to evaluate the model and print classification report
Input: model: best model from build_model(), X_test: testing set, y_test: test set categories, category_names: labels for categories
Output: classification report: classification report for y_test vs predicted values
'''
# Predict on test data
y_pred = model.predict(X_test)
class_report = classification_report(y_test, y_pred, target_names=category_names)
print(class_report)
def save_model(model, model_filepath): # Saving pickled file
'''
Function that takes the model and the model file path and saves it as a pickled file
Input: model: best model from build_model(), model_filepath: file path of the model
Output: saves the model as pickled file
'''
pickle.dump(model, open(model_filepath, 'wb'))
###Output
_____no_output_____
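###Markdown
A hedged sketch of how the functions above could be tied together in `train.py`; the exact template structure may differ, and the file paths used here are placeholders.
###Code
# Sketch of the train.py workflow using the functions defined above
def main(database_filepath='disaster_response.db', model_filepath='classifier.pkl'):
    # load data and split into train and test sets
    X, y, category_names = load_data(database_filepath)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    # build, train, evaluate and save the model
    model = build_model()
    model.fit(X_train, y_train)
    evaluate_model(model, X_test, y_test, category_names)
    save_model(model, model_filepath)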
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import re
import time
import nltk
nltk.download('words')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, f1_score, accuracy_score, precision_score, recall_score, make_scorer
import pickle
from IPython.display import FileLink
# load data from database
engine = create_engine('sqlite:///DisasterDB.db')
df = pd.read_sql_table('EmergencyMessage', engine, )
df.head(2)
X= df.message
Y= df.iloc[:,4:]
# Dropping the column with no positive (1) labels
Y.drop(columns='child_alone', inplace= True)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""Function to tokenize the given text.
It removes punctuation, lowers the case, removes the stopwords, lemmatizes and stems the words in the text.
Parameters:
text: str
The input text that needs to be tokenized
Returns:
List that is tokenized """
# replace punctuations with spaces and change text to lower case
temp = word_tokenize(re.sub(r'[\W]',' ',text).lower())
# remove stop words from the sentence
words= [x for x in temp if x not in stopwords.words('english')]
# lemmatize and stem the words
return [PorterStemmer().stem( WordNetLemmatizer().lemmatize(w)) for w in words]
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline(
[ ('vect', CountVectorizer(tokenizer= tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X,Y, test_size= 0.3 )
X_train.shape, y_train.shape
pipeline.fit(X_train, y_train)
###Output
C:\Users\srini\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\ensemble\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_train_pred = pipeline.predict(X_train)
y_test_pred = pipeline.predict(X_test)
# per-category report on the test set
for i, col in enumerate(y_test.columns):
    print(col)
    print(classification_report(y_test[col], np.transpose(y_test_pred)[i]))
# micro-averaged f1 score across all categories
f1_score(np.array(y_test), y_test_pred, average='micro')
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# Setting up a grid search with Random forest to find the best parameters
parameters = {'clf__estimator__max_depth': [300], #, 200, 250
#'clf__estimator__min_samples_leaf': [1,4],
#'clf__estimator__min_samples_split': [2,5],
'clf__estimator__n_estimators': [80], # [20,80]
#'tfidf__use_idf':[True, False]
}
my_scorer = make_scorer(f1_score,average='micro' )
cv = GridSearchCV(estimator= pipeline, param_grid= parameters, scoring= my_scorer, cv=3, verbose= 3, n_jobs= 2 )
cv.fit(X_train, y_train )
cv.best_params_
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
def get_train_test_score(X_train, X_test, y_train, y_test, grid_search=False):
    """ Function that prints the micro-averaged F1 score on the given training and testing sets.
    The response (y_train/y_test) is a multi-column/category output.
    Input: train and test splits; grid_search=True uses the fitted GridSearchCV object (cv), otherwise the plain pipeline
    Output: prints the micro-averaged F1 score for the training and the test set"""
    # Calculating the predicted values from the chosen model
    if grid_search:
        y_train_pred = cv.predict(X_train)
        y_test_pred = cv.predict(X_test)
    else:
        y_train_pred = pipeline.predict(X_train)
        y_test_pred = pipeline.predict(X_test)
    print('F1 \nTrain:', f1_score(y_train, y_train_pred, average='micro'))
    print('F1 \nTest:', f1_score(y_test, y_test_pred, average='micro'))
get_train_test_score(X_train, X_test, y_train, y_test, grid_search= True)
###Output
F1
Train: 0.9819456800460516
F1
Test: 0.6519033145172962
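###Markdown
Section 7 also asks for accuracy, precision and recall; a minimal sketch of those additional metrics for the tuned model (not executed in the original notebook) is shown below.
###Code
# Sketch: micro-averaged precision/recall and overall label-wise accuracy for the tuned model
y_test_pred = cv.predict(X_test)
print('Precision:', precision_score(y_test, y_test_pred, average='micro'))
print('Recall:   ', recall_score(y_test, y_test_pred, average='micro'))
print('Accuracy: ', accuracy_score(np.array(y_test).ravel(), y_test_pred.ravel()))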
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# Using the AdaBoost algorithm for classification
pipeline2 = Pipeline(
[ ('vect', CountVectorizer(tokenizer= tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier() ))
])
pipeline2.get_params()
parameter2= {'clf__estimator__learning_rate': [1],
'clf__estimator__n_estimators': [100,150,200],
}
my_scorer = make_scorer(f1_score,average='micro' )
cv2 = GridSearchCV(estimator= pipeline2, param_grid= parameter2, scoring= my_scorer, cv=3, verbose= 3, n_jobs= 3)
cv2.fit(X_train, y_train)
y_train_pred= cv2.predict(X_train)
y_test_pred= cv2.predict(X_test)
print('F1 \nTrain:', f1_score(y_train,y_train_pred, average= 'micro') )
print('F1 \nTest:', f1_score(y_test,y_test_pred, average= 'micro'), )
cv2.best_params_
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
pickle.dump(cv, open('random_forest_model.pkl','wb'))
#FileLink(r'random_forest_model.pkl')
pickle.dump(cv2, open('ada_boost_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pickle
import re
import warnings
warnings.simplefilter('ignore')
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from sqlalchemy import create_engine
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import f1_score, accuracy_score, classification_report, fbeta_score, make_scorer
# regular expression used by tokenize() to detect URLs in messages
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql("SELECT * FROM messages", engine)
col=[i for i in df.columns if i not in ['id','original', 'genre']]
X = df["message"]
Y = df.iloc[:,4:]
#global category_names
category_names = Y.columns
#print(category_names)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
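###Markdown
A quick illustrative check of the URL handling in `tokenize` (a sketch; the sample message is made up and was not part of the original notebook):
###Code
# URLs should be collapsed to the 'urlplaceholder' token before tokenization
print(tokenize("Please see http://example.com for shelter info"))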
###Markdown
3. Build a machine learning pipelineThis machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('clf', RandomForestClassifier())
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
model =pipeline
model.fit(X_train, y_train)
model.get_params().keys()
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = model.predict(X_test)
#print(y_pred)
def display_results(y_test, y_pred):
    # show which label values occur in the predictions
    labels = np.unique(y_pred)
    print(labels)
    # per-category precision, recall and f1-score
    print(classification_report(y_test.values, y_pred, target_names=Y.columns.values))
display_results(y_test, y_pred)
###Output
[ 0. 1.]
precision recall f1-score support
related 0.83 0.90 0.87 4996
request 0.85 0.38 0.52 1104
offer 0.00 0.00 0.00 29
aid_related 0.78 0.41 0.54 2726
medical_help 0.71 0.02 0.04 535
medical_products 1.00 0.01 0.03 351
search_and_rescue 1.00 0.01 0.02 194
security 0.00 0.00 0.00 137
military 0.67 0.01 0.02 251
child_alone 0.00 0.00 0.00 0
water 0.90 0.14 0.24 408
food 0.89 0.25 0.39 730
shelter 0.89 0.10 0.17 563
clothing 1.00 0.04 0.07 104
money 0.83 0.03 0.06 159
missing_people 0.00 0.00 0.00 76
refugees 0.50 0.01 0.03 227
death 0.91 0.07 0.13 293
other_aid 0.70 0.04 0.07 843
infrastructure_related 0.33 0.00 0.00 430
transport 0.00 0.00 0.00 295
buildings 0.69 0.03 0.05 331
electricity 0.00 0.00 0.00 129
tools 0.00 0.00 0.00 40
hospitals 0.00 0.00 0.00 73
shops 0.00 0.00 0.00 27
aid_centers 0.00 0.00 0.00 75
other_infrastructure 0.00 0.00 0.00 299
weather_related 0.86 0.40 0.54 1817
floods 0.83 0.18 0.29 522
storm 0.75 0.19 0.30 611
fire 0.00 0.00 0.00 72
earthquake 0.91 0.38 0.54 603
cold 0.67 0.02 0.03 115
other_weather 0.29 0.01 0.01 345
direct_report 0.83 0.32 0.46 1288
avg / total 0.75 0.39 0.45 20798
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
def build_model():
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize))
, ('tfidf', TfidfTransformer())
, ('clf', MultiOutputClassifier(RandomForestClassifier()))])
parameters = {'vect__min_df': [1, 5],
# 'tfidf__use_idf':[True, False],
'clf__estimator__n_estimators':[50, 100],
#'clf__estimator__min_samples_split':[5],
#'vect__max_features': (5000, 10000)
}
#cv = GridSearchCV(estimator=pipeline, param_grid=parameters, verbose=3)
#my_scorer = make_scorer(f1_score(y_test, y_pred, average='macro'), greater_is_better=True)
cv = GridSearchCV(pipeline, param_grid=parameters, scoring="f1_weighted")
return cv
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
model=build_model()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
display_results(y_test, y_pred)
###Output
[0 1]
precision recall f1-score support
related 0.81 0.97 0.88 4996
request 0.91 0.47 0.62 1104
offer 0.00 0.00 0.00 29
aid_related 0.78 0.62 0.69 2726
medical_help 0.57 0.08 0.14 535
medical_products 0.83 0.10 0.18 351
search_and_rescue 0.88 0.08 0.14 194
security 0.50 0.01 0.01 137
military 0.82 0.06 0.10 251
child_alone 0.00 0.00 0.00 0
water 0.92 0.32 0.48 408
food 0.81 0.63 0.71 730
shelter 0.84 0.39 0.54 563
clothing 0.73 0.08 0.14 104
money 0.80 0.03 0.05 159
missing_people 0.00 0.00 0.00 76
refugees 0.55 0.03 0.05 227
death 0.76 0.18 0.29 293
other_aid 0.96 0.03 0.05 843
infrastructure_related 0.25 0.00 0.00 430
transport 0.76 0.11 0.19 295
buildings 0.77 0.12 0.21 331
electricity 0.40 0.03 0.06 129
tools 0.00 0.00 0.00 40
hospitals 0.00 0.00 0.00 73
shops 0.00 0.00 0.00 27
aid_centers 0.00 0.00 0.00 75
other_infrastructure 0.00 0.00 0.00 299
weather_related 0.85 0.68 0.76 1817
floods 0.89 0.49 0.63 522
storm 0.77 0.58 0.66 611
fire 0.00 0.00 0.00 72
earthquake 0.92 0.80 0.86 603
cold 0.63 0.15 0.24 115
other_weather 0.48 0.04 0.07 345
direct_report 0.88 0.39 0.54 1288
avg / total 0.77 0.53 0.57 20798
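###Markdown
After fitting, the winning hyper-parameters of the weighted-F1 grid search can be inspected (a sketch, not run in the original notebook):
###Code
# Which parameter combination the grid search selected, and its cross-validated score
print(model.best_params_)
print(model.best_score_)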
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
def build_model_new():
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize))
, ('tfidf', TfidfTransformer())
, ('clf', MultiOutputClassifier(ExtraTreesClassifier()))])
parameters = {'vect__min_df': [1, 5],
# 'tfidf__use_idf':[True, False],
'clf__estimator__n_estimators':[50, 100],
}
#cv = GridSearchCV(estimator=pipeline, param_grid=parameters, verbose=3)
#my_scorer = make_scorer(f1_score(y_test, y_pred, average='macro'), greater_is_better=True)
cv = GridSearchCV(pipeline, param_grid=parameters, scoring="f1_weighted")
return cv
model=build_model_new()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
display_results(y_test, y_pred)
###Output
[0 1]
precision recall f1-score support
related 0.81 0.97 0.88 4996
request 0.91 0.47 0.62 1104
offer 0.00 0.00 0.00 29
aid_related 0.79 0.65 0.72 2726
medical_help 0.61 0.08 0.14 535
medical_products 0.90 0.10 0.18 351
search_and_rescue 0.83 0.03 0.05 194
security 0.33 0.01 0.01 137
military 0.71 0.05 0.09 251
child_alone 0.00 0.00 0.00 0
water 0.94 0.24 0.39 408
food 0.85 0.43 0.57 730
shelter 0.85 0.36 0.50 563
clothing 0.74 0.13 0.23 104
money 0.88 0.04 0.08 159
missing_people 1.00 0.01 0.03 76
refugees 0.46 0.03 0.05 227
death 0.81 0.12 0.20 293
other_aid 0.70 0.02 0.04 843
infrastructure_related 0.29 0.00 0.01 430
transport 0.61 0.06 0.11 295
buildings 0.73 0.10 0.18 331
electricity 0.43 0.02 0.04 129
tools 0.00 0.00 0.00 40
hospitals 1.00 0.01 0.03 73
shops 0.00 0.00 0.00 27
aid_centers 0.00 0.00 0.00 75
other_infrastructure 0.00 0.00 0.00 299
weather_related 0.84 0.68 0.75 1817
floods 0.89 0.44 0.59 522
storm 0.77 0.48 0.59 611
fire 1.00 0.01 0.03 72
earthquake 0.90 0.62 0.73 603
cold 0.55 0.10 0.16 115
other_weather 0.64 0.05 0.10 345
direct_report 0.89 0.38 0.54 1288
avg / total 0.78 0.51 0.56 20798
###Markdown
I obtained the best result with the Random Forest classifier, with a weighted mean F1 score of 0.57. I also tried AdaBoost, but it was very slow and its results were not as good, so I removed it when I needed to rerun the notebook. 9. Export your model as a pickle file
###Code
filename = 'finalized_model.sav'
pickle.dump(model, open(filename, 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
import pickle
import string
import unittest
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, f1_score, make_scorer
from sklearn.base import BaseEstimator, TransformerMixin
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql("SELECT * FROM Disaster", engine)
# Since the original messages are in multiple languages, dropping that column for now
df.drop(['original','genre','id'],inplace=True,axis=1)
#df.set_index('id',inplace=True)
print(df.shape)
df.head(2)
X = df.message.values
Y = df[df.columns[1:]]
Y.head(3)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens=word_tokenize(text)
lemmatizer=WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.Reference: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html```classification_report(y_true, y_pred, target_names=target_names)```We need to iterate through every category column (for both train and test predictions) and report the results under its column name.
###Code
def get_result(y_pred,y_test):
results_dict = {}
for pred, label, col in zip(y_pred.transpose(), y_test.values.transpose(), y_test.columns):
#print(col)
#print(classification_report(label, pred))
results_dict[col] = classification_report(label, pred,output_dict=True)
weighted_avg = {}
for key in results_dict.keys():
weighted_avg[key] = results_dict[key]['weighted avg']
df_wavg = pd.DataFrame(weighted_avg).transpose()
return df_wavg
y_pred=pipeline.predict(X_test)
df_wavg=get_result(y_pred,y_test)
df_wavg.head()
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': [0.5],
'vect__max_features': [5000],
'clf__estimator__max_depth': (25, 50, 100, None),
'clf__estimator__min_samples_split': (2, 10, 25, 50, 100),
'clf__estimator__n_estimators': [200]
}
cv = GridSearchCV(pipeline, parameters, cv=5, n_jobs=3)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
cv.best_params_
cv.best_score_
y_preds = cv.predict(X_test)
results_cv = get_result(y_preds,y_test)
results_cv.head()
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF A. Let's improve our tokenizer by removing stop words, lemmatizing and stemming
###Code
import re
def tokenize(text):
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
words = word_tokenize(text)
words = [word for word in words if word not in stopwords.words('english')]
lemmed = [WordNetLemmatizer().lemmatize(word) for word in words]
lemmed = [WordNetLemmatizer().lemmatize(word, pos='v') for word in lemmed]
stemmed = [PorterStemmer().stem(word) for word in lemmed]
return stemmed
###Output
_____no_output_____
###Markdown
B. Let's add additional features using a custom transformer
###Code
class WordCount(BaseEstimator, TransformerMixin):
def word_count(self, text):
table = text.maketrans(dict.fromkeys(string.punctuation))
words = word_tokenize(text.lower().strip().translate(table))
return len(words)
def fit(self, x, y=None):
return self
def transform(self, x):
count = pd.Series(x).apply(self.word_count)
return pd.DataFrame(count)
pipeline = Pipeline([
('feature',FeatureUnion([
('text',Pipeline([
('vect',CountVectorizer(tokenizer=tokenize, max_df=0.5,
max_features=5000, ngram_range=(1, 2),
)),
('tfidf',TfidfTransformer())
])),
('word_count',WordCount())
])),
("clf", MultiOutputClassifier(RandomForestClassifier(min_samples_split=2, random_state=42, verbose=3)))
])
pipeline.fit(X_train, y_train)
y_pred=pipeline.predict(X_test)
df_wavg=get_result(y_pred,y_test)
df_wavg.head()
###Output
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 0.0s remaining: 0.0s
[Parallel(n_jobs=1)]: Done 100 out of 100 | elapsed: 1.0s finished
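###Markdown
A small sketch (not in the original notebook) of what the custom `WordCount` transformer produces on its own; the sample messages are made up for illustration.
###Code
# fit_transform returns a one-column DataFrame holding the word count per message
print(WordCount().fit_transform(["We need water", "Food and shelter needed now!"]))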
###Markdown
9. Export your model as a pickle file
###Code
with open('model.pkl', 'wb') as file:
pickle.dump(pipeline, file)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
import numpy as np
import pandas as pd
import pickle
from pprint import pprint
import re
import sys
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import f1_score, accuracy_score, precision_score, recall_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sqlalchemy import create_engine
import time
import warnings
warnings.filterwarnings('ignore')
# load data from database
engine = create_engine('sqlite:///DisasterMessages.db')
df = pd.read_sql_table("DisasterMessages", con=engine)
df.head()
X = df['message']
Y = df.iloc[:, 4:]
df['related'].value_counts()
Y.shape
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# normalize text
text = re.sub(r"[^\w]", " ", text.lower())
# tokenize text
words = word_tokenize(text)
# remove stopwords
stopwords_ = stopwords.words("english")
words = [word for word in words if word not in stopwords_]
# extract root form of words
words = [WordNetLemmatizer().lemmatize(word, pos='v') for word in words]
return words
###Output
_____no_output_____
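###Markdown
A quick sanity check of the tokenizer on an illustrative message (not taken from the dataset), assuming the required NLTK corpora (punkt, stopwords, wordnet) are already downloaded; stopwords such as "and" and "the" should be dropped and verbs reduced to their root form (e.g. "needed" to "need"):
###Code
tokenize("Water and food urgently needed after the storm")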
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(OneVsRestClassifier(LinearSVC())))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
# train classifier
pipeline.fit(X_train, Y_train)
# predict on test data
Y_pred = pipeline.predict(X_test)
Y_pred
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
for col in range(36):
print(Y_test.columns[col])
print(classification_report(Y_test.iloc[:,col], Y_pred[:,col]))
print('-----------------------------------------------------')
print('Accuracy: {}'.format(np.mean(Y_test.values == Y_pred)))
###Output
Accuracy: 0.9465409871268888
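###Markdown
The per-category reports above are detailed but hard to scan at a glance. As a compact alternative, the already-imported `precision_recall_fscore_support` can collect one row per category into a DataFrame; this is a minimal sketch, and `average='weighted'` is just one reasonable choice:
###Code
# summarise precision/recall/F1 per category in a single DataFrame
summary = []
for col in range(36):
    p, r, f, _ = precision_recall_fscore_support(
        Y_test.iloc[:, col], Y_pred[:, col], average='weighted')
    summary.append((Y_test.columns[col], p, r, f))
summary = pd.DataFrame(summary, columns=['category', 'precision', 'recall', 'f1'])
summary.sort_values('f1').head()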
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.75, 1.0)
}
model = GridSearchCV(estimator=pipeline, param_grid=parameters, cv=5)
model
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)
for col in range(36):
print(Y_test.columns[col])
print(classification_report(Y_test.iloc[:,col], Y_pred[:,col]))
print('-----------------------------------------------------')
print('Accuracy: {}'.format(np.mean(Y_test.values == Y_pred)))
###Output
Accuracy: 0.9482193514845331
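###Markdown
Because `model` is a fitted `GridSearchCV` object, the selected hyperparameters and the corresponding cross-validation score can be inspected directly (a short usage note, not a required step):
###Code
print(model.best_params_)
print('Best cross-validation score: {:.3f}'.format(model.best_score_))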
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))])
# train classifier
pipeline.fit(X_train, Y_train)
# predict on test data
Y_pred = pipeline.predict(X_test)
for col in range(36):
print(Y_test.columns[col])
print(classification_report(Y_test.iloc[:,col], Y_pred[:,col]))
print('-----------------------------------------------------')
print('Accuracy: {}'.format(np.mean(Y_test.values == Y_pred)))
###Output
Accuracy: 0.9437889216650278
###Markdown
9. Export your model as a pickle file
###Code
filename = 'classifier.sav'
with open(filename, 'wb') as file:
    pickle.dump(model, file)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
import sqlite3
import re
import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk import PorterStemmer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import Normalizer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.metrics import classification_report
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import GridSearchCV
nltk.download(['punkt', 'wordnet','stopwords','averaged_perceptron_tagger'])
# load data from database
engine = create_engine('sqlite:///DisasterTable.db')
df = pd.read_sql("SELECT * FROM DisasterTable", engine)
df.nunique()
df.isnull().sum()
df.head()
X = df['message']
Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
X.head()
Y.head()
dic={}
for col in Y.columns:
dic[col] = Y[col].sum()
dic
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
text = text.lower()
text = re.sub(r"[^a-zA-z0-9]"," ",text)
words = word_tokenize(text)
words = [w for w in words if w not in stopwords.words("english")]
clean_tokens = []
lemmatizer = WordNetLemmatizer()
for w in words:
clean_tok = lemmatizer.lemmatize(w , pos='v').strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer(use_idf=True)),
('clf', MultiOutputClassifier(AdaBoostClassifier())),
])
pipeline.get_params()
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y,test_size = 0.2, random_state = 45)
pipeline.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
Y_pred = pipeline.predict(X_test)
def display_results(y_test, y_pred, y_col):
"""
Display the classification report (precision, recall, f1-score) and accuracy
for a single category of the test dataset
"""
clf_report = classification_report(y_test, y_pred)
accuracy = (y_pred == y_test).mean()
print(y_col, ":")
print('\n')
print(clf_report)
print('Accuracy =', accuracy)
print('-'*60)
print('\n')
col=0
cols = Y_test.columns
for categorie in Y_test.columns:
display_results(Y_test[categorie],
Y_pred[:,col],
categorie)
col+=1
###Output
related :
precision recall f1-score support
0 0.60 0.16 0.25 1198
1 0.79 0.97 0.87 4002
2 0.45 0.11 0.18 44
avg / total 0.74 0.78 0.72 5244
Accuracy = 0.775553012967
------------------------------------------------------------
request :
precision recall f1-score support
0 0.91 0.97 0.94 4335
1 0.77 0.52 0.62 909
avg / total 0.88 0.89 0.88 5244
Accuracy = 0.890350877193
------------------------------------------------------------
offer :
precision recall f1-score support
0 0.99 1.00 1.00 5214
1 0.00 0.00 0.00 30
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.993325705568
------------------------------------------------------------
aid_related :
precision recall f1-score support
0 0.75 0.85 0.80 3044
1 0.75 0.60 0.67 2200
avg / total 0.75 0.75 0.74 5244
Accuracy = 0.747902364607
------------------------------------------------------------
medical_help :
precision recall f1-score support
0 0.94 0.99 0.96 4827
1 0.59 0.24 0.34 417
avg / total 0.91 0.93 0.91 5244
Accuracy = 0.926010678871
------------------------------------------------------------
medical_products :
precision recall f1-score support
0 0.96 0.99 0.98 4990
1 0.59 0.26 0.36 254
avg / total 0.95 0.96 0.95 5244
Accuracy = 0.955377574371
------------------------------------------------------------
search_and_rescue :
precision recall f1-score support
0 0.98 1.00 0.99 5086
1 0.67 0.18 0.29 158
avg / total 0.97 0.97 0.97 5244
Accuracy = 0.972730739893
------------------------------------------------------------
security :
precision recall f1-score support
0 0.98 1.00 0.99 5133
1 0.53 0.07 0.13 111
avg / total 0.97 0.98 0.97 5244
Accuracy = 0.979023646072
------------------------------------------------------------
military :
precision recall f1-score support
0 0.97 0.99 0.98 5059
1 0.59 0.29 0.39 185
avg / total 0.96 0.97 0.96 5244
Accuracy = 0.967772692601
------------------------------------------------------------
child_alone :
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
Accuracy = 1.0
------------------------------------------------------------
water :
precision recall f1-score support
0 0.98 0.99 0.98 4918
1 0.76 0.66 0.71 326
avg / total 0.96 0.97 0.96 5244
Accuracy = 0.965865751335
------------------------------------------------------------
food :
precision recall f1-score support
0 0.96 0.98 0.97 4678
1 0.78 0.69 0.73 566
avg / total 0.94 0.95 0.94 5244
Accuracy = 0.94584286804
------------------------------------------------------------
shelter :
precision recall f1-score support
0 0.96 0.98 0.97 4770
1 0.78 0.55 0.64 474
avg / total 0.94 0.95 0.94 5244
Accuracy = 0.945080091533
------------------------------------------------------------
clothing :
precision recall f1-score support
0 0.99 1.00 0.99 5178
1 0.62 0.50 0.55 66
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.989893211289
------------------------------------------------------------
money :
precision recall f1-score support
0 0.98 0.99 0.99 5108
1 0.60 0.36 0.45 136
avg / total 0.97 0.98 0.97 5244
Accuracy = 0.977307398932
------------------------------------------------------------
missing_people :
precision recall f1-score support
0 0.99 1.00 0.99 5183
1 0.41 0.11 0.18 61
avg / total 0.98 0.99 0.98 5244
Accuracy = 0.987795575896
------------------------------------------------------------
refugees :
precision recall f1-score support
0 0.97 0.99 0.98 5042
1 0.62 0.25 0.35 202
avg / total 0.96 0.97 0.96 5244
Accuracy = 0.965102974828
------------------------------------------------------------
death :
precision recall f1-score support
0 0.97 0.99 0.98 4989
1 0.73 0.35 0.48 255
avg / total 0.96 0.96 0.96 5244
Accuracy = 0.962242562929
------------------------------------------------------------
other_aid :
precision recall f1-score support
0 0.88 0.98 0.93 4546
1 0.46 0.13 0.20 698
avg / total 0.82 0.86 0.83 5244
Accuracy = 0.863653699466
------------------------------------------------------------
infrastructure_related :
precision recall f1-score support
0 0.94 0.99 0.96 4905
1 0.30 0.08 0.13 339
avg / total 0.90 0.93 0.91 5244
Accuracy = 0.928680396644
------------------------------------------------------------
transport :
precision recall f1-score support
0 0.96 0.99 0.98 4970
1 0.69 0.26 0.38 274
avg / total 0.95 0.96 0.95 5244
Accuracy = 0.955377574371
------------------------------------------------------------
buildings :
precision recall f1-score support
0 0.97 0.99 0.98 4989
1 0.65 0.44 0.53 255
avg / total 0.96 0.96 0.96 5244
Accuracy = 0.961479786423
------------------------------------------------------------
electricity :
precision recall f1-score support
0 0.99 1.00 0.99 5149
1 0.60 0.36 0.45 95
avg / total 0.98 0.98 0.98 5244
Accuracy = 0.983981693364
------------------------------------------------------------
tools :
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.20 0.03 0.05 34
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.992944317315
------------------------------------------------------------
hospitals :
precision recall f1-score support
0 0.99 1.00 0.99 5188
1 0.25 0.04 0.06 56
avg / total 0.98 0.99 0.98 5244
Accuracy = 0.988558352403
------------------------------------------------------------
shops :
precision recall f1-score support
0 1.00 1.00 1.00 5229
1 0.25 0.07 0.11 15
avg / total 1.00 1.00 1.00 5244
Accuracy = 0.996758199847
------------------------------------------------------------
aid_centers :
precision recall f1-score support
0 0.99 1.00 0.99 5180
1 0.36 0.08 0.13 64
avg / total 0.98 0.99 0.98 5244
Accuracy = 0.98703279939
------------------------------------------------------------
other_infrastructure :
precision recall f1-score support
0 0.96 0.99 0.98 5020
1 0.34 0.11 0.16 224
avg / total 0.93 0.95 0.94 5244
Accuracy = 0.952898550725
------------------------------------------------------------
weather_related :
precision recall f1-score support
0 0.88 0.95 0.92 3794
1 0.85 0.67 0.75 1450
avg / total 0.87 0.88 0.87 5244
Accuracy = 0.87643020595
------------------------------------------------------------
floods :
precision recall f1-score support
0 0.96 0.99 0.98 4785
1 0.86 0.58 0.69 459
avg / total 0.95 0.95 0.95 5244
Accuracy = 0.954996186117
------------------------------------------------------------
storm :
precision recall f1-score support
0 0.95 0.98 0.97 4774
1 0.75 0.52 0.61 470
avg / total 0.94 0.94 0.94 5244
Accuracy = 0.940884820748
------------------------------------------------------------
fire :
precision recall f1-score support
0 0.99 1.00 0.99 5195
1 0.33 0.12 0.18 49
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.989511823036
------------------------------------------------------------
earthquake :
precision recall f1-score support
0 0.98 0.99 0.99 4762
1 0.89 0.82 0.85 482
avg / total 0.97 0.97 0.97 5244
Accuracy = 0.9734935164
------------------------------------------------------------
cold :
precision recall f1-score support
0 0.99 1.00 0.99 5136
1 0.70 0.31 0.43 108
avg / total 0.98 0.98 0.98 5244
Accuracy = 0.983028222731
------------------------------------------------------------
other_weather :
precision recall f1-score support
0 0.95 0.99 0.97 4960
1 0.41 0.10 0.16 284
avg / total 0.92 0.94 0.93 5244
Accuracy = 0.943363844394
------------------------------------------------------------
direct_report :
precision recall f1-score support
0 0.87 0.96 0.91 4224
1 0.71 0.39 0.51 1020
avg / total 0.84 0.85 0.83 5244
Accuracy = 0.850495804729
------------------------------------------------------------
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {
'tfidf__use_idf': [True, False],
'clf__estimator__learning_rate': [0.5, 0.9],
'clf__estimator__n_estimators': [50, 100]
}
cv = GridSearchCV(pipeline, param_grid=parameters,
cv=2, n_jobs=-1, verbose=2)
cv.fit(X_train, Y_train)
cv.best_params_
###Output
Fitting 2 folds for each of 8 candidates, totalling 16 fits
[CV] clf__estimator__learning_rate=0.5, clf__estimator__n_estimators=50, tfidf__use_idf=True
[CV] clf__estimator__learning_rate=0.5, clf__estimator__n_estimators=50, tfidf__use_idf=True, total= 2.3min
[CV] clf__estimator__learning_rate=0.5, clf__estimator__n_estimators=50, tfidf__use_idf=True
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.best_estimator_
Y_pred = cv.predict(X_test)
(Y_test == Y_pred).mean().mean()
col=0
cols = Y_test.columns
for categorie in Y_test.columns:
display_results(Y_test[categorie],
Y_pred[:,col],
categorie)
col+=1
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
def tokenize_2(text):
"""
Tokenize the input text. This function is called in StartingVerbExtractor.
"""
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = [lemmatizer.lemmatize(
tok).lower().strip() for tok in tokens]
return clean_tokens
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
""" return true if the first word is an appropriate verb or RT for retweet """
# tokenize by sentences
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
# tokenize each sentence into words and tag part of speech
pos_tags = nltk.pos_tag(tokenize_2(sentence))
# index pos_tags to get the first word and part of speech tag
first_word, first_tag = pos_tags[0]
# return true if the first word is an appropriate verb or RT for retweet
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
""" Fit """
return self
def transform(self, X):
""" Transform """
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
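# Quick sanity check of the custom transformer on two illustrative messages
# (not taken from the dataset); transform() returns a one-column boolean
# DataFrame indicating whether each message starts with a verb.
StartingVerbExtractor().transform(pd.Series([
    "Send water and blankets to the shelter",
    "The bridge has collapsed",
]))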
pipeline_random = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer(use_idf=True))
])),
('start_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
pipeline_random.fit(X_train, Y_train)
Y_pred = pipeline_random.predict(X_test)
(Y_pred == Y_test).mean().mean()
###Output
_____no_output_____
###Markdown
Try AdaBoostClassifier
###Code
pipeline_ada = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer(use_idf=True))
])),
('start_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline_ada.fit(X_train, Y_train)
Y_pred = pipeline_ada.predict(X_test)
(Y_pred == Y_test).mean().mean()
col=0
cols = Y_test.columns
for categorie in Y_test.columns:
display_results(Y_test[categorie],
Y_pred[:,col],
categorie)
col+=1
###Output
related :
precision recall f1-score support
0 0.60 0.16 0.25 1198
1 0.79 0.97 0.87 4002
2 0.45 0.11 0.18 44
avg / total 0.74 0.78 0.72 5244
Accuracy = 0.775553012967
------------------------------------------------------------
request :
precision recall f1-score support
0 0.91 0.97 0.94 4335
1 0.77 0.52 0.62 909
avg / total 0.88 0.89 0.88 5244
Accuracy = 0.890350877193
------------------------------------------------------------
offer :
precision recall f1-score support
0 0.99 1.00 1.00 5214
1 0.00 0.00 0.00 30
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.992753623188
------------------------------------------------------------
aid_related :
precision recall f1-score support
0 0.75 0.85 0.80 3044
1 0.75 0.60 0.67 2200
avg / total 0.75 0.75 0.74 5244
Accuracy = 0.747902364607
------------------------------------------------------------
medical_help :
precision recall f1-score support
0 0.94 0.99 0.96 4827
1 0.59 0.24 0.34 417
avg / total 0.91 0.93 0.91 5244
Accuracy = 0.926010678871
------------------------------------------------------------
medical_products :
precision recall f1-score support
0 0.96 0.99 0.98 4990
1 0.59 0.26 0.36 254
avg / total 0.95 0.96 0.95 5244
Accuracy = 0.955377574371
------------------------------------------------------------
search_and_rescue :
precision recall f1-score support
0 0.98 1.00 0.99 5086
1 0.67 0.18 0.29 158
avg / total 0.97 0.97 0.97 5244
Accuracy = 0.972730739893
------------------------------------------------------------
security :
precision recall f1-score support
0 0.98 1.00 0.99 5133
1 0.53 0.07 0.13 111
avg / total 0.97 0.98 0.97 5244
Accuracy = 0.979023646072
------------------------------------------------------------
military :
precision recall f1-score support
0 0.97 0.99 0.98 5059
1 0.59 0.29 0.39 185
avg / total 0.96 0.97 0.96 5244
Accuracy = 0.967772692601
------------------------------------------------------------
child_alone :
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
Accuracy = 1.0
------------------------------------------------------------
water :
precision recall f1-score support
0 0.98 0.99 0.98 4918
1 0.76 0.66 0.71 326
avg / total 0.96 0.97 0.96 5244
Accuracy = 0.965865751335
------------------------------------------------------------
food :
precision recall f1-score support
0 0.96 0.98 0.97 4678
1 0.78 0.69 0.73 566
avg / total 0.94 0.95 0.94 5244
Accuracy = 0.94584286804
------------------------------------------------------------
shelter :
precision recall f1-score support
0 0.96 0.98 0.97 4770
1 0.78 0.55 0.64 474
avg / total 0.94 0.95 0.94 5244
Accuracy = 0.945080091533
------------------------------------------------------------
clothing :
precision recall f1-score support
0 0.99 1.00 0.99 5178
1 0.62 0.50 0.55 66
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.989893211289
------------------------------------------------------------
money :
precision recall f1-score support
0 0.98 0.99 0.99 5108
1 0.60 0.36 0.45 136
avg / total 0.97 0.98 0.97 5244
Accuracy = 0.977307398932
------------------------------------------------------------
missing_people :
precision recall f1-score support
0 0.99 1.00 0.99 5183
1 0.41 0.11 0.18 61
avg / total 0.98 0.99 0.98 5244
Accuracy = 0.987795575896
------------------------------------------------------------
refugees :
precision recall f1-score support
0 0.97 0.99 0.98 5042
1 0.62 0.25 0.35 202
avg / total 0.96 0.97 0.96 5244
Accuracy = 0.965102974828
------------------------------------------------------------
death :
precision recall f1-score support
0 0.97 0.99 0.98 4989
1 0.73 0.35 0.48 255
avg / total 0.96 0.96 0.96 5244
Accuracy = 0.962242562929
------------------------------------------------------------
other_aid :
precision recall f1-score support
0 0.88 0.98 0.93 4546
1 0.46 0.13 0.20 698
avg / total 0.82 0.86 0.83 5244
Accuracy = 0.863653699466
------------------------------------------------------------
infrastructure_related :
precision recall f1-score support
0 0.94 0.99 0.96 4905
1 0.30 0.08 0.13 339
avg / total 0.90 0.93 0.91 5244
Accuracy = 0.928680396644
------------------------------------------------------------
transport :
precision recall f1-score support
0 0.96 0.99 0.98 4970
1 0.69 0.26 0.38 274
avg / total 0.95 0.96 0.95 5244
Accuracy = 0.955377574371
------------------------------------------------------------
buildings :
precision recall f1-score support
0 0.97 0.99 0.98 4989
1 0.65 0.44 0.53 255
avg / total 0.96 0.96 0.96 5244
Accuracy = 0.961479786423
------------------------------------------------------------
electricity :
precision recall f1-score support
0 0.99 1.00 0.99 5149
1 0.60 0.36 0.45 95
avg / total 0.98 0.98 0.98 5244
Accuracy = 0.983981693364
------------------------------------------------------------
tools :
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.20 0.03 0.05 34
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.992944317315
------------------------------------------------------------
hospitals :
precision recall f1-score support
0 0.99 1.00 0.99 5188
1 0.33 0.05 0.09 56
avg / total 0.98 0.99 0.98 5244
Accuracy = 0.988749046529
------------------------------------------------------------
shops :
precision recall f1-score support
0 1.00 1.00 1.00 5229
1 0.25 0.07 0.11 15
avg / total 1.00 1.00 1.00 5244
Accuracy = 0.996758199847
------------------------------------------------------------
aid_centers :
precision recall f1-score support
0 0.99 1.00 0.99 5180
1 0.36 0.08 0.13 64
avg / total 0.98 0.99 0.98 5244
Accuracy = 0.98703279939
------------------------------------------------------------
other_infrastructure :
precision recall f1-score support
0 0.96 0.99 0.98 5020
1 0.34 0.11 0.16 224
avg / total 0.93 0.95 0.94 5244
Accuracy = 0.952898550725
------------------------------------------------------------
weather_related :
precision recall f1-score support
0 0.88 0.95 0.92 3794
1 0.85 0.67 0.75 1450
avg / total 0.87 0.88 0.87 5244
Accuracy = 0.87643020595
------------------------------------------------------------
floods :
precision recall f1-score support
0 0.96 0.99 0.98 4785
1 0.86 0.58 0.69 459
avg / total 0.95 0.95 0.95 5244
Accuracy = 0.954996186117
------------------------------------------------------------
storm :
precision recall f1-score support
0 0.95 0.98 0.97 4774
1 0.75 0.52 0.61 470
avg / total 0.94 0.94 0.94 5244
Accuracy = 0.940884820748
------------------------------------------------------------
fire :
precision recall f1-score support
0 0.99 1.00 0.99 5195
1 0.33 0.12 0.18 49
avg / total 0.99 0.99 0.99 5244
Accuracy = 0.989511823036
------------------------------------------------------------
earthquake :
precision recall f1-score support
0 0.98 0.99 0.99 4762
1 0.89 0.82 0.85 482
avg / total 0.97 0.97 0.97 5244
Accuracy = 0.9734935164
------------------------------------------------------------
cold :
precision recall f1-score support
0 0.99 1.00 0.99 5136
1 0.70 0.31 0.43 108
avg / total 0.98 0.98 0.98 5244
Accuracy = 0.983028222731
------------------------------------------------------------
other_weather :
precision recall f1-score support
0 0.95 0.99 0.97 4960
1 0.41 0.10 0.16 284
avg / total 0.92 0.94 0.93 5244
Accuracy = 0.943363844394
------------------------------------------------------------
direct_report :
precision recall f1-score support
0 0.87 0.96 0.91 4224
1 0.71 0.39 0.51 1020
avg / total 0.84 0.85 0.83 5244
Accuracy = 0.850495804729
------------------------------------------------------------
###Markdown
9. Export your model as a pickle file
###Code
import pickle

with open('./models/model_adaboost', 'wb') as f:
    pickle.dump(pipeline_ada, f)
with open('./models/model_random', 'wb') as f:
    pickle.dump(pipeline_random, f)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
%pip install scikit-learn
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import nltk
from nltk import WordNetLemmatizer, pos_tag, word_tokenize
nltk.download(['stopwords', 'wordnet'])
from nltk.corpus import stopwords, wordnet
import re
from collections import defaultdict
from sklearn.base import BaseEstimator,TransformerMixin
from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.metrics import classification_report
from sklearn.multioutput import MultiOutputClassifier
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('data', engine)
X =df['message']
y =df.drop(['id','message','original','genre'],axis=1)
y.sum()
y=y.drop('child_alone',axis=1)
y.sum()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Convert text into tokens
Input:
text - message that needs to be tokenized
Output:
clean_tokens - list of tokens from the given message
"""
# replace urls with a placeholder token
url_regex= r'(https?://\S+)'
text = re.sub(url_regex, 'urlplaceholder',text)
#tokenize message into words
tokens=word_tokenize(text)
#remove the stop words
filtered_tokens=[w for w in tokens if not w in stopwords.words('english')]
#remove punctuation and tokens containing non alphabetic symbols
alpha_tokens=[token.lower() for token in filtered_tokens if token.isalpha()]
# make a default dictionary for the pos tagging
tag_map = defaultdict(lambda : wordnet.NOUN)
tag_map['J'] = wordnet.ADJ
tag_map['V'] = wordnet.VERB
tag_map['R'] = wordnet.ADV
# lemmatize tokens using POS tags from the default dict
clean_tokens=[]
lmtzr = WordNetLemmatizer()
for token, tag in pos_tag(alpha_tokens):
clean_tokens.append(lmtzr.lemmatize(token, tag_map[tag[0]]))
return clean_tokens
###Output
_____no_output_____
###Markdown
Building custom transformer
###Code
class ContainsHelpNeed(BaseEstimator, TransformerMixin):
"""
This custom transformer flags messages containing the word 'help' or 'need',
creating a new feature consisting of 1 (True) and 0 (False) values.
"""
def filter_verb(self, text):
words=tokenize(text)
if 'help' in words or 'need' in words:
return True
return False
def fit(self, X, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.filter_verb)
return pd.DataFrame(X_tagged)
tokenize('Labas diena , kaip sekasi?')  # Lithuanian: "Good day, how are you?" - a quick smoke test of the tokenizer
###Output
_____no_output_____
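###Markdown
A quick check of the custom transformer on two illustrative messages (not taken from the dataset); only the message containing "need" or "help" should be flagged as True:
###Code
ContainsHelpNeed().transform(pd.Series([
    'We need water and food',
    'The weather is fine today'
]))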
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline1 = Pipeline([
('count_vectorizer', CountVectorizer(tokenizer=tokenize)),
('tfidf_transformer', TfidfTransformer()),
('classifier', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline2 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('count_vectorizer', CountVectorizer(tokenizer=tokenize)),
('tfidf_transformer', TfidfTransformer())
])),
('need_help_transformer', ContainsHelpNeed())
])),
('classifier', MultiOutputClassifier(AdaBoostClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline_fitted = pipeline1.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_prediction_train = pipeline_fitted.predict(X_train)
y_prediction_test = pipeline_fitted.predict(X_test)
print(classification_report(y_test.values, y_prediction_test, target_names=y.columns.values))
print('\n',classification_report(y_train.values, y_prediction_train, target_names=y.columns.values))
###Output
precision recall f1-score support
related 0.81 0.97 0.88 15081
request 0.79 0.52 0.63 3350
offer 0.52 0.12 0.20 90
aid_related 0.77 0.62 0.69 8149
medical_help 0.64 0.28 0.39 1536
medical_products 0.71 0.36 0.48 963
search_and_rescue 0.69 0.22 0.33 540
security 0.47 0.08 0.13 353
military 0.69 0.41 0.52 659
water 0.78 0.66 0.71 1239
food 0.81 0.72 0.76 2155
shelter 0.81 0.56 0.66 1747
clothing 0.78 0.46 0.58 284
money 0.61 0.31 0.41 451
missing_people 0.59 0.18 0.28 226
refugees 0.66 0.29 0.40 639
death 0.77 0.49 0.60 910
other_aid 0.56 0.15 0.24 2603
infrastructure_related 0.52 0.12 0.19 1269
transport 0.73 0.24 0.36 913
buildings 0.70 0.43 0.53 1002
electricity 0.65 0.32 0.43 392
tools 0.53 0.08 0.13 117
hospitals 0.48 0.14 0.21 207
shops 0.72 0.14 0.24 91
aid_centers 0.58 0.13 0.22 227
other_infrastructure 0.48 0.11 0.18 859
weather_related 0.86 0.66 0.75 5500
floods 0.88 0.57 0.70 1639
storm 0.77 0.53 0.63 1828
fire 0.72 0.37 0.49 220
earthquake 0.89 0.77 0.83 1849
cold 0.77 0.38 0.51 396
other_weather 0.54 0.15 0.23 1022
direct_report 0.73 0.41 0.52 3780
micro avg 0.79 0.60 0.68 62286
macro avg 0.69 0.37 0.46 62286
weighted avg 0.76 0.60 0.65 62286
samples avg 0.67 0.53 0.55 62286
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
pipeline1.get_params()
parameters = {'classifier__estimator__n_estimators': [40,70,100] }
cv = GridSearchCV(pipeline1, param_grid=parameters)
cv.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_cv_prediction_test = cv.predict(X_test)
y_cv_prediction_train = cv.predict(X_train)
print(classification_report(y_test.values, y_cv_prediction_test, target_names=y.columns.values))
###Output
precision recall f1-score support
related 0.81 0.96 0.88 4959
request 0.76 0.53 0.62 1128
offer 0.00 0.00 0.00 25
aid_related 0.76 0.63 0.69 2678
medical_help 0.54 0.24 0.33 542
medical_products 0.62 0.30 0.40 358
search_and_rescue 0.50 0.19 0.27 189
security 0.25 0.08 0.12 115
military 0.52 0.31 0.39 231
water 0.72 0.63 0.67 394
food 0.79 0.73 0.76 714
shelter 0.72 0.57 0.63 580
clothing 0.67 0.39 0.49 106
money 0.51 0.31 0.38 133
missing_people 0.28 0.10 0.15 69
refugees 0.49 0.27 0.35 199
death 0.65 0.41 0.50 288
other_aid 0.46 0.16 0.24 826
infrastructure_related 0.38 0.12 0.18 422
transport 0.66 0.27 0.38 323
buildings 0.66 0.43 0.52 351
electricity 0.49 0.27 0.35 129
tools 0.00 0.00 0.00 34
hospitals 0.14 0.08 0.10 65
shops 0.00 0.00 0.00 27
aid_centers 0.08 0.05 0.06 64
other_infrastructure 0.35 0.12 0.18 298
weather_related 0.83 0.66 0.73 1820
floods 0.79 0.55 0.65 541
storm 0.74 0.53 0.62 610
fire 0.42 0.19 0.26 73
earthquake 0.87 0.77 0.82 580
cold 0.62 0.34 0.44 134
other_weather 0.46 0.14 0.21 367
direct_report 0.66 0.44 0.53 1193
micro avg 0.74 0.59 0.66 20565
macro avg 0.52 0.34 0.40 20565
weighted avg 0.70 0.59 0.63 20565
samples avg 0.63 0.51 0.52 20565
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
## Trying to improve the model with a custom transformer, which checks if the message contains 'need' or 'help'
pipeline2_fitted = pipeline2.fit(X_train, y_train)
y_2_prediction_train = pipeline2_fitted.predict(X_train)
y_2_prediction_test = pipeline2_fitted.predict(X_test)
print(classification_report(y_test.values, y_2_prediction_test, target_names=y.columns.values))
###Output
precision recall f1-score support
related 0.79 0.97 0.87 4959
request 0.75 0.48 0.59 1128
offer 0.00 0.00 0.00 25
aid_related 0.75 0.62 0.68 2678
medical_help 0.56 0.25 0.34 542
medical_products 0.62 0.30 0.41 358
search_and_rescue 0.59 0.20 0.29 189
security 0.32 0.09 0.14 115
military 0.59 0.31 0.40 231
water 0.71 0.63 0.67 394
food 0.81 0.65 0.72 714
shelter 0.78 0.58 0.67 580
clothing 0.74 0.35 0.47 106
money 0.53 0.32 0.40 133
missing_people 0.43 0.13 0.20 69
refugees 0.51 0.22 0.30 199
death 0.71 0.47 0.56 288
other_aid 0.48 0.16 0.24 826
infrastructure_related 0.36 0.09 0.14 422
transport 0.66 0.23 0.34 323
buildings 0.74 0.42 0.54 351
electricity 0.51 0.29 0.37 129
tools 0.00 0.00 0.00 34
hospitals 0.09 0.05 0.06 65
shops 0.14 0.04 0.06 27
aid_centers 0.21 0.08 0.11 64
other_infrastructure 0.33 0.08 0.13 298
weather_related 0.85 0.63 0.72 1820
floods 0.84 0.54 0.66 541
storm 0.77 0.47 0.58 610
fire 0.47 0.30 0.37 73
earthquake 0.88 0.77 0.82 580
cold 0.72 0.33 0.45 134
other_weather 0.51 0.12 0.19 367
direct_report 0.65 0.41 0.50 1193
micro avg 0.75 0.58 0.66 20565
macro avg 0.55 0.33 0.40 20565
weighted avg 0.71 0.58 0.62 20565
samples avg 0.65 0.51 0.52 20565
###Markdown
9. Export your model as a pickle file
###Code
import pickle
with open('models/classifier.pkl', 'wb') as pickle_file:
    pickle.dump(cv, pickle_file)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import re
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt', 'stopwords', 'wordnet'])
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('messages_table', engine)
X = df.message
Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
X.shape, Y.shape
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# normalization
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower() )
# splite text into words
words = word_tokenize(text)
# remove stop words
words = [w for w in words if w not in stopwords.words("english")]
# lemmatize words
words = [WordNetLemmatizer().lemmatize(w).strip() for w in words]
# stem words
words = [PorterStemmer().stem(w) for w in words]
return words
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
# train classifier
pipeline.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# predict on test data
Y_pred = pipeline.predict(X_test)
for i, col in enumerate(Y_test):
print('Categories: {}'.format(col))
print(classification_report(Y_test[col], Y_pred[:, i]))
print('Accuracy: {}'.format((Y_test.values == Y_pred).mean()))
pipeline.get_params()
###Output
_____no_output_____
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
cv_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
parameters = {
'clf__estimator__n_estimators': [20, 50],
'clf__estimator__min_samples_split': [4, 6]
}
cv = GridSearchCV(cv_pipeline, param_grid=parameters, verbose = 4)
cv.fit(X_train, Y_train)
Y_pred = cv.predict(X_test)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
[CV] clf__estimator__min_samples_split=4, clf__estimator__n_estimators=20
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
for i, col in enumerate(Y_test):
print('Categories: {}'.format(col))
print(classification_report(Y_test[col], Y_pred[:, i]))
print('Accuracy: {}'.format((Y_test.values == Y_pred).mean()))
###Output
Categories: related
precision recall f1-score support
0 0.73 0.39 0.51 1583
1 0.83 0.95 0.89 4924
accuracy 0.82 6507
macro avg 0.78 0.67 0.70 6507
weighted avg 0.80 0.82 0.79 6507
Categories: request
precision recall f1-score support
0 0.91 0.98 0.94 5397
1 0.85 0.52 0.65 1110
accuracy 0.90 6507
macro avg 0.88 0.75 0.79 6507
weighted avg 0.90 0.90 0.89 6507
Categories: offer
precision recall f1-score support
0 1.00 1.00 1.00 6476
1 0.00 0.00 0.00 31
accuracy 1.00 6507
macro avg 0.50 0.50 0.50 6507
weighted avg 0.99 1.00 0.99 6507
Categories: aid_related
precision recall f1-score support
0 0.81 0.83 0.82 3821
1 0.75 0.72 0.74 2686
accuracy 0.79 6507
macro avg 0.78 0.78 0.78 6507
weighted avg 0.78 0.79 0.78 6507
Categories: medical_help
precision recall f1-score support
0 0.93 1.00 0.96 6010
1 0.73 0.10 0.17 497
accuracy 0.93 6507
macro avg 0.83 0.55 0.57 6507
weighted avg 0.92 0.93 0.90 6507
Categories: medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6168
1 0.81 0.06 0.12 339
accuracy 0.95 6507
macro avg 0.88 0.53 0.55 6507
weighted avg 0.94 0.95 0.93 6507
Categories: search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6327
1 0.52 0.07 0.13 180
accuracy 0.97 6507
macro avg 0.75 0.54 0.56 6507
weighted avg 0.96 0.97 0.96 6507
Categories: security
precision recall f1-score support
0 0.98 1.00 0.99 6408
1 0.50 0.01 0.02 99
accuracy 0.98 6507
macro avg 0.74 0.50 0.51 6507
weighted avg 0.98 0.98 0.98 6507
Categories: military
precision recall f1-score support
0 0.97 1.00 0.98 6297
1 0.86 0.06 0.11 210
accuracy 0.97 6507
macro avg 0.91 0.53 0.55 6507
weighted avg 0.97 0.97 0.96 6507
Categories: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6507
accuracy 1.00 6507
macro avg 1.00 1.00 1.00 6507
weighted avg 1.00 1.00 1.00 6507
Categories: water
precision recall f1-score support
0 0.96 0.99 0.98 6101
1 0.83 0.40 0.54 406
accuracy 0.96 6507
macro avg 0.90 0.70 0.76 6507
weighted avg 0.95 0.96 0.95 6507
Categories: food
precision recall f1-score support
0 0.95 0.99 0.97 5802
1 0.84 0.56 0.67 705
accuracy 0.94 6507
macro avg 0.89 0.78 0.82 6507
weighted avg 0.94 0.94 0.94 6507
Categories: shelter
precision recall f1-score support
0 0.94 0.99 0.97 5900
1 0.82 0.43 0.56 607
accuracy 0.94 6507
macro avg 0.88 0.71 0.76 6507
weighted avg 0.93 0.94 0.93 6507
Categories: clothing
precision recall f1-score support
0 0.99 1.00 0.99 6404
1 0.59 0.10 0.17 103
accuracy 0.98 6507
macro avg 0.79 0.55 0.58 6507
weighted avg 0.98 0.98 0.98 6507
Categories: money
precision recall f1-score support
0 0.98 1.00 0.99 6364
1 1.00 0.04 0.08 143
accuracy 0.98 6507
macro avg 0.99 0.52 0.53 6507
weighted avg 0.98 0.98 0.97 6507
Categories: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6429
1 0.00 0.00 0.00 78
accuracy 0.99 6507
macro avg 0.49 0.50 0.50 6507
weighted avg 0.98 0.99 0.98 6507
Categories: refugees
precision recall f1-score support
0 0.97 1.00 0.98 6278
1 0.36 0.02 0.03 229
accuracy 0.96 6507
macro avg 0.66 0.51 0.51 6507
weighted avg 0.94 0.96 0.95 6507
Categories: death
precision recall f1-score support
0 0.96 1.00 0.98 6215
1 0.84 0.21 0.34 292
accuracy 0.96 6507
macro avg 0.90 0.61 0.66 6507
weighted avg 0.96 0.96 0.95 6507
Categories: other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5620
1 0.56 0.02 0.04 887
accuracy 0.86 6507
macro avg 0.71 0.51 0.48 6507
weighted avg 0.82 0.86 0.81 6507
Categories: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6097
1 0.25 0.00 0.00 410
accuracy 0.94 6507
macro avg 0.59 0.50 0.49 6507
weighted avg 0.89 0.94 0.91 6507
Categories: transport
precision recall f1-score support
0 0.96 1.00 0.98 6205
1 0.82 0.11 0.19 302
accuracy 0.96 6507
macro avg 0.89 0.55 0.58 6507
weighted avg 0.95 0.96 0.94 6507
Categories: buildings
precision recall f1-score support
0 0.95 1.00 0.97 6160
1 0.81 0.10 0.17 347
accuracy 0.95 6507
macro avg 0.88 0.55 0.57 6507
weighted avg 0.94 0.95 0.93 6507
Categories: electricity
precision recall f1-score support
0 0.98 1.00 0.99 6364
1 0.83 0.03 0.07 143
accuracy 0.98 6507
macro avg 0.91 0.52 0.53 6507
weighted avg 0.98 0.98 0.97 6507
Categories: tools
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
def build_model():
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('svd', TruncatedSVD()),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(MLPClassifier()))
])
parameters = {
'clf__estimator__early_stopping': [False, True],
'clf__estimator__hidden_layer_sizes': [100, 200],
'clf__estimator__learning_rate_init': [0.001, 0.01]
}
cv = GridSearchCV(pipeline, param_grid=parameters, verbose = 4)
return cv
mlp_model = build_model()
mlp_model.fit(X_train, Y_train)
Y_pred = mlp_model.predict(X_test)
for i, col in enumerate(Y_test):
print('Categories: {}'.format(col))
print(classification_report(Y_test[col], Y_pred[:, i]))
print('Accuracy: {}'.format((Y_test.values == Y_pred).mean()))
###Output
Categories: related
precision recall f1-score support
0 0.00 0.00 0.00 1583
1 0.76 1.00 0.86 4924
accuracy 0.76 6507
macro avg 0.38 0.50 0.43 6507
weighted avg 0.57 0.76 0.65 6507
Categories: request
precision recall f1-score support
0 0.83 1.00 0.91 5397
1 0.00 0.00 0.00 1110
accuracy 0.83 6507
macro avg 0.41 0.50 0.45 6507
weighted avg 0.69 0.83 0.75 6507
Categories: offer
precision recall f1-score support
0 1.00 1.00 1.00 6476
1 0.00 0.00 0.00 31
accuracy 1.00 6507
macro avg 0.50 0.50 0.50 6507
weighted avg 0.99 1.00 0.99 6507
Categories: aid_related
precision recall f1-score support
0 0.59 1.00 0.74 3821
1 0.00 0.00 0.00 2686
accuracy 0.59 6507
macro avg 0.29 0.50 0.37 6507
weighted avg 0.34 0.59 0.43 6507
Categories: medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6010
1 0.00 0.00 0.00 497
accuracy 0.92 6507
macro avg 0.46 0.50 0.48 6507
weighted avg 0.85 0.92 0.89 6507
Categories: medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6168
1 0.00 0.00 0.00 339
accuracy 0.95 6507
macro avg 0.47 0.50 0.49 6507
weighted avg 0.90 0.95 0.92 6507
Categories: search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6327
1 0.00 0.00 0.00 180
accuracy 0.97 6507
macro avg 0.49 0.50 0.49 6507
weighted avg 0.95 0.97 0.96 6507
Categories: security
precision recall f1-score support
0 0.98 1.00 0.99 6408
1 0.00 0.00 0.00 99
accuracy 0.98 6507
macro avg 0.49 0.50 0.50 6507
weighted avg 0.97 0.98 0.98 6507
Categories: military
precision recall f1-score support
0 0.97 1.00 0.98 6297
1 0.00 0.00 0.00 210
accuracy 0.97 6507
macro avg 0.48 0.50 0.49 6507
weighted avg 0.94 0.97 0.95 6507
Categories: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6507
accuracy 1.00 6507
macro avg 1.00 1.00 1.00 6507
weighted avg 1.00 1.00 1.00 6507
Categories: water
precision recall f1-score support
0 0.94 1.00 0.97 6101
1 0.00 0.00 0.00 406
accuracy 0.94 6507
macro avg 0.47 0.50 0.48 6507
weighted avg 0.88 0.94 0.91 6507
Categories: food
precision recall f1-score support
0 0.89 1.00 0.94 5802
1 0.00 0.00 0.00 705
accuracy 0.89 6507
macro avg 0.45 0.50 0.47 6507
weighted avg 0.80 0.89 0.84 6507
Categories: shelter
precision recall f1-score support
0 0.91 1.00 0.95 5900
1 0.00 0.00 0.00 607
accuracy 0.91 6507
macro avg 0.45 0.50 0.48 6507
weighted avg 0.82 0.91 0.86 6507
Categories: clothing
precision recall f1-score support
0 0.98 1.00 0.99 6404
1 0.00 0.00 0.00 103
accuracy 0.98 6507
macro avg 0.49 0.50 0.50 6507
weighted avg 0.97 0.98 0.98 6507
Categories: money
precision recall f1-score support
0 0.98 1.00 0.99 6364
1 0.00 0.00 0.00 143
accuracy 0.98 6507
macro avg 0.49 0.50 0.49 6507
weighted avg 0.96 0.98 0.97 6507
Categories: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6429
1 0.00 0.00 0.00 78
accuracy 0.99 6507
macro avg 0.49 0.50 0.50 6507
weighted avg 0.98 0.99 0.98 6507
Categories: refugees
precision recall f1-score support
0 0.96 1.00 0.98 6278
1 0.00 0.00 0.00 229
accuracy 0.96 6507
macro avg 0.48 0.50 0.49 6507
weighted avg 0.93 0.96 0.95 6507
Categories: death
precision recall f1-score support
0 0.96 1.00 0.98 6215
1 0.00 0.00 0.00 292
accuracy 0.96 6507
macro avg 0.48 0.50 0.49 6507
weighted avg 0.91 0.96 0.93 6507
Categories: other_aid
precision recall f1-score support
0 0.86 1.00 0.93 5620
1 0.00 0.00 0.00 887
accuracy 0.86 6507
macro avg 0.43 0.50 0.46 6507
weighted avg 0.75 0.86 0.80 6507
Categories: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6097
1 0.00 0.00 0.00 410
accuracy 0.94 6507
macro avg 0.47 0.50 0.48 6507
weighted avg 0.88 0.94 0.91 6507
Categories: transport
precision recall f1-score support
0 0.95 1.00 0.98 6205
1 0.00 0.00 0.00 302
accuracy 0.95 6507
macro avg 0.48 0.50 0.49 6507
weighted avg 0.91 0.95 0.93 6507
Categories: buildings
precision recall f1-score support
0 0.95 1.00 0.97 6160
1 0.00 0.00 0.00 347
accuracy 0.95 6507
macro avg 0.47 0.50 0.49 6507
weighted avg 0.90 0.95 0.92 6507
Categories: electricity
precision recall f1-score support
0 0.98 1.00 0.99 6364
1 0.00 0.00 0.00 143
accuracy 0.98 6507
macro avg 0.49 0.50 0.49 6507
weighted avg 0.96 0.98 0.97 6507
Categories: tools
precision recall f1-score support
0 1.00 1.00 1.00 6475
1 0.00 0.00 0.00 32
accuracy 1.00 6507
macro avg 0.50 0.50 0.50 6507
weighted avg 0.99 1.00 0.99 6507
Categories: hospitals
precision recall f1-score support
0 0.99 1.00 1.00 6445
1 0.00 0.00 0.00 62
accuracy 0.99 6507
macro avg 0.50 0.50 0.50 6507
weighted avg 0.98 0.99 0.99 6507
Categories: shops
precision recall f1-score support
0 1.00 1.00 1.00 6479
1 0.00 0.00 0.00 28
accuracy 1.00 6507
macro avg 0.50 0.50 0.50 6507
weighted avg 0.99 1.00 0.99 6507
Categories: aid_centers
###Markdown
9. Export your model as a pickle file
###Code
import pickle
mlp_file_name = "pickle_mlp_model.pkl"
with open(mlp_file_name, 'wb') as file:
pickle.dump(mlp_model, file)
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`
Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
###Code
# import libraries
import sys  # cmd input
import re
import pickle  # used by export_model to serialize the trained model
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt', 'stopwords', 'wordnet'])
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
def load_data(db_file_path):
engine = create_engine('sqlite:///{}'.format(db_file_path))
df = pd.read_sql_table('messages_table', engine)
X = df.message
y = df.drop(['id','message','original','genre'], axis=1).fillna(0)
return X, y
def tokenize(text):
# normalization
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower() )
# splite text into words
words = word_tokenize(text)
# remove stop words
words = [w for w in words if w not in stopwords.words("english")]
# lemmatize words
words = [WordNetLemmatizer().lemmatize(w).strip() for w in words]
# stem words
words = [PorterStemmer().stem(w) for w in words]
return words
def build_model():
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('svd', TruncatedSVD()),
('tfidf', TfidfTransformer()),
('clf', MLPClassifier())
])
parameters = {
'clf__early_stopping': (False, True),
'clf__learning_rate': ('constant', 'invscaling', 'adaptive'),
}
model_pipeline = GridSearchCV(pipeline, param_grid=parameters, verbose = 4)
return model_pipeline
def train(X, y, model):
# train test split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# fit model
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
return model
def export_model(model):
# Export model as a pickle file
mlp_file_name = "pickle_mlp_model.pkl"
with open(mlp_file_name, 'wb') as file:
pickle.dump(model, file)
def run_pipeline(data_file):
X, y = load_data(data_file) # run ETL pipeline
model = build_model() # build model pipeline
model = train(X, y, model) # train model pipeline
export_model(model) # save model
if __name__ == '__main__':
data_file = sys.argv[1] # get filename of dataset
run_pipeline(data_file) # run data pipeline
###Output
_____no_output_____
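###Markdown
The script above is driven by `sys.argv`, so it is meant to be run from the command line, but the same entry point can also be exercised directly from the notebook. A minimal sketch, assuming `DisasterResponse.db` is the database produced by the ETL step (the grid search over the MLP will take a while on the full dataset):
###Code
run_pipeline('DisasterResponse.db')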
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
import nltk
nltk.download(['punkt', 'wordnet'])
# import libraries
import pandas as pd
import numpy as np
import sqlite3
from sqlalchemy import create_engine
import string
import re
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
# load data from database
engine = sqlite3.connect('DisasterResponse.db')
df = pd.read_sql("SELECT * FROM messages_disaster", con=engine)
df.head()
x_cols = list(df.columns)[1]
y_cols = list(df.columns)[4:]
x_cols
X = df[x_cols]
Y = df[y_cols]
X_train, X_test, y_train, y_test = train_test_split(X, Y)
len(Y.columns)
len(X)
X_test.shape
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
'''
Input:
text (str): natural language text
Output:
clean_tokens (list): list of clean tokens
'''
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
text = text.translate(str.maketrans('', '', string.punctuation))  # strip punctuation
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
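###Markdown
A quick sanity check of the tokenizer on an illustrative message containing a URL (not taken from the dataset); the URL should come back as the single token `urlplaceholder` and the punctuation should be stripped:
###Code
tokenize("Help needed near the bridge! See http://example.com/map")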
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# from catboost import CatBoostRegressor
from sklearn.ensemble import RandomForestClassifier
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline. - Split data into train and test sets - Train pipeline
###Code
pipeline.fit(X_train, y_train)
param_list = pipeline.get_params()
for k in param_list.keys():
print(f'---------------{k}-----------------------')
print(param_list[k])
###Output
---------------memory-----------------------
None
---------------steps-----------------------
[('vect', CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=<function tokenize at 0x7f53ed084378>, vocabulary=None)), ('tfidf', TfidfTransformer(norm='l2', smooth_idf=True, sublinear_tf=False, use_idf=True)), ('clf', MultiOutputClassifier(estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False),
n_jobs=1))]
---------------vect-----------------------
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=<function tokenize at 0x7f53ed084378>, vocabulary=None)
---------------tfidf-----------------------
TfidfTransformer(norm='l2', smooth_idf=True, sublinear_tf=False, use_idf=True)
---------------clf-----------------------
MultiOutputClassifier(estimator=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False),
n_jobs=1)
---------------vect__analyzer-----------------------
word
---------------vect__binary-----------------------
False
---------------vect__decode_error-----------------------
strict
---------------vect__dtype-----------------------
<class 'numpy.int64'>
---------------vect__encoding-----------------------
utf-8
---------------vect__input-----------------------
content
---------------vect__lowercase-----------------------
True
---------------vect__max_df-----------------------
1.0
---------------vect__max_features-----------------------
None
---------------vect__min_df-----------------------
1
---------------vect__ngram_range-----------------------
(1, 1)
---------------vect__preprocessor-----------------------
None
---------------vect__stop_words-----------------------
None
---------------vect__strip_accents-----------------------
None
---------------vect__token_pattern-----------------------
(?u)\b\w\w+\b
---------------vect__tokenizer-----------------------
<function tokenize at 0x7f53ed084378>
---------------vect__vocabulary-----------------------
None
---------------tfidf__norm-----------------------
l2
---------------tfidf__smooth_idf-----------------------
True
---------------tfidf__sublinear_tf-----------------------
False
---------------tfidf__use_idf-----------------------
True
---------------clf__estimator__bootstrap-----------------------
True
---------------clf__estimator__class_weight-----------------------
None
---------------clf__estimator__criterion-----------------------
gini
---------------clf__estimator__max_depth-----------------------
None
---------------clf__estimator__max_features-----------------------
auto
---------------clf__estimator__max_leaf_nodes-----------------------
None
---------------clf__estimator__min_impurity_decrease-----------------------
0.0
---------------clf__estimator__min_impurity_split-----------------------
None
---------------clf__estimator__min_samples_leaf-----------------------
1
---------------clf__estimator__min_samples_split-----------------------
2
---------------clf__estimator__min_weight_fraction_leaf-----------------------
0.0
---------------clf__estimator__n_estimators-----------------------
10
---------------clf__estimator__n_jobs-----------------------
1
---------------clf__estimator__oob_score-----------------------
False
---------------clf__estimator__random_state-----------------------
None
---------------clf__estimator__verbose-----------------------
0
---------------clf__estimator__warm_start-----------------------
False
---------------clf__estimator-----------------------
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False)
---------------clf__n_jobs-----------------------
1
###Markdown
5. Test your model. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
from sklearn.metrics import classification_report
def display_results(y_true, y_pred):
# each category is a binary 0/1 label, so report each column on its own without the full category list
for i, col in enumerate(y_true):
print(f'{col} |------------------------------>')
print(classification_report(y_true[col], y_pred[:,i]))
# predict on test data
y_pred = pipeline.predict(X_test)
# display results
display_results(y_test, y_pred)
###Output
related |------------------------------>
precision recall f1-score support
related 0.64 0.36 0.46 1535
request 0.82 0.94 0.87 4963
offer 0.71 0.09 0.16 56
avg / total 0.77 0.79 0.77 6554
request |------------------------------>
precision recall f1-score support
related 0.89 0.98 0.93 5428
request 0.82 0.39 0.53 1126
avg / total 0.88 0.88 0.86 6554
offer |------------------------------>
precision recall f1-score support
related 1.00 1.00 1.00 6527
request 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6554
aid_related |------------------------------>
precision recall f1-score support
related 0.73 0.87 0.79 3880
request 0.74 0.54 0.62 2674
avg / total 0.74 0.73 0.72 6554
medical_help |------------------------------>
precision recall f1-score support
related 0.93 1.00 0.96 6048
request 0.65 0.08 0.14 506
avg / total 0.91 0.93 0.90 6554
medical_products |------------------------------>
precision recall f1-score support
related 0.96 1.00 0.98 6250
request 0.62 0.08 0.13 304
avg / total 0.94 0.95 0.94 6554
search_and_rescue |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6389
request 0.67 0.07 0.13 165
avg / total 0.97 0.98 0.97 6554
security |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6444
request 0.33 0.01 0.02 110
avg / total 0.97 0.98 0.98 6554
military |------------------------------>
precision recall f1-score support
related 0.97 1.00 0.98 6324
request 0.52 0.07 0.13 230
avg / total 0.95 0.97 0.95 6554
child_alone |------------------------------>
precision recall f1-score support
related 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water |------------------------------>
precision recall f1-score support
related 0.96 1.00 0.98 6149
request 0.86 0.29 0.43 405
avg / total 0.95 0.95 0.94 6554
food |------------------------------>
precision recall f1-score support
related 0.93 0.99 0.96 5842
request 0.80 0.36 0.50 712
avg / total 0.91 0.92 0.91 6554
shelter |------------------------------>
precision recall f1-score support
related 0.93 0.99 0.96 5983
request 0.80 0.21 0.33 571
avg / total 0.92 0.93 0.91 6554
clothing |------------------------------>
precision recall f1-score support
related 0.99 1.00 0.99 6455
request 0.75 0.06 0.11 99
avg / total 0.98 0.99 0.98 6554
money |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6401
request 0.50 0.02 0.04 153
avg / total 0.97 0.98 0.97 6554
missing_people |------------------------------>
precision recall f1-score support
related 0.99 1.00 0.99 6469
request 0.50 0.01 0.02 85
avg / total 0.98 0.99 0.98 6554
refugees |------------------------------>
precision recall f1-score support
related 0.97 1.00 0.98 6325
request 0.80 0.02 0.03 229
avg / total 0.96 0.97 0.95 6554
death |------------------------------>
precision recall f1-score support
related 0.96 1.00 0.98 6246
request 0.89 0.10 0.19 308
avg / total 0.95 0.96 0.94 6554
other_aid |------------------------------>
precision recall f1-score support
related 0.88 1.00 0.93 5709
request 0.62 0.04 0.07 845
avg / total 0.84 0.87 0.82 6554
infrastructure_related |------------------------------>
precision recall f1-score support
related 0.94 1.00 0.97 6137
request 0.00 0.00 0.00 417
avg / total 0.88 0.94 0.91 6554
transport |------------------------------>
precision recall f1-score support
related 0.96 1.00 0.98 6246
request 0.80 0.05 0.10 308
avg / total 0.95 0.95 0.94 6554
buildings |------------------------------>
precision recall f1-score support
related 0.95 1.00 0.97 6213
request 0.84 0.06 0.11 341
avg / total 0.95 0.95 0.93 6554
electricity |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6436
request 0.60 0.03 0.05 118
avg / total 0.98 0.98 0.97 6554
tools |------------------------------>
precision recall f1-score support
related 0.99 1.00 1.00 6515
request 0.00 0.00 0.00 39
avg / total 0.99 0.99 0.99 6554
hospitals |------------------------------>
precision recall f1-score support
related 0.99 1.00 0.99 6486
request 0.00 0.00 0.00 68
avg / total 0.98 0.99 0.98 6554
shops |------------------------------>
precision recall f1-score support
related 1.00 1.00 1.00 6527
request 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6554
aid_centers |------------------------------>
precision recall f1-score support
related 0.99 1.00 1.00 6499
request 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.99 6554
other_infrastructure |------------------------------>
precision recall f1-score support
related 0.95 1.00 0.98 6253
request 0.00 0.00 0.00 301
avg / total 0.91 0.95 0.93 6554
weather_related |------------------------------>
precision recall f1-score support
related 0.84 0.97 0.90 4760
request 0.85 0.52 0.65 1794
avg / total 0.84 0.84 0.83 6554
floods |------------------------------>
precision recall f1-score support
related 0.94 1.00 0.97 6015
request 0.85 0.24 0.37 539
avg / total 0.93 0.93 0.92 6554
storm |------------------------------>
precision recall f1-score support
related 0.93 0.99 0.96 5947
request 0.77 0.29 0.42 607
avg / total 0.92 0.93 0.91 6554
fire |------------------------------>
precision recall f1-score support
related 0.99 1.00 1.00 6490
request 1.00 0.03 0.06 64
avg / total 0.99 0.99 0.99 6554
earthquake |------------------------------>
precision recall f1-score support
related 0.96 0.99 0.98 5974
request 0.88 0.58 0.70 580
avg / total 0.95 0.96 0.95 6554
cold |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6426
request 0.50 0.05 0.09 128
avg / total 0.97 0.98 0.97 6554
other_weather |------------------------------>
precision recall f1-score support
related 0.95 1.00 0.97 6208
request 0.58 0.05 0.10 346
avg / total 0.93 0.95 0.93 6554
direct_report |------------------------------>
precision recall f1-score support
related 0.85 0.98 0.91 5272
request 0.81 0.30 0.44 1282
avg / total 0.84 0.85 0.82 6554
###Markdown
6. Improve your model. Use grid search to find better parameters.
###Code
# parameter search is limited because of processing limitations
parameters = {'clf__estimator__n_estimators': [20, 30, 50] }
# 'vect__ngram_range': ((1, 1), (1, 2))
# 'vect__max_df': (0.5, 0.75, 1.0)
from sklearn.model_selection import GridSearchCV
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your model. Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
# display results
display_results(y_test, y_pred)
###Output
related |------------------------------>
precision recall f1-score support
related 0.74 0.29 0.42 1535
request 0.81 0.97 0.88 4963
offer 1.00 0.09 0.16 56
avg / total 0.79 0.80 0.77 6554
request |------------------------------>
precision recall f1-score support
related 0.89 0.99 0.94 5428
request 0.87 0.42 0.57 1126
avg / total 0.89 0.89 0.87 6554
offer |------------------------------>
precision recall f1-score support
related 1.00 1.00 1.00 6527
request 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6554
aid_related |------------------------------>
precision recall f1-score support
related 0.76 0.87 0.81 3880
request 0.77 0.60 0.68 2674
avg / total 0.76 0.76 0.76 6554
medical_help |------------------------------>
precision recall f1-score support
related 0.93 1.00 0.96 6048
request 0.72 0.06 0.11 506
avg / total 0.91 0.93 0.90 6554
medical_products |------------------------------>
precision recall f1-score support
related 0.96 1.00 0.98 6250
request 0.65 0.05 0.09 304
avg / total 0.94 0.95 0.94 6554
search_and_rescue |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6389
request 0.71 0.10 0.18 165
avg / total 0.97 0.98 0.97 6554
security |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6444
request 0.00 0.00 0.00 110
avg / total 0.97 0.98 0.97 6554
military |------------------------------>
precision recall f1-score support
related 0.97 1.00 0.98 6324
request 1.00 0.03 0.06 230
avg / total 0.97 0.97 0.95 6554
child_alone |------------------------------>
precision recall f1-score support
related 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water |------------------------------>
precision recall f1-score support
related 0.95 1.00 0.97 6149
request 0.96 0.22 0.36 405
avg / total 0.95 0.95 0.94 6554
food |------------------------------>
precision recall f1-score support
related 0.93 0.99 0.96 5842
request 0.88 0.39 0.54 712
avg / total 0.93 0.93 0.92 6554
shelter |------------------------------>
precision recall f1-score support
related 0.93 0.99 0.96 5983
request 0.82 0.25 0.38 571
avg / total 0.92 0.93 0.91 6554
clothing |------------------------------>
precision recall f1-score support
related 0.99 1.00 0.99 6455
request 0.73 0.08 0.15 99
avg / total 0.98 0.99 0.98 6554
money |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6401
request 0.60 0.02 0.04 153
avg / total 0.97 0.98 0.97 6554
missing_people |------------------------------>
precision recall f1-score support
related 0.99 1.00 0.99 6469
request 0.00 0.00 0.00 85
avg / total 0.97 0.99 0.98 6554
refugees |------------------------------>
precision recall f1-score support
related 0.97 1.00 0.98 6325
request 0.00 0.00 0.00 229
avg / total 0.93 0.96 0.95 6554
death |------------------------------>
precision recall f1-score support
related 0.96 1.00 0.98 6246
request 0.81 0.07 0.13 308
avg / total 0.95 0.96 0.94 6554
other_aid |------------------------------>
precision recall f1-score support
related 0.87 1.00 0.93 5709
request 0.55 0.01 0.03 845
avg / total 0.83 0.87 0.81 6554
infrastructure_related |------------------------------>
precision recall f1-score support
related 0.94 1.00 0.97 6137
request 0.25 0.00 0.00 417
avg / total 0.89 0.94 0.91 6554
transport |------------------------------>
precision recall f1-score support
related 0.96 1.00 0.98 6246
request 0.72 0.08 0.15 308
avg / total 0.95 0.96 0.94 6554
buildings |------------------------------>
precision recall f1-score support
related 0.95 1.00 0.97 6213
request 0.63 0.05 0.09 341
avg / total 0.93 0.95 0.93 6554
electricity |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6436
request 0.80 0.03 0.07 118
avg / total 0.98 0.98 0.97 6554
tools |------------------------------>
precision recall f1-score support
related 0.99 1.00 1.00 6515
request 0.00 0.00 0.00 39
avg / total 0.99 0.99 0.99 6554
hospitals |------------------------------>
precision recall f1-score support
related 0.99 1.00 0.99 6486
request 0.00 0.00 0.00 68
avg / total 0.98 0.99 0.98 6554
shops |------------------------------>
precision recall f1-score support
related 1.00 1.00 1.00 6527
request 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6554
aid_centers |------------------------------>
precision recall f1-score support
related 0.99 1.00 1.00 6499
request 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.99 6554
other_infrastructure |------------------------------>
precision recall f1-score support
related 0.95 1.00 0.98 6253
request 0.00 0.00 0.00 301
avg / total 0.91 0.95 0.93 6554
weather_related |------------------------------>
precision recall f1-score support
related 0.86 0.97 0.91 4760
request 0.87 0.59 0.71 1794
avg / total 0.87 0.86 0.86 6554
floods |------------------------------>
precision recall f1-score support
related 0.94 1.00 0.97 6015
request 0.92 0.31 0.47 539
avg / total 0.94 0.94 0.93 6554
storm |------------------------------>
precision recall f1-score support
related 0.94 0.99 0.96 5947
request 0.77 0.40 0.53 607
avg / total 0.93 0.93 0.92 6554
fire |------------------------------>
precision recall f1-score support
related 0.99 1.00 1.00 6490
request 1.00 0.02 0.03 64
avg / total 0.99 0.99 0.99 6554
earthquake |------------------------------>
precision recall f1-score support
related 0.97 0.99 0.98 5974
request 0.89 0.66 0.76 580
avg / total 0.96 0.96 0.96 6554
cold |------------------------------>
precision recall f1-score support
related 0.98 1.00 0.99 6426
request 0.78 0.05 0.10 128
avg / total 0.98 0.98 0.97 6554
other_weather |------------------------------>
precision recall f1-score support
related 0.95 1.00 0.97 6208
request 0.45 0.01 0.03 346
avg / total 0.92 0.95 0.92 6554
direct_report |------------------------------>
precision recall f1-score support
related 0.86 0.98 0.92 5272
request 0.85 0.35 0.49 1282
avg / total 0.86 0.86 0.84 6554
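###Markdown
The report above covers precision, recall and f1 per category; the instructions also ask for accuracy. A minimal sketch of a per-category and overall accuracy summary for the tuned model, assuming `y_test` is the label DataFrame and `y_pred` is the array returned by `cv.predict(X_test)` above:
###Code
# per-category accuracy: fraction of messages whose predicted label matches the true label
category_accuracy = (y_pred == y_test.values).mean(axis=0)
for col, acc in zip(y_test.columns, category_accuracy):
    print(f'{col}: {acc:.3f}')
# overall element-wise accuracy across all categories
print('overall accuracy:', (y_pred == y_test.values).mean())
###Output
_____no_output_____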
###Markdown
8. Try improving your model further. Here are a few ideas: - try other machine learning algorithms - add other features besides the TF-IDF (a sketch of this idea follows the next code cell)
###Code
!pip install catboost
# from catboost import CatBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
parameters = {'clf__estimator__leaf_size': [20, 30, 50] }
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier()))
])
cv = GridSearchCV(pipeline, param_grid=parameters)
param_list = pipeline.get_params()
for k in param_list.keys():
print(f'---------------{k}-----------------------')
print(param_list[k])
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
# display results
display_results(y_test, y_pred)
###Output
_____no_output_____
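###Markdown
On the second idea above (features beyond TF-IDF), a hedged sketch of what an extra hand-crafted feature could look like: a hypothetical `TextLengthExtractor` (not part of the original notebook) combined with the existing bag-of-words pipeline through a `FeatureUnion`. Names and parameters here are illustrative only.
###Code
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class TextLengthExtractor(BaseEstimator, TransformerMixin):
    """Hypothetical extra feature: character length of each message."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([len(text) for text in X]).reshape(-1, 1)

pipeline_with_length = Pipeline([
    ('features', FeatureUnion([
        ('text_pipeline', Pipeline([
            ('vect', CountVectorizer(tokenizer=tokenize)),
            ('tfidf', TfidfTransformer())
        ])),
        ('text_length', TextLengthExtractor())
    ])),
    ('clf', MultiOutputClassifier(RandomForestClassifier()))
])
# pipeline_with_length.fit(X_train, y_train)  # same fit/predict calls as the pipelines above
###Output
_____no_output_____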
###Markdown
9. Export your model as a pickle file
###Code
import pickle
pickle.dump(cv, open('trained_model.pkl', 'wb'))
###Output
_____no_output_____
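###Markdown
For completeness, a minimal sketch of loading the exported pickle back and classifying a new message (assuming the file written above, and that `tokenize` is defined in the loading session since the vectorizer references it):
###Code
import pickle

with open('trained_model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)
# illustrative message, not taken from the dataset
print(loaded_model.predict(['We need water and medical supplies after the storm']))
###Output
_____no_output_____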
###Markdown
ML Pipeline Preparation. Follow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import sys
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger','stopwords'])
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
import re
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import GridSearchCV
# these are for SVD/LSA
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
# load data from database
engine = create_engine('sqlite:///data/DisResp.db')
df =pd.read_sql_table('messages', engine)
X = df['message'].values
display(df.info())
display(X.shape)
Y = df.iloc[:, 4:]
display(Y.shape)
display(Y.head())
# this is to see which categories have few messages associated with them (may be hard to classify)
display(Y.mean(axis=0))
labels = Y.columns.to_list()
display(labels)
Y = df.iloc[:, 4:].values
Y.shape
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stopwords.words('english')]
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline. This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
def build_pipeline(clf, svd=False):
if svd:
# add on the steps to do LSA
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('svd', TruncatedSVD(100)),
('nml', Normalizer(copy=False)),
('multi_clf', MultiOutputClassifier(clf))
])
else:
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('multi_clf', MultiOutputClassifier(clf))
])
return pipeline
###Output
_____no_output_____
###Markdown
4. Train pipeline. - Split data into train and test sets - Train pipeline. This will compare two solvers on the basic parameters without the additional SVD/LSA feature. I'm doing this preliminary screening because trying to do it all with GridSearchCV is taking way too long.
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
all_results = []
# first try the two solvers with basic params and no SVD
# the SGD parameters are taken from an sklearn example on text classification. will tune later if it comes out
# the better model of the two.
clfs = [RandomForestClassifier(random_state=42), SGDClassifier(loss='hinge', penalty='l2',\
alpha=1e-3, random_state=42, max_iter=5, tol=None)]
for clf in clfs:
model = build_pipeline(clf)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)
cl_name = str(type(clf)).split(".")[-1][:-2] # thanks stack overflow
get_results(model, Y_test, Y_pred, labels, cl_name, all_results)
result_df = pd.DataFrame(all_results, columns=['classifier', 'category', 'precis_0', 'rcl_0', 'f1_0', 'support_0',\
'precis_1', 'rcl_1', 'f1_1', 'support_1','accuracy','ma_precision',\
'ma_recall', 'ma_f1'])
display(result_df.head())
result_df.groupby(['classifier'])[['accuracy', 'ma_precision', 'ma_recall', 'ma_f1']].mean()
result_df.groupby(['classifier'])[['accuracy', 'ma_precision', 'ma_recall', 'ma_f1']].min()
# see how many categories have no messages predicted as belonging to them
result_df.query('(f1_1 ==0)').groupby(['classifier'])['category'].count()
###Output
_____no_output_____
###Markdown
Now compare the two solvers on the basic parameters with SVD/LSA.
###Code
all_results = []
# the SGD parameters are taken from an sklearn example on text classification. will tune later if it comes out
# the better model of the two.
clfs = [RandomForestClassifier(random_state=42), SGDClassifier(loss='hinge', penalty='l2',\
alpha=1e-3, random_state=42, max_iter=5, tol=None)]
for clf in clfs:
model = build_pipeline(clf, svd=True)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)
cl_name = str(type(clf)).split(".")[-1][:-2] # thanks stack overflow
get_results(model, Y_test, Y_pred, labels, cl_name, all_results)
# see how much the SVD contributes
display(model.named_steps.svd.explained_variance_ratio_.sum())
result_svd_df = pd.DataFrame(all_results, columns=['classifier', 'category', 'precis_0', 'rcl_0', 'f1_0', 'support_0',\
'precis_1', 'rcl_1', 'f1_1', 'support_1','accuracy','ma_precision',\
'ma_recall', 'ma_f1'])
result_svd_df.groupby(['classifier'])[['accuracy', 'ma_precision', 'ma_recall', 'ma_f1']].mean()
result_svd_df.groupby(['classifier'])[['accuracy', 'ma_precision', 'ma_recall', 'ma_f1']].min()
# see how many categories have no messages predicted as belonging to them
result_svd_df.query('(f1_1 ==0)').groupby(['classifier'])['category'].count()
###Output
_____no_output_____
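###Markdown
Before drawing a conclusion, the two screening runs can be put side by side. A minimal sketch, assuming `result_df` (no SVD) and `result_svd_df` (with SVD) as built in the cells above:
###Code
# stack the two summaries so the solvers can be compared across both variants
metric_cols = ['accuracy', 'ma_precision', 'ma_recall', 'ma_f1']
comparison = pd.concat({
    'no_svd': result_df.groupby('classifier')[metric_cols].mean(),
    'with_svd': result_svd_df.groupby('classifier')[metric_cols].mean(),
})
display(comparison)
###Output
_____no_output_____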
###Markdown
These results are slightly worse than for not using SVD on these data. Will do the grid search without SVD. 5. Test your model. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each. The results are displayed above and below.
###Code
def get_results(model, y_test, y_pred, labels, cl_name, all_results):
for i, label in enumerate(labels):
result = classification_report(y_test[:,i], y_pred[:,i], output_dict=True)
all_results.append([cl_name, label, result['0']['precision'], result['0']['recall'], \
result['0']['f1-score'], result['0']['support'], result['1']['precision'], \
result['1']['recall'], result['1']['f1-score'], result['1']['support'],\
result['accuracy'], result['macro avg']['precision'],\
result['macro avg']['recall'],result['macro avg']['f1-score']])
return
###Output
_____no_output_____
###Markdown
6. Improve your model. Use grid search to find better parameters. Code that will run this with multiple solvers and options is below. However, that code runs for an excessive amount of time (>48h and counting). I'm using the preliminary results above to limit the grid search to parameters for Random Forest only, with no SVD.
###Code
# Starting with just Random Forest as way overshot on complexity using 3 solvers and is running forever
# have to limit features here as this also takes several hours to run
# just going to do 1 feature for the vectorizer and one for the classifier.
# when this was run with the commented lines left in, it kept going for 3 days and froze.
# going to also cut cv to 2 to cut down time per iteration.
# this combination actually ran extremely quickly: about an hour (!)
parameters = [
{
# 'vect__ngram_range': [(1,1),(1,2)],
'tfidf__use_idf': [True, False],
'multi_clf__estimator__n_estimators': [100, 200],
# 'multi_clf__estimator__max_features': [0.5, "sqrt"]
}]
# create grid search object
clf = RandomForestClassifier(random_state=42)
pipeline = build_pipeline(clf)
# the multithreading option doesn't seem to work in iPython according to what I can find (failed for me)
cv = GridSearchCV(pipeline, param_grid=parameters, cv=2)
cv.fit(X_train, Y_train)
Y_pred = cv.predict(X_test)
cv_results = []
cl_name = str(type(clf)).split(".")[-1][:-2] # thanks stack overflow
get_results(cv, Y_test, Y_pred, labels, cl_name, cv_results)
print("\nBest Parameters:", cv.best_params_)
cv_df = pd.DataFrame(cv_results, columns=['classifier', 'category', 'precis_0', 'rcl_0', 'f1_0', 'support_0',\
'precis_1', 'rcl_1', 'f1_1', 'support_1','accuracy','ma_precision',\
'ma_recall', 'ma_f1'])
cv_df.groupby(['classifier'])[['accuracy', 'ma_precision', 'ma_recall', 'ma_f1']].mean()
###Output
_____no_output_____
###Markdown
Those results didn't improve on the baseline model. But at least it ran in a reasonable amount of time. Will try the other two parameter combinations.
###Code
# try the other two parameter combos to see if they can improve the fit
parameters = [
{
'vect__ngram_range': [(1,1),(1,2)],
# 'tfidf__use_idf': [True, False],
# 'multi_clf__estimator__n_estimators': [100, 200],
'multi_clf__estimator__max_features': [0.5, "sqrt"]
}]
# create grid search object
clf = RandomForestClassifier(random_state=42)
pipeline = build_pipeline(clf)
# the multithreading option doesn't seem to work in iPython according to what I can find (failed for me)
cv2 = GridSearchCV(pipeline, param_grid=parameters, cv=2)
cv2.fit(X_train, Y_train)
Y_pred = cv2.predict(X_test)
cv2_results = []
cl_name = str(type(clf)).split(".")[-1][:-2] # thanks stack overflow
get_results(cv2, Y_test, Y_pred, labels, cl_name, cv2_results)
print("\nBest Parameters:", cv2.best_params_)
cv2_df = pd.DataFrame(cv2_results, columns=['classifier', 'category', 'precis_0', 'rcl_0', 'f1_0', 'support_0',\
'precis_1', 'rcl_1', 'f1_1', 'support_1','accuracy','ma_precision',\
'ma_recall', 'ma_f1'])
###Output
_____no_output_____
###Markdown
7. Test your model. Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv2_df.groupby(['classifier'])[['accuracy', 'ma_precision', 'ma_recall', 'ma_f1']].mean()
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
import pickle
with open('disaster_explore.pkl', 'wb') as outfile:
pickle.dump(cv, outfile)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation. Follow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import TruncatedSVD
from sklearn.base import BaseEstimator
import pickle
import re
import matplotlib.pyplot as plt
import seaborn as sns
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])  # tagger is needed for nltk.pos_tag used below
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#imblearn
from imblearn.over_sampling import RandomOverSampler
from sklearn.base import BaseEstimator, TransformerMixin
from iterstrat.ml_stratifiers import MultilabelStratifiedShuffleSplit
from imblearn.under_sampling import RandomUnderSampler
# load data from database
engine = create_engine('sqlite:///messages.db')
df = pd.read_sql_table("messages", con=engine)
df.head()
###Output
_____no_output_____
###Markdown
Based on this quick check, most of the data is very imbalanced (a short sketch quantifying this follows the next code cell).
###Code
X = df["message"]
y = df.drop(['message', 'genre', 'id', 'original'], axis = 1)
###Output
_____no_output_____
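###Markdown
The imbalance noted above can be quantified quickly. A minimal sketch showing the fraction of positive labels per category, assuming `y` as defined in the previous cell:
###Code
# share of messages tagged with each category; very small values indicate heavy imbalance
label_prevalence = y.mean().sort_values()
print(label_prevalence)
###Output
_____no_output_____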
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
'''
Receives text related data and processes it
Args: text related data (columns)
Returns: tokenized text
'''
# get list of all urls using regex
detected_urls = re.findall(url_regex, text)
# replace each url in text string with placeholder
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# tokenize text
tokens = word_tokenize(text)
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline. This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
def multi_tester(X, y):
'''
Function to create list of fitted models
Args: training data X and y
returns: list of the selected fitted models
'''
pipe_1 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(random_state=42)))
])
pipe_2 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(ExtraTreesClassifier(random_state=42)))
])
pipe_3 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(GradientBoostingClassifier(random_state=42)))
])
pipe_4 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier(random_state=42)))
])
pipe_5 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(SVC(random_state=42)))
])
pips = [pipe_1, pipe_2, pipe_3, pipe_4, pipe_5]
pip_names = ['RandomForestClassifier', 'ExtraTreesClassifier', 'GradientBoostingClassifier',
'AdaBoostClassifier', 'SVC']
model_fits = []
for i in range(len(pips)):
print('Model: ', pip_names[i])
print(pips[i].get_params())
mdl = pips[i].fit(X, y)
model_fits.append(mdl)
return model_fits
###Output
_____no_output_____
###Markdown
4. Train pipeline. - Split data into train and test sets - Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 42, test_size = 0.33)
fitted_mdls = multi_tester(X_train, y_train)
###Output
Model: RandomForestClassifier
{'memory': None, 'steps': [('vect', CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=<function tokenize at 0x000002C2469CBCA8>,
vocabulary=None)), ('tfidf', TfidfTransformer(norm='l2', smooth_idf=True, sublinear_tf=False, use_idf=True)), ('clf', MultiOutputClassifier(estimator=RandomForestClassifier(bootstrap=True,
class_weight=None,
criterion='gini',
max_depth=None,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators='warn',
n_jobs=None,
oob_score=False,
random_state=42,
verbose=0,
warm_start=False),
n_jobs=None))], 'verbose': False, 'vect': CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=<function tokenize at 0x000002C2469CBCA8>,
vocabulary=None), 'tfidf': TfidfTransformer(norm='l2', smooth_idf=True, sublinear_tf=False, use_idf=True), 'clf': MultiOutputClassifier(estimator=RandomForestClassifier(bootstrap=True,
class_weight=None,
criterion='gini',
max_depth=None,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators='warn',
n_jobs=None,
oob_score=False,
random_state=42,
verbose=0,
warm_start=False),
n_jobs=None), 'vect__analyzer': 'word', 'vect__binary': False, 'vect__decode_error': 'strict', 'vect__dtype': <class 'numpy.int64'>, 'vect__encoding': 'utf-8', 'vect__input': 'content', 'vect__lowercase': True, 'vect__max_df': 1.0, 'vect__max_features': None, 'vect__min_df': 1, 'vect__ngram_range': (1, 1), 'vect__preprocessor': None, 'vect__stop_words': None, 'vect__strip_accents': None, 'vect__token_pattern': '(?u)\\b\\w\\w+\\b', 'vect__tokenizer': <function tokenize at 0x000002C2469CBCA8>, 'vect__vocabulary': None, 'tfidf__norm': 'l2', 'tfidf__smooth_idf': True, 'tfidf__sublinear_tf': False, 'tfidf__use_idf': True, 'clf__estimator__bootstrap': True, 'clf__estimator__class_weight': None, 'clf__estimator__criterion': 'gini', 'clf__estimator__max_depth': None, 'clf__estimator__max_features': 'auto', 'clf__estimator__max_leaf_nodes': None, 'clf__estimator__min_impurity_decrease': 0.0, 'clf__estimator__min_impurity_split': None, 'clf__estimator__min_samples_leaf': 1, 'clf__estimator__min_samples_split': 2, 'clf__estimator__min_weight_fraction_leaf': 0.0, 'clf__estimator__n_estimators': 'warn', 'clf__estimator__n_jobs': None, 'clf__estimator__oob_score': False, 'clf__estimator__random_state': 42, 'clf__estimator__verbose': 0, 'clf__estimator__warm_start': False, 'clf__estimator': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators='warn',
n_jobs=None, oob_score=False, random_state=42, verbose=0,
warm_start=False), 'clf__n_jobs': None}
Model: ExtraTreesClassifier
{'memory': None, 'steps': [('vect', CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=<function tokenize at 0x000002C2469CBCA8>,
vocabulary=None)), ('tfidf', TfidfTransformer(norm='l2', smooth_idf=True, sublinear_tf=False, use_idf=True)), ('clf', MultiOutputClassifier(estimator=ExtraTreesClassifier(bootstrap=False,
class_weight=None,
criterion='gini',
max_depth=None,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators='warn',
n_jobs=None,
oob_score=False,
random_state=42, verbose=0,
warm_start=False),
n_jobs=None))], 'verbose': False, 'vect': CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=<function tokenize at 0x000002C2469CBCA8>,
vocabulary=None), 'tfidf': TfidfTransformer(norm='l2', smooth_idf=True, sublinear_tf=False, use_idf=True), 'clf': MultiOutputClassifier(estimator=ExtraTreesClassifier(bootstrap=False,
class_weight=None,
criterion='gini',
max_depth=None,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators='warn',
n_jobs=None,
oob_score=False,
random_state=42, verbose=0,
warm_start=False),
n_jobs=None), 'vect__analyzer': 'word', 'vect__binary': False, 'vect__decode_error': 'strict', 'vect__dtype': <class 'numpy.int64'>, 'vect__encoding': 'utf-8', 'vect__input': 'content', 'vect__lowercase': True, 'vect__max_df': 1.0, 'vect__max_features': None, 'vect__min_df': 1, 'vect__ngram_range': (1, 1), 'vect__preprocessor': None, 'vect__stop_words': None, 'vect__strip_accents': None, 'vect__token_pattern': '(?u)\\b\\w\\w+\\b', 'vect__tokenizer': <function tokenize at 0x000002C2469CBCA8>, 'vect__vocabulary': None, 'tfidf__norm': 'l2', 'tfidf__smooth_idf': True, 'tfidf__sublinear_tf': False, 'tfidf__use_idf': True, 'clf__estimator__bootstrap': False, 'clf__estimator__class_weight': None, 'clf__estimator__criterion': 'gini', 'clf__estimator__max_depth': None, 'clf__estimator__max_features': 'auto', 'clf__estimator__max_leaf_nodes': None, 'clf__estimator__min_impurity_decrease': 0.0, 'clf__estimator__min_impurity_split': None, 'clf__estimator__min_samples_leaf': 1, 'clf__estimator__min_samples_split': 2, 'clf__estimator__min_weight_fraction_leaf': 0.0, 'clf__estimator__n_estimators': 'warn', 'clf__estimator__n_jobs': None, 'clf__estimator__oob_score': False, 'clf__estimator__random_state': 42, 'clf__estimator__verbose': 0, 'clf__estimator__warm_start': False, 'clf__estimator': ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators='warn',
n_jobs=None, oob_score=False, random_state=42, verbose=0,
warm_start=False), 'clf__n_jobs': None}
###Markdown
5. Test your models. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
target_names = y_train.columns.tolist()
def perf_report(model, X_test, y_test):
'''
Function to return model classification reports
Input: Model list, and test data
Output: Prints the Classification report
'''
pip_names = ['RandomForestClassifier', 'ExtraTreesClassifier', 'GradientBoostingClassifier',
'AdaBoostClassifier', 'SVC']
for i in range(len(model)):
print('______________________________Model______________________________')
print('______________________________', pip_names[i], '______________________________')
y_pred = model[i].predict(X_test)
print(classification_report(y_test, y_pred, target_names = target_names))
perf_report(fitted_mdls, X_test, y_test)
###Output
______________________________Model______________________________
______________________________ RandomForestClassifier ______________________________
###Markdown
- `shops` has very little label diversity so it became an edge case, I will drop it for the optimization.

Averaged scores from the classification reports above:

| classifier | avg | precision | recall | f1-score | support |
|---|---|---|---|---|---|
| RandomForestClassifier | micro avg | 0.80 | 0.44 | 0.57 | 27308 |
| RandomForestClassifier | macro avg | 0.58 | 0.16 | 0.21 | 27308 |
| RandomForestClassifier | weighted avg | 0.74 | 0.44 | 0.50 | 27308 |
| RandomForestClassifier | samples avg | 0.65 | 0.42 | 0.46 | 27308 |
| ExtraTreesClassifier | micro avg | 0.79 | 0.44 | 0.56 | 27308 |
| ExtraTreesClassifier | macro avg | 0.53 | 0.15 | 0.21 | 27308 |
| ExtraTreesClassifier | weighted avg | 0.71 | 0.44 | 0.49 | 27308 |
| ExtraTreesClassifier | samples avg | 0.66 | 0.42 | 0.46 | 27308 |
| GradientBoostingClassifier | micro avg | 0.76 | 0.57 | 0.65 | 27308 |
| GradientBoostingClassifier | macro avg | 0.51 | 0.32 | 0.38 | 27308 |
| GradientBoostingClassifier | weighted avg | 0.72 | 0.57 | 0.61 | 27308 |
| GradientBoostingClassifier | samples avg | 0.65 | 0.50 | 0.52 | 27308 |
| AdaBoostClassifier | micro avg | 0.77 | 0.58 | 0.66 | 27308 |
| AdaBoostClassifier | macro avg | 0.58 | 0.33 | 0.40 | 27308 |
| AdaBoostClassifier | weighted avg | 0.73 | 0.58 | 0.62 | 27308 |
| AdaBoostClassifier | samples avg | 0.63 | 0.50 | 0.51 | 27308 |
| SVC | micro avg | 0.76 | 0.24 | 0.36 | 27308 |
| SVC | macro avg | 0.02 | 0.03 | 0.02 | 27308 |
| SVC | weighted avg | 0.18 | 0.24 | 0.21 | 27308 |
| SVC | samples avg | 0.76 | 0.32 | 0.40 | 27308 |

6. Improve models based on poor target performance elimination. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each. Testing models after dropping poor predictors.
###Code
# dropping the targets that had the word performances based on the classification report
targs_drop = ['offer', 'security', 'infrastructure_related', 'tools',
'hospitals', 'shops', 'aid_centers', 'other_infrastructure', 'fire', 'other_weather']
y_min = y.copy()
y_min.drop(targs_drop, axis = 1, inplace = True)
X_train, X_test, y_train, y_test = train_test_split(X, y_min, random_state = 42, test_size = 0.33)
fitted_mdls_min = multi_tester(X_train, y_train)
target_names = y_train.columns.tolist()
perf_report(fitted_mdls_min, X_test, y_test)
###Output
______________________________Model______________________________
______________________________ RandomForestClassifier ______________________________
###Markdown
Averaged scores from the classification reports above (after dropping the weak targets):

| classifier | avg | precision | recall | f1-score | support |
|---|---|---|---|---|---|
| RandomForestClassifier | micro avg | 0.80 | 0.48 | 0.60 | 25330 |
| RandomForestClassifier | macro avg | 0.72 | 0.22 | 0.30 | 25330 |
| RandomForestClassifier | weighted avg | 0.78 | 0.48 | 0.53 | 25330 |
| RandomForestClassifier | samples avg | 0.66 | 0.44 | 0.48 | 25330 |
| ExtraTreesClassifier | micro avg | 0.79 | 0.46 | 0.59 | 25330 |
| ExtraTreesClassifier | macro avg | 0.68 | 0.20 | 0.27 | 25330 |
| ExtraTreesClassifier | weighted avg | 0.75 | 0.46 | 0.52 | 25330 |
| ExtraTreesClassifier | samples avg | 0.65 | 0.43 | 0.47 | 25330 |
| GradientBoostingClassifier | micro avg | 0.78 | 0.61 | 0.68 | 25330 |
| GradientBoostingClassifier | macro avg | 0.65 | 0.43 | 0.50 | 25330 |
| GradientBoostingClassifier | weighted avg | 0.76 | 0.61 | 0.65 | 25330 |
| GradientBoostingClassifier | samples avg | 0.66 | 0.52 | 0.54 | 25330 |
| AdaBoostClassifier | micro avg | 0.77 | 0.61 | 0.69 | 25330 |
| AdaBoostClassifier | macro avg | 0.69 | 0.42 | 0.51 | 25330 |
| AdaBoostClassifier | weighted avg | 0.75 | 0.61 | 0.66 | 25330 |
| AdaBoostClassifier | samples avg | 0.64 | 0.51 | 0.53 | 25330 |
| SVC | micro avg | 0.76 | 0.26 | 0.38 | 25330 |
| SVC | macro avg | 0.03 | 0.04 | 0.03 | 25330 |
| SVC | weighted avg | 0.19 | 0.26 | 0.22 | 25330 |
| SVC | samples avg | 0.76 | 0.33 | 0.41 | 25330 |

7. Improve your model. Use grid search to find better parameters. I will work on my best performing model, AdaBoost, using the reduced target data.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier(random_state=42)))
])
pipeline.get_params()
parameters = {'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__random_state': [42],
'clf__estimator__learning_rate': [0.5]}
cv = GridSearchCV(pipeline, param_grid = parameters, cv = 10,
refit = True, verbose = 1, return_train_score = True, n_jobs = -1)
cv
###Output
_____no_output_____
###Markdown
8. Test selected model
###Code
# dropping the targets that had the word performances based on the classification report
targs_drop = ['offer', 'security', 'infrastructure_related', 'tools',
'hospitals', 'shops', 'aid_centers', 'other_infrastructure', 'fire', 'other_weather']
y_min = y.copy()
y_min.drop(targs_drop, axis = 1, inplace = True)
X_train, X_test, y_train, y_test = train_test_split(X, y_min, random_state = 42, test_size = 0.33)
best_ada = cv.fit(X_train, y_train)
print('Best model :', best_ada.best_score_)
print('Params :', best_ada.best_params_)
y_pred = best_ada.predict(X_test)
print(classification_report(y_test, y_pred, target_names = target_names))
###Output
precision recall f1-score support
related 0.81 0.96 0.88 6534
request 0.83 0.51 0.63 1472
aid_related 0.76 0.59 0.67 3545
medical_help 0.60 0.19 0.28 701
medical_products 0.75 0.23 0.35 446
search_and_rescue 0.70 0.10 0.18 226
military 0.64 0.21 0.31 267
water 0.75 0.60 0.67 543
food 0.81 0.69 0.75 965
shelter 0.80 0.50 0.62 775
clothing 0.70 0.35 0.47 127
money 0.54 0.19 0.29 191
missing_people 0.82 0.13 0.23 104
refugees 0.60 0.20 0.30 293
death 0.81 0.37 0.51 406
other_aid 0.62 0.09 0.15 1139
transport 0.75 0.15 0.26 407
buildings 0.80 0.31 0.44 441
electricity 0.66 0.18 0.28 185
weather_related 0.87 0.62 0.72 2390
floods 0.88 0.52 0.65 693
storm 0.75 0.47 0.58 812
earthquake 0.88 0.76 0.82 787
cold 0.76 0.26 0.38 187
direct_report 0.76 0.43 0.55 1694
micro avg 0.80 0.59 0.68 25330
macro avg 0.75 0.38 0.48 25330
weighted avg 0.78 0.59 0.64 25330
samples avg 0.66 0.50 0.53 25330
###Markdown
9. Other Approaches Custom estimators (inspired by: [repo](https://github.com/hnbezz/Portfolio_under_construction/blob/master/Disaster_Response_Pipeline/ML%20Pipeline%20Preparation.ipynb) )
###Code
class StartVerbExtractor(BaseEstimator, TransformerMixin):
def start_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
if len(pos_tags) != 0:
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return 1
return 0
def fit(self, X, y=None):
return self
def transform(self, X):
X_tag = pd.Series(X).apply(self.start_verb)
return pd.DataFrame(X_tag)
def get_text_len(data):
return np.array([len(text) for text in data]).reshape(-1, 1)
# dropping the targets that had the word performances based on the classification report
targs_drop = ['offer', 'security', 'infrastructure_related', 'tools',
'hospitals', 'shops', 'aid_centers', 'other_infrastructure', 'fire', 'other_weather', 'other_aid']
y_min = y.copy()
y_min.drop(targs_drop, axis = 1, inplace = True)
target_names = y_min.columns.tolist()
#stratifying data
mlss = MultilabelStratifiedShuffleSplit(n_splits=1, test_size=0.33, random_state=42)
for train_index, test_index in mlss.split(X, y_min):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y_min.values[train_index], y_min.values[test_index]
y_train = pd.DataFrame(y_train,columns=target_names)
y_test = pd.DataFrame(y_test,columns=target_names)
pipeline_2 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('best', TruncatedSVD()),
('tfidf', TfidfTransformer())])),
('start_verb', StartVerbExtractor())])),
('clf', MultiOutputClassifier(AdaBoostClassifier(random_state=42)))
])
pipeline_2.get_params()
parameters = {'clf__estimator__n_estimators': [100, 200, 300],
'clf__estimator__random_state': [42],
'clf__estimator__learning_rate': [0.1]}
# grid-search the FeatureUnion pipeline defined above (pipeline_2), not the earlier pipeline
cv_2 = GridSearchCV(pipeline_2, param_grid = parameters, cv = 10,
refit = True, verbose = 1, return_train_score = True, n_jobs = -1)
cv_2
best_ada_2 = cv_2.fit(X_train, y_train)
print('Best model :', best_ada_2.best_score_)
print('Params :', best_ada_2.best_params_)
y_pred = best_ada_2.predict(X_test)
print(classification_report(y_test, y_pred, target_names = target_names))
test_text = ['there is a storm and people are trapped']
test = cv_2.predict(test_text)
print(y_train.columns.values[(test.flatten()==1)])
###Output
['related' 'weather_related' 'storm']
###Markdown
That is a pretty cool prediction, let's try a few more
###Code
test_text = ['we are having an earthquake, buildings are destroyed, victims need clothes']
test = cv_2.predict(test_text)
print(y_train.columns.values[(test.flatten()==1)])
test_text = ['there was an accident near the bank and we need an ambulance']
test = cv_2.predict(test_text)
print(y_train.columns.values[(test.flatten()==1)])
###Output
['related']
###Markdown
9. Export your model as a pickle file
###Code
pickle.dump(cv_2, open('classifier.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation. Follow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd;
from sqlalchemy import create_engine;
import re;
from nltk import word_tokenize, pos_tag;
from nltk.corpus import stopwords;
from nltk.stem.wordnet import WordNetLemmatizer;
from sklearn.pipeline import Pipeline;
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer;
from sklearn.ensemble import RandomForestClassifier;
from sklearn.multioutput import MultiOutputClassifier;
from sklearn.model_selection import train_test_split;
import seaborn as sns;
import numpy as np;
import nltk;
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('punkt')
# load data from database
engine = create_engine('sqlite:///figure_eight.db')
conn = engine.connect();
df = pd.read_sql('select * from disaster_data_cleaned', conn)
X = df['message']
Y = df.select_dtypes('int64').drop('id', axis=1)
print('X shape :', X.shape)
print('Y shape :', Y.shape)
###Output
X shape : (26216,)
Y shape : (26216, 36)
###Markdown
2. Write a tokenization function to process your text data
###Code
word_net = WordNetLemmatizer();
def tokenize(text):
# lower case and remove punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower());
words = word_tokenize(text);
words = [word for word in words if word not in stopwords.words('english')];
lemmed = [word_net.lemmatize(word) for word in words];
return lemmed;
tokenize('when the sun rises in the west and sets in the east, when the seas go dry and the mountains blow in the winds like leaves')
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline. This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
forest = RandomForestClassifier(n_estimators=10, random_state=1024);
pipeline = Pipeline([('count', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('model', MultiOutputClassifier(estimator=forest, n_jobs=1))
]);
###Output
_____no_output_____
###Markdown
4. Train pipeline. - Split data into train and test sets - Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=1024);
pipeline.fit(X_train, Y_train);
###Output
_____no_output_____
###Markdown
5. Test your model. Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
Y_preds = pipeline.predict(X_test);
Y_preds = pd.DataFrame(Y_preds);
Y_preds.columns = Y_test.columns;
Y_preds.index = Y_test.index;
from sklearn.metrics import accuracy_score, f1_score, classification_report, confusion_matrix;
#print('acc ', accuracy_score(Y_test, Y_preds));
#print('f1s', f1_score(Y_test, Y_preds))
#classification_report(Y_test, Y_preds)
for column in Y_test.columns:
print('Column : ' , column)
print(classification_report(Y_test[column], Y_preds[column]))
cross_dict = [];
cross_cols = [];
for column_a in Y_test.columns:
cross_cols.append(column_a);
col_dict = {};
for column_b in Y_preds.columns:
#print(column_a, column_b, (foo[column_a] == bar[column_b]).sum())
col_dict[column_b] = (Y_test[column_a] == Y_preds[column_b]).sum()
cross_dict.append(col_dict)
cross_dict = pd.DataFrame(cross_dict);
cross_dict.index = cross_cols;
import matplotlib.pyplot as plt;
import numpy as np;
plt.matshow(cross_dict)
plt.colorbar()
score_dict = [];
for column in Y_test.columns:
score = f1_score(Y_test[column], Y_preds[column], average='micro');
score_dict.append({'column' : column, 'score' : score});
score_df = pd.DataFrame(score_dict);
g = sns.barplot(score_df['column'], score_df['score']);
for item in g.get_xticklabels():
item.set_rotation(90)
print('Avg of f1 scores: ', np.mean([val for x,val in score_df.values]))
###Output
Avg of f1 scores: 0.944338495916
###Markdown
6. Improve your model. Use grid search to find better parameters.
###Code
from sklearn.model_selection import GridSearchCV;
forest = RandomForestClassifier(n_estimators=10, random_state=1024);
pipeline = Pipeline([('count', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('model', MultiOutputClassifier(estimator=forest, n_jobs=1))
]);
parameters = {'model__estimator__max_depth' : [5, 10], 'model__estimator__max_features' : [5, 10], 'model__estimator__criterion' : ['gini', 'entropy']};
cv = GridSearchCV(pipeline, param_grid=parameters, verbose=2);
cv.fit(X_train, Y_train);
cv.best_params_
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
forest = RandomForestClassifier(n_estimators=10, random_state=1024, criterion='gini', max_depth=5, max_features=5);
pipeline = Pipeline([('count', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('model', MultiOutputClassifier(estimator=forest, n_jobs=1))
]);
pipeline.fit(X_train, Y_train);
Y_preds = pipeline.predict(X_test);
Y_preds = pd.DataFrame(Y_preds);
Y_preds.columns = Y_test.columns;
Y_preds.index = Y_test.index;
for column in Y_test.columns:
print('Column : ' , column)
print(classification_report(Y_test[column], Y_preds[column]))
score_dict = [];
for column in Y_test.columns:
score = f1_score(Y_test[column], Y_preds[column], average='micro');
score_dict.append({'column' : column, 'score' : score});
score_df = pd.DataFrame(score_dict);
g = sns.barplot(score_df['column'], score_df['score']);
for item in g.get_xticklabels():
item.set_rotation(90)
print('Avg of f1 scores: ', np.mean([val for x,val in score_df.values]))
###Output
Avg of f1 scores: 0.926124980737
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
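As a rough sketch of the second idea (not run in this notebook), an extra hand-crafted feature such as message length could sit next to the TF-IDF features via a `FeatureUnion`; the `MessageLength` transformer below is a hypothetical example, not part of the original pipeline.
###Code
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
class MessageLength(BaseEstimator, TransformerMixin):
    """Hypothetical extra feature: number of characters in each message."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([len(text) for text in X]).reshape(-1, 1)
extended_pipeline = Pipeline([
    ('features', FeatureUnion([
        ('text', Pipeline([
            ('count', CountVectorizer(tokenizer=tokenize)),
            ('tfidf', TfidfTransformer())
        ])),
        ('length', MessageLength())
    ])),
    ('model', MultiOutputClassifier(
        estimator=RandomForestClassifier(n_estimators=30, random_state=1024), n_jobs=1))
])
# extended_pipeline.fit(X_train, Y_train)  # same fit/predict interface as the pipeline above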
###Code
forest = RandomForestClassifier(n_estimators=30, random_state=1024, criterion='gini', max_depth=5, max_features=5);
pipeline = Pipeline([('count', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('model', MultiOutputClassifier(estimator=forest, n_jobs=1))
]);
pipeline.fit(X_train, Y_train);
Y_preds = pipeline.predict(X_test);
Y_preds = pd.DataFrame(Y_preds);
Y_preds.columns = Y_test.columns;
Y_preds.index = Y_test.index;
score_dict = [];
for column in Y_test.columns:
score = f1_score(Y_test[column], Y_preds[column], average='micro');
score_dict.append({'column' : column, 'score' : score});
score_df = pd.DataFrame(score_dict);
g = sns.barplot(score_df['column'], score_df['score']);
for item in g.get_xticklabels():
item.set_rotation(90)
print('Avg of f1 scores: ', np.mean([val for x,val in score_df.values]))
###Output
Avg of f1 scores: 0.926115349052
###Markdown
9. Export your model as a pickle file
###Code
import pickle;
pickle.dump(forest, open('forest.pkl', 'wb'));
pickle.dump(pipeline, open('pipeline.pkl', 'wb'));
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd, numpy as np
from sqlalchemy.engine import create_engine
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('InsertTableName1', engine)
df.head()
X = df['message'].values
y = df.iloc[:, 4:].values
category_names = list(df.iloc[:, 4:].columns)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
import nltk
nltk.download(['punkt', 'wordnet'])
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline- First, build the vectorizer, TF-IDF transformer and classifier step by step (without a Pipeline) to check that the whole workflow runs, then wrap the same steps in a Pipeline below.
###Code
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.32)
vect = CountVectorizer(tokenizer=tokenize)
tfidf = TfidfTransformer()
clf = MultiOutputClassifier(RandomForestClassifier())
# train classifier
X_train_counts = vect.fit_transform(X_train)
X_train_tfidf = tfidf.fit_transform(X_train_counts)
clf.fit(X_train_tfidf, y_train)
# predict on test data
X_test_counts = vect.transform(X_test)
X_test_tfidf = tfidf.transform(X_test_counts)
y_pred = clf.predict(X_test_tfidf)
y_pred.shape
###Output
_____no_output_____
###Markdown
Now, convert this into the pipeline modelThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.multioutput import MultiOutputClassifier
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()) )
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.32)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
from sklearn.metrics import classification_report
colnames = list(df.iloc[:, 4:].columns)
for k in range(len(colnames)):
print(k, '. ', colnames[k], '. \t acc = ', (y_pred[:, k] == y_test[:,k]).mean())
print(classification_report(y_test[:,k], y_pred[:,k]))
###Output
0 . related . acc = 0.792227917511
precision recall f1-score support
0 0.61 0.34 0.44 1984
1 0.82 0.93 0.87 6405
avg / total 0.77 0.79 0.77 8389
1 . request . acc = 0.878889021337
precision recall f1-score support
0 0.88 0.98 0.93 6940
1 0.81 0.39 0.52 1449
avg / total 0.87 0.88 0.86 8389
2 . offer . acc = 0.994039814042
precision recall f1-score support
0 0.99 1.00 1.00 8339
1 0.00 0.00 0.00 50
avg / total 0.99 0.99 0.99 8389
3 . aid_related . acc = 0.733937298844
precision recall f1-score support
0 0.73 0.88 0.79 4929
1 0.75 0.53 0.62 3460
avg / total 0.74 0.73 0.72 8389
4 . medical_help . acc = 0.926689712719
precision recall f1-score support
0 0.93 1.00 0.96 7761
1 0.58 0.07 0.13 628
avg / total 0.90 0.93 0.90 8389
5 . medical_products . acc = 0.952080104899
precision recall f1-score support
0 0.95 1.00 0.98 7967
1 0.68 0.09 0.16 422
avg / total 0.94 0.95 0.93 8389
6 . search_and_rescue . acc = 0.973655978067
precision recall f1-score support
0 0.98 1.00 0.99 8163
1 0.58 0.08 0.15 226
avg / total 0.96 0.97 0.96 8389
7 . security . acc = 0.982000238407
precision recall f1-score support
0 0.98 1.00 0.99 8240
1 0.25 0.01 0.01 149
avg / total 0.97 0.98 0.97 8389
8 . military . acc = 0.966861366075
precision recall f1-score support
0 0.97 1.00 0.98 8109
1 0.53 0.06 0.11 280
avg / total 0.95 0.97 0.95 8389
9 . child_alone . acc = 1.0
precision recall f1-score support
0 1.00 1.00 1.00 8389
avg / total 1.00 1.00 1.00 8389
10 . water . acc = 0.945047085469
precision recall f1-score support
0 0.95 1.00 0.97 7847
1 0.85 0.18 0.30 542
avg / total 0.94 0.95 0.93 8389
11 . food . acc = 0.923113601144
precision recall f1-score support
0 0.93 0.99 0.96 7446
1 0.85 0.38 0.53 943
avg / total 0.92 0.92 0.91 8389
12 . shelter . acc = 0.929669805698
precision recall f1-score support
0 0.93 0.99 0.96 7671
1 0.80 0.24 0.36 718
avg / total 0.92 0.93 0.91 8389
13 . clothing . acc = 0.984384312791
precision recall f1-score support
0 0.98 1.00 0.99 8255
1 0.80 0.03 0.06 134
avg / total 0.98 0.98 0.98 8389
14 . money . acc = 0.976636071045
precision recall f1-score support
0 0.98 1.00 0.99 8192
1 0.67 0.01 0.02 197
avg / total 0.97 0.98 0.97 8389
15 . missing_people . acc = 0.988794850399
precision recall f1-score support
0 0.99 1.00 0.99 8294
1 0.67 0.02 0.04 95
avg / total 0.99 0.99 0.98 8389
16 . refugees . acc = 0.966384551198
precision recall f1-score support
0 0.97 1.00 0.98 8112
1 0.37 0.03 0.05 277
avg / total 0.95 0.97 0.95 8389
17 . death . acc = 0.955775420193
precision recall f1-score support
0 0.96 1.00 0.98 8005
1 0.70 0.06 0.11 384
avg / total 0.94 0.96 0.94 8389
18 . other_aid . acc = 0.872332816784
precision recall f1-score support
0 0.87 1.00 0.93 7309
1 0.58 0.03 0.06 1080
avg / total 0.84 0.87 0.82 8389
19 . infrastructure_related . acc = 0.93789486232
precision recall f1-score support
0 0.94 1.00 0.97 7871
1 0.29 0.00 0.01 518
avg / total 0.90 0.94 0.91 8389
20 . transport . acc = 0.9576826797
precision recall f1-score support
0 0.96 1.00 0.98 8019
1 0.86 0.05 0.09 370
avg / total 0.95 0.96 0.94 8389
21 . buildings . acc = 0.953629753248
precision recall f1-score support
0 0.95 1.00 0.98 7975
1 0.78 0.08 0.15 414
avg / total 0.95 0.95 0.94 8389
22 . electricity . acc = 0.979020145429
precision recall f1-score support
0 0.98 1.00 0.99 8212
1 0.57 0.02 0.04 177
avg / total 0.97 0.98 0.97 8389
23 . tools . acc = 0.994635832638
precision recall f1-score support
0 0.99 1.00 1.00 8344
1 0.00 0.00 0.00 45
avg / total 0.99 0.99 0.99 8389
24 . hospitals . acc = 0.991417332221
precision recall f1-score support
0 0.99 1.00 1.00 8318
1 0.00 0.00 0.00 71
avg / total 0.98 0.99 0.99 8389
25 . shops . acc = 0.994874240076
precision recall f1-score support
0 0.99 1.00 1.00 8346
1 0.00 0.00 0.00 43
avg / total 0.99 0.99 0.99 8389
26 . aid_centers . acc = 0.989152461557
precision recall f1-score support
0 0.99 1.00 0.99 8298
1 0.00 0.00 0.00 91
avg / total 0.98 0.99 0.98 8389
27 . other_infrastructure . acc = 0.957205864823
precision recall f1-score support
0 0.96 1.00 0.98 8031
1 0.44 0.01 0.02 358
avg / total 0.94 0.96 0.94 8389
28 . weather_related . acc = 0.834426034092
precision recall f1-score support
0 0.84 0.96 0.89 6107
1 0.81 0.51 0.63 2282
avg / total 0.83 0.83 0.82 8389
29 . floods . acc = 0.943974251997
precision recall f1-score support
0 0.95 1.00 0.97 7717
1 0.90 0.34 0.49 672
avg / total 0.94 0.94 0.93 8389
30 . storm . acc = 0.928120157349
precision recall f1-score support
0 0.93 0.99 0.96 7631
1 0.79 0.28 0.41 758
avg / total 0.92 0.93 0.91 8389
31 . fire . acc = 0.990344498748
precision recall f1-score support
0 0.99 1.00 1.00 8305
1 1.00 0.04 0.07 84
avg / total 0.99 0.99 0.99 8389
32 . earthquake . acc = 0.953033734653
precision recall f1-score support
0 0.96 0.99 0.97 7616
1 0.89 0.56 0.69 773
avg / total 0.95 0.95 0.95 8389
33 . cold . acc = 0.981881034688
precision recall f1-score support
0 0.98 1.00 0.99 8223
1 0.79 0.11 0.20 166
avg / total 0.98 0.98 0.98 8389
34 . other_weather . acc = 0.949696030516
precision recall f1-score support
0 0.95 1.00 0.97 7962
1 0.59 0.04 0.07 427
avg / total 0.93 0.95 0.93 8389
35 . direct_report . acc = 0.843723924186
precision recall f1-score support
0 0.85 0.98 0.91 6714
1 0.78 0.30 0.44 1675
avg / total 0.84 0.84 0.81 8389
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
from sklearn.model_selection import GridSearchCV
parameters = {
'vect__ngram_range': ((1, 1), (1, 2), (1, 3)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 100, 500),
'tfidf__use_idf': (True, False)
}
#cv = GridSearchCV(pipeline, param_grid = parameters)
#cv.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
#y_pred = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
parameters = {
'vect__ngram_range': ((1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 500, 1000, 2000),
'tfidf__use_idf': (True, False)
}
###Output
_____no_output_____
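###Markdown
With this many parameter combinations an exhaustive grid search becomes expensive; one option (a sketch only, not run in this notebook) is to sample a subset of the expanded grid with `RandomizedSearchCV` instead:
###Code
from sklearn.model_selection import RandomizedSearchCV
# sample 10 random settings from the grid defined above instead of trying every combination
random_search = RandomizedSearchCV(pipeline, param_distributions=parameters,
                                   n_iter=10, cv=3, random_state=42, n_jobs=-1)
#random_search.fit(X_train, y_train)
#random_search.best_params_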
###Markdown
9. Export your model as a pickle file
###Code
import pickle
filename = 'classifier.pkl'
pickle.dump(clf, open(filename, 'wb'))
#pickle.dump(clf, filename)
# load the model from disk
loaded_model = pickle.load(open(filename, 'rb'))
#result = loaded_model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
###Code
import sys
len(sys.argv), sys.argv
###Output
_____no_output_____
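###Markdown
A minimal sketch of how such a script could be organised is shown below; the function name `load_data`, the default table name and the command-line arguments are assumptions for illustration, not the actual template.
###Code
# Hypothetical outline of train.py (names and CLI arguments are assumptions)
import sys
import pickle
import pandas as pd
from sqlalchemy import create_engine
def load_data(database_filepath, table_name='InsertTableName1'):
    """Read the cleaned table from the SQLite database and split it into X and Y."""
    engine = create_engine('sqlite:///' + database_filepath)
    df = pd.read_sql_table(table_name, engine)
    X = df['message'].values
    Y = df.iloc[:, 4:].values
    return X, Y
def main():
    if len(sys.argv) != 3:
        print('Usage: python train.py <database_filepath> <model_filepath>')
        return
    database_filepath, model_filepath = sys.argv[1], sys.argv[2]
    X, Y = load_data(database_filepath)
    # build and fit the pipeline exactly as in the cells above, then save it with pickle:
    # pickle.dump(pipeline, open(model_filepath, 'wb'))
if __name__ == '__main__':
    main()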
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
#https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import re
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger','stopwords'])
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report
# load data from database
engine = create_engine('sqlite:///data/DisasterResponse.db')
df = pd.read_sql_table('DisasterResponseTable',engine)
X = df.message.values
Y = df.iloc[:,4:]
X[1:10]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
stop_words = stopwords.words("english")
# def tokenize(text):
# detected_urls = re.findall(url_regex, text)
# for url in detected_urls:
# text = text.replace(url, "urlplaceholder")
# tokens = word_tokenize(text)
# lemmatizer = WordNetLemmatizer()
# # lemmatize andremove stop words
# tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
# return tokens
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
tokenize(X[0])
def display_results(y_test, y_pred):
#labels = np.unique(y_pred)
#confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
accuracy = (y_pred == y_test).mean()
#print("Labels:", labels)
#print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators = 100)))
])
xgb_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(XGBClassifier(objective='binary:logistic')))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
%%time
# train classifier
pipeline.fit(X_train, y_train)
# predict on test data
y_pred = pipeline.predict(X_test)
# display results
display_results(y_test, y_pred)
print(classification_report(y_test.values, y_pred, target_names=Y.columns.values))
%%time
# train classifier
xgb_pipeline.fit(X_train, y_train)
# predict on test data
y_pred_xgb = xgb_pipeline.predict(X_test)
# display results
display_results(y_test, y_pred_xgb)
print(classification_report(y_test.values, y_pred_xgb, target_names=Y.columns.values))
###Output
precision recall f1-score support
related 0.85 0.93 0.89 5011
request 0.78 0.58 0.67 1099
offer 0.00 0.00 0.00 36
aid_related 0.78 0.66 0.71 2741
medical_help 0.58 0.27 0.37 546
medical_products 0.63 0.28 0.39 332
search_and_rescue 0.53 0.16 0.25 188
security 0.33 0.03 0.05 111
military 0.63 0.29 0.40 207
water 0.79 0.66 0.72 425
food 0.80 0.75 0.77 730
shelter 0.79 0.60 0.69 606
clothing 0.79 0.44 0.56 105
money 0.51 0.23 0.31 159
missing_people 0.57 0.15 0.24 78
refugees 0.68 0.23 0.35 239
death 0.74 0.53 0.62 304
other_aid 0.56 0.18 0.27 888
infrastructure_related 0.46 0.07 0.12 422
transport 0.69 0.24 0.35 318
buildings 0.69 0.39 0.50 341
electricity 0.64 0.32 0.43 131
tools 0.00 0.00 0.00 36
hospitals 0.71 0.06 0.11 80
shops 0.00 0.00 0.00 35
aid_centers 0.80 0.05 0.10 78
other_infrastructure 0.32 0.04 0.08 276
weather_related 0.85 0.71 0.77 1859
floods 0.87 0.54 0.67 538
storm 0.72 0.65 0.69 593
fire 0.64 0.27 0.38 67
earthquake 0.87 0.80 0.83 651
cold 0.73 0.48 0.58 130
other_weather 0.54 0.13 0.21 349
direct_report 0.75 0.50 0.60 1245
micro avg 0.79 0.61 0.69 20954
macro avg 0.62 0.35 0.42 20954
weighted avg 0.75 0.61 0.65 20954
samples avg 0.64 0.52 0.53 20954
###Markdown
From the results, two things can be inferred: there is something wrong with the **related** column and the **child_alone** column.
###Code
# investigate "related" and "child-alone" column
Y["related"].value_counts()
Y.columns
###Output
_____no_output_____
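###Markdown
One common way to handle these two columns (a sketch only, not applied in this notebook) is to cap **related** at 1 and drop **child_alone** if it is constant, so that every target is strictly binary:
###Code
# Sketch only: make every target column strictly binary
Y_clean = Y.copy()
# 'related' can take the value 2 in this dataset; cap it at 1
Y_clean['related'] = Y_clean['related'].clip(upper=1)
# 'child_alone' carries no signal if it is all zeros, so drop it in that case
if 'child_alone' in Y_clean.columns and Y_clean['child_alone'].nunique() == 1:
    Y_clean = Y_clean.drop('child_alone', axis=1)
Y_clean['related'].value_counts()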
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF A few important words like **water, blocked road, medical supplies** come up again and again in disaster-response messages, so we can create custom transformers like **StartingNounExtractor**, **StartingVerbExtractor** and **LengthExtractor** and add them to our pipeline.
###Code
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, X, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
# https://www.guru99.com/pos-tagging-chunking-nltk.html
class StartingNounExtractor(BaseEstimator, TransformerMixin):
def starting_noun(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['NN', 'NNS', 'NNP', 'NNPS'] or first_word == 'RT':
return True
return False
def fit(self, X, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_noun)
return pd.DataFrame(X_tagged)
# Not useful in this case
class LengthExtractor(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
return pd.Series(X).apply(len).values.reshape(-1,1)
###Output
_____no_output_____
###Markdown
Using FeatureUnion
###Code
rand_pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
#('length', LengthExtractor()),
#('starting_noun', StartingNounExtractor()),
('starting_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators = 100)))
])
boost_pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
#('length', LengthExtractor()),
('starting_noun', StartingNounExtractor()),
('starting_verb', StartingVerbExtractor())
])),
('xgbclassifier', MultiOutputClassifier(XGBClassifier(objective='binary:logistic',random_state = 42)))
])
%%time
# train classifier
rand_pipeline.fit(X_train, y_train)
#predict on test data
y_pred_rand = rand_pipeline.predict(X_test)
#display results
display_results(y_test, y_pred_rand)
%%time
# train classifier
boost_pipeline.fit(X_train, y_train)
#predict on test data
y_pred_boost = boost_pipeline.predict(X_test)
#display results
display_results(y_test, y_pred_boost)
###Output
Accuracy: related 0.822499
request 0.901491
offer 0.994314
aid_related 0.778393
medical_help 0.923313
medical_products 0.956047
search_and_rescue 0.971569
security 0.982019
military 0.972491
water 0.965729
food 0.952666
shelter 0.948210
clothing 0.988781
money 0.976641
missing_people 0.988935
refugees 0.967573
death 0.970954
other_aid 0.870140
infrastructure_related 0.933764
transport 0.956969
buildings 0.958506
electricity 0.982327
tools 0.994467
hospitals 0.987859
shops 0.994621
aid_centers 0.988474
other_infrastructure 0.954972
weather_related 0.878746
floods 0.956662
storm 0.946519
fire 0.991548
earthquake 0.967881
cold 0.985400
other_weather 0.947749
direct_report 0.871523
dtype: float64
CPU times: user 18min 41s, sys: 20.1 s, total: 19min 1s
Wall time: 2min 28s
###Markdown
As we can see, adding custom transformers like **StartingNounExtractor** and **StartingVerbExtractor** to our pipeline improves the accuracy, while **LengthExtractor** degrades it. Also, the XGBoost classifier works better than random forest, so we'll apply GridSearchCV to XGBoost. 6. Improve your modelUse grid search to find better parameters.
###Code
#REF : https://xgboost.readthedocs.io/en/latest/python/python_api.html
# https://www.kaggle.com/tilii7/hyperparameter-grid-search-with-xgboost
parameters = {
# 'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
# 'features__text_pipeline__vect__max_df': (0.5, 0.75, 1.0),
# 'features__text_pipeline__vect__max_features': (None, 5000, 10000),
# 'features__text_pipeline__tfidf__use_idf': (True, False),
# 'xgbclassifier__estimator__n_estimators': [50, 1000],
'xgbclassifier__estimator__learning_rate': [0.1, 0.5],
# 'xgbclassifier__estimator__max_depth': [3,5],
# 'xgbclassifier__estimator__gamma': [0.5, 2, 5],
# 'features__transformer_weights': (
# {'text_pipeline': 1, 'starting_verb': 0.5,'starting_noun': 0.5},
# {'text_pipeline': 0.5, 'starting_verb': 1,'starting_noun': 0.5},
# {'text_pipeline': 1, 'starting_verb': 0.5,'starting_noun': 1},
# {'text_pipeline': 0.8, 'starting_verb': 1,'starting_noun': 0.5},
# )
}
cv = GridSearchCV(boost_pipeline, param_grid=parameters,cv = 5)
%%time
cv.fit(X_train, y_train)
# predict on test data
y_pred_final = cv.predict(X_test)
# display results
display_results(y_test, y_pred_final)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
print(classification_report(y_test.values, y_pred_final, target_names=Y.columns.values))
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
import pickle
pickle.dump(cv, open('models/classifier.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
from sqlalchemy import create_engine
import pickle
# download NLTK data
import re
import nltk
nltk.download(['punkt', 'wordnet','stopwords'])
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
# load data from database
engine = create_engine('sqlite:///messages.db')
df = pd.read_sql_table('messages',engine)
X = df['message']
Y = df.iloc[:,4:]
categories = list(df.columns[4:])
X.head()
Y.head()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
'''
Applies Natural Language Processing to raw text, namely: normalizes case, removes punctuation and english stop words, tokenizes and lemmatizes words.
Args:
text: str - raw message (text) to be cleaned
Returns:
tokens: cleaned, tokenized and lemmatized text
'''
#Normalize case and remove punctuation
text = re.sub(r'[^a-zA-Z0-9]',' ' , text.lower())
#Split text into words
tokens = word_tokenize(text)
# Initiate Lemmatizer
lemmatizer = WordNetLemmatizer()
#Lemmatize and remove stop words
tokens = [lemmatizer.lemmatize(w) for w in tokens if w not in stopwords.words('english')]
return tokens
#test the tokenize function
for message in X[:5]:
tokens=tokenize(message)
print(message)
print(tokens, '\n')
###Output
Weather update - a cold front from Cuba that could pass over Haiti
['weather', 'update', 'cold', 'front', 'cuba', 'could', 'pas', 'haiti']
Is the Hurricane over or is it not over
['hurricane']
Looking for someone but no name
['looking', 'someone', 'name']
UN reports Leogane 80-90 destroyed. Only Hospital St. Croix functioning. Needs supplies desperately.
['un', 'report', 'leogane', '80', '90', 'destroyed', 'hospital', 'st', 'croix', 'functioning', 'need', 'supply', 'desperately']
says: west side of Haiti, rest of the country today and tonight
['say', 'west', 'side', 'haiti', 'rest', 'country', 'today', 'tonight']
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
#ML Pipeline using Random Forest Classifier
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
#Split data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, Y)
#Train pipeline
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
#Predict on test data
y_pred = pipeline.predict(X_test)
for i in range(Y.shape[1]):
    print('Category:', Y.columns[i], '\n', classification_report(y_test.iloc[:,i].values, y_pred[:,i]))
accuracy = (y_pred == y_test).mean()
avg_accuracy = accuracy.mean()
print("Accuracy:", accuracy)
print("Average Accuracy:", avg_accuracy)
#Function to calculate basic statistics for total accuracy of the model
def calculate_stats(accuracy):
'''
Takes a pandas Series of per-category accuracies and prints basic statistics: minimum, maximum, mean and median
Args:
accuracy: pandas Series - accuracy for each category
Returns: None
'''
minimum = accuracy.min()
maximum = accuracy.max()
mean = accuracy.mean()
median = accuracy.median()
print('Min:', minimum, '\n', 'Max:', maximum, '\n', 'Mean:', mean, '\n', 'Median:', median)
#Apply stats function to Random Forest Pipeline
calculate_stats(accuracy)
###Output
Min: 0.755538579068
Max: 1.0
Mean: 0.944346829641
Median: 0.958441558442
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
#Get pipeline parameters
pipeline.get_params()
parameters = {#'clf__estimator__bootstrap': [True,False],
#'clf__estimator__criterion': ['gini', 'entropy']
#'clf__estimator__n_estimators':[1,10,20,30,60],
'clf__estimator__n_estimators': [10,30]
}
cv = GridSearchCV(pipeline, param_grid=parameters)
cv.fit(X_train,y_train)
cv.best_estimator_
cv.best_params_
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
#Predict on test data with tuned model
y_pred_cv = cv.predict(X_test)
for i in range(Y.shape[1]):
    print('Category:', Y.columns[i], '\n', classification_report(y_test.iloc[:,i].values, y_pred_cv[:,i]))
accuracy_tunned = (y_pred_cv == y_test).mean()
avg_accuracy_tunned = accuracy_tunned.mean()
print("Accuracy Tunned:", accuracy_tunned)
print("Average Accuracy Tunned:", avg_accuracy_tunned)
###Output
Accuracy Tunned: related 0.814515
request 0.892743
offer 0.995416
aid_related 0.771123
medical_help 0.922383
medical_products 0.956914
search_and_rescue 0.973415
security 0.981054
military 0.966387
child_alone 1.000000
water 0.953094
food 0.940107
shelter 0.935523
clothing 0.986555
money 0.980138
missing_people 0.989610
refugees 0.970053
death 0.960886
other_aid 0.863866
infrastructure_related 0.938732
transport 0.957219
buildings 0.949885
electricity 0.980443
tools 0.993430
hospitals 0.989458
shops 0.996028
aid_centers 0.988694
other_infrastructure 0.959206
weather_related 0.882964
floods 0.949121
storm 0.944538
fire 0.989152
earthquake 0.971887
cold 0.980749
other_weather 0.951719
direct_report 0.852101
dtype: float64
Average Accuracy Tunned: 0.948030727442
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
#K Nearest Neighbors
from sklearn.neighbors import KNeighborsClassifier
#Pipeline with K Nearest Neighbors estimator
pipeline_knn = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier()))
])
#Train KNN pipeline
pipeline_knn.fit(X_train, y_train)
#Predict on test data with KNN classifier
y_pred_knn = pipeline_knn.predict(X_test)
for i in range(Y.shape[1]):
    print('Category:', Y.columns[i], '\n', classification_report(y_test.iloc[:,i].values, y_pred_knn[:,i]))
accuracy_knn = (y_pred_knn == y_test).mean()
avg_accuracy_knn = accuracy_knn.mean()
print("Accuracy KNN:", accuracy)
print("Average Accuracy KNN:", avg_accuracy_knn)
#AdaBoostClassifier
#Pipeline with AdaBoost Classifier
pipeline_boost = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
#Train AdaBoost pipeline
pipeline_boost.fit(X_train, y_train)
#Predict on test data with AdaBoost classifier
y_pred_boost = pipeline_boost.predict(X_test)
for i in range(Y.shape[1]):
    print('Category:', Y.columns[i], '\n', classification_report(y_test.iloc[:,i].values, y_pred_boost[:,i]))
accuracy_boost = (y_pred_boost == y_test).mean()
avg_accuracy_boost = accuracy_boost.mean()
print("Accuracy Boost:", accuracy_boost)
print("Average Accuracy AdaBoost:", avg_accuracy_boost)
#Accuracy for RandomForest Model
calculate_stats(accuracy)
#Accuracy for RandomForest Model Tunned
calculate_stats(accuracy_tunned)
#Accuracty for AdaBoost Model
calculate_stats(accuracy_boost)
#Accuracy for KNN Model
calculate_stats(accuracy_knn)
#We will consider Random Forest Tunned (with 30 estimators) as our final model
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
pickle.dump(cv, open('model.pkl', "wb"))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import re
import pickle
import nltk
import pandas as pd
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report
from sklearn.neighbors import KNeighborsClassifier
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
# load data from database
engine = create_engine('sqlite:///message_categories.db')
df = pd.read_sql('SELECT * FROM message_categories', engine)
X = df.loc[:, 'message']
Y = df.loc[:, 'related':'direct_report']
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
X.iloc[0]
lemmatizer = WordNetLemmatizer()
def tokenize(text):
'''Tokenize textual data to be processed by TFIDF vectorizers
params:
text - a string of textual data
returns:
clean_tokens - tokens from text
'''
text = re.sub(r'[^a-zA-Z0-9]', ' ', text)
clean_tokens = list()
tokens = word_tokenize(text)
removed_stopwords = [token for token in tokens if token not in stopwords.words('english')]
clean_tokens = [lemmatizer.lemmatize(word, pos='v').lower().strip() for word in removed_stopwords]
clean_tokens = [lemmatizer.lemmatize(token, pos='n').lower().strip() for token in clean_tokens]
return clean_tokens
for message in X.iloc[0:30]:
print(tokenize(message))
###Output
['weather', 'update', 'cold', 'front', 'cuba', 'could', 'pas', 'haiti']
['is', 'hurricane']
['looking', 'someone', 'name']
['un', 'report', 'leogane', '80', '90', 'destroy', 'only', 'hospital', 'st', 'croix', 'function', 'need', 'supply', 'desperately']
['say', 'west', 'side', 'haiti', 'rest', 'country', 'today', 'tonight']
['information', 'national', 'palace']
['storm', 'sacred', 'heart', 'jesus']
['please', 'need', 'tent', 'water', 'we', 'silo', 'thank']
['i', 'would', 'like', 'receive', 'message', 'thank']
['i', 'croix', 'de', 'bouquet', 'we', 'health', 'issue', 'they', 'worker', 'santo', '15', 'area', 'croix', 'de', 'bouquet']
['there', 'nothing', 'eat', 'water', 'starve', 'thirsty']
['i', 'petionville', 'i', 'need', 'information', 'regard', '4636']
['i', 'thomassin', 'number', '32', 'area', 'name', 'pyron', 'i', 'would', 'like', 'water', 'thank', 'god', 'fine', 'desperately', 'need', 'water', 'thanks']
['let', 'together', 'need', 'food', 'delma', '75', 'didine', 'area']
['more', 'information', '4636', 'number', 'order', 'participate', 'to', 'see', 'i', 'use']
['a', 'comitee', 'delmas', '19', 'rue', 'street', 'janvier', 'impasse', 'charite', '2', 'we', '500', 'people', 'temporary', 'shelter', 'dire', 'need', 'water', 'food', 'medication', 'tent', 'clothes', 'please', 'stop', 'see', 'u']
['we', 'need', 'food', 'water', 'klecin', '12', 'we', 'die', 'hunger', 'impasse', 'chretien', 'klecin', '12', 'extend', 'extension', 'we', 'hungry', 'sick']
['go', 'call', 'want', 'call', 'ou', 'let', 'know']
['i', 'understand', 'use', 'thing', '4636']
['i', 'would', 'like', 'know', 'earthquake', 'thanks']
['i', 'would', 'like', 'know', 'one', 'radio', 'ginen', 'journalist', 'die']
['i', 'laplaine', 'i', 'victim']
['there', 'lack', 'water', 'moleya', 'please', 'inform']
['those', 'people', 'live', 'sibert', 'need', 'food', 'hungry']
['i', 'want', 'say', 'hello', 'message', 'let', 'know', 'area', 'faustin', 'anhy', 'street', 'nothing', 'neither', 'food', 'water', 'medicine']
['can', 'tell', 'service']
['people', 'i', 'delma', '2', 'anything', 'ever', 'please', 'provide', 'u', 'food', 'water', 'medicine']
['we', 'gressier', 'need', 'assistance', 'right', 'away', 'asap', 'come', 'help', 'u']
['how', 'get', 'water', 'food', 'fontamara', '43', 'cite', 'tinante']
['we', 'need', 'help', 'carrefour', 'forget', 'completely', 'the', 'foul', 'odor', 'kill', 'u', 'just', 'let', 'know', 'thanks']
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf_trans', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators=5)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
(y_pred == y_test).mean().mean()
for i, class_ in enumerate(y_test.columns):
print(class_, classification_report(y_test.loc[:, class_].values, y_pred[:, i]))
pipeline.get_params()
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'clf__estimator__min_samples_split':[2, 5]
}
cv = GridSearchCV(pipeline, param_grid=parameters, cv=5, n_jobs=-1)
cv.fit(X_train, y_train)
y_pred_cv = cv.predict(X_test)
(y_pred_cv == y_test).mean().mean()
cv.best_params_
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
for i, class_ in enumerate(y_test.columns):
print(class_, classification_report(y_test.loc[:, class_].values, y_pred_cv[:, i]))
###Output
related precision recall f1-score support
0 0.62 0.41 0.49 1540
1 0.83 0.92 0.87 4961
2 0.58 0.13 0.22 53
avg / total 0.78 0.80 0.78 6554
request precision recall f1-score support
0 0.90 0.96 0.93 5420
1 0.73 0.50 0.60 1134
avg / total 0.87 0.88 0.87 6554
offer precision recall f1-score support
0 1.00 1.00 1.00 6526
1 0.00 0.00 0.00 28
avg / total 0.99 1.00 0.99 6554
aid_related precision recall f1-score support
0 0.77 0.80 0.78 3811
1 0.70 0.66 0.68 2743
avg / total 0.74 0.74 0.74 6554
medical_help precision recall f1-score support
0 0.93 0.99 0.96 6021
1 0.53 0.14 0.23 533
avg / total 0.90 0.92 0.90 6554
medical_products precision recall f1-score support
0 0.96 1.00 0.98 6234
1 0.63 0.16 0.26 320
avg / total 0.94 0.95 0.94 6554
search_and_rescue precision recall f1-score support
0 0.97 1.00 0.99 6361
1 0.56 0.08 0.14 193
avg / total 0.96 0.97 0.96 6554
security precision recall f1-score support
0 0.98 1.00 0.99 6432
1 0.00 0.00 0.00 122
avg / total 0.96 0.98 0.97 6554
military precision recall f1-score support
0 0.97 1.00 0.98 6328
1 0.56 0.15 0.24 226
avg / total 0.96 0.97 0.96 6554
child_alone precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water precision recall f1-score support
0 0.96 0.99 0.98 6154
1 0.75 0.37 0.50 400
avg / total 0.95 0.95 0.95 6554
food precision recall f1-score support
0 0.94 0.98 0.96 5830
1 0.79 0.49 0.60 724
avg / total 0.92 0.93 0.92 6554
shelter precision recall f1-score support
0 0.95 0.99 0.97 5984
1 0.76 0.43 0.55 570
avg / total 0.93 0.94 0.93 6554
clothing precision recall f1-score support
0 0.99 1.00 0.99 6453
1 0.82 0.14 0.24 101
avg / total 0.98 0.99 0.98 6554
money precision recall f1-score support
0 0.98 1.00 0.99 6405
1 0.47 0.05 0.09 149
avg / total 0.97 0.98 0.97 6554
missing_people precision recall f1-score support
0 0.99 1.00 0.99 6483
1 0.33 0.01 0.03 71
avg / total 0.98 0.99 0.98 6554
refugees precision recall f1-score support
0 0.97 1.00 0.98 6341
1 0.71 0.10 0.18 213
avg / total 0.96 0.97 0.96 6554
death precision recall f1-score support
0 0.97 0.99 0.98 6262
1 0.66 0.25 0.36 292
avg / total 0.95 0.96 0.95 6554
other_aid precision recall f1-score support
0 0.87 0.98 0.92 5641
1 0.40 0.09 0.15 913
avg / total 0.80 0.85 0.81 6554
infrastructure_related precision recall f1-score support
0 0.94 0.99 0.97 6182
1 0.24 0.03 0.05 372
avg / total 0.90 0.94 0.92 6554
transport precision recall f1-score support
0 0.96 0.99 0.97 6249
1 0.36 0.06 0.11 305
avg / total 0.93 0.95 0.93 6554
buildings precision recall f1-score support
0 0.96 0.99 0.98 6240
1 0.58 0.14 0.23 314
avg / total 0.94 0.95 0.94 6554
electricity precision recall f1-score support
0 0.98 1.00 0.99 6437
1 0.45 0.12 0.19 117
avg / total 0.97 0.98 0.98 6554
tools precision recall f1-score support
0 0.99 1.00 1.00 6519
1 0.00 0.00 0.00 35
avg / total 0.99 0.99 0.99 6554
hospitals precision recall f1-score support
0 0.99 1.00 1.00 6493
1 0.00 0.00 0.00 61
avg / total 0.98 0.99 0.99 6554
shops precision recall f1-score support
0 1.00 1.00 1.00 6529
1 1.00 0.04 0.08 25
avg / total 1.00 1.00 0.99 6554
aid_centers precision recall f1-score support
0 0.99 1.00 0.99 6487
1 0.00 0.00 0.00 67
avg / total 0.98 0.99 0.98 6554
other_infrastructure precision recall f1-score support
0 0.96 1.00 0.98 6301
1 0.13 0.02 0.03 253
avg / total 0.93 0.96 0.94 6554
weather_related precision recall f1-score support
0 0.88 0.93 0.90 4789
1 0.77 0.65 0.71 1765
avg / total 0.85 0.85 0.85 6554
floods precision recall f1-score support
0 0.96 0.99 0.98 6031
1 0.82 0.53 0.65 523
avg / total 0.95 0.95 0.95 6554
storm precision recall f1-score support
0 0.94 0.98 0.96 5973
1 0.69 0.37 0.48 581
avg / total 0.92 0.93 0.92 6554
fire precision recall f1-score support
0 0.99 1.00 1.00 6493
1 0.00 0.00 0.00 61
avg / total 0.98 0.99 0.99 6554
earthquake precision recall f1-score support
0 0.98 0.99 0.98 5923
1 0.87 0.77 0.82 631
avg / total 0.97 0.97 0.97 6554
cold precision recall f1-score support
0 0.98 1.00 0.99 6421
1 0.71 0.18 0.29 133
avg / total 0.98 0.98 0.98 6554
other_weather precision recall f1-score support
0 0.95 0.99 0.97 6231
1 0.26 0.03 0.06 323
avg / total 0.92 0.95 0.93 6554
direct_report precision recall f1-score support
0 0.87 0.95 0.91 5294
1 0.64 0.38 0.48 1260
avg / total 0.82 0.84 0.82 6554
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
for sentence in nltk.sent_tokenize(X_train.iloc[0]):
print(sentence)
print(nltk.pos_tag(tokenize(sentence)))
test = nltk.pos_tag(tokenize(X_train.iloc[29]))
def model_pipeline():
'''Build model pipelne
returns:
pipeline - a Pipeline object that can be fit to the data
'''
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf_trans', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier()))
])
return pipeline
improved_pipeline = model_pipeline()
improved_pipeline.fit(X_train, y_train)
y_pred_ip = improved_pipeline.predict(X_test)
(y_pred_ip == y_test).mean().mean()
###Output
_____no_output_____
###Markdown
The two models perform about the same, so I'll go with the random forest. With more time, I would have engineered more features than just the TF-IDF; word counts and the number of capital letters are a couple of ideas I would have liked to try. 9. Export your model as a pickle file
###Code
filename = 'rand_for_class.sav'
pickle.dump(pipeline, open(filename, 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
import nltk
nltk.download(['punkt','wordnet','stopwords'])
# import libraries
import pandas as pd
import numpy as np
import re
import joblib
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from nltk.corpus import stopwords
from sklearn.metrics import classification_report
from sklearn.metrics import precision_recall_fscore_support
from sklearn.utils.multiclass import type_of_target
# load data from database
engine = create_engine('sqlite:///Project3.db')
df = pd.read_sql_table('DisasterData',engine)
X=df.iloc[:,1]
Y=df.iloc[:,4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# remove punctuations
text = re.sub(r"[^a-zA-Z0-9]"," ",text)
# tokenize text into words
tokens = nltk.word_tokenize(text)
# remove stop words
tokens = [x for x in tokens if x not in stopwords.words("english")]
lemmatizer=WordNetLemmatizer()
clean_tokens=[]
for tok in tokens:
clean_tok=lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier()))])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# split data into train and test sets
X_train, X_test,y_train,y_test=train_test_split(X,Y,test_size=0.3,random_state=42 )
# train pipeline
pipeline.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred=pipeline.predict(X_test)
category_names=Y.columns.values
print (classification_report(y_test,y_pred,target_names=category_names))
###Output
precision recall f1-score support
related 1.00 0.03 0.07 58
request 0.80 0.41 0.54 1332
offer 0.00 0.00 0.00 36
aid_related 0.73 0.61 0.67 3219
medical_help 0.49 0.08 0.14 638
medical_products 0.73 0.08 0.14 418
search_and_rescue 0.75 0.05 0.09 192
security 0.00 0.00 0.00 144
military 0.54 0.09 0.15 245
child_alone 0.00 0.00 0.00 0
water 0.86 0.27 0.41 500
food 0.85 0.41 0.55 878
shelter 0.76 0.33 0.46 705
clothing 0.80 0.10 0.18 115
money 0.80 0.05 0.09 170
missing_people 0.57 0.04 0.08 92
refugees 0.44 0.05 0.08 260
death 0.78 0.13 0.22 366
other_aid 0.51 0.05 0.09 1033
infrastructure_related 0.40 0.00 0.01 505
transport 0.60 0.04 0.08 362
buildings 0.72 0.11 0.19 392
electricity 0.62 0.03 0.06 168
tools 0.00 0.00 0.00 48
hospitals 0.00 0.00 0.00 78
shops 0.00 0.00 0.00 28
aid_centers 0.00 0.00 0.00 103
other_infrastructure 0.33 0.01 0.01 341
weather_related 0.83 0.62 0.71 2163
floods 0.87 0.32 0.47 623
storm 0.77 0.46 0.57 738
fire 0.33 0.01 0.02 83
earthquake 0.87 0.75 0.80 702
cold 0.64 0.04 0.08 171
other_weather 0.47 0.05 0.09 415
direct_report 0.73 0.32 0.44 1544
avg / total 0.70 0.34 0.42 18865
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params().keys()
parameters = {'vect__ngram_range':((1,1),(1,2)),
'vect__max_df':(0.5,0.75,1.0)}
cv = GridSearchCV(estimator=pipeline,param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# fit the model
cv.fit(X_train,y_train)
# test the model
y_pred_cv = cv.predict(X_test)
category_names=Y.columns.values
print(classification_report(y_test,y_pred_cv,target_names=category_names))
###Output
precision recall f1-score support
related 0.67 0.03 0.07 58
request 0.79 0.47 0.59 1332
offer 0.00 0.00 0.00 36
aid_related 0.74 0.53 0.62 3219
medical_help 0.56 0.10 0.17 638
medical_products 0.79 0.08 0.15 418
search_and_rescue 0.82 0.05 0.09 192
security 0.20 0.01 0.01 144
military 0.45 0.04 0.07 245
child_alone 0.00 0.00 0.00 0
water 0.84 0.28 0.42 500
food 0.86 0.51 0.64 878
shelter 0.79 0.26 0.39 705
clothing 0.59 0.09 0.15 115
money 0.89 0.05 0.09 170
missing_people 0.00 0.00 0.00 92
refugees 0.40 0.07 0.12 260
death 0.78 0.19 0.30 366
other_aid 0.49 0.06 0.10 1033
infrastructure_related 0.33 0.01 0.01 505
transport 0.81 0.05 0.09 362
buildings 0.85 0.11 0.20 392
electricity 1.00 0.03 0.06 168
tools 0.00 0.00 0.00 48
hospitals 1.00 0.01 0.03 78
shops 0.00 0.00 0.00 28
aid_centers 0.00 0.00 0.00 103
other_infrastructure 0.20 0.00 0.01 341
weather_related 0.83 0.56 0.67 2163
floods 0.84 0.32 0.47 623
storm 0.74 0.35 0.48 738
fire 0.25 0.01 0.02 83
earthquake 0.87 0.68 0.76 702
cold 0.50 0.01 0.01 171
other_weather 0.58 0.03 0.05 415
direct_report 0.72 0.28 0.40 1544
avg / total 0.71 0.32 0.41 18865
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
from sklearn.tree import DecisionTreeClassifier
pipeline_new = Pipeline([('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(DecisionTreeClassifier()))])
#find better parameters
pipeline_new.get_params().keys()
parameter_tree={'clf__estimator__criterion':['gini'],
'clf__estimator__max_depth':[2,4,6]}
cv_new=GridSearchCV(estimator=pipeline_new,param_grid=parameter_tree)
# train the new pipeline
cv_new.fit(X_train,y_train)
# test the pipeline model
category_names=Y.columns.values
y_pred_tree=cv_new.predict(X_test)
print(classification_report(y_test,y_pred_tree,target_names=category_names))
###Output
precision recall f1-score support
related 0.73 0.14 0.23 58
request 0.79 0.41 0.54 1332
offer 0.00 0.00 0.00 36
aid_related 0.68 0.54 0.60 3219
medical_help 0.58 0.20 0.30 638
medical_products 0.71 0.29 0.41 418
search_and_rescue 0.60 0.24 0.34 192
security 0.20 0.01 0.03 144
military 0.48 0.24 0.32 245
child_alone 0.00 0.00 0.00 0
water 0.79 0.57 0.67 500
food 0.80 0.79 0.80 878
shelter 0.79 0.54 0.64 705
clothing 0.70 0.42 0.52 115
money 0.55 0.21 0.31 170
missing_people 0.71 0.18 0.29 92
refugees 0.60 0.30 0.40 260
death 0.77 0.51 0.61 366
other_aid 0.52 0.16 0.25 1033
infrastructure_related 0.38 0.03 0.05 505
transport 0.64 0.16 0.25 362
buildings 0.76 0.22 0.34 392
electricity 0.68 0.09 0.16 168
tools 0.00 0.00 0.00 48
hospitals 0.29 0.05 0.09 78
shops 0.00 0.00 0.00 28
aid_centers 0.31 0.04 0.07 103
other_infrastructure 0.42 0.05 0.08 341
weather_related 0.88 0.52 0.66 2163
floods 0.83 0.58 0.68 623
storm 0.75 0.60 0.67 738
fire 0.53 0.33 0.40 83
earthquake 0.88 0.80 0.84 702
cold 0.76 0.36 0.49 171
other_weather 0.57 0.16 0.25 415
direct_report 0.75 0.30 0.43 1544
avg / total 0.70 0.42 0.50 18865
###Markdown
9. Export your model as a pickle file
###Code
import pickle
filename='final_model.pkl'
with open(filename,'wb') as file:
    pickle.dump(cv_new, file)
###Output
_____no_output_____
###Markdown
Trying with an AdaBoostClassifier
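The `QuestionMarkCount`, `ExclamationPointCount`, `CapitalCount` and `WordCount` transformers used below are assumed to be defined in an earlier cell that is not shown here; a plausible sketch of such counting transformers (an assumption, not the original definitions) is:
###Code
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
class QuestionMarkCount(BaseEstimator, TransformerMixin):
    """Number of '?' characters in each message (sketch)."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return pd.Series(X).apply(lambda text: text.count('?')).values.reshape(-1, 1)
class ExclamationPointCount(BaseEstimator, TransformerMixin):
    """Number of '!' characters in each message (sketch)."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return pd.Series(X).apply(lambda text: text.count('!')).values.reshape(-1, 1)
class CapitalCount(BaseEstimator, TransformerMixin):
    """Number of upper-case characters in each message (sketch)."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return pd.Series(X).apply(lambda text: sum(ch.isupper() for ch in text)).values.reshape(-1, 1)
class WordCount(BaseEstimator, TransformerMixin):
    """Number of whitespace-separated tokens in each message (sketch)."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return pd.Series(X).apply(lambda text: len(text.split())).values.reshape(-1, 1)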
###Code
def build_model_v3():
# build pipeline
pipeline = Pipeline([
('features', FeatureUnion([
('textpipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, ngram_range=(1,2))),
('tfidf', TfidfTransformer(smooth_idf=False)),
])),
('qmark_count', QuestionMarkCount()),
('expoint_count', ExclamationPointCount()),
('capital_count', CapitalCount()),
('word_count', WordCount())
])),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
# define parameters
parameters = {
'clf__estimator__learning_rate': [0.7, 0.85, 1, 1.1],
'clf__estimator__n_estimators': [50, 100, 200],
        'features__transformer_weights': [{'textpipeline': 0.9, 'word_count': 0.025, 'qmark_count': 0.025, 'expoint_count': 0.025, 'capital_count': 0.025}]
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters)
return cv
# instantiate model and fit
print('Building model v3...')
model_v3 = build_model_v3()
print('Fitting model v3...')
model_v3.fit(X_train, y_train)
# predict on train data
y_pred_train = model_v3.predict(X_train)
# print model results
print(classification_report(y_train, y_pred_train, target_names=df.iloc[:, 4:].columns))
print('Validating model...')
# predict on test data
y_pred = model_v3.predict(X_test)
# print model results
print(classification_report(y_test, y_pred, target_names=df.iloc[:, 4:].columns))
print('Best model parameters...')
model_v3.best_params_
best_model_v3 = model_v3.best_estimator_
import joblib
# save model to disk
print('Saving model v3 to disk...')
filename = 'disaster_response_model_v3.sav'
joblib.dump(best_model_v3, open(filename, 'wb'))
def build_model_v4():
# build pipeline
pipeline = Pipeline([
('features', FeatureUnion([
('textpipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, ngram_range=(1,2))),
('tfidf', TfidfTransformer(smooth_idf=False)),
])),
('qmark_count', QuestionMarkCount()),
('expoint_count', ExclamationPointCount()),
('capital_count', CapitalCount()),
('word_count', WordCount())
])),
('clf', MultiOutputClassifier(GradientBoostingClassifier()))
])
# define parameters
parameters = {
'clf__estimator__max_depth': [3, 5, 8],
'clf__estimator__n_estimators': [100],
'clf__estimator__learning_rate': [0.55, 0.65, 0.07],
        'features__transformer_weights': [{'textpipeline': 0.9, 'word_count': 0.025, 'qmark_count': 0.025, 'expoint_count': 0.025, 'capital_count': 0.025}]
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters)
return cv
# instantiate model and fit
print('Building model v4...')
model_v4 = build_model_v4()
print('Fitting model v4...')
best_model_v4 = model_v4.fit(X_train, y_train)
print('Best model v4 parameters...')
model_v4.best_params_
print('Validating model v4...')
# predict on test data
y_pred = best_model_v4.predict(X_test)
# print model results
print(classification_report(y_test, y_pred, target_names=df.iloc[:, 4:].columns))
def build_model_v5():
# build pipeline
pipeline = Pipeline([
('features', FeatureUnion([
('textpipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, ngram_range=(1,2))),
('tfidf', TfidfTransformer(smooth_idf=False)),
])),
('qmark_count', QuestionMarkCount()),
('expoint_count', ExclamationPointCount()),
('capital_count', CapitalCount()),
('word_count', WordCount())
])),
('clf', MultiOutputClassifier(GradientBoostingClassifier()))
])
# define parameters
parameters = {
'clf__estimator__max_depth': [8],
'clf__estimator__n_estimators': [100],
'clf__estimator__learning_rate': [0.65, 0.07, 0.075],
        'features__transformer_weights': [{'textpipeline': 0.9, 'word_count': 0.025, 'qmark_count': 0.025, 'expoint_count': 0.025, 'capital_count': 0.025}]  # keys must match the FeatureUnion step names ('textpipeline', etc.)
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters)
return cv
###Output
_____no_output_____
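###Markdown
The FeatureUnion above references custom transformers (`QuestionMarkCount`, `ExclamationPointCount`, `CapitalCount`, `WordCount`) whose definitions are not repeated here. Below is a minimal sketch of the pattern such a count-based transformer can follow; the class name is illustrative and it assumes the transformer receives the raw message strings.
###Code
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class QuestionMarkCountSketch(BaseEstimator, TransformerMixin):
    """Illustrative count feature: number of '?' characters per message.
    A sketch of the pattern used by the custom transformers above, not their exact definition."""

    def fit(self, X, y=None):
        # stateless transformer: nothing to learn
        return self

    def transform(self, X):
        # X is an iterable of raw message strings; return a 2-D column of counts
        return np.array([[text.count('?')] for text in X])
###Output
_____no_output_____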
###Markdown
Measure Reported: weighted averages

| Model | Fitting | Precision | Recall | F1 Score |
| :---- | :---- | :-------: | :----: | :------: |
| Random Forest Classifier | grid search cross-validation | 0.78 | 0.47 | 0.53 |
| Random Forest Classifier | previous + 4 extra features | 0.75 | 0.43 | 0.49 |
| AdaBoost Classifier | as previous | 0.73 | 0.59 | 0.64 |
| Gradient Boosting Classifier | as previous | 0.72 | 0.61 | 0.64 |
###Code
# save model to disk
print('Saving model v4 to disk...')
filename = 'disaster_response_model_v4.sav'
joblib.dump(best_model_v4, open(filename, 'wb'))
for label in labels:
category_rows_mask = df[label] == 1
category_df = df[category_rows_mask]
category_size = category_df.shape[0]
if category_size > 0:
sample_size = 5
if category_size < 20:
sample_size = int(category_size / 4)
if sample_size < 1:
sample_size = 1
print("{} ({}) \n----".format(label, category_size))
sample = category_df['message'].sample(sample_size)
for index, text in sample.iteritems():
print("{}:\t{}".format(index, text))
print('\n\n')
###Output
request (3607)
----
2683: i m still waiting for your help. .. i'm starving, please bring me food
4686: THERE IS A MISTAKE IN THE FOOD DISTRIBUTION,SOME PEOPLE GIVE CARDS TO ONLY TO PEOPLE THEY KNOW..!! I HAVE TO BEG OTHER PEOPLE SO THEY CAN EAT,IT'S NOT FAIR.
2618: hello we are in ile a vache. in the trou milieu area. we have 13 people 2 babys among them
879: SORRY I GOT NOTHING TO HEAR NO POWER NO RADIO ONLY MY CELLPHONE PLEASE WRITE ME OR CALL ME I NEED YOUR HELP
3928: Oh my Gosh, we are dying with hunger and thirst in LIlavois 47.
offer (10)
----
255: How can we help the victims at Les Cayes?
3573: i want to give blood where do I go
aid_related (3931)
----
4759: Carrefour Feuilles needs food, drinking water and tents.
2465: We did not find any help in La Grenade, we still have people under the Rubble. We have no food and water.
570: IN MY CITY. WE WANTED YOUR HELP PLEASE WE NEED OF THE FOODS, WATERS. WEARS. HOUSES. BEACAUSE OURS HOUSES IS DESTROYED BY THE CATASTROPH. We are in the stre
114: I am in Petion Ville, in b. .. incomprehensible, we have no water, there is nothing, there is no money. What is being given in Petion Ville and where?
2256: Cit Militaire, we need water / food
medical_help (574)
----
5058: How many fatality missing do we have in Port-au-Prince?
4902: Peole that are living in La Montay especially in Lespinas need medical aid.
876: The house is broken. There are 5 people who have been injured. We need urgent assistance. Please call the number for location.
3866: .. psychologically I am really sick because my older brother died in front of me. He was the only person working to support the family financially. We need psychologist's help, please.
5227: There are a lot of diseases from infection in haiti -- what should we do?
medical_products (342)
----
2912: NO location : we need food, water, tents, diapers, cookies, sugar, please help us.
7557: My dady was dead long time ago, now I lose my mom and three of my brothers during the earthquake, every time I think about that, I can't support my head (headache), what can I do in that situation, it's very important.
10030: Dadu still needs food, medicines, cloths, metresses, blankets
2400: Need medicine and many tents. At Telandieu and Leonord. Thank you in advance.
433: they need help of every king, food, water, health services at Thomassin 32, 12 19 km east of Port-au-Prince. There are about 300 people
search_and_rescue (206)
----
934: Hello, we are in the Petionville area we need tents, food and water
593: we make an inventory. There are a lot of destryed houses. A lot of injured people. Lot of deads. It is a catastrophy. Please make an effort for these people. Our address is Route des Freres in Perrier. .. NEED SERIOUS HELP
10043: EVERY THING IS DAMAGE IN MY CITY TO FLOOD.CITY NAME DERA ALLAH YAR TEHSIL JHAT PAT DISTRICT JAFARABAD BALOCHISTAN.
4509: Good evening, we live in Bon Repos in the area of Rose amber (?) at the entrance of route .. Since January 12th, no one came to see us. Our house is destroyed, we are in the street and we are asking for aid.
7099: I AM SO HUNGRY ,I PRAY,BUT I CANNOT GET HELP CALL ME
security (129)
----
5216: How could you forget me, what will you do for me. I am suffering for three reasons, the first is food, the second is work, the third is sleep. I can't not suffer because I ..
2321: There are people under the Coeur Unis ( i'm guessing this is a church? ). And also, the hunger is killing me. Yesterday they pushed me so I did not recieve food. Fontamara 27. We need security a there are too many fights.
936: Help we need help we need, food, water and security, SOS they are going to kill us
9549: there is a expert will look-at the cracking house's after this earhquaque?i'm living at carade areatabarree i don't see them yet .i would like that,their visit my area.please
78: We would like to receive some help in the Section Communale. There is a lot of violence.
military (44)
----
936: Help we need help we need, food, water and security, SOS they are going to kill us
3323: I thought it was possible help would come from the forreigners here. Leogane, route ..
3041: no police officer ever there was only one since earthquake
6772: I'm on the ground,I'm not inside of the house please help me quickly.
4573: There is a group of young men with machetes that are causing trouble in the Abri area of Site Militaire. We need a police presence here, please.
water (789)
----
5230: I live in Marechal in the Gressier Commune. We need a tent, potable water, food.
5075: WE ARE RESPOSIBLE OF THEIR HEALTH WE NEED VACCIN ,DRINK WATER, NO WE DONT HAVE ANY IN FONTAMA
3859: My house was destroyed and my father and my child died. I hear there are foreigners giving aid in the country, but i cannot find even a little water. My friends are supporting me.
5388: In Delmas 33, Rue Charbonnire prolonge. We need water and tent.
346: We're asking you please to bring everything that's possible. Food, clothse, water, money to save those people's lives. Where we are people died, houses fell
food (1520)
----
5607: i would like to know where food and water are being distributed in carrefour and other areas
1189: There is so much hunger that if a person is eating a little something, somebody else takes it and runs away. As for water, tell them to come in the area every other day with treatment.
4086: WHERE CAN I FIND FOOD ? I AM A SURVIVOR
4745: WE ARE IN DELMAS 33 IN PREDAYE WE NEED FOOD TENTS CARE FOR ALL
2705: I don't have food. Please send us some food. I live signo across Hospital Cardinal Leger.
shelter (1088)
----
2672: we are in petit goave at liberte avenue. we have no house, no shelter, no water, no food. .. please help
9893: In our village Kachipul, flood affected very drastically. The flood destroys our crops, our houses and all belongings. The Govt. have not yet taken any steps to help us.
6344: If there is somewhere I can find a tent please let me know its very hard to be in the rain at 2:00 AM its very hard
9956: in sukkur there is desperate need of tents, clothes and medicines, even a strong need of powder milk
6987: We need of helps as : food, water, tent, toilet of any quality and others. We are locate at the Street of the mines.
clothing (100)
----
920: We have no food left. We're looking for some help with food. We don't have clothing issues nor water but we do not have any food left. ( incomplete )
2593: Hello we are OSCB social organisation in petionville road #28 we need shelter, we need everything possible
6133: We are in need of assistance. we are abandonned here in petionville between Dirgue Road and the Health center.
688: I need food and clothes, I am in Lasile.
1117: Things aren't good at all we as you to send something B?l?s riy?l Charles no 17, we need tents, food and medicine
money (125)
----
2255: I have many problems. I have nothing to eat. please, please put some money on my card so i can call someone. please, thanks in advance.
3323: I thought it was possible help would come from the forreigners here. Leogane, route ..
2532: Good evening, this is the commune of Thomazeau, the first section of Trou Caiman. Things are not good at all. a little can of rice is 50, see what you can do for us. Au Revoir
3379: I am not sick thanks God. However I am in dire need of food in order to survive
3039: Please help me with the earthquake victims,they left P-au-P and come to countryside,I helped them with the little money that I have.
missing_people (83)
----
2353: I cant contact my family since the devastation in Port-au-prince because my account is expired, i cant use a card, please help me
834: The authorities from Gressier hasnt done anything yet until this day. They only decided to have a meeting earlier at 2 pm, there are 6 People under the Rubbles. .. ( Msg lost )
862: I am found. I'm in Cap Haitian. My house in port-au-prince is destroyed
1533: which radio station should I listen to to find out information about someone who went to get medical care in Saint-Domingue?
8530: hi 4636 did you give the news for tonight on an eventuel earthquake I heard a lot persons say that, I would want that you gave me more precision
refugees (167)
----
913: My friends, we are asking for water and food.
8303: United nation see what you can do for us because we don't find anything,we have some people sick, here what we need: medicine, covers, our house is broke down. Claude felix ask that.
3529: Hi, I am one of the victims, I am asking the people in charge to send help to mondestin thimothe, I had a business and it's destroyed.
8530: hi 4636 did you give the news for tonight on an eventuel earthquake I heard a lot persons say that, I would want that you gave me more precision
2430: . .. 3106 childrens, 2353 childrens with one parents, 206 orphans, 257 young, more than 500 families, 1189 adults, 152 elderly persons, 4000 refugees. .. . ..
death (250)
----
7468: Condolences to all the nations whose soldier died in this catastrophe in Haiti on January 12.
48: Am listening to radio in Jacmel. Need help to remove dead bodies at Colege la Trinite-universite, the bodies are the professors and students
646: Please, I am suffering. give me a cahnce to save a life. please call this number so i can be involved. thak you four comprehension
3888: We are starving. We possess (have) knowledge, we can not find work. What to do?
795: hello. please, i would like to join my family in the US
other_aid (1459)
----
1892: I'm a victim in Casale. Want to know when there will be more earthquake. Thanks.
6314: Organizasyon CADEL asks to help them to save 309 families, 1600 persons in Leogane montay palmistaven. Thaks.
3673: Is the the hospital in Delmas that is working?
3766: I am a victim of the earthquake. My house is destroyed and I am now in Les Cayes with 2 young children and I don't have anything
5657: We, at Fougy before the GRIZ Rivera in the new road, are in need of umbrella, food and lines
infrastructure_related (313)
----
9271: Good evening, Haiti has many problems, but they are those of the Haitians tou nan manch. That UNICEF and other ONG which made gifts at the public schools make inventories, because the children n' do not have where to sit down.
2459: Good morning governemnt of this country. i am totally sorry becuase of a disaster ina country. ther eis a lot of damage in a city of p . ..
5471: We are in Bon Repos. We ask for tents because if the rain keeps falling we will have damage. If you are ready this message, have pity on us. God will bless you.
2668: could you give help by giving some portable toilet that would really help because our house and toilet is crushed
7896: the information we have to know about:Cyclone,health,education
transport (192)
----
3327: The people of ? need roads, electricity, medicine because there's an epidemic attacking them
732: It is cold in Cuba this morning. It could reach Haiti tomorrow. Some showers are predicted for our area tonight.
138: Can people enter their houses? When will we have electricity?
9890: In our village kachipul,flood caused huge loss.We lost Our crops,our houses and our jobs and every thing.But the government has not provided any sort of help for us so far
3901: I thought the aftershocks were over. It seems that they are still producing. Can they alert us on that please.
buildings (380)
----
9003: good afternoon, can we get into our house if it's not cracked?
3220: If a house is not cracked can we go inside it?
4862: I need food, I am in Gonaives, I came from PAP, my house was destroyed. Please call me at this number
4894: I AM IN MISERY MY HOUSE IS BROCKEN ,I FOUND NOTHING TO EAT,PLEASE HELP ME
126: I am from Anse a pitree my house which was in Delmas 32 was destroyed with everything I had inside. I went back to my hometown of Anse a pitrea. I would like to know how to get some help, because I have absolutely nothing!
electricity (66)
----
2627: help : water, food, way to have ( electricity )
3323: I thought it was possible help would come from the forreigners here. Leogane, route ..
8198: Do each haitian make money less than the money hi can eat each day, it's not a help but it's a way to continue with operating system.
4561: We are asking the ministry of public health to please help with the flies and mosquitoes that are in the shelters. Especially in Canape Vert
4761: You need to give electricity in Petion-ville now.
tools (28)
----
3936: Diapers, etc. Have big truck huge to store goods f/distribution and security. Rte Clercine, Butte Boyer, (apres hotel Stephia Hotel), Impasse Gelin #2,
9344: Great I am nesly please tell me whether it is true that a volcano in Saut-d'Eau, and I want to know whether the election is effectively true, if yes when he is doing
2580: weneed help in Ravine Pintade, right across from Olympic market
2379: I would like to know what is happening in the country
773: i have a problem talking to people in port au prince, please its talking to people god bless us
hospitals (53)
----
9053: what hospital is on for someone Emens tonight.
3791: Where can I find a health center in La Plaine
3144: The Doctors without Borders Hospital in Delmas 19 is closed. The Saint Louis Gonzaga hospital in Delmas 33 is taken in sick and wounded people for free
6762: the United Nation don't do nothing in Haiti
2104: Please, help me. I need clothes or anything.
shops (31)
----
4862: I need food, I am in Gonaives, I came from PAP, my house was destroyed. Please call me at this number
5417: People at Port Jeremie need tents and all other possible kinds of equipment.
168: One thing I am asking the money tranfer offices is for them to open so we can get the money sent to us.
9903: In our village Kachipur flood has done great damage. our crops, homes and business all have been destroyed. But the Government has not helped us till now. Location is Village Kachipul District Kambarshada
2723: i am asking that the authorities help us. victims association christophe avenue fanfan impasse
aid_centers (74)
----
8260: Will everybody find shelter ? Answer
2835: Even that our father has died, we want to become professionals in all kind of fields of study. Help us find adequate shelter in other for the children to go to school. Best regards and may God bless you.
4938: Please, I would like to know what precautions there are for me to prevent all diseases from the catastrophe on January 12.
2256: Cit Militaire, we need water / food
1758: will there be another earthquake this afternoon?
other_infrastructure (178)
----
4667: I live in Carrefour-Feuilles, on Rue Sicot. I don't have shelter, food or water.
823: I salute all those that are in charge at Digicel. In the name of God, I am a client of Digicel who is a victim of the earthquake that happened on January 12. I have problem. My house is cracked. I would like to go to Jeremie. Notes No name or location given.
9906: We are 17 people. Our house has been immersed(/flooded/inundated completely) and all property and livestock has been washed away by the flood. And our house was in Kachi. I am telling the truth. Thank you.
1511: I need help my house collapsed and I'm in the streets
170: Good morning, to everyone that is listening in Miami and other countries helping. I have my wife and five kids that will starve to death in Haiti if they do not get help. PLease help them! God will bless you!
weather_related (1444)
----
8033: Earthquake in venezuela of 8.2, tsunami alert for the following island : Dominican republic, Haiti, Puerto rico, Jamaica, Trinidad and tobago, and virgin
5060: News regarding the Earthquake in Haiti
81: A cold front is found over Cuba this morning. It could cross Haiti tomorrow. Isolated rain showers are expected over our region tonight.
7072: Please, give me some informations about the cyclon.
3326: I need clothes, shoes, food. Right now. Ansdeno grandoie
floods (282)
----
10093: Sir, I request we KRT AP AP K K Kon kro justice of the United Nations have a right to the national highway Kon, Agr Han AP waqe Effecte flood affected people are madad KRT
3401: NOTES: Statement. No emergency.
1300: Hello I'd like to know if really there will be more earthquakes again this weekend
782: I live in Gonaives. I need help for the hospital in Raboto. We need water, food, and medication because we have a thousand people who need medical attention right now.
9966: we are 100 flood effected .are we not register as a Pakistani nationality?we are requesting for you for the help. please do help us something location VILLAG CITY KACHIPUL Dist!Kamber Shahdad kot Taluka Qubo plz Vist Kachipul city
storm (275)
----
81: A cold front is found over Cuba this morning. It could cross Haiti tomorrow. Isolated rain showers are expected over our region tonight.
7685: Good evening, can I t o have the name of evey cyclones of this season,please?
4436: we do not understand why we cant get a tent for kids that are 5 and 4 months old...last night it was raining on the kids
6863: information about the hurrcane please. Thank you
9237: They say there is a hurrican ,would it be stared by rain.
fire (38)
----
108: We have a factory that is on FIRE on road to the airport near Sogebank. It's starting to burn several nearby houses with documents left in them. Please come and help us!!!
3425: What should one do if they have a lot of bumps/pimples that are growing?
4439: we are a group of women in twitye in carrefour. we would like to know where we can get coupons or cards to receive food
3950: the hospital sans frontiere need blood for those who still live,we need care now otherwise we die
773: i have a problem talking to people in port au prince, please its talking to people god bless us
earthquake (789)
----
9082: what informations at the level of sismic plan in Haiti?
3165: Information on the earthquake.
9194: Is it true from 17 through April 23 earthquake is more than strong? answer please.
4578: We are in Fontamara. We HAVE NOT RECEIVED ANYTHING SINCE the earthquake.
1863: when will this end, is it possible that there will be another earthquake in Port au Prince?
cold (59)
----
2064: We need potable water, food, many tents or the cold will kill us, medecine for flu, infection and fever etc. We have many children with us. We. ..
81: A cold front is found over Cuba this morning. It could cross Haiti tomorrow. Isolated rain showers are expected over our region tonight.
2980: Food is needed in Tabarre 43, Tapage Street, Paul Emile Street, Rapha Street and Rabbi Street..
4568: A cold front was found in Cuba this morning. It is coming to Haiti tomorrow. Looks like there will be another round of rain showers this evening.
2580: weneed help in Ravine Pintade, right across from Olympic market
other_weather (194)
----
4111: Still in the area Fort Jack, route Kalbas. We have yet to find food along with the fact that there are people who need tents/place to stay because their houses fell. Give them a card
2838: Hello, please can you help me? I have someone who is under debris since 15 days ago. It's at the Canape Vert school. He had gone to learn about the part of education. ..
934: Hello, we are in the Petionville area we need tents, food and water
2636: Delma 24, impass madiou, we need water and food they have not deliver any help for us
259: aid should reach the victims outside the city of p-au-p. we are in Gonaives when u get this message
direct_report (3467)
----
8253: when the ambushes of this century, hold closes even when life puts you in discomfiture, hold closes even that one of condescends your family, so God will facilitate you shortcoming
9037: urgent information: i responsible of a center name Oris Remy. im living in Leogane, chatile area. we found nothing since the earthquake, im asking for help.
3129: In Jacmel the aid is poorly organized. They only go to one place, Park Pinchinat, they don't come by to see the rest of the people who are left. It's only the rastas and the strongmen who get aid..
9146: informations on the next earthquake
2808: we need help in the town of ( bas Saintard, guitton, jean hose, mahotte ) in Arcahaie. A lot of people migrating from the capital. We need food,
###Markdown
ML Pipeline Preparation

Follow the instructions below to help you create your ML pipeline.

1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sqlalchemy import create_engine
from nltk.stem.porter import PorterStemmer
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
# load data from database
DATABASE_FILENAME = '../db.sqlite3'
TABLE_NAME = 'disaster_message'
engine = create_engine('sqlite:///' + DATABASE_FILENAME)
df = pd.read_sql_table(TABLE_NAME, engine)
X = df['message']
Y = df.iloc[:, 4:]
category_names = list(df.columns[4:])
X
df.columns[4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# normalize text and remove punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
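###Markdown
A quick sanity check of the tokenizer on a sample message (output not recorded here): punctuation should be stripped, text lower-cased and tokens lemmatized.
###Code
# try the tokenizer on an example disaster message
tokenize("We are trapped! Please send water, food and medicine to Delmas 33.")
###Output
_____no_output_____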
###Markdown
3. Build a machine learning pipeline

This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
from sklearn.multioutput import MultiOutputClassifier
pipeline = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier(n_jobs=-1))),
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train,X_test,y_train,y_test = train_test_split(X,Y)
###Output
_____no_output_____
###Markdown
5. Test your model

Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
%%time
pipeline.fit(X_train,y_train)
%%time
y_pred = pipeline.predict(X_test)
from sklearn.metrics import classification_report,accuracy_score
y_pred.shape,y_test.shape,len(list(Y.columns))
print(classification_report(y_test.iloc[:, 1:].values, np.array([x[1:] for x in y_pred])))
###Output
precision recall f1-score support
0 0.88 0.45 0.60 1131
1 0.00 0.00 0.00 33
2 0.79 0.62 0.70 2722
3 0.65 0.06 0.11 540
4 0.80 0.07 0.13 328
5 0.86 0.03 0.07 172
6 0.00 0.00 0.00 120
7 0.63 0.05 0.10 232
8 0.00 0.00 0.00 0
9 0.93 0.23 0.37 427
10 0.87 0.48 0.62 729
11 0.85 0.24 0.38 596
12 0.86 0.07 0.14 80
13 1.00 0.04 0.07 171
14 1.00 0.01 0.03 75
15 0.75 0.01 0.03 206
16 0.90 0.12 0.21 298
17 0.47 0.02 0.03 853
18 0.14 0.00 0.00 404
19 0.71 0.09 0.16 298
20 0.83 0.07 0.13 335
21 1.00 0.02 0.03 116
22 0.00 0.00 0.00 37
23 0.00 0.00 0.00 65
24 0.00 0.00 0.00 36
25 0.00 0.00 0.00 70
26 0.00 0.00 0.00 268
27 0.85 0.65 0.74 1852
28 0.87 0.41 0.56 527
29 0.80 0.43 0.56 590
30 0.00 0.00 0.00 62
31 0.89 0.76 0.82 636
32 0.70 0.06 0.10 127
33 0.55 0.02 0.03 348
34 0.84 0.38 0.52 1249
micro avg 0.83 0.36 0.50 15733
macro avg 0.58 0.15 0.21 15733
weighted avg 0.75 0.36 0.44 15733
samples avg 0.39 0.21 0.26 15733
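###Markdown
The instructions above suggest reporting the metrics per output category; the cell above prints one combined report instead. A minimal sketch of the per-column variant, assuming `y_test` is the label DataFrame and `y_pred` holds the predictions from the fitted pipeline above:
###Code
# one report per category, as suggested in the instructions
for i, column in enumerate(y_test.columns):
    print(column.upper())
    print(classification_report(y_test.iloc[:, i], y_pred[:, i]))
###Output
_____no_output_____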
###Markdown
6. Improve your model

Use grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'vect__max_df': (0.5, 0.75, 1.0),
'vect__ngram_range': ((1, 1), (1,2)),
'vect__stop_words':(None,'english'),
'vect__max_features': (None, 5000,10000),
'tfidf__use_idf': (True, False)
}
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
Wall time: 80.6 ms
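###Markdown
The grid above spans 3 x 2 x 2 x 3 x 2 = 72 candidate settings, and the exhaustive fit recorded further down took close to 11 hours. One way to cut that cost is to sample the grid instead of exhausting it; a minimal sketch using `RandomizedSearchCV` with fewer CV folds, assuming the same `pipeline` and `parameters`:
###Code
from sklearn.model_selection import RandomizedSearchCV

# sample 10 of the 72 candidate settings with 3-fold CV instead of the full grid
cv_random = RandomizedSearchCV(
    pipeline,
    param_distributions=parameters,
    n_iter=10,
    cv=3,
    n_jobs=-1,
    random_state=42
)
# cv_random.fit(X_train, y_train)  # left commented: still a long-running step
###Output
_____no_output_____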
###Markdown
7. Test your model

Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
%%time
cv.fit(X_train,y_train)
print('Best Parameters:', cv.best_params_)
#Best Parameters: {'tfidf__use_idf': True, 'vect__max_df': 0.75, 'vect__max_features': 5000, 'vect__ngram_range': (1, 2), 'vect__stop_words': 'english'}
#Wall time: 10h 54min 48s
%%time
y_pred = cv.predict(X_test)
len(y_pred)
print(classification_report(y_test.iloc[:, 1:].values, np.array([x[1:] for x in y_pred])))
###Output
precision recall f1-score support
0 0.79 0.50 0.61 1131
1 0.00 0.00 0.00 33
2 0.74 0.70 0.72 2722
3 0.63 0.16 0.25 540
4 0.75 0.20 0.31 328
5 0.74 0.13 0.23 172
6 0.17 0.01 0.02 120
7 0.55 0.11 0.19 232
8 0.00 0.00 0.00 0
9 0.85 0.55 0.67 427
10 0.80 0.74 0.77 729
11 0.81 0.50 0.62 596
12 0.83 0.24 0.37 80
13 0.86 0.04 0.07 171
14 1.00 0.03 0.05 75
15 0.58 0.17 0.26 206
16 0.76 0.34 0.47 298
17 0.61 0.08 0.14 853
18 0.12 0.00 0.00 404
19 0.57 0.10 0.17 298
20 0.80 0.23 0.36 335
21 0.42 0.04 0.08 116
22 0.00 0.00 0.00 37
23 0.00 0.00 0.00 65
24 0.00 0.00 0.00 36
25 0.00 0.00 0.00 70
26 0.00 0.00 0.00 268
27 0.83 0.73 0.78 1852
28 0.86 0.54 0.66 527
29 0.76 0.66 0.71 590
30 0.00 0.00 0.00 62
31 0.89 0.81 0.84 636
32 0.65 0.19 0.29 127
33 0.53 0.08 0.14 348
34 0.74 0.37 0.50 1249
micro avg 0.78 0.45 0.57 15733
macro avg 0.53 0.23 0.29 15733
weighted avg 0.71 0.45 0.52 15733
samples avg 0.42 0.27 0.31 15733
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF

Random forest takes forever to run, so following (https://scikit-learn.org/stable/modules/sgd.html) I decided to try another method, Stochastic Gradient Descent (SGD); a sketch of such an SGD pipeline is included after the AdaBoost results below. Another feature could be a HashingVectorizer, but it ran for too long, so I gave up and tried two other algorithms instead, MultinomialNB and AdaBoostClassifier. So far AdaBoostClassifier's performance is almost as good as random forest, with far less run time.
###Code
from sklearn.naive_bayes import MultinomialNB
pipeline_MNB = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf',MultiOutputClassifier(MultinomialNB())),
])
%%time
pipeline_MNB.fit(X_train,y_train)
y_pred_MNB = pipeline_MNB.predict(X_test)
y_pred_MNB.shape,y_test.shape,len(list(Y.columns))
print(classification_report(y_test.iloc[:, 1:].values, np.array([x[1:] for x in y_pred_MNB])))
from sklearn.ensemble import AdaBoostClassifier

pipeline_Ada = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf',MultiOutputClassifier(AdaBoostClassifier()))
])
%%time
pipeline_Ada.fit(X_train,y_train)
y_pred_ada = pipeline_Ada.predict(X_test)
print(classification_report(y_test.iloc[:, 1:].values, np.array([x[1:] for x in y_pred_ada])))
pipeline_Ada.get_params()
parameters_Ada = {
    'vect__stop_words': (None, 'english'),
'vect__max_features': (None, 5000,10000),
'tfidf__use_idf': (True, False)
}
cv_Ada = GridSearchCV(pipeline_Ada, param_grid=parameters_Ada)
%%time
cv_Ada.fit(X_train,y_train)
print('Best Parameters:', cv_Ada.best_params_)
#Best Parameters: {'tfidf__use_idf': True, 'vect__max_features': 10000, 'vect__stop_words': None}
#Wall time: 33min 31s
%%time
y_pred_cv_ada = cv_Ada.predict(X_test)
print(classification_report(y_test.iloc[:, 1:].values, np.array([x[1:] for x in y_pred_cv_ada])))
###Output
precision recall f1-score support
0 0.77 0.55 0.64 1131
1 0.00 0.00 0.00 33
2 0.76 0.60 0.67 2722
3 0.66 0.27 0.39 540
4 0.66 0.32 0.43 328
5 0.60 0.16 0.26 172
6 0.23 0.04 0.07 120
7 0.58 0.31 0.41 232
8 0.00 0.00 0.00 0
9 0.73 0.59 0.65 427
10 0.80 0.68 0.73 729
11 0.77 0.55 0.64 596
12 0.76 0.51 0.61 80
13 0.64 0.22 0.32 171
14 0.73 0.21 0.33 75
15 0.59 0.25 0.35 206
16 0.65 0.41 0.50 298
17 0.52 0.15 0.24 853
18 0.41 0.11 0.18 404
19 0.64 0.22 0.33 298
20 0.66 0.41 0.50 335
21 0.48 0.27 0.34 116
22 0.00 0.00 0.00 37
23 0.20 0.08 0.11 65
24 0.00 0.00 0.00 36
25 0.30 0.09 0.13 70
26 0.39 0.10 0.16 268
27 0.85 0.67 0.75 1852
28 0.83 0.58 0.68 527
29 0.77 0.51 0.61 590
30 0.57 0.27 0.37 62
31 0.89 0.77 0.82 636
32 0.62 0.28 0.38 127
33 0.46 0.16 0.24 348
34 0.70 0.49 0.58 1249
micro avg 0.74 0.47 0.58 15733
macro avg 0.55 0.31 0.38 15733
weighted avg 0.70 0.47 0.56 15733
samples avg 0.40 0.28 0.31 15733
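###Markdown
Section 8 above mentions trying Stochastic Gradient Descent, but no SGD pipeline appears in this notebook. Here is a minimal sketch of what it could look like, assuming the same `tokenize` function and train/test split; the fit is left commented because labels with a single class in the training split (e.g. `child_alone`) would need to be dropped first, since linear classifiers require at least two classes.
###Code
from sklearn.linear_model import SGDClassifier

# linear model trained with SGD; typically much faster to fit than a random forest on sparse TF-IDF features
pipeline_SGD = Pipeline([
    ('vect', CountVectorizer(tokenizer=tokenize)),
    ('tfidf', TfidfTransformer()),
    ('clf', MultiOutputClassifier(SGDClassifier(max_iter=1000)))
])
# pipeline_SGD.fit(X_train, y_train.drop(columns=['child_alone']))
# y_pred_sgd = pipeline_SGD.predict(X_test)
###Output
_____no_output_____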
###Markdown
9. Export your model as a pickle file
###Code
import joblib
#save best parm model from random forest model.
joblib.dump(cv, 'randomF.pkl')
joblib.dump(cv.best_estimator_, 'randomF_best.pkl')
#save ada model with the best estimator
joblib.dump(cv_Ada, 'Ada.pkl')
joblib.dump(cv_Ada.best_estimator_, 'Ada_best.pkl')
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation

Follow the instructions below to help you create your ML pipeline.

1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import nltk
nltk.download(['punkt', 'wordnet'])
nltk.download('stopwords')
import re
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import confusion_matrix, classification_report,fbeta_score
from sklearn.model_selection import train_test_split,cross_val_score, GridSearchCV, KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
# load data from database
engine = create_engine('sqlite:///Messages.db')
df = pd.read_sql("SELECT * FROM Messages", engine)
X = df['message']
y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
df.head(7)
###Output
_____no_output_____
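###Markdown
Before modelling, it can help to see how imbalanced the 36 categories are, since that drives much of the precision/recall behaviour later. A quick check, assuming `y` is the label DataFrame defined above:
###Code
# number of positive examples per category, smallest first; highlights rare labels such as 'child_alone' and 'offer'
y.sum().sort_values().head(10)
###Output
_____no_output_____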
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# Normalize text
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# Tokenize text
tokens = word_tokenize(text)
# Remove stop words
tokens = [w for w in tokens if w not in stopwords.words("english")]
# Reduce words to their root form
clean_tokens = [WordNetLemmatizer().lemmatize(w) for w in tokens]
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline

This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
# train classifier
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model

Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
col_names = list(y.columns.values)
y_pred_test = pipeline.predict(X_test)    # predictions on the held-out test set, reported below
y_pred_train = pipeline.predict(X_train)  # predictions on the training set (not reported here)
for i in range(len(col_names)):
    print((y_test.columns[i]).upper(), ':')
    print(classification_report(y_test.iloc[:, i], y_pred_test[:, i]))
###Output
RELATED :
precision recall f1-score support
related 0.62 0.47 0.54 1528
request 0.85 0.91 0.88 4962
offer 0.38 0.33 0.35 64
avg / total 0.79 0.80 0.79 6554
REQUEST :
precision recall f1-score support
related 0.90 0.98 0.94 5469
request 0.79 0.45 0.57 1085
avg / total 0.88 0.89 0.88 6554
OFFER :
precision recall f1-score support
related 0.99 1.00 1.00 6509
request 0.00 0.00 0.00 45
avg / total 0.99 0.99 0.99 6554
AID_RELATED :
precision recall f1-score support
related 0.76 0.84 0.80 3913
request 0.73 0.62 0.67 2641
avg / total 0.75 0.75 0.75 6554
MEDICAL_HELP :
precision recall f1-score support
related 0.93 0.99 0.96 6072
request 0.60 0.10 0.18 482
avg / total 0.91 0.93 0.91 6554
MEDICAL_PRODUCTS :
precision recall f1-score support
related 0.96 1.00 0.98 6238
request 0.75 0.09 0.15 316
avg / total 0.95 0.95 0.94 6554
SEARCH_AND_RESCUE :
precision recall f1-score support
related 0.98 1.00 0.99 6395
request 0.50 0.08 0.13 159
avg / total 0.97 0.98 0.97 6554
SECURITY :
precision recall f1-score support
related 0.98 1.00 0.99 6435
request 0.00 0.00 0.00 119
avg / total 0.96 0.98 0.97 6554
MILITARY :
precision recall f1-score support
related 0.97 1.00 0.98 6349
request 0.55 0.08 0.14 205
avg / total 0.96 0.97 0.96 6554
CHILD_ALONE :
precision recall f1-score support
related 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
WATER :
precision recall f1-score support
related 0.96 1.00 0.98 6169
request 0.86 0.38 0.52 385
avg / total 0.96 0.96 0.95 6554
FOOD :
precision recall f1-score support
related 0.96 0.99 0.97 5899
request 0.83 0.59 0.69 655
avg / total 0.94 0.95 0.94 6554
SHELTER :
precision recall f1-score support
related 0.94 0.99 0.97 5975
request 0.83 0.35 0.49 579
avg / total 0.93 0.94 0.92 6554
CLOTHING :
precision recall f1-score support
related 0.99 1.00 0.99 6461
request 0.75 0.06 0.12 93
avg / total 0.98 0.99 0.98 6554
MONEY :
precision recall f1-score support
related 0.98 1.00 0.99 6392
request 0.83 0.03 0.06 162
avg / total 0.97 0.98 0.96 6554
MISSING_PEOPLE :
precision recall f1-score support
related 0.99 1.00 0.99 6482
request 0.00 0.00 0.00 72
avg / total 0.98 0.99 0.98 6554
REFUGEES :
precision recall f1-score support
related 0.97 1.00 0.98 6347
request 0.54 0.06 0.11 207
avg / total 0.96 0.97 0.96 6554
DEATH :
precision recall f1-score support
related 0.96 1.00 0.98 6267
request 0.70 0.16 0.27 287
avg / total 0.95 0.96 0.95 6554
OTHER_AID :
precision recall f1-score support
related 0.87 0.99 0.93 5689
request 0.49 0.06 0.10 865
avg / total 0.82 0.87 0.82 6554
INFRASTRUCTURE_RELATED :
precision recall f1-score support
related 0.94 1.00 0.97 6137
request 0.20 0.01 0.01 417
avg / total 0.89 0.94 0.91 6554
TRANSPORT :
precision recall f1-score support
related 0.96 1.00 0.98 6261
request 0.58 0.05 0.09 293
avg / total 0.94 0.96 0.94 6554
BUILDINGS :
precision recall f1-score support
related 0.96 1.00 0.98 6233
request 0.74 0.16 0.26 321
avg / total 0.95 0.96 0.94 6554
ELECTRICITY :
precision recall f1-score support
related 0.98 1.00 0.99 6408
request 0.67 0.04 0.08 146
avg / total 0.97 0.98 0.97 6554
TOOLS :
precision recall f1-score support
related 1.00 1.00 1.00 6525
request 0.00 0.00 0.00 29
avg / total 0.99 1.00 0.99 6554
HOSPITALS :
precision recall f1-score support
related 0.99 1.00 0.99 6483
request 0.00 0.00 0.00 71
avg / total 0.98 0.99 0.98 6554
SHOPS :
precision recall f1-score support
related 1.00 1.00 1.00 6530
request 0.00 0.00 0.00 24
avg / total 0.99 1.00 0.99 6554
AID_CENTERS :
precision recall f1-score support
related 0.99 1.00 0.99 6476
request 0.00 0.00 0.00 78
avg / total 0.98 0.99 0.98 6554
OTHER_INFRASTRUCTURE :
precision recall f1-score support
related 0.96 1.00 0.98 6270
request 0.38 0.01 0.02 284
avg / total 0.93 0.96 0.94 6554
WEATHER_RELATED :
precision recall f1-score support
related 0.87 0.95 0.91 4735
request 0.83 0.63 0.72 1819
avg / total 0.86 0.86 0.86 6554
FLOODS :
precision recall f1-score support
related 0.95 0.99 0.97 6032
request 0.84 0.34 0.49 522
avg / total 0.94 0.94 0.93 6554
STORM :
precision recall f1-score support
related 0.93 0.99 0.96 5926
request 0.74 0.33 0.46 628
avg / total 0.91 0.92 0.91 6554
FIRE :
precision recall f1-score support
related 0.99 1.00 1.00 6488
request 1.00 0.05 0.09 66
avg / total 0.99 0.99 0.99 6554
EARTHQUAKE :
precision recall f1-score support
related 0.98 0.99 0.98 5959
request 0.87 0.77 0.82 595
avg / total 0.97 0.97 0.97 6554
COLD :
precision recall f1-score support
related 0.98 1.00 0.99 6422
request 0.80 0.06 0.11 132
avg / total 0.98 0.98 0.97 6554
OTHER_WEATHER :
precision recall f1-score support
related 0.95 1.00 0.97 6210
request 0.55 0.07 0.12 344
avg / total 0.93 0.95 0.93 6554
DIRECT_REPORT :
precision recall f1-score support
related 0.86 0.97 0.91 5318
request 0.71 0.33 0.45 1236
avg / total 0.83 0.85 0.82 6554
###Markdown
6. Improve your model

Use grid search to find better parameters.
###Code
pipeline.get_params()
from sklearn.metrics import accuracy_score, make_scorer,fbeta_score
parameters = [
{
"clf__estimator__n_estimators": [50, 100, 150],
"clf__estimator__max_depth":[8],
# "clf__estimator__random_state":[42],
"clf__estimator__min_samples_split": [2, 3, 4]}
]
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs=4, verbose=2)
# cv = GridSearchCV(
# pipeline,
# parameters,
# cv=5,
# scoring='accuracy',
# n_jobs=-1)
cv.fit(X_train, y_train)
best_model=cv.best_estimator_
y_pred = cv.predict(X_test)
print (cv.best_params_)
###Output
{'clf__estimator__max_depth': 8, 'clf__estimator__min_samples_split': 3, 'clf__estimator__n_estimators': 150}
###Markdown
7. Test your model

Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# confusion matrix usage to evaluate the quality of the output of a classifier on the data set
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
for i in range(36):
cm=confusion_matrix(y_test.iloc[:,i], y_pred[:,i])
plt.matshow(cm)
plt.title(y_test.columns[i]+" confusion matrix ")
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
print('')
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF

9. Export your model as a pickle file
###Code
import pickle
pickle.dump(best_model, open('final_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation

Follow the instructions below to help you create your ML pipeline.

1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import nltk
import pickle
nltk.download(['punkt','wordnet'])
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import AdaBoostClassifier
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('messages_cat', engine)
X = df['message']
Y = df.iloc[:, 4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
- You'll find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier(), n_jobs=-1))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your model

Report the f1 score, precision and recall on both the training set and the test set. You can use sklearn's `classification_report` function here.
###Code
print(classification_report(y_test, y_pred, target_names=y_test.columns))
pipeline_os = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
])
X_new = pipeline_os.fit_transform(X_train)
from imblearn.over_sampling import SMOTE  # oversampling for the rare 'fire' label; not imported with the libraries above

smt = SMOTE()
os_X_train, os_y_train = smt.fit_sample(X_new, y_train['fire'])
rf = RandomForestClassifier(n_estimators=100)
rf.fit(os_X_train, os_y_train)
X_test_new = pipeline_os.transform(X_test)
X_test_new.shape
os_y_pred = rf.predict(X_test_new)
y_test.shape
print(classification_report(y_test['fire'], os_y_pred))
###Output
_____no_output_____
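###Markdown
The cells above oversample only the `fire` label by vectorizing the text outside the pipeline. A hedged alternative is imbalanced-learn's own `Pipeline`, which applies SMOTE during `fit` only and skips it at predict time. A minimal sketch for a single binary label, assuming `imblearn` is installed and reusing `tokenize` from above:
###Code
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline
from sklearn.ensemble import RandomForestClassifier

# SMOTE is applied to the vectorized training data inside fit(), not at prediction time
fire_pipeline = ImbPipeline([
    ('vect', CountVectorizer(tokenizer=tokenize)),
    ('tfidf', TfidfTransformer()),
    ('smote', SMOTE()),
    ('clf', RandomForestClassifier(n_estimators=100))
])
# fire_pipeline.fit(X_train, y_train['fire'])
# print(classification_report(y_test['fire'], fire_pipeline.predict(X_test)))
###Output
_____no_output_____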
###Markdown
6. Improve your model

Use grid search to find better parameters.
###Code
parameters = {
'vect__max_df': (0.5, 0.75),
'vect__max_features': (None, 5000, 10000),
'tfidf__use_idf': (True, False),
#'clf__estimator__n_estimators': [50, 100],
#'clf__estimator__learning_rate': [0.1, 1, 3]
}
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs = -1, verbose=10, scoring='f1_weighted')
cv.get_params().keys()
cv_fit = cv.fit(X_train, y_train)
y_pred = cv_fit.best_estimator_.predict(X_test)
print(classification_report(y_test, y_pred, target_names=y_test.columns))
from sklearn.externals import joblib
joblib.dump(cv_fit,'rf.model')
cv_fit = joblib.load('rf.model')
###Output
_____no_output_____
###Markdown
7. Test your model

Show the accuracy, precision, and recall of the tuned model.

8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF

9. Export your model as a pickle file
###Code
pkl_filename = "pickle_model.pkl"
with open(pkl_filename, 'wb') as file:
pickle.dump(cv_fit.best_estimator_, file)
# Load from file
with open(pkl_filename, 'rb') as file:
pickle_model = pickle.load(file)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation

Follow the instructions below to help you create your ML pipeline.

1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
import pickle
from sqlalchemy import create_engine
import re
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('wordnet')
nltk.download('words')
from nltk.corpus import stopwords
from nltk import pos_tag, ne_chunk
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix,classification_report, accuracy_score, recall_score, precision_score
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV,cross_val_score, cross_validate
from sklearn.metrics import fbeta_score, make_scorer, SCORERS
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql("SELECT * from Disaster_Response",engine)
X = df["message"].values
Y = (df[['related', 'request', 'offer', 'aid_related', 'medical_help', 'medical_products',
'search_and_rescue', 'security', 'military', 'child_alone', 'water', 'food', 'shelter',
'clothing', 'money', 'missing_people', 'refugees', 'death', 'other_aid',
'infrastructure_related', 'transport', 'buildings', 'electricity', 'tools',
'hospitals', 'shops', 'aid_centers', 'other_infrastructure',
'weather_related', 'floods', 'storm', 'fire', 'earthquake', 'cold', 'other_weather', 'direct_report']].values)
print(X[0],Y[0])
df.head()
X[0]
Y[0]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# normalizing all the text
text = text.lower()
#removing extra characters
text = re.sub(r"[^a-zA-Z0-9]", " ", text)
#tokenizing all the sentences
words = word_tokenize(text)
#removing stopwords
words = [w for w in words if w not in stopwords.words("english")]
# Reduce words to their stems
stemmed = [PorterStemmer().stem(w) for w in words]
# Lemmatize verbs by specifying pos
lemmed = [WordNetLemmatizer().lemmatize(w, pos='v') for w in stemmed]
#tagging parts of speech
#sentence = pos_tag(lemmed)
#named entities
#tree = ne_chunk(sentence)
return lemmed
def display_results(y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline

This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(estimator = RandomForestClassifier(n_jobs =-1)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
# train classifier
pipeline.fit(X_train, y_train)
###Output
C:\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
  "10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# predict on test data
y_pred = pipeline.predict(X_test)
# display results
display_results(y_test[0], y_pred[0])
###Output
Labels: [0 1]
Confusion Matrix:
[[29 0]
[ 4 3]]
Accuracy: 0.8888888888888888
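###Markdown
Note that `y_test[0]` and `y_pred[0]` above are the label vector of a single test message rather than a single category. The per-category `classification_report` loop described in the instructions could look like the sketch below (an illustrative sketch, assuming `y_test` and `y_pred` hold one column per category as elsewhere in this notebook).
###Code
# Sketch: iterate over the 36 output columns and report each category separately.
import numpy as np
from sklearn.metrics import classification_report

y_true_arr = np.asarray(y_test)   # works for both DataFrame and ndarray
for i in range(y_pred.shape[1]):
    print('Category {}'.format(i))
    print(classification_report(y_true_arr[:, i], y_pred[:, i]))
###Output
_____no_output_____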
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
#clf = MultiOutputClassifier(estimator = RandomForestClassifier( random_state = 1, n_jobs = -1, oob_score = True))
# Create the parameters list you wish to tune, using a dictionary if needed.
parameters = {}
parameters["clf__estimator__oob_score"] = [True]
parameters["clf__estimator__n_estimators"] = [10,20,50,100]
#parameters["clf__estimator__max_features"] = ["auto"]
#parameters["clf__estimator__min_samples_leaf"] = [5,10,20,30,50,100,150,200,300,400,500]
# Make an fbeta_score scoring object using make_scorer()
#scorer = make_scorer(fbeta_score, beta=.5, average = "micro")
# Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = GridSearchCV(pipeline, parameters)
# Fit the grid search object to the training data and find the optimal parameters using fit()
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = y_pred
best_predictions = best_clf.predict(X_test)
display_results(y_test[0], best_predictions[0])
# Report the before-and-after scores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test[0], predictions[0])))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test[0], predictions[0], beta = 0.5, average = "micro")))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test[0], best_predictions[0])))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test[0], best_predictions[0], beta = 0.5, average = "micro")))
###Output
Unoptimized model
------
Accuracy score on testing data: 0.8889
F-score on testing data: 0.8889
Optimized Model
------
Final accuracy score on the testing data: 0.8889
Final F-score on the testing data: 0.8889
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
columns = ['related', 'request', 'offer', 'aid_related', 'medical_help', 'medical_products',
'search_and_rescue', 'security', 'military', 'child_alone', 'water', 'food', 'shelter',
'clothing', 'money', 'missing_people', 'refugees', 'death', 'other_aid',
'infrastructure_related', 'transport', 'buildings', 'electricity', 'tools',
'hospitals', 'shops', 'aid_centers', 'other_infrastructure',
'weather_related', 'floods', 'storm', 'fire', 'earthquake', 'cold', 'other_weather', 'direct_report']
for i,col in enumerate(columns):
print(col)
accuracy = accuracy_score(y_test[i], best_predictions[i])
precision = precision_score(y_test[i], best_predictions[i])
recall = recall_score(y_test[i], best_predictions[i])
print("\tAccuracy: %.4f\tPrecision: %.4f\t Recall: %.4f\n" % (accuracy, precision, recall))
###Output
related
Accuracy: 0.8889 Precision: 1.0000 Recall: 0.4286
request
Accuracy: 1.0000 Precision: 1.0000 Recall: 1.0000
offer
Accuracy: 0.8889 Precision: 0.2000 Recall: 1.0000
aid_related
Accuracy: 0.9444 Precision: 0.8333 Recall: 0.8333
medical_help
Accuracy: 0.9722 Precision: 1.0000 Recall: 0.6667
medical_products
Accuracy: 0.8611 Precision: 0.0000 Recall: 0.0000
search_and_rescue
Accuracy: 0.7222 Precision: 0.6667 Recall: 0.1818
security
Accuracy: 0.9167 Precision: 1.0000 Recall: 0.5000
military
Accuracy: 1.0000 Precision: 1.0000 Recall: 1.0000
child_alone
Accuracy: 0.9722 Precision: 1.0000 Recall: 0.8571
water
Accuracy: 1.0000 Precision: 1.0000 Recall: 1.0000
food
Accuracy: 0.9167 Precision: 0.5000 Recall: 0.3333
shelter
Accuracy: 0.9444 Precision: 1.0000 Recall: 0.6667
clothing
Accuracy: 0.9444 Precision: 0.8333 Recall: 0.8333
money
Accuracy: 0.8611 Precision: 1.0000 Recall: 0.5455
missing_people
Accuracy: 0.9722 Precision: 0.0000 Recall: 0.0000
refugees
Accuracy: 0.9444 Precision: 1.0000 Recall: 0.6667
death
Accuracy: 0.9444 Precision: 0.8333 Recall: 0.8333
other_aid
Accuracy: 0.9167 Precision: 1.0000 Recall: 0.2500
infrastructure_related
Accuracy: 1.0000 Precision: 1.0000 Recall: 1.0000
transport
Accuracy: 0.9722 Precision: 1.0000 Recall: 0.8000
buildings
Accuracy: 1.0000 Precision: 0.0000 Recall: 0.0000
electricity
Accuracy: 0.9722 Precision: 0.5000 Recall: 1.0000
tools
Accuracy: 1.0000 Precision: 1.0000 Recall: 1.0000
hospitals
Accuracy: 0.9722 Precision: 1.0000 Recall: 0.6667
shops
Accuracy: 0.9722 Precision: 0.0000 Recall: 0.0000
aid_centers
Accuracy: 0.9167 Precision: 1.0000 Recall: 0.4000
other_infrastructure
Accuracy: 0.9167 Precision: 1.0000 Recall: 0.4000
weather_related
Accuracy: 0.9444 Precision: 1.0000 Recall: 0.6000
floods
Accuracy: 0.9444 Precision: 1.0000 Recall: 0.6000
storm
Accuracy: 0.9444 Precision: 1.0000 Recall: 0.6667
fire
Accuracy: 0.9444 Precision: 1.0000 Recall: 0.3333
earthquake
Accuracy: 1.0000 Precision: 1.0000 Recall: 1.0000
cold
Accuracy: 0.9722 Precision: 0.0000 Recall: 0.0000
other_weather
Accuracy: 0.9722 Precision: 0.0000 Recall: 0.0000
direct_report
Accuracy: 1.0000 Precision: 1.0000 Recall: 1.0000
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
grid_fit.best_params_
###Output
_____no_output_____
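###Markdown
The cell above only inspects the winning grid-search parameters. As a rough sketch of the two suggestions (another algorithm, another feature), one could add TF-IDF weighting on top of the counts and swap the forest for AdaBoost; the `tokenize`, `display_results`, `X_train`/`y_train` names are assumed to be the ones defined earlier in this notebook, and this is not the author's chosen improvement.
###Code
# Sketch only: TF-IDF weighting plus an AdaBoost base estimator.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import AdaBoostClassifier

alt_pipeline = Pipeline([
    ('vect', CountVectorizer(tokenizer=tokenize)),   # token counts
    ('tfidf', TfidfTransformer()),                   # reweight counts by TF-IDF
    ('clf', MultiOutputClassifier(AdaBoostClassifier())),
])
alt_pipeline.fit(X_train, y_train)
display_results(y_test[0], alt_pipeline.predict(X_test)[0])
###Output
_____no_output_____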
###Markdown
9. Export your model as a pickle file
###Code
filename = 'model_pipeline.pkl'
pickle.dump(pipeline, open(filename, 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
import nltk
from nltk import word_tokenize
import time
import re
from sklearn.externals import joblib
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
pd.set_option('mode.chained_assignment', None)
nltk.download('punkt')
# load data from database
engine = create_engine('sqlite:///disaster_messages.db')
df = pd.read_sql_table(table_name='messages_categories',con=engine)
X = df.message
y = df.iloc[:,3:]
X.shape,y.shape
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
words = word_tokenize(text)
return words
###Output
_____no_output_____
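###Markdown
The tokenizer above only splits the text into words. A slightly richer variant (a sketch, not the version used for the results below) would also normalize case, strip punctuation, drop English stop words and lemmatize:
###Code
# Sketch of a normalizing tokenizer: lowercase, keep alphanumerics,
# remove stop words, lemmatize.
import re
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download('stopwords')
nltk.download('wordnet')

def tokenize_normalized(text):
    text = re.sub(r'[^a-zA-Z0-9]', ' ', text.lower())
    lemmatizer = WordNetLemmatizer()
    stop_words = set(stopwords.words('english'))
    return [lemmatizer.lemmatize(tok)
            for tok in word_tokenize(text)
            if tok not in stop_words]
###Output
_____no_output_____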
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('clf', MultiOutputClassifier(RandomForestClassifier(random_state=42))),
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
start = time.time()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42,test_size=0.3)
pipeline.fit(X_train, y_train)
end = time.time()
time_elapsed = (end - start)/60
print('Training data size:{} documents.'.format(X_train.shape[0]))
print('Baseline model took {} minutes to train.'.format(round(time_elapsed,2)))
###Output
Training data size:18351 documents.
Baseline model took 1.13 minutes to train.
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
print(classification_report(y_pred,y_test,target_names=y_test.columns))
###Output
precision recall f1-score support
related 0.94 0.81 0.87 6850
request 0.37 0.83 0.51 586
offer 0.00 0.00 0.00 0
aid_related 0.52 0.75 0.61 2218
medical_help 0.07 0.58 0.12 76
medical_products 0.07 0.75 0.13 40
search_and_rescue 0.05 0.56 0.10 18
security 0.00 0.00 0.00 2
military 0.09 0.62 0.15 34
water 0.25 0.78 0.37 158
food 0.34 0.79 0.48 382
shelter 0.25 0.79 0.38 224
clothing 0.06 1.00 0.11 7
money 0.05 0.67 0.09 12
missing_people 0.02 1.00 0.04 2
refugees 0.03 0.41 0.06 22
death 0.14 0.89 0.24 56
other_aid 0.03 0.53 0.06 66
infrastructure_related 0.01 0.54 0.03 13
transport 0.11 0.68 0.19 59
buildings 0.06 0.68 0.12 37
electricity 0.07 0.85 0.12 13
tools 0.00 0.00 0.00 0
hospitals 0.00 0.00 0.00 1
shops 0.00 0.00 0.00 0
aid_centers 0.00 0.00 0.00 0
other_infrastructure 0.00 0.00 0.00 1
weather_related 0.53 0.84 0.65 1355
floods 0.34 0.86 0.49 249
storm 0.38 0.74 0.50 376
fire 0.11 0.90 0.19 10
earthquake 0.44 0.87 0.58 354
cold 0.13 0.76 0.22 29
other_weather 0.03 0.58 0.05 19
direct_report 0.31 0.77 0.44 615
avg / total 0.67 0.80 0.70 13884
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {
'clf__estimator__min_samples_leaf' : [1,5,10],
'clf__estimator__max_features' : ["auto",'log2']
}
cv = GridSearchCV(
estimator = pipeline,
param_grid = parameters,
cv = 3,
n_jobs = -1,
scoring = 'f1_samples',
return_train_score = True
)
cv.fit(X_train, y_train)
###Output
_____no_output_____
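###Markdown
Once the grid search has finished, the fitted object already exposes the winning configuration, so it can be inspected and reused directly instead of re-typing the parameters by hand as in the next cell. A small sketch (note that `classification_report` expects the true labels as its first argument):
###Code
# Inspect and reuse the grid-search winner directly (sketch).
print(cv.best_params_)
y_pred_best = cv.best_estimator_.predict(X_test)
print(classification_report(y_test, y_pred_best, target_names=y_test.columns))
###Output
_____no_output_____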
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
tuned_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('clf', MultiOutputClassifier(
RandomForestClassifier(
random_state=42,
max_features = 'auto',
min_samples_leaf = 1,
n_estimators = 200
)
)
),
])
tuned_pipeline.fit(X_train, y_train)
y_pred_tuned = tuned_pipeline.predict(X_test)
print(classification_report(y_pred_tuned,y_test,target_names=y_test.columns))
###Output
precision recall f1-score support
related 0.97 0.80 0.88 7211
request 0.42 0.89 0.57 626
offer 0.00 0.00 0.00 0
aid_related 0.61 0.78 0.69 2507
medical_help 0.07 0.72 0.12 60
medical_products 0.07 0.79 0.14 39
search_and_rescue 0.03 0.50 0.05 10
security 0.00 0.00 0.00 1
military 0.04 0.64 0.07 14
water 0.26 0.90 0.40 142
food 0.43 0.89 0.58 429
shelter 0.25 0.82 0.38 215
clothing 0.06 0.88 0.11 8
money 0.05 0.80 0.09 10
missing_people 0.02 1.00 0.04 2
refugees 0.01 0.30 0.02 10
death 0.11 0.78 0.19 51
other_aid 0.02 0.63 0.04 30
infrastructure_related 0.01 0.38 0.01 8
transport 0.11 0.76 0.20 54
buildings 0.09 0.83 0.16 41
electricity 0.04 0.86 0.07 7
tools 0.00 0.00 0.00 0
hospitals 0.00 0.00 0.00 1
shops 0.00 0.00 0.00 0
aid_centers 0.00 0.00 0.00 1
other_infrastructure 0.00 0.00 0.00 3
weather_related 0.62 0.86 0.72 1552
floods 0.42 0.88 0.57 300
storm 0.44 0.79 0.56 414
fire 0.01 0.33 0.02 3
earthquake 0.69 0.87 0.77 550
cold 0.05 0.90 0.10 10
other_weather 0.01 0.67 0.03 9
direct_report 0.34 0.85 0.49 619
avg / total 0.73 0.81 0.74 14937
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
improved_pipeline = Pipeline([
('vect', TfidfVectorizer(tokenizer=tokenize,stop_words='english')),
(
'clf', MultiOutputClassifier(
AdaBoostClassifier(
random_state = 42,
learning_rate = 0.3,
n_estimators = 200
)
)
),
])
improved_pipeline.fit(X_train, y_train)
y_pred_improved = improved_pipeline.predict(X_test)
print(classification_report(y_pred_improved,y_test,target_names=y_test.columns))
###Output
precision recall f1-score support
related 0.97 0.79 0.87 7259
request 0.46 0.83 0.59 738
offer 0.00 0.00 0.00 5
aid_related 0.58 0.78 0.67 2408
medical_help 0.17 0.61 0.27 178
medical_products 0.22 0.73 0.34 128
search_and_rescue 0.12 0.60 0.21 40
security 0.00 0.00 0.00 3
military 0.17 0.53 0.26 80
water 0.64 0.73 0.68 437
food 0.71 0.83 0.76 756
shelter 0.50 0.83 0.62 426
clothing 0.38 0.68 0.49 65
money 0.23 0.57 0.33 68
missing_people 0.12 0.55 0.20 20
refugees 0.20 0.66 0.30 77
death 0.36 0.81 0.50 162
other_aid 0.08 0.61 0.14 129
infrastructure_related 0.05 0.65 0.09 37
transport 0.17 0.83 0.29 76
buildings 0.30 0.82 0.44 142
electricity 0.17 0.63 0.27 46
tools 0.00 0.00 0.00 4
hospitals 0.04 0.25 0.07 12
shops 0.00 0.00 0.00 9
aid_centers 0.06 0.35 0.10 17
other_infrastructure 0.03 0.56 0.05 16
weather_related 0.60 0.88 0.71 1483
floods 0.53 0.88 0.66 374
storm 0.44 0.76 0.56 432
fire 0.12 0.45 0.19 22
earthquake 0.78 0.88 0.83 620
cold 0.30 0.75 0.43 68
other_weather 0.06 0.49 0.11 51
direct_report 0.35 0.75 0.48 729
avg / total 0.71 0.79 0.72 17117
###Markdown
9. Export your model as a pickle file
###Code
joblib.dump(improved_pipeline, 'model.pkl')
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import sqlalchemy
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt','wordnet'])
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
import re
import numpy as np
import pandas as pd
import pickle
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import classification_report
# load data from database
# engine = create_engine('sqlite:///InsertDatabaseName.db')
engine = create_engine('sqlite:///disaster_response.db')
df = pd.read_sql("SELECT * FROM disaster_categories", engine)
X = df.message.values
y = df.iloc[:, 4:].values
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
]))
])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
pipeline.score(X_train, y_train)
accuracy = (y_pred == y_test).mean()
print("Accuracy:", accuracy)
for i in range(0,36):
print("Category: "+str(i+1), classification_report([row[i] for row in y_test], [row[i] for row in y_pred]))
###Output
Category: 1 precision recall f1-score support
0 0.62 0.37 0.46 1540
1 0.82 0.93 0.87 4962
2 0.75 0.14 0.24 43
avg / total 0.77 0.79 0.77 6545
Category: 2 precision recall f1-score support
0 0.88 0.98 0.93 5405
1 0.83 0.39 0.53 1140
avg / total 0.88 0.88 0.86 6545
Category: 3 precision recall f1-score support
0 1.00 1.00 1.00 6521
1 0.00 0.00 0.00 24
avg / total 0.99 1.00 0.99 6545
Category: 4 precision recall f1-score support
0 0.73 0.89 0.80 3857
1 0.77 0.53 0.63 2688
avg / total 0.75 0.74 0.73 6545
Category: 5 precision recall f1-score support
0 0.92 1.00 0.96 6020
1 0.67 0.07 0.12 525
avg / total 0.90 0.92 0.89 6545
Category: 6 precision recall f1-score support
0 0.96 1.00 0.98 6235
1 0.79 0.07 0.13 310
avg / total 0.95 0.96 0.94 6545
Category: 7 precision recall f1-score support
0 0.98 1.00 0.99 6363
1 0.61 0.12 0.20 182
avg / total 0.97 0.97 0.96 6545
Category: 8 precision recall f1-score support
0 0.98 1.00 0.99 6437
1 0.25 0.01 0.02 108
avg / total 0.97 0.98 0.98 6545
Category: 9 precision recall f1-score support
0 0.97 1.00 0.98 6336
1 0.88 0.03 0.06 209
avg / total 0.97 0.97 0.95 6545
Category: 10 precision recall f1-score support
0 1.00 1.00 1.00 6545
avg / total 1.00 1.00 1.00 6545
Category: 11 precision recall f1-score support
0 0.95 1.00 0.97 6132
1 0.80 0.19 0.30 413
avg / total 0.94 0.95 0.93 6545
Category: 12 precision recall f1-score support
0 0.93 0.99 0.96 5844
1 0.84 0.36 0.50 701
avg / total 0.92 0.92 0.91 6545
Category: 13 precision recall f1-score support
0 0.93 1.00 0.96 5965
1 0.87 0.24 0.38 580
avg / total 0.93 0.93 0.91 6545
Category: 14 precision recall f1-score support
0 0.99 1.00 0.99 6445
1 0.67 0.02 0.04 100
avg / total 0.98 0.98 0.98 6545
Category: 15 precision recall f1-score support
0 0.98 1.00 0.99 6384
1 1.00 0.01 0.01 161
avg / total 0.98 0.98 0.96 6545
Category: 16 precision recall f1-score support
0 0.99 1.00 0.99 6468
1 1.00 0.04 0.08 77
avg / total 0.99 0.99 0.98 6545
Category: 17 precision recall f1-score support
0 0.97 1.00 0.98 6337
1 0.60 0.01 0.03 208
avg / total 0.96 0.97 0.95 6545
Category: 18 precision recall f1-score support
0 0.96 1.00 0.98 6229
1 0.79 0.07 0.13 316
avg / total 0.95 0.95 0.94 6545
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
MultiOutputClassifier(RandomForestClassifier()).get_params().keys()
parameters = {
'clf__estimator__n_estimators': [100, 200],
'clf__estimator__criterion': ['gini', 'entropy']
}
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
cv.best_params_
cv.score(X_train, y_train)
y_pred = cv.predict(X_test)
accuracy = (y_pred == y_test).mean()
print("Accuracy:", accuracy)
for i in range(0,36):
print("Category: "+str(i+1), classification_report([row[i] for row in y_test], [row[i] for row in y_pred]))
###Output
Category: 1 precision recall f1-score support
0 0.73 0.26 0.39 1540
1 0.80 0.97 0.88 4962
2 0.75 0.14 0.24 43
avg / total 0.79 0.80 0.76 6545
Category: 2 precision recall f1-score support
0 0.89 0.99 0.94 5405
1 0.89 0.44 0.59 1140
avg / total 0.89 0.89 0.88 6545
Category: 3 precision recall f1-score support
0 1.00 1.00 1.00 6521
1 0.00 0.00 0.00 24
avg / total 0.99 1.00 0.99 6545
Category: 4 precision recall f1-score support
0 0.78 0.89 0.83 3857
1 0.80 0.63 0.70 2688
avg / total 0.79 0.78 0.78 6545
Category: 5 precision recall f1-score support
0 0.92 1.00 0.96 6020
1 0.72 0.04 0.08 525
avg / total 0.91 0.92 0.89 6545
Category: 6 precision recall f1-score support
0 0.96 1.00 0.98 6235
1 0.83 0.08 0.14 310
avg / total 0.95 0.96 0.94 6545
Category: 7 precision recall f1-score support
0 0.97 1.00 0.99 6363
1 0.60 0.03 0.06 182
avg / total 0.96 0.97 0.96 6545
Category: 8 precision recall f1-score support
0 0.98 1.00 0.99 6437
1 0.50 0.01 0.02 108
avg / total 0.98 0.98 0.98 6545
Category: 9 precision recall f1-score support
0 0.97 1.00 0.98 6336
1 0.88 0.03 0.06 209
avg / total 0.97 0.97 0.95 6545
Category: 10 precision recall f1-score support
0 1.00 1.00 1.00 6545
avg / total 1.00 1.00 1.00 6545
Category: 11 precision recall f1-score support
0 0.95 1.00 0.97 6132
1 0.91 0.27 0.41 413
avg / total 0.95 0.95 0.94 6545
Category: 12 precision recall f1-score support
0 0.93 0.99 0.96 5844
1 0.89 0.40 0.55 701
avg / total 0.93 0.93 0.92 6545
Category: 13 precision recall f1-score support
0 0.93 1.00 0.96 5965
1 0.90 0.24 0.38 580
avg / total 0.93 0.93 0.91 6545
Category: 14 precision recall f1-score support
0 0.99 1.00 0.99 6445
1 0.50 0.04 0.07 100
avg / total 0.98 0.98 0.98 6545
Category: 15 precision recall f1-score support
0 0.98 1.00 0.99 6384
1 1.00 0.01 0.02 161
avg / total 0.98 0.98 0.96 6545
Category: 16 precision recall f1-score support
0 0.99 1.00 0.99 6468
1 1.00 0.03 0.05 77
avg / total 0.99 0.99 0.98 6545
Category: 17 precision recall f1-score support
0 0.97 1.00 0.98 6337
1 0.75 0.01 0.03 208
avg / total 0.96 0.97 0.95 6545
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
from sklearn.neighbors import KNeighborsClassifier
new_pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
]))
])),
    ('clf', MultiOutputClassifier(KNeighborsClassifier()))  # classifier variant of k-NN for the multi-label targets
])
parameters = {
'clf__estimator__weights': ['uniform', 'distance'],
'clf__estimator__leaf_size': [30, 40]
}
cv_KNN = GridSearchCV(new_pipeline, param_grid=parameters)
cv_KNN.fit(X_train, y_train)
cv_KNN.best_params_
cv_KNN.score(X_train, y_train)
y_pred = cv_KNN.predict(X_test)
accuracy = (y_pred == y_test).mean()
print("Accuracy:", accuracy)
for i in range(0,36):
print("Category: "+str(i+1), classification_report([row[i] for row in y_test], [row[i] for row in y_pred]))
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
filename = 'model.pkl'  # assumed file name
pickle.dump(cv, open(filename, 'wb'))  # save the tuned grid-search pipeline (assumed choice of model)
###Output
_____no_output_____
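###Markdown
To check the round trip, the saved model can be loaded back with `pickle.load` (a sketch, reusing the `filename` defined above):
###Code
# Load the pickled model back and sanity-check it on a few test messages.
with open(filename, 'rb') as f:
    loaded_model = pickle.load(f)
loaded_model.predict(X_test[:5])
###Output
_____no_output_____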
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import TruncatedSVD
import pickle
import nltk
nltk.download(['punkt', 'wordnet'])
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table("messages_disaster", con=engine)
df.head()
X = df["message"]
Y = df.drop(['message', 'genre', 'id', 'original'], axis = 1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
pipeline.get_params()
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y,test_size = 0.2, random_state = 45)
# train classifier
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def perf_report(model, X_test, y_test):
'''
Function to generate classification report on the model
Input: Model, test set ie X_test & y_test
Output: Prints the Classification report
'''
y_pred = model.predict(X_test)
for i, col in enumerate(y_test):
print(col)
print(classification_report(y_test[col], y_pred[:, i]))
perf_report(pipeline, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.60 0.35 0.44 1198
1 0.82 0.93 0.87 4002
2 0.88 0.16 0.27 44
avg / total 0.77 0.79 0.77 5244
request
precision recall f1-score support
0 0.88 0.98 0.93 4335
1 0.83 0.39 0.53 909
avg / total 0.87 0.88 0.86 5244
offer
precision recall f1-score support
0 0.99 1.00 1.00 5214
1 0.00 0.00 0.00 30
avg / total 0.99 0.99 0.99 5244
aid_related
precision recall f1-score support
0 0.72 0.88 0.79 3044
1 0.76 0.53 0.62 2200
avg / total 0.74 0.73 0.72 5244
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 4827
1 0.62 0.06 0.10 417
avg / total 0.90 0.92 0.89 5244
medical_products
precision recall f1-score support
0 0.96 1.00 0.98 4990
1 0.70 0.09 0.16 254
avg / total 0.94 0.95 0.94 5244
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 5086
1 0.73 0.05 0.09 158
avg / total 0.96 0.97 0.96 5244
security
precision recall f1-score support
0 0.98 1.00 0.99 5133
1 0.00 0.00 0.00 111
avg / total 0.96 0.98 0.97 5244
military
precision recall f1-score support
0 0.97 1.00 0.98 5059
1 0.67 0.04 0.08 185
avg / total 0.96 0.97 0.95 5244
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
water
precision recall f1-score support
0 0.95 1.00 0.97 4918
1 0.86 0.21 0.33 326
avg / total 0.94 0.95 0.93 5244
food
precision recall f1-score support
0 0.92 0.99 0.95 4678
1 0.82 0.28 0.42 566
avg / total 0.91 0.92 0.90 5244
shelter
precision recall f1-score support
0 0.93 1.00 0.96 4770
1 0.86 0.19 0.31 474
avg / total 0.92 0.92 0.90 5244
clothing
precision recall f1-score support
0 0.99 1.00 0.99 5178
1 0.64 0.11 0.18 66
avg / total 0.98 0.99 0.98 5244
money
precision recall f1-score support
0 0.97 1.00 0.99 5108
1 1.00 0.02 0.04 136
avg / total 0.98 0.97 0.96 5244
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 5183
1 1.00 0.02 0.03 61
avg / total 0.99 0.99 0.98 5244
refugees
precision recall f1-score support
0 0.96 1.00 0.98 5042
1 0.71 0.02 0.05 202
avg / total 0.95 0.96 0.94 5244
death
precision recall f1-score support
0 0.96 1.00 0.98 4989
1 0.87 0.10 0.18 255
avg / total 0.95 0.96 0.94 5244
other_aid
precision recall f1-score support
0 0.87 0.99 0.93 4546
1 0.40 0.02 0.05 698
avg / total 0.81 0.87 0.81 5244
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 4905
1 0.00 0.00 0.00 339
avg / total 0.87 0.93 0.90 5244
transport
precision recall f1-score support
0 0.95 1.00 0.97 4970
1 0.92 0.04 0.08 274
avg / total 0.95 0.95 0.93 5244
buildings
precision recall f1-score support
0 0.95 1.00 0.98 4989
1 0.71 0.04 0.07 255
avg / total 0.94 0.95 0.93 5244
electricity
precision recall f1-score support
0 0.98 1.00 0.99 5149
1 1.00 0.03 0.06 95
avg / total 0.98 0.98 0.97 5244
tools
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 5244
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 5188
1 0.00 0.00 0.00 56
avg / total 0.98 0.99 0.98 5244
shops
precision recall f1-score support
0 1.00 1.00 1.00 5229
1 0.00 0.00 0.00 15
avg / total 0.99 1.00 1.00 5244
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 5180
1 0.00 0.00 0.00 64
avg / total 0.98 0.99 0.98 5244
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 5020
1 0.33 0.00 0.01 224
avg / total 0.93 0.96 0.94 5244
weather_related
precision recall f1-score support
0 0.84 0.96 0.89 3794
1 0.83 0.50 0.63 1450
avg / total 0.83 0.83 0.82 5244
floods
precision recall f1-score support
0 0.94 1.00 0.97 4785
1 0.89 0.33 0.48 459
avg / total 0.93 0.94 0.92 5244
storm
precision recall f1-score support
0 0.94 0.99 0.96 4774
1 0.75 0.39 0.51 470
avg / total 0.93 0.93 0.92 5244
fire
precision recall f1-score support
0 0.99 1.00 1.00 5195
1 1.00 0.02 0.04 49
avg / total 0.99 0.99 0.99 5244
earthquake
precision recall f1-score support
0 0.96 0.99 0.97 4762
1 0.89 0.54 0.67 482
avg / total 0.95 0.95 0.95 5244
cold
precision recall f1-score support
0 0.98 1.00 0.99 5136
1 0.75 0.06 0.10 108
avg / total 0.98 0.98 0.97 5244
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 4960
1 0.36 0.01 0.03 284
avg / total 0.91 0.95 0.92 5244
direct_report
precision recall f1-score support
0 0.86 0.98 0.91 4224
1 0.78 0.32 0.45 1020
avg / total 0.84 0.85 0.82 5244
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__min_samples_split': [2, 4]}
cv = GridSearchCV(pipeline, param_grid=parameters)
cv
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
perf_report(cv, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.73 0.27 0.40 1198
1 0.81 0.97 0.88 4002
2 0.71 0.23 0.34 44
avg / total 0.79 0.80 0.77 5244
request
precision recall f1-score support
0 0.89 0.99 0.94 4335
1 0.89 0.44 0.59 909
avg / total 0.89 0.89 0.88 5244
offer
precision recall f1-score support
0 0.99 1.00 1.00 5214
1 0.00 0.00 0.00 30
avg / total 0.99 0.99 0.99 5244
aid_related
precision recall f1-score support
0 0.76 0.88 0.81 3044
1 0.79 0.61 0.69 2200
avg / total 0.77 0.77 0.76 5244
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 4827
1 0.72 0.06 0.10 417
avg / total 0.91 0.92 0.89 5244
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 4990
1 0.79 0.06 0.11 254
avg / total 0.95 0.95 0.93 5244
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.98 5086
1 0.67 0.03 0.05 158
avg / total 0.96 0.97 0.96 5244
security
precision recall f1-score support
0 0.98 1.00 0.99 5133
1 1.00 0.01 0.02 111
avg / total 0.98 0.98 0.97 5244
military
precision recall f1-score support
0 0.97 1.00 0.98 5059
1 0.91 0.05 0.10 185
avg / total 0.96 0.97 0.95 5244
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
water
precision recall f1-score support
0 0.95 1.00 0.97 4918
1 0.91 0.19 0.31 326
avg / total 0.95 0.95 0.93 5244
food
precision recall f1-score support
0 0.93 0.99 0.96 4678
1 0.86 0.40 0.54 566
avg / total 0.92 0.93 0.92 5244
shelter
precision recall f1-score support
0 0.93 1.00 0.96 4770
1 0.87 0.22 0.35 474
avg / total 0.92 0.93 0.91 5244
clothing
precision recall f1-score support
0 0.99 1.00 0.99 5178
1 1.00 0.03 0.06 66
avg / total 0.99 0.99 0.98 5244
money
precision recall f1-score support
0 0.97 1.00 0.99 5108
1 0.83 0.04 0.07 136
avg / total 0.97 0.97 0.96 5244
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 5183
1 1.00 0.02 0.03 61
avg / total 0.99 0.99 0.98 5244
refugees
precision recall f1-score support
0 0.96 1.00 0.98 5042
1 1.00 0.01 0.02 202
avg / total 0.96 0.96 0.94 5244
death
precision recall f1-score support
0 0.96 1.00 0.98 4989
1 0.88 0.09 0.16 255
avg / total 0.95 0.95 0.94 5244
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 4546
1 0.59 0.01 0.03 698
avg / total 0.83 0.87 0.81 5244
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 4905
1 0.00 0.00 0.00 339
avg / total 0.87 0.94 0.90 5244
transport
precision recall f1-score support
0 0.95 1.00 0.97 4970
1 0.71 0.04 0.08 274
avg / total 0.94 0.95 0.93 5244
buildings
precision recall f1-score support
0 0.95 1.00 0.98 4989
1 0.86 0.05 0.09 255
avg / total 0.95 0.95 0.93 5244
electricity
precision recall f1-score support
0 0.98 1.00 0.99 5149
1 0.75 0.03 0.06 95
avg / total 0.98 0.98 0.97 5244
tools
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 5244
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 5188
1 0.00 0.00 0.00 56
avg / total 0.98 0.99 0.98 5244
shops
precision recall f1-score support
0 1.00 1.00 1.00 5229
1 0.00 0.00 0.00 15
avg / total 0.99 1.00 1.00 5244
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 5180
1 0.00 0.00 0.00 64
avg / total 0.98 0.99 0.98 5244
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 5020
1 0.00 0.00 0.00 224
avg / total 0.92 0.96 0.94 5244
weather_related
precision recall f1-score support
0 0.87 0.96 0.91 3794
1 0.86 0.63 0.73 1450
avg / total 0.87 0.87 0.86 5244
floods
precision recall f1-score support
0 0.94 1.00 0.97 4785
1 0.91 0.37 0.53 459
avg / total 0.94 0.94 0.93 5244
storm
precision recall f1-score support
0 0.94 0.99 0.97 4774
1 0.80 0.40 0.53 470
avg / total 0.93 0.94 0.93 5244
fire
precision recall f1-score support
0 0.99 1.00 1.00 5195
1 0.00 0.00 0.00 49
avg / total 0.98 0.99 0.99 5244
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 4762
1 0.91 0.74 0.82 482
avg / total 0.97 0.97 0.97 5244
cold
precision recall f1-score support
0 0.98 1.00 0.99 5136
1 1.00 0.05 0.09 108
avg / total 0.98 0.98 0.97 5244
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 4960
1 0.40 0.01 0.03 284
avg / total 0.92 0.95 0.92 5244
direct_report
precision recall f1-score support
0 0.86 0.98 0.92 4224
1 0.84 0.35 0.50 1020
avg / total 0.86 0.86 0.84 5244
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
#Improve the pipeline
pipeline2 = Pipeline([
('vect', CountVectorizer()),
('best', TruncatedSVD()),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline2.get_params()
#Train & predict
pipeline2.fit(X_train, y_train)
perf_report(pipeline2, X_test, y_test)
#Param tunning
parameters2 = { #'vect__ngram_range': ((1, 1), (1, 2)),
#'vect__max_df': (0.5, 1.0),
#'vect__max_features': (None, 5000),
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__learning_rate': [1,2] }
cv2 = GridSearchCV(pipeline2, param_grid=parameters2)
cv2
cv2.fit(X_train, y_train)
perf_report(cv2, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.45 0.01 0.02 1198
1 0.77 1.00 0.87 4002
2 0.50 0.05 0.08 44
avg / total 0.69 0.76 0.67 5244
request
precision recall f1-score support
0 0.83 1.00 0.91 4335
1 0.00 0.00 0.00 909
avg / total 0.68 0.83 0.75 5244
offer
precision recall f1-score support
0 0.99 1.00 1.00 5214
1 0.00 0.00 0.00 30
avg / total 0.99 0.99 0.99 5244
aid_related
precision recall f1-score support
0 0.58 0.98 0.73 3044
1 0.41 0.02 0.04 2200
avg / total 0.51 0.58 0.44 5244
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 4827
1 0.00 0.00 0.00 417
avg / total 0.85 0.92 0.88 5244
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 4990
1 0.00 0.00 0.00 254
avg / total 0.91 0.95 0.93 5244
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.98 5086
1 0.00 0.00 0.00 158
avg / total 0.94 0.97 0.95 5244
security
precision recall f1-score support
0 0.98 1.00 0.99 5133
1 0.00 0.00 0.00 111
avg / total 0.96 0.98 0.97 5244
military
precision recall f1-score support
0 0.96 1.00 0.98 5059
1 0.00 0.00 0.00 185
avg / total 0.93 0.96 0.95 5244
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
water
precision recall f1-score support
0 0.94 1.00 0.97 4918
1 0.00 0.00 0.00 326
avg / total 0.88 0.94 0.91 5244
food
precision recall f1-score support
0 0.89 1.00 0.94 4678
1 0.00 0.00 0.00 566
avg / total 0.80 0.89 0.84 5244
shelter
precision recall f1-score support
0 0.91 1.00 0.95 4770
1 0.00 0.00 0.00 474
avg / total 0.83 0.91 0.87 5244
clothing
precision recall f1-score support
0 0.99 1.00 0.99 5178
1 0.00 0.00 0.00 66
avg / total 0.97 0.99 0.98 5244
money
precision recall f1-score support
0 0.97 1.00 0.99 5108
1 0.00 0.00 0.00 136
avg / total 0.95 0.97 0.96 5244
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 5183
1 0.00 0.00 0.00 61
avg / total 0.98 0.99 0.98 5244
refugees
precision recall f1-score support
0 0.96 1.00 0.98 5042
1 0.00 0.00 0.00 202
avg / total 0.92 0.96 0.94 5244
death
precision recall f1-score support
0 0.95 1.00 0.98 4989
1 0.00 0.00 0.00 255
avg / total 0.91 0.95 0.93 5244
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 4546
1 0.00 0.00 0.00 698
avg / total 0.75 0.87 0.81 5244
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 4905
1 0.00 0.00 0.00 339
avg / total 0.87 0.94 0.90 5244
transport
precision recall f1-score support
0 0.95 1.00 0.97 4970
1 0.00 0.00 0.00 274
avg / total 0.90 0.95 0.92 5244
buildings
precision recall f1-score support
0 0.95 1.00 0.98 4989
1 0.00 0.00 0.00 255
avg / total 0.91 0.95 0.93 5244
electricity
precision recall f1-score support
0 0.98 1.00 0.99 5149
1 0.00 0.00 0.00 95
avg / total 0.96 0.98 0.97 5244
tools
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 5244
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 5188
1 0.00 0.00 0.00 56
avg / total 0.98 0.99 0.98 5244
shops
precision recall f1-score support
0 1.00 1.00 1.00 5229
1 0.00 0.00 0.00 15
avg / total 0.99 1.00 1.00 5244
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 5180
1 0.00 0.00 0.00 64
avg / total 0.98 0.99 0.98 5244
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 5020
1 0.00 0.00 0.00 224
avg / total 0.92 0.96 0.94 5244
weather_related
precision recall f1-score support
0 0.72 1.00 0.84 3794
1 0.50 0.01 0.01 1450
avg / total 0.66 0.72 0.61 5244
floods
precision recall f1-score support
0 0.91 1.00 0.95 4785
1 0.00 0.00 0.00 459
avg / total 0.83 0.91 0.87 5244
storm
precision recall f1-score support
0 0.91 1.00 0.95 4774
1 0.00 0.00 0.00 470
avg / total 0.83 0.91 0.87 5244
fire
precision recall f1-score support
0 0.99 1.00 1.00 5195
1 0.00 0.00 0.00 49
avg / total 0.98 0.99 0.99 5244
earthquake
precision recall f1-score support
0 0.91 1.00 0.95 4762
1 0.32 0.01 0.02 482
avg / total 0.85 0.91 0.87 5244
cold
precision recall f1-score support
0 0.98 1.00 0.99 5136
1 0.00 0.00 0.00 108
avg / total 0.96 0.98 0.97 5244
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 4960
1 0.00 0.00 0.00 284
avg / total 0.89 0.95 0.92 5244
direct_report
precision recall f1-score support
0 0.81 1.00 0.89 4224
1 0.00 0.00 0.00 1020
avg / total 0.65 0.81 0.72 5244
###Markdown
9. Export your model as a pickle file
###Code
with open('model.pkl', 'wb') as f:
pickle.dump(cv2, f)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Predict categories from test set using tuned pipeline
start_datetime = datetime.datetime.now().replace(microsecond=0)
Y_pred_tuned = cv.predict(X_test)
print("--- Predicting time: %s ---" % (datetime.datetime.now().replace(microsecond=0) - start_datetime))
# Print accuracy of tuned pipeline for each of individual category
accuracy_tuned = (Y_pred_tuned == Y_test).mean()
accuracy_tuned
# Print overall accuracy of tuned pipeline
overall_accuracy_tuned = (Y_pred_tuned == Y_test).mean().mean()
print('Overall accuracy of tuned pipeline is: {}%'.format(round(overall_accuracy_tuned*100, 2)))
# Print overall f_score of tuned pipeline
multi_f_gmean_tuned = multi_label_fscore(Y_test,Y_pred_tuned, beta = 1)
print('Overall F_beta_score of tuned pipeline is: {0:.2f}%'.format(multi_f_gmean_tuned*100))
# Report the tuned pipeline f1 score, precision and recall for each output category of the dataset
# by iterating through the columns and calling sklearn's classification_report on each column
for column in Y_test.columns:
print('------------------------------------------------------\n')
print('CATEGORY: {}\n'.format(column))
print(classification_report(Y_test[column],pd.DataFrame(Y_pred_tuned, columns=Y_test.columns)[column]))
# Create dict for classification report containg metrics for each of the label for tuned pipeline
clf_report_dict_tuned = {}
for column in Y_test.columns:
clf_report_dict_tuned[column] = classification_report(Y_test[column],\
pd.DataFrame(Y_pred_tuned, columns=Y_test.columns)[column],\
output_dict=True)
clf_report_dict_tuned
# Calculate weighted avg metric for tuned pipeline and concatenate accuracy calculated above for each
# of the label to form a new dataframe
final_metric_tuned = pd.concat([weighted_avg_metric(clf_report_dict_tuned), accuracy_tuned], axis=1)
final_metric_tuned.rename(columns={0:'accuracy'}, inplace=True)
final_metric_tuned
# Print overall weighted avg accuracy for tuned pipeline
gmean(final_metric_tuned['accuracy'])
# Print overall weighted avg f1_score for tuned pipeline
gmean(final_metric_tuned['f1_score'])
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF

So, from above, we can observe that after tuning the basic pipeline, the tuned pipeline was able to perform slightly better in terms of both accuracy and f1-score.
* **Accuracy of the basic pipeline was 94.29%, whereas accuracy of the tuned pipeline is 94.88%.**
* **Also, f1-score of the basic pipeline was 92.81%, whereas f1-score of the tuned pipeline is 93.73%.**

Below are the changes introduced while tuning the basic pipeline in order to obtain the tuned pipeline.
* **Added another feature 'starting_verb' by using the custom estimator 'StartingVerbExtractor' and performed a Feature Union along with TF-IDF.**
* **Tuned various parameters of the estimators (transformers & classifier) in order to obtain the best estimator.**
* **The best estimator parameters are:**
{'clf__estimator__min_samples_split': 4, 'clf__estimator__n_estimators': 200, 'features__text_pipeline__tfidf__use_idf': True, 'features__text_pipeline__vect__max_df': 0.75, 'features__text_pipeline__vect__max_features': 5000, 'features__text_pipeline__vect__ngram_range': (1, 2), 'features__transformer_weights': {'text_pipeline': 1, 'starting_verb': 0.5}}

**We can try further improving the performance metrics (accuracy & f1-score) by using other classifiers like AdaBoost or SVM. But since this project focuses on code quality, process, and pipelines, and there is no minimum performance metric needed to pass, we will proceed by taking the tuned pipeline forward.**

9. Export your model as a pickle file
9.1 Dump model as pickle file
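###Markdown
For reference, a custom transformer like the 'StartingVerbExtractor' mentioned above typically subclasses `BaseEstimator` and `TransformerMixin`. The snippet below is only an assumed minimal sketch of such a feature extractor, not the exact class defined earlier in this notebook.
###Code
# Assumed sketch of a starting-verb feature extractor (illustrative only).
import pandas as pd
import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from sklearn.base import BaseEstimator, TransformerMixin

nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

class StartingVerbSketch(BaseEstimator, TransformerMixin):
    """Flags messages whose first sentence starts with a verb (or 'RT')."""

    def starting_verb(self, text):
        for sentence in sent_tokenize(text):
            pos_tags = nltk.pos_tag(word_tokenize(sentence))
            if not pos_tags:
                continue
            first_word, first_tag = pos_tags[0]
            if first_tag in ('VB', 'VBP') or first_word == 'RT':
                return True
        return False

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # One boolean column per message, usable inside a FeatureUnion
        return pd.DataFrame(pd.Series(X).apply(self.starting_verb))
###Output
_____no_output_____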
###Code
# Pickle file and save the model to disk
#filename = './models/DisasterResponseModel.p'
#outfile = open(filename,'wb')
#pickle.dump(cv, outfile)
#outfile.close()
###Output
_____no_output_____
###Markdown
In the cell above, the model has been pickled and saved to disk. In the cell below, the model is compressed with the bz2 module after pickling and then saved to disk in order to save storage space.
9.2 Save model as compressed pickle file
###Code
# pickle file(but compressed version) and save the model to disk.
# To save the pickle file(uncompressed version), see above cell
filename = './models/comp_DisasterResponseModel.p.bz2'
outfile = bz2.BZ2File(filename, 'wb')
pickle.dump(cv, outfile)
outfile.close()
###Output
_____no_output_____
###Markdown
9.3 Load model from compressed pickle file
###Code
# Decompress pickle file and load the model from disk
filename = './models/comp_DisasterResponseModel.p.bz2'
infile = bz2.BZ2File(filename, 'rb')
cv_from_compress_pickle = pickle.load(infile)
infile.close()
# Print model after loading compressed pickle file
cv_from_compress_pickle
# Print best estimator
cv_from_compress_pickle.best_estimator_
# Print best params
cv_from_compress_pickle.best_params_
# Print scorer function of model
cv_from_compress_pickle.scorer_
# Print number of splits
cv_from_compress_pickle.n_splits_
###Output
_____no_output_____
###Markdown
9.4 Test model loaded from compressed pickle file
###Code
# Predict categories from test set using model loaded from compressed pickle file
start_datetime = datetime.datetime.now().replace(microsecond=0)
Y_pred_cv_pickle = cv_from_compress_pickle.predict(X_test)
print("--- Predicting time: %s ---" % (datetime.datetime.now().replace(microsecond=0) - start_datetime))
# Print overall accuracy of model loaded from compressed pickle file
overall_accuracy_cv_pickle = (Y_pred_cv_pickle == Y_test).mean().mean()
print('Overall accuracy of model loaded from pickle file is: {}%'.format(round(overall_accuracy_cv_pickle*100, 2)))
# Print overall f_score of model loaded from pickle file
multi_f_gmean_cv_pickle = multi_label_fscore(Y_test,Y_pred_cv_pickle, beta = 1)
print('Overall F_beta_score of model loaded from pickle file is: {0:.2f}%'.format(multi_f_gmean_cv_pickle*100))
###Output
Overall F_beta_score of model loaded from pickle file is: 93.75%
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import nltk
import re
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('words')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('wordnet')
import pandas as pd
import numpy as np
import string
from nltk import pos_tag, ne_chunk
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
import pickle
# load data from database
engine = create_engine('sqlite:///messages.db')
df = pd.read_sql_table('clean_messages', engine)
df = df[df['related']!=2]
X = df['message']
y = df.drop(['id','message','original','genre'],axis=1)
X.shape, y.shape
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
#remove punctuation characters
text = text.translate(str.maketrans('', '', string.punctuation))
#lemmatize, convert to lowercase, remove leading/trailing white space
lemmatizer = WordNetLemmatizer()
text = lemmatizer.lemmatize(text).lower().strip()
#tokenize
words = word_tokenize(text)
#stop words removal
words = [w for w in words if w not in stopwords.words("english")]
clean_tokens = []
for tok in words:
clean_tokens.append(tok)
return clean_tokens
print(tokenize(X[0]))
print(tokenize(X[1]))
print(tokenize(X[26203]))
###Output
['weather', 'update', 'cold', 'front', 'cuba', 'could', 'pass', 'haiti']
['hurricane']
['bangkok', '24', 'january', '2012', 'nnt', 'prime', 'minister', 'yingluck', 'shinawatra', 'attended', 'meeting', 'permanent', 'secretaries', 'various', 'ministries', 'urging', 'quickly', 'distribute', 'flood', 'compensations', 'wisely', 'utilize', 'budgets']
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier())),
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
#train
pipeline.fit(X_train,y_train)
#predict on test data
y_pred = pipeline.predict(X_test)
print(y_pred)
y_pred_columns = y_test.columns
y_pred = pd.DataFrame(y_pred, columns = y_pred_columns)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
target_names = y_test.columns
print(classification_report(y_test, y_pred, target_names=target_names))
###Output
precision recall f1-score support
related 0.84 0.91 0.87 4967
request 0.81 0.45 0.58 1136
offer 0.00 0.00 0.00 37
aid_related 0.73 0.60 0.66 2661
medical_help 0.47 0.10 0.16 509
medical_products 1.00 0.07 0.13 318
search_and_rescue 0.47 0.04 0.08 168
security 0.00 0.00 0.00 103
military 0.67 0.09 0.16 217
child_alone 0.00 0.00 0.00 0
water 0.82 0.27 0.41 402
food 0.86 0.46 0.60 736
shelter 0.81 0.35 0.49 572
clothing 0.71 0.05 0.10 97
money 1.00 0.04 0.07 142
missing_people 0.00 0.00 0.00 68
refugees 0.57 0.14 0.22 210
death 0.74 0.16 0.26 290
other_aid 0.54 0.06 0.11 834
infrastructure_related 0.00 0.00 0.00 414
transport 0.75 0.08 0.15 298
buildings 0.61 0.14 0.23 322
electricity 0.73 0.07 0.12 120
tools 0.00 0.00 0.00 32
hospitals 0.00 0.00 0.00 77
shops 0.00 0.00 0.00 31
aid_centers 0.00 0.00 0.00 64
other_infrastructure 0.11 0.00 0.01 285
weather_related 0.83 0.61 0.71 1772
floods 0.90 0.36 0.52 524
storm 0.77 0.39 0.52 582
fire 1.00 0.01 0.03 67
earthquake 0.87 0.72 0.79 585
cold 0.64 0.07 0.13 121
other_weather 0.57 0.04 0.07 355
direct_report 0.76 0.35 0.48 1271
avg / total 0.74 0.49 0.55 20387
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
'clf__estimator__bootstrap': (True, False)
}
cv = GridSearchCV(pipeline, parameters)
cv
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv = GridSearchCV(pipeline, param_grid = parameters,verbose = 2)
np.random.seed(42)
cv.fit(X_train, y_train)
prediction2 = cv.predict(X_test)
print(classification_report(y_test, prediction2 , target_names = target_names))
###Output
precision recall f1-score support
related 0.86 0.87 0.87 4967
request 0.78 0.49 0.60 1136
offer 0.00 0.00 0.00 37
aid_related 0.73 0.58 0.65 2661
medical_help 0.59 0.11 0.19 509
medical_products 0.61 0.11 0.19 318
search_and_rescue 0.52 0.07 0.13 168
security 0.12 0.01 0.02 103
military 0.51 0.08 0.14 217
child_alone 0.00 0.00 0.00 0
water 0.83 0.37 0.51 402
food 0.85 0.50 0.63 736
shelter 0.76 0.30 0.43 572
clothing 0.73 0.16 0.27 97
money 0.78 0.05 0.09 142
missing_people 1.00 0.04 0.08 68
refugees 0.35 0.05 0.09 210
death 0.67 0.17 0.27 290
other_aid 0.46 0.08 0.13 834
infrastructure_related 0.00 0.00 0.00 414
transport 0.55 0.06 0.10 298
buildings 0.56 0.11 0.19 322
electricity 0.67 0.07 0.12 120
tools 0.00 0.00 0.00 32
hospitals 0.00 0.00 0.00 77
shops 0.00 0.00 0.00 31
aid_centers 0.00 0.00 0.00 64
other_infrastructure 0.06 0.00 0.01 285
weather_related 0.80 0.54 0.65 1772
floods 0.86 0.27 0.41 524
storm 0.76 0.41 0.54 582
fire 0.64 0.10 0.18 67
earthquake 0.84 0.68 0.75 585
cold 0.83 0.08 0.15 121
other_weather 0.45 0.10 0.17 355
direct_report 0.70 0.35 0.47 1271
avg / total 0.71 0.48 0.54 20387
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
9. Export your model as a pickle file
10. Use this notebook to complete `train.py`
Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
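###Markdown
Step 8 is not exercised separately below; the script keeps the counts-plus-TF-IDF random forest. As a sketch of "add other features besides the TF-IDF", a simple message-length feature could be joined in with a `FeatureUnion`. The names here (`MessageLengthExtractor`, `build_model_with_extra_feature`) are illustrative assumptions, not part of the original script.
###Code
# Sketch: combine the TF-IDF text pipeline with a hand-crafted
# message-length feature via FeatureUnion.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline, FeatureUnion

class MessageLengthExtractor(BaseEstimator, TransformerMixin):
    """Returns the character length of each message as a single numeric feature."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([len(text) for text in X]).reshape(-1, 1)

def build_model_with_extra_feature(tokenize):
    # e.g. model = build_model_with_extra_feature(tokenize)
    return Pipeline([
        ('features', FeatureUnion([
            ('text_pipeline', Pipeline([
                ('vect', CountVectorizer(tokenizer=tokenize)),
                ('tfidf', TfidfTransformer()),
            ])),
            ('msg_length', MessageLengthExtractor()),
        ])),
        ('clf', MultiOutputClassifier(RandomForestClassifier())),
    ])
###Output
_____no_output_____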
###Code
import sys
import nltk
import re
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('words')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('wordnet')
import pandas as pd
import numpy as np
import string
from nltk import pos_tag, ne_chunk
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
import pickle
def load_data(database_filepath):
engine = create_engine('sqlite:///' + database_filepath)
df = pd.read_sql_table('DisasterResponse', engine)
X = df['message']
y = df.drop(['id','message','original','genre'],axis=1)
category_names = y.columns
return X, y, category_names
def tokenize(text):
    #remove punctuation characters
    text = text.translate(str.maketrans('', '', string.punctuation))
    #convert to lowercase and remove leading/trailing white space
    text = text.lower().strip()
    #tokenize
    words = word_tokenize(text)
    #stop words removal
    words = [w for w in words if w not in stopwords.words("english")]
    #lemmatize each token (lemmatizing the whole string at once has no effect)
    lemmatizer = WordNetLemmatizer()
    words = [lemmatizer.lemmatize(w) for w in words]
    #stemming
    stemmer = PorterStemmer()
    clean_tokens = [stemmer.stem(w) for w in words]
    return clean_tokens
def build_model():
pipeline = Pipeline([('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier())),
])
return pipeline
def evaluate_model(model, X_test, Y_test, category_names):
#predict on test data
y_pred = model.predict(X_test)
print(classification_report(Y_test, y_pred, target_names=category_names))
def save_model(model, model_filepath):
pickle.dump(model,open(model_filepath,'wb'))
def main():
if len(sys.argv) == 3:
database_filepath, model_filepath = sys.argv[1:]
print('Loading data...\n DATABASE: {}'.format(database_filepath))
X, Y, category_names = load_data(database_filepath)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
print('Building model...')
model = build_model()
print('Training model...')
model.fit(X_train, Y_train)
print('Evaluating model...')
evaluate_model(model, X_test, Y_test, category_names)
print('Saving model...\n MODEL: {}'.format(model_filepath))
save_model(model, model_filepath)
print('Trained model saved!')
else:
print('Please provide the filepath of the disaster messages database '\
'as the first argument and the filepath of the pickle file to '\
'save the model to as the second argument. \n\nExample: python '\
'train_classifier.py ../data/DisasterResponse.db classifier.pkl')
if __name__ == '__main__':
main()
###Output
_____no_output_____
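###Markdown
Saved as `train_classifier.py`, the script above would be run as `python train_classifier.py ../data/DisasterResponse.db classifier.pkl`, matching the usage message it prints when the arguments are missing.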
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import re
import joblib
import nltk
import numpy as np
import pandas as pd
from nltk.corpus import stopwords, wordnet
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report, f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sqlalchemy import create_engine
nltk.download(['punkt', 'wordnet', 'stopwords', 'words'], quiet=True)
# load data from database
engine = create_engine('sqlite:///data/DisasterResponse.db')
df = pd.read_sql_table('DisasterResponse', engine)
X = df.message.values
Y = df.drop(['message', 'original', 'genre'], axis=1)
labels = Y.columns
Y = Y.values
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# Normalize text
text = re.sub(r"[^a-zA-Z0-9]", " ", text)
# Tokenize text
tokens = nltk.word_tokenize(text)
# Remove stop words
tokens = [w for w in tokens if w not in stopwords.words('english')]
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
def display_results(y_pred, y_test):
# display results
cr = {}
model_avg_f1 = np.empty(len(labels))
for i, label in enumerate(labels):
cr[label] = classification_report(
y_test[:, i], y_pred[:, i], labels=df[label].unique(), zero_division=0)
        score = f1_score(y_test[:, i], y_pred[:, i], labels=df[label].unique(),
                         average='weighted', zero_division=0)
model_avg_f1[i] = score
model_avg_f1 = np.mean(model_avg_f1)
print(f'The model weighted f1 score is {model_avg_f1}')
return cr
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
def create_pipeline(classifier, nr_jobs=1):
pipeline = Pipeline([
('tfidf', TfidfVectorizer(tokenizer=tokenize)),
('clf', MultiOutputClassifier(classifier, n_jobs=nr_jobs))
])
return pipeline
pipeline = create_pipeline(KNeighborsClassifier(n_jobs=-1), -1)
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
# train classifier
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# predict on test data
y_pred = pipeline.predict(X_test)
print('Prediction done.')
pipeline_cr = display_results(y_pred, y_test)
pipeline_cr.keys()
print(pipeline_cr['medical_help'])
###Output
precision recall f1-score support
0 0.92 1.00 0.96 6058
1 0.00 0.00 0.00 496
accuracy 0.92 6554
macro avg 0.46 0.50 0.48 6554
weighted avg 0.85 0.92 0.89 6554
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'clf__estimator__leaf_size': [1, 5],
'clf__estimator__n_neighbors': [6, 10, 15]
}
cv = GridSearchCV(create_pipeline(KNeighborsClassifier()),
param_grid=parameters, cv=2, verbose=5, n_jobs=-1)
cv.fit(X_train, y_train)
###Output
Fitting 2 folds for each of 6 candidates, totalling 12 fits
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# predict on test data
y_pred = cv.predict(X_test)
cv_cr = display_results(y_pred, y_test)
cv.cv_results_
cv.best_params_
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
model = create_pipeline(RandomForestClassifier(n_jobs=-1), -1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
model_cr = display_results(y_pred, y_test)
model.get_params()
X_train, X_test, y_train, y_test = train_test_split(X, Y)
parameters = {
    'tfidf__norm': [None, 'l2'],
'tfidf__use_idf': [False, True]
}
cv = GridSearchCV(create_pipeline(RandomForestClassifier()),
param_grid=parameters, cv=2, verbose=5, n_jobs=-1)
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
model_cr = display_results(y_pred, y_test)
cv.best_params_
svm_model = create_pipeline(OneVsRestClassifier(LinearSVC(), n_jobs=-1), -1)
svm_model.fit(X_train, y_train)
y_pred = svm_model.predict(X_test)
model_cr = display_results(y_pred, y_test)
svm_model.get_params()
parameters = {
'tfidf__norm': ['l1', 'l2'],
'tfidf__smooth_idf': [True, False],
'tfidf__use_idf': [True, False]
}
cv = GridSearchCV(create_pipeline(OneVsRestClassifier(LinearSVC(), n_jobs=-1), -1),
param_grid=parameters, cv=2, verbose=5, n_jobs=-1)
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
model_cr = display_results(y_pred, y_test)
cv.best_params_
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
joblib.dump(cv, 'models/model.pkl')
###Output
_____no_output_____
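###Markdown
As a quick sanity check, the exported model can be loaded back and applied to a raw message. A minimal sketch, assuming the dump above succeeded and `models/model.pkl` is readable from the working directory (the example message is made up):
###Code
# Reload the serialized grid-search object and classify a single message
loaded_model = joblib.load('models/model.pkl')
sample_message = ["We need food and water after the storm"]
sample_prediction = loaded_model.predict(sample_message)
print(dict(zip(labels, sample_prediction[0])))
###Output
_____no_output_____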
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from sqlalchemy import create_engine
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn.svm import SVC
import pickle
# load data from database
engine = create_engine('sqlite:///messages.db')
df = pd.read_sql("SELECT * FROM Messages_transformed", engine)
X = df.message.values
y = df.drop(columns=["id","message","original","genre","related"])
df.head()
X.shape
y.shape
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([('cvect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier())
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
labels = np.unique(y_pred)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Accuracy:", accuracy)
#print(type(y_pred))
#print(type(y_test))
report = classification_report(y_test, y_pred, target_names = y.columns.values)
print(report)
###Output
precision recall f1-score support
request 0.81 0.40 0.54 1097
offer 0.00 0.00 0.00 36
aid_related 0.78 0.44 0.56 2677
medical_help 0.50 0.02 0.03 529
medical_products 0.62 0.02 0.05 332
search_and_rescue 0.67 0.01 0.02 183
security 0.00 0.00 0.00 117
military 0.67 0.01 0.02 216
child_alone 0.00 0.00 0.00 0
water 0.87 0.16 0.27 414
food 0.87 0.31 0.46 706
shelter 0.88 0.09 0.16 578
clothing 1.00 0.04 0.08 102
money 0.33 0.01 0.01 151
missing_people 0.00 0.00 0.00 73
refugees 0.50 0.00 0.01 216
death 0.93 0.09 0.16 289
other_aid 0.56 0.02 0.04 857
infrastructure_related 0.00 0.00 0.00 434
transport 0.50 0.00 0.01 313
buildings 0.88 0.02 0.04 332
electricity 1.00 0.01 0.01 135
tools 0.00 0.00 0.00 35
hospitals 0.00 0.00 0.00 82
shops 0.00 0.00 0.00 27
aid_centers 0.00 0.00 0.00 74
other_infrastructure 0.00 0.00 0.00 284
weather_related 0.84 0.41 0.55 1836
floods 0.87 0.16 0.27 528
storm 0.78 0.15 0.25 604
fire 0.00 0.00 0.00 84
earthquake 0.87 0.47 0.61 605
cold 1.00 0.01 0.01 136
other_weather 0.00 0.00 0.00 349
direct_report 0.76 0.31 0.44 1270
avg / total 0.69 0.23 0.32 15701
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
sorted(pipeline.get_params().keys())
parameters = {'clf__max_depth': [10, 20, None],
'clf__min_samples_leaf': [1, 2, 4],
'clf__min_samples_split': [2, 5, 10],
'clf__n_estimators': [10, 20, 40]}
#pipeline.fit(X_train, y_train)
cv = GridSearchCV(pipeline, param_grid=parameters, scoring='f1_micro', verbose=1, n_jobs=-1)
cv.fit(X_train, y_train)
###Output
Fitting 3 folds for each of 81 candidates, totalling 243 fits
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_pred = cv.predict(X_test)
labels = np.unique(y_pred)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Accuracy:", accuracy)
report = classification_report(y_test, y_pred)
###Output
Labels: [ 0. 1.]
Accuracy: request-0 0.895941
offer-0 0.995270
aid_related-0 0.750076
medical_help-0 0.924016
medical_products-0 0.954684
search_and_rescue-0 0.975130
security-0 0.980623
military-0 0.968264
child_alone-0 1.000000
water-0 0.945987
food-0 0.905096
shelter-0 0.921269
clothing-0 0.984437
money-0 0.976808
missing_people-0 0.988862
refugees-0 0.967806
death-0 0.959262
other_aid-0 0.872597
infrastructure_related-0 0.939121
transport-0 0.955142
buildings-0 0.951785
electricity-0 0.981843
tools-0 0.995117
hospitals-0 0.989319
shops-0 0.996338
aid_centers-0 0.988862
other_infrastructure-0 0.959414
weather_related-0 0.848947
floods-0 0.936375
storm-0 0.923558
fire-0 0.988862
earthquake-0 0.948428
cold-0 0.982148
other_weather-0 0.946140
direct_report-0 0.861153
dtype: float64
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
cv.best_params_
###Output
_____no_output_____
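###Markdown
One way to act on these ideas is to swap a different estimator in behind the same vectorizer/TF-IDF steps and compare the reports. A minimal sketch, assuming the imports, `tokenize` function and train/test split from the cells above (KNeighborsClassifier is only one of several algorithms worth trying):
###Code
# Same text-processing steps, different classifier
knn_pipeline = Pipeline([('cvect', CountVectorizer(tokenizer=tokenize)),
                         ('tfidf', TfidfTransformer()),
                         ('clf', MultiOutputClassifier(KNeighborsClassifier()))
                        ])
knn_pipeline.fit(X_train, y_train)
y_pred_knn = knn_pipeline.predict(X_test)
print(classification_report(y_test, y_pred_knn, target_names=y.columns.values))
###Output
_____no_output_____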
###Markdown
9. Export your model as a pickle file
###Code
# serialize the tuned model itself (pickling the string 'clf' would not save anything useful); the filename is arbitrary
with open('classifier.pkl', 'wb') as f:
    pickle.dump(cv, f)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import necessary libraries
import pandas as pd
import numpy as np
import os
import pickle
import nltk
import re
from sqlalchemy import create_engine
import sqlite3
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier,AdaBoostClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score, f1_score, fbeta_score, classification_report
from sklearn.metrics import precision_recall_fscore_support
from scipy.stats import hmean
from scipy.stats.mstats import gmean
from nltk.corpus import stopwords
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
import matplotlib.pyplot as plt
%matplotlib inline
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql("SELECT * FROM InsertTableName", engine)
df.head()
# View types of unque 'genre' attribute
genre_types = df.genre.value_counts()
genre_types
# check for attributes with missing values/elements
df.isnull().mean().head()
# preview the rows that remain after dropping missing values (the result is not assigned back, so df itself is unchanged)
df.dropna()
df.head()
# load data from database with 'X' as attributes for message column
X = df["message"]
# load data from database with 'Y' attributes for the last 36 columns
Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
# Preprocess text by removing unwanted properties
def tokenize(text):
'''
input:
text: input text data containing attributes
output:
clean_tokens: cleaned text without unwanted texts
'''
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# take out all punctuation while tokenizing
tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(text)
# lemmatize as shown in the lesson
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
# Visualize model parameters
pipeline.get_params()
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# use sklearn split function to split dataset into train and 20% test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2)
# Train pipeline using RandomForest Classifier algorithm
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's classification_report on each.
###Code
# Output result metrics of trained RandomForest Classifier algorithm
def evaluate_model(model, X_test, y_test):
'''
Input:
model: RandomForest Classifier trained model
X_test: Test training features
Y_test: Test training response variable
Output:
None:
Display model precision, recall, f1-score, support
'''
y_pred = model.predict(X_test)
for item, col in enumerate(y_test):
print(col)
print(classification_report(y_test[col], y_pred[:, item]))
# classification_report to display model precision, recall, f1-score, support
evaluate_model(pipeline, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.65 0.38 0.48 1193
1 0.83 0.94 0.88 4016
2 0.50 0.43 0.46 35
avg / total 0.79 0.81 0.79 5244
request
precision recall f1-score support
0 0.89 0.98 0.93 4361
1 0.82 0.39 0.53 883
avg / total 0.88 0.88 0.87 5244
offer
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 5244
aid_related
precision recall f1-score support
0 0.72 0.88 0.79 3049
1 0.75 0.53 0.62 2195
avg / total 0.74 0.73 0.72 5244
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 4805
1 0.71 0.08 0.14 439
avg / total 0.90 0.92 0.89 5244
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 4984
1 0.60 0.07 0.12 260
avg / total 0.94 0.95 0.93 5244
search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 5106
1 0.67 0.10 0.18 138
avg / total 0.97 0.98 0.97 5244
security
precision recall f1-score support
0 0.98 1.00 0.99 5151
1 0.25 0.01 0.02 93
avg / total 0.97 0.98 0.97 5244
military
precision recall f1-score support
0 0.97 1.00 0.98 5069
1 0.67 0.07 0.12 175
avg / total 0.96 0.97 0.95 5244
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
water
precision recall f1-score support
0 0.95 1.00 0.97 4897
1 0.82 0.30 0.44 347
avg / total 0.94 0.95 0.94 5244
food
precision recall f1-score support
0 0.94 0.99 0.96 4655
1 0.83 0.46 0.59 589
avg / total 0.92 0.93 0.92 5244
shelter
precision recall f1-score support
0 0.93 0.99 0.96 4761
1 0.82 0.30 0.44 483
avg / total 0.92 0.93 0.91 5244
clothing
precision recall f1-score support
0 0.98 1.00 0.99 5150
1 1.00 0.05 0.10 94
avg / total 0.98 0.98 0.98 5244
money
precision recall f1-score support
0 0.98 1.00 0.99 5133
1 0.75 0.05 0.10 111
avg / total 0.98 0.98 0.97 5244
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 5181
1 0.75 0.05 0.09 63
avg / total 0.99 0.99 0.98 5244
refugees
precision recall f1-score support
0 0.97 1.00 0.99 5091
1 0.82 0.06 0.11 153
avg / total 0.97 0.97 0.96 5244
death
precision recall f1-score support
0 0.96 1.00 0.98 5021
1 0.77 0.11 0.19 223
avg / total 0.95 0.96 0.95 5244
other_aid
precision recall f1-score support
0 0.87 0.99 0.93 4531
1 0.54 0.04 0.07 713
avg / total 0.82 0.86 0.81 5244
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 4907
1 0.00 0.00 0.00 337
avg / total 0.88 0.93 0.90 5244
transport
precision recall f1-score support
0 0.95 1.00 0.97 4977
1 0.61 0.06 0.12 267
avg / total 0.93 0.95 0.93 5244
buildings
precision recall f1-score support
0 0.95 1.00 0.97 4966
1 0.87 0.07 0.13 278
avg / total 0.95 0.95 0.93 5244
electricity
precision recall f1-score support
0 0.98 1.00 0.99 5138
1 0.83 0.09 0.17 106
avg / total 0.98 0.98 0.97 5244
tools
precision recall f1-score support
0 0.99 1.00 1.00 5209
1 0.00 0.00 0.00 35
avg / total 0.99 0.99 0.99 5244
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 5189
1 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.98 5244
shops
precision recall f1-score support
0 1.00 1.00 1.00 5218
1 0.00 0.00 0.00 26
avg / total 0.99 1.00 0.99 5244
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 5185
1 0.00 0.00 0.00 59
avg / total 0.98 0.99 0.98 5244
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 5011
1 0.25 0.00 0.01 233
avg / total 0.92 0.96 0.93 5244
weather_related
precision recall f1-score support
0 0.85 0.97 0.90 3801
1 0.85 0.53 0.66 1443
avg / total 0.85 0.85 0.83 5244
floods
precision recall f1-score support
0 0.93 1.00 0.96 4798
1 0.87 0.23 0.37 446
avg / total 0.93 0.93 0.91 5244
storm
precision recall f1-score support
0 0.94 0.99 0.96 4758
1 0.77 0.35 0.48 486
avg / total 0.92 0.93 0.92 5244
fire
precision recall f1-score support
0 0.99 1.00 0.99 5186
1 1.00 0.02 0.03 58
avg / total 0.99 0.99 0.98 5244
earthquake
precision recall f1-score support
0 0.96 0.99 0.98 4769
1 0.90 0.61 0.73 475
avg / total 0.96 0.96 0.95 5244
cold
precision recall f1-score support
0 0.98 1.00 0.99 5150
1 0.90 0.10 0.17 94
avg / total 0.98 0.98 0.98 5244
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 4958
1 0.46 0.04 0.08 286
avg / total 0.92 0.95 0.92 5244
direct_report
precision recall f1-score support
0 0.85 0.98 0.91 4197
1 0.78 0.30 0.43 1047
avg / total 0.83 0.84 0.81 5244
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {'clf__estimator__max_depth': [10, 50, None],
'clf__estimator__min_samples_leaf':[2, 5, 10]}
cv = GridSearchCV(pipeline, parameters)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model.Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Train pipeline using the improved model
cv.fit(X_train, y_train)
# # classification_report to display model precision, recall, f1-score, support
evaluate_model(cv, X_test, y_test)
cv.best_estimator_
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# Improve model using DecisionTree Classifier
new_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(DecisionTreeClassifier()))
])
# Train improved model
new_pipeline.fit(X_train, y_train)
# Run result metric score display function
evaluate_model(new_pipeline, X_test, y_test)
###Output
related
precision recall f1-score support
0 0.47 0.45 0.46 1193
1 0.84 0.85 0.84 4016
2 0.31 0.40 0.35 35
avg / total 0.75 0.75 0.75 5244
request
precision recall f1-score support
0 0.92 0.92 0.92 4361
1 0.60 0.61 0.60 883
avg / total 0.87 0.87 0.87 5244
offer
precision recall f1-score support
0 0.99 1.00 1.00 5210
1 0.00 0.00 0.00 34
avg / total 0.99 0.99 0.99 5244
aid_related
precision recall f1-score support
0 0.75 0.75 0.75 3049
1 0.65 0.65 0.65 2195
avg / total 0.71 0.71 0.71 5244
medical_help
precision recall f1-score support
0 0.94 0.95 0.94 4805
1 0.33 0.30 0.31 439
avg / total 0.89 0.89 0.89 5244
medical_products
precision recall f1-score support
0 0.97 0.97 0.97 4984
1 0.40 0.35 0.37 260
avg / total 0.94 0.94 0.94 5244
search_and_rescue
precision recall f1-score support
0 0.98 0.98 0.98 5106
1 0.22 0.20 0.21 138
avg / total 0.96 0.96 0.96 5244
security
precision recall f1-score support
0 0.98 0.99 0.98 5151
1 0.04 0.03 0.03 93
avg / total 0.97 0.97 0.97 5244
military
precision recall f1-score support
0 0.98 0.98 0.98 5069
1 0.39 0.37 0.38 175
avg / total 0.96 0.96 0.96 5244
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
avg / total 1.00 1.00 1.00 5244
water
precision recall f1-score support
0 0.98 0.98 0.98 4897
1 0.67 0.67 0.67 347
avg / total 0.96 0.96 0.96 5244
food
precision recall f1-score support
0 0.96 0.96 0.96 4655
1 0.72 0.71 0.71 589
avg / total 0.94 0.94 0.94 5244
shelter
precision recall f1-score support
0 0.96 0.96 0.96 4761
1 0.62 0.59 0.61 483
avg / total 0.93 0.93 0.93 5244
clothing
precision recall f1-score support
0 0.99 1.00 0.99 5150
1 0.62 0.40 0.49 94
avg / total 0.98 0.98 0.98 5244
money
precision recall f1-score support
0 0.99 0.99 0.99 5133
1 0.40 0.38 0.39 111
avg / total 0.97 0.97 0.97 5244
missing_people
precision recall f1-score support
0 0.99 0.99 0.99 5181
1 0.27 0.21 0.23 63
avg / total 0.98 0.98 0.98 5244
refugees
precision recall f1-score support
0 0.98 0.98 0.98 5091
1 0.24 0.25 0.25 153
avg / total 0.96 0.95 0.96 5244
death
precision recall f1-score support
0 0.98 0.98 0.98 5021
1 0.49 0.53 0.51 223
avg / total 0.96 0.96 0.96 5244
other_aid
precision recall f1-score support
0 0.89 0.90 0.89 4531
1 0.29 0.27 0.28 713
avg / total 0.81 0.81 0.81 5244
infrastructure_related
precision recall f1-score support
0 0.94 0.95 0.95 4907
1 0.18 0.16 0.17 337
avg / total 0.89 0.90 0.90 5244
transport
precision recall f1-score support
0 0.96 0.97 0.97 4977
1 0.36 0.29 0.32 267
avg / total 0.93 0.94 0.93 5244
buildings
precision recall f1-score support
0 0.97 0.97 0.97 4966
1 0.43 0.40 0.42 278
avg / total 0.94 0.94 0.94 5244
electricity
precision recall f1-score support
0 0.99 0.99 0.99 5138
1 0.39 0.31 0.35 106
avg / total 0.97 0.98 0.97 5244
tools
precision recall f1-score support
0 0.99 1.00 0.99 5209
1 0.05 0.03 0.04 35
avg / total 0.99 0.99 0.99 5244
hospitals
precision recall f1-score support
0 0.99 0.99 0.99 5189
1 0.22 0.18 0.20 55
avg / total 0.98 0.98 0.98 5244
shops
precision recall f1-score support
0 1.00 1.00 1.00 5218
1 0.00 0.00 0.00 26
avg / total 0.99 0.99 0.99 5244
aid_centers
precision recall f1-score support
0 0.99 0.99 0.99 5185
1 0.08 0.08 0.08 59
avg / total 0.98 0.98 0.98 5244
other_infrastructure
precision recall f1-score support
0 0.96 0.97 0.96 5011
1 0.15 0.13 0.14 233
avg / total 0.92 0.93 0.93 5244
weather_related
precision recall f1-score support
0 0.89 0.91 0.90 3801
1 0.74 0.71 0.72 1443
avg / total 0.85 0.85 0.85 5244
floods
precision recall f1-score support
0 0.96 0.96 0.96 4798
1 0.59 0.54 0.57 446
avg / total 0.93 0.93 0.93 5244
storm
precision recall f1-score support
0 0.96 0.97 0.97 4758
1 0.66 0.65 0.65 486
avg / total 0.94 0.94 0.94 5244
fire
precision recall f1-score support
0 0.99 0.99 0.99 5186
1 0.31 0.29 0.30 58
avg / total 0.98 0.99 0.98 5244
earthquake
precision recall f1-score support
0 0.98 0.98 0.98 4769
1 0.80 0.78 0.79 475
avg / total 0.96 0.96 0.96 5244
cold
precision recall f1-score support
0 0.99 0.99 0.99 5150
1 0.34 0.38 0.36 94
avg / total 0.98 0.98 0.98 5244
other_weather
precision recall f1-score support
0 0.96 0.96 0.96 4958
1 0.26 0.22 0.24 286
avg / total 0.92 0.92 0.92 5244
direct_report
precision recall f1-score support
0 0.88 0.89 0.88 4197
1 0.54 0.50 0.52 1047
avg / total 0.81 0.81 0.81 5244
###Markdown
9. Export your model as a pickle file
###Code
# save a copy file of the the trained model to disk
trained_model_file = 'trained_model.sav'
pickle.dump(cv, open(trained_model_file, 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import re
import numpy as np
import pandas as pd
import nltk
import pickle
from sqlalchemy import create_engine
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.ensemble import RandomForestClassifier,AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report,f1_score,accuracy_score,log_loss
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('punkt')
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('InsertTableName', engine)
Y_labels = ['related', 'request', 'offer', 'aid_related',
'medical_help', 'medical_products', 'search_and_rescue',
'security', 'military', 'child_alone', 'water', 'food',
'shelter', 'clothing', 'money', 'missing_people', 'refugees',
'death', 'other_aid', 'infrastructure_related', 'transport',
'buildings', 'electricity', 'tools', 'hospitals', 'shops',
'aid_centers', 'other_infrastructure', 'weather_related',
'floods', 'storm', 'fire', 'earthquake', 'cold',
'other_weather', 'direct_report']
X = df['message'].values
Y = df[Y_labels].values
category_names = Y_labels
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
stop_words = stopwords.words('english')
lemmatizer = WordNetLemmatizer()
    #remove punctuation
    text = re.sub(r"[^a-zA-Z0-9]", ' ', text.lower())
#tokenize text
tokens = word_tokenize(text)
#lemmatize and remove stop words
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return (tokens)
#test
tokenize(X[3])
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline- You'll find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect',CountVectorizer()),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(DecisionTreeClassifier(random_state=42),n_jobs=-1))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test,y_train, y_test = train_test_split(X, Y, test_size=0.2,random_state=42)
pipeline.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
## get f1 score
y_pred = pipeline.predict(X_test)
print("Computing Accuracy for each Category")
for i in range(36):
print(category_names[i], " Accuracy: ", accuracy_score(y_test[:,i],y_pred[:,i]))
print("\n Classification Report")
print(classification_report(y_test, y_pred, target_names=category_names))
###Output
Computing Accuracy for each Category
related Accuracy: 0.6636779855017169
request Accuracy: 0.7573445249904617
offer Accuracy: 0.9919877909194964
aid_related Accuracy: 0.5331934376192293
medical_help Accuracy: 0.8668447157573446
medical_products Accuracy: 0.9135826020602823
search_and_rescue Accuracy: 0.9471575734452499
security Accuracy: 0.9692865318580695
military Accuracy: 0.9395268981304845
child_alone Accuracy: 1.0
water Accuracy: 0.8922167111789393
food Accuracy: 0.8174360930942388
shelter Accuracy: 0.8401373521556658
clothing Accuracy: 0.9713849675696299
money Accuracy: 0.9566959175887066
missing_people Accuracy: 0.9753910721098817
refugees Accuracy: 0.9376192293017932
death Accuracy: 0.9147272033574971
other_aid Accuracy: 0.7813811522319726
infrastructure_related Accuracy: 0.8859214040442579
transport Accuracy: 0.9118657001144601
buildings Accuracy: 0.9101487981686379
electricity Accuracy: 0.960129721480351
tools Accuracy: 0.9883632201449828
hospitals Accuracy: 0.9797787104158718
shops Accuracy: 0.9938954597481877
aid_centers Accuracy: 0.979969477298741
other_infrastructure Accuracy: 0.9217855780236551
weather_related Accuracy: 0.6501335368180083
floods Accuracy: 0.8609309423884014
storm Accuracy: 0.8605494086226632
fire Accuracy: 0.9801602441816101
earthquake Accuracy: 0.861884776802747
cold Accuracy: 0.9650896604349485
other_weather Accuracy: 0.9051888592140405
direct_report Accuracy: 0.7315909958031286
Classification Report
precision recall f1-score support
related 0.77 0.79 0.78 3993
request 0.27 0.26 0.27 883
offer 0.05 0.04 0.05 24
aid_related 0.43 0.41 0.42 2150
medical_help 0.08 0.07 0.07 419
medical_products 0.05 0.04 0.04 256
search_and_rescue 0.02 0.02 0.02 138
security 0.05 0.03 0.04 108
military 0.04 0.04 0.04 171
child_alone 0.00 0.00 0.00 0
water 0.05 0.05 0.05 316
food 0.13 0.12 0.12 564
shelter 0.08 0.08 0.08 468
clothing 0.01 0.01 0.01 85
money 0.03 0.03 0.03 120
missing_people 0.00 0.00 0.00 67
refugees 0.08 0.06 0.07 200
death 0.07 0.07 0.07 243
other_aid 0.12 0.12 0.12 671
infrastructure_related 0.08 0.07 0.07 347
transport 0.06 0.05 0.05 255
buildings 0.09 0.08 0.08 271
electricity 0.03 0.03 0.03 112
tools 0.00 0.00 0.00 28
hospitals 0.00 0.00 0.00 60
shops 0.00 0.00 0.00 17
aid_centers 0.06 0.05 0.05 64
other_infrastructure 0.08 0.06 0.07 242
weather_related 0.36 0.34 0.35 1456
floods 0.13 0.11 0.12 448
storm 0.23 0.19 0.21 507
fire 0.00 0.00 0.00 54
earthquake 0.22 0.21 0.21 475
cold 0.01 0.01 0.01 101
other_weather 0.06 0.06 0.06 265
direct_report 0.27 0.25 0.26 1001
avg / total 0.35 0.34 0.34 16579
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'vect__min_df': [1],
'vect__lowercase': [False],
'tfidf__smooth_idf': [False],
}
cv = GridSearchCV(pipeline,param_grid=parameters,cv=2,n_jobs=-1)
###Output
_____no_output_____
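###Markdown
The grid above only varies vectorizer defaults, so it barely changes the model. A broader (and slower) search could also tune the decision tree itself. A minimal sketch; the specific values are illustrative, not tuned:
###Code
# Illustrative wider grid over both the vectorizer and the tree
wider_parameters = {
    'vect__ngram_range': [(1, 1), (1, 2)],
    'clf__estimator__max_depth': [None, 20],
    'clf__estimator__min_samples_split': [2, 10],
}
wider_cv = GridSearchCV(pipeline, param_grid=wider_parameters, cv=2, n_jobs=-1)
# wider_cv.fit(X_train, y_train)  # left commented out: this search is considerably slower than the one above
###Output
_____no_output_____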
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
cv.best_score_
## get f1 score
y_pred = cv.predict(X_test)
print("Computing Accuracy for each Category")
for i in range(36):
print(category_names[i], " Accuracy: ", accuracy_score(y_test[:,i],y_pred[:,i]))
print("\n Classification Report")
print(classification_report(y_test, y_pred, target_names=category_names))
###Output
Computing Accuracy for each Category
related Accuracy: 0.6579549790156429
request Accuracy: 0.7607783288821061
offer Accuracy: 0.9914154902708889
aid_related Accuracy: 0.5335749713849676
medical_help Accuracy: 0.8605494086226632
medical_products Accuracy: 0.9152995040061045
search_and_rescue Accuracy: 0.9481114078595956
security Accuracy: 0.9671880961465089
military Accuracy: 0.9414345669591759
child_alone Accuracy: 1.0
water Accuracy: 0.8885921404044258
food Accuracy: 0.8147653567340709
shelter Accuracy: 0.8416634872186188
clothing Accuracy: 0.9710034338038916
money Accuracy: 0.9549790156428843
missing_people Accuracy: 0.9742464708126669
refugees Accuracy: 0.9393361312476154
death Accuracy: 0.9120564669973292
other_aid Accuracy: 0.790537962609691
infrastructure_related Accuracy: 0.8826783670354826
transport Accuracy: 0.9097672644028997
buildings Accuracy: 0.9095764975200306
electricity Accuracy: 0.9597481877146128
tools Accuracy: 0.9874093857306372
hospitals Accuracy: 0.9795879435330027
shops Accuracy: 0.9935139259824495
aid_centers Accuracy: 0.9778710415871804
other_infrastructure Accuracy: 0.9164441053033193
weather_related Accuracy: 0.6604349484929416
floods Accuracy: 0.8519648988935521
storm Accuracy: 0.8620755436856162
fire Accuracy: 0.9795879435330027
earthquake Accuracy: 0.8630293780999618
cold Accuracy: 0.9652804273178176
other_weather Accuracy: 0.9034719572682183
direct_report Accuracy: 0.7180465471194201
Classification Report
precision recall f1-score support
related 0.77 0.79 0.78 3993
request 0.28 0.26 0.27 883
offer 0.00 0.00 0.00 24
aid_related 0.43 0.42 0.42 2150
medical_help 0.07 0.06 0.07 419
medical_products 0.03 0.02 0.02 256
search_and_rescue 0.05 0.06 0.06 138
security 0.04 0.03 0.03 108
military 0.08 0.08 0.08 171
child_alone 0.00 0.00 0.00 0
water 0.06 0.05 0.06 316
food 0.13 0.13 0.13 564
shelter 0.09 0.09 0.09 468
clothing 0.00 0.00 0.00 85
money 0.04 0.04 0.04 120
missing_people 0.00 0.00 0.00 67
refugees 0.07 0.04 0.05 200
death 0.04 0.04 0.04 243
other_aid 0.14 0.13 0.13 671
infrastructure_related 0.06 0.05 0.06 347
transport 0.04 0.04 0.04 255
buildings 0.10 0.09 0.09 271
electricity 0.05 0.05 0.05 112
tools 0.00 0.00 0.00 28
hospitals 0.00 0.00 0.00 60
shops 0.00 0.00 0.00 17
aid_centers 0.00 0.00 0.00 64
other_infrastructure 0.03 0.02 0.03 242
weather_related 0.38 0.34 0.36 1456
floods 0.10 0.09 0.10 448
storm 0.21 0.15 0.18 507
fire 0.00 0.00 0.00 54
earthquake 0.23 0.22 0.22 475
cold 0.02 0.02 0.02 101
other_weather 0.04 0.04 0.04 265
direct_report 0.25 0.24 0.25 1001
avg / total 0.34 0.34 0.34 16579
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
# ('clf', MultiOutputClassifier(RandomForestClassifier(random_state = 42), n_jobs = -1)),
# ('clf', MultiOutputClassifier(KNeighborsClassifier(), n_jobs = -1)),
('clf', MultiOutputClassifier(AdaBoostClassifier(random_state=42), n_jobs = -1))
])
parameters = {
'vect__min_df': [1],
'vect__lowercase': [False],
'tfidf__smooth_idf': [False],
}
cv = GridSearchCV(pipeline, parameters, cv = 2, n_jobs = -1)
cv.fit(X_train, y_train)
## get f1 score
y_pred = cv.predict(X_test)
print("Computing Accuracy for each Category")
for i in range(36):
print(category_names[i], " Accuracy: ", accuracy_score(y_test[:,i],y_pred[:,i]))
print("\n Classification Report")
print(classification_report(y_test, y_pred, target_names=category_names))
###Output
Computing Accuracy for each Category
related Accuracy: 0.756390690576116
request Accuracy: 0.822014498283098
offer Accuracy: 0.9948492941625334
aid_related Accuracy: 0.5930942388401373
medical_help Accuracy: 0.918733307897749
medical_products Accuracy: 0.9509729111026326
search_and_rescue Accuracy: 0.9732926363983212
security Accuracy: 0.9790156428843952
military Accuracy: 0.9660434948492942
child_alone Accuracy: 1.0
water Accuracy: 0.9391453643647463
food Accuracy: 0.8897367417016406
shelter Accuracy: 0.9109118657001145
clothing Accuracy: 0.9820679130103014
money Accuracy: 0.9763449065242273
missing_people Accuracy: 0.9866463181991606
refugees Accuracy: 0.9608927890118275
death Accuracy: 0.9523082792827166
other_aid Accuracy: 0.8695154521175124
infrastructure_related Accuracy: 0.9334223578786722
transport Accuracy: 0.9496375429225486
buildings Accuracy: 0.9481114078595956
electricity Accuracy: 0.9782525753529188
tools Accuracy: 0.9944677603967951
hospitals Accuracy: 0.9876001526135063
shops Accuracy: 0.9963754292254865
aid_centers Accuracy: 0.9872186188477681
other_infrastructure Accuracy: 0.9528805799313239
weather_related Accuracy: 0.7344524990461656
floods Accuracy: 0.9147272033574971
storm Accuracy: 0.9028996566196108
fire Accuracy: 0.9881724532621137
earthquake Accuracy: 0.9160625715375811
cold Accuracy: 0.9793971766501335
other_weather Accuracy: 0.9483021747424647
direct_report Accuracy: 0.797787104158718
Classification Report
precision recall f1-score support
related 0.77 0.98 0.86 3993
request 0.38 0.09 0.15 883
offer 0.00 0.00 0.00 24
aid_related 0.51 0.17 0.26 2150
medical_help 0.00 0.00 0.00 419
medical_products 0.00 0.00 0.00 256
search_and_rescue 0.00 0.00 0.00 138
security 0.00 0.00 0.00 108
military 0.00 0.00 0.00 171
child_alone 0.00 0.00 0.00 0
water 0.20 0.00 0.01 316
food 0.21 0.01 0.02 564
shelter 1.00 0.00 0.00 468
clothing 0.00 0.00 0.00 85
money 0.00 0.00 0.00 120
missing_people 0.00 0.00 0.00 67
refugees 0.00 0.00 0.00 200
death 0.00 0.00 0.00 243
other_aid 0.00 0.00 0.00 671
infrastructure_related 0.25 0.00 0.01 347
transport 0.00 0.00 0.00 255
buildings 0.00 0.00 0.00 271
electricity 0.25 0.01 0.02 112
tools 0.00 0.00 0.00 28
hospitals 0.00 0.00 0.00 60
shops 0.00 0.00 0.00 17
aid_centers 0.00 0.00 0.00 64
other_infrastructure 0.00 0.00 0.00 242
weather_related 0.58 0.16 0.26 1456
floods 1.00 0.00 0.00 448
storm 0.48 0.06 0.11 507
fire 0.00 0.00 0.00 54
earthquake 0.65 0.16 0.25 475
cold 0.00 0.00 0.00 101
other_weather 0.00 0.00 0.00 265
direct_report 0.36 0.07 0.12 1001
avg / total 0.45 0.29 0.29 16579
###Markdown
9. Export your model as a pickle file
###Code
with open('clf.pickle', 'wb') as f:
pickle.dump(cv, f)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import sys
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import os
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sqlalchemy import create_engine
import pickle
from sklearn.base import BaseEstimator,TransformerMixin
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.datasets import make_multilabel_classification
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('InsertTableName',engine)
X = df['message']
Y = df.iloc[:,4:]
columns = Y.columns
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
'''
text : the text you want to tokenize
'''
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
    # Replace any urls we find with a placeholder
    detected_urls = re.findall(url_regex, text)
    for url in detected_urls:
        text = text.replace(url, "urlplaceholder")
# Tokenize the text
tokens = word_tokenize(text)
#lemmatize and normalize the data
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
"""
Starting Verb Extractor class
This class extract the starting verb of a sentence,
creating a new feature for the ML classifier
"""
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
            pos_tags = nltk.pos_tag(tokenize(sentence))
            # skip empty sentences so pos_tags[0] cannot raise an IndexError
            if not pos_tags:
                continue
            first_word, first_tag = pos_tags[0]
            if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
# Given it is a tranformer we can return the self
def fit(self, X, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
classifier = RandomForestClassifier(min_samples_split = 100,min_samples_leaf = 20, max_depth = 8,
max_features = 'sqrt', random_state = 1)
pipeline = Pipeline([
('count_vectorizer', CountVectorizer(tokenizer=tokenize)),
('tfidf_transformer', TfidfTransformer()),
('classifier', MultiOutputClassifier(classifier))
])
# `params` was never defined above; a minimal illustrative grid over the forest size keeps this cell runnable
params = {'classifier__estimator__n_estimators': [100, 200]}
cv = GridSearchCV(pipeline, param_grid=params, cv=5, n_jobs=-1)
###Output
_____no_output_____
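###Markdown
A quick way to see what the custom transformer contributes: run it on a couple of raw messages and inspect the boolean feature it produces. A minimal sketch, assuming the class definition above (the example messages are made up):
###Code
# StartingVerbExtractor flags messages whose first sentence starts with a verb (or 'RT')
sample_messages = pd.Series(["Send water to the shelter now",
                             "The bridge near the river collapsed"])
starting_verb = StartingVerbExtractor()
print(starting_verb.fit_transform(sample_messages))
###Output
_____no_output_____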
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
cv.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = cv.predict(X_test)
print(classification_report(y_test.values, y_pred, target_names = columns))
###Output
precision recall f1-score support
related 0.77 1.00 0.87 5008
request 0.00 0.00 0.00 1116
offer 0.00 0.00 0.00 23
aid_related 0.95 0.03 0.06 2720
medical_help 0.00 0.00 0.00 513
medical_products 0.00 0.00 0.00 317
search_and_rescue 0.00 0.00 0.00 181
security 0.00 0.00 0.00 119
military 0.00 0.00 0.00 195
water 0.00 0.00 0.00 402
food 0.00 0.00 0.00 751
shelter 0.00 0.00 0.00 621
clothing 0.00 0.00 0.00 104
money 0.00 0.00 0.00 140
missing_people 0.00 0.00 0.00 69
refugees 0.00 0.00 0.00 236
death 0.00 0.00 0.00 316
other_aid 0.00 0.00 0.00 890
infrastructure_related 0.00 0.00 0.00 430
transport 0.00 0.00 0.00 315
buildings 0.00 0.00 0.00 372
electricity 0.00 0.00 0.00 138
tools 0.00 0.00 0.00 32
hospitals 0.00 0.00 0.00 77
shops 0.00 0.00 0.00 35
aid_centers 0.00 0.00 0.00 62
other_infrastructure 0.00 0.00 0.00 302
weather_related 1.00 0.00 0.01 1818
floods 0.00 0.00 0.00 526
storm 0.00 0.00 0.00 582
fire 0.00 0.00 0.00 77
earthquake 0.00 0.00 0.00 599
cold 0.00 0.00 0.00 143
other_weather 0.00 0.00 0.00 356
direct_report 0.00 0.00 0.00 1264
avg / total 0.40 0.24 0.22 20849
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
classifier = RandomForestClassifier(min_samples_split = 100,min_samples_leaf = 20, max_depth = 8,
max_features = 'sqrt', random_state = 1)
pipeline2 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('count_vectorizer', CountVectorizer(tokenizer=tokenize)),
('tfidf_transformer', TfidfTransformer())
])),
('starting_verb_transformer', StartingVerbExtractor())
])),
('classifier', MultiOutputClassifier(classifier))
])
params2 = {
'classifier__estimator__n_estimators': [100, 200]
# These parameters were commented because they take too long to run
# ,
# 'classifier__estimator__random_state' : [1,5,10],
# 'classifier__estimator__min_samples_split':[100,200,300]
}
cv2 = GridSearchCV(pipeline2, param_grid = params2, cv=5, n_jobs=-1)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv2.fit(X_train, y_train)
y_pred2 = cv2.predict(X_test)
print(classification_report(y_test.values, y_pred2, target_names = columns))
###Output
precision recall f1-score support
related 0.77 1.00 0.87 5008
request 0.00 0.00 0.00 1116
offer 0.00 0.00 0.00 23
aid_related 0.95 0.03 0.06 2720
medical_help 0.00 0.00 0.00 513
medical_products 0.00 0.00 0.00 317
search_and_rescue 0.00 0.00 0.00 181
security 0.00 0.00 0.00 119
military 0.00 0.00 0.00 195
water 0.00 0.00 0.00 402
food 0.00 0.00 0.00 751
shelter 0.00 0.00 0.00 621
clothing 0.00 0.00 0.00 104
money 0.00 0.00 0.00 140
missing_people 0.00 0.00 0.00 69
refugees 0.00 0.00 0.00 236
death 0.00 0.00 0.00 316
other_aid 0.00 0.00 0.00 890
infrastructure_related 0.00 0.00 0.00 430
transport 0.00 0.00 0.00 315
buildings 0.00 0.00 0.00 372
electricity 0.00 0.00 0.00 138
tools 0.00 0.00 0.00 32
hospitals 0.00 0.00 0.00 77
shops 0.00 0.00 0.00 35
aid_centers 0.00 0.00 0.00 62
other_infrastructure 0.00 0.00 0.00 302
weather_related 1.00 0.01 0.01 1818
floods 0.00 0.00 0.00 526
storm 0.00 0.00 0.00 582
fire 0.00 0.00 0.00 77
earthquake 0.00 0.00 0.00 599
cold 0.00 0.00 0.00 143
other_weather 0.00 0.00 0.00 356
direct_report 0.00 0.00 0.00 1264
avg / total 0.40 0.24 0.22 20849
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
###Code
pickle.dump(cv2, open('classifier.pkl', 'wb'))
###Output
_____no_output_____
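###Markdown
One likely reason for the near-zero recall on most categories above is the heavily constrained forest (max_depth=8, min_samples_split=100). A minimal sketch of one follow-up for step 8, assuming the feature pipeline and data splits defined earlier (the looser settings are illustrative):
###Code
# Re-use the same feature engineering with a less constrained random forest
looser_classifier = RandomForestClassifier(n_estimators=100, random_state=1)
pipeline3 = Pipeline([
    ('features', FeatureUnion([
        ('text_pipeline', Pipeline([
            ('count_vectorizer', CountVectorizer(tokenizer=tokenize)),
            ('tfidf_transformer', TfidfTransformer())
        ])),
        ('starting_verb_transformer', StartingVerbExtractor())
    ])),
    ('classifier', MultiOutputClassifier(looser_classifier))
])
# pipeline3.fit(X_train, y_train)  # left commented out: a full refit is slow on the complete training set
# print(classification_report(y_test.values, pipeline3.predict(X_test), target_names=columns))
###Output
_____no_output_____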
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import sys
import re
import pickle
import nltk
from nltk.tokenize import word_tokenize,sent_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier,AdaBoostClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_recall_fscore_support,accuracy_score,label_ranking_average_precision_score
from sklearn.model_selection import GridSearchCV
nltk.download(['punkt','stopwords','wordnet'])
def load_data(db_path='workspace/data/DisasterResponse.db',tablename='disastertab'):
"""
Function: load data from database and return X and y.
Args:
db_path(str): database file name included path
tablename:(str): table name in the database file.
Return:
X(pd.DataFrame): messages for X
y(pd.DataFrame): labels part in messages for y
"""
# load data from database
engine = create_engine('sqlite:///'+db_path)
df=pd.read_sql_table(tablename, engine)
X = df['message']
    # the targets are the multi-label category columns
y = df.iloc[:,4:]
return X, y
X,y=load_data()
print(X[:5].values.tolist())
#need sent_tokenize to parse X.
###Output
['Weather update - a cold front from Cuba that could pass over Haiti', 'Is the Hurricane over or is it not over', 'Looking for someone but no name', 'UN reports Leogane 80-90 destroyed. Only Hospital St. Croix functioning. Needs supplies desperately.', 'says: west side of Haiti, rest of the country today and tonight']
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Function: tokenize the text
Args: source string
Return:
clean_tokens(str list): clean string list
"""
#normalize text
text = re.sub(r'[^a-zA-Z0-9]',' ',text.lower())
#token messages
words = word_tokenize(text)
tokens = [w for w in words if w not in stopwords.words("english")]
    #lemmatize each token (no stemming is applied here)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline- You'll find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect',TfidfVectorizer(tokenizer=tokenize)),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators=200)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2, random_state=42)
pipeline.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def display_results(Y_test, y_pred):
result=precision_recall_fscore_support(Y_test, y_pred)
for i, col in enumerate(Y_test.columns.values):
accu=accuracy_score(Y_test.loc[:,col],y_pred[:,i])
        score = ('{}\n Accuracy: {:.4f} Precision: {:.4f} Recall: {:.4f}'.format(
            col, accu, result[0][i], result[1][i]))
print(score)
avg_precision = label_ranking_average_precision_score(Y_test, y_pred)
avg_score= ('label ranking average precision: {}'.format(avg_precision))
print(avg_score)
y_pred=pipeline.predict(X_test)
display_results(y_test, y_pred)
pipeline.get_params()
from sklearn.externals import joblib
model = joblib.load("workspace/models/classifier.pkl")
print(model.best_params_)
p1=model.best_estimator_
p1.get_params()
###Output
{'clf__estimator__min_samples_leaf': 1, 'clf__estimator__max_features': 'auto', 'vect__smooth_idf': True}
###Markdown
pipeline = Pipeline([ ('vect',TfidfVectorizer(tokenizer=tokenize)), ('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators=200,random_state=20))) ]) parameters = { 'clf__estimator__min_samples_leaf': [1,10], 'clf__estimator__max_features': ['auto','log2'], 'vect__smooth_idf':[True] } 6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'clf__estimator__min_samples_leaf': [1,10],
'clf__estimator__max_features': ['auto','log2'],
'vect__smooth_idf':[True]
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters,n_jobs=-1)
###Output
_____no_output_____
###Markdown
(Backup parameter grid, kept for reference: `parameters_bak = {'clf__estimator__n_estimators': [200, 100], 'clf__estimator__max_depth': [3, 15], 'clf__estimator__min_samples_leaf': [1, 8], 'vect__smooth_idf': [True, False], 'vect__sublinear_tf': [True, False]}`.)
###Code
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
display_results(y_test, y_pred)
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
pipeline = Pipeline([
('vect',TfidfVectorizer(tokenizer=tokenize)),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline.get_params()
parameters = {
'vect__smooth_idf': [True,False],
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters,n_jobs=-1)
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
display_results(y_test, y_pred)
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
with open('classifer.pkl', 'wb') as f:
pickle.dump(cv, f)
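# To reuse the saved model later (illustrative; reads back the 'classifer.pkl' file written above):
with open('classifer.pkl', 'rb') as f:
    loaded_model = pickle.load(f)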
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
###Code
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import sys
import re
import pickle
import nltk
from nltk.tokenize import word_tokenize,sent_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier,AdaBoostClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_recall_fscore_support,accuracy_score
from sklearn.model_selection import GridSearchCV
nltk.download(['punkt','stopwords','wordnet'])
def load_data(db_path='workspace/data/DisasterResponse.db',tablename='disastertab'):
"""
Function: load data from database and return X and y.
Args:
db_path(str): database file name included path
    tablename (str): table name in the database file.
    Return:
    X (pd.Series): the message texts
    y (pd.DataFrame): the label columns
"""
# load data from database
engine = create_engine('sqlite:///'+db_path)
df=pd.read_sql('SELECT * from '+tablename, engine)
X = df['message']
    # the remaining columns form a multi-label (multi-output) classification target
y = df.iloc[:,4:]
return X, y
def tokenize(text):
"""
    Function: normalize, tokenize and lemmatize the input text.
    Args:
        text (str): source string
    Return:
        clean_tokens (list of str): cleaned token list
"""
#normalize text
text = re.sub(r'[^a-zA-Z0-9]',' ',text.lower())
    # tokenize message text into words
words = word_tokenize(text)
tokens = [w for w in words if w not in stopwords.words("english")]
    # lemmatize each token
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).strip()
clean_tokens.append(clean_tok)
return clean_tokens
def build_model():
"""
Function: build model that consist of pipeline
Args:
N/A
Return
cv(model): Grid Search model
"""
pipeline = Pipeline([
('vect',TfidfVectorizer(tokenizer=tokenize)),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators=200,random_state=20)))
])
parameters = {
'clf__estimator__criterion':['entropy']
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters,n_jobs=-1)
return cv
def evaluate_model(y_test, y_pred):
    # print accuracy, precision and recall for every label column
    result = precision_recall_fscore_support(y_test, y_pred)
    for i, col in enumerate(y_test.columns.values):
        accu = accuracy_score(y_test.loc[:, col], y_pred[:, i])
        print('{}\n Accuracy: {:.4f}  Precision: {:.4f}  Recall: {:.4f}'.format(
            col, accu, result[0][i], result[1][i]))
def save_model(cv):
"""
Function: save model as pickle file.
Args:
cv:target model
Return:
N/A
"""
with open('classifer.pkl', 'wb') as f:
pickle.dump(cv, f)
def main():
print("Load data")
X,y=load_data()
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2, random_state=42)
print("Build model")
model=build_model()
print("Train model")
model.fit(X_train,y_train)
y_pred=model.predict(X_test)
print("Evaluation model")
evaluate_model(y_test, y_pred)
print("Save model")
save_model(model)
if __name__ == "__main__":
main()
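# Note: the project's train.py template typically reads the database and model paths from the
# command line; a minimal sketch of that wiring (an assumption, not part of this notebook):
# if __name__ == "__main__":
#     database_filepath, model_filepath = sys.argv[1], sys.argv[2]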
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
from sqlalchemy import create_engine
import numpy as np
# download necessary NLTK data
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
# import statements
import re
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.metrics import confusion_matrix, fbeta_score, classification_report, make_scorer
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from scipy.stats import hmean
from scipy.stats.mstats import gmean
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
import pickle
import warnings
warnings.filterwarnings('ignore')
# load data from database
engine = create_engine('sqlite:///SQL_fig8.db')
df = pd.read_sql('messages', engine)
X = df['message']
Y = df.drop(columns=['id', 'message', 'genre'])
df.shape
df.info()
df.genre.value_counts()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
# remove stop words
STOPWORDS = stopwords.words("english")
tokens = [word for word in tokens if word not in STOPWORDS]
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
for message in X[:5]:
tokens = tokenize(message)
print(message)
print(tokens, '\n')
###Output
Weather update - a cold front from Cuba that could pass over Haiti
['weather', 'update', '-', 'a', 'cold', 'front', 'from', 'cuba', 'that', 'could', 'pas', 'over', 'haiti']
Is the Hurricane over or is it not over
['is', 'the', 'hurricane', 'over', 'or', 'is', 'it', 'not', 'over']
Looking for someone but no name
['looking', 'for', 'someone', 'but', 'no', 'name']
UN reports Leogane 80-90 destroyed. Only Hospital St. Croix functioning. Needs supplies desperately.
['un', 'report', 'leogane', '80-90', 'destroyed', '.', 'only', 'hospital', 'st.', 'croix', 'functioning', '.', 'needs', 'supply', 'desperately', '.']
says: west side of Haiti, rest of the country today and tonight
['say', ':', 'west', 'side', 'of', 'haiti', ',', 'rest', 'of', 'the', 'country', 'today', 'and', 'tonight']
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
def model_pipeline():
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
return pipeline
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.20, random_state=101)
model = model_pipeline()
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def overall_evaluation(model, X_test, y_test):
multi_predictions = pd.DataFrame(model.predict(X_test))
multi_predictions.columns = y_test.columns.copy()
eval_list = []
for column in multi_predictions:
        # compute the classification report and summary metrics for this output column
#confusion_mat = confusion_matrix(y_test[column], multi_predictions[column])
report = classification_report(y_test[column],multi_predictions[column])
accuracy = accuracy_score(y_test[column],multi_predictions[column])
precision = precision_score(y_test[column],multi_predictions[column], average='weighted')
recall = recall_score(y_test[column],multi_predictions[column], average='weighted')
f1 = f1_score(y_test[column],multi_predictions[column], average='weighted')
print("Label:", column)
print(report)
eval_list.append([precision, recall, accuracy, f1])
#precision_list.append(precision)
#recall_list.append(recall)
#f1_list.append(f1)
print("-----------------------------------------------------------------------")
evaluation = pd.DataFrame(eval_list)
evaluation.columns = ['precision','recall','accuracy','f1_score']
#evaluation['recall'] = recall_list
#evaluation['accuracy'] = accuracy_list
#evaluation['f1_score'] = f1_list
print(evaluation)
print("*******Overall Evaluation*******\nPrecision:{:.2f}\tRecall:{:.2f}\nAccuracy:{:.2f}\tF1 Score:{:.2f}".format(
np.mean(evaluation.precision), np.mean(evaluation.recall),
np.mean(evaluation.accuracy), np.mean(evaluation.f1_score)))
return evaluation
first_evaluation = overall_evaluation(model, X_test, y_test)
print("*******Overall Evaluation*******\nPrecision:{:.2f}\tRecall:{:.2f}\nAccuracy:{:.2f}\tF1 Score:{:.2f}".format(
np.mean(first_evaluation.precision), np.mean(first_evaluation.recall),
np.mean(first_evaluation.accuracy), np.mean(first_evaluation.f1_score)))
###Output
*******Overall Evaluation*******
Precision:0.93 Recall:0.94
Accuracy:0.94 F1 Score:0.93
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline = Pipeline([('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))])
# hyper-parameter grid
parameters = {'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.50, 0.75, 1.0),
'tfidf__use_idf': (True, False)
}
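# Grid size check: 2 ngram_range x 3 max_df x 2 use_idf = 12 candidates, i.e. 36 model fits with cv=3 below.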
# create model
cv = GridSearchCV(estimator=pipeline,
param_grid=parameters,
verbose=3,
cv=3)
# fit the grid search
model_1 = cv.fit(X_train, y_train)
model_1.best_params_
model_1.best_score_
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
second_evaluation = overall_evaluation(model_1, X_test, y_test)
print("*******Overall Evaluation*******\nPrecision:{:.2f}\tRecall:{:.2f}\nAccuracy:{:.2f}\tF1 Score:{:.2f}".format(
np.mean(first_evaluation.precision), np.mean(first_evaluation.recall),
np.mean(first_evaluation.accuracy), np.mean(first_evaluation.f1_score)))
###Output
*******Overall Evaluation*******
Precision:0.93 Recall:0.94
Accuracy:0.94 F1 Score:0.93
###Markdown
F1 score and accuracy have not improved compared to the previous run, even after grid search CV.
###Code
print("*******Overall Evaluation*******\nPrecision:{:.2f}\tRecall:{:.2f}\nAccuracy:{:.2f}\tF1 Score:{:.2f}".format(
np.mean(second_evaluation.precision), np.mean(second_evaluation.recall),
np.mean(second_evaluation.accuracy), np.mean(second_evaluation.f1_score)))
###Output
*******Overall Evaluation*******
Precision:0.93 Recall:0.94
Accuracy:0.94 F1 Score:0.93
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
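# StartingVerbExtractor below is a custom transformer: it flags whether the first sentence of a
# message starts with a verb (POS tag VB/VBP) or with 'RT', and is combined with the TF-IDF
# features through the FeatureUnion further down.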
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())])),
('starting_verb', StartingVerbExtractor())])),
('clf', MultiOutputClassifier(OneVsRestClassifier(LinearSVC())))])
parameters = {'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'features__text_pipeline__tfidf__use_idf': (True, False)}
#scorer1 = make_scorer(np.mean(second_evaluation.f1_score))
cross_validation = GridSearchCV(pipeline, param_grid=parameters, verbose = 3, n_jobs=-1)
model_3 = cross_validation.fit(X_train, y_train)
third_evaluation = overall_evaluation(model_3, X_test, y_test)
print("*******Overall Evaluation*******\nPrecision:{:.2f}\tRecall:{:.2f}\nAccuracy:{:.2f}\tF1 Score:{:.2f}".format(
np.mean(first_evaluation.precision), np.mean(first_evaluation.recall),
np.mean(first_evaluation.accuracy), np.mean(first_evaluation.f1_score)))
###Output
*******Overall Evaluation*******
Precision:0.93 Recall:0.94
Accuracy:0.94 F1 Score:0.93
###Markdown
F1 score and accuracy have improved by about 1% compared to the previous grid search run. I will consider this the final model if the next model does not improve any further. Let's try AdaBoostClassifier.
###Code
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())])),
('starting_verb', StartingVerbExtractor())])),
('clf', MultiOutputClassifier(AdaBoostClassifier()))])
parameters = {'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'features__text_pipeline__tfidf__use_idf': (True, False)}
#scorer1 = make_scorer(np.mean(second_evaluation.f1_score))
cross_validation = GridSearchCV(pipeline, param_grid=parameters, verbose = 3, n_jobs=-1)
model_4 = cross_validation.fit(X_train, y_train)
fourth_evaluation = overall_evaluation(model_4, X_test, y_test)
print("*******Overall Evaluation*******\nPrecision:{:.2f}\tRecall:{:.2f}\nAccuracy:{:.2f}\tF1 Score:{:.2f}".format(
np.mean(third_evaluation.precision), np.mean(third_evaluation.recall),
np.mean(third_evaluation.accuracy), np.mean(third_evaluation.f1_score)))
###Output
*******Overall Evaluation*******
Precision:0.94 Recall:0.95
Accuracy:0.95 F1 Score:0.94
###Markdown
We can see no additional improvement in overall accuracy and F1 score. Let's compare the outcomes across all 36 categories. Below we can see the difference for each category: if a value is positive, the fourth model (AdaBoost) does better than the third model (OneVsRestClassifier(LinearSVC())) on that category.
###Code
final_eval = fourth_evaluation*100 - third_evaluation*100
final_eval['Categories'] = pd.DataFrame(y_test.columns)
final_eval
###Output
_____no_output_____
###Markdown
As we can see, there are very few categories with values on the positive side, and the differences are minor, mostly below 1%. However, the "related" category F1 score is -8%, which means the third model performs better than the fourth on that category, with similar results for categories like request, aid_related and weather_related. Therefore, I would choose the third model as my final model for this problem. 9. Export your model as a pickle file
###Code
pickle.dump(model_3, open('ML_model.sav', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
import pickle
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('Messages', engine)
X = df.iloc[:,1]
y = df.iloc[:,4:]
category_names = y.columns
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
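# Note: with no test_size specified, train_test_split holds out 25% of the data by default.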
pipeline.fit(X_train, y_train)
###Output
C:\Users\jasper.kuller\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\ensemble\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
# y_pred.shape
# type(y_pred)
# y_pred.dtype
# y_test.shape
# type(y_test)
# y_pred.dtype
for i in range(len(category_names)):
print("Category:", category_names[i],"\n", classification_report(y_test.iloc[:, i].values, y_pred[:, i]))
###Output
Category: related
precision recall f1-score support
0 0.60 0.33 0.43 1523
1 0.82 0.93 0.87 5006
accuracy 0.79 6529
macro avg 0.71 0.63 0.65 6529
weighted avg 0.77 0.79 0.77 6529
Category: request
precision recall f1-score support
0 0.88 0.98 0.93 5412
1 0.82 0.37 0.51 1117
accuracy 0.88 6529
macro avg 0.85 0.68 0.72 6529
weighted avg 0.87 0.88 0.86 6529
Category: offer
precision recall f1-score support
0 0.99 1.00 1.00 6494
1 0.00 0.00 0.00 35
accuracy 0.99 6529
macro avg 0.50 0.50 0.50 6529
weighted avg 0.99 0.99 0.99 6529
Category: aid_related
precision recall f1-score support
0 0.71 0.88 0.79 3814
1 0.75 0.50 0.60 2715
accuracy 0.72 6529
macro avg 0.73 0.69 0.69 6529
weighted avg 0.73 0.72 0.71 6529
Category: medical_help
precision recall f1-score support
0 0.93 1.00 0.96 6025
1 0.65 0.07 0.13 504
accuracy 0.93 6529
macro avg 0.79 0.54 0.55 6529
weighted avg 0.91 0.93 0.90 6529
Category: medical_products
precision recall f1-score support
0 0.96 1.00 0.98 6233
1 0.83 0.07 0.13 296
accuracy 0.96 6529
macro avg 0.90 0.53 0.55 6529
weighted avg 0.95 0.96 0.94 6529
Category: search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6363
1 0.67 0.04 0.07 166
accuracy 0.98 6529
macro avg 0.82 0.52 0.53 6529
weighted avg 0.97 0.98 0.96 6529
Category: security
precision recall f1-score support
0 0.98 1.00 0.99 6424
1 0.00 0.00 0.00 105
accuracy 0.98 6529
macro avg 0.49 0.50 0.50 6529
weighted avg 0.97 0.98 0.98 6529
Category: military
precision recall f1-score support
0 0.97 1.00 0.98 6332
1 0.53 0.04 0.08 197
accuracy 0.97 6529
macro avg 0.75 0.52 0.53 6529
weighted avg 0.96 0.97 0.96 6529
Category:
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'clf__estimator__n_estimators': [10, 20, 30, 40, 50],
'clf__estimator__min_samples_leaf': [1, 2, 3, 4, 5]
}
cv = GridSearchCV(pipeline, param_grid=parameters)
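# Grid size check: 5 n_estimators values x 5 min_samples_leaf values = 25 candidates,
# each refit for every CV fold, so this search is computationally heavy.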
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
cv.best_params_
y_pred_cv = cv.predict(X_test)
for i in range(len(category_names)):
print("Category:", category_names[i],"\n", classification_report(y_test.iloc[:, i].values, y_pred_cv[:, i]))
###Output
Category: related
precision recall f1-score support
0 0.75 0.27 0.40 1523
1 0.81 0.97 0.89 5006
accuracy 0.81 6529
macro avg 0.78 0.62 0.64 6529
weighted avg 0.80 0.81 0.77 6529
Category: request
precision recall f1-score support
0 0.89 0.99 0.94 5412
1 0.87 0.42 0.57 1117
accuracy 0.89 6529
macro avg 0.88 0.70 0.75 6529
weighted avg 0.89 0.89 0.87 6529
Category: offer
precision recall f1-score support
0 0.99 1.00 1.00 6494
1 0.00 0.00 0.00 35
accuracy 0.99 6529
macro avg 0.50 0.50 0.50 6529
weighted avg 0.99 0.99 0.99 6529
Category: aid_related
precision recall f1-score support
0 0.75 0.89 0.81 3814
1 0.78 0.58 0.67 2715
accuracy 0.76 6529
macro avg 0.77 0.73 0.74 6529
weighted avg 0.76 0.76 0.75 6529
Category: medical_help
precision recall f1-score support
0 0.93 1.00 0.96 6025
1 0.69 0.05 0.10 504
accuracy 0.93 6529
macro avg 0.81 0.53 0.53 6529
weighted avg 0.91 0.93 0.89 6529
Category: medical_products
precision recall f1-score support
0 0.96 1.00 0.98 6233
1 0.79 0.06 0.12 296
accuracy 0.96 6529
macro avg 0.87 0.53 0.55 6529
weighted avg 0.95 0.96 0.94 6529
Category: search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6363
1 0.88 0.04 0.08 166
accuracy 0.98 6529
macro avg 0.93 0.52 0.53 6529
weighted avg 0.97 0.98 0.96 6529
Category: security
precision recall f1-score support
0 0.98 1.00 0.99 6424
1 0.00 0.00 0.00 105
accuracy 0.98 6529
macro avg 0.49 0.50 0.50 6529
weighted avg 0.97 0.98 0.98 6529
Category: military
precision recall f1-score support
0 0.97 1.00 0.98 6332
1 0.50 0.02 0.04 197
accuracy 0.97 6529
macro avg 0.74 0.51 0.51 6529
weighted avg 0.96 0.97 0.96 6529
Category: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6529
accuracy 1.00 6529
macro avg 1.00 1.00 1.00 6529
weighted avg 1.00 1.00 1.00 6529
Category: water
precision recall f1-score support
0 0.94 1.00 0.97 6110
1 0.90 0.15 0.26 419
accuracy 0.94 6529
macro avg 0.92 0.57 0.61 6529
weighted avg 0.94 0.94 0.93 6529
Category:
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
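# As above, StartingVerbExtractor adds a binary "first word is a verb (or 'RT')" feature
# via FeatureUnion, on top of the CountVectorizer + TfidfTransformer text pipeline.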
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
pipeline2 = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline2.get_params()
parameters2 = {
'clf__estimator__n_estimators': [10, 20, 30, 40, 50]
}
cv2 = GridSearchCV(pipeline2, param_grid=parameters2)
cv2.fit(X_train, y_train)
# I'll take this configuration for the final model (python script)
cv2.best_params_
y_pred_cv2 = cv2.predict(X_test)
for i in range(len(category_names)):
print("Category:", category_names[i],"\n", classification_report(y_test.iloc[:, i].values, y_pred_cv2[:, i]))
###Output
Category: related
precision recall f1-score support
0 0.65 0.35 0.46 1523
1 0.83 0.94 0.88 5006
accuracy 0.81 6529
macro avg 0.74 0.65 0.67 6529
weighted avg 0.79 0.81 0.78 6529
Category: request
precision recall f1-score support
0 0.91 0.96 0.93 5412
1 0.75 0.52 0.62 1117
accuracy 0.89 6529
macro avg 0.83 0.74 0.78 6529
weighted avg 0.88 0.89 0.88 6529
Category: offer
precision recall f1-score support
0 0.99 1.00 1.00 6494
1 0.00 0.00 0.00 35
accuracy 0.99 6529
macro avg 0.50 0.50 0.50 6529
weighted avg 0.99 0.99 0.99 6529
Category: aid_related
precision recall f1-score support
0 0.74 0.87 0.80 3814
1 0.76 0.58 0.66 2715
accuracy 0.75 6529
macro avg 0.75 0.73 0.73 6529
weighted avg 0.75 0.75 0.74 6529
Category: medical_help
precision recall f1-score support
0 0.94 0.99 0.96 6025
1 0.58 0.24 0.34 504
accuracy 0.93 6529
macro avg 0.76 0.61 0.65 6529
weighted avg 0.91 0.93 0.91 6529
Category: medical_products
precision recall f1-score support
0 0.97 0.99 0.98 6233
1 0.68 0.33 0.45 296
accuracy 0.96 6529
macro avg 0.82 0.66 0.71 6529
weighted avg 0.96 0.96 0.96 6529
Category: search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6363
1 0.58 0.23 0.33 166
accuracy 0.98 6529
macro avg 0.78 0.61 0.66 6529
weighted avg 0.97 0.98 0.97 6529
Category: security
precision recall f1-score support
0 0.98 1.00 0.99 6424
1 0.23 0.05 0.08 105
accuracy 0.98 6529
macro avg 0.61 0.52 0.53 6529
weighted avg 0.97 0.98 0.98 6529
Category: military
precision recall f1-score support
0 0.98 0.99 0.99 6332
1 0.57 0.34 0.43 197
accuracy 0.97 6529
macro avg 0.78 0.67 0.71 6529
weighted avg 0.97 0.97 0.97 6529
Category: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6529
accuracy 1.00 6529
macro avg 1.00 1.00 1.00 6529
weighted avg 1.00 1.00 1.00 6529
Category: water
precision recall f1-score support
0 0.98 0.98 0.98 6110
1 0.73 0.66 0.69 419
accuracy 0.96 6529
macro avg 0.85 0.82 0.84 6529
weighted avg 0.96 0.96 0.96 6529
Category: food
precision recall f1-score support
0 0.96 0.98 0.97 5797
1 0.81 0.70 0.75 732
accuracy 0.95 6529
macro avg 0.88 0.84 0.86 6529
weighted avg 0.95 0.95 0.95 6529
Category: shelter
precision recall f1-score support
0 0.96 0.98 0.97 5977
1 0.76 0.54 0.63 552
accuracy 0.95 6529
macro avg 0.86 0.76 0.80 6529
weighted avg 0.94 0.95 0.94 6529
Category: clothing
precision recall f1-score support
0 0.99 1.00 0.99 6449
1 0.61 0.39 0.47 80
accuracy 0.99 6529
macro avg 0.80 0.69 0.73 6529
weighted avg 0.99 0.99 0.99 6529
Category: money
precision recall f1-score support
0 0.98 1.00 0.99 6377
1 0.58 0.25 0.35 152
accuracy 0.98 6529
macro avg 0.78 0.62 0.67 6529
weighted avg 0.97 0.98 0.97 6529
Category: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6460
1 0.57 0.25 0.34 69
accuracy 0.99 6529
macro avg 0.78 0.62 0.67 6529
weighted avg 0.99 0.99 0.99 6529
Category: refugees
precision recall f1-score support
0 0.97 0.99 0.98 6321
1 0.51 0.22 0.31 208
accuracy 0.97 6529
macro avg 0.74 0.61 0.65 6529
weighted avg 0.96 0.97 0.96 6529
Category: death
precision recall f1-score support
0 0.98 0.99 0.98 6229
1 0.81 0.47 0.60 300
accuracy 0.97 6529
macro avg 0.89 0.73 0.79 6529
weighted avg 0.97 0.97 0.97 6529
Category: other_aid
precision recall f1-score support
0 0.88 0.98 0.92 5643
1 0.45 0.11 0.18 886
accuracy 0.86 6529
macro avg 0.66 0.54 0.55 6529
weighted avg 0.82 0.86 0.82 6529
Category: infrastructure_related
precision recall f1-score support
0 0.94 0.99 0.97 6117
1 0.44 0.09 0.16 412
accuracy 0.94 6529
macro avg 0.69 0.54 0.56 6529
weighted avg 0.91 0.94 0.92 6529
Category: transport
precision recall f1-score support
0 0.96 1.00 0.98 6231
1 0.72 0.24 0.37 298
accuracy 0.96 6529
macro avg 0.84 0.62 0.67 6529
weighted avg 0.95 0.96 0.95 6529
Category: buildings
precision recall f1-score support
0 0.97 0.99 0.98 6226
1 0.57 0.38 0.46 303
accuracy 0.96 6529
macro avg 0.77 0.68 0.72 6529
weighted avg 0.95 0.96 0.95 6529
Category: electricity
precision recall f1-score support
0 0.99 1.00 0.99 6399
1 0.54 0.28 0.37 130
accuracy 0.98 6529
macro avg 0.76 0.64 0.68 6529
weighted avg 0.98 0.98 0.98 6529
Category: tools
precision recall f1-score support
0 0.99 1.00 1.00 6479
1 0.50 0.04 0.07 50
accuracy 0.99 6529
macro avg 0.75 0.52 0.54 6529
weighted avg 0.99 0.99 0.99 6529
Category: hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6463
1 0.22 0.08 0.11 66
accuracy 0.99 6529
macro avg 0.60 0.54 0.55 6529
weighted avg 0.98 0.99 0.98 6529
Category: shops
precision recall f1-score support
0 0.99 1.00 1.00 6491
1 0.00 0.00 0.00 38
accuracy 0.99 6529
macro avg 0.50 0.50 0.50 6529
weighted avg 0.99 0.99 0.99 6529
Category: aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6459
1 0.24 0.09 0.13 70
accuracy 0.99 6529
macro avg 0.62 0.54 0.56 6529
weighted avg 0.98 0.99 0.98 6529
Category: other_infrastructure
precision recall f1-score support
0 0.96 0.99 0.98 6257
1 0.31 0.08 0.13 272
accuracy 0.95 6529
macro avg 0.63 0.54 0.55 6529
weighted avg 0.93 0.95 0.94 6529
###Markdown
9. Export your model as a pickle file
###Code
with open('model.pkl', 'wb') as file:
pickle.dump(cv2, file)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import sqlite3
import pandas as pd
import re
from sqlalchemy import create_engine
import nltk
nltk.download('punkt')
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import numpy as np
#ML Pipelines
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, f1_score, accuracy_score
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, make_scorer
import pickle
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
# load data from database
engine = create_engine('sqlite:///disaster.db')
df = pd.read_sql_table('disaster', engine)
X = df['message']
y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
df.groupby('genre').count()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
    Normalize and tokenize text into words, removing English stopwords
    Args:
        text, a string of words
    Returns:
        words, a list of word strings
"""
#lower case and remove special punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ",text.lower())
#split using tokenizer
words = word_tokenize(text)
    #remove stopwords to reduce the vocabulary
words = [w for w in words if w not in stopwords.words("english")]
return words
#test first line of X to see if it has tokenized the words correctly
print(X[0])
tokenize(X[0])
###Output
Weather update - a cold front from Cuba that could pass over Haiti
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=10)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def pred_loop(actual, predicted, col_names):
"""
Args:
actual: Array with labels
predicted: Array with labels
col_names: Names for each column
Returns:
predictions_df: Dataframe with recall, precision, f1 and accuracy scores
"""
metrics = []
    #Loop scoring each metric over the predictions for every output column
for i in range(len(col_names)):
accuracy = accuracy_score(actual[:, i], predicted[:, i])
precision = precision_score(actual[:, i], predicted[:, i], average='micro')
recall = recall_score(actual[:, i], predicted[:, i], average='micro')
f1 = f1_score(actual[:, i], predicted[:, i], average='micro')
metrics.append([accuracy, precision, recall, f1])
#Dataframe creation containing the predictions
metrics = np.array(metrics)
predictions_df = pd.DataFrame(data = metrics, index = col_names, columns = ['Accuracy', 'Precision', 'Recall', 'F1'])
return predictions_df
#model and pipeline run on the training set
y_pred = pipeline.predict(X_train)
col_names = list(y.columns.values)
Xtrain_pred = pred_loop(np.array(y_train), y_pred, col_names)
Xtrain_pred
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
#to do this we need to tune the hyperparameters
pipeline.get_params()
#hyper-parameter selection
parameters = {
#'vect__min_df': [1, 5],
#'vect__max_features': [10000],
#'clf__estimator__n_estimators': [300],
'tfidf__smooth_idf':[True, False],
#'clf__estimator__n_estimators':[10, 25],
'clf__estimator__min_samples_split':[2, 5, 10]
}
cv = GridSearchCV(pipeline, parameters, cv=3, n_jobs=-1)
cv
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
model_impro = cv.fit(X_train, y_train)
model_impro_predict = model_impro.predict(X_train)
predictor_model = pred_loop(np.array(y_train), model_impro_predict, col_names)
predictor_model
model_impro.best_params_
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
#Use a different estimator (DTC) to try and improve the model further
from sklearn.tree import DecisionTreeClassifier
pipeline_dtc = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(DecisionTreeClassifier())),
])
#hyper-parameter selection
dtc_params = {
'tfidf__smooth_idf':[False],
#'clf__estimator__degree':[2]
}
cv_dtc = GridSearchCV(pipeline_dtc, dtc_params, cv=3, n_jobs=-1)
cv_dtc
dtc_fit = cv_dtc.fit(X_train, y_train)
dtc_pred = dtc_fit.predict(X_train)
dtc_model = pred_loop(np.array(y_train), dtc_pred, col_names)
dtc_model
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
pickle.dump(cv, open('ml_model.p', 'wb'))
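# Note: a context manager avoids leaving the file handle open, e.g.:
# with open('ml_model.p', 'wb') as f:
#     pickle.dump(cv, f)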
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import pandas as pd
import re
import nltk
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, precision_score, recall_score, classification_report
nltk.download(['punkt', 'wordnet','stopwords','averaged_perceptron_tagger'])
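# remove_outliers below drops very short messages (fewer than 25 characters), treating them as noise.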
def remove_outliers(df):
text_length = df['message'].apply(lambda x: len(x))
outliers = df['message'][text_length[text_length < 25].index].values
df = df[~df['message'].isin(outliers)]
return df
# load data from database
engine = create_engine('sqlite:///./data/DisasterResponse.db')
df = pd.read_sql("SELECT * FROM DisasterResponse", engine)
df = remove_outliers(df)
columns = list(df.iloc[:,4:].columns)
X = df['message'].values
y = df.iloc[:,4:].values
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# get list of all urls using regex
url_regex = r"http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+"
repeated_symbols_regex = r"[\?\.\!]+(?=[\?\.\!])"
text = re.sub(url_regex,"urlplaceholder",text)
text = re.sub(repeated_symbols_regex,'', text)
# tokenize text
tokens = word_tokenize(text)
    # lemmatize, lower-case and drop English stopwords
    lemmatizer = WordNetLemmatizer()
    stemmer = PorterStemmer()
    stop_words = stopwords.words('english')
    clean_tokens = [lemmatizer.lemmatize(w.lower().strip())
                    for w in tokens if w not in stop_words]
    # stem the tokens and drop leftover punctuation symbols
    symbols_list = ['_','-','?','!','.','@','#','$','%','^','&','*','(',')','[',']','/']
    clean_tokens = [stemmer.stem(t) for t in clean_tokens if t not in symbols_list]
return clean_tokens
tokenize(X[0])
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_jobs=-1),n_jobs=-1)),
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
pipeline.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
for i in range(y_pred.shape[1]):
print(f"Feature Name: {columns[i]}")
print(classification_report(y_test[:,i], y_pred[:,i]))
###Output
Feature Name: related
precision recall f1-score support
0 0.63 0.44 0.52 1873
1 0.83 0.92 0.87 5934
2 0.67 0.07 0.12 58
micro avg 0.80 0.80 0.80 7865
macro avg 0.71 0.48 0.51 7865
weighted avg 0.78 0.80 0.78 7865
Feature Name: request
precision recall f1-score support
0 0.89 0.98 0.93 6533
1 0.78 0.41 0.54 1332
micro avg 0.88 0.88 0.88 7865
macro avg 0.83 0.69 0.73 7865
weighted avg 0.87 0.88 0.86 7865
Feature Name: offer
precision recall f1-score support
0 1.00 1.00 1.00 7829
1 0.00 0.00 0.00 36
micro avg 1.00 1.00 1.00 7865
macro avg 0.50 0.50 0.50 7865
weighted avg 0.99 1.00 0.99 7865
Feature Name: aid_related
precision recall f1-score support
0 0.75 0.85 0.80 4646
1 0.74 0.59 0.66 3219
micro avg 0.75 0.75 0.75 7865
macro avg 0.74 0.72 0.73 7865
weighted avg 0.75 0.75 0.74 7865
Feature Name: medical_help
precision recall f1-score support
0 0.93 0.99 0.96 7227
1 0.57 0.10 0.17 638
micro avg 0.92 0.92 0.92 7865
macro avg 0.75 0.55 0.56 7865
weighted avg 0.90 0.92 0.89 7865
Feature Name: medical_products
precision recall f1-score support
0 0.95 1.00 0.97 7447
1 0.72 0.08 0.14 418
micro avg 0.95 0.95 0.95 7865
macro avg 0.83 0.54 0.56 7865
weighted avg 0.94 0.95 0.93 7865
Feature Name: search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 7673
1 0.57 0.04 0.08 192
micro avg 0.98 0.98 0.98 7865
macro avg 0.77 0.52 0.53 7865
weighted avg 0.97 0.98 0.97 7865
Feature Name: security
precision recall f1-score support
0 0.98 1.00 0.99 7721
1 0.00 0.00 0.00 144
micro avg 0.98 0.98 0.98 7865
macro avg 0.49 0.50 0.50 7865
weighted avg 0.96 0.98 0.97 7865
Feature Name: military
precision recall f1-score support
0 0.97 1.00 0.98 7620
1 0.55 0.09 0.15 245
micro avg 0.97 0.97 0.97 7865
macro avg 0.76 0.54 0.57 7865
weighted avg 0.96 0.97 0.96 7865
Feature Name: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 7865
micro avg 1.00 1.00 1.00 7865
macro avg 1.00 1.00 1.00 7865
weighted avg 1.00 1.00 1.00 7865
Feature Name: water
precision recall f1-score support
0 0.95 1.00 0.97 7365
1 0.82 0.23 0.35 500
micro avg 0.95 0.95 0.95 7865
macro avg 0.88 0.61 0.66 7865
weighted avg 0.94 0.95 0.93 7865
Feature Name: food
precision recall f1-score support
0 0.93 0.99 0.96 6987
1 0.88 0.38 0.53 878
micro avg 0.92 0.92 0.92 7865
macro avg 0.91 0.69 0.74 7865
weighted avg 0.92 0.92 0.91 7865
Feature Name: shelter
precision recall f1-score support
0 0.93 0.99 0.96 7160
1 0.77 0.23 0.36 705
micro avg 0.93 0.93 0.93 7865
macro avg 0.85 0.61 0.66 7865
weighted avg 0.92 0.93 0.91 7865
Feature Name: clothing
precision recall f1-score support
0 0.99 1.00 0.99 7750
1 0.75 0.03 0.05 115
micro avg 0.99 0.99 0.99 7865
macro avg 0.87 0.51 0.52 7865
weighted avg 0.98 0.99 0.98 7865
Feature Name: money
precision recall f1-score support
0 0.98 1.00 0.99 7695
1 0.67 0.07 0.13 170
micro avg 0.98 0.98 0.98 7865
macro avg 0.82 0.53 0.56 7865
weighted avg 0.97 0.98 0.97 7865
Feature Name: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 7773
1 0.33 0.01 0.02 92
micro avg 0.99 0.99 0.99 7865
macro avg 0.66 0.51 0.51 7865
weighted avg 0.98 0.99 0.98 7865
Feature Name: refugees
precision recall f1-score support
0 0.97 1.00 0.98 7605
1 0.52 0.04 0.08 260
micro avg 0.97 0.97 0.97 7865
macro avg 0.75 0.52 0.53 7865
weighted avg 0.95 0.97 0.95 7865
Feature Name: death
precision recall f1-score support
0 0.96 1.00 0.98 7499
1 0.85 0.21 0.34 366
micro avg 0.96 0.96 0.96 7865
macro avg 0.90 0.60 0.66 7865
weighted avg 0.96 0.96 0.95 7865
Feature Name: other_aid
precision recall f1-score support
0 0.87 0.99 0.93 6832
1 0.49 0.05 0.10 1033
micro avg 0.87 0.87 0.87 7865
macro avg 0.68 0.52 0.51 7865
weighted avg 0.82 0.87 0.82 7865
Feature Name: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 7360
1 0.31 0.01 0.02 505
micro avg 0.94 0.94 0.94 7865
macro avg 0.62 0.50 0.49 7865
weighted avg 0.90 0.94 0.91 7865
Feature Name: transport
precision recall f1-score support
0 0.96 1.00 0.98 7503
1 0.80 0.04 0.08 362
micro avg 0.96 0.96 0.96 7865
macro avg 0.88 0.52 0.53 7865
weighted avg 0.95 0.96 0.94 7865
Feature Name: buildings
precision recall f1-score support
0 0.95 1.00 0.98 7473
1 0.68 0.07 0.13 392
micro avg 0.95 0.95 0.95 7865
macro avg 0.82 0.53 0.55 7865
weighted avg 0.94 0.95 0.93 7865
Feature Name: electricity
precision recall f1-score support
0 0.98 1.00 0.99 7697
1 0.71 0.03 0.06 168
micro avg 0.98 0.98 0.98 7865
macro avg 0.85 0.51 0.52 7865
weighted avg 0.97 0.98 0.97 7865
Feature Name: tools
precision recall f1-score support
0 0.99 1.00 1.00 7817
1 0.00 0.00 0.00 48
micro avg 0.99 0.99 0.99 7865
macro avg 0.50 0.50 0.50 7865
weighted avg 0.99 0.99 0.99 7865
Feature Name: hospitals
precision recall f1-score support
0 0.99 1.00 0.99 7787
1 0.00 0.00 0.00 78
micro avg 0.99 0.99 0.99 7865
macro avg 0.50 0.50 0.50 7865
weighted avg 0.98 0.99 0.99 7865
Feature Name: shops
precision recall f1-score support
0 1.00 1.00 1.00 7837
1 0.00 0.00 0.00 28
micro avg 1.00 1.00 1.00 7865
macro avg 0.50 0.50 0.50 7865
weighted avg 0.99 1.00 0.99 7865
Feature Name: aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 7762
1 0.00 0.00 0.00 103
micro avg 0.99 0.99 0.99 7865
macro avg 0.49 0.50 0.50 7865
weighted avg 0.97 0.99 0.98 7865
Feature Name: other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 7524
1 0.33 0.01 0.01 341
micro avg 0.96 0.96 0.96 7865
macro avg 0.65 0.50 0.49 7865
weighted avg 0.93 0.96 0.94 7865
Feature Name: weather_related
precision recall f1-score support
0 0.86 0.95 0.90 5702
1 0.82 0.58 0.68 2163
micro avg 0.85 0.85 0.85 7865
macro avg 0.84 0.77 0.79 7865
weighted avg 0.85 0.85 0.84 7865
Feature Name: floods
precision recall f1-score support
0 0.95 0.99 0.97 7242
1 0.85 0.41 0.56 623
micro avg 0.95 0.95 0.95 7865
macro avg 0.90 0.70 0.76 7865
weighted avg 0.94 0.95 0.94 7865
Feature Name: storm
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {'clf__estimator__min_samples_split': [2, 3]}
param_distributions = {'clf__estimator__n_estimators': [10],
'clf__estimator__criterion':['gini','entropy'],
'clf__estimator__max_depth':list(range(1,10))+list(range(10,100,10))+list(range(100,1100,100)),
'clf__estimator__min_samples_split': list(range(2,20)),
'clf__estimator__min_samples_leaf': list(range(2,20))}
cv = GridSearchCV(pipeline,
param_grid=parameters,
cv=3,
n_jobs=-1)
cv = RandomizedSearchCV(pipeline,
param_distributions=param_distributions,
n_iter=20,
cv=3,
n_jobs=-1)
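# Note: this RandomizedSearchCV reuses (and overwrites) the name cv from the GridSearchCV above,
# so only the randomized search (20 sampled candidates x 3 folds) is fitted below.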
cv.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
clf = cv.best_estimator_
y_pred = clf.predict(X_test)
for i in range(y_pred.shape[1]):
print(f"Feature Name: {columns[i]}")
print(classification_report(y_test[:,i], y_pred[:,i]))
from joblib import dump, load
dump(cv, 'cv_clf.joblib')
cv = load('cv_clf.joblib')
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
try:
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
except:
pass
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
def build_model(parameters):
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
cv = GridSearchCV(pipeline, param_grid=parameters)
return cv
parameters = {'clf__estimator__n_estimators': [50],
'clf__estimator__min_samples_split': [2, 3, 4]}
cv = build_model(parameters)
###Output
_____no_output_____
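###Markdown
A minimal sketch (an assumption, not an original cell): fitting the grid search returned by `build_model` above and reporting per-category results, reusing `X_train`, `y_train`, `X_test`, `y_test` and `columns` from the earlier cells.
###Code
cv.fit(X_train, y_train)
y_pred = cv.best_estimator_.predict(X_test)
for i in range(y_pred.shape[1]):
    print(f"Feature Name: {columns[i]}")
    print(classification_report(y_test[:, i], y_pred[:, i]))
###Output
_____no_output_____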
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import nltk
import numpy as np
nltk.download(['punkt', 'wordnet'])
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.stem import WordNetLemmatizer
import pandas as pd
from sqlalchemy import create_engine
import re
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.tree import DecisionTreeClassifier
import pickle
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('Disasters', con=engine)
X = df['message']
Y = df[df.columns[5:]]
added = pd.get_dummies(df[['related','genre']])
y = pd.concat([Y, added], axis=1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# take out all punctuation while tokenizing
tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(text)
# lemmatize as shown in the lesson
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
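###Markdown
A quick check (an assumption, not an original cell) of what the tokenizer produces, including the URL replacement.
###Code
tokenize("Please send water and tents to http://example.com as soon as possible!")
###Output
_____no_output_____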
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# Create pipeline with Classifier
moc = MultiOutputClassifier(RandomForestClassifier())
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', moc)
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# split data, train and predict
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline.fit(X_train.as_matrix(), y_train.as_matrix())
y_pred = pipeline.predict(X_test)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:3: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Get results and add them to a dataframe.
def get_results(y_test, y_pred):
results = pd.DataFrame(columns=['Category', 'f_score', 'precision', 'recall'])
num = 0
for cat in y_test.columns:
precision, recall, f_score, support = precision_recall_fscore_support(y_test[cat], y_pred[:,num], average='weighted')
        results.loc[num + 1, 'Category'] = cat
        results.loc[num + 1, 'f_score'] = f_score
        results.loc[num + 1, 'precision'] = precision
        results.loc[num + 1, 'recall'] = recall
num += 1
print('Aggregated f_score:', results['f_score'].mean())
print('Aggregated precision:', results['precision'].mean())
print('Aggregated recall:', results['recall'].mean())
return results
results = get_results(y_test, y_pred)
results
###Output
Aggregated f_score: 0.930901270579
Aggregated precision: 0.934133880955
Aggregated recall: 0.94360461022
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {'clf__estimator__max_depth': [10, 50, None],
'clf__estimator__min_samples_leaf':[2, 5, 10]}
cv = GridSearchCV(pipeline, parameters)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train.as_matrix(), y_train.as_matrix())
y_pred = cv.predict(X_test)
results2 = get_results(y_test, y_pred)
results2
cv.best_estimator_
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# testing a pure decision tree classifier
moc = MultiOutputClassifier(DecisionTreeClassifier())
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', moc)
])
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline.fit(X_train.as_matrix(), y_train.as_matrix())
y_pred = pipeline.predict(X_test)
results = get_results(y_test, y_pred)
results
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:11: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
# This is added back by InteractiveShellApp.init_path()
###Markdown
9. Export your model as a pickle file
###Code
pickle.dump(cv, open('model.pkl', 'wb'))
###Output
_____no_output_____
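###Markdown
A hedged sketch (an assumption, not an original cell): loading the pickled grid-search object back and classifying a new message, relying on the `model.pkl` file written above and the `tokenize` function defined earlier in this notebook.
###Code
with open('model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)
loaded_model.predict(["We need water and medical supplies after the storm"])
###Output
_____no_output_____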
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
import sqlite3
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt', 'wordnet'])
nltk.download('stopwords')
import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('DisasterResponse', engine)
#df.head()
X = df['message']
Y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
df.groupby(df['related']).count()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
def tokenize(text):
# Define url pattern
url_re = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\), ]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
# Detect and replace urls
detected_urls = re.findall(url_re, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# tokenize sentences
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
# save cleaned tokens
clean_tokens = [lemmatizer.lemmatize(tok).lower().strip() for tok in tokens]
# remove stopwords
STOPWORDS = list(set(stopwords.words('english')))
clean_tokens = [token for token in clean_tokens if token not in STOPWORDS]
return clean_tokens
a=tokenize("It is a far, far better thing that I do, than I have ever done; it is a far, far better rest I go to than I have ever known.")
b=CountVectorizer(a)
c = b.fit_transform(a)
c
###Output
/home/apu/anaconda3/envs/udacity/lib/python3.7/site-packages/sklearn/utils/validation.py:71: FutureWarning: Pass input=['far', ',', 'far', 'better', 'thing', ',', 'ever', 'done', ';', 'far', ',', 'far', 'better', 'rest', 'go', 'ever', 'known', '.'] as keyword args. From version 0.25 passing these as positional arguments will result in an error
FutureWarning)
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
def build_pipeline():
# build NLP pipeline - count words, tf-idf, multiple output classifier
pipeline = Pipeline([
('vec', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators = 100, n_jobs = -1)))
])
return pipeline
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline = build_pipeline()
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def report(pipeline, X_test, Y_test):
# predict on the X_test
Y_pred = pipeline.predict(X_test)
# build classification report on every column
performances = []
for i in range(len(Y_test.columns)):
performances.append([f1_score(Y_test.iloc[:, i].values, Y_pred[:, i], average='micro'),
precision_score(Y_test.iloc[:, i].values, Y_pred[:, i], average='micro'),
recall_score(Y_test.iloc[:, i].values, Y_pred[:, i], average='micro')])
# build dataframe
performances = pd.DataFrame(performances, columns=['f1 score', 'precision', 'recall'],
index = Y_test.columns)
return performances
report(pipeline, X_test, y_test)
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline = build_pipeline()
from sklearn.model_selection import GridSearchCV
parameters = {
'clf__estimator__n_estimators':[10,50,100]
}
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs= -1)
cv.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.best_params_
report(cv, X_test, y_test)
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
pipeline_improved = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier(n_estimators = 100)))
])
pipeline_improved.fit(X_train, y_train)
y_pred_improved = pipeline_improved.predict(X_test)
report(pipeline_improved, X_test, y_test)
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
import pickle
pickle.dump(cv, open('classifier.pkl', 'wb'))
#pickle.dump(pipeline_improved, open('adaboost_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import pandas as pd
import pickle
from datetime import datetime
from nltk.corpus import stopwords
from nltk import WordNetLemmatizer, RegexpTokenizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, \
GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.base import BaseEstimator, TransformerMixin
# load data from database
def load_data(db_name: str):
"""
Function used to load data from sqlite database
:param db_name: database file name
:return: X and Y
"""
engine = create_engine(f'sqlite:///{db_name}.db')
table_name = engine.table_names()[0]
df = pd.read_sql(f'select * from {table_name}', engine, index_col='id')
not_in_Y = ['original', 'genre', 'message']
X = df['message'].values
Y = df[[col for col in df.columns if col not in not_in_Y]].values
return X, Y
X, Y = load_data('data/DisasterResponse')
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def clean_text(text: str):
"""
Function used to clean text data:
- text normalization
- text tokenization
- text lemmatization or stemming
    :param text: text to clean
    :return: list of clean tokens
    """
    # text normalization and tokenization
    tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(text)
# removing stopwords
stop_words = stopwords.words('english')
tokens = [word.strip() for word in tokens if word not in stop_words]
# lemmatization or stemming
lemmatizer = WordNetLemmatizer()
text = [lemmatizer.lemmatize(word) for word in tokens]
return text
###Output
_____no_output_____
###Markdown
3. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3,
random_state=0)
###Output
_____no_output_____
###Markdown
4. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline_rf = Pipeline([
('count_vectorizer', CountVectorizer(tokenizer=clean_text)),
('tfids', TfidfTransformer()),
('cls',
MultiOutputClassifier(estimator=RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
start = datetime.now()
pipeline_rf.fit(X_train, y_train)
y_predict = pipeline_rf.predict(X_test)
end = datetime.now()
delta = (end - start).seconds
print(f'It took {delta // 60} minutes and {delta % 60} seconds')
accuracy = []
for c_r, c_p in zip(y_test.T, y_predict.T):
accuracy.append(accuracy_score(c_r, c_p))
mean_accuracy = sum(accuracy) / len(accuracy)
print(f'The accuracy of the model is: {mean_accuracy:.2%}')
###Output
The accuracy of the model is: 93.06%
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'tfids__norm': ['l1', 'l2'],
'cls__estimator__n_estimators': [10, 50, 100, 500],
'cls__estimator__max_depth': [None, 1, 2, 3, 4, 5],
'cls__estimator__min_samples_leaf': [1, 5, 10]
}
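# NOTE: 2 * 4 * 6 * 3 = 144 parameter combinations; with cross-validation this means
# several hundred pipeline fits, so this grid search can take a very long time to run.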
cv = GridSearchCV(pipeline_rf, param_grid=parameters)
start = datetime.now()
cv.fit(X_train, y_train)
end = datetime.now()
delta = (end - start).seconds
print(f'It took {delta // 60} minutes and {delta % 60} seconds')
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio! 8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
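###Markdown
A minimal sketch (an assumption, not an original cell) of testing the tuned model from section 6 above, reusing the per-column accuracy measure from the earlier evaluation cell.
###Code
y_predict_cv = cv.predict(X_test)
accuracy_cv = [accuracy_score(col_true, col_pred)
               for col_true, col_pred in zip(y_test.T, y_predict_cv.T)]
print(f'Mean per-column accuracy of the tuned model: {sum(accuracy_cv) / len(accuracy_cv):.2%}')
###Output
_____no_output_____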
###Code
with open('models/rf.pkl', 'wb') as file:
pickle.dump(pipeline_rf, file)
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
###Code
temp_pipeline = Pipeline([
('count_vectorizer',
     CountVectorizer(tokenizer=clean_text)),
('tfids', TfidfTransformer()),
('cls',
MultiOutputClassifier(estimator=RandomForestClassifier()))
])
for i in pipeline_rf.get_params().keys():
print(i)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
import sqlite3
import re
import nltk
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk import PorterStemmer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import Normalizer
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
nltk.download(['words', 'punkt', 'stopwords',
'averaged_perceptron_tagger',
'maxent_ne_chunker', 'wordnet'])
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('message_category', engine)
df.head(2)
# number of distinct observations
df.nunique()
# number of missing values
df.isnull().sum()
# drop id, original
df.drop(['id', 'original'], axis=1, inplace=True)
df.head(2)
# Check distribution of message categories
category_names = df.loc[:, 'related':'direct_report'].columns
category_counts = (df.loc[:, 'related':'direct_report']
).sum().sort_values(ascending=False)
category_counts.plot(kind='bar', figsize=(
10, 5), title='Distribution of message categories')
X = df['message'].values
Y = df.loc[:,'related':'direct_report'].values
# check messages and categories
rnd = np.random.randint(df.shape[0])
print(X[rnd])
df.iloc[rnd]
###Output
Beyond the ISDR Secretariat and OCHA, let me note that WMO has also much to offer in the area of scientific and technological expertise.
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
"""
1. Replace url in the text with 'urlplaceholder'
2. Remove punctuations and use lower cases
3. Remove stopwords and lemmatize tokens
Args: text
Returns: cleaned tokens of text
"""
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
tokens = word_tokenize(text)
stop_words = stopwords.words("english")
lemmatizer = WordNetLemmatizer()
clean_tokens = [lemmatizer.lemmatize(tok)
for tok in tokens if tok not in stop_words]
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline_ada = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer(use_idf=True)),
('clf', MultiOutputClassifier(AdaBoostClassifier())),
])
pipeline_ada.get_params()
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.2, random_state=42)
pipeline_ada.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
Y_pred = pipeline_ada.predict(X_test)
(Y_pred == Y_test).mean()
def display_results(y_test, y_pred, y_col):
"""
Display f1 score, precision, recall, accuracy and confusion_matrix
for each category of the test dataset
"""
clf_report = classification_report(y_test, y_pred)
confusion_mat = confusion_matrix(y_test, y_pred)
accuracy = (y_pred == y_test).mean()
print('\n')
print(y_col, ":")
print('\n')
print(clf_report)
print('confusion_matrix :')
print(confusion_mat)
print('\n')
print('Accuracy =', accuracy)
print('-'*65)
for i in range(Y_test.shape[1]):
display_results(Y_test[:, i], Y_pred[:, i],
df.loc[:, 'related':'direct_report'].columns[i])
###Output
related :
precision recall f1-score support
0 0.70 0.24 0.36 1245
1 0.80 0.97 0.88 3998
accuracy 0.80 5243
macro avg 0.75 0.61 0.62 5243
weighted avg 0.78 0.80 0.76 5243
confusion_matrix :
[[ 304 941]
[ 131 3867]]
Accuracy = 0.7955369063513256
-----------------------------------------------------------------
request :
precision recall f1-score support
0 0.90 0.97 0.93 4352
1 0.78 0.47 0.59 891
accuracy 0.89 5243
macro avg 0.84 0.72 0.76 5243
weighted avg 0.88 0.89 0.88 5243
confusion_matrix :
[[4231 121]
[ 470 421]]
Accuracy = 0.8872782757962998
-----------------------------------------------------------------
offer :
precision recall f1-score support
0 1.00 1.00 1.00 5219
1 0.00 0.00 0.00 24
accuracy 0.99 5243
macro avg 0.50 0.50 0.50 5243
weighted avg 0.99 0.99 0.99 5243
confusion_matrix :
[[5212 7]
[ 24 0]]
Accuracy = 0.9940873545679955
-----------------------------------------------------------------
aid_related :
precision recall f1-score support
0 0.74 0.89 0.81 3079
1 0.78 0.57 0.66 2164
accuracy 0.76 5243
macro avg 0.76 0.73 0.73 5243
weighted avg 0.76 0.76 0.75 5243
confusion_matrix :
[[2737 342]
[ 939 1225]]
Accuracy = 0.7556742323097463
-----------------------------------------------------------------
medical_help :
precision recall f1-score support
0 0.94 0.99 0.96 4808
1 0.63 0.27 0.37 435
accuracy 0.93 5243
macro avg 0.78 0.63 0.67 5243
weighted avg 0.91 0.93 0.91 5243
confusion_matrix :
[[4740 68]
[ 319 116]]
Accuracy = 0.9261872973488461
-----------------------------------------------------------------
medical_products :
precision recall f1-score support
0 0.96 0.99 0.98 4964
1 0.62 0.29 0.40 279
accuracy 0.95 5243
macro avg 0.79 0.64 0.69 5243
weighted avg 0.94 0.95 0.94 5243
confusion_matrix :
[[4913 51]
[ 197 82]]
Accuracy = 0.9526988365439634
-----------------------------------------------------------------
search_and_rescue :
precision recall f1-score support
0 0.98 1.00 0.99 5107
1 0.60 0.18 0.28 136
accuracy 0.98 5243
macro avg 0.79 0.59 0.63 5243
weighted avg 0.97 0.98 0.97 5243
confusion_matrix :
[[5090 17]
[ 111 25]]
Accuracy = 0.9755864962807553
-----------------------------------------------------------------
security :
precision recall f1-score support
0 0.98 1.00 0.99 5147
1 0.13 0.02 0.04 96
accuracy 0.98 5243
macro avg 0.56 0.51 0.51 5243
weighted avg 0.97 0.98 0.97 5243
confusion_matrix :
[[5134 13]
[ 94 2]]
Accuracy = 0.9795918367346939
-----------------------------------------------------------------
military :
precision recall f1-score support
0 0.98 0.99 0.99 5085
1 0.57 0.29 0.39 158
accuracy 0.97 5243
macro avg 0.78 0.64 0.69 5243
weighted avg 0.97 0.97 0.97 5243
confusion_matrix :
[[5051 34]
[ 112 46]]
Accuracy = 0.9721533473202365
-----------------------------------------------------------------
child_alone :
precision recall f1-score support
0 1.00 1.00 1.00 5243
accuracy 1.00 5243
macro avg 1.00 1.00 1.00 5243
weighted avg 1.00 1.00 1.00 5243
confusion_matrix :
[[5243]]
Accuracy = 1.0
-----------------------------------------------------------------
water :
precision recall f1-score support
0 0.98 0.99 0.98 4908
1 0.75 0.67 0.71 335
accuracy 0.96 5243
macro avg 0.87 0.83 0.84 5243
weighted avg 0.96 0.96 0.96 5243
confusion_matrix :
[[4835 73]
[ 112 223]]
Accuracy = 0.9647148579057792
-----------------------------------------------------------------
food :
precision recall f1-score support
0 0.96 0.98 0.97 4659
1 0.82 0.68 0.74 584
accuracy 0.95 5243
macro avg 0.89 0.83 0.86 5243
weighted avg 0.94 0.95 0.95 5243
confusion_matrix :
[[4571 88]
[ 188 396]]
Accuracy = 0.9473583826053786
-----------------------------------------------------------------
shelter :
precision recall f1-score support
0 0.96 0.98 0.97 4775
1 0.76 0.56 0.64 468
accuracy 0.94 5243
macro avg 0.86 0.77 0.81 5243
weighted avg 0.94 0.94 0.94 5243
confusion_matrix :
[[4692 83]
[ 208 260]]
Accuracy = 0.9444974251382796
-----------------------------------------------------------------
clothing :
precision recall f1-score support
0 0.99 1.00 0.99 5173
1 0.67 0.34 0.45 70
accuracy 0.99 5243
macro avg 0.83 0.67 0.72 5243
weighted avg 0.99 0.99 0.99 5243
confusion_matrix :
[[5161 12]
[ 46 24]]
Accuracy = 0.9889376311272172
-----------------------------------------------------------------
money :
precision recall f1-score support
0 0.99 0.99 0.99 5131
1 0.52 0.31 0.39 112
accuracy 0.98 5243
macro avg 0.75 0.65 0.69 5243
weighted avg 0.98 0.98 0.98 5243
confusion_matrix :
[[5099 32]
[ 77 35]]
Accuracy = 0.9792103757390807
-----------------------------------------------------------------
missing_people :
precision recall f1-score support
0 0.99 1.00 0.99 5180
1 0.71 0.19 0.30 63
accuracy 0.99 5243
macro avg 0.85 0.59 0.65 5243
weighted avg 0.99 0.99 0.99 5243
confusion_matrix :
[[5175 5]
[ 51 12]]
Accuracy = 0.9893190921228304
-----------------------------------------------------------------
refugees :
precision recall f1-score support
0 0.98 0.99 0.98 5073
1 0.57 0.28 0.38 170
accuracy 0.97 5243
macro avg 0.77 0.64 0.68 5243
weighted avg 0.96 0.97 0.96 5243
confusion_matrix :
[[5037 36]
[ 122 48]]
Accuracy = 0.9698645813465573
-----------------------------------------------------------------
death :
precision recall f1-score support
0 0.97 0.99 0.98 4996
1 0.80 0.48 0.60 247
accuracy 0.97 5243
macro avg 0.89 0.74 0.79 5243
weighted avg 0.97 0.97 0.97 5243
confusion_matrix :
[[4966 30]
[ 128 119]]
Accuracy = 0.9698645813465573
-----------------------------------------------------------------
other_aid :
precision recall f1-score support
0 0.88 0.98 0.93 4551
1 0.52 0.15 0.23 692
accuracy 0.87 5243
macro avg 0.70 0.56 0.58 5243
weighted avg 0.83 0.87 0.84 5243
confusion_matrix :
[[4455 96]
[ 589 103]]
Accuracy = 0.8693496090024795
-----------------------------------------------------------------
infrastructure_related :
precision recall f1-score support
0 0.94 0.99 0.97 4907
1 0.41 0.08 0.14 336
accuracy 0.93 5243
macro avg 0.68 0.54 0.55 5243
weighted avg 0.91 0.93 0.91 5243
confusion_matrix :
[[4867 40]
[ 308 28]]
Accuracy = 0.9336257867633034
-----------------------------------------------------------------
transport :
precision recall f1-score support
0 0.96 1.00 0.98 5008
1 0.68 0.20 0.30 235
accuracy 0.96 5243
macro avg 0.82 0.60 0.64 5243
weighted avg 0.95 0.96 0.95 5243
confusion_matrix :
[[4986 22]
[ 189 46]]
Accuracy = 0.9597558649628075
-----------------------------------------------------------------
buildings :
precision recall f1-score support
0 0.97 0.99 0.98 4974
1 0.71 0.38 0.50 269
accuracy 0.96 5243
macro avg 0.84 0.69 0.74 5243
weighted avg 0.95 0.96 0.95 5243
confusion_matrix :
[[4932 42]
[ 166 103]]
Accuracy = 0.9603280564562273
-----------------------------------------------------------------
electricity :
precision recall f1-score support
0 0.98 1.00 0.99 5128
1 0.61 0.22 0.32 115
accuracy 0.98 5243
macro avg 0.80 0.61 0.66 5243
weighted avg 0.97 0.98 0.98 5243
confusion_matrix :
[[5112 16]
[ 90 25]]
Accuracy = 0.9797825672325005
-----------------------------------------------------------------
tools :
precision recall f1-score support
0 0.99 1.00 1.00 5208
1 0.20 0.03 0.05 35
accuracy 0.99 5243
macro avg 0.60 0.51 0.52 5243
weighted avg 0.99 0.99 0.99 5243
confusion_matrix :
[[5204 4]
[ 34 1]]
Accuracy = 0.9927522410833493
-----------------------------------------------------------------
hospitals :
precision recall f1-score support
0 0.99 1.00 0.99 5191
1 0.36 0.15 0.22 52
accuracy 0.99 5243
macro avg 0.68 0.58 0.61 5243
weighted avg 0.99 0.99 0.99 5243
confusion_matrix :
[[5177 14]
[ 44 8]]
Accuracy = 0.9889376311272172
-----------------------------------------------------------------
shops :
precision recall f1-score support
0 1.00 1.00 1.00 5218
1 0.25 0.04 0.07 25
accuracy 0.99 5243
macro avg 0.62 0.52 0.53 5243
weighted avg 0.99 0.99 0.99 5243
confusion_matrix :
[[5215 3]
[ 24 1]]
Accuracy = 0.9948502765592219
-----------------------------------------------------------------
aid_centers :
precision recall f1-score support
0 0.99 1.00 0.99 5179
1 0.25 0.05 0.08 64
accuracy 0.99 5243
macro avg 0.62 0.52 0.54 5243
weighted avg 0.98 0.99 0.98 5243
confusion_matrix :
[[5170 9]
[ 61 3]]
Accuracy = 0.986648865153538
-----------------------------------------------------------------
other_infrastructure :
precision recall f1-score support
0 0.96 0.99 0.98 5018
1 0.43 0.12 0.18 225
accuracy 0.96 5243
macro avg 0.70 0.55 0.58 5243
weighted avg 0.94 0.96 0.94 5243
confusion_matrix :
[[4984 34]
[ 199 26]]
Accuracy = 0.9555597940110624
-----------------------------------------------------------------
weather_related :
precision recall f1-score support
0 0.88 0.96 0.92 3771
1 0.86 0.68 0.76 1472
accuracy 0.88 5243
macro avg 0.87 0.82 0.84 5243
weighted avg 0.88 0.88 0.87 5243
confusion_matrix :
[[3607 164]
[ 476 996]]
Accuracy = 0.8779324814037764
-----------------------------------------------------------------
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
%timeit
parameters = {
# 'vect__max_df': [0.75, 1.0],
'vect__max_features': [500, 2000],
'vect__ngram_range': [(1, 1), (1, 2)],
# 'tfidf__smooth_idf': [True, False],
# 'tfidf__sublinear_tf': [True, False],
# 'tfidf__use_idf': [True, False],
'clf__estimator__learning_rate': [0.5, 1.0],
'clf__estimator__n_estimators': [50, 100]
}
cv_ada = GridSearchCV(pipeline_ada, param_grid=parameters,
cv=2, n_jobs=-1, verbose=2)
cv_ada.fit(X_train, Y_train)
cv_ada.best_params_
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
Y_pred = cv_ada.predict(X_test)
(Y_pred == Y_test).mean()
for i in range(Y_test.shape[1]):
display_results(Y_test[:, i], Y_pred[:, i],
df.loc[:, 'related':'direct_report'].columns[i])
###Output
related :
precision recall f1-score support
0 0.75 0.25 0.38 1245
1 0.81 0.97 0.88 3998
accuracy 0.80 5243
macro avg 0.78 0.61 0.63 5243
weighted avg 0.79 0.80 0.76 5243
confusion_matrix :
[[ 312 933]
[ 104 3894]]
Accuracy = 0.8022124737745565
-----------------------------------------------------------------
request :
precision recall f1-score support
0 0.90 0.98 0.94 4352
1 0.84 0.44 0.58 891
accuracy 0.89 5243
macro avg 0.87 0.71 0.76 5243
weighted avg 0.89 0.89 0.88 5243
confusion_matrix :
[[4275 77]
[ 499 392]]
Accuracy = 0.8901392332633988
-----------------------------------------------------------------
offer :
precision recall f1-score support
0 1.00 1.00 1.00 5219
1 0.00 0.00 0.00 24
accuracy 1.00 5243
macro avg 0.50 0.50 0.50 5243
weighted avg 0.99 1.00 0.99 5243
confusion_matrix :
[[5219 0]
[ 24 0]]
Accuracy = 0.9954224680526416
-----------------------------------------------------------------
aid_related :
precision recall f1-score support
0 0.75 0.89 0.82 3079
1 0.79 0.57 0.67 2164
accuracy 0.76 5243
macro avg 0.77 0.73 0.74 5243
weighted avg 0.77 0.76 0.75 5243
confusion_matrix :
[[2755 324]
[ 923 1241]]
Accuracy = 0.7621590692351707
-----------------------------------------------------------------
medical_help :
precision recall f1-score support
0 0.93 0.99 0.96 4808
1 0.63 0.18 0.28 435
accuracy 0.92 5243
macro avg 0.78 0.58 0.62 5243
weighted avg 0.91 0.92 0.90 5243
confusion_matrix :
[[4763 45]
[ 357 78]]
Accuracy = 0.9233263398817471
-----------------------------------------------------------------
medical_products :
precision recall f1-score support
0 0.96 0.99 0.98 4964
1 0.72 0.25 0.37 279
accuracy 0.95 5243
macro avg 0.84 0.62 0.67 5243
weighted avg 0.95 0.95 0.94 5243
confusion_matrix :
[[4937 27]
[ 209 70]]
Accuracy = 0.9549876025176426
-----------------------------------------------------------------
search_and_rescue :
precision recall f1-score support
0 0.98 1.00 0.99 5107
1 0.68 0.12 0.21 136
accuracy 0.98 5243
macro avg 0.83 0.56 0.60 5243
weighted avg 0.97 0.98 0.97 5243
confusion_matrix :
[[5099 8]
[ 119 17]]
Accuracy = 0.9757772267785619
-----------------------------------------------------------------
security :
precision recall f1-score support
0 0.98 1.00 0.99 5147
1 0.17 0.01 0.02 96
accuracy 0.98 5243
macro avg 0.57 0.50 0.50 5243
weighted avg 0.97 0.98 0.97 5243
confusion_matrix :
[[5142 5]
[ 95 1]]
Accuracy = 0.9809269502193401
-----------------------------------------------------------------
military :
precision recall f1-score support
0 0.98 0.99 0.98 5085
1 0.53 0.18 0.27 158
accuracy 0.97 5243
macro avg 0.75 0.59 0.63 5243
weighted avg 0.96 0.97 0.96 5243
confusion_matrix :
[[5059 26]
[ 129 29]]
Accuracy = 0.9704367728399771
-----------------------------------------------------------------
child_alone :
precision recall f1-score support
0 1.00 1.00 1.00 5243
accuracy 1.00 5243
macro avg 1.00 1.00 1.00 5243
weighted avg 1.00 1.00 1.00 5243
confusion_matrix :
[[5243]]
Accuracy = 1.0
-----------------------------------------------------------------
water :
precision recall f1-score support
0 0.98 0.99 0.98 4908
1 0.77 0.64 0.70 335
accuracy 0.96 5243
macro avg 0.87 0.81 0.84 5243
weighted avg 0.96 0.96 0.96 5243
confusion_matrix :
[[4842 66]
[ 120 215]]
Accuracy = 0.9645241274079726
-----------------------------------------------------------------
food :
precision recall f1-score support
0 0.97 0.98 0.97 4659
1 0.84 0.72 0.77 584
accuracy 0.95 5243
macro avg 0.90 0.85 0.87 5243
weighted avg 0.95 0.95 0.95 5243
confusion_matrix :
[[4578 81]
[ 165 419]]
Accuracy = 0.9530802975395766
-----------------------------------------------------------------
shelter :
precision recall f1-score support
0 0.95 0.99 0.97 4775
1 0.81 0.51 0.62 468
accuracy 0.95 5243
macro avg 0.88 0.75 0.80 5243
weighted avg 0.94 0.95 0.94 5243
confusion_matrix :
[[4719 56]
[ 230 238]]
Accuracy = 0.9454510776273126
-----------------------------------------------------------------
clothing :
precision recall f1-score support
0 0.99 1.00 0.99 5173
1 0.76 0.27 0.40 70
accuracy 0.99 5243
macro avg 0.88 0.64 0.70 5243
weighted avg 0.99 0.99 0.99 5243
confusion_matrix :
[[5167 6]
[ 51 19]]
Accuracy = 0.9891283616250238
-----------------------------------------------------------------
money :
precision recall f1-score support
0 0.98 1.00 0.99 5131
1 0.59 0.17 0.26 112
accuracy 0.98 5243
macro avg 0.79 0.58 0.63 5243
weighted avg 0.97 0.98 0.97 5243
confusion_matrix :
[[5118 13]
[ 93 19]]
Accuracy = 0.9797825672325005
-----------------------------------------------------------------
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
# Add two custom transformers
def tokenize_2(text):
"""
Tokenize the input text. This function is called in StartingVerbExtractor.
"""
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = [lemmatizer.lemmatize(
tok).lower().strip() for tok in tokens]
return clean_tokens
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
""" return true if the first word is an appropriate verb or RT for retweet """
# tokenize by sentences
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
# tokenize each sentence into words and tag part of speech
pos_tags = nltk.pos_tag(tokenize_2(sentence))
            # index pos_tags to get the first word and part of speech tag
            if not pos_tags:
                # skip sentences that produce no tokens to avoid an IndexError
                continue
            first_word, first_tag = pos_tags[0]
# return true if the first word is an appropriate verb or RT for retweet
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
""" Fit """
return self
def transform(self, X):
""" Transform """
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
# Count the number of tokens
class TextLengthExtractor(BaseEstimator, TransformerMixin):
def text_len_count(self, text):
""" Count the number of tokens """
text_length = len(tokenize(text))
return text_length
def fit(self, x, y=None):
""" Fit """
return self
def transform(self, X):
""" Transform """
X_text_len = pd.Series(X).apply(self.text_len_count)
return pd.DataFrame(X_text_len)
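# A quick sanity check (an assumption, not an original cell): what the two custom
# transformers return for a couple of raw messages.
sample_msgs = pd.Series(["Send food and water to the shelter",
                         "RT we are trapped near the bridge"])
print(StartingVerbExtractor().transform(sample_msgs))
print(TextLengthExtractor().transform(sample_msgs))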
pipeline_xgb = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize,
# max_features=5000,
# max_df=0.75,
)),
('tfidf', TfidfTransformer(use_idf=True))
])),
('txt_length', TextLengthExtractor()),
('start_verb', StartingVerbExtractor())
])),
('norm', Normalizer()),
('clf', MultiOutputClassifier(XGBClassifier(
# max_depth=3,
# learning_rate=0.2,
# max_delta_step=2,
# colsample_bytree=0.7,
# colsample_bylevel=0.7,
# subsample=0.8,
# n_estimators=150,
tree_method='hist',
)))
])
pipeline_xgb.fit(X_train, Y_train)
Y_pred = pipeline_xgb.predict(X_test)
(Y_pred == Y_test).mean()
for i in range(Y_test.shape[1]):
display_results(Y_test[:, i], Y_pred[:, i],
df.loc[:, 'related':'direct_report'].columns[i])
pipeline_xgb.get_params()
# Use grid search to find better parameters.
%timeit
parameters = {
'clf__estimator__max_depth': [3, 4],
'clf__estimator__learning_rate': [0.2, 0.5],
'clf__estimator__max_delta_step': [2, 3],
'clf__estimator__colsample_bytree': [0.5, 0.7],
'clf__estimator__colsample_bylevel': [0.5, 0.7],
'clf__estimator__subsample': [0.5, 0.8],
'clf__estimator__n_estimators': [100, 150]
}
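# NOTE: 7 parameters with 2 values each give 2**7 = 128 combinations; with cv=2 below
# that is 256 pipeline fits, so this search is expensive even with n_jobs=-1.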
cv_xgb = GridSearchCV(pipeline_xgb, param_grid=parameters,
cv=2, n_jobs=-1, verbose=2)
cv_xgb.fit(X_train, Y_train)
cv_xgb.best_params_
Y_pred = cv_xgb.predict(X_test)
(Y_pred == Y_test).mean()
for i in range(Y_test.shape[1]):
display_results(Y_test[:, i], Y_pred[:, i],
df.loc[:, 'related':'direct_report'].columns[i])
###Output
related :
precision recall f1-score support
0 0.71 0.41 0.52 1245
1 0.84 0.95 0.89 3998
accuracy 0.82 5243
macro avg 0.77 0.68 0.70 5243
weighted avg 0.81 0.82 0.80 5243
confusion_matrix :
[[ 507 738]
[ 210 3788]]
Accuracy = 0.8191874880793439
-----------------------------------------------------------------
request :
precision recall f1-score support
0 0.91 0.98 0.94 4352
1 0.82 0.54 0.65 891
accuracy 0.90 5243
macro avg 0.86 0.76 0.80 5243
weighted avg 0.90 0.90 0.89 5243
confusion_matrix :
[[4244 108]
[ 410 481]]
Accuracy = 0.9012016021361816
-----------------------------------------------------------------
offer :
precision recall f1-score support
0 1.00 1.00 1.00 5219
1 0.00 0.00 0.00 24
accuracy 1.00 5243
macro avg 0.50 0.50 0.50 5243
weighted avg 0.99 1.00 0.99 5243
confusion_matrix :
[[5219 0]
[ 24 0]]
Accuracy = 0.9954224680526416
-----------------------------------------------------------------
aid_related :
precision recall f1-score support
0 0.78 0.88 0.82 3079
1 0.79 0.64 0.71 2164
accuracy 0.78 5243
macro avg 0.78 0.76 0.77 5243
weighted avg 0.78 0.78 0.78 5243
confusion_matrix :
[[2707 372]
[ 781 1383]]
Accuracy = 0.7800877360289911
-----------------------------------------------------------------
medical_help :
precision recall f1-score support
0 0.94 0.99 0.96 4808
1 0.66 0.27 0.38 435
accuracy 0.93 5243
macro avg 0.80 0.63 0.67 5243
weighted avg 0.91 0.93 0.91 5243
confusion_matrix :
[[4748 60]
[ 317 118]]
Accuracy = 0.9280946023269121
-----------------------------------------------------------------
medical_products :
precision recall f1-score support
0 0.96 0.99 0.98 4964
1 0.73 0.29 0.41 279
accuracy 0.96 5243
macro avg 0.84 0.64 0.69 5243
weighted avg 0.95 0.96 0.95 5243
confusion_matrix :
[[4934 30]
[ 199 80]]
Accuracy = 0.9563227160022888
-----------------------------------------------------------------
search_and_rescue :
precision recall f1-score support
0 0.98 1.00 0.99 5107
1 0.64 0.20 0.30 136
accuracy 0.98 5243
macro avg 0.81 0.60 0.65 5243
weighted avg 0.97 0.98 0.97 5243
confusion_matrix :
[[5092 15]
[ 109 27]]
Accuracy = 0.9763494182719817
-----------------------------------------------------------------
security :
precision recall f1-score support
0 0.98 1.00 0.99 5147
1 0.33 0.01 0.02 96
accuracy 0.98 5243
macro avg 0.66 0.51 0.51 5243
weighted avg 0.97 0.98 0.97 5243
confusion_matrix :
[[5145 2]
[ 95 1]]
Accuracy = 0.9814991417127599
-----------------------------------------------------------------
military :
precision recall f1-score support
0 0.98 0.99 0.99 5085
1 0.56 0.32 0.40 158
accuracy 0.97 5243
macro avg 0.77 0.65 0.70 5243
weighted avg 0.97 0.97 0.97 5243
confusion_matrix :
[[5046 39]
[ 108 50]]
Accuracy = 0.9719626168224299
-----------------------------------------------------------------
child_alone :
precision recall f1-score support
0 1.00 1.00 1.00 5243
accuracy 1.00 5243
macro avg 1.00 1.00 1.00 5243
weighted avg 1.00 1.00 1.00 5243
confusion_matrix :
[[5243]]
Accuracy = 1.0
-----------------------------------------------------------------
water :
precision recall f1-score support
0 0.98 0.99 0.98 4908
1 0.78 0.70 0.74 335
accuracy 0.97 5243
macro avg 0.88 0.84 0.86 5243
weighted avg 0.97 0.97 0.97 5243
confusion_matrix :
[[4843 65]
[ 100 235]]
Accuracy = 0.9685294678619111
-----------------------------------------------------------------
food :
precision recall f1-score support
0 0.97 0.98 0.98 4659
1 0.83 0.78 0.80 584
accuracy 0.96 5243
macro avg 0.90 0.88 0.89 5243
weighted avg 0.96 0.96 0.96 5243
confusion_matrix :
[[4565 94]
[ 128 456]]
Accuracy = 0.957657829486935
-----------------------------------------------------------------
###Markdown
9. Export your model as a pickle file
###Code
import pickle
pickle.dump(pipeline_xgb,open('./models/model_xgb','wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import nltk
nltk.download('punkt')
import pandas as pd
from sqlalchemy import create_engine
import re
from nltk.tokenize import word_tokenize
nltk.download('stopwords')
from nltk.corpus import stopwords
nltk.download('wordnet') # download for lemmatization
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, fbeta_score, recall_score, classification_report, accuracy_score, precision_score, make_scorer, precision_recall_fscore_support
from sklearn.model_selection import GridSearchCV
from workspace_utils import active_session
import pickle
import numpy as np
# load data from database
engine = create_engine('sqlite:///disaster_data.db')
df = pd.read_sql("SELECT * FROM disaster_data",engine)
# df.head()
X = df.message.values
Y = df.iloc[:,4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
stop_words = stopwords.words("english")
lemmatizer = WordNetLemmatizer()
def tokenize(text):
text = re.sub(r"[^a-zA-Z0-9]"," ",text.lower())
tokens = word_tokenize(text)
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train,X_test,y_train,y_test = train_test_split(X,Y)
# Take only 10% of data for fitting using GridSearchCV so that it's faster
# train = np.concatenate((X_train, y_train), axis=1)
# X_train_d = X_train[:,np.newaxis]
# train = np.hstack((X_train, y_train))
# train_sample = train.sample(frac=0.1, replace=False, random_state=None, axis=0)
number_of_rows = X_train.shape[0]
# np.random.seed(123) # uncomment if we want repeatable results
random_rows = np.random.choice(number_of_rows, size=int(0.10*number_of_rows), replace=False)
X_train_sample = X_train[random_rows] # this is numpy ndarray
y_train_sample = y_train.iloc[random_rows,:] # this is pandas df
# pipeline.fit(X_train_sample,y_train_sample) # for now, do only with a subset, to compare to tuned model, which needs subset to run in reasonable time
pipeline.fit(X_train,y_train)
###Output
C:\Users\dagus\Anaconda3\envs\udacity\lib\site-packages\sklearn\ensemble\forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
predicted = pipeline.predict(X_test)
# predicted.shape
# y_test.iloc[:,0].value_counts()
# f1_score = fbeta_score(y_test,predicted,beta=1)
# precision = fbeta_score(y_test,predicted,beta=0)
# recall = recall_score(y_test,predicted)
# for i in range(y_test.shape[1]):
# print(classification_report(y_test.iloc[:,i],predicted[:,i]))
# print(classification_report(y_test.values,predicted,target_names=Y.columns.values))
category_names = Y.columns.values
for i, c in enumerate(category_names):
print(c)
# if i==1:
# test=classification_report(y_test.iloc[:,i], predicted[:,i],output_dict=True)
print(classification_report(y_test.iloc[:,i], predicted[:,i])) # the averages given in Udacity workspace are weighted; on my computer, I get macro avg and accuracy in addition
# print('Accuracy: ', accuracy_score(y_test.iloc[:,i],predicted[:,i]), '\n')
metrics = dict()
for i, c in enumerate(category_names):
print(c)
metrics[c] = precision_recall_fscore_support(y_test.iloc[:,i], predicted[:,i],average='weighted')
print(metrics[c],'\n')
# print('Accuracy: ', accuracy_score(y_test.iloc[:,i],predicted[:,i]), '\n')
# Test
metrics['related'][2] # f score of 'related' column
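# precision_recall_fscore_support returns a (precision, recall, f-score, support) tuple,
# so index 2 picks out the weighted f-score stored for each category above.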
# This is cleaner (but only gives values for positive class)
# print(classification_report(y_test, predicted, target_names=Y.columns))
# The following doesn't give averages either
category_names = Y.columns.values
def get_metrics_summary(y_test,y_pred):
metrics_summary = pd.DataFrame(index = category_names, columns = ['accuracy', 'precision', 'recall', 'f-1_score'])
for i, c in enumerate(category_names):
metrics_summary.loc[c,'accuracy'] = accuracy_score(y_test.iloc[:,i],y_pred[:,i])
metrics_summary.loc[c,'precision'] = precision_score(y_test.iloc[:,i],y_pred[:,i])
metrics_summary.loc[c,'recall'] = recall_score(y_test.iloc[:,i],y_pred[:,i])
metrics_summary.loc[c,'f-1_score'] = fbeta_score(y_test.iloc[:,i],y_pred[:,i],beta=1)
metrics_summary.loc['average'] = metrics_summary.mean(axis=0)
return metrics_summary
# metrics_summary_1 = get_metrics_summary(y_test,predicted)
# metrics_summary_1
###Output
_____no_output_____
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
pipeline.get_params()
# parameters = {
# 'clf__estimator__max_depth':[2],
# 'clf__estimator__n_estimators':[20,50]
# }
# parameters = {
# 'vect__max_df': (0.5, 0.75, 1.0),
# 'vect__ngram_range': ((1, 1), (1,2)),
# 'vect__max_features': (None, 5000,10000),
# 'tfidf__use_idf': (True, False)
# }
# parameters = {
# 'vect__ngram_range': ((1, 1), (1, 2)),
# 'clf__estimator__bootstrap': (True, False)
# }
# parameters = { # try next
# 'clf__estimator__n_estimators': [50, 100, 150],
# 'clf__estimator__min_samples_split': [2, 3, 4]
# }
# parameters = { # try
# 'clf__estimator__n_estimators': [100, 200],
# 'clf__estimator__learning_rate': [0.1, 0.3]
# }
parameters = { # try next
'clf__estimator__n_estimators': [200,300],
'clf__estimator__min_samples_split': [2,3]
}
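# The clf__estimator__ prefix routes each setting through the pipeline step 'clf'
# (the MultiOutputClassifier) down to the RandomForestClassifier it wraps.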
# BEST PARAMS for [50,100] and [2,3] was {'clf__estimator__min_samples_split': 2, 'clf__estimator__n_estimators': 100}****
# BEST PARAMS for [100,200] and [2,3] was {'clf__estimator__min_samples_split': 2, 'clf__estimator__n_estimators': 100}
# parameters = { # try next
# # 'clf__estimator__criterion': ['gini', 'entropy'], # In this, keep gini or entropy.
# 'clf__estimator__max_depth': [2, 5], # Use only two [2,5,None]
# 'clf__estimator__n_estimators': [100, 200], # Use only two [10,20,50]
# 'clf__estimator__min_samples_leaf':[2, 5], # can be ignored [1,5,10]
# }
# scoring = {'accuracy': make_scorer(accuracy_score), 'precision': make_scorer(precision_score), 'recall': make_scorer(recall_score)}
# cv = GridSearchCV(pipeline,param_grid=parameters,scoring=scoring,refit='accuracy') #If scoring not included, refit not needed and grid search would find best model based on estimator's score method, which is average accuracy
# cv = GridSearchCV(pipeline,param_grid=parameters)
scoring = make_scorer(f1_score, average='weighted')
# cv = GridSearchCV(pipeline,param_grid=parameters, n_jobs=-1, verbose=2, scoring = 'f1_weighted') # the higher the verbose the more information
# cv = GridSearchCV(pipeline,param_grid=parameters, n_jobs=-1, verbose=2, scoring = scoring)
cv = GridSearchCV(pipeline,param_grid=parameters, n_jobs=-1, verbose=2) # results were better when leaving the default scoring
# with active_session():
cv.fit(X_train,y_train)
# cv.fit(X_train_sample,y_train_sample)
y_pred_improved = cv.predict(X_test)
# import pickle
# with open('train_classifier.pkl', 'wb') as file:
# pickle.dump(cv, file)
# with open('train_classifier.pkl', "rb") as input_file:
# e = pickle.load(input_file)
# y_pred_best = e.predict(X_test)
# e.best_params_
# e.get_params()
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# print(classification_report(y_test, y_pred_improved, target_names=Y.columns))
# metrics_2 = get_metrics_summary(y_test,y_pred_improved)
# metrics_2
# **COMPLETE THIS FUNCTION**
def compare_metric(y_test,y_pred1,y_pred2,metric='f-1_score'):
metrics_summary = pd.DataFrame(index = category_names, columns = ['metric_old', 'metric_new'])
for i, c in enumerate(category_names):
if metric=='accuracy':
metrics_summary.loc[c,'metric_old'] = accuracy_score(y_test.iloc[:,i],y_pred1[:,i])
metrics_summary.loc[c,'metric_new'] = accuracy_score(y_test.iloc[:,i],y_pred2[:,i])
elif metric=='precision':
# metrics_summary.loc[c,'metric_old'] = precision_score(y_test.iloc[:,i],y_pred1[:,i])
# metrics_summary.loc[c,'metric_new'] = precision_score(y_test.iloc[:,i],y_pred2[:,i])
metrics_summary.loc[c,'metric_old'] = precision_recall_fscore_support(y_test.iloc[:,i], y_pred1[:,i], average='weighted')[0] # 0 for 1st entry in tuple for precision
metrics_summary.loc[c,'metric_new'] = precision_recall_fscore_support(y_test.iloc[:,i], y_pred2[:,i], average='weighted')[0]
elif metric=='recall':
# metrics_summary.loc[c,'metric_old'] = recall_score(y_test.iloc[:,i],y_pred1[:,i])
# metrics_summary.loc[c,'metric_new'] = recall_score(y_test.iloc[:,i],y_pred2[:,i])
metrics_summary.loc[c,'metric_old'] = precision_recall_fscore_support(y_test.iloc[:,i], y_pred1[:,i], average='weighted')[1] # 1 for 2nd entry in tuple for recall
metrics_summary.loc[c,'metric_new'] = precision_recall_fscore_support(y_test.iloc[:,i], y_pred2[:,i], average='weighted')[1]
elif metric=='f-1_score':
# metrics_summary.loc[c,'metric_old'] = fbeta_score(y_test.iloc[:,i],y_pred1[:,i],beta=1)
# metrics_summary.loc[c,'metric_new'] = fbeta_score(y_test.iloc[:,i],y_pred2[:,i],beta=1)
metrics_summary.loc[c,'metric_old'] = precision_recall_fscore_support(y_test.iloc[:,i], y_pred1[:,i], average='weighted')[2] # 2 for 3rd entry in tuple for f-score
metrics_summary.loc[c,'metric_new'] = precision_recall_fscore_support(y_test.iloc[:,i], y_pred2[:,i], average='weighted')[2]
metrics_summary['improved'] = metrics_summary['metric_new']>=metrics_summary['metric_old']
metrics_summary.loc['sum'] = metrics_summary.sum()
# metrics_summary.loc['average'] = metrics_summary.mean(axis=0)
return metrics_summary
f1_comparison = compare_metric(y_test,predicted,y_pred_improved)
f1_comparison
recall_comparison = compare_metric(y_test,predicted,y_pred_improved,metric='recall')
recall_comparison
precision_comparison = compare_metric(y_test,predicted,y_pred_improved,metric='precision')
precision_comparison
accuracy_comparison = compare_metric(y_test,predicted,y_pred_improved,metric='accuracy')
accuracy_comparison
cv.best_params_
# 12/1/20 with all data, 26 columns same or improved; best estimator 2 min samples split and 300 n estimators
# 0.948995 0.592643 0.208286 0.258425 with n_estimators = 200 and min_samples_split =2 as best from [100,200] and [2,3]
with open('tuned_model.pkl', 'wb') as file:
pickle.dump(cv, file)
# a=1
# with active_session():
# while a==1:
# b=2
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
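A minimal sketch of the second idea, assuming a FeatureUnion with a small custom transformer; the `TextLengthExtractor` name and the message-length feature are illustrative, not part of the original code, and the pipeline is left commented out rather than run:
###Code
# Sketch only: an extra hand-crafted feature (message length) alongside the TF-IDF features.
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import FeatureUnion

class TextLengthExtractor(BaseEstimator, TransformerMixin):
    """Illustrative transformer returning the character length of each message."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return pd.DataFrame(pd.Series(X).str.len())

# pipeline_extra = Pipeline([
#     ('features', FeatureUnion([
#         ('text_pipeline', Pipeline([
#             ('vect', CountVectorizer(tokenizer=tokenize)),
#             ('tfidf', TfidfTransformer())
#         ])),
#         ('text_length', TextLengthExtractor())
#     ])),
#     ('clf', MultiOutputClassifier(AdaBoostClassifier()))
# ])
# pipeline_extra.fit(X_train, y_train)
###Output
_____no_output_____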
###Code
# Try using different classifier
pipeline2 = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline2.fit(X_train,y_train)
predicted2 = pipeline2.predict(X_test)
# metrics_adaboost = get_metrics_summary(y_test,predicted2)
# metrics_adaboost
metrics_adaboost = dict()
for i, c in enumerate(category_names):
print(c)
metrics_adaboost[c] = precision_recall_fscore_support(y_test.iloc[:,i], predicted2[:,i],average='weighted')
print(metrics_adaboost[c],'\n')
f1_comparison2 = compare_metric(y_test,y_pred_improved,predicted2)
print('F1-SCORE')
print(f1_comparison2,'\n')
recall_comparison2 = compare_metric(y_test,y_pred_improved,predicted2,metric='recall')
print('RECALL')
print(recall_comparison2,'\n')
precision_comparison2 = compare_metric(y_test,y_pred_improved,predicted2,metric='precision')
print('PRECISION')
print(precision_comparison2,'\n')
accuracy_comparison2 = compare_metric(y_test,y_pred_improved,predicted2,metric='accuracy')
print('ACCURACY')
print(accuracy_comparison2)
###Output
F1-SCORE
metric_old metric_new improved
related 0.806712 0.747882 0.0
request 0.889369 0.883782 0.0
offer 0.991479 0.990633 0.0
aid_related 0.783931 0.756488 0.0
medical_help 0.891289 0.918521 1.0
medical_products 0.934633 0.951496 1.0
search_and_rescue 0.959795 0.964012 1.0
security 0.976311 0.974707 0.0
military 0.952447 0.965359 1.0
child_alone 1 1 1.0
water 0.958213 0.963616 1.0
food 0.94177 0.941911 1.0
shelter 0.925494 0.940098 1.0
clothing 0.98305 0.98637 1.0
money 0.966831 0.973378 1.0
missing_people 0.983663 0.987398 1.0
refugees 0.947586 0.957449 1.0
death 0.944196 0.961013 1.0
other_aid 0.813718 0.837169 1.0
infrastructure_related 0.901778 0.908516 1.0
transport 0.94144 0.949192 1.0
buildings 0.936913 0.951944 1.0
electricity 0.967046 0.974405 1.0
tools 0.992399 0.992409 1.0
hospitals 0.982515 0.981996 0.0
shops 0.992629 0.992748 1.0
aid_centers 0.980219 0.982304 1.0
other_infrastructure 0.93706 0.941015 1.0
weather_related 0.876094 0.864864 0.0
floods 0.945394 0.949416 1.0
storm 0.935334 0.932531 0.0
fire 0.980449 0.984644 1.0
earthquake 0.969966 0.967461 0.0
cold 0.974534 0.978735 1.0
other_weather 0.926854 0.932596 1.0
direct_report 0.838269 0.835238 0.0
sum 33.7294 33.8213 26.0
RECALL
metric_old metric_new improved
related 0.824804 0.793146 0.0
request 0.8998 0.892731 0.0
offer 0.994314 0.992623 0.0
aid_related 0.785308 0.761641 0.0
medical_help 0.920854 0.929768 1.0
medical_products 0.952974 0.957584 1.0
search_and_rescue 0.971876 0.971569 0.0
security 0.983864 0.979868 0.0
military 0.966805 0.969264 1.0
child_alone 1 1 1.0
water 0.964346 0.964961 1.0
food 0.945136 0.944521 0.0
shelter 0.937759 0.944829 1.0
clothing 0.987245 0.98832 1.0
money 0.977409 0.976641 0.0
missing_people 0.989089 0.990011 1.0
refugees 0.964346 0.9645 1.0
death 0.958814 0.965576 1.0
other_aid 0.868603 0.870447 1.0
infrastructure_related 0.933456 0.930536 0.0
transport 0.957123 0.958814 1.0
buildings 0.953588 0.957277 1.0
electricity 0.977102 0.978331 1.0
tools 0.994929 0.994006 0.0
hospitals 0.98832 0.985861 0.0
shops 0.995082 0.994775 0.0
aid_centers 0.986783 0.986169 0.0
other_infrastructure 0.95743 0.954664 0.0
weather_related 0.879361 0.871062 0.0
floods 0.951437 0.952666 1.0
storm 0.940372 0.93822 0.0
fire 0.986937 0.98832 1.0
earthquake 0.970647 0.968342 0.0
cold 0.982019 0.982173 1.0
other_weather 0.948363 0.947595 0.0
direct_report 0.859843 0.849547 0.0
sum 34.1561 34.0964 17.0
PRECISION
metric_old metric_new improved
related 0.811942 0.772858 0.0
request 0.895284 0.885294 0.0
offer 0.98866 0.98865 0.0
aid_related 0.784009 0.761615 0.0
medical_help 0.893822 0.916788 1.0
medical_products 0.947538 0.950238 1.0
search_and_rescue 0.968411 0.962984 0.0
security 0.973762 0.970119 0.0
military 0.958716 0.963715 1.0
child_alone 1 1 1.0
water 0.963002 0.96285 0.0
food 0.941813 0.941333 0.0
shelter 0.931977 0.939459 1.0
clothing 0.986182 0.986357 1.0
money 0.972338 0.971558 0.0
missing_people 0.978296 0.987339 1.0
refugees 0.95157 0.95493 1.0
death 0.954577 0.960642 1.0
other_aid 0.83903 0.838387 0.0
infrastructure_related 0.893873 0.901374 1.0
transport 0.950403 0.949874 0.0
buildings 0.945867 0.950273 1.0
electricity 0.972654 0.973343 1.0
tools 0.989883 0.991196 1.0
hospitals 0.976777 0.978788 1.0
shops 0.990189 0.991568 1.0
aid_centers 0.973742 0.980722 1.0
other_infrastructure 0.931235 0.934985 1.0
weather_related 0.876909 0.869974 0.0
floods 0.948034 0.948937 1.0
storm 0.934859 0.932114 0.0
fire 0.974045 0.985762 1.0
earthquake 0.969739 0.96722 0.0
cold 0.982343 0.977987 0.0
other_weather 0.930367 0.92973 0.0
direct_report 0.851946 0.834979 0.0
sum 33.8338 33.8139 19.0
ACCURACY
metric_old metric_new improved
related 0.824804 0.793146 0.0
request 0.8998 0.892731 0.0
offer 0.994314 0.992623 0.0
aid_related 0.785308 0.761641 0.0
medical_help 0.920854 0.929768 1.0
medical_products 0.952974 0.957584 1.0
search_and_rescue 0.971876 0.971569 0.0
security 0.983864 0.979868 0.0
military 0.966805 0.969264 1.0
child_alone 1 1 1.0
water 0.964346 0.964961 1.0
food 0.945136 0.944521 0.0
shelter 0.937759 0.944829 1.0
clothing 0.987245 0.98832 1.0
money 0.977409 0.976641 0.0
missing_people 0.989089 0.990011 1.0
refugees 0.964346 0.9645 1.0
death 0.958814 0.965576 1.0
other_aid 0.868603 0.870447 1.0
infrastructure_related 0.933456 0.930536 0.0
transport 0.957123 0.958814 1.0
buildings 0.953588 0.957277 1.0
electricity 0.977102 0.978331 1.0
tools 0.994929 0.994006 0.0
hospitals 0.98832 0.985861 0.0
shops 0.995082 0.994775 0.0
aid_centers 0.986783 0.986169 0.0
other_infrastructure 0.95743 0.954664 0.0
weather_related 0.879361 0.871062 0.0
floods 0.951437 0.952666 1.0
storm 0.940372 0.93822 0.0
fire 0.986937 0.98832 1.0
earthquake 0.970647 0.968342 0.0
cold 0.982019 0.982173 1.0
other_weather 0.948363 0.947595 0.0
direct_report 0.859843 0.849547 0.0
sum 34.1561 34.0964 17.0
###Markdown
The overall F1-score improved, while precision, recall and accuracy got marginally worse. So, untuned AdaBoost might be better. Let's see if tuning it will improve things further.
###Code
pipeline2.get_params()
# parameters2 = {
# 'clf__estimator__n_estimators': [50,100],
# 'clf__estimator__learning_rate': [0.5, 1.0]
# }
# 100 and 0.5 were best
# parameters2 = {
# 'clf__estimator__n_estimators': [100,200],
# 'clf__estimator__learning_rate': [0.25,0.5]
# }
# 200 and 0.5 best; better than previous
parameters2 = {
'clf__estimator__n_estimators': [200,300],
'clf__estimator__learning_rate': [0.25,0.5]
}
# 200 and 0.5 is still best
cv2 = GridSearchCV(pipeline2,param_grid=parameters2, n_jobs=-1, verbose=2) # the higher the verbose the more information
# with active_session():
cv2.fit(X_train,y_train)
y_pred2 = cv2.predict(X_test)
cv2.best_params_
# metrics_adaboost_new = get_metrics_summary(y_test,y_pred2)
# metrics_adaboost_new
metrics_adaboost_new = dict()
for i, c in enumerate(category_names):
print(c)
metrics_adaboost_new[c] = precision_recall_fscore_support(y_test.iloc[:,i], y_pred2[:,i],average='weighted')
print(metrics_adaboost_new[c],'\n')
f1_comparison2_new = compare_metric(y_test,y_pred_improved,y_pred2)
print('F1-SCORE')
print(f1_comparison2_new,'\n')
recall_comparison2_new = compare_metric(y_test,y_pred_improved,y_pred2,metric='recall')
print('RECALL')
print(recall_comparison2_new,'\n')
precision_comparison2_new = compare_metric(y_test,y_pred_improved,y_pred2,metric='precision')
print('PRECISION')
print(precision_comparison2_new,'\n')
accuracy_comparison2_new = compare_metric(y_test,y_pred_improved,y_pred2,metric='accuracy')
print('ACCURACY')
print(accuracy_comparison2_new)
###Output
C:\Users\dagus\Anaconda3\envs\udacity\lib\site-packages\sklearn\metrics\classification.py:1437: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
###Markdown
27, 23, 17, 23 tuned with 0.5/100; 27, 20, 21, 20 tuned with 0.5/200 (then 26, 20, 21, 20)
9. Export your model as a pickle file
###Code
with open('tuned_model_2.pkl', 'wb') as file:
pickle.dump(cv2, file)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# download nltk libraries
import nltk
nltk.download(['punkt', 'wordnet', 'stopwords'])
# import libraries
from sqlalchemy import create_engine
import numpy as np
import pandas as pd
import re
from sklearn.model_selection import train_test_split
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline,FeatureUnion
from sklearn.metrics import classification_report,accuracy_score,f1_score,precision_score,recall_score
from sklearn.model_selection import GridSearchCV
import pickle
# load data from database
engine = create_engine('sqlite:///DisasterResponse1.db')
df = pd.read_sql("SELECT * FROM Disaster", engine)
X = df.message.values
Y = df.drop(columns=['id', 'message', 'original', 'genre']).values
category_names = np.array(df.drop(columns=['id', 'message', 'original', 'genre']).columns)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
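# Regular expression used to find http/https URLs so they can be swapped for a placeholder before tokenization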
def replace_urls(text):
# get list of all urls using regex
detected_urls = re.findall(url_regex, text)
# replace each url in text string with placeholder
for url in detected_urls:
text = text.replace(url, 'urlplaceholder')
return text
def tokenize(text):
text=replace_urls(text)
text=re.sub(r'[^a-zA-Z0-9]', ' ', text).lower()
tokens = word_tokenize(text)
words = [w for w in tokens if w not in stopwords.words("english")]
lemmed = [WordNetLemmatizer().lemmatize(w, pos='v') for w in words]
stem_words = [PorterStemmer().stem(w) for w in lemmed]
return stem_words
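# Illustrative (commented-out) check of the tokenizer on a made-up message:
# tokenize("Please send tents and clean water to http://example.com/refuge")
# returns lower-cased, stopword-free, lemmatized and stemmed tokens, with the URL
# replaced by the placeholder token before tokenization.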
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(estimator=RandomForestClassifier(n_jobs=-1)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
# train classifier
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
for ids in range(y_test.shape[-1]):
print(classification_report(y_test[:,ids], y_pred[:,ids]))
print("------------------------------------------------------\n")
def get_scores(y_true, y_pred):
"""
Returns the accuracy, precision and recall and f1 scores of the two same shape numpy arrays `y_true` and `y_pred`.
INPUTS:
y_true - Numpy array object - A (1 x n) vector of true values
y_pred - Numpy array object - A (1 x n) vector of predicted values
OUPUT:
dict_scores - Python dict - A dictionary of accuracy, precision and recall and f1 scores of `y_true` and `y_pred`.
"""
# Compute the accuracy score of y_true and y_pred
accuracy = accuracy_score(y_true, y_pred)
    # Compute the precision score of y_true and y_pred
    # (no rounding here: round() would collapse the score to 0 or 1)
    precision = precision_score(y_true, y_pred, average='micro')
    # Compute the recall score of y_true and y_pred
    recall = recall_score(y_true, y_pred, average='micro')
    # Compute the f1 score of y_true and y_pred
    f_1 = f1_score(y_true, y_pred, average='micro')
# A dictionary of accuracy, precision and recall and f1 scores of `y_true` and `y_pred`
dict_scores = {
'Accuracy': accuracy,
'Precision': precision,
'Recall': recall,
'F1 Score': f_1
}
return dict_scores
tabulate_metric_scores = lambda y_test, y_pred : pd.DataFrame([get_scores(y_test[:, ids], y_pred[:, ids]) for ids in range(y_test.shape[-1])], index=category_names)
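# Builds a per-category table of accuracy/precision/recall/F1 by applying get_scores to each label column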
tabulate_metric_scores(y_test, y_pred)
###Output
_____no_output_____
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {'tfidf__norm': ['l1','l2'],
'clf__estimator__criterion': ["gini", "entropy"]
}
cv = GridSearchCV(pipeline, param_grid=parameters)
cv.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_pred = cv.predict(X_test)
tabulate_metric_scores = lambda y_test, y_pred : pd.DataFrame([get_scores(y_test[:, ids], y_pred[:, ids]) for ids in range(y_test.shape[-1])], index=category_names)
tabulate_metric_scores(y_test, y_pred)
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
9. Export your model as a pickle file
###Code
with open('MLpipeline.pkl', 'wb') as file:
pickle.dump(cv, file)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
1. Import libraries and load data from database.
###Code
# import libraries
import pandas as pd
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import nltk
nltk.download(['punkt','stopwords'])
from nltk.stem.porter import PorterStemmer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,classification_report,recall_score,f1_score
from sklearn.model_selection import GridSearchCV
import pickle
# load data from database
engine = create_engine('sqlite:///data/DisasterResponse.db')
df = pd.read_sql('disaster_response',engine)
X = df.message
Y = df.drop(['id','message','original','genre'],axis=1)
print(X.head())
print(Y.head())
###Output
0 Weather update - a cold front from Cuba that c...
1 Is the Hurricane over or is it not over
2 Looking for someone but no name
3 UN reports Leogane 80-90 destroyed. Only Hospi...
4 says: west side of Haiti, rest of the country ...
Name: message, dtype: object
related request offer aid_related medical_help medical_products \
0 1 0 0 0 0 0
1 1 0 0 1 0 0
2 1 0 0 0 0 0
3 1 1 0 1 0 1
4 1 0 0 0 0 0
search_and_rescue security military child_alone ... aid_centers \
0 0 0 0 0 ... 0
1 0 0 0 0 ... 0
2 0 0 0 0 ... 0
3 0 0 0 0 ... 0
4 0 0 0 0 ... 0
other_infrastructure weather_related floods storm fire earthquake \
0 0 0 0 0 0 0
1 0 1 0 1 0 0
2 0 0 0 0 0 0
3 0 0 0 0 0 0
4 0 0 0 0 0 0
cold other_weather direct_report
0 0 0 0
1 0 0 0
2 0 0 0
3 0 0 0
4 0 0 0
[5 rows x 36 columns]
###Markdown
2. Write a tokenization function to process your text data
###Code
# Normalize by converting to lower case
# Tokenize by converting sentence to tokens
# Remove stop words
# Convert words to root form by Stemming
def tokenize(text):
text=text.lower()
token=word_tokenize(text)
final_token=[]
stemmer=PorterStemmer()
for tok in token:
if tok not in stopwords.words('english'):
stem=stemmer.stem(tok)
final_token.append(stem)
return final_token
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset.
###Code
pipeline = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
###Code
# Split data into train and test set
X_train,X_test,y_train,y_test=train_test_split(X,Y,test_size=0.2)
# Train pipeline
pipeline.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
5. Measure performance of the model
###Code
# Predicting output on test set
y_pred=pipeline.predict(X_test)
def evaluate_model(model,X_test, y_test):
y_pred=model.predict(X_test)
y_pred_df=pd.DataFrame(y_pred)
y_pred_df.columns=y_test.columns
## Creating an evaluation matrix of precision scores and recall scores for each column
eval_matrix=[]
for column in y_test.columns:
eval_matrix.append(str(precision_score(y_test[column], y_pred_df[column])) +','+ str(recall_score(y_test[column], y_pred_df[column])) +','+ str(f1_score(y_test[column], y_pred_df[column])))
# Converting eval matrix to data frame for ease of readability
df=pd.DataFrame(eval_matrix)
eval_df=df[0].str.split(',',expand=True)
eval_df.columns=['Precision','Recall','F1']
for col in eval_df.columns:
eval_df[col]=eval_df[col].astype(float)
print(eval_df.shape)
print(eval_df)
print(eval_df.describe())
evaluate_model(pipeline,X_test, y_test)
###Output
F:\Anaconda\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
F:\Anaconda\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 due to no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
F:\Anaconda\lib\site-packages\sklearn\metrics\_classification.py:1465: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use `zero_division` parameter to control this behavior.
average, "true nor predicted", 'F-score is', len(true_sum)
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {'vect__analyzer': ['word']
,'clf__estimator__min_samples_leaf': [1,3],
'clf__estimator__n_estimators':[10, 25],
'clf__estimator__min_samples_split':[2, 5]
}
cv = GridSearchCV(pipeline,parameters,verbose=10)
tuned_model=cv.fit(X_train,y_train)
tuned_model.best_params_
###Output
Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV] vect__analyzer=word .............................................
###Markdown
7. Test your model
Get the Precision, Recall and F1 score of the tuned model.
###Code
evaluate_model(tuned_model,X_test, y_test)
###Output
F:\Anaconda\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
F:\Anaconda\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 due to no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
F:\Anaconda\lib\site-packages\sklearn\metrics\_classification.py:1465: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use `zero_division` parameter to control this behavior.
average, "true nor predicted", 'F-score is', len(true_sum)
###Markdown
9. Export your model as a pickle file
###Code
# Pickle best model
pickle.dump(tuned_model, open('models/disaster_model.sav', 'wb'))
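# Sketch of loading the pickled model back for inference later (path assumed to match the dump above):
# with open('models/disaster_model.sav', 'rb') as f:
#     loaded_model = pickle.load(f)
# loaded_model.predict(X_test)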
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
from sqlalchemy import create_engine
# load data from database
engine = create_engine('sqlite:///MesCat.db')
df = pd.read_sql_table('mescat_df',con=engine)
df.head()
X = df.message.values
cols = list(df.columns[3:])
Y = df[cols]
X[0]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
import nltk
import re
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline,FeatureUnion
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.base import BaseEstimator, TransformerMixin
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
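# Illustrative (commented-out) check with made-up sentences, not taken from the dataset:
# StartingVerbExtractor().transform(["Send water to the shelter", "The storm destroyed the bridge"])
# returns a one-column boolean DataFrame flagging messages whose first token is tagged as a verb (or 'RT').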
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
# train classifier
pipeline.fit(X_train, Y_train)
# predict on test data
Y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
for i,col in enumerate(cols):
print('\n\n')
print('######%s######'%col)
print(classification_report(Y_test[col], Y_pred[:,i]))
###Output
######related######
precision recall f1-score support
0 0.34 0.12 0.18 1524
1 0.77 0.93 0.84 4980
2 0.12 0.02 0.04 48
avg / total 0.66 0.73 0.68 6552
######request######
precision recall f1-score support
0 0.84 0.98 0.90 5466
1 0.38 0.07 0.11 1086
avg / total 0.76 0.83 0.77 6552
######offer######
precision recall f1-score support
0 1.00 1.00 1.00 6527
1 0.00 0.00 0.00 25
avg / total 0.99 1.00 0.99 6552
######aid_related######
precision recall f1-score support
0 0.59 0.82 0.68 3809
1 0.44 0.19 0.27 2743
avg / total 0.52 0.56 0.51 6552
######medical_help######
precision recall f1-score support
0 0.92 1.00 0.96 6014
1 0.00 0.00 0.00 538
avg / total 0.84 0.92 0.88 6552
######medical_products######
precision recall f1-score support
0 0.95 1.00 0.97 6198
1 0.07 0.00 0.01 354
avg / total 0.90 0.94 0.92 6552
######search_and_rescue######
precision recall f1-score support
0 0.97 1.00 0.99 6379
1 0.00 0.00 0.00 173
avg / total 0.95 0.97 0.96 6552
######security######
precision recall f1-score support
0 0.98 1.00 0.99 6433
1 0.00 0.00 0.00 119
avg / total 0.96 0.98 0.97 6552
######military######
precision recall f1-score support
0 0.97 1.00 0.98 6328
1 0.50 0.01 0.02 224
avg / total 0.95 0.97 0.95 6552
######child_alone######
precision recall f1-score support
0 1.00 1.00 1.00 6552
avg / total 1.00 1.00 1.00 6552
######water######
precision recall f1-score support
0 0.94 1.00 0.97 6135
1 0.11 0.00 0.01 417
avg / total 0.88 0.93 0.91 6552
######food######
precision recall f1-score support
0 0.89 1.00 0.94 5819
1 0.16 0.01 0.01 733
avg / total 0.81 0.88 0.84 6552
######shelter######
precision recall f1-score support
0 0.91 0.99 0.95 5992
1 0.08 0.01 0.01 560
avg / total 0.84 0.91 0.87 6552
######clothing######
precision recall f1-score support
0 0.98 1.00 0.99 6441
1 0.00 0.00 0.00 111
avg / total 0.97 0.98 0.97 6552
######money######
precision recall f1-score support
0 0.98 1.00 0.99 6410
1 0.00 0.00 0.00 142
avg / total 0.96 0.98 0.97 6552
######missing_people######
precision recall f1-score support
0 0.99 1.00 0.99 6476
1 0.00 0.00 0.00 76
avg / total 0.98 0.99 0.98 6552
######refugees######
precision recall f1-score support
0 0.97 1.00 0.98 6335
1 0.14 0.00 0.01 217
avg / total 0.94 0.97 0.95 6552
######death######
precision recall f1-score support
0 0.95 1.00 0.98 6256
1 0.00 0.00 0.00 296
avg / total 0.91 0.95 0.93 6552
######other_aid######
precision recall f1-score support
0 0.87 0.99 0.92 5695
1 0.07 0.01 0.01 857
avg / total 0.76 0.86 0.81 6552
######infrastructure_related######
precision recall f1-score support
0 0.93 1.00 0.97 6126
1 0.00 0.00 0.00 426
avg / total 0.87 0.93 0.90 6552
######transport######
precision recall f1-score support
0 0.95 1.00 0.98 6248
1 0.00 0.00 0.00 304
avg / total 0.91 0.95 0.93 6552
######buildings######
precision recall f1-score support
0 0.95 1.00 0.97 6214
1 0.00 0.00 0.00 338
avg / total 0.90 0.94 0.92 6552
######electricity######
precision recall f1-score support
0 0.98 1.00 0.99 6429
1 0.00 0.00 0.00 123
avg / total 0.96 0.98 0.97 6552
######tools######
precision recall f1-score support
0 1.00 1.00 1.00 6523
1 0.00 0.00 0.00 29
avg / total 0.99 1.00 0.99 6552
######hospitals######
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.00 0.00 0.00 74
avg / total 0.98 0.99 0.98 6552
######shops######
precision recall f1-score support
0 1.00 1.00 1.00 6525
1 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6552
######aid_centers######
precision recall f1-score support
0 0.99 1.00 0.99 6475
1 0.00 0.00 0.00 77
avg / total 0.98 0.99 0.98 6552
######other_infrastructure######
precision recall f1-score support
0 0.96 1.00 0.98 6272
1 0.08 0.00 0.01 280
avg / total 0.92 0.96 0.94 6552
######weather_related######
precision recall f1-score support
0 0.75 0.96 0.84 4739
1 0.59 0.14 0.23 1813
avg / total 0.70 0.74 0.67 6552
######floods######
precision recall f1-score support
0 0.92 1.00 0.96 6024
1 0.25 0.01 0.01 528
avg / total 0.87 0.92 0.88 6552
######storm######
precision recall f1-score support
0 0.91 1.00 0.95 5943
1 0.42 0.03 0.06 609
avg / total 0.86 0.91 0.87 6552
######fire######
precision recall f1-score support
0 0.99 1.00 0.99 6480
1 0.00 0.00 0.00 72
avg / total 0.98 0.99 0.98 6552
######earthquake######
precision recall f1-score support
0 0.92 0.99 0.96 5958
1 0.68 0.14 0.23 594
avg / total 0.90 0.92 0.89 6552
######cold######
precision recall f1-score support
0 0.98 1.00 0.99 6423
1 0.00 0.00 0.00 129
avg / total 0.96 0.98 0.97 6552
######other_weather######
precision recall f1-score support
0 0.95 1.00 0.97 6219
1 0.00 0.00 0.00 333
avg / total 0.90 0.95 0.92 6552
######direct_report######
precision recall f1-score support
0 0.81 0.98 0.89 5283
1 0.38 0.06 0.10 1269
avg / total 0.73 0.80 0.73 6552
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params().keys()
parameters = {
# 'clf__estimator__n_estimators': [50, 100, 200],
# 'clf__estimator__min_samples_split': [2, 3, 4],
'features__transformer_weights': (
{'text_pipeline': 1, 'starting_verb': 0.5},
{'text_pipeline': 0.5, 'starting_verb': 1},
{'text_pipeline': 0.8, 'starting_verb': 1},)
}
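# transformer_weights rescales each FeatureUnion branch (text_pipeline vs. starting_verb)
# before their outputs are concatenated, so the grid search compares different weightings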
cv = GridSearchCV(pipeline, param_grid=parameters)
cv.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import numpy as np
import pandas as pd
from sqlalchemy import create_engine, inspect
import re
import nltk
nltk.download(['punkt','wordnet','stopwords','averaged_perceptron_tagger'])
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
import pickle
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql(
"SELECT * FROM DisasterResponse",
con=engine
)
X = df["message"]
Y = df.drop(columns=["id", "message", "original", "genre"]).astype('bool')
Y.columns.values
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
#normalize text
text = re.sub(r'[^a-zA-Z0-9]',' ',text.lower())
words = word_tokenize(text)
tokens = [w for w in words if w not in stopwords.words("english")]
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.25, random_state = 42)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
print(classification_report(y_test, y_pred, target_names=y_test.columns.values))
###Output
precision recall f1-score support
related 0.83 0.95 0.89 4991
request 0.82 0.48 0.60 1111
offer 0.00 0.00 0.00 33
aid_related 0.74 0.69 0.72 2670
medical_help 0.72 0.09 0.16 535
medical_products 0.78 0.08 0.15 344
search_and_rescue 0.56 0.03 0.06 159
security 0.20 0.01 0.02 116
military 0.78 0.09 0.16 200
child_alone 0.00 0.00 0.00 0
water 0.85 0.39 0.54 418
food 0.85 0.61 0.71 745
shelter 0.80 0.39 0.53 581
clothing 0.71 0.10 0.18 98
money 0.80 0.06 0.11 133
missing_people 1.00 0.01 0.03 73
refugees 0.62 0.07 0.13 215
death 0.82 0.14 0.24 297
other_aid 0.58 0.04 0.07 864
infrastructure_related 0.50 0.00 0.00 411
transport 0.70 0.06 0.12 303
buildings 0.80 0.11 0.20 323
electricity 1.00 0.03 0.05 147
tools 0.00 0.00 0.00 43
hospitals 0.00 0.00 0.00 56
shops 0.00 0.00 0.00 24
aid_centers 0.00 0.00 0.00 81
other_infrastructure 0.00 0.00 0.00 283
weather_related 0.83 0.70 0.76 1773
floods 0.87 0.47 0.61 519
storm 0.75 0.51 0.61 605
fire 0.50 0.02 0.03 66
earthquake 0.89 0.78 0.83 590
cold 0.88 0.11 0.19 141
other_weather 0.67 0.02 0.05 335
direct_report 0.77 0.34 0.48 1272
micro avg 0.81 0.53 0.64 20555
macro avg 0.60 0.21 0.26 20555
weighted avg 0.76 0.53 0.57 20555
samples avg 0.66 0.48 0.51 20555
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {
'clf__estimator__n_estimators': [50, 100, 150],
'clf__estimator__criterion': ["gini", "entropy"]
}
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
y_pred2 = cv.predict(X_test) # evaluate the tuned model found by the grid search
print(classification_report(y_test, y_pred2, target_names=y_test.columns.values))
###Output
precision recall f1-score support
related 0.83 0.94 0.88 4991
request 0.84 0.50 0.62 1111
offer 0.00 0.00 0.00 33
aid_related 0.74 0.70 0.72 2670
medical_help 0.64 0.08 0.14 535
medical_products 0.79 0.08 0.14 344
search_and_rescue 0.62 0.06 0.11 159
security 0.00 0.00 0.00 116
military 0.57 0.06 0.11 200
child_alone 0.00 0.00 0.00 0
water 0.86 0.34 0.49 418
food 0.86 0.58 0.70 745
shelter 0.81 0.41 0.55 581
clothing 0.88 0.07 0.13 98
money 0.80 0.06 0.11 133
missing_people 0.00 0.00 0.00 73
refugees 0.67 0.01 0.02 215
death 0.85 0.13 0.23 297
other_aid 0.52 0.03 0.06 864
infrastructure_related 0.00 0.00 0.00 411
transport 0.73 0.06 0.12 303
buildings 0.82 0.12 0.22 323
electricity 1.00 0.01 0.01 147
tools 0.00 0.00 0.00 43
hospitals 0.00 0.00 0.00 56
shops 0.00 0.00 0.00 24
aid_centers 0.00 0.00 0.00 81
other_infrastructure 0.50 0.00 0.01 283
weather_related 0.84 0.71 0.77 1773
floods 0.87 0.49 0.62 519
storm 0.75 0.55 0.63 605
fire 0.00 0.00 0.00 66
earthquake 0.89 0.79 0.84 590
cold 0.91 0.07 0.13 141
other_weather 0.64 0.02 0.04 335
direct_report 0.77 0.34 0.47 1272
micro avg 0.81 0.53 0.64 20555
macro avg 0.56 0.20 0.25 20555
weighted avg 0.75 0.53 0.57 20555
samples avg 0.66 0.48 0.51 20555
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
9. Export your model as a pickle file
###Code
modelname = 'disaster_response_model'
# pickle.dump(cv.best_estimator_, open(modelname, 'wb'))
pickle.dump(pipeline, open(modelname, 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
import re
import pandas as pd
from sklearn.multioutput import MultiOutputClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import nltk
nltk.download(['punkt', 'wordnet'])
nltk.download('stopwords')
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix
import pickle
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('cleaned', engine)
#print (colum)
X = df["message"]
Y = df[df.columns[4:]]
print(Y)
Y.related.unique()
Y.loc[Y['related'] == 2, 'related'] = 1 # map the multiclass value 2 back to 1 in the 'related' column only
Y.related.unique()
###Output
/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py:189: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# Normalize text
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# Tokenize text
words = word_tokenize(text)
# Remove stop words
stop = stopwords.words("english")
words = [t for t in words if t not in stop]
# Lemmatization
lemm = [WordNetLemmatizer().lemmatize(w) for w in words]
return lemm
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline =Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(SVC())),
])
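# MultiOutputClassifier fits one SVC per output category, i.e. 36 independent classifiers here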
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=0.2, random_state = 22)
pipeline.fit(X_train, y_train)
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/model_selection/_split.py:2026: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.
FutureWarning)
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
import numpy as np
y_pred = pipeline.predict(X_test)
i=0
for col in y_test:
print('Feature {}: {}'.format(i+1, col))
print(classification_report(y_test[col], y_pred[:, i]))
i = i + 1
accuracy = (y_pred == y_test.values).mean()
print('Model accuracy is {:.3f}'.format(accuracy))
###Output
Feature 1: related
precision recall f1-score support
0 0.00 0.00 0.00 4899
1 0.77 1.00 0.87 16074
avg / total 0.59 0.77 0.67 20973
Feature 2: request
precision recall f1-score support
0 0.82 1.00 0.90 17210
1 0.00 0.00 0.00 3763
avg / total 0.67 0.82 0.74 20973
Feature 3: offer
precision recall f1-score support
0 0.99 1.00 0.99 20713
1 0.00 0.00 0.00 260
avg / total 0.98 0.99 0.98 20973
Feature 4: aid_related
precision recall f1-score support
0 0.58 1.00 0.73 12152
1 0.00 0.00 0.00 8821
avg / total 0.34 0.58 0.43 20973
Feature 5: medical_help
precision recall f1-score support
0 0.91 1.00 0.95 19143
1 0.00 0.00 0.00 1830
avg / total 0.83 0.91 0.87 20973
Feature 6: medical_products
precision recall f1-score support
0 0.94 1.00 0.97 19762
1 0.00 0.00 0.00 1211
avg / total 0.89 0.94 0.91 20973
Feature 7: search_and_rescue
precision recall f1-score support
0 0.96 1.00 0.98 20228
1 0.00 0.00 0.00 745
avg / total 0.93 0.96 0.95 20973
Feature 8: security
precision recall f1-score support
0 0.97 1.00 0.99 20437
1 0.00 0.00 0.00 536
avg / total 0.95 0.97 0.96 20973
Feature 9: military
precision recall f1-score support
0 0.96 1.00 0.98 20131
1 0.00 0.00 0.00 842
avg / total 0.92 0.96 0.94 20973
Feature 10: child_alone
precision recall f1-score support
0 0.99 1.00 1.00 20811
1 0.00 0.00 0.00 162
avg / total 0.98 0.99 0.99 20973
Feature 11: water
precision recall f1-score support
0 0.93 1.00 0.96 19481
1 0.00 0.00 0.00 1492
avg / total 0.86 0.93 0.89 20973
Feature 12: food
precision recall f1-score support
0 0.88 1.00 0.94 18468
1 0.00 0.00 0.00 2505
avg / total 0.78 0.88 0.82 20973
Feature 13: shelter
precision recall f1-score support
0 0.90 1.00 0.95 18966
1 0.00 0.00 0.00 2007
avg / total 0.82 0.90 0.86 20973
Feature 14: clothing
precision recall f1-score support
0 0.98 1.00 0.99 20470
1 0.00 0.00 0.00 503
avg / total 0.95 0.98 0.96 20973
Feature 15: money
precision recall f1-score support
0 0.97 1.00 0.98 20332
1 0.00 0.00 0.00 641
avg / total 0.94 0.97 0.95 20973
Feature 16: missing_people
precision recall f1-score support
0 0.98 1.00 0.99 20572
1 0.00 0.00 0.00 401
avg / total 0.96 0.98 0.97 20973
Feature 17: refugees
precision recall f1-score support
0 0.96 1.00 0.98 20115
1 0.00 0.00 0.00 858
avg / total 0.92 0.96 0.94 20973
Feature 18: death
precision recall f1-score support
0 0.95 1.00 0.97 19857
1 0.00 0.00 0.00 1116
avg / total 0.90 0.95 0.92 20973
Feature 19: other_aid
precision recall f1-score support
0 0.86 1.00 0.93 18082
1 0.00 0.00 0.00 2891
avg / total 0.74 0.86 0.80 20973
Feature 20: infrastructure_related
precision recall f1-score support
0 0.93 1.00 0.96 19445
1 0.00 0.00 0.00 1528
avg / total 0.86 0.93 0.89 20973
Feature 21: transport
precision recall f1-score support
0 0.95 1.00 0.97 19849
1 0.00 0.00 0.00 1124
avg / total 0.90 0.95 0.92 20973
Feature 22: buildings
precision recall f1-score support
0 0.94 1.00 0.97 19767
1 0.00 0.00 0.00 1206
avg / total 0.89 0.94 0.91 20973
Feature 23: electricity
precision recall f1-score support
0 0.97 1.00 0.99 20378
1 0.00 0.00 0.00 595
avg / total 0.94 0.97 0.96 20973
Feature 24: tools
precision recall f1-score support
0 0.99 1.00 0.99 20678
1 0.00 0.00 0.00 295
avg / total 0.97 0.99 0.98 20973
Feature 25: hospitals
precision recall f1-score support
0 0.98 1.00 0.99 20569
1 0.00 0.00 0.00 404
avg / total 0.96 0.98 0.97 20973
Feature 26: shops
precision recall f1-score support
0 0.99 1.00 0.99 20716
1 0.00 0.00 0.00 257
avg / total 0.98 0.99 0.98 20973
Feature 27: aid_centers
precision recall f1-score support
0 0.98 1.00 0.99 20562
1 0.00 0.00 0.00 411
avg / total 0.96 0.98 0.97 20973
Feature 28: other_infrastructure
precision recall f1-score support
0 0.95 1.00 0.97 19900
1 0.00 0.00 0.00 1073
avg / total 0.90 0.95 0.92 20973
Feature 29: weather_related
precision recall f1-score support
0 0.71 1.00 0.83 14986
1 0.00 0.00 0.00 5987
avg / total 0.51 0.71 0.60 20973
Feature 30: floods
precision recall f1-score support
0 0.91 1.00 0.95 19068
1 0.00 0.00 0.00 1905
avg / total 0.83 0.91 0.87 20973
Feature 31: storm
precision recall f1-score support
0 0.90 1.00 0.95 18849
1 0.00 0.00 0.00 2124
avg / total 0.81 0.90 0.85 20973
Feature 32: fire
precision recall f1-score support
0 0.98 1.00 0.99 20593
1 0.00 0.00 0.00 380
avg / total 0.96 0.98 0.97 20973
Feature 33: earthquake
precision recall f1-score support
0 0.90 1.00 0.95 18876
1 0.00 0.00 0.00 2097
avg / total 0.81 0.90 0.85 20973
Feature 34: cold
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
'''
parameters = {
'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'features__text_pipeline__vect__max_df': (0.5, 0.75, 1.0),
'features__text_pipeline__vect__max_features': (None, 5000, 10000),
'features__text_pipeline__tfidf__use_idf': (True, False),
'clf__n_estimators': [50, 100, 200],
'clf__min_samples_split': [2, 3, 4],
'features__transformer_weights': (
{'text_pipeline': 1, 'starting_verb': 0.5},
{'text_pipeline': 0.5, 'starting_verb': 1},
{'text_pipeline': 0.8, 'starting_verb': 1},
)
}
'''
# The keys need the clf__estimator__ prefix so the grid search reaches the SVC
# wrapped by MultiOutputClassifier inside the pipeline.
parameters = [{'clf__estimator__kernel': ['rbf'], 'clf__estimator__gamma': [1e-3, 1e-4],
               'clf__estimator__C': [1, 10, 100, 1000]},
              {'clf__estimator__kernel': ['linear'], 'clf__estimator__C': [1, 10, 100, 1000]}]
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
def display_results(cv, y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test.values.argmax(axis=1), y_pred.argmax(axis=1))
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
display_results(cv, y_test, y_pred)
###Output
Labels: [0 1]
Confusion Matrix:
[[20973]]
Accuracy: related 0.766414
request 0.820579
offer 0.987603
aid_related 0.579412
medical_help 0.912745
medical_products 0.942259
search_and_rescue 0.964478
security 0.974443
military 0.959853
child_alone 0.992276
water 0.928861
food 0.880561
shelter 0.904306
clothing 0.976017
money 0.969437
missing_people 0.980880
refugees 0.959090
death 0.946789
other_aid 0.862156
infrastructure_related 0.927144
transport 0.946407
buildings 0.942497
electricity 0.971630
tools 0.985934
hospitals 0.980737
shops 0.987746
aid_centers 0.980403
other_infrastructure 0.948839
weather_related 0.714538
floods 0.909169
storm 0.898727
fire 0.981881
earthquake 0.900014
cold 0.971583
other_weather 0.941115
direct_report 0.799075
dtype: float64
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
pipeline1 =Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
print(sorted(pipeline1.get_params().keys()))
X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=0.3)
pipeline1.fit(X_train, y_train)
y_pred1 = pipeline1.predict(X_test)
i=0
for col in y_test:
print('Feature {}: {}'.format(i+1, col))
print(classification_report(y_test[col], y_pred1[:, i]))
i = i + 1
accuracy = (y_pred1 == y_test.values).mean()
print('Model accuracy is {:.3f}'.format(accuracy))
# The keys need the clf__estimator__ prefix so the grid search reaches the
# RandomForestClassifier wrapped by MultiOutputClassifier inside pipeline1.
param_grid = {
    'clf__estimator__n_estimators': [5, 10, 50, 100, 200],
    'clf__estimator__max_features': ['auto', 'log2', 'sqrt'],
    'clf__estimator__bootstrap': [False, True],
    'clf__estimator__max_depth': [3, 5, 10, 20]}
'''param_grid = {"clf__estimator__max_depth": [3, 5, 10, 20],
'clf__estimator__n_estimators': [5, 10, 50, 100, 200],
"clf__estimator__max_features": ['auto', 'log2', 'sqrt'],
"clf__estimator__bootstrap": [True, False],
"clf__estimator__criterion": ["gini", "entropy"]}
'''
cv1 = GridSearchCV(pipeline1, param_grid=param_grid)
def display_results(cv1, y_test, y_pred1):
    labels = np.unique(y_pred1)
confusion_mat = confusion_matrix(y_test.values.argmax(axis=1), y_pred1.argmax(axis=1))
accuracy = (y_pred1 == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
display_results(cv1, y_test, y_pred1)
###Output
Labels: [0 1]
Confusion Matrix:
[[17837 84 13 188 8 2 3 2 23 3 15 4
3 10 5 3 12 3 2 1 1 2 70 9
1 15 1 1 31]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0]]
Accuracy: related 0.798224
request 0.863830
offer 0.981310
aid_related 0.725044
medical_help 0.905078
medical_products 0.944802
search_and_rescue 0.965998
security 0.974553
military 0.960604
child_alone 0.978531
water 0.940170
food 0.927692
shelter 0.926711
clothing 0.978858
money 0.970357
missing_people 0.969867
refugees 0.945673
death 0.935756
other_aid 0.845793
infrastructure_related 0.919082
transport 0.946109
buildings 0.935920
electricity 0.972810
tools 0.976188
hospitals 0.970521
shops 0.988448
aid_centers 0.980820
other_infrastructure 0.940769
weather_related 0.844159
floods 0.930089
storm 0.920663
fire 0.976079
earthquake 0.946436
cold 0.963110
other_weather 0.931670
direct_report 0.820728
dtype: float64
###Markdown
9. Export your model as a pickle file
###Code
pkl_filename = "pickle_model.pkl"
with open(pkl_filename, 'wb') as file:
pickle.dump(cv1, file)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import nltk
import pickle
import re
import matplotlib.pyplot as plt
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')  # required by the WordNetLemmatizer used in tokenize()
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
np.random.seed(42)
###Output
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\hjone\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\hjone\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Functions
###Code
def get_eval_metrics(actual, predicted, col_names):
"""
Calculate evaluation metrics for model
Args:
actual: array. Array containing actual labels.
predicted: array. Array containing predicted labels.
col_names: List of strings. List containing names for each of the predicted fields.
Returns:
metrics_df: dataframe. Dataframe containing the accuracy, precision, recall
and f1 score for a given set of actual and predicted labels.
"""
metrics = []
# average{‘micro’, ‘macro’, ‘samples’,’weighted’, ‘binary’} or None, default=’binary’
avg_type='weighted' # weighted is supposed to take label imbalance into account
zero_division_treatment=0 # 0,1,'warn'
# Calculate evaluation metrics for each set of labels
for i in range(len(col_names)):
accuracy = accuracy_score(actual[:, i], predicted[:, i])
precision = precision_score(actual[:, i], predicted[:, i], average=avg_type, zero_division=zero_division_treatment)
recall = recall_score(actual[:, i], predicted[:, i], average=avg_type, zero_division=zero_division_treatment)
f1 = f1_score(actual[:, i], predicted[:, i], average=avg_type, zero_division=zero_division_treatment)
metrics.append( [accuracy, precision, recall, f1] )
# Create dataframe containing metrics
metrics = np.array(metrics)
metrics_df = pd.DataFrame(data = metrics, index = col_names, columns = ['Accuracy', 'Precision', 'Recall', 'F1'])
return metrics_df
# Define performance metric for use in grid search scoring object
def performance_metric(y_true, y_pred)->float:
"""
Calculate median F1 score for all of the output classifiers
Args:
y_true: array. Array containing actual labels.
y_pred: array. Array containing predicted labels.
Returns:
score: float. Median F1 score for all of the output classifiers
"""
    f1_list = []
    # micro-averaged F1 per output column; the median across columns is the score
    for i in range(np.shape(y_pred)[1]):
        f1 = f1_score(np.array(y_true)[:, i], y_pred[:, i], average='micro')
f1_list.append(f1)
score = np.median(f1_list)
return score
tableName='Message_Categories'
dbName='Disaster_Response_Message.db'
# load data from database
engine = create_engine('sqlite:///' + dbName)
df = pd.read_sql_table(table_name=tableName, con=engine)
feature_list=['id', 'message', 'original', 'genre']
X = df['message']
Y = df.drop(feature_list, axis=1)
print(type(X))
print("X",X.shape)
print("Y",Y.shape)
# Visualize target class labels support
print (Y.shape)
fig, axs = plt.subplots(6,6,figsize=(15,15))
axs = np.array(axs)
fig.suptitle("Classes")
fig.tight_layout()
for i, ax in enumerate(fig.axes):
    ax.hist(Y[Y.columns[i]], bins=2)  # enumerate already starts at 0, so no offset is needed
    ax.set_title(Y.columns[i])
ax.set_ylabel('support')
ax.set_xlabel('')
###Output
(26177, 36)
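###Markdown
Before reading the histograms, the imbalance can also be quantified directly. A small sketch (for illustration, assuming `Y` from the cell above) listing the mean label value per category:
###Code
# Mean label value per category (roughly the fraction of positive messages);
# values near 0 or 1 indicate strong imbalance.
positive_rate = Y.mean().sort_values()
print(positive_rate.head())  # rarest categories
print(positive_rate.tail())  # most frequent categories
###Output
_____no_output_____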
###Markdown
Nearly all classes are unbalanced 2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Tokenize the text message fields
Args:
text (string) text to tokenize
Returns:
List tokenised text
"""
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
tokens = word_tokenize(text)
stop_words = stopwords.words("english")
#tokenised = [stemmer.stem(word) for word in tokens if word not in stop_words]
lemmatizer = WordNetLemmatizer()
# Lemmatize
tokenised = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return tokenised
###Output
_____no_output_____
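###Markdown
A quick sanity check of the tokenizer on a real message (an illustrative sketch, assuming `tokenize` and `X` from the cells above):
###Code
# Compare a raw message with its normalized, lemmatized tokens.
sample_message = X.iloc[0]
print(sample_message)
print(tokenize(sample_message))
###Output
_____no_output_____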
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables. CountVectorizer: builds a count dictionary with a count for each word. TfidfTransformer: TF-IDF (term frequency times inverse document frequency) reduces the weight of common words.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
] )
# List all the parameters for this pipeline
#pipeline.get_params()
###Output
_____no_output_____
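###Markdown
To make the `vect` and `tfidf` stages concrete, here is a toy example (added for illustration, not part of the original notebook; it reuses the `tokenize` function defined above):
###Code
toy_corpus = ["water needed in the shelter",
              "food and water needed",
              "storm damaged the shelter"]
toy_vect = CountVectorizer(tokenizer=tokenize)
counts = toy_vect.fit_transform(toy_corpus)       # raw term counts per message
tfidf = TfidfTransformer().fit_transform(counts)  # counts down-weighted for common terms
print(toy_vect.get_feature_names())
print(counts.toarray())
print(tfidf.toarray().round(2))
###Output
_____no_output_____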
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y,test_size=0.2, random_state = 42)
print(X_train.shape)
print(Y_train.shape)
# Train pipeline model
model1=pipeline.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Calculate evaluation metrics for training set
Y_train_pred = pipeline.predict(X_train)
col_names = list(Y.columns.values)
eval_metrics0 = get_eval_metrics(np.array(Y_train), Y_train_pred, col_names)
print(eval_metrics0)
eval_metrics0.describe()
# Calculate predicted classes for test dataset
Y_test_pred = pipeline.predict(X_test)
# Calculate evaluation metrics
eval_metrics1 = get_eval_metrics(np.array(Y_test), Y_test_pred, col_names)
print(eval_metrics1)
# Descrive Evaluation Metrics Test
eval_metrics1.describe()
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# Create grid search object
parameters = {'vect__min_df': [1, 5],
'tfidf__use_idf':[True, False],
'clf__estimator__n_estimators':[100, 150],
'clf__estimator__min_samples_split':[2, 5, 10]}
scorer = make_scorer(performance_metric)
cv = GridSearchCV(pipeline, param_grid = parameters, scoring = scorer, cv=3, verbose = 10, n_jobs=None)
# Find best parameters
np.random.seed(42)
model2 = cv.fit(X_train, Y_train)
# Print the best parameters in the GridSearch
cv.best_params_
###Output
_____no_output_____
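###Markdown
Beyond `best_params_`, the full grid search can be inspected once it has been fitted (an illustrative sketch, assuming the fitted `cv` from the cells above):
###Code
# Summarise every parameter combination tried, ranked by mean cross-validated score.
cv_results = pd.DataFrame(cv.cv_results_)
print(cv_results[['params', 'mean_test_score', 'rank_test_score']]
      .sort_values('rank_test_score')
      .head())
print("Best cross-validated score: {:.4f}".format(cv.best_score_))
###Output
_____no_output_____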
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model.
Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Calculate evaluation metrics for test set
model2_pred_test = model2.predict(X_test)
eval_metrics2 = get_eval_metrics(np.array(Y_test), model2_pred_test, col_names)
print(eval_metrics2)
# Get summary stats for tuned model test
eval_metrics2.describe()
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
from sklearn.tree import DecisionTreeClassifier
# Try using DecisionTreeClassifier instead of Random Forest Classifier
pipeline3 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier( DecisionTreeClassifier(splitter='best') ))])
# List all the parameters for this pipeline
pipeline3.get_params()
# Create grid search object
parameters = {'vect__min_df': [1, 5],
'tfidf__use_idf':[True, False],
'clf__estimator__criterion':['gini', 'entropy'],
'clf__estimator__min_samples_leaf':[1, 3]}
scorer = make_scorer(performance_metric)
cv = GridSearchCV(pipeline3, param_grid = parameters, scoring = scorer, cv=3, verbose = 10, n_jobs=None)
# Find best parameters
np.random.seed(42)
model3 = cv.fit(X_train, Y_train)
# Print the best parameters in the GridSearch
cv.best_params_
# Calculate evaluation metrics for training set
Y_train_pred = model3.predict(X_train)
col_names = list(Y.columns.values)
eval_metrics3 = get_eval_metrics(np.array(Y_train), Y_train_pred, col_names)
print(eval_metrics3)
# Get summary stats for tuned model
eval_metrics3.describe()
# Calculate evaluation metrics for test set
model3_pred_test = model3.predict(X_test)
eval_metrics4 = get_eval_metrics(np.array(Y_test), model3_pred_test, col_names)
print(eval_metrics4)
# Get summary stats for model3 test
eval_metrics4.describe()
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
# Pickle best model
pickle.dump(model3, open('disaster_response_message_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
import re
import pickle
import nltk
nltk.download(['punkt', 'wordnet', 'stopwords'])
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report, accuracy_score
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import GridSearchCV
# load data from database
engine = create_engine('sqlite:///disasterpipeline.db')
df = pd.read_sql_table('msgCat', con=engine)
df.head()
df.genre.value_counts()
df.dropna(inplace=True)
X = df['message']
y = df.iloc[:, 4:]
###Output
_____no_output_____
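###Markdown
`dropna()` above removes every row with a missing value in any column, which can discard a substantial share of the messages. A small check (a sketch for illustration, assuming `engine` and `df` from the cells above) of how many rows that costs and which columns drive the loss:
###Code
# Reload the table to compare row counts before and after dropna().
df_raw = pd.read_sql_table('msgCat', con=engine)
print("rows before dropna:", len(df_raw))
print("rows after  dropna:", len(df))
print(df_raw.isna().sum().sort_values(ascending=False).head())
###Output
_____no_output_____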
###Markdown
2. Write a tokenization function to process text data
###Code
def tokenize(text):
# remove punctuations
tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(text)
tokens = [w for w in tokens if w not in stopwords.words('english')]
# lemmatize as shown in the classroom
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(MultinomialNB()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
pipeline.fit(X_train.values, y_train.values)
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
5. Test the modelReport the f1 score, precision and recall for each output category of the dataset. This can be done by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# create a function to calculate `classification_report` for each
# of the class and store it in a dataframe
# but instead of classification report I will be using `precision_recall_fscore_support` function
# since it will become easy for me to store each value in a dataframe
def create_report(y_test, y_pred):
results = pd.DataFrame(columns=['Category', 'f1_score', 'precision', 'recall'])
num = 0
for category in y_test.columns:
precision, recall, f1_score, support = precision_recall_fscore_support(y_test[category],
y_pred[:,num], average='weighted')
        # DataFrame.at replaces the deprecated (and later removed) DataFrame.set_value
        results.at[num+1, 'Category'] = category
        results.at[num+1, 'f1_score'] = f1_score
        results.at[num+1, 'precision'] = precision
        results.at[num+1, 'recall'] = recall
num += 1
print('Overall f1_score:', results['f1_score'].mean())
print('Overall precision:', results['precision'].mean())
print('Overall recall:', results['recall'].mean())
return results
results = create_report(y_test, y_pred)
results
###Output
_____no_output_____
###Markdown
6. Improve the modelUse grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {'clf__estimator__alpha': [0.5, 1.0, 2.0, 2.5, 3.0, 3.5]}
cv = GridSearchCV(pipeline, parameters)
###Output
_____no_output_____
###Markdown
7. Test the modelShow the accuracy, precision, and recall of the tuned model.
###Code
cv.fit(X_train, y_train)
print(cv.best_estimator_)
y_pred2 = cv.predict(X_test)
results2 = create_report(y_test, y_pred2)
results2
###Output
Overall f1_score: 0.899551330536
Overall precision: 0.87931940144
Overall recall: 0.929533994088
###Markdown
8. Improving the model further.
###Code
pipeline_2 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
# X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline_2.fit(X_train, y_train)
y_pred3 = pipeline_2.predict(X_test)
results3 = create_report(y_test, y_pred3)
results3
pipeline_3 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(DecisionTreeClassifier()))
])
# X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline_3.fit(X_train, y_train)
y_pred4 = pipeline_3.predict(X_test)
results4 = create_report(y_test, y_pred4)
results4
###Output
Overall f1_score: 0.898881680437
Overall precision: 0.896427179142
Overall recall: 0.901506259781
###Markdown
9. Export the model as a pickle file
###Code
pickle.dump(pipeline_2, open('moc_model.pkl', 'wb'))
# choosing pipeline_2 because it is giving better overall scores than the rest
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
from sqlalchemy import create_engine
import pandas as pd
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import TruncatedSVD
import pickle
'''# import libraries
import re
import numpy as np
import pandas as pd
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sqlalchemy import create_engine
import pickle
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report'''
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table('ETL_Pipeline_Preparation', con=engine)
X = df.message
Y = df.drop(['id','message','original','genre'], axis=1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
'''
takes text input and returns tokenized
and lemmatized list of words
in lower case with white space stripped
'''
# tokenize text
tokens = word_tokenize(text)
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
# lemmatize, normalize case, and remove leading/trailing white space
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
#initiate pipeline
pipeline = Pipeline([('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# splitting data
X_train, X_test, y_train, y_test = train_test_split(X, Y)
# train pipeline
model = pipeline.fit(X_train, y_train)
model
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def report(model, X_test, y_test):
'''
takes model and test data and returns
classification report for predictions
'''
y_pred = model.predict(X_test)
for item, col in enumerate(y_test):
print(col)
print(classification_report(y_test[col], y_pred[:, item]))
report(model, X_test, y_test)
###Output
related
precision recall f1-score support
0.0 0.34 0.13 0.19 1538
1.0 0.77 0.92 0.84 4962
2.0 0.50 0.02 0.04 52
avg / total 0.67 0.73 0.68 6552
request
precision recall f1-score support
0.0 0.84 0.98 0.90 5421
1.0 0.47 0.08 0.14 1131
avg / total 0.77 0.83 0.77 6552
offer
precision recall f1-score support
0.0 0.99 1.00 1.00 6519
1.0 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6552
aid_related
precision recall f1-score support
0.0 0.60 0.83 0.69 3837
1.0 0.46 0.21 0.29 2715
avg / total 0.54 0.57 0.52 6552
medical_help
precision recall f1-score support
0.0 0.92 1.00 0.96 6033
1.0 0.04 0.00 0.00 519
avg / total 0.85 0.92 0.88 6552
medical_products
precision recall f1-score support
0.0 0.95 1.00 0.97 6231
1.0 0.04 0.00 0.01 321
avg / total 0.91 0.95 0.93 6552
search_and_rescue
precision recall f1-score support
0.0 0.97 1.00 0.99 6372
1.0 0.00 0.00 0.00 180
avg / total 0.95 0.97 0.96 6552
security
precision recall f1-score support
0.0 0.98 1.00 0.99 6430
1.0 0.00 0.00 0.00 122
avg / total 0.96 0.98 0.97 6552
military
precision recall f1-score support
0.0 0.97 1.00 0.98 6342
1.0 0.00 0.00 0.00 210
avg / total 0.94 0.97 0.95 6552
child_alone
precision recall f1-score support
0.0 1.00 1.00 1.00 6552
avg / total 1.00 1.00 1.00 6552
water
precision recall f1-score support
0.0 0.94 1.00 0.97 6139
1.0 0.05 0.00 0.00 413
avg / total 0.88 0.93 0.91 6552
food
precision recall f1-score support
0.0 0.89 1.00 0.94 5800
1.0 0.23 0.01 0.01 752
avg / total 0.81 0.88 0.83 6552
shelter
precision recall f1-score support
0.0 0.91 0.99 0.95 5980
1.0 0.11 0.01 0.02 572
avg / total 0.84 0.91 0.87 6552
clothing
precision recall f1-score support
0.0 0.99 1.00 0.99 6457
1.0 0.33 0.01 0.02 95
avg / total 0.98 0.99 0.98 6552
money
precision recall f1-score support
0.0 0.98 1.00 0.99 6402
1.0 0.00 0.00 0.00 150
avg / total 0.95 0.98 0.97 6552
missing_people
precision recall f1-score support
0.0 0.99 1.00 0.99 6477
1.0 0.00 0.00 0.00 75
avg / total 0.98 0.99 0.98 6552
refugees
precision recall f1-score support
0.0 0.97 1.00 0.98 6336
1.0 0.00 0.00 0.00 216
avg / total 0.94 0.97 0.95 6552
death
precision recall f1-score support
0.0 0.95 1.00 0.97 6239
1.0 0.00 0.00 0.00 313
avg / total 0.91 0.95 0.93 6552
other_aid
precision recall f1-score support
0.0 0.88 0.99 0.93 5734
1.0 0.12 0.01 0.02 818
avg / total 0.78 0.87 0.82 6552
infrastructure_related
precision recall f1-score support
0.0 0.94 1.00 0.97 6147
1.0 0.07 0.00 0.00 405
avg / total 0.88 0.94 0.91 6552
transport
precision recall f1-score support
0.0 0.95 1.00 0.98 6242
1.0 0.00 0.00 0.00 310
avg / total 0.91 0.95 0.93 6552
buildings
precision recall f1-score support
0.0 0.95 1.00 0.97 6246
1.0 0.00 0.00 0.00 306
avg / total 0.91 0.95 0.93 6552
electricity
precision recall f1-score support
0.0 0.98 1.00 0.99 6424
1.0 0.00 0.00 0.00 128
avg / total 0.96 0.98 0.97 6552
tools
precision recall f1-score support
0.0 0.99 1.00 1.00 6510
1.0 0.00 0.00 0.00 42
avg / total 0.99 0.99 0.99 6552
hospitals
precision recall f1-score support
0.0 0.99 1.00 1.00 6497
1.0 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.99 6552
shops
precision recall f1-score support
0.0 0.99 1.00 1.00 6515
1.0 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6552
aid_centers
precision recall f1-score support
0.0 0.99 1.00 0.99 6479
1.0 0.00 0.00 0.00 73
avg / total 0.98 0.99 0.98 6552
other_infrastructure
precision recall f1-score support
0.0 0.96 1.00 0.98 6274
1.0 0.00 0.00 0.00 278
avg / total 0.92 0.96 0.94 6552
weather_related
precision recall f1-score support
0.0 0.75 0.96 0.84 4752
1.0 0.59 0.15 0.24 1800
avg / total 0.70 0.74 0.68 6552
floods
precision recall f1-score support
0.0 0.92 1.00 0.96 6011
1.0 0.23 0.01 0.01 541
avg / total 0.86 0.92 0.88 6552
storm
precision recall f1-score support
0.0 0.91 0.99 0.95 5936
1.0 0.46 0.04 0.08 616
avg / total 0.87 0.91 0.87 6552
fire
precision recall f1-score support
0.0 0.99 1.00 0.99 6485
1.0 0.00 0.00 0.00 67
avg / total 0.98 0.99 0.98 6552
earthquake
precision recall f1-score support
0.0 0.92 0.99 0.96 5986
1.0 0.63 0.13 0.21 566
avg / total 0.90 0.92 0.89 6552
cold
precision recall f1-score support
0.0 0.98 1.00 0.99 6431
1.0 0.00 0.00 0.00 121
avg / total 0.96 0.98 0.97 6552
other_weather
precision recall f1-score support
0.0 0.95 1.00 0.97 6201
1.0 0.17 0.00 0.01 351
avg / total 0.90 0.95 0.92 6552
direct_report
precision recall f1-score support
0.0 0.81 0.98 0.88 5263
1.0 0.38 0.06 0.10 1289
avg / total 0.72 0.80 0.73 6552
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
'''
parameters = {'tfidf__use_idf': (True, False),
'clf__estimator__max_depth': [2,4],
'clf__estimator__n_estimators': [10, 100],
'clf__estimator__min_samples_split': [2, 3]}
cv = GridSearchCV(pipeline, param_grid=parameters)
'''
parameters = {'clf__estimator__max_depth': [2,4],
'clf__estimator__n_estimators': [5, 10],
'clf__estimator__min_samples_split': [2, 3]}
cv = GridSearchCV(pipeline, param_grid=parameters)
cv
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
%%time
cv.fit(X_train, y_train)
report(cv, X_test, y_test)
###Output
related
precision recall f1-score support
0.0 0.00 0.00 0.00 1538
1.0 0.76 1.00 0.86 4962
2.0 0.00 0.00 0.00 52
avg / total 0.57 0.76 0.65 6552
request
precision recall f1-score support
0.0 0.83 1.00 0.91 5421
1.0 0.00 0.00 0.00 1131
avg / total 0.68 0.83 0.75 6552
offer
precision recall f1-score support
0.0 0.99 1.00 1.00 6519
1.0 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6552
aid_related
precision recall f1-score support
0.0 0.59 1.00 0.74 3837
1.0 0.00 0.00 0.00 2715
avg / total 0.34 0.59 0.43 6552
medical_help
precision recall f1-score support
0.0 0.92 1.00 0.96 6033
1.0 0.00 0.00 0.00 519
avg / total 0.85 0.92 0.88 6552
medical_products
precision recall f1-score support
0.0 0.95 1.00 0.97 6231
1.0 0.00 0.00 0.00 321
avg / total 0.90 0.95 0.93 6552
search_and_rescue
precision recall f1-score support
0.0 0.97 1.00 0.99 6372
1.0 0.00 0.00 0.00 180
avg / total 0.95 0.97 0.96 6552
security
precision recall f1-score support
0.0 0.98 1.00 0.99 6430
1.0 0.00 0.00 0.00 122
avg / total 0.96 0.98 0.97 6552
military
precision recall f1-score support
0.0 0.97 1.00 0.98 6342
1.0 0.00 0.00 0.00 210
avg / total 0.94 0.97 0.95 6552
child_alone
precision recall f1-score support
0.0 1.00 1.00 1.00 6552
avg / total 1.00 1.00 1.00 6552
water
precision recall f1-score support
0.0 0.94 1.00 0.97 6139
1.0 0.00 0.00 0.00 413
avg / total 0.88 0.94 0.91 6552
food
precision recall f1-score support
0.0 0.89 1.00 0.94 5800
1.0 0.00 0.00 0.00 752
avg / total 0.78 0.89 0.83 6552
shelter
precision recall f1-score support
0.0 0.91 1.00 0.95 5980
1.0 0.00 0.00 0.00 572
avg / total 0.83 0.91 0.87 6552
clothing
precision recall f1-score support
0.0 0.99 1.00 0.99 6457
1.0 0.00 0.00 0.00 95
avg / total 0.97 0.99 0.98 6552
money
precision recall f1-score support
0.0 0.98 1.00 0.99 6402
1.0 0.00 0.00 0.00 150
avg / total 0.95 0.98 0.97 6552
missing_people
precision recall f1-score support
0.0 0.99 1.00 0.99 6477
1.0 0.00 0.00 0.00 75
avg / total 0.98 0.99 0.98 6552
refugees
precision recall f1-score support
0.0 0.97 1.00 0.98 6336
1.0 0.00 0.00 0.00 216
avg / total 0.94 0.97 0.95 6552
death
precision recall f1-score support
0.0 0.95 1.00 0.98 6239
1.0 0.00 0.00 0.00 313
avg / total 0.91 0.95 0.93 6552
other_aid
precision recall f1-score support
0.0 0.88 1.00 0.93 5734
1.0 0.00 0.00 0.00 818
avg / total 0.77 0.88 0.82 6552
infrastructure_related
precision recall f1-score support
0.0 0.94 1.00 0.97 6147
1.0 0.00 0.00 0.00 405
avg / total 0.88 0.94 0.91 6552
transport
precision recall f1-score support
0.0 0.95 1.00 0.98 6242
1.0 0.00 0.00 0.00 310
avg / total 0.91 0.95 0.93 6552
buildings
precision recall f1-score support
0.0 0.95 1.00 0.98 6246
1.0 0.00 0.00 0.00 306
avg / total 0.91 0.95 0.93 6552
electricity
precision recall f1-score support
0.0 0.98 1.00 0.99 6424
1.0 0.00 0.00 0.00 128
avg / total 0.96 0.98 0.97 6552
tools
precision recall f1-score support
0.0 0.99 1.00 1.00 6510
1.0 0.00 0.00 0.00 42
avg / total 0.99 0.99 0.99 6552
hospitals
precision recall f1-score support
0.0 0.99 1.00 1.00 6497
1.0 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.99 6552
shops
precision recall f1-score support
0.0 0.99 1.00 1.00 6515
1.0 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6552
aid_centers
precision recall f1-score support
0.0 0.99 1.00 0.99 6479
1.0 0.00 0.00 0.00 73
avg / total 0.98 0.99 0.98 6552
other_infrastructure
precision recall f1-score support
0.0 0.96 1.00 0.98 6274
1.0 0.00 0.00 0.00 278
avg / total 0.92 0.96 0.94 6552
weather_related
precision recall f1-score support
0.0 0.73 1.00 0.84 4752
1.0 0.00 0.00 0.00 1800
avg / total 0.53 0.73 0.61 6552
floods
precision recall f1-score support
0.0 0.92 1.00 0.96 6011
1.0 0.00 0.00 0.00 541
avg / total 0.84 0.92 0.88 6552
storm
precision recall f1-score support
0.0 0.91 1.00 0.95 5936
1.0 0.00 0.00 0.00 616
avg / total 0.82 0.91 0.86 6552
fire
precision recall f1-score support
0.0 0.99 1.00 0.99 6485
1.0 0.00 0.00 0.00 67
avg / total 0.98 0.99 0.98 6552
earthquake
precision recall f1-score support
0.0 0.91 1.00 0.95 5986
1.0 0.00 0.00 0.00 566
avg / total 0.83 0.91 0.87 6552
cold
precision recall f1-score support
0.0 0.98 1.00 0.99 6431
1.0 0.00 0.00 0.00 121
avg / total 0.96 0.98 0.97 6552
other_weather
precision recall f1-score support
0.0 0.95 1.00 0.97 6201
1.0 0.00 0.00 0.00 351
avg / total 0.90 0.95 0.92 6552
direct_report
precision recall f1-score support
0.0 0.80 1.00 0.89 5263
1.0 0.00 0.00 0.00 1289
avg / total 0.65 0.80 0.72 6552
CPU times: user 8min 2s, sys: 94.5 ms, total: 8min 2s
Wall time: 8min 4s
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
new_pipeline = Pipeline([('vect', CountVectorizer(tokenizer=tokenize)),
('svd', TruncatedSVD()),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))])
new_model = new_pipeline.fit(X_train, y_train)
new_model
report(new_model, X_test, y_test)
new_pipeline.get_params()
new_parameters = {'svd__n_components': [2,3],
'clf__estimator__learning_rate': [0.8, 1.0],
'clf__estimator__n_estimators': [50, 75]}
new_cv = GridSearchCV(new_pipeline, param_grid=new_parameters)
new_cv
%%time
new_cv.fit(X_train, y_train)
report(new_cv, X_test, y_test)
###Output
related
precision recall f1-score support
0.0 0.00 0.00 0.00 1538
1.0 0.76 1.00 0.86 4962
2.0 0.00 0.00 0.00 52
avg / total 0.57 0.76 0.65 6552
request
precision recall f1-score support
0.0 0.83 1.00 0.91 5421
1.0 0.00 0.00 0.00 1131
avg / total 0.68 0.83 0.75 6552
offer
precision recall f1-score support
0.0 0.99 1.00 1.00 6519
1.0 0.00 0.00 0.00 33
avg / total 0.99 0.99 0.99 6552
aid_related
precision recall f1-score support
0.0 0.59 1.00 0.74 3837
1.0 0.18 0.00 0.00 2715
avg / total 0.42 0.58 0.43 6552
medical_help
precision recall f1-score support
0.0 0.92 1.00 0.96 6033
1.0 0.00 0.00 0.00 519
avg / total 0.85 0.92 0.88 6552
medical_products
precision recall f1-score support
0.0 0.95 1.00 0.97 6231
1.0 0.00 0.00 0.00 321
avg / total 0.90 0.95 0.93 6552
search_and_rescue
precision recall f1-score support
0.0 0.97 1.00 0.99 6372
1.0 0.00 0.00 0.00 180
avg / total 0.95 0.97 0.96 6552
security
precision recall f1-score support
0.0 0.98 1.00 0.99 6430
1.0 0.00 0.00 0.00 122
avg / total 0.96 0.98 0.97 6552
military
precision recall f1-score support
0.0 0.97 1.00 0.98 6342
1.0 0.00 0.00 0.00 210
avg / total 0.94 0.97 0.95 6552
child_alone
precision recall f1-score support
0.0 1.00 1.00 1.00 6552
avg / total 1.00 1.00 1.00 6552
water
precision recall f1-score support
0.0 0.94 1.00 0.97 6139
1.0 0.00 0.00 0.00 413
avg / total 0.88 0.94 0.91 6552
food
precision recall f1-score support
0.0 0.89 1.00 0.94 5800
1.0 0.00 0.00 0.00 752
avg / total 0.78 0.88 0.83 6552
shelter
precision recall f1-score support
0.0 0.91 1.00 0.95 5980
1.0 0.00 0.00 0.00 572
avg / total 0.83 0.91 0.87 6552
clothing
precision recall f1-score support
0.0 0.99 1.00 0.99 6457
1.0 0.00 0.00 0.00 95
avg / total 0.97 0.99 0.98 6552
money
precision recall f1-score support
0.0 0.98 1.00 0.99 6402
1.0 0.00 0.00 0.00 150
avg / total 0.95 0.98 0.97 6552
missing_people
precision recall f1-score support
0.0 0.99 1.00 0.99 6477
1.0 0.00 0.00 0.00 75
avg / total 0.98 0.99 0.98 6552
refugees
precision recall f1-score support
0.0 0.97 1.00 0.98 6336
1.0 0.00 0.00 0.00 216
avg / total 0.94 0.97 0.95 6552
death
precision recall f1-score support
0.0 0.95 1.00 0.98 6239
1.0 0.00 0.00 0.00 313
avg / total 0.91 0.95 0.93 6552
other_aid
precision recall f1-score support
0.0 0.88 1.00 0.93 5734
1.0 0.00 0.00 0.00 818
avg / total 0.77 0.88 0.82 6552
infrastructure_related
precision recall f1-score support
0.0 0.94 1.00 0.97 6147
1.0 0.00 0.00 0.00 405
avg / total 0.88 0.94 0.91 6552
transport
precision recall f1-score support
0.0 0.95 1.00 0.98 6242
1.0 0.00 0.00 0.00 310
avg / total 0.91 0.95 0.93 6552
buildings
precision recall f1-score support
0.0 0.95 1.00 0.98 6246
1.0 0.00 0.00 0.00 306
avg / total 0.91 0.95 0.93 6552
electricity
precision recall f1-score support
0.0 0.98 1.00 0.99 6424
1.0 0.00 0.00 0.00 128
avg / total 0.96 0.98 0.97 6552
tools
precision recall f1-score support
0.0 0.99 1.00 1.00 6510
1.0 0.00 0.00 0.00 42
avg / total 0.99 0.99 0.99 6552
hospitals
precision recall f1-score support
0.0 0.99 1.00 1.00 6497
1.0 0.00 0.00 0.00 55
avg / total 0.98 0.99 0.99 6552
shops
precision recall f1-score support
0.0 0.99 1.00 1.00 6515
1.0 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6552
aid_centers
precision recall f1-score support
0.0 0.99 1.00 0.99 6479
1.0 0.00 0.00 0.00 73
avg / total 0.98 0.99 0.98 6552
other_infrastructure
precision recall f1-score support
0.0 0.96 1.00 0.98 6274
1.0 0.00 0.00 0.00 278
avg / total 0.92 0.96 0.94 6552
weather_related
precision recall f1-score support
0.0 0.73 1.00 0.84 4752
1.0 0.62 0.01 0.01 1800
avg / total 0.70 0.73 0.61 6552
floods
precision recall f1-score support
0.0 0.92 1.00 0.96 6011
1.0 0.00 0.00 0.00 541
avg / total 0.84 0.92 0.88 6552
storm
precision recall f1-score support
0.0 0.91 1.00 0.95 5936
1.0 0.33 0.00 0.00 616
avg / total 0.85 0.91 0.86 6552
fire
precision recall f1-score support
0.0 0.99 1.00 0.99 6485
1.0 0.00 0.00 0.00 67
avg / total 0.98 0.99 0.98 6552
earthquake
precision recall f1-score support
0.0 0.91 1.00 0.96 5986
1.0 0.62 0.02 0.03 566
avg / total 0.89 0.91 0.88 6552
cold
precision recall f1-score support
0.0 0.98 1.00 0.99 6431
1.0 0.00 0.00 0.00 121
avg / total 0.96 0.98 0.97 6552
other_weather
precision recall f1-score support
0.0 0.95 1.00 0.97 6201
1.0 0.00 0.00 0.00 351
avg / total 0.90 0.95 0.92 6552
direct_report
precision recall f1-score support
0.0 0.80 1.00 0.89 5263
1.0 0.00 0.00 0.00 1289
avg / total 0.65 0.80 0.72 6552
CPU times: user 19min 27s, sys: 110 ms, total: 19min 27s
Wall time: 19min 29s
###Markdown
9. Export your model as a pickle file
###Code
with open('model.pkl', 'wb') as f:
pickle.dump(new_cv, f)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report,confusion_matrix, precision_score,\
recall_score,accuracy_score, f1_score, make_scorer
from sklearn.base import BaseEstimator, TransformerMixin
import nltk
from nltk import word_tokenize
import pickle
def load_data():
# load data from database
#engine = create_engine('sqlite:///DisasterResponse_new.db')
#df = pd.read_sql("SELECT * FROM DisasterResponse_new", engine)
df = pd.read_csv("DisasterResponse_new.csv")
X = df.message
y = df.loc[:,"related":"direct_report"]
category_names=y.columns
return X, y,category_names
###Output
_____no_output_____
###Markdown
Use the first five messages as a sample to take a look at the data
###Code
X,y,category_names=load_data()
print(X[:5])
y.head(5)
###Output
0 Weather update - a cold front from Cuba that c...
1 Is the Hurricane over or is it not over
2 Looking for someone but no name
3 UN reports Leogane 80-90 destroyed. Only Hospi...
4 says: west side of Haiti, rest of the country ...
Name: message, dtype: object
###Markdown
2. Normalize text data A sequence of functions used to clean up HTML markup, expand contractions, stem and lemmatize, remove special characters, get rid of stop words, remove accents from characters, etc. is defined in the notebook called Text_Normalization_Function. Run that notebook and the functions will be available in this notebook
###Code
%run ./Text_Normalization_Function.ipynb
###Output
Processing c:\users\nsun9\appdata\local\pip\cache\wheels\4f\85\2a\67a30aa6cf144eca0c159f337ce5166df2213c4cde9e699cbe\html_parser-0.2-py3-none-any.whl
Requirement already satisfied: ply in d:\programs\anaconda3\lib\site-packages (from html.parser) (3.11)
Installing collected packages: html.parser
Successfully installed html.parser
Requirement already satisfied: nltk in d:\programs\anaconda3\lib\site-packages (3.5)
Requirement already satisfied: click in d:\programs\anaconda3\lib\site-packages (from nltk) (7.1.2)
Requirement already satisfied: tqdm in d:\programs\anaconda3\lib\site-packages (from nltk) (4.47.0)
Requirement already satisfied: joblib in d:\programs\anaconda3\lib\site-packages (from nltk) (0.16.0)
Requirement already satisfied: regex in d:\programs\anaconda3\lib\site-packages (from nltk) (2020.6.8)
Requirement already satisfied: pattern3 in d:\programs\anaconda3\lib\site-packages (3.0.0)
Requirement already satisfied: docx in d:\programs\anaconda3\lib\site-packages (from pattern3) (0.2.4)
Requirement already satisfied: pdfminer.six in d:\programs\anaconda3\lib\site-packages (from pattern3) (20201018)
Requirement already satisfied: beautifulsoup4 in d:\programs\anaconda3\lib\site-packages (from pattern3) (4.9.1)
Requirement already satisfied: simplejson in d:\programs\anaconda3\lib\site-packages (from pattern3) (3.17.2)
Requirement already satisfied: pdfminer3k in d:\programs\anaconda3\lib\site-packages (from pattern3) (1.3.4)
Requirement already satisfied: cherrypy in d:\programs\anaconda3\lib\site-packages (from pattern3) (18.6.0)
Requirement already satisfied: feedparser in d:\programs\anaconda3\lib\site-packages (from pattern3) (6.0.2)
Requirement already satisfied: Pillow>=2.0 in d:\programs\anaconda3\lib\site-packages (from docx->pattern3) (7.2.0)
Requirement already satisfied: lxml in d:\programs\anaconda3\lib\site-packages (from docx->pattern3) (4.5.2)
Requirement already satisfied: chardet; python_version > "3.0" in d:\programs\anaconda3\lib\site-packages (from pdfminer.six->pattern3) (3.0.4)
Requirement already satisfied: sortedcontainers in d:\programs\anaconda3\lib\site-packages (from pdfminer.six->pattern3) (2.2.2)
Requirement already satisfied: cryptography in d:\programs\anaconda3\lib\site-packages (from pdfminer.six->pattern3) (2.9.2)
Requirement already satisfied: soupsieve>1.2 in d:\programs\anaconda3\lib\site-packages (from beautifulsoup4->pattern3) (2.0.1)
Requirement already satisfied: ply in d:\programs\anaconda3\lib\site-packages (from pdfminer3k->pattern3) (3.11)
Requirement already satisfied: zc.lockfile in d:\programs\anaconda3\lib\site-packages (from cherrypy->pattern3) (2.0)
Requirement already satisfied: pywin32; sys_platform == "win32" in d:\programs\anaconda3\lib\site-packages (from cherrypy->pattern3) (227)
Requirement already satisfied: more-itertools in d:\programs\anaconda3\lib\site-packages (from cherrypy->pattern3) (8.4.0)
Requirement already satisfied: cheroot>=8.2.1 in d:\programs\anaconda3\lib\site-packages (from cherrypy->pattern3) (8.5.2)
Requirement already satisfied: portend>=2.1.1 in d:\programs\anaconda3\lib\site-packages (from cherrypy->pattern3) (2.7.1)
Requirement already satisfied: jaraco.collections in d:\programs\anaconda3\lib\site-packages (from cherrypy->pattern3) (3.3.0)
Requirement already satisfied: sgmllib3k in d:\programs\anaconda3\lib\site-packages (from feedparser->pattern3) (1.0.0)
Requirement already satisfied: six>=1.4.1 in d:\programs\anaconda3\lib\site-packages (from cryptography->pdfminer.six->pattern3) (1.15.0)
Requirement already satisfied: cffi!=1.11.3,>=1.8 in d:\programs\anaconda3\lib\site-packages (from cryptography->pdfminer.six->pattern3) (1.14.0)
Requirement already satisfied: setuptools in d:\programs\anaconda3\lib\site-packages (from zc.lockfile->cherrypy->pattern3) (49.2.0.post20200714)
Requirement already satisfied: jaraco.functools in d:\programs\anaconda3\lib\site-packages (from cheroot>=8.2.1->cherrypy->pattern3) (3.3.0)
Requirement already satisfied: tempora>=1.8 in d:\programs\anaconda3\lib\site-packages (from portend>=2.1.1->cherrypy->pattern3) (4.0.1)
Requirement already satisfied: jaraco.text in d:\programs\anaconda3\lib\site-packages (from jaraco.collections->cherrypy->pattern3) (3.5.0)
Requirement already satisfied: jaraco.classes in d:\programs\anaconda3\lib\site-packages (from jaraco.collections->cherrypy->pattern3) (3.2.1)
Requirement already satisfied: pycparser in d:\programs\anaconda3\lib\site-packages (from cffi!=1.11.3,>=1.8->cryptography->pdfminer.six->pattern3) (2.20)
Requirement already satisfied: pytz in d:\programs\anaconda3\lib\site-packages (from tempora>=1.8->portend>=2.1.1->cherrypy->pattern3) (2020.1)
###Markdown
The normalize_corpus function can be used as a customized preprocessor in CountVectorizer. **preprocessor** should be a callable, default=None. It overrides the preprocessing (strip_accents and lowercase) stage while preserving the tokenizing and n-grams generation steps, and it should return a text **(not a series or list)**. However, if a function is used to normalize the corpus before feeding it to CountVectorizer, the function should return a series or list. Use the first five messages as a sample to take a look at the result after CountVectorizer
###Code
bow_vectorizer = CountVectorizer(preprocessor=normalize_corpus)
NORM_corpus_train_bow = bow_vectorizer.fit_transform(X[:5])
NORM_corpus_train_bow_table= pd.DataFrame(data = NORM_corpus_train_bow.todense(),
columns = bow_vectorizer.get_feature_names())
NORM_corpus_train_bow_table.head()
###Output
_____no_output_____
###Markdown
3. Add other features besides the TF-IDF Other characteristics of the text, such as length, may also affect the results. I defined a function to count the number of tokens contained in the text
###Code
class Text_Length_Extractor(BaseEstimator, TransformerMixin):
def get_length(self, text):
length=len(word_tokenize(text))
return length
def __init__(self):
pass
def fit(self, X, y=None):
return self
def transform(self, X):
X_length = pd.Series(X).apply(self.get_length)
# In order to use FeatureUnion to combine the Text_Length_Extractor with the text_pipeline,
# We must convert X_length into a dataframe. Otherwise, ValueError: blocks[0,:] has incompatible row dimensions.
return pd.DataFrame(X_length)
###Output
_____no_output_____
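###Markdown
A quick check that the custom transformer behaves as intended (an illustrative sketch, assuming `Text_Length_Extractor` from the cell above):
###Code
# The transformer returns a one-column DataFrame holding the token count per message.
length_extractor = Text_Length_Extractor()
print(length_extractor.transform(["Need water and food in Jacmel", "Help"]))
###Output
_____no_output_____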
###Markdown
4. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline',Pipeline([
('vect', CountVectorizer(preprocessor=normalize_corpus)),
('tfidf', TfidfTransformer())
])),
('text_length',Text_Length_Extractor())
])),
('clf', MultiOutputClassifier(estimator=RandomForestClassifier(random_state=42)))
])
###Output
_____no_output_____
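###Markdown
To see how `FeatureUnion` combines the two branches, the feature-extraction step can be fitted on a tiny toy sample on its own (a sketch for illustration; `clone` is used so the real pipeline is left untouched, and `normalize_corpus` comes from the Text_Normalization_Function notebook run above):
###Code
from sklearn.base import clone

# The combined matrix has one column per tf-idf vocabulary term plus one
# extra column holding the token count from Text_Length_Extractor.
toy_msgs = ["Need food and water", "Bridge collapsed near the hospital"]
toy_features = clone(pipeline.named_steps['features']).fit_transform(toy_msgs)
print(toy_features.shape)  # (2, n_vocabulary_terms + 1)
###Output
_____no_output_____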
###Markdown
5. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
6. Test your modelReport the f1 score, precision and recall for each output category of the dataset. The y_pred is a numpy array with a shape of (6554, 36), so we have to access it by referring to its index number.
###Code
print(y_pred)
print(y_pred[:,0])
###Output
[[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
...
[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]]
[1 1 1 ... 1 1 1]
###Markdown
The y_test is a pandas DataFrame; if we want to access it by referring to its column number, we can use df.iloc (integer-location based indexing)
###Code
y_test.head()
###Output
_____no_output_____
###Markdown
Parameter average: required for multiclass/multilabel targets. Binary: only report results for the class specified by pos_label (default is 1). Macro average (the unweighted mean per label), weighted average (the support-weighted mean per label). Take the F1 score as an example: macro F1 computes the F1 per class and aggregates it without weights, (F1_class1 + F1_class2 + ... + F1_classN) / N, which results in a bigger penalisation when the model does not perform well on the minority classes (when there is imbalance). The weighted F1 score computes the F1 score for each class independently but, when adding them together, uses a weight that depends on the number of true labels of each class: F1_class1*W1 + F1_class2*W2 + ... + F1_classN*WN, therefore favouring the majority class.
###Code
print(classification_report(y_test.iloc[:,0], y_pred[:,0]))
###Output
precision recall f1-score support
0 0.70 0.38 0.50 1566
1 0.83 0.95 0.89 4988
accuracy 0.81 6554
macro avg 0.76 0.67 0.69 6554
weighted avg 0.80 0.81 0.79 6554
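###Markdown
The difference between the averaging modes described above is easiest to see on a tiny imbalanced example (an illustrative sketch, not part of the original analysis):
###Code
# 8 negatives, 2 positives; the classifier misses both positives.
y_true_toy = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred_toy = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print("macro F1   :", f1_score(y_true_toy, y_pred_toy, average='macro'))     # dragged down by the missed minority class
print("weighted F1:", f1_score(y_true_toy, y_pred_toy, average='weighted'))  # dominated by the majority class
###Output
_____no_output_____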
###Markdown
In this project, I use the default average parameter, binary. The recall and precision for some small categories such as offer and child_alone are almost zero: the classifier classified almost everything as 0 due to the imbalance in the training data. Unlike the common problem with only one column of y, this project has 36 columns of y. In order to evaluate the prediction of each column, I use a for loop.
###Code
metrics_list_all=[]
for col in range(y_test.shape[1]):
accuracy = accuracy_score(y_test.iloc[:,col], y_pred[:,col])
precision=precision_score(y_test.iloc[:,col], y_pred[:,col])
recall = recall_score(y_test.iloc[:,col], y_pred[:,col])
f_1 = f1_score(y_test.iloc[:,col], y_pred[:,col])
metrics_list=[accuracy,precision,recall,f_1]
metrics_list_all.append(metrics_list)
metrics_df=pd.DataFrame(metrics_list_all,index=category_names,columns=["Accuracy","Precision","Recall","F_1"])
print(metrics_df)
###Output
D:\Programs\Anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
D:\Programs\Anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 due to no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
D:\Programs\Anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1464: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(
###Markdown
If I calculate the accuracy score directly on the 36-column arrays, it gives back a much lower number than the per-column averages
###Code
accuracy_score(y_test.values, y_pred),pipeline.score(X_test,y_test)
###Output
_____no_output_____
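###Markdown
For 2-D label arrays, `accuracy_score` (and `pipeline.score`) report subset accuracy: a message only counts as correct when all 36 categories are predicted correctly, which is why the numbers above are so low. A sketch (assuming `y_test` and `y_pred` from above) reproducing that figure by hand:
###Code
# Exact-match (subset) accuracy: every one of the 36 labels must be right.
exact_match = (y_pred == y_test.values).all(axis=1).mean()
print("exact-match (subset) accuracy:", exact_match)
###Output
_____no_output_____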
###Markdown
However, if we use reshape to flatten the data from 36 different columns to 1 column (appending the data of each column one after the other), the result is the same as using a for loop to calculate the accuracy score of each column and then taking the average. numpy.reshape(a, newshape, order='C') gives a new shape to an array without changing its data.
###Code
accuracy_score(y_test.values.reshape(-1,1), y_pred.reshape(-1,1))
print(("The average accuracy score among all categories is {:.4f},\nthe average precision score score among all categories is {:.4f},\nthe average recall score among all categories is {:.4f},\nthe average F 1 score among all categories is {:.4f}").format(metrics_df.mean()["Accuracy"],metrics_df.mean()["Precision"],metrics_df.mean()["Recall"],metrics_df.mean()["F_1"]))
###Output
The average accuracy score among all categories is 0.9496,
the average precision score score among all categories is 0.6161,
the average recall score among all categories is 0.2089,
the average F 1 score among all categories is 0.2582
###Markdown
7. Improve your modelUse grid search to find better parameters.
###Code
# Define a score used in scoring parameter
def avg_accuracy(y_test, y_pred):
"""
This is the score_func used in make_scorer, which would be used in in GridSearchCV
"""
avg_accuracy=accuracy_score(y_test.values.reshape(-1,1), y_pred.reshape(-1,1))
return avg_accuracy
avg_accuracy_cv = make_scorer(avg_accuracy)
# Take a look at what parameters are available to be tuned
list(pipeline.get_params())
parameters = {
#'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'clf__estimator__max_depth': [15, 30],
'clf__estimator__n_estimators': [100, 250]}
cv = GridSearchCV(
pipeline,
param_grid=parameters,
cv=3,
scoring=avg_accuracy_cv,
verbose=3)
cv.fit(X_train, y_train)
###Output
Fitting 3 folds for each of 4 candidates, totalling 12 fits
[CV] clf__estimator__max_depth=15, clf__estimator__n_estimators=100 ..
###Markdown
8. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
def evaluate_model(model, X_test, y_test,category_names):
"""
The evaluate_model function will return the accuracy, precision, and recall, and f1 scores for each output category of the dataset.
INPUTS:
    model - a trained model for evaluation
    X_test - a pandas DataFrame or Numpy array, contains the untouched values of the features.
    y_test - a pandas DataFrame, contains the true category labels of the messages.
    category_names - list of the 36 category names, used as the row index of the output.
    OUTPUT:
    prints metrics_df, a pandas DataFrame that contains the accuracy, precision, recall and f1 scores for each output category of the dataset, plus their averages.
"""
y_pred=model.predict(X_test)
metrics_list_all=[]
for col in range(y_test.shape[1]):
accuracy = accuracy_score(y_test.iloc[:,col], y_pred[:,col])
precision=precision_score(y_test.iloc[:,col], y_pred[:,col])
recall = recall_score(y_test.iloc[:,col], y_pred[:,col])
f_1 = f1_score(y_test.iloc[:,col], y_pred[:,col])
metrics_list=[accuracy,precision,recall,f_1]
metrics_list_all.append(metrics_list)
metrics_df=pd.DataFrame(metrics_list_all,index=category_names,columns=["Accuracy","Precision","Recall","F_1"])
print(metrics_df)
print("----------------------------------------------------------------------")
print(("The average accuracy score among all categories is {:.4f},\nthe average precision score score among all categories is {:.4f},\nthe average recall score among all categories is {:.4f},\nthe average F 1 score among all categories is {:.4f}").format(metrics_df.mean()["Accuracy"],metrics_df.mean()["Precision"],metrics_df.mean()["Recall"],metrics_df.mean()["F_1"]))
return None
# Get the best model and store it as best_randomforest
best_randomforest=cv.best_estimator_
evaluate_model(best_randomforest, X_test, y_test,category_names)
###Output
D:\Programs\Anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
D:\Programs\Anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1221: UndefinedMetricWarning: Recall is ill-defined and being set to 0.0 due to no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
D:\Programs\Anaconda3\lib\site-packages\sklearn\metrics\_classification.py:1464: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no true nor predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(
###Markdown
9. Export your model as a pickle file **Pickle** is the standard way of serializing objects in Python.You can use the pickle operation to serialize your machine learning algorithms and save the serialized format to a file.Later you can load this file to deserialize your model and use it to make new predictions.
###Code
filename = 'best_randomforest.pkl'
# persist the tuned model selected by the grid search (best_randomforest), not the untuned pipeline
with open(filename, 'wb') as f:
    pickle.dump(best_randomforest, f)
###Output
_____no_output_____
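###Markdown
As described above, the saved file can later be deserialized and used for new predictions (a minimal sketch, assuming `best_randomforest.pkl` was written by the cell above):
###Code
# Load the serialized model back and classify a new message.
with open('best_randomforest.pkl', 'rb') as f:
    loaded_model = pickle.load(f)
print(loaded_model.predict(["We need water and medical supplies after the earthquake"]))
###Output
_____no_output_____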
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
import nltk
from nltk.corpus import stopwords
nltk.download(['punkt', 'wordnet','stopwords'])
# import libraries
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sqlalchemy import create_engine
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql('select * from InsertTableName',engine)
X = df.message.values
Y = df.drop(['id','message','original','genre'], axis=1).values
set(df.columns.values)-set(['id','message','original','genre'])
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
stop_words = stopwords.words("english")
lemmatizer = WordNetLemmatizer()
def tokenize(text):
# normalize case and remove punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
    # lemmatize and remove stop words
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('mic', MultiOutputClassifier(RandomForestClassifier(n_estimators = 10)))
])
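# Note: the step names above ('vect', 'tfidf', 'mic') become prefixes of the keys used for
# hyperparameter tuning later, e.g. 'mic__estimator__n_estimators' reaches the n_estimators
# argument of the RandomForestClassifier wrapped in MultiOutputClassifier.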
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
col_names = df.drop(['id','message','original','genre'], 1).columns.values
len(col_names)
y_pred_df = pd.DataFrame(y_pred, columns=col_names)
y_test_df = pd.DataFrame(y_test, columns=col_names)
col_names[0]
y_pred_df['related'].nunique()
Y_df = pd.DataFrame(Y, columns=col_names)
Y_df.query('related == 2')['related']
def display_results(y_test, y_pred):
for col in y_pred.columns:
print(classification_report(y_test[col].values,y_pred[col].values))
display_results(y_test_df, y_pred_df)
###Output
precision recall f1-score support
0 0.64 0.50 0.56 1466
1 0.86 0.91 0.89 5041
2 0.29 0.40 0.34 47
micro avg 0.82 0.82 0.82 6554
macro avg 0.59 0.60 0.59 6554
weighted avg 0.81 0.82 0.81 6554
precision recall f1-score support
0 0.89 0.98 0.93 5433
1 0.79 0.44 0.57 1121
micro avg 0.88 0.88 0.88 6554
macro avg 0.84 0.71 0.75 6554
weighted avg 0.88 0.88 0.87 6554
precision recall f1-score support
0 1.00 1.00 1.00 6528
1 0.00 0.00 0.00 26
micro avg 1.00 1.00 1.00 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 1.00 0.99 6554
precision recall f1-score support
0 0.75 0.85 0.79 3797
1 0.74 0.60 0.66 2757
micro avg 0.74 0.74 0.74 6554
macro avg 0.74 0.72 0.73 6554
weighted avg 0.74 0.74 0.74 6554
precision recall f1-score support
0 0.92 0.99 0.96 6004
1 0.55 0.08 0.15 550
micro avg 0.92 0.92 0.92 6554
macro avg 0.74 0.54 0.55 6554
weighted avg 0.89 0.92 0.89 6554
precision recall f1-score support
0 0.95 1.00 0.97 6209
1 0.65 0.09 0.16 345
micro avg 0.95 0.95 0.95 6554
macro avg 0.80 0.54 0.57 6554
weighted avg 0.94 0.95 0.93 6554
precision recall f1-score support
0 0.98 1.00 0.99 6373
1 0.68 0.14 0.23 181
micro avg 0.97 0.97 0.97 6554
macro avg 0.83 0.57 0.61 6554
weighted avg 0.97 0.97 0.97 6554
precision recall f1-score support
0 0.98 1.00 0.99 6430
1 0.00 0.00 0.00 124
micro avg 0.98 0.98 0.98 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.96 0.98 0.97 6554
precision recall f1-score support
0 0.97 1.00 0.98 6345
1 0.63 0.08 0.14 209
micro avg 0.97 0.97 0.97 6554
macro avg 0.80 0.54 0.56 6554
weighted avg 0.96 0.97 0.96 6554
precision recall f1-score support
0 1.00 1.00 1.00 6554
micro avg 1.00 1.00 1.00 6554
macro avg 1.00 1.00 1.00 6554
weighted avg 1.00 1.00 1.00 6554
precision recall f1-score support
0 0.96 0.99 0.98 6139
1 0.82 0.40 0.54 415
micro avg 0.96 0.96 0.96 6554
macro avg 0.89 0.70 0.76 6554
weighted avg 0.95 0.96 0.95 6554
precision recall f1-score support
0 0.94 0.99 0.96 5826
1 0.83 0.52 0.64 728
micro avg 0.94 0.94 0.94 6554
macro avg 0.89 0.76 0.80 6554
weighted avg 0.93 0.94 0.93 6554
precision recall f1-score support
0 0.94 0.99 0.96 5956
1 0.82 0.32 0.46 598
micro avg 0.93 0.93 0.93 6554
macro avg 0.88 0.66 0.71 6554
weighted avg 0.93 0.93 0.92 6554
precision recall f1-score support
0 0.99 1.00 0.99 6455
1 0.71 0.22 0.34 99
micro avg 0.99 0.99 0.99 6554
macro avg 0.85 0.61 0.67 6554
weighted avg 0.98 0.99 0.98 6554
precision recall f1-score support
0 0.98 1.00 0.99 6391
1 0.71 0.03 0.06 163
micro avg 0.98 0.98 0.98 6554
macro avg 0.85 0.52 0.52 6554
weighted avg 0.97 0.98 0.96 6554
precision recall f1-score support
0 0.99 1.00 0.99 6482
1 0.00 0.00 0.00 72
micro avg 0.99 0.99 0.99 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.98 0.99 0.98 6554
precision recall f1-score support
0 0.97 1.00 0.98 6324
1 0.60 0.08 0.14 230
micro avg 0.97 0.97 0.97 6554
macro avg 0.78 0.54 0.56 6554
weighted avg 0.95 0.97 0.95 6554
precision recall f1-score support
0 0.96 1.00 0.98 6256
1 0.79 0.21 0.33 298
micro avg 0.96 0.96 0.96 6554
macro avg 0.88 0.60 0.65 6554
weighted avg 0.96 0.96 0.95 6554
precision recall f1-score support
0 0.87 0.99 0.93 5704
1 0.46 0.04 0.07 850
micro avg 0.87 0.87 0.87 6554
macro avg 0.67 0.52 0.50 6554
weighted avg 0.82 0.87 0.82 6554
precision recall f1-score support
0 0.93 1.00 0.96 6121
1 0.13 0.00 0.01 433
micro avg 0.93 0.93 0.93 6554
macro avg 0.53 0.50 0.49 6554
weighted avg 0.88 0.93 0.90 6554
precision recall f1-score support
0 0.96 1.00 0.98 6248
1 0.59 0.05 0.10 306
micro avg 0.95 0.95 0.95 6554
macro avg 0.77 0.53 0.54 6554
weighted avg 0.94 0.95 0.94 6554
precision recall f1-score support
0 0.96 1.00 0.98 6234
1 0.71 0.17 0.28 320
micro avg 0.96 0.96 0.96 6554
macro avg 0.83 0.58 0.63 6554
weighted avg 0.95 0.96 0.94 6554
precision recall f1-score support
0 0.98 1.00 0.99 6415
1 0.79 0.08 0.14 139
micro avg 0.98 0.98 0.98 6554
macro avg 0.88 0.54 0.57 6554
weighted avg 0.98 0.98 0.97 6554
precision recall f1-score support
0 0.99 1.00 1.00 6509
1 0.00 0.00 0.00 45
micro avg 0.99 0.99 0.99 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 0.99 0.99 6554
precision recall f1-score support
0 0.99 1.00 0.99 6474
1 0.00 0.00 0.00 80
micro avg 0.99 0.99 0.99 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.98 0.99 0.98 6554
precision recall f1-score support
0 1.00 1.00 1.00 6525
1 0.00 0.00 0.00 29
micro avg 1.00 1.00 1.00 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 1.00 0.99 6554
precision recall f1-score support
0 0.99 1.00 0.99 6469
1 0.00 0.00 0.00 85
micro avg 0.99 0.99 0.99 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.97 0.99 0.98 6554
precision recall f1-score support
0 0.96 1.00 0.98 6279
1 0.33 0.01 0.01 275
micro avg 0.96 0.96 0.96 6554
macro avg 0.65 0.50 0.50 6554
weighted avg 0.93 0.96 0.94 6554
precision recall f1-score support
0 0.87 0.95 0.91 4750
1 0.83 0.62 0.71 1804
micro avg 0.86 0.86 0.86 6554
macro avg 0.85 0.79 0.81 6554
weighted avg 0.86 0.86 0.85 6554
precision recall f1-score support
0 0.95 1.00 0.97 6042
1 0.87 0.33 0.48 512
micro avg 0.94 0.94 0.94 6554
macro avg 0.91 0.66 0.73 6554
weighted avg 0.94 0.94 0.93 6554
precision recall f1-score support
0 0.94 0.99 0.96 5954
1 0.74 0.40 0.52 600
micro avg 0.93 0.93 0.93 6554
macro avg 0.84 0.69 0.74 6554
weighted avg 0.92 0.93 0.92 6554
precision recall f1-score support
0 0.99 1.00 1.00 6488
1 1.00 0.02 0.03 66
micro avg 0.99 0.99 0.99 6554
macro avg 1.00 0.51 0.51 6554
weighted avg 0.99 0.99 0.99 6554
precision recall f1-score support
0 0.97 0.99 0.98 5942
1 0.89 0.72 0.80 612
micro avg 0.97 0.97 0.97 6554
macro avg 0.93 0.86 0.89 6554
weighted avg 0.96 0.97 0.96 6554
precision recall f1-score support
0 0.98 1.00 0.99 6406
1 1.00 0.09 0.16 148
micro avg 0.98 0.98 0.98 6554
macro avg 0.99 0.54 0.58 6554
weighted avg 0.98 0.98 0.97 6554
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
list(pipeline.get_params().keys())
parameters = {
#'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.5, 0.75, 1.0),
#'vect__max_features': (None, 5000, 10000),
'tfidf__use_idf': (True, False),
'mic__estimator__n_estimators': [10, 50],
#'mic__estimator__min_samples_split': [2, 3, 4]
}
cv = GridSearchCV(pipeline, parameters)
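# Sketch of an alternative construction (an editorial suggestion, not part of the original run):
# passing cv explicitly avoids the FutureWarning about the changing default, and n_jobs=-1
# parallelises the search across all cores, e.g.
# cv = GridSearchCV(pipeline, parameters, cv=3, n_jobs=-1)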
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train,y_train)
y_pred = cv.predict(X_test)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\model_selection\_split.py:2053: FutureWarning: You should specify a value for 'cv' instead of relying on the default value. The default value will change from 3 to 5 in version 0.22.
warnings.warn(CV_WARNING, FutureWarning)
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
import pickle
pickle.dump(cv, open("model.pickle", "wb"))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
import re
import nltk
nltk.download('punkt')
nltk.download('wordnet')
#from nltk.corpus import wordnet as wn
from nltk.stem.wordnet import WordNetLemmatizer
#from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
#from nltk.stem.porter import PorterStemmer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, classification_report
from sklearn.svm import SVC,LinearSVC
from sklearn.tree import DecisionTreeClassifier
#import warnings
#warnings.simplefilter('ignore')
# load data from database
engine = create_engine('sqlite:///MessagesDataSet.db')
df = pd.read_sql("SELECT * FROM MessagesDataSet", engine)
X = df['message']
Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
df.head()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
    # extract words from text
tokens = word_tokenize(text)
#transform word to its base
lemmatizer = WordNetLemmatizer()
clean_tokens = []
# clean words
for token in tokens:
clean_token = lemmatizer.lemmatize(token).lower().strip()
clean_tokens.append(clean_token)
return clean_tokens
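# Sanity check on a made-up message containing a URL (illustrative, not from the dataset):
tokenize("Flooding reported near the bridge, see http://example.com/report for details")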
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 42, test_size = 0.2)
pipeline.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
# target_names is sliced to stay aligned with the columns kept by iloc[:, 1:]
print(classification_report(Y_test.iloc[:,1:].values, np.array([x[1:] for x in y_pred]), target_names=Y.columns[1:]))
###Output
precision recall f1-score support
related 0.82 0.39 0.53 895
request 0.00 0.00 0.00 26
offer 0.76 0.53 0.62 2131
aid_related 0.55 0.08 0.14 422
medical_help 0.77 0.09 0.15 270
medical_products 0.67 0.11 0.19 127
search_and_rescue 0.25 0.01 0.02 88
security 0.52 0.08 0.14 155
military 0.00 0.00 0.00 0
child_alone 0.83 0.22 0.35 339
water 0.90 0.26 0.40 595
food 0.84 0.34 0.48 470
shelter 0.75 0.08 0.15 73
clothing 0.89 0.08 0.14 104
money 0.00 0.00 0.00 60
missing_people 0.56 0.03 0.06 171
refugees 0.79 0.11 0.19 237
death 0.55 0.03 0.05 695
other_aid 0.00 0.00 0.00 328
infrastructure_related 0.61 0.05 0.09 240
transport 0.93 0.05 0.10 267
buildings 0.82 0.07 0.14 122
electricity 0.00 0.00 0.00 32
tools 0.00 0.00 0.00 46
hospitals 0.00 0.00 0.00 22
shops 0.00 0.00 0.00 67
aid_centers 0.17 0.00 0.01 223
other_infrastructure 0.84 0.51 0.63 1438
weather_related 0.89 0.36 0.51 411
floods 0.80 0.38 0.51 486
storm 0.00 0.00 0.00 53
fire 0.89 0.69 0.78 478
earthquake 0.70 0.06 0.11 117
cold 0.85 0.04 0.08 276
other_weather 0.76 0.29 0.42 1021
avg / total 0.72 0.30 0.39 12485
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {
'clf__estimator__n_estimators':[40, 50],
}
cv = GridSearchCV(pipeline, parameters)
model = cv.fit(X_train, Y_train)
model.best_params_
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_pred_after_tuning = model.predict(X_test)
print(classification_report(Y_test.iloc[:,1:].values, np.array([x[1:] for x in y_pred_after_tuning]), target_names=Y.columns[1:]))
###Output
precision recall f1-score support
related 0.88 0.41 0.56 895
request 0.00 0.00 0.00 26
offer 0.78 0.60 0.68 2131
aid_related 0.64 0.06 0.12 422
medical_help 0.74 0.05 0.10 270
medical_products 0.29 0.02 0.03 127
search_and_rescue 0.20 0.01 0.02 88
security 0.75 0.10 0.17 155
military 0.00 0.00 0.00 0
child_alone 0.89 0.23 0.37 339
water 0.89 0.36 0.51 595
food 0.89 0.21 0.34 470
shelter 1.00 0.05 0.10 73
clothing 0.80 0.08 0.14 104
money 0.00 0.00 0.00 60
missing_people 0.25 0.01 0.01 171
refugees 0.83 0.12 0.21 237
death 0.72 0.03 0.05 695
other_aid 0.00 0.00 0.00 328
infrastructure_related 0.77 0.08 0.15 240
transport 0.87 0.05 0.09 267
buildings 1.00 0.04 0.08 122
electricity 0.00 0.00 0.00 32
tools 0.00 0.00 0.00 46
hospitals 0.00 0.00 0.00 22
shops 0.00 0.00 0.00 67
aid_centers 0.00 0.00 0.00 223
other_infrastructure 0.87 0.59 0.70 1438
weather_related 0.91 0.38 0.53 411
floods 0.76 0.39 0.52 486
storm 0.00 0.00 0.00 53
fire 0.88 0.73 0.80 478
earthquake 0.90 0.08 0.14 117
cold 0.78 0.03 0.05 276
other_weather 0.85 0.33 0.48 1021
avg / total 0.75 0.33 0.42 12485
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
dtree_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(DecisionTreeClassifier()))
])
dtree_pipeline.fit(X_train, Y_train)
dtree_y_pred = dtree_pipeline.predict(X_test)
print(classification_report(Y_test.iloc[:,1:].values, np.array([x[1:] for x in dtree_y_pred]), target_names=Y.columns[1:]))
###Output
precision recall f1-score support
related 0.58 0.56 0.57 895
request 0.00 0.00 0.00 26
offer 0.63 0.63 0.63 2131
aid_related 0.34 0.32 0.33 422
medical_help 0.42 0.35 0.38 270
medical_products 0.20 0.18 0.19 127
search_and_rescue 0.05 0.05 0.05 88
security 0.39 0.39 0.39 155
military 0.00 0.00 0.00 0
child_alone 0.67 0.64 0.65 339
water 0.73 0.74 0.74 595
food 0.61 0.57 0.59 470
shelter 0.58 0.44 0.50 73
clothing 0.38 0.41 0.39 104
money 0.40 0.28 0.33 60
missing_people 0.34 0.33 0.34 171
refugees 0.57 0.59 0.58 237
death 0.31 0.27 0.29 695
other_aid 0.17 0.15 0.16 328
infrastructure_related 0.29 0.27 0.28 240
transport 0.43 0.37 0.40 267
buildings 0.38 0.30 0.33 122
electricity 0.05 0.03 0.04 32
tools 0.11 0.13 0.12 46
hospitals 0.00 0.00 0.00 22
shops 0.13 0.10 0.12 67
aid_centers 0.12 0.12 0.12 223
other_infrastructure 0.72 0.71 0.71 1438
weather_related 0.58 0.59 0.58 411
floods 0.61 0.63 0.62 486
storm 0.25 0.17 0.20 53
fire 0.78 0.78 0.78 478
earthquake 0.54 0.48 0.51 117
cold 0.22 0.17 0.19 276
other_weather 0.52 0.50 0.51 1021
avg / total 0.53 0.51 0.52 12485
###Markdown
9. Export your model as a pickle file
###Code
import pickle
pickle.dump(pipeline, open('ml_disaster_response_model.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem.wordnet import WordNetLemmatizer
import nltk
nltk.download(['punkt', 'wordnet','stopwords','averaged_perceptron_tagger', 'maxent_ne_chunker'])
from nltk.corpus import stopwords
import re
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# load data from database
engine = create_engine('sqlite:///data/DisasterResponses.db')
df = pd.read_sql_table('Response', engine)
# check imported table
df.head(2)
# specify dependent and independent variables
X = df.loc[:, 'message']
y = df.iloc[:, 4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
    # remove punctuation and special characters
text = re.sub(r'[^\w]', ' ', text)
# tokenize text
tokens = word_tokenize(text)
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# lemmatize, normalize case, and remove leading/trailing white space
stop_words = stopwords.words('english')
final_tokens = [lemmatizer.lemmatize(token).strip().lower()
for token in tokens
if token not in stop_words and len(token) > 2]
return final_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('count', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('multi_clf', MultiOutputClassifier(RandomForestClassifier(random_state=42, n_jobs=-1, n_estimators=20))),
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
# create train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state=42)
# train pipeline
pipeline.fit(X_train, y_train)
df.loc[10, 'message']
# We test our pipeline with a message
msg = df.loc[10, 'message']
prediction = pipeline.predict([msg])
print('Prediction:', y_train.columns.values[(prediction.flatten()==1)])
print('Actual:', df.columns.values[df.iloc[10, :].values == 1])
###Output
Prediction: ['related' 'request' 'aid_related' 'medical_help' 'medical_products'
'water' 'food' 'other_aid' 'infrastructure_related' 'transport'
'buildings' 'other_infrastructure' 'weather_related' 'floods'
'direct_report']
Actual: ['related' 'request' 'aid_related' 'medical_help' 'medical_products'
'water' 'food' 'other_aid' 'infrastructure_related' 'transport'
'buildings' 'other_infrastructure' 'weather_related' 'floods'
'direct_report']
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Make prediction
pred = pipeline.predict(X_test)
from sklearn.metrics import classification_report
# collect the classification reports
classification_reports = []
for index in range(len(y_test.columns)):
classification_reports.append(classification_report(y_test.values[:, index], pred[:, index], output_dict=True))
# create dataframe for cleaner printing
results = pd.DataFrame(
{'micro avg': [report['micro avg']['f1-score'] for report in classification_reports],
'macro avg': [report['macro avg']['f1-score'] for report in classification_reports],
'weighted avg': [report['weighted avg']['f1-score'] for report in classification_reports]}
)
# add total column
results = results.append(results.sum() / len(results), ignore_index=True)
# add category column
results = pd.concat([pd.DataFrame({'category': y_test.columns.append(pd.Index(['total'])).values}), results], axis=1, sort=False)
# print results
print(results)
print('Average f1-score over all categories: %s (weighted avg)' % results.iloc[-1, 3])
pipeline.get_params()
###Output
_____no_output_____
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
from sklearn.model_selection import GridSearchCV
# define hyperparameters of the different estimators
parameters = {
'tfidf__use_idf' : [True, False],
'tfidf__norm': ['l2', 'l1'],
'multi_clf__estimator__oob_score' : [True, False],
'multi_clf__estimator__warm_start' : [True, False],
'multi_clf__estimator__criterion' : ['gini', 'entropy']
}
# initialize grid search cross validation
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs=-1)
# fit model perform grid search
cv.fit(X_train, y_train)
# Examine parameters, anything surprising?
cv.best_params_
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Make prediction with learned parameters
pred_grid = cv.predict(X_test)
def print_classification_summary(y_truth, y_predicted):
# collect the classification reports
reports = []
for index in range(len(y_test.columns)):
reports.append(classification_report(y_truth.values[:, index], y_predicted[:, index], output_dict=True))
# create dataframe for cleaner printing
results = pd.DataFrame(
{'micro avg': [report['micro avg']['f1-score'] for report in reports],
'macro avg': [report['macro avg']['f1-score'] for report in reports],
'weighted avg': [report['weighted avg']['f1-score'] for report in reports]}
)
# add total column
results = results.append(results.sum() / len(results), ignore_index=True)
# add category column
results = pd.concat([pd.DataFrame({'category': y_truth.columns.append(pd.Index(['total'])).values}),
results], axis=1, sort=False)
# print results
print(results)
print('Average f1-score over all categories: %s (weighted avg)' % results.iloc[-1, 3])
# collect the classification reports
classification_reports_grid = []
for index in range(len(y_test.columns)):
classification_reports_grid.append(classification_report(y_test.values[:, index], pred_grid[:, index], output_dict=True))
# create dataframe for cleaner printing
results_grid = pd.DataFrame(
{'micro avg': [report['micro avg']['f1-score'] for report in classification_reports_grid],
'macro avg': [report['macro avg']['f1-score'] for report in classification_reports_grid],
'weighted avg': [report['weighted avg']['f1-score'] for report in classification_reports_grid]}
)
# add total column
results_grid = results_grid.append(results_grid.sum() / len(results_grid), ignore_index=True)
# add category column
results_grid = pd.concat([pd.DataFrame({'category': y_test.columns.append(pd.Index(['total'])).values}),
results_grid], axis=1, sort=False)
print('Average f1-score over all categories: %s (weighted avg)' % results_grid.iloc[-1, 3])
###Output
Average f1-score over all categories: 0.933500875949 (weighted avg)
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
from sklearn.pipeline import FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
if len(pos_tags) == 0:
return False
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
class ResponseLengthExtractor(BaseEstimator, TransformerMixin):
def fit(self, x, y=None):
return self
def response_length(self, text):
return len(text)
def transform(self, X):
X_length = pd.Series(X).apply(self.response_length)
return pd.DataFrame(X_length)
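# Quick check of the two custom transformers on a couple of made-up messages
# (illustrative examples only, not rows from the dataset):
sample_messages = pd.Series(["Send water to the shelter please", "There is a fire downtown"])
print(StartingVerbExtractor().transform(sample_messages))
print(ResponseLengthExtractor().transform(sample_messages))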
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import AdaBoostClassifier
# First we try our starting verb extractor again, but
# we combine our two clean-up steps into one and add the message length as a separate feature
pipeline_extended = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('count', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer(use_idf=True, norm='l2')),
])),
('starting_verb', StartingVerbExtractor()),
('tweet_length', ResponseLengthExtractor()),
])),
('multi_clf', MultiOutputClassifier(AdaBoostClassifier(random_state=42, n_estimators=100))),
])
pipeline_extended.get_params()
# define hyperparameters of the different estimators
parameters = {
'multi_clf__estimator__algorithm': ['SAMME', 'SAMME.R']
}
# initialize grid search cross validation
pipeline_extended = GridSearchCV(pipeline_extended, param_grid=parameters, n_jobs=-1)
# fit model perform grid search
pipeline_extended.fit(X_train, y_train)
# Make prediction with learned parameters
pred_extended = pipeline_extended.predict(X_test)
print_classification_summary(y_test, pred_extended)
###Output
category micro avg macro avg weighted avg
0 related 0.781054 0.546025 0.753093
1 request 0.886173 0.774486 0.877322
2 offer 0.993277 0.520053 0.992404
3 aid_related 0.759969 0.746812 0.756133
4 medical_help 0.921925 0.659438 0.907353
5 medical_products 0.952330 0.675609 0.944640
6 search_and_rescue 0.968526 0.595818 0.961291
7 security 0.981054 0.552355 0.975566
8 military 0.969595 0.687923 0.965563
9 child_alone 1.000000 1.000000 1.000000
10 water 0.960581 0.829052 0.958839
11 food 0.944079 0.849055 0.942689
12 shelter 0.939496 0.796398 0.935532
13 clothing 0.987624 0.748401 0.986687
14 money 0.981818 0.713390 0.979318
15 missing_people 0.988999 0.614251 0.986211
16 refugees 0.960275 0.634592 0.953107
17 death 0.965623 0.774288 0.962162
18 other_aid 0.870588 0.588823 0.840165
19 infrastructure_related 0.925134 0.552107 0.902651
20 transport 0.956150 0.685244 0.948699
21 buildings 0.953858 0.715392 0.947091
22 electricity 0.982582 0.668001 0.979619
23 tools 0.993277 0.520053 0.990802
24 hospitals 0.985791 0.553562 0.982770
25 shops 0.991902 0.497967 0.989544
26 aid_centers 0.985027 0.566399 0.981251
27 other_infrastructure 0.950038 0.551141 0.934716
28 weather_related 0.880519 0.840416 0.876290
29 floods 0.955080 0.821667 0.951215
30 storm 0.939649 0.789443 0.934296
31 fire 0.988999 0.657605 0.987661
32 earthquake 0.970512 0.907349 0.970103
33 cold 0.982429 0.728092 0.980380
34 other_weather 0.944385 0.606400 0.931568
35 direct_report 0.850879 0.718196 0.836697
36 total 0.945811 0.685717 0.938984
Average f1-score over all categories: 0.938984100458 (weighted avg)
###Markdown
9. Export your model as a pickle file
###Code
import pickle
# Save the model to disk
pickle.dump(pipeline_extended, open('models/model_adaboost.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import nltk
nltk.download(['punkt', 'wordnet'])
from sqlalchemy import create_engine
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
import pickle
# load data from database
engine = create_engine('sqlite:///etl_message.db')
df = pd.read_sql_table("etl_message", con=engine)
X = df['message']
Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
category_names = Y.columns
X
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def class_rep(model, X_test, y_test, category_names):
y_pred = model.predict(X_test)
for i, col in enumerate(category_names):
print(col)
print(classification_report(y_test[col], y_pred[:, i]))
class_rep(pipeline, X_test, y_test, category_names)
###Output
related
precision recall f1-score support
0 0.63 0.35 0.45 1514
1 0.82 0.94 0.88 4994
2 0.64 0.15 0.25 46
avg / total 0.78 0.80 0.77 6554
request
precision recall f1-score support
0 0.88 0.98 0.93 5460
1 0.81 0.36 0.50 1094
avg / total 0.87 0.88 0.86 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6525
1 0.00 0.00 0.00 29
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.72 0.89 0.79 3823
1 0.76 0.51 0.61 2731
avg / total 0.74 0.73 0.72 6554
medical_help
precision recall f1-score support
0 0.93 0.99 0.96 6038
1 0.53 0.07 0.12 516
avg / total 0.89 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.96 1.00 0.98 6221
1 0.79 0.13 0.22 333
avg / total 0.95 0.95 0.94 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6378
1 0.56 0.03 0.05 176
avg / total 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6432
1 0.20 0.01 0.02 122
avg / total 0.97 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6330
1 0.77 0.08 0.14 224
avg / total 0.96 0.97 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.95 1.00 0.98 6140
1 0.84 0.30 0.44 414
avg / total 0.95 0.95 0.94 6554
food
precision recall f1-score support
0 0.94 0.99 0.96 5879
1 0.79 0.46 0.58 675
avg / total 0.93 0.93 0.92 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 5984
1 0.84 0.24 0.37 570
avg / total 0.92 0.93 0.91 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6459
1 0.75 0.22 0.34 95
avg / total 0.99 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6402
1 0.57 0.03 0.05 152
avg / total 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6480
1 1.00 0.01 0.03 74
avg / total 0.99 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6327
1 0.71 0.05 0.10 227
avg / total 0.96 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6254
1 0.84 0.12 0.22 300
avg / total 0.95 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.86 1.00 0.93 5649
1 0.56 0.03 0.05 905
avg / total 0.82 0.86 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.93 1.00 0.97 6117
1 0.29 0.00 0.01 437
avg / total 0.89 0.93 0.90 6554
transport
precision recall f1-score support
0 0.96 1.00 0.98 6257
1 0.59 0.06 0.10 297
avg / total 0.94 0.96 0.94 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.98 6234
1 0.64 0.05 0.09 320
avg / total 0.94 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6416
1 0.45 0.04 0.07 138
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6514
1 0.00 0.00 0.00 40
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.00 0.00 0.00 76
avg / total 0.98 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6526
1 0.00 0.00 0.00 28
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6486
1 0.00 0.00 0.00 68
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.95 1.00 0.98 6250
1 0.00 0.00 0.00 304
avg / total 0.91 0.95 0.93 6554
weather_related
precision recall f1-score support
0 0.83 0.97 0.89 4702
1 0.85 0.48 0.62 1852
avg / total 0.83 0.83 0.81 6554
floods
precision recall f1-score support
0 0.94 1.00 0.97 5985
1 0.88 0.32 0.47 569
avg / total 0.93 0.94 0.92 6554
storm
precision recall f1-score support
0 0.92 0.99 0.96 5920
1 0.73 0.24 0.36 634
avg / total 0.90 0.92 0.90 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6476
1 0.00 0.00 0.00 78
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.96 0.99 0.97 5958
1 0.86 0.54 0.67 596
avg / total 0.95 0.95 0.95 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6415
1 0.67 0.04 0.08 139
avg / total 0.97 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.95 1.00 0.97 6211
1 0.47 0.03 0.05 343
avg / total 0.92 0.95 0.92 6554
direct_report
precision recall f1-score support
0 0.86 0.98 0.92 5316
1 0.79 0.34 0.48 1238
avg / total 0.85 0.86 0.84 6554
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {'tfidf__use_idf':[True, False],
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__min_samples_split': [2, 4]}
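# This grid has 2 x 2 x 2 = 8 parameter combinations; with the 3-fold CV used by this
# scikit-learn version (see the FutureWarning below) the search fits 8 x 3 = 24 models.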
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
class_rep(cv, X_test, y_test, category_names)
###Output
c:\users\chris\appdata\local\programs\python\python37\lib\site-packages\sklearn\model_selection\_split.py:2053: FutureWarning: You should specify a value for 'cv' instead of relying on the default value. The default value will change from 3 to 5 in version 0.22.
warnings.warn(CV_WARNING, FutureWarning)
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline2.fit(X_train, y_train)
class_rep(pipeline2, X_test, y_test, category_names)
parameters2 = {
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 60, 70]
}
cv2 = GridSearchCV(pipeline2, param_grid = parameters2)
cv2
cv2.fit(X_train, y_train)
class_rep(cv2, X_test, y_test, category_names)
###Output
c:\users\chris\appdata\local\programs\python\python37\lib\site-packages\sklearn\model_selection\_split.py:2053: FutureWarning: You should specify a value for 'cv' instead of relying on the default value. The default value will change from 3 to 5 in version 0.22.
warnings.warn(CV_WARNING, FutureWarning)
###Markdown
9. Export your model as a pickle file
###Code
with open('model.pkl', 'wb') as f:
pickle.dump(cv2, f)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
from sqlalchemy import create_engine
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table("disaster_message_categories",engine)
X = df.message.values
Y_df = df[['related', 'request', 'offer', 'aid_related', 'medical_help', \
'medical_products', 'search_and_rescue', 'security', 'military',\
'child_alone', 'water', 'food', 'shelter', 'clothing', 'money', \
'missing_people', 'refugees', 'death', 'other_aid', \
'infrastructure_related', 'transport', 'buildings', 'electricity'\
, 'tools', 'hospitals', 'shops', 'aid_centers', \
'other_infrastructure', 'weather_related', 'floods', 'storm', \
'fire', 'earthquake', 'cold', 'other_weather', 'direct_report'\
]]
Y = Y_df.values
X
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
import nltk
nltk.download(['punkt', 'wordnet'])
import re
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
# get list of all urls using regex
detected_urls = re.findall(url_regex, text)
# replace each url in text string with placeholder
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# tokenize text
tokens = word_tokenize(text)
# for each token,
# lemmatize, normalize case, and remove leading/trailing white space
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(\
RandomForestClassifier(max_features=500)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
from sklearn.model_selection import train_test_split
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y)
# train pipeline
pipeline.fit(X_train, y_train)
# checking dimensions for using classification_report
# (note: y_pred is only defined by the prediction cell below, so run that cell first)
y_pred.shape
y_pred[:,0].shape
y_test[:,0].shape
len(y_test[0,:])
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
from sklearn.metrics import confusion_matrix, classification_report
# make predictions for test set
y_pred = pipeline.predict(X_test)
# report f1 score, precision and recall for each output category
for i in range(0,len(y_test[0,:])):
print(classification_report(y_test[:,i], y_pred[:,i]))
pipeline.get_params()
###Output
_____no_output_____
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
from sklearn.model_selection import GridSearchCV
parameters = {
'vect__max_df': [0.75,1.0],
'clf__estimator__n_estimators': [10, 20],
}
# create grid search object
cv = GridSearchCV(pipeline, parameters)
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
for i in range(0,len(y_test[0,:])):
print(classification_report(y_test[:,i], y_pred[:,i]))
cv.best_params_
# classification_report is useful but it will be
# better to have one number to evaluate
# whole model. So let us check overall accuracy.
from sklearn.metrics import accuracy_score
import numpy as np  # needed for np.array / np.append below
y_test_pd = pd.DataFrame(data=y_test,
columns=Y_df.columns)
category_names=Y_df.columns
accuracies = np.array([])
for i in range(y_test.shape[1]):
acc = accuracy_score(y_test_pd.iloc[:, i].values, y_pred[:,i])
accuracies = np.append(accuracies,acc)
print('Accuracy of %25s: %.2f' %(category_names[i],acc) )
print('\n average accuracy: %.3f' %(accuracies.mean()))
# Or maybe we can compute the same overall accuracy in an easier way
import numpy as np
(y_pred == y_test).mean().mean()
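# (y_pred == y_test) is a samples x categories boolean array, so taking its mean gives
# the fraction of correctly predicted labels across all 36 category columns at once.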
# More tuning for hyperparameters of randomforest pipeline
# with a broader range for parameters
parameters = {
'vect__max_df': [0.75,1.0],
'vect__ngram_range': [(1, 1),(1, 2)],
'clf__estimator__n_estimators': [50, 100]
}
# create grid search object
cv_rf = GridSearchCV(pipeline,
parameters,
verbose=1,
n_jobs=2 # using multiple cores
)
import time  # needed for timing the grid search
start_time = time.time()
cv_rf.fit(X_train, y_train)
y_pred_rf = cv_rf.predict(X_test)
print("--- %s seconds ---" % (time.time() - start_time))
print("Accuracy: %.3f, best params: %s" %((y_pred_rf == y_test).mean().mean(), cv_rf.best_params_))
###Output
Fitting 5 folds for each of 8 candidates, totalling 40 fits
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF

The XGBoost classifier is an implementation of gradient boosted decision trees designed for speed and performance. The principal idea behind this algorithm is to construct new base learners that are maximally correlated with the negative gradient of the loss function associated with the whole ensemble. XGBoost is known to be faster than alternative classifiers: training is fast because parallelization constructs each individual tree via shared-memory parallel programming. Let us see how much difference it makes.
###Code
import time
from xgboost import XGBClassifier
xgb_pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(\
XGBClassifier()))
])
# tuning params
params = {
'clf__estimator__max_depth': [4],
'clf__estimator__n_estimators': [100],
'clf__estimator__min_child_weight': [1, 5],
'clf__estimator__gamma': [0.5, 1],
'clf__estimator__subsample': [0.7, 1.0],
'clf__estimator__colsample_bytree': [0.7, 1.0],
}
cv3 = GridSearchCV(xgb_pipeline2,
params,
verbose=1,
n_jobs=2
)
start_time = time.time()
cv3.fit(X_train, y_train)
y_pred_xgboost3 = cv3.predict(X_test)
(y_pred_xgboost3 == y_test).mean().mean()
print("--- %s seconds ---" % (time.time() - start_time))
print("Accuracy: %.3f, best params: %s" %((y_pred_xgboost3 == y_test).mean().mean(),cv3.best_params_))
###Output
Fitting 5 folds for each of 16 candidates, totalling 80 fits
###Markdown
Did XGBoost improve performance?
Although overall accuracy stayed the same, XGBoost trained much faster than the random forest classifier. Therefore, it is reasonable to use it in train_classifier.py.

9. Export your model as a pickle file
###Code
import pickle
filename = 'classifier.pkl'
pickle.dump(cv_rf, open(filename, 'wb'))
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
import sys
import re
import pickle
import pandas as pd
from sqlalchemy import create_engine
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report, f1_score, precision_score, recall_score
from xgboost import XGBClassifier
# load data from database
engine = create_engine('sqlite:///data/DisasterResponse.db')
df = pd.read_sql("SELECT * FROM messages", engine)
df.head()
X = ['genre', 'message']
Y = list(set(df.columns)-set(['id', 'message', 'original', 'genre']))
df.loc[df['related'] == 2, 'related'] = 1
for y in Y:
print(y, df[y].unique())
###Output
missing_people [0 1]
related [1 0]
direct_report [0 1]
offer [0 1]
military [0 1]
fire [0 1]
infrastructure_related [0 1]
medical_products [0 1]
request [0 1]
child_alone [0]
hospitals [0 1]
earthquake [0 1]
floods [0 1]
security [0 1]
refugees [0 1]
clothing [0 1]
tools [0 1]
buildings [0 1]
food [0 1]
shelter [0 1]
transport [0 1]
water [0 1]
other_aid [0 1]
aid_centers [0 1]
other_infrastructure [0 1]
other_weather [0 1]
medical_help [0 1]
search_and_rescue [0 1]
money [0 1]
weather_related [0 1]
electricity [0 1]
cold [0 1]
aid_related [0 1]
storm [0 1]
death [0 1]
shops [0 1]
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
    # lemmatize and remove stop words
stop_words = stopwords.words("english")
tokens = [lemmatizer.lemmatize(word).strip() for word in tokens if word not in stop_words]
return tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
from sklearn.model_selection import train_test_split
X = df['message']
y = df[Y]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
for i in range(len(Y)):
print('Category: ', y_test.columns[i])
    # classification_report expects (y_true, y_pred)
    print(classification_report(pd.DataFrame(y_test).iloc[:, i], \
                                pd.DataFrame(y_pred).iloc[:, i]))
print('Overall F1 (micro): ', f1_score(y_test, y_pred, average='micro'))
print('Overall precision (micro): ', precision_score(y_test, y_pred, average='micro'))
print('Overall recall (micro): ', recall_score(y_test, y_pred, average='micro'))
print('Overall accuracy: ', (y_pred == y_test.values).mean())
###Output
Overall F1 (micro): 0.6424616742031543
Overall precision (micro): 0.5320400409177262
Overall recall (micro): 0.8107220397483716
Overall accuracy: 0.947960009246417
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'vect__binary': (True, False),
'clf__estimator__n_estimators': (100, 500),
#'clf__estimator__class_weight': [None, 'balanced', 'balanced_subsample'],
}
model = GridSearchCV(pipeline, param_grid=parameters)
%time
model.fit(X_train,y_train)
model.best_params_
y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
print('Overall F1 (micro): ', f1_score(y_test, y_pred, average='micro'))
print('Overall precision (micro): ', precision_score(y_test, y_pred, average='micro'))
print('Overall recall (micro): ', recall_score(y_test, y_pred, average='micro'))
print('Overall accuracy: ', (y_pred == y_test.values).mean())
###Output
Overall F1 (micro): 0.6441261503829091
Overall precision (micro): 0.5331360514394271
Overall recall (micro): 0.8134790122080383
Overall accuracy: 0.9482296964093081
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF

Add genre as one of the features
###Code
df = pd.concat([pd.get_dummies(df['genre']), df], axis=1)
df.drop(['genre'], axis=1, inplace=True)
df.head()
X = df[['direct', 'news', 'social', 'message']]
y = df[Y]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
Define an ItemSelector class to add ['direct', 'news', 'social', 'message'] in the pipeline
https://scikit-learn.org/0.18/auto_examples/hetero_feature_union.html
###Code
class ItemSelector(BaseEstimator, TransformerMixin):
"""For data grouped by feature, select subset of data at a provided key.
The data is expected to be stored in a 2D data structure, where the first
index is over features and the second is over samples. i.e.
>> len(data[key]) == n_samples
Please note that this is the opposite convention to scikit-learn feature
matrixes (where the first index corresponds to sample).
ItemSelector only requires that the collection implement getitem
(data[key]). Examples include: a dict of lists, 2D numpy array, Pandas
DataFrame, numpy record array, etc.
>> data = {'a': [1, 5, 2, 5, 2, 8],
'b': [9, 4, 1, 4, 1, 3]}
>> ds = ItemSelector(key='a')
>> data['a'] == ds.transform(data)
ItemSelector is not designed to handle data grouped by sample. (e.g. a
list of dicts). If your data is structured this way, consider a
transformer along the lines of `sklearn.feature_extraction.DictVectorizer`.
Parameters
----------
key : hashable, required
The key corresponding to the desired value in a mappable.
"""
def __init__(self, key):
self.key = key
def fit(self, x, y=None):
return self
def transform(self, data_dict):
return data_dict[self.key]
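# Quick illustration of ItemSelector on the feature frame built above
# (using the column names defined earlier in this notebook):
print(ItemSelector(key='message').transform(X_train).head(2))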
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('selector', ItemSelector(key='message')),
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('genre_enc', ItemSelector(key=['direct', 'social', 'news'])),
])),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
pipeline.fit(X_train,y_train)
y_pred = pipeline.predict(X_test)
for i in range(len(Y)):
print(y_test.columns[i])
    print(classification_report(pd.DataFrame(y_test).iloc[:, i], pd.DataFrame(y_pred).iloc[:, i]))
print('Overall F1 (micro): ', f1_score(y_test, y_pred, average='micro'))
print('Overall precision (micro): ', precision_score(y_test, y_pred, average='micro'))
print('Overall recall (micro): ', recall_score(y_test, y_pred, average='micro'))
print('Overall accuracy: ', (y_pred == y_test.values).mean())
###Output
Overall F1 (micro): 0.6424186210856412
Overall precision (micro): 0.5313824346047056
Overall recall (micro): 0.8121161362367393
Overall accuracy: 0.9480145887912879
###Markdown
Try out XGBoost model
###Code
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('selector', ItemSelector(key='message')),
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('genre_enc', ItemSelector(key=['direct', 'social', 'news'])),
])),
('clf', MultiOutputClassifier(XGBClassifier())),
])
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
print('Overall F1 (micro): ', f1_score(y_test, y_pred, average='micro'))
print('Overall precision (micro): ', precision_score(y_test, y_pred, average='micro'))
print('Overall recall (micro): ', recall_score(y_test, y_pred, average='micro'))
print('Overall accuracy: ', (y_pred == y_test.values).mean())
pipeline.get_params()
parameters = {
'clf__estimator__max_depth': (7, 10),
'clf__estimator__colsample_bytree': (0.6, 1)
}
model = GridSearchCV(pipeline, param_grid=parameters)
model.fit(X_train,y_train)
model.best_params_
y_pred = model.predict(X_test)
print('Overall F1 (micro): ', f1_score(y_test, y_pred, average='micro'))
print('Overall precision (micro): ', precision_score(y_test, y_pred, average='micro'))
print('Overall recall (micro): ', recall_score(y_test, y_pred, average='micro'))
print('Overall accuracy: ', (y_pred == y_test.values).mean())
###Output
Overall F1 (micro): 0.6795075789297379
Overall precision (micro): 0.5928686248721321
Overall recall (micro): 0.7958022754021185
Overall accuracy: 0.9508527251245698
###Markdown
9. Export your model as a pickle file
###Code
with open('models/classifier.pkl', 'wb') as f:
pickle.dump(model, f)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
#import sys
#sqlalchemy_utils.__version__
#from distutils.sysconfig import get_python_lib
#print(get_python_lib())
from time import time
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists
import pickle
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
import string
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from sklearn.svm import SVC
###Output
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\kbaka\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package wordnet to
[nltk_data] C:\Users\kbaka\AppData\Roaming\nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] C:\Users\kbaka\AppData\Roaming\nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\kbaka\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Inspect data
###Code
database_filepath = "data/disaster_response.db"
database_exists(f'sqlite:///{database_filepath}')
engine = create_engine(f'sqlite:///{database_filepath}')
connection = engine.connect()
df = pd.read_sql_table("messages_categories", con=connection)
df.head()
df.columns
for col in df.iloc[:, 3:]:
print(df[col].unique())
def load_data(database_filepath):
'''
Input:
database_filepath(str): Filepath of the database.
Output:
X(numpy.ndarray): Array of input features (messages).
y(numpy.ndarray): Output labels (classes).
labels(pandas.Index): Names of the category columns.
'''
try:
database_exists(f'sqlite:///{database_filepath}')
engine = create_engine(f'sqlite:///{database_filepath}')
connection = engine.connect()
df = pd.read_sql_table("messages_categories", con=connection)
labels = df.iloc[:,4:].columns
X = df["message"].values
y = df.iloc[:,4:].values
connection.close()
return X, y, labels
except:
print("Database does not exist! Check your database_filepath!")
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
''' Normalize, lemmatize and tokenize text messages.
Input:
text(str): Text message.
Output:
clean_tokens(list): Normalized, lemmatized tokens with stop words removed.
'''
stop_words = set(stopwords.words('english'))
# normalize text
normalized_text = text.lower().strip()
# tokenize text
tokens = word_tokenize(normalized_text)
# lemmatize text and remove stop words and non-alphabetic tokens
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for token in tokens:
    clean_token = lemmatizer.lemmatize(token)
    if clean_token not in stop_words and clean_token.isalpha():
        clean_tokens.append(clean_token)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
#https://machinelearningmastery.com/prepare-text-data-machine-learning-scikit-learn/
X, y, labels = load_data("data/disaster_response.db")
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train
y_train
labels
for data in [X_train, X_test, y_train, y_test]:
print(data.shape, type(data))
type(df)
from sklearn.utils.multiclass import type_of_target
(type_of_target(y_test))
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
def build_model():
'''Build a Machine Learning pipeline using TfidfTransformer, RandomForestClassifier and GridSearchCV
Input:
None
Output:
cv(sklearn.model_selection._search.GridSearchCV): Results of GridSearchCV
'''
text_clf = Pipeline([
('vect', CountVectorizer(tokenizer=partial(tokenize))),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(
estimator=RandomForestClassifier()))
])
parameters = {
'clf__estimator__max_depth': [4, 6, 10, 12],
'clf__estimator__n_estimators': [20, 40, 100],
}
grid_fit = GridSearchCV(
estimator=text_clf,
param_grid=parameters,
verbose=3,
cv=2,
n_jobs=-1)
return grid_fit
from sklearn.utils import parallel_backend
from functools import partial
with parallel_backend('multiprocessing'):
model = build_model() # stop_words='english'
model.fit(X_train,y_train)
###Output
Fitting 2 folds for each of 30 candidates, totalling 60 fits
[Parallel(n_jobs=-1)]: Using backend MultiprocessingBackend with 8 concurrent workers.
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def evaluate_model(model, X_test, y_test, labels):
""" Function that will predict on X_test messages using build_model() function that
transforms messages, extract features and trains a classifer.
Input:
model(sklearn.model_selection._search.GridSearchCV):
X_test(numpy.ndarray): Numpy array of messages that based on which trained model will predict.
y_test(numpy.ndarray): Numpy array of classes that will be used to validate model predictions.
labels(pandas.core.indexes.base.Index): Target labels for a multiclass prediction.
Output:
df(pandas.core.frame.DataFrame): Dataframe that contains report showing the main classification metrics.
"""
y_pred = model.predict(X_test)
df = pd.DataFrame(classification_report(y_test, y_pred, target_names=labels, output_dict=True)).T.reset_index()
df = df.rename(columns = {"index": "labels"})
return df
model.best_score_
model.best_estimator_
X_train.shape, X_test.shape, y_train.shape, y_test.shape, len(labels)
with parallel_backend('multiprocessing'):
df_evaluation = evaluate_model(model, X_test, y_test, labels)
df_evaluation
df_evaluation[["labels", "precision"]].plot(x="labels", y = "precision", kind="bar", rot=90);
df_evaluation[["labels", "recall"]].plot(x="labels", y = "recall", kind="bar", rot=90);
from sklearn.metrics import confusion_matrix
import seaborn as sns
%matplotlib inline
# confusion_matrix expects 1-D labels, so plot one matrix per category (here the first category)
pred = model.predict(X_test)
sns.heatmap(confusion_matrix(y_test[:, 0], pred[:, 0]), annot=True, fmt='')
###Output
_____no_output_____
###Markdown
6. Default model Fit the same pipeline with default parameters (no grid search) as a baseline for comparison with the tuned model.
###Code
def build_model():
'''Build a Machine Learning pipeline using TfidfTransformer, RandomForestClassifier and GridSearchCV
Input:
None
Output:
cv(sklearn.model_selection._search.GridSearchCV): Results of GridSearchCV
'''
text_clf = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(estimator=RandomForestClassifier()))
])
return text_clf
model = build_model()
model.fit(X_train,y_train)
df_evaluation_no_grid = evaluate_model(model, X_test, y_test, labels)
df_evaluation_no_grid
df_evaluation_no_grid[["labels", "precision"]].plot(x="labels", y = "precision", kind="bar", rot=90);
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio! 8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
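A sketch of the second idea above, before moving on to step 9 (illustrative only, not part of the original notebook; the `MessageLengthExtractor` name and the choice of a message-length feature are assumptions): a small custom transformer can append an extra numeric feature next to the TF-IDF features through a `FeatureUnion`.
###Code
# Sketch: append a message-length feature to the TF-IDF features.
# MessageLengthExtractor is an illustrative name, not from the original notebook.
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import FeatureUnion

class MessageLengthExtractor(BaseEstimator, TransformerMixin):
    """Return the number of tokens in each message as a single numeric feature."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([len(tokenize(text)) for text in X]).reshape(-1, 1)

extended_pipeline = Pipeline([
    ('features', FeatureUnion([
        ('text_pipeline', Pipeline([
            ('vect', CountVectorizer(tokenizer=tokenize)),
            ('tfidf', TfidfTransformer())
        ])),
        ('msg_length', MessageLengthExtractor())
    ])),
    ('clf', MultiOutputClassifier(RandomForestClassifier()))
])
# extended_pipeline.fit(X_train, y_train)  # fit like the pipelines above
###Output
_____no_output_____
###Markdown
Returning to step 9, the next cell defines the function that exports the tuned model as a pickle file.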
###Code
def save_model(model, filepath):
    '''Saves the model to the defined filepath.
    Input
    model(sklearn.model_selection._search.GridSearchCV): The model to be saved.
    filepath(str): Filepath where the model will be saved.
    Output
    This function will save the model as a pickle file at the defined filepath.
    '''
    with open(filepath, 'wb') as temporary_pickle:
        pickle.dump(model, temporary_pickle)
    print("Model has been successfully saved!")
save_model(model, "models/model.pkl")
###Output
Model has been successfully saved!
###Markdown
10. Use this notebook to complete `train.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
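A rough skeleton for such a script (a sketch under assumptions: the command-line layout is illustrative and the actual template in the Resources folder may organize things differently; `load_data`, `build_model`, `evaluate_model` and `save_model` are the functions defined above):
###Code
# Sketch of a train.py-style entry point that reuses the functions defined in this notebook.
import sys

def main():
    if len(sys.argv) == 3:
        database_filepath, model_filepath = sys.argv[1:]
        X, y, labels = load_data(database_filepath)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

        model = build_model()
        model.fit(X_train, y_train)

        print(evaluate_model(model, X_test, y_test, labels))
        save_model(model, model_filepath)
    else:
        print('Usage: python train.py <database_filepath> <model_filepath>')

# if __name__ == '__main__':
#     main()
###Output
_____no_output_____
###Markdown
(The commented-out cell below is leftover scratch work on feature importance.)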
###Code
# Feature importance
# Import a supervised learning model that has 'feature_importances_'
#from sklearn.tree import DecisionTreeClassifier
# Train the supervised model on the training set using .fit(X_train, y_train)
#model = DecisionTreeClassifier()
#model.fit(X_train, y_train)
# Extract the feature importances using .feature_importances_
#importances = model.feature_importances_
# Plot
#vs.feature_plot(importances, X_train, y_train)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import numpy as np
import pandas as pd
import re
import nltk
import seaborn as sns
import time
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
from nltk.stem import WordNetLemmatizer
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
#TfidfVectorizer = CountVectorizer + TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.metrics import classification_report,accuracy_score
import pickle
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('punkt')
# load data from database
###
# create a database engine
# to find the correct file path, use the python os library:
# import os
# print(os.getcwd())
#
###
engine = create_engine('sqlite:////Users/minyan/Desktop/Python_Project/Data_Courses/DSND_Term2-master/Project3DataPipeline/workspace/data/DisasterResponse.db')
#list table names in the database
engine.table_names()
df=pd.read_sql_table('Message',con=engine)
df.head()
df.shape
X=df['message']
Y = df.iloc[ : , -36:]
category_name = Y.columns
X[9]
Y.dtypes
for col in category_name:
print(f'{df[col].name}{df[col].unique()}')
Y = Y.drop(['related','child_alone'],axis=1)
category_name = Y.columns
#a quick look of disaster response cetegory plot
plt.figure(figsize=(18,15))
Y.sum().sort_values(ascending=False).plot(kind='bar')
plt.title("Distribution of Disaster Response Type")
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def p_tokenize(text):
'''
INPUT:
text: raw message
OUTPUT:
clean_tokens: list of tokenized words
DESCRIPTION:
The function processes the sentence: it normalizes and tokenizes the text,
converts all characters to lower case, removes extra whitespace and stop words, and
reduces words to their root form.
'''
clean_tokens=[]
#remove punctuation,normalize case to lower cases, and remove extra space
text = re.sub(r"[^a-zA-Z0-9]"," ",text.lower()).strip()
#tokenize text
tokens=word_tokenize(text)
# build the stop-word set and the lemmatizer once instead of per token
stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()
for w in tokens:
    # remove stop words
    if w not in stop_words:
        # lemmatization: reduce words to their root form
        lemmed = lemmatizer.lemmatize(w)
        clean_tokens.append(lemmed)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('tfidvectorizer', TfidfVectorizer(tokenizer=p_tokenize)),#override the tokenizer with customized one
('clf', MultiOutputClassifier(SGDClassifier(n_jobs = -1,random_state=6)))])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.2,random_state=6)
X_train
pipeline.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
Y_test_pred=pipeline.predict(X_test)
print(Y_test_pred)
Y_test_pred = pd.DataFrame(data=Y_test_pred,
index=Y_test.index,
columns=category_name)
len(category_name)
# from sklearn.metrics import classification_report
print(classification_report(Y_test, Y_test_pred,target_names=category_name))
print(accuracy_score(Y_test, Y_test_pred))
###Output
0.43039664378337145
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'clf__estimator__alpha': [0.0001,0.001],
'clf__estimator__penalty':['l2'],
'clf__estimator__loss':['hinge']
}
cv = GridSearchCV(pipeline,parameters,cv=3)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, Y_train)
cv.best_score_
cv.best_params_
# re-predict with the tuned model before reporting
Y_test_pred = pd.DataFrame(cv.predict(X_test), index=Y_test.index, columns=category_name)
print(classification_report(Y_test, Y_test_pred, target_names=category_name))
print(accuracy_score(Y_test, Y_test_pred))
###Output
precision recall f1-score support
request 0.77 0.53 0.63 848
offer 0.00 0.00 0.00 28
aid_related 0.77 0.68 0.72 2159
medical_help 0.68 0.15 0.24 402
medical_products 0.75 0.23 0.36 248
search_and_rescue 0.67 0.08 0.14 156
security 0.00 0.00 0.00 96
military 0.72 0.12 0.21 169
water 0.76 0.59 0.66 313
food 0.78 0.71 0.74 571
shelter 0.83 0.52 0.64 466
clothing 0.72 0.42 0.54 80
money 1.00 0.02 0.05 126
missing_people 1.00 0.01 0.03 67
refugees 0.63 0.14 0.22 162
death 0.80 0.41 0.54 259
other_aid 0.57 0.02 0.04 645
infrastructure_related 0.00 0.00 0.00 322
transport 0.71 0.10 0.18 250
buildings 0.72 0.21 0.33 264
electricity 0.82 0.11 0.19 127
tools 0.00 0.00 0.00 36
hospitals 0.00 0.00 0.00 48
shops 0.00 0.00 0.00 25
aid_centers 0.00 0.00 0.00 55
other_infrastructure 0.00 0.00 0.00 211
weather_related 0.88 0.70 0.78 1504
floods 0.92 0.50 0.65 454
storm 0.81 0.61 0.69 524
fire 1.00 0.07 0.13 58
earthquake 0.89 0.78 0.83 502
cold 0.83 0.16 0.27 123
other_weather 0.75 0.03 0.06 275
direct_report 0.76 0.39 0.52 1006
micro avg 0.80 0.44 0.57 12579
macro avg 0.60 0.24 0.31 12579
weighted avg 0.73 0.44 0.52 12579
samples avg 0.42 0.27 0.31 12579
0.43039664378337145
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
pipeline = Pipeline([
('vectorizer', CountVectorizer(tokenizer=p_tokenize)),#override the tokenizer with customized one
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_jobs=-1,random_state=6)))])
pipeline.fit(X_train, Y_train)
Y_test_pred=pipeline.predict(X_test)
Y_test_pred = pd.DataFrame(data=Y_test_pred,
index=Y_test.index,
columns=category_name)
pipeline.get_params()
parameters = {
#'vectorizer__ngram_range': ((1, 1), (1, 2)), # the range for a string of n words
#'tfidf__smooth_idf': (True, False),
'clf__estimator__n_estimators': [10, 50, 100, 150]
#'clf__estimator__criterion': ['gini', 'entropy']
}
cv = GridSearchCV(pipeline,parameters,cv=3)
start=time.time()
cv.fit(X_train, Y_train)
end=time.time()
process=end-start
print(process)
cv.best_score_
cv.get_params()
#get the evaluation on the test dataset
Y_test_pred=cv.predict(X_test)
Y_test_pred = pd.DataFrame(data=Y_test_pred,
index=Y_test.index,
columns=category_name)
Y_test_pred
print(classification_report(Y_test, Y_test_pred,target_names=category_name))
print(accuracy_score(Y_test, Y_test_pred))
###Output
precision recall f1-score support
request 0.79 0.49 0.60 848
offer 0.00 0.00 0.00 28
aid_related 0.75 0.69 0.72 2159
medical_help 0.67 0.06 0.12 402
medical_products 0.74 0.08 0.15 248
search_and_rescue 0.60 0.02 0.04 156
security 0.25 0.01 0.02 96
military 0.79 0.07 0.12 169
water 0.92 0.34 0.49 313
food 0.82 0.58 0.68 571
shelter 0.82 0.40 0.54 466
clothing 0.80 0.15 0.25 80
money 0.80 0.03 0.06 126
missing_people 0.00 0.00 0.00 67
refugees 0.67 0.04 0.07 162
death 0.85 0.17 0.29 259
other_aid 0.58 0.04 0.08 645
infrastructure_related 0.00 0.00 0.00 322
transport 0.69 0.09 0.16 250
buildings 0.72 0.14 0.23 264
electricity 1.00 0.04 0.08 127
tools 0.00 0.00 0.00 36
hospitals 0.00 0.00 0.00 48
shops 0.00 0.00 0.00 25
aid_centers 0.00 0.00 0.00 55
other_infrastructure 0.00 0.00 0.00 211
weather_related 0.86 0.71 0.78 1504
floods 0.90 0.46 0.61 454
storm 0.82 0.52 0.63 524
fire 0.00 0.00 0.00 58
earthquake 0.89 0.78 0.83 502
cold 1.00 0.05 0.09 123
other_weather 0.57 0.04 0.08 275
direct_report 0.77 0.34 0.47 1006
micro avg 0.80 0.40 0.54 12579
macro avg 0.56 0.19 0.24 12579
weighted avg 0.73 0.40 0.47 12579
samples avg 0.42 0.24 0.29 12579
0.40903890160183065
###Markdown
9. Export your model as a pickle file
###Code
# Create a pickle file for the model
file_name = 'classifier.pkl'
with open(file_name, 'wb') as f:
pickle.dump(cv, f)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import sqlalchemy as DB
import nltk
import pickle
nltk.download(['stopwords', 'punkt', 'wordnet'])
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import classification_report, accuracy_score
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# load data from database
engine = DB.create_engine('sqlite:///myDB.db')
df = pd.read_sql('select * from myTable', con=engine)
X = df['message'].values
Y = df.drop(['id', 'message', 'genre'], axis=1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
pipeline.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
Y_pred = pipeline.predict(X_test)
Y_pred_df = pd.DataFrame(Y_pred, columns = Y_test.columns)
for column in Y_test.columns:
print('------------------------------------------------------\n')
print('Accuracy: ', accuracy_score(Y_test[column], Y_pred_df[column]))
print('Feature: {}\n'.format(column))
print(classification_report(Y_test[column],Y_pred_df[column]))
###Output
------------------------------------------------------
Accuracy: 0.7934782608695652
Feature: related
precision recall f1-score support
0 0.62 0.34 0.44 1217
1 0.82 0.94 0.87 3986
2 0.60 0.15 0.24 41
micro avg 0.79 0.79 0.79 5244
macro avg 0.68 0.48 0.52 5244
weighted avg 0.77 0.79 0.77 5244
------------------------------------------------------
Accuracy: 0.8861556064073226
Feature: request
precision recall f1-score support
0 0.89 0.98 0.93 4356
1 0.84 0.41 0.55 888
micro avg 0.89 0.89 0.89 5244
macro avg 0.86 0.70 0.74 5244
weighted avg 0.88 0.89 0.87 5244
------------------------------------------------------
Accuracy: 0.9952326468344775
Feature: offer
precision recall f1-score support
0 1.00 1.00 1.00 5219
1 0.00 0.00 0.00 25
micro avg 1.00 1.00 1.00 5244
macro avg 0.50 0.50 0.50 5244
weighted avg 0.99 1.00 0.99 5244
------------------------------------------------------
Accuracy: 0.7301678108314263
Feature: aid_related
precision recall f1-score support
0 0.72 0.88 0.79 3040
1 0.76 0.53 0.62 2204
micro avg 0.73 0.73 0.73 5244
macro avg 0.74 0.70 0.71 5244
weighted avg 0.73 0.73 0.72 5244
------------------------------------------------------
Accuracy: 0.9220061022120518
Feature: medical_help
precision recall f1-score support
0 0.93 1.00 0.96 4831
1 0.54 0.07 0.12 413
micro avg 0.92 0.92 0.92 5244
macro avg 0.73 0.53 0.54 5244
weighted avg 0.90 0.92 0.89 5244
------------------------------------------------------
Accuracy: 0.9534706331045004
Feature: medical_products
precision recall f1-score support
0 0.96 1.00 0.98 4985
1 0.68 0.11 0.19 259
micro avg 0.95 0.95 0.95 5244
macro avg 0.82 0.55 0.58 5244
weighted avg 0.94 0.95 0.94 5244
------------------------------------------------------
Accuracy: 0.9702517162471396
Feature: search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.98 5087
1 0.67 0.01 0.03 157
micro avg 0.97 0.97 0.97 5244
macro avg 0.82 0.51 0.50 5244
weighted avg 0.96 0.97 0.96 5244
------------------------------------------------------
Accuracy: 0.982837528604119
Feature: security
precision recall f1-score support
0 0.98 1.00 0.99 5155
1 0.33 0.01 0.02 89
micro avg 0.98 0.98 0.98 5244
macro avg 0.66 0.51 0.51 5244
weighted avg 0.97 0.98 0.97 5244
------------------------------------------------------
Accuracy: 0.9681540808543097
Feature: military
precision recall f1-score support
0 0.97 1.00 0.98 5077
1 0.50 0.05 0.09 167
micro avg 0.97 0.97 0.97 5244
macro avg 0.73 0.52 0.54 5244
weighted avg 0.95 0.97 0.96 5244
------------------------------------------------------
Accuracy: 1.0
Feature: child_alone
precision recall f1-score support
0 1.00 1.00 1.00 5244
micro avg 1.00 1.00 1.00 5244
macro avg 1.00 1.00 1.00 5244
weighted avg 1.00 1.00 1.00 5244
------------------------------------------------------
Accuracy: 0.9469870327993898
Feature: water
precision recall f1-score support
0 0.95 1.00 0.97 4916
1 0.82 0.20 0.32 328
micro avg 0.95 0.95 0.95 5244
macro avg 0.88 0.60 0.64 5244
weighted avg 0.94 0.95 0.93 5244
------------------------------------------------------
Accuracy: 0.9191456903127384
Feature: food
precision recall f1-score support
0 0.92 0.99 0.96 4655
1 0.85 0.34 0.49 589
micro avg 0.92 0.92 0.92 5244
macro avg 0.88 0.67 0.72 5244
weighted avg 0.91 0.92 0.90 5244
------------------------------------------------------
Accuracy: 0.9250572082379863
Feature: shelter
precision recall f1-score support
0 0.93 0.99 0.96 4782
1 0.79 0.20 0.32 462
micro avg 0.93 0.93 0.93 5244
macro avg 0.86 0.60 0.64 5244
weighted avg 0.92 0.93 0.90 5244
------------------------------------------------------
Accuracy: 0.9849351639969489
Feature: clothing
precision recall f1-score support
0 0.99 1.00 0.99 5158
1 0.73 0.13 0.22 86
micro avg 0.98 0.98 0.98 5244
macro avg 0.86 0.56 0.61 5244
weighted avg 0.98 0.98 0.98 5244
------------------------------------------------------
Accuracy: 0.979023646071701
Feature: money
precision recall f1-score support
0 0.98 1.00 0.99 5134
1 0.50 0.04 0.07 110
micro avg 0.98 0.98 0.98 5244
macro avg 0.74 0.52 0.53 5244
weighted avg 0.97 0.98 0.97 5244
------------------------------------------------------
Accuracy: 0.988558352402746
Feature: missing_people
precision recall f1-score support
0 0.99 1.00 0.99 5183
1 1.00 0.02 0.03 61
micro avg 0.99 0.99 0.99 5244
macro avg 0.99 0.51 0.51 5244
weighted avg 0.99 0.99 0.98 5244
------------------------------------------------------
Accuracy: 0.9668192219679634
Feature: refugees
precision recall f1-score support
0 0.97 1.00 0.98 5065
1 0.61 0.08 0.14 179
micro avg 0.97 0.97 0.97 5244
macro avg 0.79 0.54 0.56 5244
weighted avg 0.96 0.97 0.95 5244
------------------------------------------------------
Accuracy: 0.9544241037376049
Feature: death
precision recall f1-score support
0 0.96 1.00 0.98 4993
1 0.75 0.07 0.13 251
micro avg 0.95 0.95 0.95 5244
macro avg 0.85 0.54 0.55 5244
weighted avg 0.95 0.95 0.94 5244
------------------------------------------------------
Accuracy: 0.8710907704042715
Feature: other_aid
precision recall f1-score support
0 0.87 1.00 0.93 4571
1 0.46 0.03 0.05 673
micro avg 0.87 0.87 0.87 5244
macro avg 0.67 0.51 0.49 5244
weighted avg 0.82 0.87 0.82 5244
------------------------------------------------------
Accuracy: 0.935163996948894
Feature: infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 4907
1 0.20 0.00 0.01 337
micro avg 0.94 0.94 0.94 5244
macro avg 0.57 0.50 0.49 5244
weighted avg 0.89 0.94 0.90 5244
------------------------------------------------------
Accuracy: 0.9528985507246377
Feature: transport
precision recall f1-score support
0 0.95 1.00 0.98 4991
1 0.69 0.04 0.08 253
micro avg 0.95 0.95 0.95 5244
macro avg 0.82 0.52 0.53 5244
weighted avg 0.94 0.95 0.93 5244
------------------------------------------------------
Accuracy: 0.9565217391304348
Feature: buildings
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
'clf__estimator__min_samples_split': [2, 4],
}
cv = GridSearchCV(pipeline, param_grid=parameters, verbose=2, n_jobs=4)
cv.fit(X_train, Y_train)
cv_pred = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
overall_accuracy_cv = (cv_pred == Y_test).mean().mean()
overall_accuracy_normal = (Y_pred == Y_test).mean().mean()
print(overall_accuracy_normal)
print(overall_accuracy_cv)
cv_pred_df = pd.DataFrame(cv_pred, columns = Y_test.columns)
for column in Y_test.columns:
print('------------------------------------------------------\n')
print('Feature: {}\n'.format(column))
print('Accuracy: ', accuracy_score(Y_test[column], cv_pred_df[column]))
print(classification_report(Y_test[column],cv_pred_df[column]))
###Output
------------------------------------------------------
Feature: related
Accuracy: 0.7906178489702517
precision recall f1-score support
0 0.61 0.35 0.45 1217
1 0.82 0.93 0.87 3986
2 0.44 0.10 0.16 41
micro avg 0.79 0.79 0.79 5244
macro avg 0.63 0.46 0.49 5244
weighted avg 0.77 0.79 0.77 5244
------------------------------------------------------
Feature: request
Accuracy: 0.8878718535469108
precision recall f1-score support
0 0.89 0.98 0.94 4356
1 0.82 0.43 0.57 888
micro avg 0.89 0.89 0.89 5244
macro avg 0.86 0.71 0.75 5244
weighted avg 0.88 0.89 0.87 5244
------------------------------------------------------
Feature: offer
Accuracy: 0.9952326468344775
precision recall f1-score support
0 1.00 1.00 1.00 5219
1 0.00 0.00 0.00 25
micro avg 1.00 1.00 1.00 5244
macro avg 0.50 0.50 0.50 5244
weighted avg 0.99 1.00 0.99 5244
------------------------------------------------------
Feature: aid_related
Accuracy: 0.725209763539283
precision recall f1-score support
0 0.71 0.89 0.79 3040
1 0.77 0.50 0.60 2204
micro avg 0.73 0.73 0.73 5244
macro avg 0.74 0.69 0.70 5244
weighted avg 0.73 0.73 0.71 5244
------------------------------------------------------
Feature: medical_help
Accuracy: 0.9223874904652937
precision recall f1-score support
0 0.93 1.00 0.96 4831
1 0.57 0.06 0.11 413
micro avg 0.92 0.92 0.92 5244
macro avg 0.75 0.53 0.53 5244
weighted avg 0.90 0.92 0.89 5244
------------------------------------------------------
Feature: medical_products
Accuracy: 0.9544241037376049
precision recall f1-score support
0 0.96 1.00 0.98 4985
1 0.81 0.10 0.18 259
micro avg 0.95 0.95 0.95 5244
macro avg 0.88 0.55 0.58 5244
weighted avg 0.95 0.95 0.94 5244
------------------------------------------------------
Feature: search_and_rescue
Accuracy: 0.9715865751334859
precision recall f1-score support
0 0.97 1.00 0.99 5087
1 0.79 0.07 0.13 157
micro avg 0.97 0.97 0.97 5244
macro avg 0.88 0.53 0.56 5244
weighted avg 0.97 0.97 0.96 5244
------------------------------------------------------
Feature: security
Accuracy: 0.9826468344774981
precision recall f1-score support
0 0.98 1.00 0.99 5155
1 0.25 0.01 0.02 89
micro avg 0.98 0.98 0.98 5244
macro avg 0.62 0.51 0.51 5244
weighted avg 0.97 0.98 0.97 5244
------------------------------------------------------
Feature: military
Accuracy: 0.9683447749809306
precision recall f1-score support
0 0.97 1.00 0.98 5077
1 0.56 0.03 0.06 167
micro avg 0.97 0.97 0.97 5244
macro avg 0.76 0.51 0.52 5244
weighted avg 0.96 0.97 0.95 5244
------------------------------------------------------
Feature: child_alone
Accuracy: 1.0
precision recall f1-score support
0 1.00 1.00 1.00 5244
micro avg 1.00 1.00 1.00 5244
macro avg 1.00 1.00 1.00 5244
weighted avg 1.00 1.00 1.00 5244
------------------------------------------------------
Feature: water
Accuracy: 0.9557589626239512
precision recall f1-score support
0 0.96 1.00 0.98 4916
1 0.86 0.35 0.50 328
micro avg 0.96 0.96 0.96 5244
macro avg 0.91 0.67 0.74 5244
weighted avg 0.95 0.96 0.95 5244
------------------------------------------------------
Feature: food
Accuracy: 0.9185736079328757
precision recall f1-score support
0 0.93 0.99 0.96 4655
1 0.80 0.37 0.50 589
micro avg 0.92 0.92 0.92 5244
macro avg 0.86 0.68 0.73 5244
weighted avg 0.91 0.92 0.90 5244
------------------------------------------------------
Feature: shelter
Accuracy: 0.933257055682685
precision recall f1-score support
0 0.94 0.99 0.96 4782
1 0.78 0.34 0.47 462
micro avg 0.93 0.93 0.93 5244
macro avg 0.86 0.66 0.72 5244
weighted avg 0.93 0.93 0.92 5244
------------------------------------------------------
Feature: clothing
Accuracy: 0.9843630816170862
precision recall f1-score support
0 0.99 1.00 0.99 5158
1 0.67 0.09 0.16 86
micro avg 0.98 0.98 0.98 5244
macro avg 0.83 0.55 0.58 5244
weighted avg 0.98 0.98 0.98 5244
------------------------------------------------------
Feature: money
Accuracy: 0.9792143401983219
precision recall f1-score support
0 0.98 1.00 0.99 5134
1 0.60 0.03 0.05 110
micro avg 0.98 0.98 0.98 5244
macro avg 0.79 0.51 0.52 5244
weighted avg 0.97 0.98 0.97 5244
------------------------------------------------------
Feature: missing_people
Accuracy: 0.988558352402746
precision recall f1-score support
0 0.99 1.00 0.99 5183
1 1.00 0.02 0.03 61
micro avg 0.99 0.99 0.99 5244
macro avg 0.99 0.51 0.51 5244
weighted avg 0.99 0.99 0.98 5244
------------------------------------------------------
Feature: refugees
Accuracy: 0.9677726926010679
precision recall f1-score support
0 0.97 1.00 0.98 5065
1 0.86 0.07 0.12 179
micro avg 0.97 0.97 0.97 5244
macro avg 0.91 0.53 0.55 5244
weighted avg 0.96 0.97 0.95 5244
------------------------------------------------------
Feature: death
Accuracy: 0.9597635392829901
precision recall f1-score support
0 0.96 1.00 0.98 4993
1 0.86 0.19 0.31 251
micro avg 0.96 0.96 0.96 5244
macro avg 0.91 0.59 0.65 5244
weighted avg 0.96 0.96 0.95 5244
------------------------------------------------------
Feature: other_aid
Accuracy: 0.8705186880244088
precision recall f1-score support
0 0.87 0.99 0.93 4571
1 0.43 0.03 0.05 673
micro avg 0.87 0.87 0.87 5244
macro avg 0.65 0.51 0.49 5244
weighted avg 0.82 0.87 0.82 5244
------------------------------------------------------
Feature: infrastructure_related
Accuracy: 0.935163996948894
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
improved_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
improved_pipeline.fit(X_train, Y_train)
improved_pred = improved_pipeline.predict(X_test)
improved_pred_df = pd.DataFrame(improved_pred, columns = Y_test.columns)
for column in Y_test.columns:
print('------------------------------------------------------\n')
print('Accuracy: ', accuracy_score(Y_test[column], improved_pred_df[column]))
print('Feature: {}\n'.format(column))
print(classification_report(Y_test[column],improved_pred_df[column]))
overall_accuracy = (improved_pred == Y_test).mean().mean()
overall_accuracy
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
with open('adaboost.pkl', 'wb') as file:
pickle.dump(improved_pipeline, file)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
from sqlalchemy import create_engine
import re
import nltk
import string
import numpy as np
from nltk.stem import WordNetLemmatizer
nltk.download(['punkt', 'wordnet', 'stopwords'])
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score
import sklearn
sklearn.__version__
# load data from database
engine = create_engine('sqlite:///data/InsertDatabaseName.db')
df = pd.read_sql_table('InsertTableName', con=engine)
df.head()
# independant variable (X), dependent variable (Y), category names
X = df['message']
Y = df.drop(['message', 'genre', 'id', 'original'], axis=1)
categories = Y.columns.tolist()
list(categories)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
text = text.lower()
# Tokenization
tokens = nltk.word_tokenize(text)
# Lemmatization
lemmatizer = WordNetLemmatizer()
stop_words = nltk.corpus.stopwords.words("english")
lemmatized = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return lemmatized
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
print(X_train.shape, y_train.shape)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
from sklearn.metrics import classification_report
test_pred = pipeline.predict(X_test)
for i in range(len(categories)):
print(categories[i])
print(classification_report(y_test[categories[i]], test_pred[:, i]))
y_train.shape
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params().keys()
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.8, 1.0),
'vect__max_features': (None, 10000),
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__learning_rate': [0.1, 1.0]
}
cv = GridSearchCV(pipeline, parameters, cv=3, n_jobs=-1)
cv.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
test_pred = cv.predict(X_test)
for i in range(len(categories)):
print(categories[i])
print(classification_report(y_test[categories[i]], test_pred[:, i]))
###Output
related
precision recall f1-score support
0 0.60 0.15 0.24 1531
1 0.78 0.97 0.87 4977
2 0.42 0.30 0.35 46
micro avg 0.77 0.77 0.77 6554
macro avg 0.60 0.47 0.49 6554
weighted avg 0.74 0.77 0.72 6554
request
precision recall f1-score support
0 0.91 0.97 0.94 5410
1 0.77 0.53 0.63 1144
micro avg 0.89 0.89 0.89 6554
macro avg 0.84 0.75 0.78 6554
weighted avg 0.88 0.89 0.88 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6523
1 0.08 0.03 0.05 31
micro avg 0.99 0.99 0.99 6554
macro avg 0.54 0.52 0.52 6554
weighted avg 0.99 0.99 0.99 6554
aid_related
precision recall f1-score support
0 0.76 0.87 0.81 3840
1 0.77 0.62 0.69 2714
micro avg 0.76 0.76 0.76 6554
macro avg 0.76 0.74 0.75 6554
weighted avg 0.76 0.76 0.76 6554
medical_help
precision recall f1-score support
0 0.94 0.98 0.96 6037
1 0.61 0.28 0.38 517
micro avg 0.93 0.93 0.93 6554
macro avg 0.77 0.63 0.67 6554
weighted avg 0.91 0.93 0.92 6554
medical_products
precision recall f1-score support
0 0.97 0.99 0.98 6230
1 0.61 0.34 0.44 324
micro avg 0.96 0.96 0.96 6554
macro avg 0.79 0.67 0.71 6554
weighted avg 0.95 0.96 0.95 6554
search_and_rescue
precision recall f1-score support
0 0.98 1.00 0.99 6365
1 0.59 0.20 0.29 189
micro avg 0.97 0.97 0.97 6554
macro avg 0.78 0.60 0.64 6554
weighted avg 0.97 0.97 0.97 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6436
1 0.50 0.10 0.17 118
micro avg 0.98 0.98 0.98 6554
macro avg 0.74 0.55 0.58 6554
weighted avg 0.98 0.98 0.98 6554
military
precision recall f1-score support
0 0.98 0.99 0.98 6331
1 0.56 0.37 0.44 223
micro avg 0.97 0.97 0.97 6554
macro avg 0.77 0.68 0.71 6554
weighted avg 0.96 0.97 0.97 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
micro avg 1.00 1.00 1.00 6554
macro avg 1.00 1.00 1.00 6554
weighted avg 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.97 0.98 0.98 6130
1 0.74 0.62 0.67 424
micro avg 0.96 0.96 0.96 6554
macro avg 0.86 0.80 0.83 6554
weighted avg 0.96 0.96 0.96 6554
food
precision recall f1-score support
0 0.96 0.98 0.97 5812
1 0.81 0.69 0.74 742
micro avg 0.95 0.95 0.95 6554
macro avg 0.88 0.83 0.86 6554
weighted avg 0.94 0.95 0.94 6554
shelter
precision recall f1-score support
0 0.95 0.99 0.97 5970
1 0.78 0.52 0.62 584
micro avg 0.94 0.94 0.94 6554
macro avg 0.87 0.75 0.80 6554
weighted avg 0.94 0.94 0.94 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6442
1 0.71 0.40 0.51 112
micro avg 0.99 0.99 0.99 6554
macro avg 0.85 0.70 0.75 6554
weighted avg 0.98 0.99 0.99 6554
money
precision recall f1-score support
0 0.98 0.99 0.99 6413
1 0.42 0.21 0.28 141
micro avg 0.98 0.98 0.98 6554
macro avg 0.70 0.60 0.63 6554
weighted avg 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6477
1 0.47 0.19 0.28 77
micro avg 0.99 0.99 0.99 6554
macro avg 0.73 0.60 0.63 6554
weighted avg 0.98 0.99 0.99 6554
refugees
precision recall f1-score support
0 0.97 0.99 0.98 6314
1 0.54 0.22 0.31 240
micro avg 0.96 0.96 0.96 6554
macro avg 0.75 0.61 0.65 6554
weighted avg 0.96 0.96 0.96 6554
death
precision recall f1-score support
0 0.98 0.99 0.98 6256
1 0.64 0.50 0.56 298
micro avg 0.96 0.96 0.96 6554
macro avg 0.81 0.74 0.77 6554
weighted avg 0.96 0.96 0.96 6554
other_aid
precision recall f1-score support
0 0.88 0.97 0.93 5678
1 0.49 0.16 0.25 876
micro avg 0.87 0.87 0.87 6554
macro avg 0.69 0.57 0.59 6554
weighted avg 0.83 0.87 0.84 6554
infrastructure_related
precision recall f1-score support
0 0.94 0.99 0.97 6132
1 0.47 0.11 0.18 422
micro avg 0.93 0.93 0.93 6554
macro avg 0.70 0.55 0.57 6554
weighted avg 0.91 0.93 0.92 6554
transport
precision recall f1-score support
0 0.96 0.99 0.98 6231
1 0.67 0.20 0.31 323
micro avg 0.96 0.96 0.96 6554
macro avg 0.82 0.60 0.65 6554
weighted avg 0.95 0.96 0.94 6554
buildings
precision recall f1-score support
0 0.97 0.99 0.98 6233
1 0.67 0.36 0.47 321
micro avg 0.96 0.96 0.96 6554
macro avg 0.82 0.68 0.72 6554
weighted avg 0.95 0.96 0.95 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6423
1 0.52 0.23 0.32 131
micro avg 0.98 0.98 0.98 6554
macro avg 0.75 0.61 0.65 6554
weighted avg 0.98 0.98 0.98 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6521
1 0.00 0.00 0.00 33
micro avg 0.99 0.99 0.99 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6483
1 0.42 0.11 0.18 71
micro avg 0.99 0.99 0.99 6554
macro avg 0.71 0.56 0.59 6554
weighted avg 0.98 0.99 0.99 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6524
1 0.00 0.00 0.00 30
micro avg 0.99 0.99 0.99 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 0.99 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6473
1 0.21 0.09 0.12 81
micro avg 0.98 0.98 0.98 6554
macro avg 0.60 0.54 0.56 6554
weighted avg 0.98 0.98 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 0.99 0.98 6271
1 0.40 0.12 0.18 283
micro avg 0.95 0.95 0.95 6554
macro avg 0.68 0.55 0.58 6554
weighted avg 0.94 0.95 0.94 6554
weather_related
precision recall f1-score support
0 0.88 0.95 0.91 4719
1 0.84 0.66 0.74 1835
micro avg 0.87 0.87 0.87 6554
macro avg 0.86 0.80 0.82 6554
weighted avg 0.87 0.87 0.86 6554
floods
precision recall f1-score support
0 0.96 0.99 0.97 6000
1 0.86 0.51 0.64 554
micro avg 0.95 0.95 0.95 6554
macro avg 0.91 0.75 0.81 6554
weighted avg 0.95 0.95 0.95 6554
storm
precision recall f1-score support
0 0.95 0.98 0.97 5960
1 0.71 0.50 0.58 594
micro avg 0.94 0.94 0.94 6554
macro avg 0.83 0.74 0.77 6554
weighted avg 0.93 0.94 0.93 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6490
1 0.43 0.16 0.23 64
micro avg 0.99 0.99 0.99 6554
macro avg 0.71 0.58 0.61 6554
weighted avg 0.99 0.99 0.99 6554
earthquake
precision recall f1-score support
0 0.98 0.99 0.98 5940
1 0.86 0.77 0.81 614
micro avg 0.97 0.97 0.97 6554
macro avg 0.92 0.88 0.90 6554
weighted avg 0.97 0.97 0.97 6554
cold
precision recall f1-score support
0 0.99 1.00 0.99 6425
1 0.67 0.32 0.43 129
micro avg 0.98 0.98 0.98 6554
macro avg 0.83 0.66 0.71 6554
weighted avg 0.98 0.98 0.98 6554
other_weather
precision recall f1-score support
0 0.96 0.99 0.97 6228
1 0.39 0.13 0.19 326
micro avg 0.95 0.95 0.95 6554
macro avg 0.68 0.56 0.58 6554
weighted avg 0.93 0.95 0.93 6554
direct_report
precision recall f1-score support
0 0.87 0.96 0.91 5283
1 0.71 0.43 0.54 1271
micro avg 0.86 0.86 0.86 6554
macro avg 0.79 0.69 0.73 6554
weighted avg 0.84 0.86 0.84 6554
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
pipeline2.get_params().keys()
parameters2 = {
'clf__estimator__n_estimators': [50, 100],
'clf__estimator__min_samples_split': [2, 4]
}
cv2 = GridSearchCV(pipeline2, param_grid = parameters2)
cv2.fit(X_train, y_train)
test_pred = cv2.predict(X_test)
for i in range(len(categories)):
print(categories[i])
print(classification_report(y_test[categories[i]], test_pred[:, i]))
cv.best_params_
cv2.best_params_
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
import joblib
joblib.dump(cv.best_estimator_, 'disaster_model.pkl')
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
%%capture
# import libraries
from sqlalchemy import create_engine
import pandas as pd
import re
import numpy as np
# nltk
import nltk
nltk.download('stopwords')
nltk.download('wordnet') # download for lemmatization
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import word_tokenize
# sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
# from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
# from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
# other models
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
# pickle
import pickle
# load data from database
engine = create_engine('sqlite:///DisasterData.db')
df = pd.read_sql_table('TextMessages', engine)
X = df[["message", "original", "genre"]]
Y = df.drop(columns= ["id", "message", "original", "genre"])
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
# Normalization
# Convert to lower case
text = text.lower()
# Remove punctuation characters - this regex finds everything which is not a combination of letters
# and numbers and replaces it with a whitespace
text = re.sub(r"[^a-zA-Z0-9]", " ", text)
# Tokenization
# Split into tokens
words = word_tokenize(text)
# Remove stopwords
words = [w for w in words if w not in stopwords.words("english")]
# Part-of-speech tagging may be useful here?
# Named Entity Recognition useful here?
# Stemming - only keep the stem of a word, a simple find-and-replace method which removes e.g. "ing"
# stemmed = [PorterStemmer().stem(w) for w in words]
# Lemmatization - a more complex approach using dictionaries which can e.g. map "is" and "was" to "be"
# Lemmatize verbs by specifying pos
lemmed_verbs = [WordNetLemmatizer().lemmatize(w, pos='v') for w in words]
# Reduce nouns to their root form
lemmed_nouns = [WordNetLemmatizer().lemmatize(w) for w in lemmed_verbs]
return lemmed_nouns
# Split the data in training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size = 0.05) # We drastically decrease the train_size to allow our GridSearch to run in a feasible amount of time
# Calculate the average accuracy for each target column
def print_acc(name, model, y_test, y_pred):
columns = y_test.columns
y_pred_df = pd.DataFrame(y_pred, columns = columns)
accuracy = (y_pred_df == y_test.reset_index().drop(["index"], axis = 1)).mean()
report = classification_report(y_true = y_test,
y_pred = y_pred,
target_names = list(y_test.columns),
# output_dict = True,
zero_division = 0)
print(f"F1 score, recall and precision per category {name}: \n")
# print(f"Average accuracy: {accuracy.mean()}")
# print(accuracy)
print(report)
return {'name': name, 'model': model, 'report': report, 'accuracy': accuracy}
# Create an empty array to store all the results and the models to find the best one in the end
results = []
###Output
_____no_output_____
###Markdown
Native model without optimization (MultiOutputClassifier with RandomForestClassifier)
###Code
# pipeline = Pipeline([
# ('features', FeatureUnion([
# ('text_pipeline', Pipeline([
# ('vect', CountVectorizer(tokenizer=tokenize)),
# ('tfidf', TfidfTransformer())
# ]))
# ])),
# ('clf', MultiOutputClassifier(RandomForestClassifier()))
# ])
random_forest_pipe = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
random_forest_pipe.fit(X_train["message"], y_train)
y_pred = random_forest_pipe.predict(X_test["message"])
results.append(print_acc("MultiOutputClassifier RandomForest", random_forest_pipe, y_test, y_pred))
###Output
F1 score, recall and precision per category MultiOutputClassifier RandomForest:
{'related': {'precision': 0.8064771627211124, 'recall': 0.9589284779992675, 'f1-score': 0.8761203661655393, 'support': 19113}, 'request': {'precision': 0.7895659798334064, 'recall': 0.42486435480066054, 'f1-score': 0.5524539877300614, 'support': 4239}, 'offer': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 112}, 'aid_related': {'precision': 0.7265138154027043, 'recall': 0.598624297616741, 'f1-score': 0.6563977266691454, 'support': 10322}, 'medical_help': {'precision': 0.75, 'recall': 0.0015113350125944584, 'f1-score': 0.0030165912518853692, 'support': 1985}, 'medical_products': {'precision': 0.8, 'recall': 0.003218020917135961, 'f1-score': 0.00641025641025641, 'support': 1243}, 'search_and_rescue': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 697}, 'security': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 454}, 'military': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 824}, 'water': {'precision': 0.8558558558558559, 'recall': 0.060202788339670466, 'f1-score': 0.11249259917110717, 'support': 1578}, 'food': {'precision': 0.8126635269492413, 'recall': 0.5616636528028933, 'f1-score': 0.6642429426860564, 'support': 2765}, 'shelter': {'precision': 0.8462929475587704, 'recall': 0.21224489795918366, 'f1-score': 0.33937635968092816, 'support': 2205}, 'clothing': {'precision': 0.7619047619047619, 'recall': 0.041666666666666664, 'f1-score': 0.07901234567901234, 'support': 384}, 'money': {'precision': 1.0, 'recall': 0.0017574692442882249, 'f1-score': 0.003508771929824561, 'support': 569}, 'missing_people': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 288}, 'refugees': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 840}, 'death': {'precision': 0.8571428571428571, 'recall': 0.005249343832020997, 'f1-score': 0.010434782608695651, 'support': 1143}, 'other_aid': {'precision': 0.5081967213114754, 'recall': 0.009497549019607842, 'f1-score': 0.01864661654135338, 'support': 3264}, 'infrastructure_related': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 1635}, 'transport': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 1147}, 'buildings': {'precision': 0.7142857142857143, 'recall': 0.00784313725490196, 'f1-score': 0.015515903801396431, 'support': 1275}, 'electricity': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 519}, 'tools': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 152}, 'hospitals': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 278}, 'shops': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 114}, 'aid_centers': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 301}, 'other_infrastructure': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 1098}, 'weather_related': {'precision': 0.8644484144707458, 'recall': 0.5558587018954624, 'f1-score': 0.6766299597972383, 'support': 6964}, 'floods': {'precision': 0.9375, 'recall': 0.1820388349514563, 'f1-score': 0.3048780487804878, 'support': 2060}, 'storm': {'precision': 0.8169934640522876, 'recall': 0.10789814415192059, 'f1-score': 0.19062142584826536, 'support': 2317}, 'fire': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 270}, 'earthquake': {'precision': 0.9005315110098709, 'recall': 0.5077054794520548, 'f1-score': 0.6493293183684643, 'support': 2336}, 'cold': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 512}, 'other_weather': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 1320}, 'direct_report': 
{'precision': 0.7427027027027027, 'recall': 0.2851805728518057, 'f1-score': 0.412117576484703, 'support': 4818}, 'micro avg': {'precision': 0.7979485107624626, 'recall': 0.44921090206087866, 'f1-score': 0.5748217375135415, 'support': 79141}, 'macro avg': {'precision': 0.41403072672004304, 'recall': 0.12931296356480948, 'f1-score': 0.15917730227441204, 'support': 79141}, 'weighted avg': {'precision': 0.6841460796350334, 'recall': 0.44921090206087866, 'f1-score': 0.4807621728093338, 'support': 79141}, 'samples avg': {'precision': 0.6873152418171691, 'recall': 0.4405194125653719, 'f1-score': 0.4861795276591378, 'support': 79141}}
###Markdown
kNN
###Code
# knn_pipe = Pipeline([
# ('vect', CountVectorizer(tokenizer=tokenize)),
# ('tfidf', TfidfTransformer()),
# ('clf', KNeighborsClassifier())
# ])
# knn_pipe.fit(X_train["message"], y_train)
# y_pred_knn = knn_pipe.predict(X_test["message"])
# results.append(print_acc("kNN", knn_pipe, y_test, y_pred_knn))
###Output
_____no_output_____
###Markdown
Decision tree
###Code
# decision_tree_pipe = Pipeline([
# ('vect', CountVectorizer(tokenizer=tokenize)),
# ('tfidf', TfidfTransformer()),
# ('clf', DecisionTreeClassifier())
# ])
# decision_tree_pipe.fit(X_train["message"], y_train)
# y_pred_decision_tree = decision_tree_pipe.predict(X_test["message"])
# results.append(print_acc("Decision Tree", decision_tree_pipe, y_test, y_pred_decision_tree))
###Output
_____no_output_____
###Markdown
Random Forest
###Code
# random_forest_only_pipe = Pipeline([
# ('vect', CountVectorizer(tokenizer=tokenize)),
# ('tfidf', TfidfTransformer()),
# ('clf', RandomForestClassifier())
# ])
# random_forest_only_pipe.fit(X_train["message"], y_train)
# y_pred_random_forest_only = random_forest_only_pipe.predict(X_test["message"])
# results.append(print_acc("Random Forest", random_forest_only_pipe, y_test, y_pred_random_forest_only))
# for result in results:
# print(result["name"])
# print(result["accuracy"].mean())
###Output
_____no_output_____
###Markdown
Improve models using GridSearch MultiOutputClassifier + RandomForestClassifier
###Code
# Check for available parameters to optimize
# random_forest_pipe.get_params().keys()
# parameters_mo_rf = {
# # vect
# # https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
# # tfidf
# # https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html
# 'tfidf__norm' : ['l1', 'l2'],
# # 'tfidf__use_idf' : [True, False],
# # 'tfidf__smooth_idf': [True, False],
# # 'tfidf__sublinear_tf' : [True, False],
# # clf
# # https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
# 'clf__estimator__criterion' : ['gini', 'entropy'],
# 'clf__estimator__n_estimators': [50, 100, 150, 200],
# 'clf__estimator__max_depth' : [None, 5, 10],
# }
# cv_parameters_mo_rf = GridSearchCV(random_forest_pipe, param_grid=parameters_mo_rf)
# cv_parameters_mo_rf.fit(X_train["message"], y_train)
# y_pred_mo_rf_cv = cv_parameters_mo_rf.predict(X_test["message"])
# results.append(print_acc("MultiOutputClassifier Random Forest CV", cv_parameters_mo_rf, y_test, y_pred_mo_rf_cv))
###Output
_____no_output_____
###Markdown
kNN
###Code
# knn_pipe.get_params().keys()
# parameters_knn = {
# # vect
# # https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
# # tfidf
# # https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html
# 'tfidf__norm' : ['l1', 'l2'],
# # 'tfidf__use_idf' : [True, False],
# # 'tfidf__smooth_idf': [True, False],
# # 'tfidf__sublinear_tf' : [True, False],
# # clf
# # https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
# 'clf__n_neighbors' : [3, 5, 8],
# 'clf__weights' : ['uniform', 'distance'],
# 'clf__algorithm' : ['auto', 'ball_tree', 'kd_tree', 'brute'],
# }
# cv_knn = GridSearchCV(knn_pipe, param_grid=parameters_knn)
# cv_knn.fit(X_train["message"], y_train)
# y_pred_knn_cv = cv_knn.predict(X_test["message"])
# results.append(print_acc("kNN CV", cv_knn, y_test, y_pred_mo_rf_cv))
###Output
_____no_output_____
###Markdown
Classification report
###Code
from sklearn.metrics import classification_report
report = classification_report(y_true = y_test,
y_pred = y_pred,
target_names = list(y_test.columns),
output_dict = True,
zero_division = 0)
###Output
_____no_output_____
###Markdown
Evaluate the results
###Code
for result in results:
print(result["name"])
print(result["accuracy"].mean())
###Output
MultiOutputClassifier RandomForest
0.9399697146986956
kNN
0.9303656032396095
Decision Tree
0.9243796675499878
Random Forest
0.9397701070310077
MultiOutputClassifier Random Forest CV
0.9394546351424211
kNN CV
0.9394546351424211
###Markdown
As we can see, the models all performed very similarly; only the decision tree model is slightly worse than the others. Surprisingly, our unoptimized original model with a MultiOutputClassifier and a RandomForestClassifier performed best. We can therefore assume that the default configuration fits our problem well and that the optimization attempt only led us away from the optimum. 94% is quite a good result, so we stick with that model.
###Code
best_model = results[0]['model']
###Output
_____no_output_____
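###Markdown
Instead of hardcoding `results[0]`, the best entry could also be picked programmatically. A minimal sketch (not run here), assuming each entry in `results` carries the `name`, `accuracy` and `model` keys used above:
###Code
# rank the collected results by mean accuracy across all categories
best_result = max(results, key=lambda r: r["accuracy"].mean())
print(best_result["name"], best_result["accuracy"].mean())
# best_result["model"] is then the estimator to keep
###Output
_____no_output_____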
###Markdown
Now that we have found the best model configuration, we retrain it on 80% of the data
###Code
X_train_new, X_test_new, y_train_new, y_test_new = train_test_split(X, Y, train_size = 0.80)
best_model.fit(X_train_new["message"], y_train_new)
y_pred_final = best_model.predict(X_test_new["message"])
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
model_params = best_model.get_params()
model = best_model
with open('model_params.obj', 'wb') as fileObj:
    pickle.dump(model_params, fileObj)
with open('model.obj', 'wb') as fileObj:
    pickle.dump(model, fileObj)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import re
import nltk
nltk.download(['stopwords', 'punkt', 'wordnet'])
import pickle
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
from sklearn.pipeline import Pipeline
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
# read in file
engine = create_engine('sqlite:///disaster.db')
df = pd.read_sql_table('disaster', engine)
X = df['message']
y = df.iloc[:, 4:]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
    # Define common params
    stop_words = stopwords.words("english")
    lemmatizer = WordNetLemmatizer()
    # Normalize case and replace punctuation with spaces (so adjacent words are not glued together)
    text = re.sub(r'[^a-zA-Z0-9]', ' ', text.lower())
    # Tokenize text
    words = word_tokenize(text)
    # Lemmatize and remove stop words
    tokens = [lemmatizer.lemmatize(word) for word in words if word not in stop_words]
    return tokens
###Output
_____no_output_____
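###Markdown
A quick sanity check of the tokenizer on a made-up example message (the sentence below is purely illustrative):
###Code
tokenize("Please, we urgently need water and food at the shelter!")
###Output
_____no_output_____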
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('mutclf', MultiOutputClassifier(RandomForestClassifier(), n_jobs=-1))])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# Split train & test data
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Train pipeline
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Predict use the trained model
y_pred = pipeline.predict(X_test)
# Report Model Effectiveness
for i, col in enumerate(y_test.columns):
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_test[col].tolist(), list(y_pred[:, i]), target_names=target_names))
###Output
precision recall f1-score support
class 0 0.62 0.01 0.01 1556
class 1 0.69 0.00 0.00 4952
class 2 0.01 0.98 0.01 46
avg / total 0.67 0.01 0.01 6554
precision recall f1-score support
class 0 0.83 1.00 0.91 5434
class 1 0.75 0.00 0.01 1120
avg / total 0.82 0.83 0.75 6554
precision recall f1-score support
class 0 1.00 1.00 1.00 6531
class 1 0.00 0.00 0.00 23
avg / total 0.99 1.00 0.99 6554
precision recall f1-score support
class 0 0.59 1.00 0.74 3842
class 1 0.50 0.00 0.00 2712
avg / total 0.55 0.59 0.43 6554
precision recall f1-score support
class 0 0.93 1.00 0.96 6073
class 1 0.00 0.00 0.00 481
avg / total 0.86 0.93 0.89 6554
precision recall f1-score support
class 0 0.95 1.00 0.97 6227
class 1 0.00 0.00 0.00 327
avg / total 0.90 0.95 0.93 6554
precision recall f1-score support
class 0 0.97 1.00 0.99 6378
class 1 0.00 0.00 0.00 176
avg / total 0.95 0.97 0.96 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6439
class 1 0.00 0.00 0.00 115
avg / total 0.97 0.98 0.97 6554
precision recall f1-score support
class 0 0.97 1.00 0.98 6331
class 1 0.00 0.00 0.00 223
avg / total 0.93 0.97 0.95 6554
precision recall f1-score support
class 0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
precision recall f1-score support
class 0 0.93 1.00 0.97 6119
class 1 1.00 0.00 0.00 435
avg / total 0.94 0.93 0.90 6554
precision recall f1-score support
class 0 0.89 1.00 0.94 5816
class 1 0.00 0.00 0.00 738
avg / total 0.79 0.89 0.83 6554
precision recall f1-score support
class 0 0.91 1.00 0.95 5943
class 1 1.00 0.00 0.01 611
avg / total 0.92 0.91 0.86 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6461
class 1 1.00 0.01 0.02 93
avg / total 0.99 0.99 0.98 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6405
class 1 0.00 0.00 0.00 149
avg / total 0.96 0.98 0.97 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6482
class 1 0.00 0.00 0.00 72
avg / total 0.98 0.99 0.98 6554
precision recall f1-score support
class 0 0.97 1.00 0.98 6330
class 1 0.00 0.00 0.00 224
avg / total 0.93 0.97 0.95 6554
precision recall f1-score support
class 0 0.96 1.00 0.98 6270
class 1 0.00 0.00 0.00 284
avg / total 0.92 0.96 0.94 6554
precision recall f1-score support
class 0 0.87 1.00 0.93 5681
class 1 0.00 0.00 0.00 873
avg / total 0.75 0.87 0.80 6554
precision recall f1-score support
class 0 0.94 1.00 0.97 6129
class 1 0.00 0.00 0.00 425
avg / total 0.87 0.94 0.90 6554
precision recall f1-score support
class 0 0.95 1.00 0.98 6254
class 1 1.00 0.00 0.01 300
avg / total 0.96 0.95 0.93 6554
precision recall f1-score support
class 0 0.95 1.00 0.97 6214
class 1 0.33 0.00 0.01 340
avg / total 0.92 0.95 0.92 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6410
class 1 0.00 0.00 0.00 144
avg / total 0.96 0.98 0.97 6554
precision recall f1-score support
class 0 0.99 1.00 1.00 6512
class 1 0.00 0.00 0.00 42
avg / total 0.99 0.99 0.99 6554
precision recall f1-score support
class 0 0.99 1.00 1.00 6491
class 1 0.00 0.00 0.00 63
avg / total 0.98 0.99 0.99 6554
precision recall f1-score support
class 0 1.00 1.00 1.00 6529
class 1 0.00 0.00 0.00 25
avg / total 0.99 1.00 0.99 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6466
class 1 0.00 0.00 0.00 88
avg / total 0.97 0.99 0.98 6554
precision recall f1-score support
class 0 0.96 1.00 0.98 6263
class 1 0.00 0.00 0.00 291
avg / total 0.91 0.96 0.93 6554
precision recall f1-score support
class 0 0.73 1.00 0.84 4770
class 1 0.86 0.00 0.01 1784
avg / total 0.76 0.73 0.62 6554
precision recall f1-score support
class 0 0.92 1.00 0.96 6031
class 1 1.00 0.00 0.01 523
avg / total 0.93 0.92 0.88 6554
precision recall f1-score support
class 0 0.91 1.00 0.95 5958
class 1 0.50 0.00 0.00 596
avg / total 0.87 0.91 0.87 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6482
class 1 0.00 0.00 0.00 72
avg / total 0.98 0.99 0.98 6554
precision recall f1-score support
class 0 0.91 1.00 0.95 5935
class 1 0.67 0.00 0.01 619
avg / total 0.88 0.91 0.86 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6423
class 1 0.00 0.00 0.00 131
avg / total 0.96 0.98 0.97 6554
precision recall f1-score support
class 0 0.95 1.00 0.98 6254
class 1 0.00 0.00 0.00 300
avg / total 0.91 0.95 0.93 6554
precision recall f1-score support
class 0 0.80 1.00 0.89 5251
class 1 0.75 0.00 0.00 1303
avg / total 0.79 0.80 0.71 6554
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 5000, 10000),
'tfidf__use_idf': (True, False)}
cv = GridSearchCV(pipeline, param_grid=parameters)
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Report New Model Effectiveness
for i, col in enumerate(y_test.columns):
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_test[col].tolist(), list(y_pred[:, i]), target_names=target_names))
###Output
precision recall f1-score support
class 0 0.57 0.01 0.01 1556
class 1 0.76 1.00 0.86 4952
class 2 0.00 0.00 0.00 46
avg / total 0.71 0.76 0.65 6554
precision recall f1-score support
class 0 0.83 1.00 0.91 5434
class 1 1.00 0.00 0.00 1120
avg / total 0.86 0.83 0.75 6554
precision recall f1-score support
class 0 1.00 1.00 1.00 6531
class 1 0.00 0.00 0.00 23
avg / total 0.99 1.00 0.99 6554
precision recall f1-score support
class 0 0.59 1.00 0.74 3842
class 1 0.75 0.00 0.00 2712
avg / total 0.65 0.59 0.43 6554
precision recall f1-score support
class 0 0.93 1.00 0.96 6073
class 1 0.00 0.00 0.00 481
avg / total 0.86 0.93 0.89 6554
precision recall f1-score support
class 0 0.95 1.00 0.97 6227
class 1 0.00 0.00 0.00 327
avg / total 0.90 0.95 0.93 6554
precision recall f1-score support
class 0 0.97 1.00 0.99 6378
class 1 0.00 0.00 0.00 176
avg / total 0.95 0.97 0.96 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6439
class 1 0.00 0.00 0.00 115
avg / total 0.97 0.98 0.97 6554
precision recall f1-score support
class 0 0.97 1.00 0.98 6331
class 1 0.00 0.00 0.00 223
avg / total 0.93 0.97 0.95 6554
precision recall f1-score support
class 0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
precision recall f1-score support
class 0 0.93 1.00 0.97 6119
class 1 0.00 0.00 0.00 435
avg / total 0.87 0.93 0.90 6554
precision recall f1-score support
class 0 0.89 1.00 0.94 5816
class 1 0.00 0.00 0.00 738
avg / total 0.79 0.89 0.83 6554
precision recall f1-score support
class 0 0.91 1.00 0.95 5943
class 1 1.00 0.00 0.01 611
avg / total 0.92 0.91 0.86 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6461
class 1 1.00 0.01 0.02 93
avg / total 0.99 0.99 0.98 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6405
class 1 0.00 0.00 0.00 149
avg / total 0.96 0.98 0.97 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6482
class 1 0.00 0.00 0.00 72
avg / total 0.98 0.99 0.98 6554
precision recall f1-score support
class 0 0.97 1.00 0.98 6330
class 1 0.00 0.00 0.00 224
avg / total 0.93 0.97 0.95 6554
precision recall f1-score support
class 0 0.96 1.00 0.98 6270
class 1 0.00 0.00 0.00 284
avg / total 0.92 0.96 0.94 6554
precision recall f1-score support
class 0 0.87 1.00 0.93 5681
class 1 0.00 0.00 0.00 873
avg / total 0.75 0.87 0.80 6554
precision recall f1-score support
class 0 0.94 1.00 0.97 6129
class 1 0.50 0.00 0.00 425
avg / total 0.91 0.94 0.90 6554
precision recall f1-score support
class 0 0.95 1.00 0.98 6254
class 1 1.00 0.00 0.01 300
avg / total 0.96 0.95 0.93 6554
precision recall f1-score support
class 0 0.95 1.00 0.97 6214
class 1 0.33 0.00 0.01 340
avg / total 0.92 0.95 0.92 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6410
class 1 0.00 0.00 0.00 144
avg / total 0.96 0.98 0.97 6554
precision recall f1-score support
class 0 0.99 1.00 1.00 6512
class 1 0.00 0.00 0.00 42
avg / total 0.99 0.99 0.99 6554
precision recall f1-score support
class 0 0.99 1.00 1.00 6491
class 1 0.00 0.00 0.00 63
avg / total 0.98 0.99 0.99 6554
precision recall f1-score support
class 0 1.00 1.00 1.00 6529
class 1 0.00 0.00 0.00 25
avg / total 0.99 1.00 0.99 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6466
class 1 0.00 0.00 0.00 88
avg / total 0.97 0.99 0.98 6554
precision recall f1-score support
class 0 0.96 1.00 0.98 6263
class 1 0.00 0.00 0.00 291
avg / total 0.91 0.96 0.93 6554
precision recall f1-score support
class 0 0.73 1.00 0.84 4770
class 1 1.00 0.00 0.01 1784
avg / total 0.80 0.73 0.62 6554
precision recall f1-score support
class 0 0.92 1.00 0.96 6031
class 1 1.00 0.01 0.02 523
avg / total 0.93 0.92 0.88 6554
precision recall f1-score support
class 0 0.91 1.00 0.95 5958
class 1 0.50 0.00 0.00 596
avg / total 0.87 0.91 0.87 6554
precision recall f1-score support
class 0 0.99 1.00 0.99 6482
class 1 0.00 0.00 0.00 72
avg / total 0.98 0.99 0.98 6554
precision recall f1-score support
class 0 0.91 1.00 0.95 5935
class 1 0.67 0.00 0.01 619
avg / total 0.88 0.91 0.86 6554
precision recall f1-score support
class 0 0.98 1.00 0.99 6423
class 1 0.00 0.00 0.00 131
avg / total 0.96 0.98 0.97 6554
precision recall f1-score support
class 0 0.95 1.00 0.98 6254
class 1 0.00 0.00 0.00 300
avg / total 0.91 0.95 0.93 6554
precision recall f1-score support
class 0 0.80 1.00 0.89 5251
class 1 1.00 0.00 0.00 1303
avg / total 0.84 0.80 0.71 6554
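###Markdown
It is also worth checking which parameter combination the grid search actually selected; a short sketch using GridSearchCV's standard attributes:
###Code
# parameters and cross-validation score of the best estimator found by the grid search
print(cv.best_params_)
print(cv.best_score_)
###Output
_____no_output_____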
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('mutclf', MultiOutputClassifier(AdaBoostClassifier(), n_jobs=-1))])
###Output
_____no_output_____
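###Markdown
`pipeline2` above is only defined, not trained. A minimal sketch of how it could be fitted and evaluated on the same split as before (assuming the training cells above have been run):
###Code
pipeline2.fit(X_train, y_train)
y_pred2 = pipeline2.predict(X_test)
# per-category report, mirroring the evaluation loop used in step 5
for i, col in enumerate(y_test.columns):
    print(col)
    print(classification_report(y_test[col].tolist(), list(y_pred2[:, i])))
###Output
_____no_output_____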
###Markdown
9. Export your model as a pickle file
###Code
# Save CV Model
with open('model.pickle', 'wb') as file:
pickle.dump(cv, file)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
#----------------------------------
import pickle
import warnings
import string
import unittest
warnings.filterwarnings("ignore")
#----------------------------------
import re
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
nltk.download(['punkt', 'wordnet','stopwords'])
# ------------------------------------------
from nltk.stem.porter import PorterStemmer
#from nltk.stem.wordnet import WordNetLemmatizer
# ------------------------------------------
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.utils import shuffle
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline, FeatureUnion
# import sklearn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import precision_score, recall_score, f1_score,classification_report, make_scorer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.base import BaseEstimator, TransformerMixin
## Execute this code cell to output the values in the categories table
# connect to the database (the database file will be disaster.db)
engine = create_engine('sqlite:///disaster.db')
# load data from database
df = pd.read_sql_table('disaster', engine)
X = df['message']
y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
stop_words = stopwords.words("english")
lemmatizer = WordNetLemmatizer()
# normalize case and remove punctuation
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# tokenize text
tokens = word_tokenize(text)
    # lemmatize and remove stop words
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return tokens
def display_results(y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1),labels=labels)
#confusion_mat = confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1))
#confusion_matrix(y_test , y_pred , labels=labels)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
def build_pipeline():
pipeline = Pipeline ([
('vect' , CountVectorizer(tokenizer=tokenize)),
('tfidf' , TfidfTransformer()),
('clf' , MultiOutputClassifier(RandomForestClassifier( ) ))
])
return pipeline
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# split data into train and test sets and build the pipeline
X_train, X_test, y_train, y_test = train_test_split(X, y)
pipeline = build_pipeline()
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
def display_results2(y_test, y_pred):
    results_dict = {}
    for pred, label, col in zip(y_pred.transpose(), y_test.values.transpose(), y_test.columns):
        print(col)
        print(classification_report(label, pred))
        results_dict[col] = classification_report(label, pred, output_dict=True)
    return results_dict
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
#make_scorer(f1_score, average='micro')
#parameters = {
#'vect__max_df': [0.8]
# ,'clf__estimator__max_depth': (25, 50, None)
# ,'clf__estimator__min_samples_leaf': [1,5,8]
#}
#cv = GridSearchCV(pipeline, parameters, cv=5, n_jobs=-1 ,verbose=10)
#parameters = {
# 'vect__max_df': [0.8]
# ,'clf__estimator__max_depth': (25, 50, None)
# ,'clf__estimator__max_features': ['log2', 'sqrt','auto']
#}
#,'clf__estimator__min_samples_split': (2, 10, 25, 50, 100)
# ,'clf__estimator__min_samples_leaf': [1,5,8]
# use micro-averaged F1 as the scoring metric for the grid search
scorer = make_scorer(f1_score, average='micro')
parameters = {
    'vect__max_df': [0.8],
    'clf__estimator__max_depth': (25, 50, None),
    'clf__estimator__min_samples_leaf': [1, 5, 8],
    'clf__estimator__max_features': ['log2', 'sqrt', 'auto'],
    'clf__estimator__min_samples_split': (2, 10, 25, 50, 100),
}
cv = GridSearchCV(pipeline, parameters, scoring=scorer, cv=5, n_jobs=-1, verbose=10)
cv.fit(X_train, y_train)
# predict on test data
y_pred_cv = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
#cv_results = model_performance(y_test, y_pred_cv)
display_results2(y_test, y_pred_cv)
###Output
related
precision recall f1-score support
0 0.63 0.47 0.53 1571
1 0.84 0.91 0.87 4928
2 0.46 0.29 0.36 55
micro avg 0.80 0.80 0.80 6554
macro avg 0.64 0.56 0.59 6554
weighted avg 0.78 0.80 0.79 6554
request
precision recall f1-score support
0 0.89 0.98 0.94 5417
1 0.83 0.45 0.58 1137
micro avg 0.89 0.89 0.89 6554
macro avg 0.86 0.71 0.76 6554
weighted avg 0.88 0.89 0.87 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6528
1 0.00 0.00 0.00 26
micro avg 1.00 1.00 1.00 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.75 0.85 0.79 3865
1 0.73 0.58 0.65 2689
micro avg 0.74 0.74 0.74 6554
macro avg 0.74 0.72 0.72 6554
weighted avg 0.74 0.74 0.73 6554
medical_help
precision recall f1-score support
0 0.92 0.99 0.96 6030
1 0.53 0.07 0.12 524
micro avg 0.92 0.92 0.92 6554
macro avg 0.73 0.53 0.54 6554
weighted avg 0.89 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.96 1.00 0.98 6248
1 0.66 0.08 0.15 306
micro avg 0.96 0.96 0.96 6554
macro avg 0.81 0.54 0.56 6554
weighted avg 0.94 0.96 0.94 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6375
1 0.32 0.03 0.06 179
micro avg 0.97 0.97 0.97 6554
macro avg 0.64 0.52 0.52 6554
weighted avg 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6433
1 0.14 0.01 0.02 121
micro avg 0.98 0.98 0.98 6554
macro avg 0.56 0.50 0.50 6554
weighted avg 0.97 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.99 6364
1 0.47 0.10 0.17 190
micro avg 0.97 0.97 0.97 6554
macro avg 0.72 0.55 0.58 6554
weighted avg 0.96 0.97 0.96 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
micro avg 1.00 1.00 1.00 6554
macro avg 1.00 1.00 1.00 6554
weighted avg 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.95 1.00 0.97 6151
1 0.86 0.24 0.37 403
micro avg 0.95 0.95 0.95 6554
macro avg 0.90 0.62 0.67 6554
weighted avg 0.95 0.95 0.94 6554
food
precision recall f1-score support
0 0.95 0.98 0.97 5842
1 0.80 0.59 0.68 712
micro avg 0.94 0.94 0.94 6554
macro avg 0.88 0.79 0.82 6554
weighted avg 0.94 0.94 0.94 6554
shelter
precision recall f1-score support
0 0.94 0.99 0.97 5985
1 0.83 0.36 0.50 569
micro avg 0.94 0.94 0.94 6554
macro avg 0.88 0.68 0.74 6554
weighted avg 0.93 0.94 0.93 6554
clothing
precision recall f1-score support
0 0.99 1.00 0.99 6460
1 0.56 0.10 0.16 94
micro avg 0.99 0.99 0.99 6554
macro avg 0.77 0.55 0.58 6554
weighted avg 0.98 0.99 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6401
1 0.67 0.03 0.05 153
micro avg 0.98 0.98 0.98 6554
macro avg 0.82 0.51 0.52 6554
weighted avg 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6477
1 0.50 0.01 0.03 77
micro avg 0.99 0.99 0.99 6554
macro avg 0.74 0.51 0.51 6554
weighted avg 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6319
1 0.56 0.04 0.07 235
micro avg 0.96 0.96 0.96 6554
macro avg 0.76 0.52 0.53 6554
weighted avg 0.95 0.96 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6253
1 0.72 0.14 0.23 301
micro avg 0.96 0.96 0.96 6554
macro avg 0.84 0.57 0.61 6554
weighted avg 0.95 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.87 0.99 0.93 5659
1 0.56 0.05 0.10 895
micro avg 0.86 0.86 0.86 6554
macro avg 0.71 0.52 0.51 6554
weighted avg 0.83 0.86 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.94 1.00 0.97 6136
1 0.11 0.00 0.00 418
micro avg 0.94 0.94 0.94 6554
macro avg 0.52 0.50 0.49 6554
weighted avg 0.88 0.94 0.91 6554
transport
precision recall f1-score support
0 0.96 1.00 0.98 6238
1 0.75 0.09 0.15 316
micro avg 0.95 0.95 0.95 6554
macro avg 0.85 0.54 0.57 6554
weighted avg 0.95 0.95 0.94 6554
buildings
precision recall f1-score support
0 0.96 1.00 0.98 6240
1 0.70 0.12 0.21 314
micro avg 0.96 0.96 0.96 6554
macro avg 0.83 0.56 0.59 6554
weighted avg 0.95 0.96 0.94 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6421
1 0.67 0.06 0.11 133
micro avg 0.98 0.98 0.98 6554
macro avg 0.82 0.53 0.55 6554
weighted avg 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6520
1 0.00 0.00 0.00 34
micro avg 0.99 0.99 0.99 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6483
1 0.00 0.00 0.00 71
micro avg 0.99 0.99 0.99 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.98 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6529
1 0.00 0.00 0.00 25
micro avg 1.00 1.00 1.00 6554
macro avg 0.50 0.50 0.50 6554
weighted avg 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6475
1 0.00 0.00 0.00 79
micro avg 0.99 0.99 0.99 6554
macro avg 0.49 0.50 0.50 6554
weighted avg 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.96 1.00 0.98 6269
1 0.33 0.00 0.01 285
micro avg 0.96 0.96 0.96 6554
macro avg 0.64 0.50 0.49 6554
weighted avg 0.93 0.96 0.94 6554
weather_related
precision recall f1-score support
0 0.87 0.95 0.91 4720
1 0.85 0.65 0.73 1834
micro avg 0.87 0.87 0.87 6554
macro avg 0.86 0.80 0.82 6554
weighted avg 0.87 0.87 0.86 6554
floods
precision recall f1-score support
0 0.95 1.00 0.97 5993
1 0.89 0.40 0.55 561
micro avg 0.94 0.94 0.94 6554
macro avg 0.92 0.70 0.76 6554
weighted avg 0.94 0.94 0.93 6554
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
import pickle
import sys
model_filepath = sys.argv[1:]
# export model to pickle file
with open('rf_model.pkl', 'wb') as f:
    pickle.dump(pipeline, f)
###Output
_____no_output_____
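###Markdown
The cell above only exports the model; it does not yet try another algorithm. A minimal sketch of one option, swapping the random forest for the already-imported AdaBoostClassifier inside the same pipeline structure (assuming the train/test split from step 4):
###Code
pipeline_ada = Pipeline([
    ('vect', CountVectorizer(tokenizer=tokenize)),
    ('tfidf', TfidfTransformer()),
    ('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
pipeline_ada.fit(X_train, y_train)
display_results2(y_test, pipeline_ada.predict(X_test))
###Output
_____no_output_____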
###Markdown
9. Export your model as a pickle file
###Code
pkl_filename = "classifier.pkl"
with open(pkl_filename, 'wb') as file:
pickle.dump(cv, file)
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
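A rough sketch of how the steps above could be arranged into such a script is shown below; the command-line handling and file names are illustrative assumptions, not taken from the template file.
###Code
import sys
import pickle

def main():
    # illustrative skeleton only -- the real template may use different names
    if len(sys.argv) != 3:
        print("Usage: python train.py <database_filepath> <model_filepath>")
        return
    database_filepath, model_filepath = sys.argv[1:]
    engine = create_engine('sqlite:///' + database_filepath)
    df = pd.read_sql_table('disaster', engine)
    X = df['message']
    y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    model = build_pipeline()      # pipeline builder defined earlier in this notebook
    model.fit(X_train, y_train)
    display_results2(y_test, model.predict(X_test))
    with open(model_filepath, 'wb') as f:
        pickle.dump(model, f)

# in the script this would be guarded by: if __name__ == '__main__': main()
###Output
_____no_output_____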
###Code
statistics = df.describe()
statistics
import plotly.graph_objs as go
import plotly
from plotly import tools
import plotly.figure_factory as ff
from plotly.offline import init_notebook_mode, iplot
from plotly.graph_objs import Bar
df2 = df.describe().T
df2
table_cat = ff.create_table(df.describe(include=['O']).T, index=True, index_title='Categorical columns')
iplot(table_cat)
table_cat
import json
genre_counts = df.groupby('genre').count()['message']
genre_names = list(genre_counts.index)
# Show distribution of different category
category = list(df.columns[4:])
category_counts = []
for column_name in category:
category_counts.append(np.sum(df[column_name]))
# extract data for top 5 categories
categories = df.iloc[:,4:]
categories_mean_top5 = categories.mean().sort_values(ascending=False).head(5)
categories_names_top5 = list(categories_mean_top5.index)
# extract data for tail 5 categories
categories = df.iloc[:,4:]
categories_mean_tail5 = categories.mean().sort_values(ascending=False).tail(5)
categories_names_tail5 = list(categories_mean_tail5.index)
# create visuals
graphs = [
{
'data': [
Bar(
x=genre_names,
y=genre_counts
)
],
'layout': {
'title': 'Distribution of Message Genres',
'yaxis': {
'title': "Count"
},
'xaxis': {
'title': "Genre"
}
}
},
{
'data': [
Bar(
x=categories_names_top5,
y=categories_mean_top5
)
],
'layout': {
'title': 'Top five Categories',
'yaxis': {
'title': "Mean"
},
'xaxis': {
'title': "Category"
}
}
},
{
'data': [
Bar(
x=categories_names_tail5,
y=categories_mean_tail5
)
],
'layout': {
'title': 'Tail five Categories',
'yaxis': {
'title': "Mean"
},
'xaxis': {
'title': "Categories"
}
}
}
]
# encode plotly graphs in JSON
ids = ["graph-{}".format(i) for i, _ in enumerate(graphs)]
graphJSON = json.dumps(graphs, cls=plotly.utils.PlotlyJSONEncoder)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import numpy as np
import pandas as pd
import re
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import nltk
nltk.download(['punkt', 'wordnet', 'stopwords'])
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix,classification_report, accuracy_score, recall_score, precision_score
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sqlalchemy import create_engine
import pickle
# load data from database
engine = create_engine('sqlite:///data/DisasterResponse.db')
df = pd.read_sql_table("DisasterResponse", con=engine)
categories = df.columns[4:]
X = df[['message']].values[:, 0]
y = df[categories].values
type(y)
df.head()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
# get list of all urls using regex
detected_urls = re.findall(url_regex, text)
# replace each url in text string with placeholder
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# tokenize text
tokens = word_tokenize(text)
# remove stopwords
#tokens = [t for t in tokens if t not in stopwords.words('english')]
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# iterate through each token
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
def display_results(y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
# Splitting data
X_train, X_test, y_train, y_test = train_test_split(X, y)
# train classifier
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# predict on test data
y_pred = pipeline.predict(X_test)
# display results for the first test message only (all 36 labels of one sample)
display_results(y_test[0], y_pred[0])
#display_results(y_test, y_pred)
###Output
Labels: [0 1]
Confusion Matrix:
[[31 0]
[ 1 4]]
Accuracy: 0.9722222222222222
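###Markdown
The check above only compares the 36 labels of a single test message. A per-category report, as asked for in step 5, could be produced along these lines (a sketch using the `categories` list and `classification_report` already imported above):
###Code
for i, col in enumerate(categories):
    print(col)
    print(classification_report(y_test[:, i], y_pred[:, i]))
###Output
_____no_output_____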
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
# Show parameters for the pipline
pipeline.get_params()
parameters = {
'clf__estimator__n_estimators': [10, 20]
}
cv = GridSearchCV(pipeline, param_grid = parameters)
cv
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# Fit model
cv.fit(X_train, y_train)
# Predicting model
y_pred = cv.predict(X_test)
#multioutput_classification_report(y_test, y_pred)
columns = ['related', 'request', 'offer', 'aid_related', 'medical_help', 'medical_products',
'search_and_rescue', 'security', 'military', 'child_alone', 'water', 'food', 'shelter',
'clothing', 'money', 'missing_people', 'refugees', 'death', 'other_aid',
'infrastructure_related', 'transport', 'buildings', 'electricity', 'tools',
'hospitals', 'shops', 'aid_centers', 'other_infrastructure',
'weather_related', 'floods', 'storm', 'fire', 'earthquake', 'cold', 'other_weather', 'direct_report']
for i, col in enumerate(columns):
    print(col)
    # compare the i-th category column, not the i-th test sample
    accuracy = accuracy_score(y_test[:, i], y_pred[:, i])
    precision = precision_score(y_test[:, i], y_pred[:, i], average='weighted')
    recall = recall_score(y_test[:, i], y_pred[:, i], average='weighted')
    print("\tAccuracy: %.2f\tPrecision: %.2f\t Recall: %.2f\n" % (accuracy, precision, recall))
###Output
related
Accuracy: 0.97 Precision: 1.00 Recall: 0.80
request
Accuracy: 0.94 Precision: 1.00 Recall: 0.33
offer
Accuracy: 0.97 Precision: 0.00 Recall: 0.00
aid_related
Accuracy: 1.00 Precision: 1.00 Recall: 1.00
medical_help
Accuracy: 0.97 Precision: 1.00 Recall: 0.67
medical_products
Accuracy: 0.92 Precision: 1.00 Recall: 0.67
search_and_rescue
Accuracy: 0.94 Precision: 1.00 Recall: 0.71
security
Accuracy: 0.94 Precision: 1.00 Recall: 0.60
military
Accuracy: 0.97 Precision: 0.00 Recall: 0.00
child_alone
Accuracy: 0.94 Precision: 0.00 Recall: 0.00
water
Accuracy: 1.00 Precision: 0.00 Recall: 0.00
food
Accuracy: 0.86 Precision: 1.00 Recall: 0.17
shelter
Accuracy: 0.97 Precision: 1.00 Recall: 0.67
clothing
Accuracy: 0.97 Precision: 0.00 Recall: 0.00
money
Accuracy: 0.92 Precision: 1.00 Recall: 0.40
missing_people
Accuracy: 0.97 Precision: 1.00 Recall: 0.75
refugees
Accuracy: 0.94 Precision: 0.33 Recall: 1.00
death
Accuracy: 1.00 Precision: 0.00 Recall: 0.00
other_aid
Accuracy: 0.86 Precision: 1.00 Recall: 0.44
infrastructure_related
Accuracy: 0.86 Precision: 1.00 Recall: 0.29
transport
Accuracy: 1.00 Precision: 1.00 Recall: 1.00
buildings
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
###Code
file_name = 'classifier.pkl'
with open (file_name, 'wb') as file:
pickle.dump(cv, file)
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
import pickle #pickle
from sklearn.externals import joblib
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table("InsertTableName",engine)
X = df.message.values
Y = df.iloc[:,4:].values
#
df.head()
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier())
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
for i,name in enumerate(df.iloc[:,4:].columns):
print(name)
print(classification_report(y_test[:,i], y_pred[:,i]))
###Output
related
precision recall f1-score support
0 0.57 0.47 0.52 1463
1 0.86 0.90 0.88 5091
avg / total 0.79 0.80 0.80 6554
request
precision recall f1-score support
0 0.88 0.99 0.93 5410
1 0.85 0.35 0.50 1144
avg / total 0.87 0.88 0.85 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6525
1 0.00 0.00 0.00 29
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.69 0.91 0.78 3796
1 0.77 0.43 0.55 2758
avg / total 0.72 0.71 0.68 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6037
1 0.65 0.02 0.04 517
avg / total 0.90 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6226
1 0.75 0.04 0.07 328
avg / total 0.94 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6370
1 0.50 0.01 0.02 184
avg / total 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 1.00 0.01 0.02 127
avg / total 0.98 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6325
1 0.67 0.03 0.05 229
avg / total 0.96 0.97 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.94 1.00 0.97 6129
1 0.87 0.16 0.27 425
avg / total 0.94 0.94 0.93 6554
food
precision recall f1-score support
0 0.91 1.00 0.95 5785
1 0.87 0.24 0.37 769
avg / total 0.90 0.91 0.88 6554
shelter
precision recall f1-score support
0 0.92 1.00 0.96 5954
1 0.85 0.14 0.23 600
avg / total 0.91 0.92 0.89 6554
clothing
precision recall f1-score support
0 0.98 1.00 0.99 6446
1 1.00 0.02 0.04 108
avg / total 0.98 0.98 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.80 0.03 0.05 142
avg / total 0.98 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6484
1 1.00 0.01 0.03 70
avg / total 0.99 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6338
1 0.00 0.00 0.00 216
avg / total 0.94 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6240
1 0.86 0.08 0.14 314
avg / total 0.95 0.96 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5689
1 0.50 0.02 0.04 865
avg / total 0.82 0.87 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.93 1.00 0.97 6116
1 0.50 0.01 0.02 438
avg / total 0.90 0.93 0.90 6554
transport
precision recall f1-score support
0 0.95 1.00 0.98 6245
1 0.67 0.01 0.03 309
avg / total 0.94 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.97 6212
1 0.75 0.03 0.05 342
avg / total 0.94 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6403
1 0.67 0.01 0.03 151
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6517
1 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 1.00 0.01 0.03 76
avg / total 0.99 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6523
1 0.00 0.00 0.00 31
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6474
1 0.00 0.00 0.00 80
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.95 1.00 0.98 6252
1 0.20 0.00 0.01 302
avg / total 0.92 0.95 0.93 6554
weather_related
precision recall f1-score support
0 0.81 0.97 0.88 4706
1 0.85 0.41 0.55 1848
avg / total 0.82 0.81 0.79 6554
floods
precision recall f1-score support
0 0.93 1.00 0.96 5988
1 0.85 0.16 0.27 566
avg / total 0.92 0.93 0.90 6554
storm
precision recall f1-score support
0 0.92 0.99 0.96 5942
1 0.74 0.18 0.29 612
avg / total 0.91 0.92 0.89 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6475
1 0.33 0.01 0.02 79
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.95 0.99 0.97 5973
1 0.89 0.41 0.56 581
avg / total 0.94 0.94 0.93 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6436
1 0.80 0.03 0.07 118
avg / total 0.98 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.94 1.00 0.97 6183
1 0.43 0.01 0.02 371
avg / total 0.91 0.94 0.92 6554
direct_report
precision recall f1-score support
0 0.85 0.98 0.91 5258
1 0.80 0.29 0.43 1296
avg / total 0.84 0.85 0.81 6554
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 5000,),
'tfidf__use_idf': (True, False),
'clf__n_estimators': [50, 100],
'clf__min_samples_split': [2, 3],
}
cv = GridSearchCV(pipeline, param_grid=parameters)
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
cv.fit(X_train, y_train)
y_pred = cv.predict(X_test)
cv.best_estimator_
cv.best_params_
for i,name in enumerate(df.iloc[:,4:].columns):
print(name)
print(classification_report(y_test[:,i], y_pred[:,i]))
###Output
related
precision recall f1-score support
0 0.67 0.45 0.54 1463
1 0.86 0.94 0.89 5091
avg / total 0.82 0.83 0.82 6554
request
precision recall f1-score support
0 0.90 0.99 0.94 5410
1 0.89 0.47 0.61 1144
avg / total 0.90 0.90 0.88 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6525
1 0.00 0.00 0.00 29
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.72 0.92 0.80 3796
1 0.81 0.50 0.62 2758
avg / total 0.76 0.74 0.73 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6037
1 0.50 0.02 0.03 517
avg / total 0.89 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 6226
1 0.78 0.04 0.08 328
avg / total 0.94 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6370
1 0.50 0.02 0.03 184
avg / total 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 1.00 0.01 0.02 127
avg / total 0.98 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6325
1 0.25 0.01 0.02 229
avg / total 0.94 0.96 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.96 1.00 0.98 6129
1 0.91 0.34 0.49 425
avg / total 0.95 0.95 0.94 6554
food
precision recall f1-score support
0 0.93 0.99 0.96 5785
1 0.85 0.47 0.60 769
avg / total 0.92 0.93 0.92 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 5954
1 0.86 0.24 0.38 600
avg / total 0.92 0.93 0.91 6554
clothing
precision recall f1-score support
0 0.98 1.00 0.99 6446
1 0.75 0.03 0.05 108
avg / total 0.98 0.98 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.60 0.02 0.04 142
avg / total 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6484
1 0.00 0.00 0.00 70
avg / total 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6338
1 0.00 0.00 0.00 216
avg / total 0.94 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6240
1 0.85 0.07 0.13 314
avg / total 0.95 0.95 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5689
1 0.64 0.04 0.08 865
avg / total 0.84 0.87 0.82 6554
infrastructure_related
precision recall f1-score support
0 0.93 1.00 0.97 6116
1 0.00 0.00 0.00 438
avg / total 0.87 0.93 0.90 6554
transport
precision recall f1-score support
0 0.95 1.00 0.98 6245
1 0.00 0.00 0.00 309
avg / total 0.91 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.97 6212
1 0.86 0.02 0.03 342
avg / total 0.94 0.95 0.92 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6403
1 0.50 0.01 0.01 151
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6517
1 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.00 0.00 0.00 76
avg / total 0.98 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6523
1 0.00 0.00 0.00 31
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6474
1 0.00 0.00 0.00 80
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.95 1.00 0.98 6252
1 0.00 0.00 0.00 302
avg / total 0.91 0.95 0.93 6554
weather_related
precision recall f1-score support
0 0.84 0.97 0.90 4706
1 0.88 0.53 0.66 1848
avg / total 0.85 0.85 0.83 6554
floods
precision recall f1-score support
0 0.94 1.00 0.97 5988
1 0.90 0.30 0.45 566
avg / total 0.93 0.94 0.92 6554
storm
precision recall f1-score support
0 0.93 0.99 0.96 5942
1 0.74 0.30 0.43 612
avg / total 0.91 0.93 0.91 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6475
1 0.67 0.03 0.05 79
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 5973
1 0.89 0.68 0.77 581
avg / total 0.96 0.96 0.96 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6436
1 0.33 0.01 0.02 118
avg / total 0.97 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.94 1.00 0.97 6183
1 0.40 0.01 0.01 371
avg / total 0.91 0.94 0.92 6554
direct_report
precision recall f1-score support
0 0.87 0.98 0.92 5258
1 0.86 0.38 0.52 1296
avg / total 0.86 0.86 0.84 6554
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
#add other features besides the TF-IDF
from sklearn.base import BaseEstimator, TransformerMixin
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
    def starting_verb(self, text):
        sentence_list = nltk.sent_tokenize(text)
        for sentence in sentence_list:
            pos_tags = nltk.pos_tag(tokenize(sentence))
            # guard against sentences that tokenize to nothing
            if not pos_tags:
                continue
            first_word, first_tag = pos_tags[0]
            if first_tag in ['VB', 'VBP'] or first_word == 'RT':
                return True
        return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
def build_model():
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', RandomForestClassifier())
])
parameters = {
'features__text_pipeline__vect__ngram_range': ((1, 1), ),
'features__text_pipeline__vect__max_df': (0.5, ),
'features__text_pipeline__vect__max_features': (5000,),
'features__text_pipeline__tfidf__use_idf': (False,),
'clf__n_estimators': [100,],
'clf__min_samples_split': [2,],
'features__transformer_weights': (
{'text_pipeline': 1, 'starting_verb': 0.5},)
}
cv = GridSearchCV(pipeline, param_grid=parameters)
return cv
model = build_model()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
for i,name in enumerate(df.iloc[:,4:].columns):
print(name)
print(classification_report(y_test[:,i], y_pred[:,i]))
###Output
related
precision recall f1-score support
0 0.69 0.41 0.52 1463
1 0.85 0.95 0.90 5091
avg / total 0.81 0.83 0.81 6554
request
precision recall f1-score support
0 0.90 0.99 0.94 5410
1 0.89 0.46 0.61 1144
avg / total 0.90 0.90 0.88 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6525
1 0.00 0.00 0.00 29
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.72 0.91 0.81 3796
1 0.81 0.52 0.63 2758
avg / total 0.76 0.75 0.73 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6037
1 0.64 0.01 0.03 517
avg / total 0.90 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.97 6226
1 0.79 0.03 0.06 328
avg / total 0.94 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6370
1 0.86 0.03 0.06 184
avg / total 0.97 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 1.00 0.01 0.02 127
avg / total 0.98 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6325
1 0.40 0.01 0.02 229
avg / total 0.95 0.96 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.95 1.00 0.97 6129
1 0.95 0.25 0.39 425
avg / total 0.95 0.95 0.94 6554
food
precision recall f1-score support
0 0.92 0.99 0.96 5785
1 0.86 0.39 0.54 769
avg / total 0.92 0.92 0.91 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 5954
1 0.91 0.20 0.33 600
avg / total 0.92 0.93 0.90 6554
clothing
precision recall f1-score support
0 0.98 1.00 0.99 6446
1 0.75 0.03 0.05 108
avg / total 0.98 0.98 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.60 0.02 0.04 142
avg / total 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6484
1 1.00 0.01 0.03 70
avg / total 0.99 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6338
1 1.00 0.00 0.01 216
avg / total 0.97 0.97 0.95 6554
death
precision recall f1-score support
0 0.95 1.00 0.98 6240
1 0.78 0.04 0.08 314
avg / total 0.95 0.95 0.93 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5689
1 0.63 0.03 0.06 865
avg / total 0.84 0.87 0.81 6554
infrastructure_related
precision recall f1-score support
0 0.93 1.00 0.97 6116
1 0.25 0.00 0.00 438
avg / total 0.89 0.93 0.90 6554
transport
precision recall f1-score support
0 0.95 1.00 0.98 6245
1 0.25 0.00 0.01 309
avg / total 0.92 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.97 6212
1 1.00 0.02 0.05 342
avg / total 0.95 0.95 0.93 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6403
1 0.50 0.01 0.01 151
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6517
1 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.00 0.00 0.00 76
avg / total 0.98 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6523
1 0.00 0.00 0.00 31
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6474
1 0.00 0.00 0.00 80
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.95 1.00 0.98 6252
1 0.00 0.00 0.00 302
avg / total 0.91 0.95 0.93 6554
weather_related
precision recall f1-score support
0 0.84 0.97 0.90 4706
1 0.88 0.52 0.65 1848
avg / total 0.85 0.84 0.83 6554
floods
precision recall f1-score support
0 0.94 1.00 0.97 5988
1 0.89 0.30 0.45 566
avg / total 0.93 0.94 0.92 6554
storm
precision recall f1-score support
0 0.93 0.99 0.96 5942
1 0.76 0.31 0.44 612
avg / total 0.92 0.93 0.91 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6475
1 0.00 0.00 0.00 79
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.96 0.99 0.98 5973
1 0.88 0.62 0.73 581
avg / total 0.96 0.96 0.96 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6436
1 0.50 0.01 0.02 118
avg / total 0.97 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.94 1.00 0.97 6183
1 0.43 0.01 0.02 371
avg / total 0.91 0.94 0.92 6554
direct_report
precision recall f1-score support
0 0.87 0.99 0.92 5258
1 0.88 0.38 0.53 1296
avg / total 0.87 0.87 0.84 6554
###Markdown
9. Export your model as a pickle file
###Code
# save the model as pickle file
with open('cv.pickle', 'wb') as f:
pickle.dump(cv, f)
#load the pickle file
model = joblib.load("cv.pickle")
#test the loaded model
y_pred = model.predict(X_test)
for i,name in enumerate(df.iloc[:,4:].columns):
print(name)
print(classification_report(y_test[:,i], y_pred[:,i]))
###Output
related
precision recall f1-score support
0 0.67 0.45 0.54 1463
1 0.86 0.94 0.89 5091
avg / total 0.82 0.83 0.82 6554
request
precision recall f1-score support
0 0.90 0.99 0.94 5410
1 0.89 0.47 0.61 1144
avg / total 0.90 0.90 0.88 6554
offer
precision recall f1-score support
0 1.00 1.00 1.00 6525
1 0.00 0.00 0.00 29
avg / total 0.99 1.00 0.99 6554
aid_related
precision recall f1-score support
0 0.72 0.92 0.80 3796
1 0.81 0.50 0.62 2758
avg / total 0.76 0.74 0.73 6554
medical_help
precision recall f1-score support
0 0.92 1.00 0.96 6037
1 0.50 0.02 0.03 517
avg / total 0.89 0.92 0.89 6554
medical_products
precision recall f1-score support
0 0.95 1.00 0.98 6226
1 0.78 0.04 0.08 328
avg / total 0.94 0.95 0.93 6554
search_and_rescue
precision recall f1-score support
0 0.97 1.00 0.99 6370
1 0.50 0.02 0.03 184
avg / total 0.96 0.97 0.96 6554
security
precision recall f1-score support
0 0.98 1.00 0.99 6427
1 1.00 0.01 0.02 127
avg / total 0.98 0.98 0.97 6554
military
precision recall f1-score support
0 0.97 1.00 0.98 6325
1 0.25 0.01 0.02 229
avg / total 0.94 0.96 0.95 6554
child_alone
precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water
precision recall f1-score support
0 0.96 1.00 0.98 6129
1 0.91 0.34 0.49 425
avg / total 0.95 0.95 0.94 6554
food
precision recall f1-score support
0 0.93 0.99 0.96 5785
1 0.85 0.47 0.60 769
avg / total 0.92 0.93 0.92 6554
shelter
precision recall f1-score support
0 0.93 1.00 0.96 5954
1 0.86 0.24 0.38 600
avg / total 0.92 0.93 0.91 6554
clothing
precision recall f1-score support
0 0.98 1.00 0.99 6446
1 0.75 0.03 0.05 108
avg / total 0.98 0.98 0.98 6554
money
precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.60 0.02 0.04 142
avg / total 0.97 0.98 0.97 6554
missing_people
precision recall f1-score support
0 0.99 1.00 0.99 6484
1 0.00 0.00 0.00 70
avg / total 0.98 0.99 0.98 6554
refugees
precision recall f1-score support
0 0.97 1.00 0.98 6338
1 0.00 0.00 0.00 216
avg / total 0.94 0.97 0.95 6554
death
precision recall f1-score support
0 0.96 1.00 0.98 6240
1 0.85 0.07 0.13 314
avg / total 0.95 0.95 0.94 6554
other_aid
precision recall f1-score support
0 0.87 1.00 0.93 5689
1 0.64 0.04 0.08 865
avg / total 0.84 0.87 0.82 6554
infrastructure_related
precision recall f1-score support
0 0.93 1.00 0.97 6116
1 0.00 0.00 0.00 438
avg / total 0.87 0.93 0.90 6554
transport
precision recall f1-score support
0 0.95 1.00 0.98 6245
1 0.00 0.00 0.00 309
avg / total 0.91 0.95 0.93 6554
buildings
precision recall f1-score support
0 0.95 1.00 0.97 6212
1 0.86 0.02 0.03 342
avg / total 0.94 0.95 0.92 6554
electricity
precision recall f1-score support
0 0.98 1.00 0.99 6403
1 0.50 0.01 0.01 151
avg / total 0.97 0.98 0.97 6554
tools
precision recall f1-score support
0 0.99 1.00 1.00 6517
1 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6554
hospitals
precision recall f1-score support
0 0.99 1.00 0.99 6478
1 0.00 0.00 0.00 76
avg / total 0.98 0.99 0.98 6554
shops
precision recall f1-score support
0 1.00 1.00 1.00 6523
1 0.00 0.00 0.00 31
avg / total 0.99 1.00 0.99 6554
aid_centers
precision recall f1-score support
0 0.99 1.00 0.99 6474
1 0.00 0.00 0.00 80
avg / total 0.98 0.99 0.98 6554
other_infrastructure
precision recall f1-score support
0 0.95 1.00 0.98 6252
1 0.00 0.00 0.00 302
avg / total 0.91 0.95 0.93 6554
weather_related
precision recall f1-score support
0 0.84 0.97 0.90 4706
1 0.88 0.53 0.66 1848
avg / total 0.85 0.85 0.83 6554
floods
precision recall f1-score support
0 0.94 1.00 0.97 5988
1 0.90 0.30 0.45 566
avg / total 0.93 0.94 0.92 6554
storm
precision recall f1-score support
0 0.93 0.99 0.96 5942
1 0.74 0.30 0.43 612
avg / total 0.91 0.93 0.91 6554
fire
precision recall f1-score support
0 0.99 1.00 0.99 6475
1 0.67 0.03 0.05 79
avg / total 0.98 0.99 0.98 6554
earthquake
precision recall f1-score support
0 0.97 0.99 0.98 5973
1 0.89 0.68 0.77 581
avg / total 0.96 0.96 0.96 6554
cold
precision recall f1-score support
0 0.98 1.00 0.99 6436
1 0.33 0.01 0.02 118
avg / total 0.97 0.98 0.97 6554
other_weather
precision recall f1-score support
0 0.94 1.00 0.97 6183
1 0.40 0.01 0.01 371
avg / total 0.91 0.94 0.92 6554
direct_report
precision recall f1-score support
0 0.87 0.98 0.92 5258
1 0.86 0.38 0.52 1296
avg / total 0.86 0.86 0.84 6554
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
# import libraries
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.metrics import classification_report
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.pipeline import FeatureUnion, Pipeline
import os
os.path.abspath(os.getcwd())
###Output
_____no_output_____
###Markdown
Loading Up DATABASE 'disaster_response' Prepared from ETL Stage
###Code
# load data from database
#def load_data(data_file)
def load_data():
engine = create_engine('sqlite:///disaster_response.db')
conn = engine.connect()
df = pd.read_sql_table('disaster_response', conn)
#df.head(2)
X = df.message.values
y = df.iloc[:, 4:]
##y.dropna(axis = 0, how = 'any', inplace=True)
##y.fillna(0, inplace=True)
colnames = y.columns
#y = y.values
engine.dispose()
return X, y, colnames
###Output
_____no_output_____
###Markdown
Starting Verb Extractor
The starting-verb extractor function creates a few NaN values, which is why it is not used for now.
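Whatever the root cause, a more defensive variant that skips sentences which tokenize to nothing is a reasonable safeguard; a minimal sketch is shown below (the function name is illustrative, and it mirrors the guard used in a later notebook in this document).
###Code
# Sketch only: a defensive starting_verb that skips sentences with no POS tags
# instead of indexing an empty list.
import nltk
from nltk.tokenize import word_tokenize

def starting_verb_safe(text):
    for sentence in nltk.sent_tokenize(text):
        pos_tags = nltk.pos_tag(word_tokenize(sentence))
        if not pos_tags:          # nothing to inspect in this sentence
            continue
        first_word, first_tag = pos_tags[0]
        if first_tag in ['VB', 'VBP'] or first_word == 'RT':
            return 1
    return 0
###Output
_____no_output_____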
###Code
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
# tokenize by sentences
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
# tokenize each sentence into words and tag part of speech
pos_tags = nltk.pos_tag(word_tokenize(sentence))
# index pos_tags to get the first word and part of speech tag
first_word, first_tag = pos_tags[0][0], pos_tags[0][1]
# return true if the first word is an appropriate verb or RT for retweet
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return 1
return 0
def fit(self, x, y=None):
return self
def transform(self, X):
# apply starting_verb function to all values in X
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
tokens = word_tokenize(text)
tokens_wihtout_sw = [w for w in tokens if w not in stopwords.words("english") ]
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens_wihtout_sw:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
**Note:** a FeatureUnion would allow parallel feature-extraction branches; it was removed from the pipeline below, but a hedged sketch of what it could look like follows.
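For illustration only (this is not the pipeline actually trained below), a minimal sketch of such a FeatureUnion, assuming the `tokenize` function and `StartingVerbExtractor` defined above; the function name is illustrative:
###Code
# Sketch: TF-IDF text features combined with the starting-verb flag via FeatureUnion.
# Assumes tokenize() and StartingVerbExtractor from the cells above.
def model_pipeline_with_features():
    return Pipeline([
        ('features', FeatureUnion([
            ('text_pipeline', Pipeline([
                ('vect', CountVectorizer(tokenizer=tokenize)),
                ('tfidf', TfidfTransformer())
            ])),
            ('starting_verb', StartingVerbExtractor())
        ])),
        ('mclf', MultiOutputClassifier(RandomForestClassifier()))
    ])
###Output
_____no_output_____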
###Code
def model_pipeline():
pipeline = Pipeline([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('mclf', MultiOutputClassifier(RandomForestClassifier()))
# ('clf', RandomForestClassifier())
])
return pipeline
###Output
_____no_output_____
###Markdown
`model_pipeline_with_sw` is the variant to use when the starting-verb extractor is enabled.
3.2 With GridSearch
###Code
def model_pipeline_with_GS():
pipeline = Pipeline([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('mclf', MultiOutputClassifier(RandomForestClassifier()))
# ('clf', RandomForestClassifier())
])
parameters = {
#'text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
#'text_pipeline__vect__max_df': (0.5, 0.75),
#'text_pipeline__vect__max_features': (None, 7000),
#'text_pipeline__tfidf__use_idf': (True, False),
#'mclf__estimator__max_depth': [2, 3],
#'mclf__estimator__min_samples_split': [2, 3],
'mclf__estimator__n_estimators': [50, 70],
#'mclf__estimator__max_leaf_nodes' : [3,4]
}
#cv = RandomizedSearchCV(pipeline, param_distributions=parameters)
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs = 4, cv = 2, verbose = 3)
#print(cv.best_params_)
return cv
m = MultiOutputClassifier(LogisticRegression())
m.get_params().keys()
###Output
_____no_output_____
###Markdown
3.3 With Multiple Models
###Code
from sklearn.base import BaseEstimator
class ClfSwitcher(BaseEstimator):
def __init__(
self,
estimator = MultinomialNB(),
):
# """
# A Custom BaseEstimator that can switch between classifiers.
# :param estimator: sklearn object - The classifier
# """
#return self.estimator
self.estimator = estimator
def fit(self, X, y): #pass
return self.estimator.fit(X,y)
#pass
def predict(self, X, y=None):
return self.estimator.predict(X)
def score(self, X, y): #pass
return self.estimator.score(X, y)
#pass
def model_pipeline_with_Multiple():
pipeline = Pipeline([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('clf', ClfSwitcher())
])
parameters = [
{
'clf__estimator': [MultiOutputClassifier(DecisionTreeClassifier())], # SVM if hinge loss / logreg if log loss
#'tfidf__max_df': (0.25, 0.5),
#'tfidf__stop_words': ['english', None],
#'clf__estimator__penalty': ('l2', 'elasticnet', 'l1'),
#'clf__estimator__max_iter': [50, 80],
#'clf__estimator__tol': [1e-4],
#'clf__estimator__loss': ['hinge', 'log', 'modified_huber'],
},
#{
# 'clf__estimator': [RandomForestClassifier()], # SVM if hinge loss / logreg if log loss
#'tfidf__max_df': (0.25, 0.5),
#'tfidf__stop_words': ['english', None],
#'clf__estimator__penalty': ('l2', 'elasticnet', 'l1'),
#'clf__estimator__max_iter': [50, 80],
#'clf__estimator__tol': [1e-4],
#'clf__estimator__loss': ['hinge', 'log', 'modified_huber'],
#},
#{
# 'clf__estimator': [MultiOutputClassifier(LogisticRegression())],
#'text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
#'text_pipeline__vect__max_df': (0.5, 0.75),
#'text_pipeline__vect__max_features': (None, 7000),
#'text_pipeline__tfidf__use_idf': (True, False),
#'clf__estimator__max_depth': [2, 3],
#'clf__estimator__min_samples_split': [2, 3],
#'clf__estimator__n_estimators': [50, 70],
#'clf__estimator__max_leaf_nodes' : [3,4]
#},
{
'clf__estimator': [MultiOutputClassifier(MLPClassifier())],
#'text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
#'text_pipeline__vect__max_df': (0.5, 0.75),
#'text_pipeline__vect__max_features': (None, 7000),
#'text_pipeline__tfidf__use_idf': (True, False),
#'clf__estimator__max_depth': [2, 3],
#'clf__estimator__min_samples_split': [2, 3],
#'clf__estimator__n_estimators': [50, 70],
#'clf__estimator__max_leaf_nodes' : [3,4]
},
]
gscv = GridSearchCV(pipeline, parameters, cv=2, n_jobs=-1, return_train_score=False, verbose=3)
#gscv.fit(train_data, train_labels)
#print(gscv.best_params_)
return gscv
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
#def display_results(y_test, y_pred):
# labels = np.unique(y_pred)
# confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
# accuracy = (y_pred == y_test).mean()
# print("Labels:", labels)
# print("Confusion Matrix:\n", confusion_mat)
# print("Accuracy:", accuracy)
def train(model_specified):
X, y, column_names = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state = 42, shuffle = True)
model = model_specified
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
#display_results(y_test, y_pred)
return y_test, y_pred, column_names
###Output
_____no_output_____
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_test, y_pred, column_names = train(model_pipeline())
target_dataframe = pd.DataFrame(y_pred, columns = column_names)
for i, value in enumerate(target_dataframe):
print("Model Confusion Matrix for the", value, "are below")
print(classification_report(y_test.iloc[:,i], target_dataframe.iloc[:,i] ))
###Output
_____no_output_____
###Markdown
6. Improve your model
Use grid search to find better parameters. The parameter grid used earlier is reused here to estimate how long the search would take.
###Code
parameters = {
#'text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'text_pipeline__vect__max_df': (0.5, 0.75),
#'text_pipeline__vect__max_features': (None, 7000),
#'text_pipeline__tfidf__use_idf': (True, False),
#'mclf__estimator__max_depth': [2, 3],
#'mclf__estimator__min_samples_split': [2, 3],
'mclf__estimator__n_estimators': [50, 70],
#'mclf__estimator__max_leaf_nodes' : [3,4]
}
#cv =
###Output
_____no_output_____
###Markdown
How many Combinations are there?
###Code
com = 1
for x in parameters.values():
com *= len(x)
print('There are {} combinations'.format(com))
###Output
_____no_output_____
###Markdown
Assuming each parameter combination takes about 100 seconds to fit, the total time would be
###Code
print('This would take {:.0f} minutes to finish.'.format((100 * com) / (60)))
###Output
_____no_output_____
###Markdown
Predicting with GridSearch
###Code
y_test_gs, y_pred_gs, column_names = train(model_pipeline_with_GS())
###Output
Fitting 2 folds for each of 2 candidates, totalling 4 fits
[CV] mclf__estimator__n_estimators=50 ................................
[CV] mclf__estimator__n_estimators=50 ................................
[CV] mclf__estimator__n_estimators=70 ................................
[CV] mclf__estimator__n_estimators=70 ................................
[CV] mclf__estimator__n_estimators=50, score=0.15787178368948976, total=18.0min
[CV] mclf__estimator__n_estimators=50, score=0.1529655473179241, total=18.2min
[CV] mclf__estimator__n_estimators=70, score=0.1569995638901003, total=22.5min
[CV] mclf__estimator__n_estimators=70, score=0.1537287396423899, total=22.7min
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
target_dataframe_2 = pd.DataFrame(y_pred_gs, columns = column_names)
for i, value in enumerate(target_dataframe_2):
print("Model Confusion Matrix for the", value, "are below")
print(classification_report(y_test_gs.iloc[:,i], target_dataframe_2.iloc[:,i] ))
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF (see the sketch below)
***The multiple-models approach is left aside for now because it takes too much time.***
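As one concrete illustration of an extra feature besides the TF-IDF (a sketch only, not used in this notebook), a transformer emitting the message length could be added as another FeatureUnion branch:
###Code
# Sketch: a custom transformer producing one numeric feature (message length in
# characters); it could be plugged into a FeatureUnion next to the text pipeline.
class TextLengthExtractor(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return pd.DataFrame(pd.Series(X).str.len())
###Output
_____no_output_____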
###Code
#y_test_n, y_pred_n, column_names = train(model_pipeline_with_Multiple())
#target_dataframe_3 = pd.DataFrame(y_pred_n, columns = column_names)
#for i, value in enumerate(target_dataframe_2):
# print("Model Confusion Matrix for the", value, "are below")
# print(classification_report(y_test_n.iloc[:,i], target_dataframe_3.iloc[:,i] ))
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
## To save the best parameter only, use the following line of code
##pickle.dump(model_pipeline_with_GS().best_estimator_, open('/home/workspace/MLClassifier', 'wb') )
import pickle
pickle.dump(model_pipeline_with_GS(), open('/home/workspace/MLClassifier', 'wb') )
###Output
_____no_output_____
###Markdown
Testing Implementation of Pickle File
###Code
loaded_model_GS = pickle.load(open('/home/workspace/MLClassifier', 'rb'))
loaded_model_GS
###Output
_____no_output_____
###Markdown
10. Use this notebook to complete `train.py`
Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
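A minimal skeleton of what the `train_model()` used below might look like (a sketch under assumptions: it reuses `model_pipeline_with_GS` and `train` from this notebook, and the pickle path is illustrative):
###Code
# Sketch of a train.py-style driver; reuses the helpers defined in the cells above.
import pickle

def train_model(model_path='MLClassifier.pkl'):
    model = model_pipeline_with_GS()               # grid-searched pipeline
    y_test, y_pred, column_names = train(model)    # fit on train split, predict on test
    for i, name in enumerate(column_names):        # per-category report
        print(name)
        print(classification_report(y_test.iloc[:, i], y_pred[:, i]))
    with open(model_path, 'wb') as f:              # persist the fitted model
        pickle.dump(model, f)
    return model
###Output
_____no_output_____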
###Code
train_model()
###Output
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Fitting 2 folds for each of 2 candidates, totalling 4 fits
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# Print scikit-learn version
import sklearn
print('sklearn: %s' % sklearn.__version__)
# import libraries
import pandas as pd
import numpy as np
import os
import pickle
import bz2
from sqlalchemy import create_engine
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk import pos_tag
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier,AdaBoostClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score, f1_score, fbeta_score, classification_report
from scipy.stats import hmean
from scipy.stats.mstats import gmean
import time
import datetime
# import warnings filter
from warnings import simplefilter
# ignore all future warnings
simplefilter(action='ignore', category=FutureWarning)
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
# load data from database
engine = create_engine('sqlite:///data/DisasterResponse.db')
df = pd.read_sql_table('messages_categories',engine)
X = df['message']
Y = df.iloc[:,3:-2]
# Print category columns
category_cols = df.columns[3:-2].tolist()
print(category_cols)
###Output
['related', 'request', 'offer', 'aid_related', 'medical_help', 'medical_products', 'search_and_rescue', 'security', 'military', 'water', 'food', 'shelter', 'clothing', 'money', 'missing_people', 'refugees', 'death', 'other_aid', 'infrastructure_related', 'transport', 'buildings', 'electricity', 'tools', 'hospitals', 'shops', 'aid_centers', 'other_infrastructure', 'weather_related', 'floods', 'storm', 'fire', 'earthquake', 'cold', 'other_weather', 'direct_report']
###Markdown
2. Write a tokenization function to process your text data
###Code
# Get stop words in 'English' language
stop_words = stopwords.words("english")
# Print length of stop words in English language
print('Length of stop words in English language is {}'.format(len(stop_words)))
# Print stop words in English language
print(stop_words)
# Check if any message contain URL link
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
url_count = pd.Series([])
url_count = X.apply(lambda message: len(re.findall(url_regex, message)))
print(type(url_count))
url_count.value_counts().sort_index()
###Output
<class 'pandas.core.series.Series'>
###Markdown
From the above, we can observe that most of the messages do not contain URL links, but a few do. Only one observation contains 5 URL links.
###Code
# Define function tokenize to normalize, tokenize and lemmatize text string
def tokenize(text):
"""Normalize, tokenize and lemmatize text string
Args:
text: string, String containing message for processing
Returns:
clean_tokens: list, List containing normalized and lemmatized word tokens
"""
# Replace URL links in text string with string 'urlplaceholder'
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# Substitute characters in text string which match regular expression r'[^a-zA-Z0-9]'
# with single whitespace
text = re.sub(r'[^a-zA-Z0-9]', ' ', text)
# Get word tokens from text string
tokens = word_tokenize(text)
# Instantiate WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
# Clean tokens
clean_tokens = []
for tok in tokens:
# convert token to lowercase as stop words are in lowercase
tok_low = tok.lower()
if tok_low not in stop_words:
# Lemmatize token and remove the leading and trailing spaces from lemmatized token
clean_tok = lemmatizer.lemmatize(tok_low).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
# Print first 5 messages and their respective tokens
for idx in X.index.tolist()[0:5]:
print(X.loc[idx])
print('-'*100)
print(tokenize(X.loc[idx]))
print('*'*100)
# Print message with url count of 5
print('Index of message with 5 URL links is {}'.format(url_count[url_count == 5].index[0]))
X[url_count[url_count == 5].index[0]]
# Print message with index location 12598 and its respective tokens
# Index location 12598 contain message with 5 URL links
print(X.loc[12598])
print('-'*100)
print(tokenize(X.loc[12598]))
###Output
Hurricane Sandy Flight Cancellations: Thousands Of Flights Canceled Due. http://t.co/DMo0tbQE Most read by neighbors in #Roseville #Newarkhappy halloween 2012 (@Frankenstorm Apocalypse - Hurricane Sandy w/ 213 others) http://t.co/DTw9W3kKThe protective cover for the Enterprise failed last night. #NYC #sandy @Space Shuttle Enterprise http://t.co/5jexF6ZG@StormTeam8 @CTPostTrumbull Shelbourne rd Trumbull right next door. Never lost power http://t.co/NqMyZ1H8RT @nytimes: The New York Times is providing free, unlimited access to storm coverage on http://t.co/HkHYUWhW and its mobile apps today.
----------------------------------------------------------------------------------------------------
['hurricane', 'sandy', 'flight', 'cancellation', 'thousand', 'flight', 'canceled', 'due', 'urlplaceholder', 'read', 'neighbor', 'roseville', 'newarkhappy', 'halloween', '2012', 'frankenstorm', 'apocalypse', 'hurricane', 'sandy', 'w', '213', 'others', 'urlplaceholder', 'protective', 'cover', 'enterprise', 'failed', 'last', 'night', 'nyc', 'sandy', 'space', 'shuttle', 'enterprise', 'urlplaceholder', 'ctposttrumbull', 'shelbourne', 'rd', 'trumbull', 'right', 'next', 'door', 'never', 'lost', 'power', 'urlplaceholder', 'nytimes', 'new', 'york', 'time', 'providing', 'free', 'unlimited', 'access', 'storm', 'coverage', 'urlplaceholder', 'mobile', 'apps', 'today']
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# Create a basic pipeline
pipeline_basic = Pipeline([('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
# Split the data into train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)
#start_time = time.time()
start_datetime = datetime.datetime.now().replace(microsecond=0)
# Train basic pipeline
pipeline_basic.fit(X_train, Y_train)
#print("--- %s seconds ---" % (time.time() - start_time))
print("--- Training time: %s ---" % (datetime.datetime.now().replace(microsecond=0) - start_datetime))
###Output
--- Training time: 0:01:01 ---
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# Predict categories from test set
start_datetime = datetime.datetime.now().replace(microsecond=0)
Y_pred_basic = pipeline_basic.predict(X_test)
print("--- Predicting time: %s ---" % (datetime.datetime.now().replace(microsecond=0) - start_datetime))
# Print type and shape of Y_test and Y_pred
print('Y_test has type: {} and its shape is: {}'.format(type(Y_test), Y_test.shape))
print('Y_pred_basic has type: {} and its shape is: {}'.format(type(Y_pred_basic), Y_pred_basic.shape))
# Print first 5 rows of Y_test dataframe
Y_test.head()
# Print first 5 rows in Y_pred ndarray
Y_pred_basic[0:5]
# Print accuracy of basic pipeline for each of individual category
accuracy_basic = (Y_pred_basic == Y_test).mean()
accuracy_basic
# Print overall accuracy of basic pipeline
overall_accuracy_basic = (Y_pred_basic == Y_test).mean().mean()
print('Overall accuracy of basic pipeline is: {}%'.format(round(overall_accuracy_basic*100, 2)))
# Define function to calculate the multi-label f-score
def multi_label_fscore(y_true, y_pred, beta=1):
"""Calculate individual weighted average fbeta score of each category and
geometric mean of weighted average fbeta score of each category
Args:
y_true: dataframe, dataframe containing true labels i.e. Y_test
y_pred: ndarray, ndarray containing predicted labels i.e. Y_pred
beta: numeric, beta value
Returns:
f_score_gmean: float, geometric mean of fbeta score for each category
"""
b = beta
f_score_dict = {}
score_list = []
# Create dataframe y_pred_df from ndarray y_pred
y_pred_df = pd.DataFrame(y_pred, columns=y_true.columns)
for column in y_true.columns:
score = round(fbeta_score(y_true[column], y_pred_df[column], beta, average='weighted'),4)
score_list.append(score)
f_score_dict['category'] = y_true.columns.tolist()
f_score_dict['f_score'] = score_list
f_score_df = pd.DataFrame.from_dict(f_score_dict)
# print(f_score_df)
f_score_gmean = gmean(f_score_df['f_score'])
return f_score_gmean
# Print overall f_score of basic pipeline
multi_f_gmean_basic = multi_label_fscore(Y_test,Y_pred_basic, beta = 1)
print('Overall F_beta_score for basic pipeline is: {0:.2f}%'.format(multi_f_gmean_basic*100))
# Report the basic pipeline f1 score, precision and recall for each output category of the dataset
# by iterating through the columns and calling sklearn's classification_report on each column
for column in Y_test.columns:
print('------------------------------------------------------\n')
print('CATEGORY: {}\n'.format(column))
print(classification_report(Y_test[column],pd.DataFrame(Y_pred_basic, columns=Y_test.columns)[column]))
# Create dict for classification report containg metrics for each of the label for basic pipeline
clf_report_dict_basic = {}
for column in Y_test.columns:
clf_report_dict_basic[column] = classification_report(Y_test[column],\
pd.DataFrame(Y_pred_basic, columns=Y_test.columns)[column],\
output_dict=True)
clf_report_dict_basic
# Define function to create dataframe containing only weighted avg(precision, recall & f1-score) for each of the label
# from the classification report dict
def weighted_avg_metric(clf_report_dict):
"""Create dataframe containing only weighted avg(precision, recall & f1-score) for each of the label
from the classification report dict
Args:
classification report: dict, dict containing classification report for each of the label
Returns:
weighted avg metrics: dataframe, dataframe containing weighted avg metrics(precision, recall & f1-score)
for each of the label
"""
clf_idx = []
clf_metric_precision = []
clf_metric_recall = []
clf_metric_f1_score = []
metric_dict = {}
for key,value in clf_report_dict.items():
clf_idx.append(key)
clf_metric_precision.append(value['weighted avg']['precision'])
clf_metric_recall.append(value['weighted avg']['recall'])
clf_metric_f1_score.append(value['weighted avg']['f1-score'])
metric_dict['precision'] = clf_metric_precision
metric_dict['recall'] = clf_metric_recall
metric_dict['f1_score'] = clf_metric_f1_score
clf_report_df = pd.DataFrame(metric_dict, index=clf_idx)
return clf_report_df
# Calculate weighted avg metric for basic pipeline and concatenate accuracy calculated above for each
# of the label to form a new dataframe
final_metric_basic = pd.concat([weighted_avg_metric(clf_report_dict_basic), accuracy_basic], axis=1)
#print(final_metric_basic.columns)
final_metric_basic.rename(columns={0:'accuracy'}, inplace=True)
final_metric_basic
# Print overall weighted avg accuracy for basic pipeline
gmean(final_metric_basic['accuracy'])
# Print overall weighted avg f1_score for basic pipeline
gmean(final_metric_basic['f1_score'])
###Output
_____no_output_____
###Markdown
6. Improve your model
Use grid search to find better parameters.
6.1 Add custom Estimator
###Code
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
# print('*'*100)
# print(sentence_list)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
# print(pos_tags)
# print(type(pos_tags))
if pos_tags:
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return float(True)
return float(False)
def fit(self, X, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
type(X_train[:5])
type(pd.Series(X_train[:5]))
for idx in X_train.index[0:5]:
print(idx, X_train[idx])
SVE = StartingVerbExtractor()
#SVE.fit(X_train[:5]).tranform(X_train)
dir(SVE)
SVE = StartingVerbExtractor()
SVE.starting_verb("Petit Goave #1 needs food. Where can we sleep. Please, we're asking not take years because we can't survive?")
###Output
_____no_output_____
###Markdown
6.2 Improve pipeline
###Code
# Create a new improved pipeline
pipeline_new = Pipeline([('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())])),
('starting_verb', StartingVerbExtractor())])),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
6.3 Specify parameters for grid search
###Code
# Specify parameters for grid search
parameters = {
'features__text_pipeline__vect__ngram_range': [(1,2)],
'features__text_pipeline__vect__max_df': [0.75],
'features__text_pipeline__vect__max_features': [5000],
'features__text_pipeline__tfidf__use_idf': [True],
# 'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
# 'features__text_pipeline__vect__max_df': (0.5, 0.75, 1.0),
# 'features__text_pipeline__vect__max_features': (None, 5000, 10000),
# 'features__text_pipeline__tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [200],
'clf__estimator__min_samples_split': [4],
# 'clf__estimator__n_estimators': [50,100,200],
# 'clf__estimator__min_samples_split': [2,3,4],
'features__transformer_weights': (
{'text_pipeline': 1, 'starting_verb': 0.5},
# {'text_pipeline': 0.5, 'starting_verb': 1},
# {'text_pipeline': 0.8, 'starting_verb': 1},
)
}
###Output
_____no_output_____
###Markdown
6.4 Define custom scorer
###Code
# Specify custom scorer
scorer = make_scorer(multi_label_fscore,greater_is_better = True)
###Output
_____no_output_____
###Markdown
6.5 Create grid search object
###Code
# create grid search object
cv = GridSearchCV(pipeline_new, param_grid=parameters, scoring=scorer,verbose = 2)
# Print grid search CV object params
from pprint import pprint
import json
#data = json.dumps(my_dict, indent=1)
#data=pipeline_new.get_params().keys()
pprint(pipeline_new.get_params())
#type(pipeline_new.get_params())
# Fit GridSearchCV object to training set
cv.fit(X_train, Y_train)
###Output
Fitting 3 folds for each of 1 candidates, totalling 3 fits
[CV] clf__estimator__min_samples_split=4, clf__estimator__n_estimators=200, features__text_pipeline__tfidf__use_idf=True, features__text_pipeline__vect__max_df=0.75, features__text_pipeline__vect__max_features=5000, features__text_pipeline__vect__ngram_range=(1, 2), features__transformer_weights={'text_pipeline': 1, 'starting_verb': 0.5}
###Markdown
**After trying various combinations of the parameters defined in the parameter grid, we obtain the best estimator according to the scorer on the training set; the details of the best estimator and its parameters are shown below.**
###Code
# Print best estimator for GridSearchCV object
cv.best_estimator_
# Print best estimator's all parameters
pprint(cv.best_estimator_.get_params())
# Print the score(as per custom scorer) on the training set after the best estimator selected has been refit
cv.best_score_
# Print best estimator parameters selected from the parameter's grid with their values
cv.best_params_
# Print scorer used to select the best parameters for the GridSearchCV object
cv.scorer_
# Print the number of cross-validation splits (folds/iterations) used while fitting the training set
cv.n_splits_
###Output
_____no_output_____
###Markdown
Table of Contents
1 ML Pipeline Preparation
1.1 Import libraries and load data from database.
1.2 Write a tokenization function to process your text data
1.3 Train and test your model
1.3.1 RandomForest Classifier
1.3.2 GridSearchCV with RandomForestClassifier
1.3.3 High scores on fitting data, but let's take a closer look
1.4 XGBoost
1.5 GridSearch with XGBoost
1.6 Try improving your model further. Here are a few ideas:
1.6.1 5. Export your model as a pickle file
1.6.2 10. Use this notebook to complete train.py
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# filter warning message
import warnings
warnings.filterwarnings('ignore')
# import libraries
import time
import pandas as pd
from sqlalchemy import create_engine
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
# load data from database
engine = create_engine('sqlite:///data/DisasterResponse.db')
df = pd.read_sql("SELECT * FROM message_response", con=engine)
df.isnull().any().sum()
# convert category column data type from int64 to int8
for column in df.columns[2:]:
df[column] = df[column].astype('int8')
# and message from object to string
df['message'] = df['message'].astype(str)
# split features and labels
X = df['message'].values
Y = df.drop(['message', 'genre'], axis=1).copy().values
X.shape
# with 36 categories
Y.shape
###Output
_____no_output_____
###Markdown
Write a tokenization function to process your text data
###Code
# import libraries for NLTK
import re
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
nltk.download('stopwords')
# stopwords.words('english')
# libraries for sklearn
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, KFold
def tokenize(text):
'''Process text to lower case, remove stopwords, and lemmatize.
Input: A line of text
Return: a list of words (tokens)
'''
text = re.sub(r'[^a-zA-Z0-9]', ' ', text)
tokens = word_tokenize(text)
tokens = [w for w in tokens if w not in stopwords.words('english')]
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
Train and test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
# print out scores with `classification_report`
# three types of scores:
# 1. precision -- portion of correctly classified items out of all items predicted for the class
# 2. recall -- portion of correctly classified items out of all items that actually belong to the class
# 3. f1-score -- weighted balance of recall and precision
# more on these: https://en.wikipedia.org/wiki/F-score
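# A tiny worked example with assumed counts (illustrative, not from this dataset):
#   TP=8, FP=2, FN=4  ->  precision = 8/(8+2) = 0.80, recall = 8/(8+4) ~ 0.67,
#   f1 = 2 * 0.80 * 0.67 / (0.80 + 0.67) ~ 0.73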
labels = df.columns[2:]
def test_report(Y_predict, Y_test, verbose=False):
'''return a dictionary of scores from classification report'''
scores = dict()
for i in range(len(labels)):
report = classification_report(Y_predict[:, i], Y_test[:, i],
output_dict=True)
if verbose:
scores.update({labels[i]: report})
else:
scores.update({labels[i]: report['weighted avg']})
return scores
def quick_eval(model):
'''evaluate predicting score on testing data'''
Y_predict = model.predict(X_test)
scores = test_report(Y_predict, Y_test)
df_scores = pd.DataFrame.from_dict(data=scores, orient='index')
print(df_scores.mean(axis=0))
return df_scores
# with Pipeline only, no GridSearch
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.7, random_state=1)
pipeline_knn = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier()))
])
pipeline_knn.fit(X_train, Y_train)
quick_eval(pipeline_knn)
# experiment with GridSearchCV and parameters
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.7, random_state=1)
pipeline_knn = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(KNeighborsClassifier()))
])
parameters = {
# 'vect__stop_words': (None, 'english'),
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 100, 1000, 10000),
'tfidf__use_idf': (True, False),
# 'tfidf__smooth_idf': (True, False),
# 'tfidf__sublinear_tf': (True, False),
'clf__estimator__n_neighbors': [3, 5, 10, 20],
# 'clf__estimator__weights': ['uniform', 'distance'],
# 'clf__estimator__algorithm': ['auto', 'ball_tree', 'kd_tree', 'brute'],
# 'clf__estimator__leaf_size': [1,10,30,50],
# 'clf__estimator__p': [1,2,3]
}
cv = GridSearchCV(pipeline_knn, param_grid=parameters, cv=3, verbose=True, n_jobs=-1)
start = time.time()
cv.fit(X_train, Y_train)
last_for = time.time() - start
print(f'Total training time: {last_for:.1f} seconds')
# see what the best parameters for fitting so far
cv.best_estimator_
quick_eval(cv)
# using trained model to predict on test data
Y_predict = cv.predict(X_test)
Y_predict.shape
scores = test_report(Y_predict, Y_test)
def visualize_report(score_report, title='Scores on test data'):
'''visualize score report by matplotlib'''
if isinstance(score_report, dict):
        df_scores = pd.DataFrame.from_dict(data=score_report, orient='index')
else:
df_scores = score_report
fig, ax = plt.subplots(figsize=(10,6), facecolor='white')
width = 0.2
score_types = ['precision', 'recall', 'f1-score']
x = np.arange(0, len(df_scores))
for i, label in enumerate(score_types):
ax.bar(x+i*width, df_scores[label], width=width, label=label)
ax.set_xlim(0, len(df_scores))
ax.xaxis.set_major_locator(MultipleLocator(1))
cat_labels = list(df_scores.index)
cat_labels.insert(0,'')
ax.set_title(title)
ax.set_xticklabels(cat_labels, rotation=90)
fig.legend(ncol=3, loc='lower center')
fig.tight_layout()
fig.savefig('models/evaluate_score.png');
return None
visualize_report(scores)
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:16: UserWarning: FixedFormatter should only be used together with FixedLocator
app.launch_new_instance()
###Markdown
RandomForest Classifier
###Code
from sklearn.ensemble import RandomForestClassifier
# with Pipeline only, no GridSearch
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.7, random_state=1)
pipeline_rf = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_jobs=-1, verbose=False)))
])
pipeline_rf.fit(X_train, Y_train)
quick_eval(pipeline_rf)
###Output
precision 0.976734
recall 0.947006
f1-score 0.960047
support 18351.000000
dtype: float64
###Markdown
GridSearchCV with RandomForestClassifier
###Code
# experiment with GridSearchCV and parameters
# aborted after running too long
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.7, random_state=1)
pipeline_rf = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 100, 1000, 10000),
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 100, 200, 500],
'clf__estimator__bootstrap': [True, False]
}
cv = GridSearchCV(pipeline_rf, param_grid=parameters, cv=3, verbose=True, n_jobs=-1)
start = time.time()
cv.fit(X_train, Y_train)
last_for = time.time() - start
print(f'Total training time: {last_for:.1f} seconds')
quick_eval(cv)
###Output
precision 0.972370
recall 0.948167
f1-score 0.958881
support 18351.000000
dtype: float64
###Markdown
High scores on fitting the data, but let's take a closer look
###Code
# imbalanced dataset, most of columns containing data for 0
# high fitting score, but not very useful to identify a positive message
fig, ax = plt.subplots(figsize=(12,6))
df[df.columns[2:]].mean().plot(kind='bar', ax=ax);
###Output
_____no_output_____
###Markdown
XGBoost
###Code
from xgboost import XGBClassifier
# with Pipeline only, no GridSearch
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.7, random_state=1)
pipeline_xgb = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(XGBClassifier()))
])
pipeline_xgb.fit(X_train, Y_train)
scores = quick_eval(pipeline_xgb)
scores
###Output
precision 0.965628
recall 0.949031
f1-score 0.955863
support 18351.000000
dtype: float64
###Markdown
GridSearch with XGBoost
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.7, random_state=1)
pipeline_xgb = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(XGBClassifier()))
])
parameters = {
'vect__ngram_range': ((1, 1), (1, 2)),
# 'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 100, 1000, 10000),
'tfidf__use_idf': (True, False),
'clf__estimator__n_estimators': [50, 100, 200],
# 'clf__estimator__max_depth': [3, 5, 10]
}
kfold = KFold(n_splits=10, shuffle=True, random_state=1)  # shuffle must be enabled for random_state to take effect
clf_xgb = GridSearchCV(pipeline_xgb, param_grid=parameters, cv=kfold, verbose=1, n_jobs=-1)
start = time.time()
clf_xgb.fit(X_train, Y_train)
last_for = time.time() - start
print(f'Total training time: {last_for:.1f} seconds')
scores = quick_eval(clf_xgb)
scores
visualize_report(scores)
import json  # needed for json.dumps below
with open('models/evaluate_score_xgb.txt', 'w+') as f:
    f.write(json.dumps(scores.to_dict(orient='index')))
###Output
_____no_output_____
###Markdown
Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
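Given the class imbalance observed above, one option worth noting (a sketch only, not evaluated in this notebook) is to reweight the rare positive labels inside the forest:
###Code
# Sketch: the same pipeline with class_weight='balanced' on the RandomForest,
# which upweights the rare positive labels; not run or tuned here.
pipeline_rf_balanced = Pipeline([
    ('vect', CountVectorizer(tokenizer=tokenize)),
    ('tfidf', TfidfTransformer()),
    ('clf', MultiOutputClassifier(
        RandomForestClassifier(class_weight='balanced', n_jobs=-1)))
])
###Output
_____no_output_____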
###Code
# we can try another classifier
from sklearn.ensemble import RandomForestClassifier
pipeline_rf = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
pipeline_rf.fit(X_train, Y_train)
quick_eval(pipeline_rf)
cv.best_estimator_
###Output
_____no_output_____
###Markdown
5. Export your model as a pickle file
###Code
import joblib
# joblib.dump(pipeline_knn, 'models/knn_clf_v1.pkl')
joblib.dump(clf_xgb, 'models/xgb_clf.pkl', compress=3)
###Output
_____no_output_____
###Markdown
ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import pandas as pd
import numpy as np
import string
import re
# nlp libraries
import nltk
nltk.download(['punkt', 'stopwords', 'wordnet'])
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
# ml libraries
import sklearn
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, f1_score, recall_score, precision_score
from sklearn.multioutput import MultiOutputClassifier
# !pip install scikit-learn --upgrade
print(sklearn.__version__)
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql('DisasterResponse.db', engine)
X = df['message'].values
Y = df.drop(['id', 'message', 'original', 'genre'], axis=1).values
# df.head()
df[df.aid_related==2]
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
from contractions import contractions_dict
def expand_contractions(text, contractions_dict):
contractions_pattern = re.compile('({})'.format('|'.join(contractions_dict.keys())),
flags=re.IGNORECASE | re.DOTALL)
expanded_text = contractions_pattern.sub(expand_match, text)
expanded_text = re.sub("'", "", expanded_text)
return expanded_text
def expand_match(contraction):
match = contraction.group(0)
first_char = match[0]
expanded_contraction = contractions_dict.get(match) \
if contractions_dict.get(match) \
else contractions_dict.get(match.lower())
expanded_contraction = expanded_contraction
return expanded_contraction
def tokenize(text):
'''
Args:
text(string): a string containing the message
Return:
tokenized_message(list): a list of words containing the processed message
'''
tokenized_message = []
try:
# for unbalanced parenthesis problem
text = text.replace(')','')
text = text.replace('(','')
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
# get list of all urls using regex
detected_urls = re.findall(url_regex, text)
# replace each url in text string with placeholder
for url in detected_urls:
text = re.sub(url, "urlplaceholder", text)
# remove whitespaces
text = re.sub(r" +", " ", text)
# expand contractions
text = expand_contractions(text, contractions_dict)
# tokenize text
tokens = word_tokenize(text)
# initiate lemmatizer
lemmatizer = WordNetLemmatizer()
# get stopwords
stopwords_english = stopwords.words('english')
stopwords_english += 'u'
for word in tokens:
# normalize word
word = word.lower()
if (word not in stopwords_english and # remove stopwords
word not in string.punctuation): # remove punctuation
word = lemmatizer.lemmatize(word) # lemmatizing word
tokenized_message.append(word)
except Exception as e:
print(e)
# print(text)
return tokenized_message
text = "The first time you see The Second Renaissance it may look boring. Look at it at least twice and definitely watch part 2. It will change your view of the matrix. Are the human people the ones https://bachda.com) who started the war ? Is AI a bad thing ?"
print(tokenize(text))
###Output
['first', 'time', 'see', 'second', 'renaissance', 'may', 'look', 'boring', 'look', 'least', 'twice', 'definitely', 'watch', 'part', '2', 'change', 'view', 'matrix', 'human', 'people', 'one', 'urlplaceholder', 'started', 'war', 'ai', 'bad', 'thing']
###Markdown
3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
# multi output classifier
pipeline_multi = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_jobs=10)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline
- Split data into train and test sets
- Train pipeline
###Code
from time import time
start = time()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=4)
pipeline_multi.fit(X_train, y_train)
end = time()
print("Training time:{}".format(end-start))
###Output
Training time:62.05896782875061
###Markdown
5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline_multi.predict(X_test)
report = []
for idx, col in enumerate(y_pred.T):
report.append(f1_score(y_test.T[idx], col, average='weighted'))
full_report = []
for idx, col in enumerate(y_pred.T):
full_report.append(classification_report(y_test.T[idx], col))
print(report)
print(np.mean(report))
print(full_report[0])
###Output
precision recall f1-score support
0 0.73 0.40 0.52 1238
1 0.84 0.95 0.89 4006
accuracy 0.82 5244
macro avg 0.78 0.68 0.70 5244
weighted avg 0.81 0.82 0.80 5244
###Markdown
6. Improve your model
Use grid search to find better parameters.
###Code
parameters = {
'vect__ngram_range': ((1,1), (1,2)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 5000, 10000),
'tfidf__use_idf': (True, False),
    # the RandomForest is wrapped in MultiOutputClassifier, so its parameters
    # must be addressed as clf__estimator__<param>
    'clf__estimator__n_estimators': [100, 200, 300],
    'clf__estimator__min_samples_split': [2, 3, 4],
}
cv = GridSearchCV(pipeline_multi, param_grid=parameters, n_jobs=10, verbose=10)
cv.fit(X_train, y_train)
import joblib
joblib.dump(cv, "best_params.pkl")
cv.best_params_
# train with best params
# multi output classifier
pipeline_multi_best = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, max_df=0.5, max_features=5000, ngram_range=(1,2))),
('tfidf', TfidfTransformer(use_idf=False)),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators=100, min_samples_split=2, n_jobs=10)))
])
from time import time
start = time()
pipeline_multi_best.fit(X_train, y_train)
end = time()
print("Training time:{}".format(end-start))
y_pred = pipeline_multi_best.predict(X_test)
report = []
for idx, col in enumerate(y_pred.T):
report.append(f1_score(y_test.T[idx], col, average='weighted'))
print(report)
print(np.mean(report))
###Output
[0.80826145170927, 0.8839689542463472, 0.9917124722746894, 0.7739841853976814, 0.9043479337286663, 0.947266174588642, 0.965884411896301, 0.9736646879100621, 0.9559498484153882, 1.0, 0.9586550302483187, 0.9479940879403576, 0.9350153975990199, 0.9816585517191361, 0.9665131609339072, 0.9780519218494087, 0.9540567997208698, 0.9515079610756865, 0.8194629236133815, 0.9009293106489187, 0.9420527135590082, 0.9421775783331847, 0.9729654932210757, 0.9891436100131752, 0.9828704447364472, 0.994854209149384, 0.9823006000810917, 0.931509206100485, 0.8786325645865102, 0.9495559046587569, 0.9437514842008289, 0.9853017181560437, 0.971446950540802, 0.9746307332171688, 0.9309944311226099, 0.8332650295154341]
0.9390093871307794
###Markdown
7. Test your model
Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
###Code
# add new tranformers for features
from sklearn.base import BaseEstimator, TransformerMixin
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
if pos_tags:
first_word, first_tag = pos_tags[0][0], pos_tags[0][1]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, X, y=None):
return self
def transform(self, X):
x_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(x_tagged)
pipeline_improved = Pipeline([
('features', FeatureUnion([
('nlp_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(RandomForestClassifier(n_jobs=10)))
])
%timeit pipeline_improved.fit(X_train, y_train)
%timeit pred = pipeline_improved.predict(X_test)
report = []
for idx, col in enumerate(pred.T):
report.append(f1_score(y_test.T[idx], col, average='weighted'))
print(np.mean(report))
###Output
11.8 s ± 258 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
0.9374718642268118
###Markdown
XGBoost for better performance
###Code
# try using xgboost
import xgboost as xgb
pipeline_xgb = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(xgb.sklearn.XGBClassifier()))
])
start = time()
# X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=4)
pipeline_xgb.fit(X_train, y_train)
end = time()
print("Training Time: {}".format(end-start))
pred = pipeline_xgb.predict(X_test)
report = []
for idx, col in enumerate(pred.T):
report.append(f1_score(y_test.T[idx], col, average='weighted'))
print(np.mean(report))
c_report = []
# for idx, col in enumerate(pred):
# c_report.append(classification_report(y_test[:idx], pred[:idx], labels=df.columns[4:].tolist()))
cols = df.columns[4:].tolist()
for idx in range(pred.shape[1]):
c_report.append(classification_report(y_test[:, idx],pred[:, idx], output_dict=True))
f1= []
for i in range(len(c_report)):
f1.append(c_report[i]['weighted avg']['f1-score'])
print(np.mean(f1))
###Output
0.9399919814415109
###Markdown
Optimize XGBoost parameters
###Code
parameters = {
# 'vect__ngram_range': ((1,1), (1,2)),
# 'vect__max_df': (0.5, 0.75, 1.0),
# 'vect__max_features': (None, 5000, 10000),
# 'tfidf__use_idf': (True, False),
'clf__estimator__learning_rate': [0.05, 0.15, 0.25], # shrinks feature values for better boosting
'clf__estimator__max_depth': [4, 6, 8, 10],
'clf__estimator__min_child_weight': [1, 3, 5, 7], # sum of child weights for further partitioning
'clf__estimator__gamma': [0.0, 0.1, 0.2, 0.3, 0.4], # prevents overfitting, split leaf node if min. gamma loss
'clf__estimator__colsample_bytree': [0.3, 0.4, 0.5, 0.7] # subsample ratio of columns when tree is constructed
}
xgb_cv = GridSearchCV(pipeline_xgb, param_grid=parameters, n_jobs=10, verbose=10)
xgb_cv.fit(X_train, y_train)
joblib.dump(xgb_cv, 'xgb_params.pkl')
xgb_cv.best_params_
xgb_cv.best_score_
pipeline_xgb = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(xgb.sklearn.XGBClassifier(colsample_bytree=0.7, gamma=0.4, learning_rate=0.25, max_depth=10, min_child_weight=7)))
])
start = time()
# X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=4)
pipeline_xgb.fit(X_train, y_train)
end = time()
print("Training Time: {}".format(end-start))
pred = pipeline_xgb.predict(X_test)
report = []
for idx, col in enumerate(pred.T):
report.append(f1_score(y_test.T[idx], col, average='weighted'))
print("Mean f1-score: {}".format(np.mean(report)))
parameters = {
'vect__ngram_range': ((1,1), (1,2)),
'vect__max_df': (0.5, 0.75, 1.0),
'vect__max_features': (None, 5000, 10000),
'tfidf__use_idf': (True, False)
# 'clf__estimator__learning_rate': [0.05, 0.15, 0.25], # shrinks feature values for better boosting
# 'clf__estimator__max_depth': [4, 6, 8, 10],
# 'clf__estimator__min_child_weight': [1, 3, 5, 7], # sum of child weights for further partitioning
# 'clf__estimator__gamma': [0.0, 0.1, 0.2, 0.3, 0.4], # prevents overfitting, split leaf node if min. gamma loss
# 'clf__estimator__colsample_bytree': [0.3, 0.4, 0.5, 0.7] # subsample ratio of columns when tree is constructed
}
vect_cv = GridSearchCV(pipeline_xgb, param_grid=parameters, n_jobs=10, verbose=10)
vect_cv.fit(X_train, y_train)
joblib.dump(vect_cv, 'vect_params.pkl')
vect_cv.best_params_
vect_cv.best_score_
pipeline_xgb = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize, max_df=0.5, max_features=None, ngram_range=(1,2))),
('tfidf', TfidfTransformer(use_idf=False)),
('clf', MultiOutputClassifier(xgb.sklearn.XGBClassifier(colsample_bytree=0.7, gamma=0.4, learning_rate=0.25, max_depth=10, min_child_weight=7)))
])
start = time()
# X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=4)
pipeline_xgb.fit(X_train, y_train)
end = time()
print("Training Time: {}".format(end-start))
pred = pipeline_xgb.predict(X_test)
report = []
for idx, col in enumerate(pred.T):
report.append(f1_score(y_test.T[idx], col, average='weighted'))
print("Mean f1-score: {}".format(np.mean(report)))
type(pipeline_xgb)
###Output
_____no_output_____
###Markdown
9. Export your model as a pickle file
###Code
joblib.dump(pipeline_xgb, 'models/xgboost_model.pkl')
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
from sqlalchemy import create_engine
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report,confusion_matrix, precision_score,\
recall_score,accuracy_score, f1_score, make_scorer
from sklearn.base import BaseEstimator, TransformerMixin
import nltk
from nltk import word_tokenize
import pickle
# import libraries
import nltk
nltk.download(['punkt', 'wordnet'])
import pandas as pd
import numpy as np
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sqlalchemy import create_engine
import sqlite3
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
import pickle
# load data from database
conn = sqlite3.connect('Clean_Messages.db')
df = pd.read_sql('SELECT * FROM Clean_Messages', conn)
df = df.dropna()
X = df["message"]
Y = df.drop("message", axis=1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Takes a Python string object and returns a list of processed words
of the text.
INPUT:
- text - Python str object - A raw text data
OUTPUT:
- stem_words - Python list object - A list of processed words from the input `text`.
"""
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
# remove the id and genre columns so that Y only keeps the category labels
Y = Y.drop("id", axis=1)
Y = Y.drop("genre", axis=1)
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf',MultiOutputClassifier(RandomForestClassifier(n_estimators=1000, random_state=0)))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
y_pred
y_pred.shape
category_names=Y.columns
y_test
for idx, column in enumerate(y_test.columns):
    print(classification_report(y_test[column], y_pred[:, idx]))
metrics_list_all=[]
for col in range(y_test.shape[1]):
accuracy = accuracy_score(y_test.iloc[:,col], y_pred[:,col])
precision=precision_score(y_test.iloc[:,col], y_pred[:,col],average='micro')
recall = recall_score(y_test.iloc[:,col], y_pred[:,col],average='micro')
f_1 = f1_score(y_test.iloc[:,col], y_pred[:,col],average='micro')
metrics_list=[accuracy,precision,recall,f_1]
metrics_list_all.append(metrics_list)
metrics_df=pd.DataFrame(metrics_list_all,index=category_names,columns=["Accuracy","Precision","Recall","F_1"])
print(metrics_df)
def avg_accuracy_score(y_true, y_pred):
"""
Assumes that the numpy arrays `y_true` and `y_pred`
are of the same shape and returns the average of the
accuracy score computed columnwise.
y_true - Numpy array - An (m x n) matrix
y_pred - Numpy array - An (m x n) matrix
avg_accuracy - Numpy float64 object - Average of accuracy score
"""
# initialise an empty list
accuracy_results = []
# for each column index in either y_true or y_pred
for idx in range(y_true.shape[-1]):
# Get the accuracy score of the idx-th column of y_true and y_pred
accuracy = accuracy_score(y_true[:,idx], y_pred[:,idx])
# Update accuracy_results with accuracy
accuracy_results.append(accuracy)
# Take the mean of accuracy_results
avg_accuracy = np.mean(accuracy_results)
return avg_accuracy
average_accuracy_score =make_scorer(avg_accuracy_score)
list(pipeline.get_params())
###Output
_____no_output_____
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
parameters = [
{
'clf__estimator__max_leaf_nodes': [50, 100, 200],
'clf__estimator__min_samples_split': [2, 3, 4],
}
]
cv = GridSearchCV(pipeline, param_grid=parameters,
scoring=average_accuracy_score,
verbose=10,
return_train_score=True
)
cv.fit(X_train, y_train)
###Output
Fitting 5 folds for each of 9 candidates, totalling 45 fits
[CV 1/5; 1/9] START clf__estimator__max_leaf_nodes=50, clf__estimator__min_samples_split=2
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
# re-predict with the tuned grid-search model before computing the metrics
y_pred = cv.predict(X_test)
metrics_list_all=[]
for col in range(y_test.shape[1]):
accuracy = accuracy_score(y_test.iloc[:,col], y_pred[:,col])
precision=precision_score(y_test.iloc[:,col], y_pred[:,col],average='micro')
recall = recall_score(y_test.iloc[:,col], y_pred[:,col],average='micro')
f_1 = f1_score(y_test.iloc[:,col], y_pred[:,col],average='micro')
metrics_list=[accuracy,precision,recall,f_1]
metrics_list_all.append(metrics_list)
metrics_df=pd.DataFrame(metrics_list_all,index=category_names,columns=["Accuracy","Precision","Recall","F_1"])
print(metrics_df)
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file 10. Use this notebook to complete `train.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
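As a sketch of the "add other features besides the TF-IDF" idea (illustrative only; `TextLengthExtractor` and `pipeline_features` are hypothetical names, not part of the original notebook), a small transformer that emits the message length can be combined with the text pipeline through a `FeatureUnion`:
###Code
# Sketch: combine TF-IDF features with a simple hand-crafted feature via FeatureUnion.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier

class TextLengthExtractor(BaseEstimator, TransformerMixin):
    """Returns the character length of each message as a single numeric column."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([len(text) for text in X]).reshape(-1, 1)

pipeline_features = Pipeline([
    ('features', FeatureUnion([
        ('text', Pipeline([
            ('vect', CountVectorizer(tokenizer=tokenize)),
            ('tfidf', TfidfTransformer()),
        ])),
        ('length', TextLengthExtractor()),
    ])),
    ('clf', MultiOutputClassifier(RandomForestClassifier())),
])
# pipeline_features.fit(X_train, y_train)  # trained the same way as the pipelines above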
###Code
Y.head()
np.sum(Y.isnull())
###Output
_____no_output_____
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import pandas as pd
import numpy as np
import sqlite3
import sqlalchemy
from sqlalchemy import create_engine
import matplotlib.pyplot as plt
%matplotlib inline
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
nltk.download(['punkt', 'wordnet'])
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql_table("disaster_messages", con=engine)
df
X = df['message']
Y = df.iloc[:, 4:]
Y.head(1)
###Output
_____no_output_____
###Markdown
2. Write a tokenization function to process your text data
###Code
def tokenize(text):
"""
Function to tokenize text.
"""
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens=[]
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train, X_test, y_train, y_test = train_test_split(X,Y)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred = pipeline.predict(X_test)
def test_model(y_test, y_pred):
"""
Function to iterate through columns and call sklearn classification report on each.
"""
for index, column in enumerate(y_test):
print(column, classification_report(y_test[column], y_pred[:, index]))
test_model(y_test, y_pred)
###Output
related precision recall f1-score support
0 0.60 0.33 0.43 1515
1 0.81 0.93 0.87 4983
2 0.14 0.02 0.03 56
avg / total 0.76 0.79 0.76 6554
request precision recall f1-score support
0 0.89 0.98 0.93 5461
1 0.80 0.38 0.52 1093
avg / total 0.87 0.88 0.86 6554
offer precision recall f1-score support
0 1.00 1.00 1.00 6527
1 0.00 0.00 0.00 27
avg / total 0.99 1.00 0.99 6554
aid_related precision recall f1-score support
0 0.72 0.89 0.79 3850
1 0.76 0.50 0.61 2704
avg / total 0.74 0.73 0.72 6554
medical_help precision recall f1-score support
0 0.93 0.99 0.96 6044
1 0.54 0.10 0.17 510
avg / total 0.90 0.92 0.90 6554
medical_products precision recall f1-score support
0 0.96 1.00 0.98 6250
1 0.74 0.08 0.15 304
avg / total 0.95 0.96 0.94 6554
search_and_rescue precision recall f1-score support
0 0.97 1.00 0.99 6367
1 0.50 0.03 0.05 187
avg / total 0.96 0.97 0.96 6554
security precision recall f1-score support
0 0.98 1.00 0.99 6438
1 0.33 0.01 0.02 116
avg / total 0.97 0.98 0.97 6554
military precision recall f1-score support
0 0.97 1.00 0.98 6344
1 0.70 0.07 0.12 210
avg / total 0.96 0.97 0.96 6554
child_alone precision recall f1-score support
0 1.00 1.00 1.00 6554
avg / total 1.00 1.00 1.00 6554
water precision recall f1-score support
0 0.94 1.00 0.97 6119
1 0.94 0.18 0.30 435
avg / total 0.94 0.94 0.93 6554
food precision recall f1-score support
0 0.93 0.99 0.96 5851
1 0.85 0.33 0.48 703
avg / total 0.92 0.92 0.91 6554
shelter precision recall f1-score support
0 0.93 0.99 0.96 5998
1 0.75 0.22 0.34 556
avg / total 0.92 0.93 0.91 6554
clothing precision recall f1-score support
0 0.99 1.00 0.99 6451
1 0.82 0.09 0.16 103
avg / total 0.98 0.99 0.98 6554
money precision recall f1-score support
0 0.97 1.00 0.99 6377
1 0.83 0.03 0.05 177
avg / total 0.97 0.97 0.96 6554
missing_people precision recall f1-score support
0 0.99 1.00 1.00 6495
1 0.50 0.02 0.03 59
avg / total 0.99 0.99 0.99 6554
refugees precision recall f1-score support
0 0.96 1.00 0.98 6318
1 0.60 0.03 0.05 236
avg / total 0.95 0.96 0.95 6554
death precision recall f1-score support
0 0.96 1.00 0.98 6272
1 0.70 0.11 0.20 282
avg / total 0.95 0.96 0.95 6554
other_aid precision recall f1-score support
0 0.87 0.99 0.93 5717
1 0.42 0.03 0.06 837
avg / total 0.82 0.87 0.82 6554
infrastructure_related precision recall f1-score support
0 0.93 1.00 0.96 6113
1 0.00 0.00 0.00 441
avg / total 0.87 0.93 0.90 6554
transport precision recall f1-score support
0 0.96 1.00 0.98 6245
1 0.60 0.07 0.12 309
avg / total 0.94 0.95 0.94 6554
buildings precision recall f1-score support
0 0.95 1.00 0.98 6224
1 0.80 0.10 0.17 330
avg / total 0.95 0.95 0.94 6554
electricity precision recall f1-score support
0 0.98 1.00 0.99 6412
1 0.80 0.03 0.05 142
avg / total 0.98 0.98 0.97 6554
tools precision recall f1-score support
0 0.99 1.00 1.00 6509
1 0.00 0.00 0.00 45
avg / total 0.99 0.99 0.99 6554
hospitals precision recall f1-score support
0 0.99 1.00 0.99 6469
1 0.00 0.00 0.00 85
avg / total 0.97 0.99 0.98 6554
shops precision recall f1-score support
0 0.99 1.00 1.00 6517
1 0.00 0.00 0.00 37
avg / total 0.99 0.99 0.99 6554
aid_centers precision recall f1-score support
0 0.99 1.00 0.99 6477
1 0.00 0.00 0.00 77
avg / total 0.98 0.99 0.98 6554
other_infrastructure precision recall f1-score support
0 0.96 1.00 0.98 6263
1 0.33 0.00 0.01 291
avg / total 0.93 0.96 0.93 6554
weather_related precision recall f1-score support
0 0.84 0.96 0.90 4714
1 0.84 0.54 0.66 1840
avg / total 0.84 0.84 0.83 6554
floods precision recall f1-score support
0 0.94 1.00 0.97 6024
1 0.94 0.30 0.45 530
avg / total 0.94 0.94 0.93 6554
storm precision recall f1-score support
0 0.94 0.99 0.96 5947
1 0.78 0.33 0.46 607
avg / total 0.92 0.93 0.92 6554
fire precision recall f1-score support
0 0.99 1.00 0.99 6476
1 1.00 0.04 0.07 78
avg / total 0.99 0.99 0.98 6554
earthquake precision recall f1-score support
0 0.94 0.99 0.97 5909
1 0.87 0.47 0.61 645
avg / total 0.94 0.94 0.93 6554
cold precision recall f1-score support
0 0.98 1.00 0.99 6433
1 0.70 0.12 0.20 121
avg / total 0.98 0.98 0.98 6554
other_weather precision recall f1-score support
0 0.95 1.00 0.97 6227
1 0.47 0.03 0.05 327
avg / total 0.93 0.95 0.93 6554
direct_report precision recall f1-score support
0 0.85 0.98 0.91 5306
1 0.79 0.29 0.42 1248
avg / total 0.84 0.85 0.82 6554
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
# specify parameters for grid search
parameters = {
'clf__estimator__n_estimators' : [50, 100]
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters)
cv
cv.fit(X_train, y_train)
cv.best_params_
###Output
_____no_output_____
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_pred = cv.predict(X_test)
test_model(y_test, y_pred)
accuracy = (y_pred == y_test).mean()
accuracy
###Output
_____no_output_____
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF 9. Export your model as a pickle file
###Code
import pickle
filename = 'model.pkl'
with open(filename, 'wb') as f:
    pickle.dump(cv, f)
###Output
_____no_output_____
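###Markdown
As a quick usage note (a sketch, not part of the original run; `reloaded_model` is an assumed name): the pickled object can be reloaded later and used for prediction exactly like the in-memory `cv` object.
###Code
# Sketch: reload the exported model and reuse it for predictions.
with open('model.pkl', 'rb') as f:
    reloaded_model = pickle.load(f)
# reloaded_model.predict(X_test[:5])  # should match cv.predict(X_test[:5])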
###Markdown
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y
###Code
# import libraries
import numpy as np
import pandas as pd
import re
from sqlalchemy import create_engine
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.metrics import confusion_matrix,classification_report,fbeta_score,make_scorer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier,AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer
from sklearn.multioutput import MultiOutputClassifier
import pickle
nltk.download(['punkt', 'wordnet','stopwords'])
# load data from database
engine = create_engine('sqlite:///disaster_response.db')
df = pd.read_sql('SELECT * FROM disaster_response', engine)
df.head()
df.columns
###Output
_____no_output_____
###Markdown
The message column is the input and we have to classify what kind of message it is, so X is the message column and Y contains the columns from related to direct_report; we therefore drop the id, message, original and genre columns for Y.
###Code
df['search_and_rescue'].unique()
X = df['message']
Y = df.drop(['id','message','original','genre'],axis=1)
Y.dtypes
column_names=Y.columns
column_names
for col in column_names:
print(df[col].unique())
###Output
[1 0]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
[0 1]
###Markdown
2. Write a tokenization function to process your text data
###Code
# quick sanity check: run the tokenization steps on the first 10 messages
clean_tokens = []
for message in X[:10]:
text=message.lower()
text=re.sub(r"[^a-zA-Z0-9]"," ",text)
tokens = word_tokenize(text)
lemmatizer=WordNetLemmatizer()
stop_word = stopwords.words("english")
for toks in tokens:
if toks not in stop_word:
clean_tok=lemmatizer.lemmatize(toks).strip()
clean_tokens.append(clean_tok)
print(clean_tokens)
def tokenize(text):
    """
    Normalize, tokenize, lemmatize and remove stop words from a raw message.

    Args:
        text: the text to be tokenized

    Returns:
        tokens: the cleaned tokens extracted from the text
    """
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()).strip()
# tokenize text
tokens = word_tokenize(text)
# lemmatize and remove stop words
lemmatizer = WordNetLemmatizer()
tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stopwords.words('english')]
return tokens
###Output
_____no_output_____
###Markdown
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
###Code
pipeline = Pipeline([("vect",CountVectorizer(tokenizer=tokenize)),
("tfidf",TfidfTransformer()),
("clf",MultiOutputClassifier(RandomForestClassifier()))
])
###Output
_____no_output_____
###Markdown
4. Train pipeline- Split data into train and test sets- Train pipeline
###Code
X_train,X_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state = 42)
np.random.seed(42)
pipeline.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
###Code
y_pred_train=pipeline.predict(X_train)
# from sklearn.metrics import classification_report
#y_preds, Y_test.values
print(classification_report(y_train,y_pred_train, target_names= column_names))
y_pred_test = pipeline.predict(X_test)
print(classification_report(y_test, y_pred_test, target_names=column_names))
###Output
precision recall f1-score support
related 0.85 0.91 0.88 3998
request 0.79 0.44 0.57 891
offer 0.00 0.00 0.00 24
aid_related 0.74 0.58 0.65 2164
medical_help 0.48 0.07 0.12 435
medical_products 0.69 0.10 0.18 279
search_and_rescue 0.50 0.07 0.13 136
security 0.00 0.00 0.00 96
military 0.52 0.08 0.13 158
child_alone 0.00 0.00 0.00 0
water 0.81 0.31 0.45 335
food 0.87 0.43 0.57 584
shelter 0.79 0.36 0.50 468
clothing 0.56 0.07 0.13 70
money 0.57 0.07 0.13 112
missing_people 0.00 0.00 0.00 63
refugees 0.47 0.05 0.10 170
death 0.72 0.15 0.24 247
other_aid 0.52 0.04 0.08 692
infrastructure_related 0.40 0.01 0.02 336
transport 0.76 0.08 0.15 235
buildings 0.91 0.11 0.20 269
electricity 1.00 0.06 0.11 115
tools 0.00 0.00 0.00 35
hospitals 0.00 0.00 0.00 52
shops 0.00 0.00 0.00 25
aid_centers 0.00 0.00 0.00 64
other_infrastructure 0.00 0.00 0.00 225
weather_related 0.85 0.63 0.72 1472
floods 0.90 0.28 0.42 431
storm 0.77 0.47 0.58 479
fire 1.00 0.02 0.04 53
earthquake 0.90 0.68 0.77 515
cold 0.74 0.13 0.23 104
other_weather 0.58 0.10 0.18 267
direct_report 0.74 0.32 0.45 1010
avg / total 0.74 0.48 0.54 16609
###Markdown
6. Improve your modelUse grid search to find better parameters.
###Code
pipeline.get_params()
parameters = {
'tfidf__use_idf': [True]
}
cv = GridSearchCV(pipeline,param_grid=parameters, verbose = 10)
model=cv.fit(X_train, y_train)
###Output
Fitting 3 folds for each of 1 candidates, totalling 3 fits
[CV] tfidf__use_idf=True .............................................
[CV] ... tfidf__use_idf=True, score=0.24274066657130597, total= 2.7min
[CV] tfidf__use_idf=True .............................................
###Markdown
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
###Code
y_pred_train = cv.predict(X_train)
print(classification_report(y_train.values, y_pred_train, target_names=column_names))
y_pred_test = cv.predict(X_test)
print(classification_report(y_test.values, y_pred_test, target_names=column_names))
###Output
precision recall f1-score support
related 0.85 0.91 0.88 3998
request 0.79 0.43 0.56 891
offer 0.00 0.00 0.00 24
aid_related 0.77 0.60 0.67 2164
medical_help 0.63 0.13 0.21 435
medical_products 0.67 0.12 0.20 279
search_and_rescue 0.56 0.07 0.13 136
security 0.00 0.00 0.00 96
military 0.47 0.06 0.10 158
child_alone 0.00 0.00 0.00 0
water 0.80 0.29 0.42 335
food 0.84 0.48 0.61 584
shelter 0.79 0.33 0.47 468
clothing 0.92 0.17 0.29 70
money 0.77 0.09 0.16 112
missing_people 0.00 0.00 0.00 63
refugees 0.64 0.05 0.10 170
death 0.91 0.16 0.27 247
other_aid 0.53 0.05 0.10 692
infrastructure_related 0.20 0.00 0.01 336
transport 0.63 0.07 0.13 235
buildings 0.80 0.14 0.25 269
electricity 0.83 0.04 0.08 115
tools 0.00 0.00 0.00 35
hospitals 0.00 0.00 0.00 52
shops 0.00 0.00 0.00 25
aid_centers 1.00 0.02 0.03 64
other_infrastructure 0.00 0.00 0.00 225
weather_related 0.84 0.60 0.70 1472
floods 0.88 0.35 0.50 431
storm 0.78 0.46 0.58 479
fire 0.33 0.02 0.04 53
earthquake 0.89 0.72 0.80 515
cold 0.86 0.12 0.20 104
other_weather 0.44 0.03 0.06 267
direct_report 0.71 0.30 0.42 1010
avg / total 0.74 0.49 0.55 16609
###Markdown
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF
###Code
cv.best_params_
# Try using AdaBoost instead of Random Forest Classifier
pipeline2 = Pipeline([
('vect', CountVectorizer(tokenizer = tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
parameters2 = {'vect__min_df': [5],
'tfidf__use_idf':[True]
}
cv2 = GridSearchCV(pipeline2, param_grid = parameters2, verbose = 10)
# Find best parameters
np.random.seed(42)
model2 = cv2.fit(X_train, y_train)
###Output
Fitting 3 folds for each of 1 candidates, totalling 3 fits
[CV] tfidf__use_idf=True, vect__min_df=5 .............................
[CV] tfidf__use_idf=True, vect__min_df=5, score=0.24159633814904877, total= 2.8min
[CV] tfidf__use_idf=True, vect__min_df=5 .............................
###Markdown
9. Export your model as a pickle file
###Code
with open("disaster_Response_model.pkl", 'wb') as f:
    pickle.dump(model, f)
###Output
_____no_output_____ |
linearDatasetPreparation.ipynb | ###Markdown
Exploratory data analysis Imports and raw data loading
###Code
import pandas as pd
import numpy as np
raw_train = pd.read_csv('/content/drive/My Drive/Colab Notebooks/mpr/D_train.csv')
raw_test = pd.read_csv('/content/drive/My Drive/Colab Notebooks/mpr/D_test.csv')
###Output
_____no_output_____
###Markdown
Checking the size and different columns in the train and test dfs to ensure they are the same
###Code
raw_train.shape
raw_test.shape
raw_train.columns
raw_test.columns
raw_train.head()
raw_test.head()
###Output
_____no_output_____
###Markdown
Getting the length and dtype info of the train and test data.
###Code
raw_train.info()
raw_test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 21099 entries, 0 to 21098
Data columns (total 39 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 21099 non-null int64
1 Class 21099 non-null int64
2 User 21099 non-null int64
3 X0 21099 non-null float64
4 Y0 21099 non-null float64
5 Z0 21099 non-null float64
6 X1 21099 non-null float64
7 Y1 21099 non-null float64
8 Z1 21099 non-null float64
9 X2 21099 non-null float64
10 Y2 21099 non-null float64
11 Z2 21099 non-null float64
12 X3 20680 non-null float64
13 Y3 20680 non-null float64
14 Z3 20680 non-null float64
15 X4 19285 non-null float64
16 Y4 19285 non-null float64
17 Z4 19285 non-null float64
18 X5 17059 non-null float64
19 Y5 17059 non-null float64
20 Z5 17059 non-null float64
21 X6 12740 non-null float64
22 Y6 12740 non-null float64
23 Z6 12740 non-null float64
24 X7 10446 non-null float64
25 Y7 10446 non-null float64
26 Z7 10446 non-null float64
27 X8 8194 non-null float64
28 Y8 8194 non-null float64
29 Z8 8194 non-null float64
30 X9 5655 non-null float64
31 Y9 5655 non-null float64
32 Z9 5655 non-null float64
33 X10 3860 non-null float64
34 Y10 3860 non-null float64
35 Z10 3860 non-null float64
36 X11 25 non-null float64
37 Y11 25 non-null float64
38 Z11 25 non-null float64
dtypes: float64(36), int64(3)
memory usage: 6.3 MB
###Markdown
Summarizing the numerical values
###Code
raw_train.describe()
raw_test.describe()
###Output
_____no_output_____
###Markdown
Checking for duplicates
###Code
raw_train.duplicated().sum()
raw_test.duplicated().sum()
###Output
_____no_output_____
###Markdown
From the above analysis, we can make the following conclusions:1. The training and the testing sets have the same 39 columns2. There are many empty cells in both the dataframes3. There are no duplicate rows in the datasets4. This data cannot be directly used for modeling. Feature engineering Making meaningful and usable linear data out of the raw data by defining 13 features:1. Mean of the x markers2. Mean of the y markers3. Mean of the z markers4. Std of the x markers5. Std of the y markers6. Std of the z markers7. Number of non-zero values in the data8. Minimum value of the x markers9. Minimum value of the y markers10. Minimum value of the z markers11. Maximum value of the x markers12. Maximum value of the y markers13. Maximum value of the z markers
###Code
#Separating the X, Y and Z axis data
datax = raw_train[['X0', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X7', 'X8', 'X9', 'X10', 'X11']]
datay = raw_train[['Y0', 'Y1', 'Y2', 'Y3', 'Y4', 'Y5', 'Y6', 'Y7', 'Y8', 'Y9', 'Y10', 'Y11']]
dataz = raw_train[['Z0', 'Z1', 'Z2', 'Z3', 'Z4', 'Z5', 'Z6', 'Z7', 'Z8', 'Z9', 'Z10', 'Z11']]
Class = raw_train['Class']
datax = np.array(datax)
datay = np.array(datay)
dataz = np.array(dataz)
Class = np.array(Class)
#Replacing the cells with no entries with 0
datax = np.nan_to_num(datax)
datay = np.nan_to_num(datay)
dataz = np.nan_to_num(dataz)
datax[0:10]
datax.shape
datay[0:10]
datay.shape
dataz[0:10]
dataz.shape
x_mean = []
y_mean = []
z_mean = []
x_std = []
y_std = []
z_std = []
number = []
for i in range(len(datax)):
x_mean.append(np.mean(datax[i]))
y_mean.append(np.mean(datay[i]))
z_mean.append(np.mean(dataz[i]))
x_std.append(np.std(datax[i]))
y_std.append(np.std(datay[i]))
z_std.append(np.std(dataz[i]))
number.append(np.count_nonzero([datax[i], datay[i], dataz[i]]))
#Feature 1 - mean of x-axis values
x_mean = np.array(x_mean)
x_mean
#Feature 2 - mean of y-axis values
y_mean = np.array(y_mean)
y_mean
#Feature 3 - mean of z-axis values
z_mean = np.array(z_mean)
z_mean
#Feature 4 - standard deviation of x-axis values
x_std = np.array(x_std)
x_std
#Feature 5 - standard deviation of y-axis values
y_std = np.array(y_std)
y_std
#Feature 6 - standard deviation of z-axis values
z_std = np.array(z_std)
z_std
#Feature 7 - number of data points present
number = np.array(number)
number
#Feature 8 - min x-axis value
x_min = (np.amin(datax, axis = 1)).T
x_min
#Feature 9 - min y-axis value
y_min = (np.amin(datay, axis = 1)).T
y_min
#Feature 10 - min z-axis value
z_min = (np.amin(dataz, axis = 1)).T
z_min
#Feature 11 - max x-axis value
x_max = (np.amax(datax, axis = 1)).T
x_max
#Feature 12 - max y-axis value
y_max = (np.amax(datay, axis = 1)).T
y_max
#Feature 13 - max z-axis value
z_max = (np.amax(dataz, axis = 1)).T
z_max
#Saving the features as a CSV file
df = pd.DataFrame({'xmean': x_mean, 'ymean': y_mean, 'zmean': z_mean, 'xstd': x_std, 'ystd': y_std, 'zstd': z_std, 'xmax': x_max, 'ymax':y_max, 'zmax': z_max,'xmin':x_min, 'ymin':y_min, 'zmin':z_min, 'num': number, 'Class': Class })
df.to_csv('linearTrain.csv')
###Output
_____no_output_____
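###Markdown
The per-row loops above can also be expressed as a single vectorized helper. The sketch below assumes, as above, that missing markers are treated as zeros; `build_linear_features` is a hypothetical helper and the column order of its output may differ from the saved CSV.
###Code
# Sketch: vectorized computation of the 13 features for any of the raw frames.
def build_linear_features(raw_df):
    axes = {}
    for axis in ('X', 'Y', 'Z'):
        cols = ['{}{}'.format(axis, i) for i in range(12)]
        axes[axis] = np.nan_to_num(raw_df[cols].to_numpy())
    feats = {}
    for axis, arr in axes.items():
        key = axis.lower()
        feats[key + 'mean'] = arr.mean(axis=1)
        feats[key + 'std'] = arr.std(axis=1)
        feats[key + 'max'] = arr.max(axis=1)
        feats[key + 'min'] = arr.min(axis=1)
    stacked = np.concatenate([axes['X'], axes['Y'], axes['Z']], axis=1)
    feats['num'] = np.count_nonzero(stacked, axis=1)
    feats['Class'] = raw_df['Class'].to_numpy()
    return pd.DataFrame(feats)
# build_linear_features(raw_train).to_csv('linearTrain.csv')  # same feature content as the loops above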
###Markdown
Repeating this for the test dataset
###Code
#Separating the X, Y and Z axis data
datax = raw_test[['X0', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X7', 'X8', 'X9', 'X10', 'X11']]
datay = raw_test[['Y0', 'Y1', 'Y2', 'Y3', 'Y4', 'Y5', 'Y6', 'Y7', 'Y8', 'Y9', 'Y10', 'Y11']]
dataz = raw_test[['Z0', 'Z1', 'Z2', 'Z3', 'Z4', 'Z5', 'Z6', 'Z7', 'Z8', 'Z9', 'Z10', 'Z11']]
Class = raw_test['Class']
datax = np.array(datax)
datay = np.array(datay)
dataz = np.array(dataz)
Class = np.array(Class)
#Replacing the cells with no entries with 0
datax = np.nan_to_num(datax)
datay = np.nan_to_num(datay)
dataz = np.nan_to_num(dataz)
datax[0:10]
datax.shape
datay[0:10]
datay.shape
dataz[0:10]
dataz.shape
x_mean = []
y_mean = []
z_mean = []
x_std = []
y_std = []
z_std = []
number = []
for i in range (len(datax)):
x_mean.append(np.mean(datax[i]))
y_mean.append(np.mean(datay[i]))
z_mean.append(np.mean(dataz[i]))
x_std.append(np.std(datax[i]))
y_std.append(np.std(datay[i]))
z_std.append(np.std(dataz[i]))
number.append(np.count_nonzero([datax[i], datay[i], dataz[i]]))
#Feature 1 - mean of x-axis values
x_mean = np.array(x_mean)
x_mean
#Feature 2 - mean of y-axis values
y_mean = np.array(y_mean)
y_mean
#Feature 3 - mean of z-axis values
z_mean = np.array(z_mean)
z_mean
#Feature 4 - standard deviation of x-axis values
x_std = np.array(x_std)
x_std
#Feature 5 - standard deviation of y-axis values
y_std = np.array(y_std)
y_std
#Feature 6 - standard deviation of z-axis values
z_std = np.array(z_std)
z_std
#Feature 7 - number of data points present
number = np.array(number)
number
#Feature 8 - min x-axis value
x_min = (np.amin(datax, axis = 1)).T
x_min
#Feature 9 - min y-axis value
y_min = (np.amin(datay, axis = 1)).T
y_min
#Feature 10 - min z-axis value
z_min = (np.amin(dataz, axis = 1)).T
z_min
#Feature 11 - max x-axis value
x_max = (np.amax(datax, axis = 1)).T
x_max
#Feature 12 - max y-axis value
y_max = (np.amax(datay, axis = 1)).T
y_max
#Feature 13 - max z-axis value
z_max = (np.amax(dataz, axis = 1)).T
z_max
#Saving the features as a CSV file
df = pd.DataFrame({'xmean': x_mean, 'ymean': y_mean, 'zmean': z_mean, 'xstd': x_std, 'ystd': y_std, 'zstd': z_std, 'xmax': x_max, 'ymax':y_max, 'zmax': z_max,'xmin':x_min, 'ymin':y_min, 'zmin':z_min, 'num': number, 'Class': Class })
df.to_csv('linearTest.csv')
###Output
_____no_output_____ |
1.0-whs-pdfToCosineDistanceHeatmap.ipynb | ###Markdown
1. Input Folder of pdfs
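The cells in this notebook rely on a handful of imports and a loaded spaCy model that are not shown explicitly; a minimal setup sketch is given below (the model name is an assumption — any spaCy model that ships word vectors will do).
###Code
# Assumed setup for the cells below.
import os
import glob
import textract
import pandas as pd
import spacy
from scipy import spatial

nlp = spacy.load('en_core_web_md')  # assumed model name; doc.vector needs word vectors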
###Code
relative_folder_path = 'pdfFolder'
pattern = os.path.join(os.getcwd(),relative_folder_path,'*.pdf')
print(pattern)
###Output
/Users/wsolomon/Documents/GitHub/pdfSimilarity/pdfFolder/*.pdf
###Markdown
2. Glob Files Together and Read Text
###Code
pdfs = glob.glob(pattern)
pdfs_column = []
label_column = []
for idx, pdf_path in enumerate(pdfs):
    print(pdf_path)
    # extract the raw text and strip newlines
    text = textract.process(pdf_path).replace('\n', "")
    pdfs_column.append(text)
    label_column.append(idx)
df = pd.DataFrame({'labels': label_column, 'text': pdfs_column})
df.head(10)
###Output
/Users/wsolomon/Documents/GitHub/pdfSimilarity/pdfFolder/Slate Article Submission_Shakespeare and Skyscrapers_31May2017.pdf
/Users/wsolomon/Documents/GitHub/pdfSimilarity/pdfFolder/CyberSecurity_ROI Essay_WHS_ver2.pdf
/Users/wsolomon/Documents/GitHub/pdfSimilarity/pdfFolder/FinalPaper_Solomon_ver2.pdf
/Users/wsolomon/Documents/GitHub/pdfSimilarity/pdfFolder/Sadybakasov_Alymbek_SotckFish.pdf
/Users/wsolomon/Documents/GitHub/pdfSimilarity/pdfFolder/CyberSecurity_Backdoor Essay_WHS.pdf
###Markdown
3. spaCy and Vectorize Texts
###Code
vecs = []
for raw_text in pdfs_column:
    # doc.vector is the average of the token word vectors for the document
    doc = nlp(raw_text.decode('utf8'))
    vecs.append(doc.vector)
df['vecs'] = vecs
df.head()
###Output
_____no_output_____
###Markdown
4. Create Cross Cosine-Distance Matrix
###Code
cosDist_main = []
#Create Cross Cosine Matrix:
for vec1s in vecs:
cosDist_sub = []
for vec2s in vecs:
dist = spatial.distance.cosine(vec1s, vec2s)
cosDist_sub.append(dist)
cosDist_main.append(cosDist_sub)
df_cos = pd.DataFrame(cosDist_main)
df_cos.head()
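###Markdown
The nested loop above can equivalently be computed in one call; a sketch (assuming `vecs` is the list of document vectors built earlier; `df_cos_alt` is an illustrative name):
###Code
# Sketch: the same cross cosine-distance matrix via scipy's pairwise distances.
import numpy as np
from scipy.spatial.distance import pdist, squareform
df_cos_alt = pd.DataFrame(squareform(pdist(np.vstack(vecs), metric='cosine')))
# df_cos_alt should match df_cos up to floating-point error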
import seaborn as sns
import matplotlib.pyplot as plt
sns.heatmap(df_cos, annot=True)
###Output
_____no_output_____ |
Section 08 - Object Oriented Prog/Lec 69 - Attribute & class Keyword.ipynb | ###Markdown
Creating an instance:
###Code
class Sample():
pass
my_sample = Sample()
type(my_sample)
###Output
_____no_output_____
###Markdown
Creating an attribute:
###Code
class Dog():
def __init__(self,breed):
self.breed = breed
my_dog = Dog(breed="Lab")
type(my_dog)
my_dog.breed
###Output
_____no_output_____
###Markdown
Changing parameter name:
###Code
class Dog():
def __init__(self,mybreed):
self.breed = mybreed
my_dog = Dog(mybreed="Huskie")
type(my_dog)
my_dog.breed
###Output
_____no_output_____
###Markdown
Changing attribute name:
###Code
class Dog():
def __init__(self,mybreed):
self.my_attribute = mybreed
my_dog = Dog(mybreed="Poodle")
type(my_dog)
my_dog.my_attribute
###Output
_____no_output_____
###Markdown
Example with more attributes:
###Code
class Dog():
def __init__(self,breed,name,spots):
# breed & name would be strings
self.breed = breed
self.name = name
# spots would give True/False
self.spots = spots
my_dog = Dog(breed="lab", name="Sammy", spots=False)
print (my_dog.name)
print (my_dog.breed)
print (my_dog.spots)
###Output
Sammy
lab
False
|
docs/source/notebooks/01-Mach-Zehnder_Interferometer.ipynb | ###Markdown
Mach-Zehnder Interferometer (MZI)**We use the SiEPIC EBeam library in this tutorial.** This notebook walks through the process of setting up and simulating a Mach-Zehnder interferometer device using the OPICS package. A Mach-Zehnder interferometer is a basic waveguide interference device. It consists of two couplers (or Y branches) connected by two waveguides of different length (see below). The difference between the two waveguide lengths causes differential delay, which contributes to the frequency-dependent interference pattern.
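As a rough guide (assuming ideal, lossless 3 dB splitters), the transmission of such an interferometer follows $T(\lambda) \approx \cos^2\left(\pi\, n_\mathrm{eff}\, \Delta L / \lambda\right)$, where $\Delta L$ is the path-length difference between the arms and $n_\mathrm{eff}$ the effective index of the waveguide mode. The fringe spacing (free spectral range) is approximately $\mathrm{FSR} \approx \lambda^2 / (n_g\, \Delta L)$ with $n_g$ the group index; in the circuit built below, $\Delta L = 150\,\mu\mathrm{m} - 50\,\mu\mathrm{m} = 100\,\mu\mathrm{m}$.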
###Code
import time
import numpy as np
import matplotlib.pyplot as plt
import opics
###Output
_____no_output_____
###Markdown
Import component libraryImport `ebeam` library from `libs` module.
###Code
ebeam = opics.libraries.ebeam
###Output
_____no_output_____
###Markdown
Define network Create an instance of `Network` class, which is used to add, connect, and simulate circuit components.
###Code
#defining custom frequency data points for a component
f = np.linspace(opics.C*1e6/1.5, opics.C*1e6/1.6, 2000)
circuit_name = "mzi"
circuit = opics.Network(network_id=circuit_name, f=f)
ebeam.Waveguide?
###Output
_____no_output_____
###Markdown
Add circuit componentsAdd grating couplers, 3 dB power splitters (e.g. a Y-splitter or Y-branch), and waveguides to the circuit. Custom frequency data points can also be defined for individual components.
###Code
#define component instances
input_gc = circuit.add_component(ebeam.GC)
y1 = circuit.add_component(ebeam.Y)
wg1 = circuit.add_component(ebeam.Waveguide, params=dict(length=50e-6))
wg2 = circuit.add_component(ebeam.Waveguide, params=dict(length=150e-6))
y2 = circuit.add_component(ebeam.Y)
output_gc = circuit.add_component(ebeam.GC)
###Output
_____no_output_____
###Markdown
Define circuit connectivityIn this section, we define the component connections. The connections are defined using `Network.connect`, e.g.`Network.connect(component1, component1_port, component2, component2_port)`
###Code
#define circuit connectivity
circuit.connect(input_gc, 1, y1, 0)
circuit.connect(y1, 1, wg1, 0)
circuit.connect(y1, 2, wg2, 0)
circuit.connect(y2, 0, output_gc, 1)
circuit.connect(wg1, 1, y2, 1)
circuit.connect(wg2, 1, y2, 2)
###Output
_____no_output_____
###Markdown
Simulate the circuit
###Code
sim_start = time.time()
#simulate network
circuit.simulate_network()
print("simulation finished in %ss"%(str(round(time.time()-sim_start,2))))
###Output
_____no_output_____
###Markdown
Visualize the simulation result
###Code
circuit.sim_result.plot_sparameters(show_freq=False, scale="log")
###Output
_____no_output_____ |
notebooks/03-1_stocks-prediction.ipynb | ###Markdown
Stock Value PredictionIn this notebook, we will create the actual prediction system by testing various approaches and their accuracy against multiple time horizons (`target_days` variable). First we will load all libraries:
###Code
import pandas as pd
import numpy as np
import sys, os
from datetime import datetime
sys.path.insert(1, '..')
import recommender as rcmd
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
# classification approaches
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
# regression approaches
from sklearn.linear_model import LinearRegression
# data handling and scoring
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import recall_score, precision_score, f1_score, mean_squared_error
###Output
_____no_output_____
###Markdown
Next, we create the input data pipelines for stock and statement data. Therefore we will have to split the data into training and test sets. There are two options for doing that:* Splitting the list of symbols* Splitting the resulting list of training stock datapointsWe will use the first option in order to ensure a clean split (since the generated data has overlapping time frames, the second option would produce test data that might have already been seen by the system during training).
###Code
# create cache object
cache = rcmd.stocks.Cache()
# load list of all available stocks and sample sub-list
stocks = cache.list_data('stock')
def train_test_data(back, ahead, xlim, split=0.3, count=2000, stocks=stocks, cache=cache):
    '''Generates a train/test split over stock symbols.'''
    # sample `count` symbols from the available stocks
    sample = np.random.choice(list(stocks.keys()), count)
# split the stock data
count_train = int((1-split) * count)
sample_train = sample[:count_train]
sample_test = sample[count_train:]
# generate sample data
df_train = rcmd.learning.preprocess.create_dataset(sample_train, stocks, cache, back, ahead, xlim)
df_test = rcmd.learning.preprocess.create_dataset(sample_test, stocks, cache, back, ahead, xlim)
return df_train, df_test
df_train, df_test = train_test_data(10, 22, (-.5, .5), split=0.2, count=4000)
print(df_train.shape)
df_train.head()
# shortcut: store / load created datasets
df_train.to_csv('../data/train.csv')
df_test.to_csv('../data/test.csv')
# load data
#df_train = pd.read_csv('../data/train.csv')
#df_test = pd.read_csv('../data/test.csv')
###Output
_____no_output_____
###Markdown
Now that we have loaded and split the data, we have to divide it into input and output data:
###Code
def divide_data(df, xlim, balance_mode=None, balance_weight=1):
    '''Splits the data into 3 sets: input, output_classify, output_regression.
    Note that this function will also resample the data, if chosen, to create a more balanced dataset. Options are:
    `under`: undersamples the data (every class is sampled down to the size of the smallest class)
    `over`: oversamples the data up to the size of the largest class
    `over_under`: takes the median class count and samples every class towards it
Args:
df (DataFrame): DF to contain all relevant data
xlim (tuple): tuple of integers used to clip and scale regression values to a range of 0 to 1
balance_mode (str): Defines the balance mode of the data (options: 'over_under', 'under', 'over')
balance_weight (float): Defines how much the calculated sample count is weighted in comparison to the actual count (should be between 0 and 1)
Returns:
df_X: DataFrame with input values
df_y_cls: DataFrame with classification labels
df_y_reg: DataFrame with regression values
'''
# sample the data correctly
if balance_mode is not None:
if balance_mode == 'over_under':
# find median
num_samples = df['target_cat'].value_counts().median().astype('int')
elif balance_mode == 'over':
# find highest number
num_samples = df['target_cat'].value_counts().max()
elif balance_mode == 'under':
# find minimal number
num_samples = df['target_cat'].value_counts().min()
else:
raise ValueError('Unknown sample mode: {}'.format(balance_mode))
# sample categories
dfs = []
for cat in df['target_cat'].unique():
df_tmp = df[df['target_cat'] == cat]
cur_samples = int(balance_weight * num_samples + (1-balance_weight) * df_tmp.shape[0])
sample = df_tmp.sample(cur_samples, replace=cur_samples > df_tmp.shape[0])
dfs.append(sample)
# concat and shuffle the rows
df = pd.concat(dfs, axis=0).sample(frac=1)
# remove all target cols
df_X = df.drop(['target', 'target_cat', 'norm_price', 'symbol'], axis=1)
# convert to dummy classes
df_y_cls = pd.get_dummies(df['target_cat'], prefix='cat', dummy_na=False)
# clip values and scale to vals
df_y_reg = np.divide( np.subtract( df['target'].clip(xlim[0], xlim[1]), xlim[0] ), (xlim[1] - xlim[0]) )
return df, df_X, df_y_cls, df_y_reg
df_train_bm, X_train, y_ctrain, y_rtrain = divide_data(df_train, (-.5, .5), balance_mode='over_under', balance_weight=0.9)
df_test_bm, X_test, y_ctest, y_rtest = divide_data(df_test, (-.5, .5))
print(pd.concat([y_ctrain.sum(axis=0), y_ctest.sum(axis=0)], axis=1))
###Output
_____no_output_____
###Markdown
Before we create the actual prediction systems, we have to define metrics for how we want to measure their success. As we have two approaches (classification and regression) we will use two types of metrics:* Precision, Recall & F1* MSE
###Code
def _metric_classifier(y_true, y_pred, avg=None):
p = precision_score(y_true, y_pred, average=avg)
r = recall_score(y_true, y_pred, average=avg)
f1 = f1_score(y_true, y_pred, average=avg)
return f1, p, r
def score_classifier(y_true, y_pred):
'''Calculates and prints the relevant scores for a classifier, including per-class results.'''
f1, p, r = _metric_classifier(y_true, y_pred, avg='micro')
print("Model Performance: F1={:.4f} (P={:.4f} / R={:.4f})".format(f1, p, r))
# list scores of single classes
for i, c in enumerate(y_true.columns):
sf1, sp, sr = _metric_classifier(y_true.iloc[:, i], y_pred[:, i], avg='binary')
print(" {:10} F1={:.4f} (P={:.4f} / R={:.4f})".format(c + ":", sf1, sp, sr))
def score_regression(y_true, y_pred):
mse = mean_squared_error(y_true, y_pred)
print("Model Performance: MSE={:.4f}".format(mse))
###Output
_____no_output_____
###Markdown
ClassificationThe first step is to create a baseline for both approaches (classification and regression). In case of regression our target value will be `target` and for classification it will be `target_cat` (which we might convert into a one-hot vector along the way). Let's start with the simpler form of classification:
###Code
y_ctrain.sum(axis=0)
# scale input data to improve convergance (Note: scaler has to be used for other input data as well)
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)
X_test_std = scaler.transform(X_test)
# train element
classifier = MultiOutputClassifier(LogisticRegression(max_iter=500, solver='lbfgs'))
classifier.fit(X_train_std, y_ctrain)
# predict data
y_pred = classifier.predict(X_test_std)
score_classifier(y_ctest, y_pred)
###Output
_____no_output_____
###Markdown
We can see a strong bias in the system towards `cat_3`, which also has the highest number of training samples. Future work might include oversampling or a more careful selection of target datapoints to reduce these biases; one cheap probe of the bias is sketched right below. After that, we try support vector machines:
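The sketch re-fits the baseline with balanced class weights (illustrative only, not part of the original runs; `classifier_balanced` is an assumed name):
###Code
# Sketch: re-weight each label inversely to its frequency to counter the cat_3 bias.
classifier_balanced = MultiOutputClassifier(
    LogisticRegression(max_iter=500, solver='lbfgs', class_weight='balanced'))
# classifier_balanced.fit(X_train_std, y_ctrain)
# score_classifier(y_ctest, classifier_balanced.predict(X_test_std))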
###Code
classifier_svm = MultiOutputClassifier(SVC())
classifier_svm.fit(X_train_std, y_ctrain)
y_pred_svm = classifier_svm.predict(X_test_std)
score_classifier(y_ctest, y_pred_svm)
###Output
Model Performance: F1=0.4157 (P=0.5416 / R=0.3372)
cat_0: F1=0.0385 (P=1.0000 / R=0.0196)
cat_1: F1=0.0154 (P=1.0000 / R=0.0078)
cat_2: F1=0.0107 (P=0.4932 / R=0.0054)
cat_3: F1=0.6126 (P=0.5414 / R=0.7053)
cat_4: F1=0.0027 (P=1.0000 / R=0.0014)
cat_5: F1=0.0000 (P=0.0000 / R=0.0000)
###Markdown
We can see that the overall F1 score improves, although the predictions are still heavily dominated by the majority class `cat_3`.
###Code
class TestCallback(tf.keras.callbacks.Callback):
def __init__(self, data=X_test_std):
self.data = data
def on_epoch_end(self, epoch, logs={}):
loss, acc = self.model.evaluate(self.data, df_test_bm['target_cat'].to_numpy(), verbose=0)
print('\nTesting loss: {}, acc: {}\n'.format(loss, acc))
# simple feed forward network
print(X_train.shape)
print(df_train.shape)
classifier_ffn = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(X_train_std.shape[1],)),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(y_ctrain.shape[1], activation=tf.nn.softmax)
])
classifier_ffn.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
classifier_ffn.fit(X_train.to_numpy(), df_train_bm['target_cat'].to_numpy(), epochs=100, callbacks=[TestCallback()])
y_pred_ffn = classifier_ffn.predict(X_test.to_numpy())
y_pred_ffn = pd.get_dummies(y_pred_ffn.argmax(axis=1))
print(y_pred_ffn.sum(axis=0))
score_classifier(y_ctest, y_pred_ffn.to_numpy())
###Output
_____no_output_____
###Markdown
It is noteworthy that the model's output on the test data resembles the training input distribution. Let's try to improve generalization with a more complex model.
###Code
act = tf.keras.layers.PReLU
classifier_ffn = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(X_train_std.shape[1],)),
tf.keras.layers.Dense(32), act(),
tf.keras.layers.Dense(64), act(),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(128), act(),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(256), act(),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.4),
tf.keras.layers.Dense(128), act(),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(64), act(),
tf.keras.layers.Dense(y_ctrain.shape[1], activation=tf.nn.softmax)
])
classifier_ffn.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
classifier_ffn.fit(X_train.to_numpy(), df_train_bm['target_cat'].to_numpy(), epochs=200, callbacks=[TestCallback(X_test.to_numpy())])
y_pred_ffn = classifier_ffn.predict(X_test.to_numpy())
print(y_pred_ffn)
y_pred_ffn = pd.get_dummies(y_pred_ffn.argmax(axis=1))
print(y_pred_ffn.sum(axis=0))
score_classifier(y_ctest, y_pred_ffn.to_numpy())
# save the model
classifier_ffn.save('../data/keras-model.h5')
###Output
_____no_output_____
###Markdown
RegressionThe other possible option is regression. We will test a linear regression against a neural network based on the MSE score to see how the predictions hold up.
###Code
reg = LinearRegression()
reg.fit(X_train.iloc[:, :7].to_numpy(), y_rtrain)
y_pred_reg = reg.predict(X_test.iloc[:, :7].to_numpy())
score_regression(y_rtest, y_pred_reg)
###Output
Model Performance: MSE=0.0329
###Markdown
Now the neural network:
###Code
classifier_reg = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(X_train_std.shape[1],)),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(1)
])
opt = tf.keras.optimizers.SGD(learning_rate=0.00000001, nesterov=False)
classifier_reg.compile(optimizer=opt, loss='mean_squared_error', metrics=['accuracy'])
classifier_reg.fit(X_train.to_numpy(), y_rtrain.to_numpy(), epochs=20)
y_pred_reg = classifier_reg.predict(X_test.to_numpy())
score_regression(y_rtest, y_pred_reg)
y_pred_reg
y_pred_reg.shape
y_pred_ffn = classifier_ffn.predict(X_test.to_numpy())
print(y_pred_ffn)
###Output
[[0.00190905 0.0575433 0.4032011 0.48665118 0.04683916 0.00385619]
[0.00347802 0.07319234 0.39260495 0.47064242 0.05408182 0.00600046]
[0.00484556 0.07138 0.38683167 0.45450288 0.06774367 0.01469623]
...
[0.00111771 0.03497788 0.40016484 0.5337459 0.0289243 0.00106938]
[0.00483004 0.07679388 0.38836578 0.45680222 0.06269442 0.01051365]
[0.00092138 0.0323066 0.41052336 0.5202334 0.03445359 0.00156169]]
|
tutorials/03_Relational_Data_Modeling.ipynb | ###Markdown
Relational Data ModelingIn this tutorial we will be showing how to model a real world multi-table dataset using SDV. About the datasetWe have a series of stores, each of which has a size and a category, plus additional information for a given date: average temperature in the region, cost of fuel in the region, promotional data, the customer price index, the unemployment rate and whether the date is a special holiday.From those stores we obtained a training set of historical data between 2010-02-05 and 2012-11-01. This historical data includes the sales of each department on a specific date.In this notebook, we will show you step-by-step how to download the "Walmart" dataset, explain the structure and sample the data.In this demonstration we will show how SDV can be used to generate synthetic data, which can later be used to train machine learning models.*The dataset used in this example can be found in [Kaggle](https://www.kaggle.com/c/walmart-recruiting-store-sales-forecasting/data), but we will show how to download it from SDV.* Data model summary stores| Field | Type | Subtype | Additional Properties ||-------|-------------|---------|-----------------------|| Store | id | integer | Primary key || Size | numerical | integer | || Type | categorical | | |Contains information about the 45 stores, indicating the type and size of store. features| Fields | Type | Subtype | Additional Properties ||--------------|-----------|---------|-----------------------------|| Store | id | integer | foreign key (stores.Store) || Date | datetime | | format: "%Y-%m-%d" || IsHoliday | boolean | | || Fuel_Price | numerical | float | || Unemployment | numerical | float | || Temperature | numerical | float | || CPI | numerical | float | || MarkDown1 | numerical | float | || MarkDown2 | numerical | float | || MarkDown3 | numerical | float | || MarkDown4 | numerical | float | || MarkDown5 | numerical | float | |Contains additional data related to the store, department, and regional activity for the given dates. depts| Fields | Type | Subtype | Additional Properties ||--------------|-----------|---------|------------------------------|| Store | id | integer | foreign key (stores.Store) || Date | datetime | | format: "%Y-%m-%d" || Weekly_Sales | numerical | float | || Dept | numerical | integer | || IsHoliday | boolean | | |Contains the historical training data (weekly sales), which covers 2010-02-05 to 2012-11-01. 1. Load dataLet's start by downloading the data set. In this case, we will download the data set *walmart*. We will use the SDV function `load_demo`; we can specify the name of the dataset we want to use and whether we want its Metadata object or not. To know more about the demo data [see the documentation](https://sdv-dev.github.io/SDV/api/sdv.demo.html).
###Code
from sdv import load_demo
metadata, tables = load_demo(dataset_name='walmart', metadata=True)
###Output
2020-07-09 21:00:17,378 - INFO - __init__ - Loading table stores
2020-07-09 21:00:17,384 - INFO - __init__ - Loading table features
2020-07-09 21:00:17,402 - INFO - __init__ - Loading table depts
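###Markdown
Before modeling, it can help to peek at what was loaded; `tables` is a dict of pandas DataFrames keyed by table name (a quick check, output not reproduced here):
###Code
# Quick look at the loaded tables.
print(list(tables.keys()))
tables['stores'].head()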
###Markdown
Our dataset is downloaded from an [Amazon S3 bucket](http://sdv-datasets.s3.amazonaws.com/index.html) that contains all data sets available to the `load_demo` method. We can now visualize the metadata structure:
###Code
metadata.visualize()
###Output
_____no_output_____
###Markdown
And also validate that the metadata is correctly defined for our data
###Code
metadata.validate(tables)
###Output
_____no_output_____
###Markdown
2. Create an instance of SDV and train the instanceOnce we have downloaded the data, we have to create an SDV instance. With that instance, we have to analyze the loaded tables to generate a statistical model from the data. In this case, fitting the model is quick because the dataset is small. However, with larger datasets it can be a slow process.
###Code
from sdv import SDV
sdv = SDV()
sdv.fit(metadata, tables=tables)
###Output
2020-07-09 21:00:31,480 - INFO - modeler - Modeling stores
2020-07-09 21:00:31,481 - INFO - __init__ - Loading transformer CategoricalTransformer for field Type
2020-07-09 21:00:31,481 - INFO - __init__ - Loading transformer NumericalTransformer for field Size
2020-07-09 21:00:31,491 - INFO - modeler - Modeling features
2020-07-09 21:00:31,492 - INFO - __init__ - Loading transformer DatetimeTransformer for field Date
2020-07-09 21:00:31,493 - INFO - __init__ - Loading transformer NumericalTransformer for field MarkDown1
2020-07-09 21:00:31,493 - INFO - __init__ - Loading transformer BooleanTransformer for field IsHoliday
2020-07-09 21:00:31,493 - INFO - __init__ - Loading transformer NumericalTransformer for field MarkDown4
2020-07-09 21:00:31,494 - INFO - __init__ - Loading transformer NumericalTransformer for field MarkDown3
2020-07-09 21:00:31,494 - INFO - __init__ - Loading transformer NumericalTransformer for field Fuel_Price
2020-07-09 21:00:31,494 - INFO - __init__ - Loading transformer NumericalTransformer for field Unemployment
2020-07-09 21:00:31,495 - INFO - __init__ - Loading transformer NumericalTransformer for field Temperature
2020-07-09 21:00:31,495 - INFO - __init__ - Loading transformer NumericalTransformer for field MarkDown5
2020-07-09 21:00:31,495 - INFO - __init__ - Loading transformer NumericalTransformer for field MarkDown2
2020-07-09 21:00:31,495 - INFO - __init__ - Loading transformer NumericalTransformer for field CPI
2020-07-09 21:00:31,544 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,595 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
/home/xals/Projects/MIT/SDV/sdv/models/copulas.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
self.model.covariance = np.array(values)
2020-07-09 21:00:31,651 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,679 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,707 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,734 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,762 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,790 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,816 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,845 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,872 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,901 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,931 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,959 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:31,986 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,014 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,040 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,070 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,096 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,123 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,152 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,181 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,209 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,235 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,264 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,293 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,322 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,349 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,376 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,405 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,433 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,463 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,492 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,521 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,552 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,583 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,612 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,644 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,674 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,704 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,732 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,762 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,791 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,821 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,852 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,882 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:32,917 - INFO - modeler - Modeling depts
2020-07-09 21:00:32,918 - INFO - __init__ - Loading transformer DatetimeTransformer for field Date
2020-07-09 21:00:32,918 - INFO - __init__ - Loading transformer NumericalTransformer for field Weekly_Sales
2020-07-09 21:00:32,919 - INFO - __init__ - Loading transformer NumericalTransformer for field Dept
2020-07-09 21:00:32,919 - INFO - __init__ - Loading transformer BooleanTransformer for field IsHoliday
2020-07-09 21:00:33,016 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,318 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,334 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,350 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,364 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,381 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,396 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,412 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
2020-07-09 21:00:33,428 - INFO - gaussian - Fitting GaussianMultivariate(distribution="GaussianUnivariate")
###Markdown
Note: We may not want to train the model every time we want to generate new synthetic data. We can [save](https://sdv-dev.github.io/SDV/api/sdv.sdv.html#sdv.sdv.SDV.save) the SDV instance to [load](https://sdv-dev.github.io/SDV/api/sdv.sdv.html#sdv.sdv.SDV.load) it later.

3. Generate synthetic data

Once the instance is trained, we are ready to generate the synthetic data. The easiest way to generate synthetic data for the entire dataset is to call the `sample_all` method. By default, this method generates only 5 rows, but we can specify the number of rows to generate with the `num_rows` argument. To learn more about the available arguments, see [sample_all](https://sdv-dev.github.io/SDV/api/sdv.sampler.html#sdv.sampler.Sampler.sample_all).
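For example, a minimal sketch of that save/load round trip (the file name is arbitrary, and this assumes the `save`/`load` methods linked above):

```python
# Persist the trained instance to disk (file name is just an example)
sdv.save('sdv_walmart.pkl')

# ...later, restore it without re-fitting
from sdv import SDV
sdv = SDV.load('sdv_walmart.pkl')

# sample_all can also be asked for a specific number of rows, e.g.:
# samples = sdv.sample_all(num_rows=100)
```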
###Code
sdv.modeler.table_sizes
samples = sdv.sample_all()
###Output
_____no_output_____
###Markdown
This returns a dictionary with a `pandas.DataFrame` for each table.
###Code
samples['stores'].head()
samples['features'].head()
samples['depts'].head()
###Output
_____no_output_____
###Markdown
We may not want to generate data for all the tables in the dataset, but rather for just one table. This is possible with SDV using the `sample` method. To use it we only need to specify the name of the table we want to synthesize and the number of rows to generate. In this case, the "walmart" dataset has 3 tables: stores, features and depts. In the following example, we will generate 1000 rows of the "features" table.
###Code
sdv.sample('features', 1000)
###Output
_____no_output_____
###Markdown
Relational Data Modeling

In this tutorial we will show how to model a real-world multi-table dataset using SDV.

About the dataset

We have a series of stores, each of which has a size and a category, plus additional information for a given date: average temperature in the region, cost of fuel in the region, promotional data, the consumer price index, the unemployment rate and whether the date is a special holiday.

From those stores we obtained a training set of historical data between 2010-02-05 and 2012-11-01. This historical data includes the sales of each department on a specific date.

In this notebook, we will show you step-by-step how to download the "Walmart" dataset, explain its structure and sample the data. In this demonstration we will show how SDV can be used to generate synthetic data. Later, this data can be used to train machine learning models.

*The dataset used in this example can be found on [Kaggle](https://www.kaggle.com/c/walmart-recruiting-store-sales-forecasting/data), but we will show how to download it from SDV.*

Data model summary

stores

| Field | Type | Subtype | Additional Properties |
|-------|-------------|---------|-----------------------|
| Store | id | integer | Primary key |
| Size | numerical | integer | |
| Type | categorical | | |

Contains information about the 45 stores, indicating the type and size of store.

features

| Fields | Type | Subtype | Additional Properties |
|--------------|-----------|---------|-----------------------------|
| Store | id | integer | foreign key (stores.Store) |
| Date | datetime | | format: "%Y-%m-%d" |
| IsHoliday | boolean | | |
| Fuel_Price | numerical | float | |
| Unemployment | numerical | float | |
| Temperature | numerical | float | |
| CPI | numerical | float | |
| MarkDown1 | numerical | float | |
| MarkDown2 | numerical | float | |
| MarkDown3 | numerical | float | |
| MarkDown4 | numerical | float | |
| MarkDown5 | numerical | float | |

Contains historical training data, which covers 2010-02-05 to 2012-11-01.

depts

| Fields | Type | Subtype | Additional Properties |
|--------------|-----------|---------|------------------------------|
| Store | id | integer | foreign key (stores.Store) |
| Date | datetime | | format: "%Y-%m-%d" |
| Weekly_Sales | numerical | float | |
| Dept | numerical | integer | |
| IsHoliday | boolean | | |

Contains additional data related to the store, department, and regional activity for the given dates.

Load relational data

Let's start by downloading the dataset. In this case, we will download the dataset *walmart*. We will use the SDV function `load_demo`, where we can specify the name of the dataset we want to use and whether we want its Metadata object or not. To learn more about the `load_demo` function, [see its documentation](https://sdv-dev.github.io/SDV/api_reference/api/sdv.demo.load_demo.html).
###Code
# Setup logging and warnings
import logging;
logging.basicConfig(level=logging.INFO)
logging.getLogger().setLevel(level=logging.WARNING)
logging.getLogger('sdv').setLevel(level=logging.INFO)
import warnings
warnings.simplefilter("ignore")
from sdv import load_demo
metadata, tables = load_demo(dataset_name='walmart', metadata=True)
###Output
2020-08-05 20:21:53,505 - INFO - sdv.metadata - Loading table stores
2020-08-05 20:21:53,513 - INFO - sdv.metadata - Loading table features
2020-08-05 20:21:53,526 - INFO - sdv.metadata - Loading table depts
###Markdown
Our dataset is downloaded from an [Amazon S3 bucket](http://sdv-datasets.s3.amazonaws.com/index.html) that contains all of the datasets available through the `load_demo` method. We can now visualize the metadata structure:
###Code
metadata.visualize()
###Output
_____no_output_____
###Markdown
And also validate that the metadata is correctly defined for our data
###Code
metadata.validate(tables)
from sdv.utils import display_tables
display_tables(tables)
###Output
_____no_output_____
###Markdown
Model the data with SDV

Once we have downloaded the data, we create an SDV instance and fit it to the loaded tables so that it builds a statistical model of the data. In this case, fitting the model is quick because the dataset is small. However, with larger datasets it can be a slow process.
###Code
from sdv import SDV
sdv = SDV()
sdv.fit(metadata, tables=tables)
###Output
2020-08-05 20:21:55,259 - INFO - sdv.modeler - Modeling stores
2020-08-05 20:21:55,260 - INFO - sdv.metadata - Loading transformer CategoricalTransformer for field Type
2020-08-05 20:21:55,260 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field Size
2020-08-05 20:21:55,269 - INFO - sdv.modeler - Modeling depts
2020-08-05 20:21:55,269 - INFO - sdv.metadata - Loading transformer DatetimeTransformer for field Date
2020-08-05 20:21:55,270 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field Weekly_Sales
2020-08-05 20:21:55,270 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field Dept
2020-08-05 20:21:55,270 - INFO - sdv.metadata - Loading transformer BooleanTransformer for field IsHoliday
2020-08-05 20:21:56,148 - INFO - sdv.modeler - Modeling features
2020-08-05 20:21:56,149 - INFO - sdv.metadata - Loading transformer DatetimeTransformer for field Date
2020-08-05 20:21:56,149 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field MarkDown1
2020-08-05 20:21:56,149 - INFO - sdv.metadata - Loading transformer BooleanTransformer for field IsHoliday
2020-08-05 20:21:56,149 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field MarkDown4
2020-08-05 20:21:56,150 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field MarkDown3
2020-08-05 20:21:56,150 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field Fuel_Price
2020-08-05 20:21:56,151 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field Unemployment
2020-08-05 20:21:56,151 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field Temperature
2020-08-05 20:21:56,152 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field MarkDown5
2020-08-05 20:21:56,152 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field MarkDown2
2020-08-05 20:21:56,152 - INFO - sdv.metadata - Loading transformer NumericalTransformer for field CPI
2020-08-05 20:21:57,675 - INFO - sdv.modeler - Modeling Complete
###Markdown
Note: We may not want to train the model every time we want to generate new synthetic data. We can [save](https://sdv-dev.github.io/SDV/api/sdv.sdv.html#sdv.sdv.SDV.save) the SDV instance to [load](https://sdv-dev.github.io/SDV/api/sdv.sdv.html#sdv.sdv.SDV.load) it later.

Generate synthetic data

Once the instance is trained, we are ready to generate the synthetic data. The easiest way to generate synthetic data for the entire dataset is to call the `sample_all` method. By default, this method generates only 5 rows, but we can specify the number of rows to generate with the `num_rows` argument.
###Code
sdv.modeler.table_sizes
samples = sdv.sample_all()
###Output
_____no_output_____
###Markdown
This returns a dictionary with the same format as the input `tables`, with a `pandas.DataFrame` for each table.
###Code
samples.keys()
display_tables(samples)
###Output
_____no_output_____
###Markdown
We may not want to generate data for all the tables in the dataset, but rather for just one table. This is possible with SDV using the `sample` method. To use it we only need to specify the name of the table we want to synthesize and the number of rows to generate. In this case, the "walmart" dataset has 3 tables: stores, features and depts. In the following example, we will generate 1000 rows of the "features" table.
###Code
sdv.sample('features', 1000, sample_children=False)
###Output
_____no_output_____ |
notebooks/00-Python Object and Data Structure Basics/08-Files.ipynb | ###Markdown
Files

Python uses file objects to interact with external files on your computer. These file objects can refer to any sort of file you have on your computer, whether it be an audio file, a text file, emails, Excel documents, etc. Note: You will probably need to install certain libraries or modules to interact with those various file types, but they are easily available. (We will cover downloading modules later on in the course.)

Python has a built-in open function that allows us to open and play with basic file types. First we will need a file though. We're going to use some IPython magic to create a text file!

IPython Writing a File

This function is specific to Jupyter notebooks! Alternatively, quickly create a simple .txt file with a text editor such as Sublime Text.
###Code
%%writefile test.txt
Hello, this is a quick test file.
###Output
Overwriting test.txt
###Markdown
Python Opening a file

Let's begin by opening the file test.txt that is located in the same directory as this notebook. For now we will work with files located in the same directory as the notebook or .py script you are using. It is very easy to get an error on this step:
###Code
myfile = open('whoops.txt')
###Output
_____no_output_____
###Markdown
To avoid this error, make sure your .txt file is saved in the same location as your notebook. To check your notebook location, use **pwd**:
###Code
pwd
###Output
_____no_output_____
###Markdown
**Alternatively, to grab files from any location on your computer, simply pass in the entire file path.**

For Windows you need to use double backslashes so Python doesn't treat the second `\` as an escape character; a file path takes the form:

    myfile = open("C:\\Users\\YourUserName\\Home\\Folder\\myfile.txt")

For MacOS and Linux you use slashes in the opposite direction:

    myfile = open("/Users/YourUserName/Folder/myfile.txt")
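A raw string is another common way to write Windows paths without doubling every backslash; the path below is purely illustrative and not a real file on your machine:

```python
# The r prefix tells Python not to treat backslashes as escape characters
myfile = open(r"C:\Users\YourUserName\Home\Folder\myfile.txt")
```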
###Code
# Open the text.txt we made earlier
my_file = open('test.txt')
# We can now read the file
my_file.read()
# But what happens if we try to read it again?
my_file.read()
###Output
_____no_output_____
###Markdown
This happens because you can imagine the reading "cursor" is at the end of the file after having read it. So there is nothing left to read. We can reset the "cursor" like this:
###Code
# Seek to the start of file (index 0)
my_file.seek(0)
# Now read again
my_file.read()
###Output
_____no_output_____
###Markdown
You can read a file line by line using the readlines method. Use caution with large files, since everything will be held in memory. We will learn how to iterate over large files later in the course.
###Code
# Readlines returns a list of the lines in the file
my_file.seek(0)
my_file.readlines()
###Output
_____no_output_____
###Markdown
When you have finished using a file, it is always good practice to close it.
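A common alternative worth knowing (just a sketch, not used in the cells below): a `with` block closes the file automatically when the block exits, even if an error occurs while reading.

```python
# The file is closed automatically at the end of the with block
with open('test.txt') as my_file:
    contents = my_file.read()
print(contents)
```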
###Code
my_file.close()
###Output
_____no_output_____
###Markdown
Writing to a File

By default, the `open()` function will only allow us to read the file. We need to pass the argument `'w'` to write over the file. For example:
###Code
# Add a second argument to the function, 'w' which stands for write.
# Passing 'w+' lets us read and write to the file
my_file = open('test.txt','w+')
###Output
_____no_output_____
###Markdown
Use caution! Opening a file with `'w'` or `'w+'` truncates the original, meaning that anything that was in the original file **is deleted**!
###Code
# Write to the file
my_file.write('This is a new line')
# Read the file
my_file.seek(0)
my_file.read()
my_file.close() # always do this when you're done with a file
###Output
_____no_output_____
###Markdown
Appending to a File

Passing the argument `'a'` opens the file and puts the pointer at the end, so anything written is appended. Like `'w+'`, `'a+'` lets us read and write to a file. If the file does not exist, one will be created.
###Code
my_file = open('test.txt','a+')
my_file.write('\nThis is text being appended to test.txt')
my_file.write('\nAnd another line here.')
my_file.seek(0)
print(my_file.read())
my_file.close()
###Output
_____no_output_____
###Markdown
Appending with `%%writefile`

We can do the same thing using IPython cell magic:
###Code
%%writefile -a test.txt
This is text being appended to test.txt
And another line here.
###Output
Appending to test.txt
###Markdown
Add a blank space if you want the first line to begin on its own line, as Jupyter won't recognize escape sequences like `\n`.

Iterating through a File

Let's get a quick preview of a for loop by iterating over a text file. First let's make a new text file with some IPython magic:
###Code
%%writefile test.txt
First Line
Second Line
###Output
Overwriting test.txt
###Markdown
Now we can use a little bit of flow control to tell the program to loop through every line of the file and do something:
###Code
for line in open('test.txt'):
print(line)
###Output
First Line
Second Line
###Markdown
Don't worry about fully understanding this yet; for loops are coming up soon. But we'll break down what we did above. We said that for every line in this text file, go ahead and print that line. It's important to note a few things here:

1. We could have called the "line" object anything (see example below).
2. By not calling `.read()` on the file, the whole text file was not stored in memory.
3. Notice the indent on the second line for print. This whitespace is required in Python.
###Code
# Pertaining to the first point above
for asdf in open('test.txt'):
print(asdf)
###Output
First Line
Second Line
|
1-data-prep.ipynb | ###Markdown
Workflow

1. Load data into a pandas DataFrame.
   * Use the data file loan_data.csv from the GitHub repository or download the file below (loan_data.csv.zip). Examine the datatypes to ensure they are as expected; convert columns to the expected datatype, if needed.
   * Examine the head and tail of the data, and use the .describe() function of Pandas DataFrames for basic EDA.
2. Examine each variable to determine if it can be used as-is, or requires feature engineering or data cleaning.
   * Use pandas-profiling for a quick way to perform some basic EDA on the entire dataset at once.
   * Note columns that you want to feature engineer and data clean, as you will do that in Milestone 2.
3. Examine the interrelationships of features to the target variable (loan default), using the risk ratio (a.k.a. "odds ratio," the default rate of a group divided by the global default rate), mutual information, and correlations to the target column (LOAN_DEFAULT) to understand feature importance (see the sketch after this section).
4. Perform as much additional EDA as you see fit, such as other plots (along the lines of boxplots and methods such as clustering).
5. Drop any columns you've deemed unnecessary, and save the data to disk (e.g., as a CSV file) for your next step. You might also drop columns earlier in the process.

Notes

Understanding the loan data is key. Our target column (the one we want to predict) is "LOAN_DEFAULT." Other columns that can be inputs to our machine learning algorithms will be defined in a given data dictionary ("Data Dictionary.xlsx"). Since the data is from India, the currency denomination is Indian Rupees. Primary and secondary accounts are other loans that the borrower took out before the current loan was entered into the dataset; the disbursed amounts for these loans can be 0. The amount of the loan is held in the "DISBURSED_AMOUNT" column. EMI amounts are the borrower's monthly payments. See the Wikipedia articles for explanations of Aadhaar and Permanent Account Number (PAN). This data was originally used in the hackathon/competition "Vehicle Loan Default Prediction". Data from the Kaggle dataset has been modified for this project, so it won't exactly match the data in the Kaggle dataset.

EDA process

Part of your EDA process should be understanding which columns you can safely remove. You can come up with and use your own removal process, but you might do something like the following for a binary classification problem like this:

1. Examine the head and tail of the data, looking at the .info() and .describe() results from Pandas DataFrames and scanning for missing values, including placeholders like 0s, -999, -1, etc.
2. Look for columns with little variation (and 0s).
3. Look for 'unique' columns (like ID columns).
4. Note anything else interesting (e.g. any columns you suspect may not be important) and columns to feature engineer.
5. Perform any feature engineering necessary for EDA (e.g. dtype conversions, like from a string to a date).
6. Examine the target column (e.g. fraction of defaulted loans).
7. Look at the risk ratio (odds ratio).
8. Look at correlations.
9. Look at mutual information scores.
10. Generate other plots of the data, such as box plots and correlation plots.
11. Potentially cluster the data using k-means, DBSCAN, hierarchical clustering, etc. (not required here but good to keep in mind).
12. The steps for examining individual columns (points 3-6 above) can be performed when you examine the results from running pandas-profiling.
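As a minimal sketch of the risk-ratio computation described above (assuming the lower-cased column names created later in this notebook, e.g. `loan_default` as the target and `employment_type` as the grouping column):

```python
import pandas as pd

def risk_ratio(df: pd.DataFrame, group_col: str, target_col: str = 'loan_default') -> pd.Series:
    """Default rate of each group divided by the global default rate."""
    global_rate = df[target_col].mean()
    group_rates = df.groupby(group_col)[target_col].mean()
    return group_rates / global_rate

# Example usage (column names are assumptions based on the cleaning below):
# risk_ratio(df, 'employment_type')
```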
###Code
import pandas as pd
import re
from pandas_profiling import ProfileReport
df = pd.read_csv('loan_data.csv')
# uncomment to build profile
# profile = ProfileReport(df, explorative=True)
# profile.to_file('loan_data.html')
###Output
_____no_output_____
###Markdown
Column AnalysisColumn analysis was done using ProfileReport.* Useful * DISBURSED_AMOUNT * ASSET_COST * LTV * PERFORM_CNS_SCORE * Employment_type * SEC_CURRENT_BALANCE * PRI_DISBURSED_AMOUNT * PRIMARY_INSTAL_AMT * SEC_INSTAL_AMT * LOAN_DEFAULT: gold standard * PRI_NO_OF_ACCTS: scored high for correlations, but 50% are 0* Requires cleaning * PERFORM_CNS_SCORE_DESCRIPTION: interaction with PERFORM_CNS_SCORE? (needs cleaning first) * CREDIT_HISTORY_LENGTH* New variables * Age_at_Disbursal: Date_of_birth and DisbursalDate* Not useful * BRANCH_ID: this might have geographical info, but no way to group by location from ID * SUPPLIER_ID: same as BRANCH_ID * MANUFACTURER_ID: any relevance to cost/loan amount is captured elsewhere (how $$ is [car) * CURRENT_PINCODE * DISBURSAL_DATE * Date_of_birth * Employee_code_ID * State_id: Could geography play a role? This doesn't help with distinctions like urban/suburban/rural * Flags related to voterid, driving, passport don't seem helpful since I don't know under what conditions these were obtained * SEC_ACCOUNTS: not enough data * SEC_SANCTIONED/DISBURSED: overlap with CURRENT_BALANCE
###Code
# set all columns to lower case to simplify
df.columns = [col.lower() for col in df.columns]
# variables to keep
df = df.loc[:, [
'uniqueid', # keep the individual
'disbursed_amount',
'asset_cost',
'ltv',
'employment_type',
'sec_current_balance',
'pri_disbursed_amount',
'primary_instal_amt',
'sec_instal_amt',
'perform_cns_score_description',
'perform_cns_score',
'date_of_birth',
'disbursal_date',
'loan_default',
'average_acct_age',
'credit_history_length',
]]
# generate age_at_disbursal
df['age_at_disbursal'] = (
(pd.to_datetime(df['disbursal_date']) - pd.to_datetime(df['date_of_birth'])).dt.days / 365.25
).apply(int)
df['age_at_disbursal'].hist()
del df['date_of_birth']
del df['disbursal_date']
df.dtypes
# remove these low values and set all not-scored to 0, these can be imputed later
df.loc[
df['perform_cns_score_description'].str.contains('Not Scored'),
'perform_cns_score'
] = 0
del df['perform_cns_score_description']
def extract_yrs_mon(x):
    # Convert strings like '1yrs 11mon' into a total number of months
    m = re.search(r'(\d+)yrs (\d+)mon', x)
    return int(m.group(1)) * 12 + int(m.group(2))
df['credit_history_in_months'] = df['credit_history_length'].apply(extract_yrs_mon)
df['average_acct_age_in_months'] = df['average_acct_age'].apply(extract_yrs_mon)
del df['credit_history_length']
del df['average_acct_age']
df.to_csv('loan_data_cleaned.csv', index=False)
###Output
_____no_output_____
###Markdown
1. Data Preparation Download dataset from S3 to Local
###Code
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt ./data/
###Output
download: s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt to data/churn.txt
###Markdown
Pick a random sample as holdout for testing
###Code
import pandas as pd
df = pd.read_csv('./data/churn.txt')
df.head(10)
df = df.sample(500)
df.drop('Churn?', axis=1, inplace=True)
df.head(10)
df.to_csv('./data/unlabeled.csv', header=None,index=False)
###Output
_____no_output_____
###Markdown
Collating TCR-pMHC Data This notebook collates and filters data from the following databases:- IEDB- McPAS-TCR- TBAdb- VDJdb
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Set some convenient pandas options for the notebook environement
pd.set_option('display.max_columns', 100)
pd.options.mode.chained_assignment = None
###Output
_____no_output_____
###Markdown
IEDB Data
###Code
# Path to the most recent snapshot extracted from iedb
path = "data/input/iedb/2020-03-24.csv"
iedb = pd.read_csv(path, sep=',', dtype=str)
# Filter to valid epitope sequences
iedb = iedb[iedb['Description'].str.isalpha()]
display(iedb)
iedb.columns
# Subset the data to the columns that will be used as input features to the model
harmonized_iedb = iedb[['Chain 1 CDR3 Curated',
'Chain 1 CDR3 Calculated',
'Chain 2 CDR3 Curated',
'Chain 2 CDR3 Calculated',
'Description',
'MHC Allele Names',
'Curated Chain 1 V Gene',
'Calculated Chain 1 V Gene',
'Curated Chain 1 J Gene',
'Calculated Chain 1 J Gene',
'Curated Chain 2 V Gene',
'Calculated Chain 2 V Gene',
'Curated Chain 2 J Gene',
'Calculated Chain 2 J Gene']]
# Coalesce the 'curated' and 'calculated' features into single columns, using 'calculated' to fill in missing values
harmonized_iedb['Chain 1 CDR3 Curated'].fillna(harmonized_iedb['Chain 1 CDR3 Calculated'], inplace=True)
harmonized_iedb['Chain 2 CDR3 Curated'].fillna(harmonized_iedb['Chain 2 CDR3 Calculated'], inplace=True)
harmonized_iedb['Curated Chain 1 V Gene'].fillna(harmonized_iedb['Calculated Chain 1 V Gene'], inplace=True)
harmonized_iedb['Curated Chain 1 J Gene'].fillna(harmonized_iedb['Calculated Chain 1 J Gene'], inplace=True)
harmonized_iedb['Curated Chain 2 V Gene'].fillna(harmonized_iedb['Calculated Chain 2 V Gene'], inplace=True)
harmonized_iedb['Curated Chain 2 J Gene'].fillna(harmonized_iedb['Calculated Chain 2 J Gene'], inplace=True)
harmonized_iedb.drop(['Chain 1 CDR3 Calculated',
'Chain 2 CDR3 Calculated',
'Calculated Chain 1 V Gene',
'Calculated Chain 1 J Gene',
'Calculated Chain 2 V Gene',
'Calculated Chain 2 J Gene'], axis=1, inplace=True)
harmonized_iedb['source'] = "iedb"
harmonized_iedb
###Output
_____no_output_____
###Markdown
McPAS-TCR Data
###Code
# Path to the most recent snapshot extracted from the DB
path = "data/input/mcpas-tcr/McPAS-TCR.csv"
mcpas = pd.read_csv(path, sep=',', dtype=str)
# Filter the data to the relevant species and T cell type
mcpas = mcpas[mcpas['Species']=="Human"]
mcpas = mcpas[mcpas['T.Cell.Type']=="CD8"]
display(mcpas)
mcpas.columns
# Subset the data to the columns that will be used as input features to the model
harmonized_mcpas = mcpas[['CDR3.alpha.aa',
'CDR3.beta.aa',
'Epitope.peptide',
'MHC',
'TRAV',
'TRAJ',
'TRBV',
'TRBJ']]
harmonized_mcpas['source'] = "mcpas"
harmonized_mcpas
###Output
_____no_output_____
###Markdown
TBAdb Data
###Code
# Path to the most recent snapshot extracted from the DB
path = "data/input/tbadb/TBAdb.csv"
tbadb = pd.read_csv(path, sep=',', dtype=str)
# Filter the data to valid CDR3beta and relevant T cell type
tbadb = tbadb[tbadb['CDR3.beta.aa']!="-"]
tbadb = tbadb[tbadb['Cell.subtype'].isin(['-','CD8+','T','CD8'])]
display(tbadb)
tbadb.columns
# Subset the data to the columns that will be used as input features to the model
harmonized_tbadb = tbadb[['CDR3.alpha.aa',
'CDR3.beta.aa',
'Antigen.sequence',
'HLA',
'Valpha',
'Jalpha',
'Vbeta',
'Jbeta']]
harmonized_tbadb['source'] = "tbadb"
harmonized_tbadb
###Output
_____no_output_____
###Markdown
VDJdb Data
###Code
# Path to the most recent snapshot extracted from the DB
path = "data/input/vdjdb/2020-03-17.tsv"
vdjdb = pd.read_csv(path, sep='\t', dtype=str)
# Filter the data to the relevant MHC class
vdjdb = vdjdb[vdjdb['MHC class'].isin(['MHCI'])]
display(vdjdb)
vdjdb.columns
# Subset the data to the columns that will be used as input features to the model
# Note that in this DB alpha and beta chains are recorded separately and must be joined
tra = vdjdb[['complex.id', 'Gene', 'CDR3', 'Epitope', 'MHC A', 'V', 'J']][vdjdb['Gene']=='TRA']
trb = vdjdb[['complex.id', 'Gene', 'CDR3', 'Epitope', 'MHC A', 'V', 'J']][vdjdb['Gene']=='TRB']
joined = tra.merge(trb, on='complex.id', how='inner', suffixes=['_TRA','_TRB'])
joined
harmonized_vdjdb = joined[['CDR3_TRA', 'CDR3_TRB', 'Epitope_TRB', 'MHC A_TRB', 'V_TRA', 'J_TRA', 'V_TRB', 'J_TRB']]
harmonized_vdjdb['source'] = "vdjdb"
harmonized_vdjdb
###Output
_____no_output_____
###Markdown
Collated, harmonized set
###Code
# Providing common aliases for the harmonized set of columns
harmonized_columns = ['cdr3a', 'cdr3b', 'epitope', 'hla', 'v_a', 'j_a', 'v_b', 'j_b', 'source']
harmonized_set = [harmonized_iedb, harmonized_mcpas, harmonized_tbadb, harmonized_vdjdb]
# Rename the columns in each dataframe
for df in harmonized_set:
df.columns = harmonized_columns
# Concatenate the individual dataframes into one
collated_df = pd.concat(harmonized_set)
collated_df.drop_duplicates(inplace=True)
collated_df
###Output
_____no_output_____
###Markdown
Overlap between the DBs
###Code
# Calculate the pairwise overlap between the databases
crosstab = pd.merge(collated_df, collated_df, on=['cdr3a', 'cdr3b', 'epitope'])
pd.crosstab(crosstab.source_x, crosstab.source_y)
# Calculate the pairwise overlap between the databases where the alpha chain, beta chain, and epitope are all present
temp = collated_df.dropna()
temp = temp[~((temp['cdr3a']=='-') | (temp['cdr3b']=='-') | (temp['epitope']=='-'))]
crosstab = pd.merge(temp, temp, on=['cdr3a', 'cdr3b', 'epitope'])
pd.crosstab(crosstab.source_x, crosstab.source_y)
###Output
_____no_output_____
###Markdown
Filter the data further
###Code
# Keep a full list of unique CDR sequences for later use
cdr_sequences = pd.concat([collated_df['cdr3a'],collated_df['cdr3b']], ignore_index=True)
cdr_sequences.drop_duplicates(inplace=True)
cdr_sequences.dropna(inplace=True)
# Define a function that will remove any sequences that contain characters that are not valid amino acid codes
def validate_sequences(df, columns):
alphabet = "ACDEFGHIKLMNPQRSTVWY"
regex = f"[^{alphabet}]"
for column in columns:
df = df[~df[column].str.contains(regex, na=True)]
return(df)
# Filter invalid sequences from the collated dataframe
collated_df = validate_sequences(collated_df, ['cdr3a', 'cdr3b', 'epitope'])
# Define a function to clean up HLA allele labels
def validate_hla(df, columns):
for column in columns:
# Remove erroneously tagged murine records
df = df[~df[column].str.contains(r'(H-2Kb)|(H2 class II)|(HLA class II)', na=True)]
# Remove erroneously tagged MHC II records
df = df[~df[column].str.contains(r'HLA-D', na=True)]
# Add "HLA-" prefix to those records missing it
df[column] = df[column].replace({r'^([A-Z]\*[0-9]*\:[0-9]*)' : r'HLA-\1'}, regex=True)
# Truncate to a single HLA allele
df[column] = df[column].replace({r'^(HLA-[A-Z]\*[0-9]*\:[0-9]*)[ ,].*' : r'\1'}, regex=True)
return(df)
# Correct and filter the HLA allele labels in the collated dataframe
collated_df = validate_hla(collated_df, ['hla'])
# Define a function to clean up V and J gene labels
def validate_genes(df, columns):
for column in columns:
# Remove erroneous HTML encoded characters
df[column] = df[column].replace({r' ' : r''}, regex=True)
return(df)
# Clean up the V and J gene labels in the collated dataframe
collated_df = validate_genes(collated_df, ['v_a', 'j_a', 'v_b', 'j_b'])
###Output
_____no_output_____
###Markdown
Drop duplicate records
###Code
# Drop duplicate records (after fixing the labels above)
collated_df.drop_duplicates(subset=collated_df.columns.difference(['source']), keep='last', inplace=True)
collated_df.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
Output the data
###Code
# Output CDR sequences
path = 'data/input/collated/cdr-sequences.csv'
cdr_sequences.to_csv(path, index=False)
# Output the collated dataframe
path = 'data/input/collated/collated.csv'
collated_df.to_csv(path, index=False)
###Output
_____no_output_____ |
Assignments/DSCI_633_Assignment_05.ipynb | ###Markdown
Name: Pranav Nair
Course: DSCI_633
Assignment 05

Step 0: Import NN libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report
import tensorflow
from tensorflow import keras
###Output
_____no_output_____
###Markdown
Step 1: Load the data
###Code
# Loading the training set and the test set from keras
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()
# Interpreting X_train_full
print("X_train_full is a numpy array containing ",X_train_full.shape[0], " matrices", " where each matrix is a two-dimensional vector having", X_train_full.shape[2], "rows and ", X_train_full.shape[1], "columns")
X_train_full
###Output
X_train_full is a numpy array containing 60000 matrices where each matrix is a two-dimensional array having 28 rows and 28 columns
###Markdown
Let us inspect a random matrix from `X_train_full`
###Code
# Inspecting matrix element at index 200
X_train_full[200]
print("Maximum value of the matrix at index 200: ", X_train_full[200].max())
print("Minimum value of the matrix at index 200: ", X_train_full[200].min())
###Output
Maximum value of the matrix at index 200: 255
Minimum value of the matrix at index 200: 0
###Markdown
It looks like the values inside every matrix represent pixel intensities between 0 and 255, so plotting a matrix should produce an image. Let's plot these intensity values.
###Code
# Plotting the matrix using plt.imshow()
plt.imshow(X_train_full[200], cmap="binary");
# Corresponding target label for the matrix at index 200
y_train_full[200]
###Output
_____no_output_____
###Markdown
This is an image of `1`, which is exactly the target value at `y_train_full[200]`. Hence we can say that every matrix in `X_train_full` is a 28x28 matrix of pixel intensities for the image of the digit given by the corresponding value in `y_train_full`.
###Code
# Let's now check the distribution of labels in the target variable
target_df = pd.DataFrame(y_train_full, columns=['target'])
target_df['target'].value_counts().plot.bar();
###Output
_____no_output_____
###Markdown
We can see that the labels in the target are not distributed perfectly equally, but there is no serious class imbalance.

Step 2: Data Preprocessing

Since we are going to train the model with Stochastic Gradient Descent as one of the optimizers, let us normalise the data into the range 0-1 by dividing by the maximum pixel intensity value, 255.
###Code
X_train_full = X_train_full/255.0
# Scale the test set the same way so that any later evaluation uses consistently scaled inputs
X_test = X_test/255.0
###Output
_____no_output_____
###Markdown
Step 3: Divide dataset into Train/Test
###Code
# Let's check the shape of X_train_full and X_test_full
print("X_train_full shape: ",X_train_full.shape)
print("X_test shape: ", X_test.shape)
###Output
X_train_full shape: (60000, 28, 28)
X_test shape: (10000, 28, 28)
###Markdown
We can see that there are 60000 matrices of dimension 28x28 in `X_train_full` i.e there are 60000 images in `X_train_full`Also, there are 10000 matrices of dimension 28x28 in `X_test` i.e there are 10000 images in `X_test` Let us take the first 5000 values from `X_train_full` and the corresponding values from `y_train_full` and use them as the validation data and then use the remaining 55000 values from index 5000 to 60000 as the training data
###Code
X_val, y_val = X_train_full[:5000], y_train_full[:5000]
X_train, y_train = X_train_full[5000:], y_train_full[5000:]
# Train data
print("Train data")
print("X_train shape: ",X_train.shape)
print("y-train shape: ",y_train.shape)
print()
# Validation data
print("Validation data")
print("X_val shape: ",X_val.shape)
print("y_val shape: ",y_val.shape)
print()
# Test data
print("Test data")
print("X_test shape: ",X_test.shape)
print("y-test shape: ",y_test.shape)
###Output
Train data
X_train shape: (55000, 28, 28)
y-train shape: (55000,)
Validation data
X_val shape: (5000, 28, 28)
y_val shape: (5000,)
Test data
X_test shape: (10000, 28, 28)
y-test shape: (10000,)
###Markdown
Step 4: Build a Simple Dense Network using ExponentialLearningRate
###Code
# Reference for the below model has been taken from page no 295 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Initialising a Sequential model
model = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons
model.add(keras.layers.Dense(300))
# Adding a second Dense layer of 100 neurons
model.add(keras.layers.Dense(100))
# Adding the output layer containing 10 neurons, one for each class label between 0-9
model.add(keras.layers.Dense(10))
# Defining an exponential decay function that reduces the learning rate by a factor of (1/10)^(1/20) causing an exponential decrease starting right after epoch 0
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
# Compiling the model using Stochastic Gradient Descent as the optimizer and sparse categorical crossentropy as the loss since
# the target variable is label-encoded and not one-hot encoded.
model.compile(optimizer=keras.optimizers.SGD(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val), callbacks=[keras.callbacks.LearningRateScheduler(exponential_decay_fn)])
# Plotting the loss, accuracy, validation loss, validation accuracy
pd.DataFrame(history.history).plot(figsize=(10, 8))
###Output
Epoch 1/30
1719/1719 [==============================] - 2s 1ms/step - loss: 5.3023 - accuracy: 0.0978 - val_loss: 5.1838 - val_accuracy: 0.1070 - lr: 0.0089
Epoch 2/30
1719/1719 [==============================] - 2s 1ms/step - loss: 5.2462 - accuracy: 0.0965 - val_loss: 5.1837 - val_accuracy: 0.1070 - lr: 0.0079
Epoch 3/30
1719/1719 [==============================] - 2s 1ms/step - loss: 5.2342 - accuracy: 0.0965 - val_loss: 5.1779 - val_accuracy: 0.1070 - lr: 0.0071
Epoch 4/30
1719/1719 [==============================] - 2s 1ms/step - loss: 5.1734 - accuracy: 0.0965 - val_loss: 5.0936 - val_accuracy: 0.1070 - lr: 0.0063
Epoch 5/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.5828 - accuracy: 0.1434 - val_loss: 2.2561 - val_accuracy: 0.2466 - lr: 0.0056
Epoch 6/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2600 - accuracy: 0.2321 - val_loss: 2.2745 - val_accuracy: 0.2360 - lr: 0.0050
Epoch 7/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2645 - accuracy: 0.2245 - val_loss: 2.2678 - val_accuracy: 0.2188 - lr: 0.0045
Epoch 8/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2611 - accuracy: 0.2359 - val_loss: 2.2703 - val_accuracy: 0.2452 - lr: 0.0040
Epoch 9/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2366 - accuracy: 0.2261 - val_loss: 2.1899 - val_accuracy: 0.2196 - lr: 0.0035
Epoch 10/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.5211 - accuracy: 0.1669 - val_loss: 5.1696 - val_accuracy: 0.1276 - lr: 0.0032
Epoch 11/30
1719/1719 [==============================] - 2s 1ms/step - loss: 5.2409 - accuracy: 0.1283 - val_loss: 5.1772 - val_accuracy: 0.1366 - lr: 0.0028
Epoch 12/30
1719/1719 [==============================] - 2s 1ms/step - loss: 4.5757 - accuracy: 0.1280 - val_loss: 3.6610 - val_accuracy: 0.1002 - lr: 0.0025
Epoch 13/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.7476 - accuracy: 0.0985 - val_loss: 3.6397 - val_accuracy: 0.1002 - lr: 0.0022
Epoch 14/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.7508 - accuracy: 0.0985 - val_loss: 3.6624 - val_accuracy: 0.1002 - lr: 0.0020
Epoch 15/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.7311 - accuracy: 0.0985 - val_loss: 3.6338 - val_accuracy: 0.1004 - lr: 0.0018
Epoch 16/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.6964 - accuracy: 0.0985 - val_loss: 3.6193 - val_accuracy: 0.1004 - lr: 0.0016
Epoch 17/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.6789 - accuracy: 0.0988 - val_loss: 3.5660 - val_accuracy: 0.1008 - lr: 0.0014
Epoch 18/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.6537 - accuracy: 0.0991 - val_loss: 3.5757 - val_accuracy: 0.1010 - lr: 0.0013
Epoch 19/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.6706 - accuracy: 0.0998 - val_loss: 3.5278 - val_accuracy: 0.1024 - lr: 0.0011
Epoch 20/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.6325 - accuracy: 0.1019 - val_loss: 3.5362 - val_accuracy: 0.1050 - lr: 1.0000e-03
Epoch 21/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.5803 - accuracy: 0.1082 - val_loss: 3.5243 - val_accuracy: 0.1034 - lr: 8.9125e-04
Epoch 22/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.5441 - accuracy: 0.0994 - val_loss: 3.4910 - val_accuracy: 0.1026 - lr: 7.9433e-04
Epoch 23/30
1719/1719 [==============================] - 2s 1ms/step - loss: 3.5303 - accuracy: 0.0942 - val_loss: 3.4888 - val_accuracy: 0.0968 - lr: 7.0795e-04
Epoch 24/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.7047 - accuracy: 0.0803 - val_loss: 2.1971 - val_accuracy: 0.0576 - lr: 6.3096e-04
Epoch 25/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2188 - accuracy: 0.0527 - val_loss: 2.2267 - val_accuracy: 0.0584 - lr: 5.6234e-04
Epoch 26/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2212 - accuracy: 0.0534 - val_loss: 2.2338 - val_accuracy: 0.0586 - lr: 5.0119e-04
Epoch 27/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2505 - accuracy: 0.0524 - val_loss: 2.2572 - val_accuracy: 0.0604 - lr: 4.4668e-04
Epoch 28/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2410 - accuracy: 0.0529 - val_loss: 2.2337 - val_accuracy: 0.0568 - lr: 3.9811e-04
Epoch 29/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2249 - accuracy: 0.0527 - val_loss: 2.2200 - val_accuracy: 0.0570 - lr: 3.5481e-04
Epoch 30/30
1719/1719 [==============================] - 2s 1ms/step - loss: 2.2111 - accuracy: 0.0523 - val_loss: 2.2137 - val_accuracy: 0.0568 - lr: 3.1623e-04
###Markdown
As we can see from the above plots, the accuracy of the model on both the training data and the validation data is quite low, and the loss fluctuates rather than steadily decreasing. This is mainly because no activation functions were used: without nonlinearities the stacked Dense layers collapse into a single linear transformation, and the raw outputs are not valid class probabilities for the cross-entropy loss.

Step 5: Use sigmoid, relu, and softmax as activation functions
###Code
# Reference for the below model has been taken from page no 295 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Initialising a Sequential model
model = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons with relu as the activation function
model.add(keras.layers.Dense(300, activation="relu"))
# Adding a second Dense layer of 100 neurons with relu as the activation function
model.add(keras.layers.Dense(100, activation="relu"))
# Adding the output layer containing 10 neurons, one for each class label between 0-9, with sigmoid as the activation function (softmax would be the more conventional choice for a multi-class output layer)
model.add(keras.layers.Dense(10, activation="sigmoid"))
# Defining an exponential decay function that reduces the learning rate by a factor of (1/10)^(1/20) causing an exponential decrease starting right from epoch 0
def exponential_decay_fn(epoch, lr):
return lr * 0.1**(1 / 20)
# Compiling the model using Stochastic Gradient Descent as the optimizer and sparse categorical crossentropy as the loss since
# the target variable is label-encoded and not one-hot encoded.
model.compile(optimizer=keras.optimizers.SGD(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data
history_activation = model.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val), callbacks=[keras.callbacks.LearningRateScheduler(exponential_decay_fn)])
# Plotting the loss, accuracy, validation loss, validation accuracy
pd.DataFrame(history_activation.history).plot(figsize=(10, 8))
###Output
Epoch 1/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.8280 - accuracy: 0.7833 - val_loss: 0.3385 - val_accuracy: 0.9086 - lr: 0.0089
Epoch 2/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3201 - accuracy: 0.9107 - val_loss: 0.2739 - val_accuracy: 0.9244 - lr: 0.0079
Epoch 3/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2683 - accuracy: 0.9239 - val_loss: 0.2310 - val_accuracy: 0.9346 - lr: 0.0071
Epoch 4/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2368 - accuracy: 0.9337 - val_loss: 0.2114 - val_accuracy: 0.9398 - lr: 0.0063
Epoch 5/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2140 - accuracy: 0.9399 - val_loss: 0.1975 - val_accuracy: 0.9444 - lr: 0.0056
Epoch 6/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1977 - accuracy: 0.9443 - val_loss: 0.1849 - val_accuracy: 0.9462 - lr: 0.0050
Epoch 7/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1849 - accuracy: 0.9479 - val_loss: 0.1758 - val_accuracy: 0.9474 - lr: 0.0045
Epoch 8/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1747 - accuracy: 0.9506 - val_loss: 0.1668 - val_accuracy: 0.9516 - lr: 0.0040
Epoch 9/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1667 - accuracy: 0.9530 - val_loss: 0.1618 - val_accuracy: 0.9556 - lr: 0.0035
Epoch 10/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1596 - accuracy: 0.9552 - val_loss: 0.1559 - val_accuracy: 0.9550 - lr: 0.0032
Epoch 11/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1539 - accuracy: 0.9570 - val_loss: 0.1526 - val_accuracy: 0.9586 - lr: 0.0028
Epoch 12/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1492 - accuracy: 0.9582 - val_loss: 0.1487 - val_accuracy: 0.9584 - lr: 0.0025
Epoch 13/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1451 - accuracy: 0.9593 - val_loss: 0.1451 - val_accuracy: 0.9612 - lr: 0.0022
Epoch 14/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1416 - accuracy: 0.9602 - val_loss: 0.1436 - val_accuracy: 0.9604 - lr: 0.0020
Epoch 15/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1386 - accuracy: 0.9612 - val_loss: 0.1414 - val_accuracy: 0.9616 - lr: 0.0018
Epoch 16/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1360 - accuracy: 0.9621 - val_loss: 0.1400 - val_accuracy: 0.9614 - lr: 0.0016
Epoch 17/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1337 - accuracy: 0.9628 - val_loss: 0.1382 - val_accuracy: 0.9630 - lr: 0.0014
Epoch 18/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1317 - accuracy: 0.9633 - val_loss: 0.1370 - val_accuracy: 0.9628 - lr: 0.0013
Epoch 19/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1302 - accuracy: 0.9639 - val_loss: 0.1366 - val_accuracy: 0.9616 - lr: 0.0011
Epoch 20/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1285 - accuracy: 0.9643 - val_loss: 0.1345 - val_accuracy: 0.9634 - lr: 1.0000e-03
Epoch 21/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1273 - accuracy: 0.9647 - val_loss: 0.1332 - val_accuracy: 0.9650 - lr: 8.9125e-04
Epoch 22/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1262 - accuracy: 0.9650 - val_loss: 0.1327 - val_accuracy: 0.9644 - lr: 7.9433e-04
Epoch 23/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1251 - accuracy: 0.9654 - val_loss: 0.1324 - val_accuracy: 0.9640 - lr: 7.0795e-04
Epoch 24/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1243 - accuracy: 0.9655 - val_loss: 0.1319 - val_accuracy: 0.9646 - lr: 6.3096e-04
Epoch 25/30
1719/1719 [==============================] - 3s 1ms/step - loss: 0.1236 - accuracy: 0.9657 - val_loss: 0.1310 - val_accuracy: 0.9648 - lr: 5.6234e-04
Epoch 26/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1228 - accuracy: 0.9657 - val_loss: 0.1306 - val_accuracy: 0.9644 - lr: 5.0119e-04
Epoch 27/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1222 - accuracy: 0.9659 - val_loss: 0.1302 - val_accuracy: 0.9648 - lr: 4.4668e-04
Epoch 28/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1217 - accuracy: 0.9661 - val_loss: 0.1296 - val_accuracy: 0.9660 - lr: 3.9811e-04
Epoch 29/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1213 - accuracy: 0.9664 - val_loss: 0.1294 - val_accuracy: 0.9656 - lr: 3.5481e-04
Epoch 30/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1207 - accuracy: 0.9667 - val_loss: 0.1295 - val_accuracy: 0.9644 - lr: 3.1623e-04
###Markdown
From the results of the above training, we can see that the loss is decreasing and the accuracy on the training and the validation data both are close to 1 and approximately equal throughout the 30 epochs. This is an indication of neither overfitting nor underfitting of the model. It means that our model seems to have learnt to generalise the data well. Step 6: Plot the loss as a function of the learning rate
###Code
# Getting the loss
loss = history_activation.history['loss']
# Getting the learning rates
learning_rate = history_activation.history['lr']
# Plotting loss on the y-axis as a function of the learning_rate on the x-axis
plt.plot(learning_rate, loss)
plt.title("Loss vs learning_rate");
plt.xlabel("learning_rate"),
plt.ylabel("loss")
###Output
_____no_output_____
###Markdown
Step 7 - What is the value of lr when loss shoots up?

As we can see from the above graph of the loss as a function of the learning rate, the loss increases gradually and then shoots up at a learning rate value of 0.008.

Step 8 - Compile losses, use various optimizers

Check the documentation on losses to learn more. Since we have used the SGD() optimizer for the results above, let's now try the optimizers below:
1. RMSprop
2. Adam
3. Adagrad

But prior to that, let us train the MLP model using a modified exponential decay function.
###Code
# Reference for the below model has been taken from page no 295 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Initialising a Sequential model
model = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons with relu as the activation function
model.add(keras.layers.Dense(300, activation="relu"))
# Adding a second Dense layer of 100 neurons with relu as the activation function
model.add(keras.layers.Dense(100, activation="relu"))
# Adding the output layer containing 10 neurons, one for each class label between 0-9, with sigmoid as the activation function (softmax would be the more conventional choice for a multi-class output layer)
model.add(keras.layers.Dense(10, activation="sigmoid"))
# Writing a custom scheduler that exponentially reduces the learning rate after 10 epochs.
# source of the below function - https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler
def scheduler(epoch, lr):
if epoch < 10:
return lr
else:
return lr * np.math.exp(-0.1)
# Compiling the model using Stochastic Gradient Descent as the optimizer and sparse categorical crossentropy as the loss since
# the target variable is label-encoded and not one-hot encoded.
model.compile(optimizer=keras.optimizers.SGD(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data
history_activation = model.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val), callbacks=[keras.callbacks.LearningRateScheduler(scheduler)])
# Plotting the loss, accuracy, validation loss, validation accuracy
pd.DataFrame(history_activation.history).plot(figsize=(10, 8))
###Output
Epoch 1/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.7724 - accuracy: 0.8032 - val_loss: 0.3257 - val_accuracy: 0.9118 - lr: 0.0100
Epoch 2/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3046 - accuracy: 0.9130 - val_loss: 0.2524 - val_accuracy: 0.9326 - lr: 0.0100
Epoch 3/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2497 - accuracy: 0.9288 - val_loss: 0.2156 - val_accuracy: 0.9422 - lr: 0.0100
Epoch 4/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2131 - accuracy: 0.9391 - val_loss: 0.1868 - val_accuracy: 0.9492 - lr: 0.0100
Epoch 5/30
1719/1719 [==============================] - 3s 1ms/step - loss: 0.1863 - accuracy: 0.9473 - val_loss: 0.1736 - val_accuracy: 0.9556 - lr: 0.0100
Epoch 6/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1655 - accuracy: 0.9532 - val_loss: 0.1549 - val_accuracy: 0.9588 - lr: 0.0100
Epoch 7/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1482 - accuracy: 0.9579 - val_loss: 0.1404 - val_accuracy: 0.9626 - lr: 0.0100
Epoch 8/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1342 - accuracy: 0.9621 - val_loss: 0.1301 - val_accuracy: 0.9638 - lr: 0.0100
Epoch 9/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1227 - accuracy: 0.9653 - val_loss: 0.1240 - val_accuracy: 0.9666 - lr: 0.0100
Epoch 10/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1124 - accuracy: 0.9686 - val_loss: 0.1127 - val_accuracy: 0.9702 - lr: 0.0100
Epoch 11/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1034 - accuracy: 0.9712 - val_loss: 0.1100 - val_accuracy: 0.9706 - lr: 0.0090
Epoch 12/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0959 - accuracy: 0.9734 - val_loss: 0.1067 - val_accuracy: 0.9708 - lr: 0.0082
Epoch 13/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0894 - accuracy: 0.9756 - val_loss: 0.1003 - val_accuracy: 0.9708 - lr: 0.0074
Epoch 14/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0846 - accuracy: 0.9766 - val_loss: 0.0982 - val_accuracy: 0.9718 - lr: 0.0067
Epoch 15/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0802 - accuracy: 0.9778 - val_loss: 0.0949 - val_accuracy: 0.9732 - lr: 0.0061
Epoch 16/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0765 - accuracy: 0.9792 - val_loss: 0.0920 - val_accuracy: 0.9744 - lr: 0.0055
Epoch 17/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0732 - accuracy: 0.9800 - val_loss: 0.0903 - val_accuracy: 0.9736 - lr: 0.0050
Epoch 18/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0707 - accuracy: 0.9811 - val_loss: 0.0897 - val_accuracy: 0.9736 - lr: 0.0045
Epoch 19/30
1719/1719 [==============================] - 3s 1ms/step - loss: 0.0684 - accuracy: 0.9818 - val_loss: 0.0885 - val_accuracy: 0.9746 - lr: 0.0041
Epoch 20/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0663 - accuracy: 0.9821 - val_loss: 0.0868 - val_accuracy: 0.9752 - lr: 0.0037
Epoch 21/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0645 - accuracy: 0.9830 - val_loss: 0.0863 - val_accuracy: 0.9752 - lr: 0.0033
Epoch 22/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0629 - accuracy: 0.9835 - val_loss: 0.0851 - val_accuracy: 0.9758 - lr: 0.0030
Epoch 23/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0615 - accuracy: 0.9839 - val_loss: 0.0853 - val_accuracy: 0.9750 - lr: 0.0027
Epoch 24/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0603 - accuracy: 0.9841 - val_loss: 0.0836 - val_accuracy: 0.9754 - lr: 0.0025
Epoch 25/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0593 - accuracy: 0.9849 - val_loss: 0.0829 - val_accuracy: 0.9752 - lr: 0.0022
Epoch 26/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0583 - accuracy: 0.9852 - val_loss: 0.0827 - val_accuracy: 0.9764 - lr: 0.0020
Epoch 27/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0575 - accuracy: 0.9853 - val_loss: 0.0827 - val_accuracy: 0.9764 - lr: 0.0018
Epoch 28/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0567 - accuracy: 0.9856 - val_loss: 0.0818 - val_accuracy: 0.9760 - lr: 0.0017
Epoch 29/30
1719/1719 [==============================] - 3s 1ms/step - loss: 0.0560 - accuracy: 0.9857 - val_loss: 0.0819 - val_accuracy: 0.9764 - lr: 0.0015
Epoch 30/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0555 - accuracy: 0.9861 - val_loss: 0.0812 - val_accuracy: 0.9758 - lr: 0.0014
###Markdown
As we can see, the accuracy reported using the modified exponential decay function is slightly higher than with the previous schedule. Hence, we shall use this modified exponential decay function while training the models below
###Code
# Reference for the below model has been taken from page no 295 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Using Adam() as the optimiser
# Initialising a Sequential model
model_3 = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model_3.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons with relu as the activation function
model_3.add(keras.layers.Dense(300, activation="relu"))
# Adding a second Dense layer of 100 neurons with relu as the activation function
model_3.add(keras.layers.Dense(100, activation="relu"))
# Adding the output layer containing 10 neurons, one for each class label between 0-9, with sigmoid as the activation function
model_3.add(keras.layers.Dense(10, activation="sigmoid"))
# Writing a custom scheduler that exponentially reduces the learning rate after 10 epochs.
# source of the below function - https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler
def scheduler(epoch, lr):
if epoch < 10:
return lr
else:
return lr * np.math.exp(-0.1)
# Compiling the model
model_3.compile(optimizer=keras.optimizers.Adam(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data
history_adam = model_3.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val), callbacks=[keras.callbacks.LearningRateScheduler(scheduler)])
# Plotting the loss, accuracy, validation loss, validation accuracy
pd.DataFrame(history_adam.history).plot(figsize=(10, 8))
###Output
Epoch 1/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2295 - accuracy: 0.9322 - val_loss: 0.1066 - val_accuracy: 0.9684 - lr: 0.0010
Epoch 2/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0930 - accuracy: 0.9719 - val_loss: 0.0794 - val_accuracy: 0.9778 - lr: 0.0010
Epoch 3/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0628 - accuracy: 0.9805 - val_loss: 0.0741 - val_accuracy: 0.9804 - lr: 0.0010
Epoch 4/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0473 - accuracy: 0.9845 - val_loss: 0.0751 - val_accuracy: 0.9780 - lr: 0.0010
Epoch 5/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0380 - accuracy: 0.9874 - val_loss: 0.0750 - val_accuracy: 0.9790 - lr: 0.0010
Epoch 6/30
1719/1719 [==============================] - 4s 3ms/step - loss: 0.0293 - accuracy: 0.9907 - val_loss: 0.0831 - val_accuracy: 0.9794 - lr: 0.0010
Epoch 7/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.0257 - accuracy: 0.9919 - val_loss: 0.0876 - val_accuracy: 0.9804 - lr: 0.0010
Epoch 8/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0212 - accuracy: 0.9929 - val_loss: 0.0915 - val_accuracy: 0.9800 - lr: 0.0010
Epoch 9/30
1719/1719 [==============================] - 4s 3ms/step - loss: 0.0192 - accuracy: 0.9938 - val_loss: 0.1000 - val_accuracy: 0.9764 - lr: 0.0010
Epoch 10/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0172 - accuracy: 0.9943 - val_loss: 0.1027 - val_accuracy: 0.9762 - lr: 0.0010
Epoch 11/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0143 - accuracy: 0.9954 - val_loss: 0.0962 - val_accuracy: 0.9818 - lr: 9.0484e-04
Epoch 12/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0086 - accuracy: 0.9971 - val_loss: 0.0920 - val_accuracy: 0.9806 - lr: 8.1873e-04
Epoch 13/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0067 - accuracy: 0.9978 - val_loss: 0.0882 - val_accuracy: 0.9840 - lr: 7.4082e-04
Epoch 14/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0065 - accuracy: 0.9978 - val_loss: 0.1334 - val_accuracy: 0.9760 - lr: 6.7032e-04
Epoch 15/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0043 - accuracy: 0.9985 - val_loss: 0.1034 - val_accuracy: 0.9802 - lr: 6.0653e-04
Epoch 16/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0025 - accuracy: 0.9993 - val_loss: 0.1010 - val_accuracy: 0.9820 - lr: 5.4881e-04
Epoch 17/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0027 - accuracy: 0.9991 - val_loss: 0.0888 - val_accuracy: 0.9850 - lr: 4.9659e-04
Epoch 18/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0011 - accuracy: 0.9996 - val_loss: 0.0897 - val_accuracy: 0.9846 - lr: 4.4933e-04
Epoch 19/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0013 - accuracy: 0.9995 - val_loss: 0.0979 - val_accuracy: 0.9848 - lr: 4.0657e-04
Epoch 20/30
1719/1719 [==============================] - 3s 2ms/step - loss: 7.2279e-04 - accuracy: 0.9997 - val_loss: 0.0955 - val_accuracy: 0.9844 - lr: 3.6788e-04
Epoch 21/30
1719/1719 [==============================] - 3s 2ms/step - loss: 3.6541e-04 - accuracy: 0.9999 - val_loss: 0.0966 - val_accuracy: 0.9856 - lr: 3.3287e-04
Epoch 22/30
1719/1719 [==============================] - 3s 2ms/step - loss: 2.3770e-04 - accuracy: 0.9999 - val_loss: 0.0914 - val_accuracy: 0.9862 - lr: 3.0119e-04
Epoch 23/30
1719/1719 [==============================] - 3s 2ms/step - loss: 1.4786e-04 - accuracy: 0.9999 - val_loss: 0.0977 - val_accuracy: 0.9844 - lr: 2.7253e-04
Epoch 24/30
1719/1719 [==============================] - 3s 2ms/step - loss: 9.9852e-05 - accuracy: 1.0000 - val_loss: 0.0973 - val_accuracy: 0.9848 - lr: 2.4660e-04
Epoch 25/30
1719/1719 [==============================] - 4s 2ms/step - loss: 1.3058e-04 - accuracy: 0.9999 - val_loss: 0.0989 - val_accuracy: 0.9854 - lr: 2.2313e-04
Epoch 26/30
1719/1719 [==============================] - 3s 2ms/step - loss: 6.3012e-05 - accuracy: 1.0000 - val_loss: 0.0988 - val_accuracy: 0.9852 - lr: 2.0190e-04
Epoch 27/30
1719/1719 [==============================] - 3s 2ms/step - loss: 6.0064e-05 - accuracy: 1.0000 - val_loss: 0.1017 - val_accuracy: 0.9850 - lr: 1.8268e-04
Epoch 28/30
1719/1719 [==============================] - 4s 2ms/step - loss: 5.8700e-05 - accuracy: 1.0000 - val_loss: 0.1003 - val_accuracy: 0.9856 - lr: 1.6530e-04
Epoch 29/30
1719/1719 [==============================] - 3s 2ms/step - loss: 5.7822e-05 - accuracy: 1.0000 - val_loss: 0.0988 - val_accuracy: 0.9858 - lr: 1.4957e-04
Epoch 30/30
1719/1719 [==============================] - 3s 2ms/step - loss: 5.7000e-05 - accuracy: 1.0000 - val_loss: 0.1010 - val_accuracy: 0.9848 - lr: 1.3534e-04
###Markdown
Using the Adam optimiser, the accuracy reaches almost 1.00 on the training data and about 0.98 on the validation data, and the losses are also quite low
###Code
# Reference for the below model has been taken from page no 295 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Using RMSProp() as the optimiser
# Initialising a Sequential model
model_4 = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model_4.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons with relu as the activation function
model_4.add(keras.layers.Dense(300, activation="relu"))
# Adding a second Dense layer of 100 neurons with relu as the activation function
model_4.add(keras.layers.Dense(100, activation="relu"))
# Adding the output layer containing 10 neurons, one for each class label between 0-9, with sigmoid as the activation function
model_4.add(keras.layers.Dense(10, activation="sigmoid"))
# Writing a custom scheduler that exponentially reduces the learning rate after 10 epochs.
# source of the below function - https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler
def scheduler(epoch, lr):
if epoch < 10:
return lr
else:
return lr * np.math.exp(-0.1)
# Compiling the model
model_4.compile(optimizer=keras.optimizers.RMSprop(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data
history_rmsprop = model_4.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val), callbacks=[keras.callbacks.LearningRateScheduler(scheduler)])
# Plotting the loss, accuracy, validation loss, validation accuracy
pd.DataFrame(history_rmsprop.history).plot(figsize=(10, 8))
###Output
Epoch 1/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.2364 - accuracy: 0.9303 - val_loss: 0.1030 - val_accuracy: 0.9662 - lr: 0.0010
Epoch 2/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.1020 - accuracy: 0.9705 - val_loss: 0.0946 - val_accuracy: 0.9734 - lr: 0.0010
Epoch 3/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0751 - accuracy: 0.9789 - val_loss: 0.0856 - val_accuracy: 0.9788 - lr: 0.0010
Epoch 4/30
1719/1719 [==============================] - 4s 3ms/step - loss: 0.0575 - accuracy: 0.9832 - val_loss: 0.0809 - val_accuracy: 0.9784 - lr: 0.0010
Epoch 5/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.0472 - accuracy: 0.9870 - val_loss: 0.1196 - val_accuracy: 0.9728 - lr: 0.0010
Epoch 6/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0383 - accuracy: 0.9896 - val_loss: 0.0992 - val_accuracy: 0.9782 - lr: 0.0010
Epoch 7/30
1719/1719 [==============================] - 4s 3ms/step - loss: 0.0310 - accuracy: 0.9915 - val_loss: 0.0996 - val_accuracy: 0.9800 - lr: 0.0010
Epoch 8/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0252 - accuracy: 0.9930 - val_loss: 0.0881 - val_accuracy: 0.9812 - lr: 0.0010
Epoch 9/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0229 - accuracy: 0.9939 - val_loss: 0.1187 - val_accuracy: 0.9774 - lr: 0.0010
Epoch 10/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0191 - accuracy: 0.9947 - val_loss: 0.1181 - val_accuracy: 0.9802 - lr: 0.0010
Epoch 11/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0144 - accuracy: 0.9964 - val_loss: 0.1159 - val_accuracy: 0.9808 - lr: 9.0484e-04
Epoch 12/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0103 - accuracy: 0.9973 - val_loss: 0.1191 - val_accuracy: 0.9802 - lr: 8.1873e-04
Epoch 13/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0062 - accuracy: 0.9983 - val_loss: 0.1173 - val_accuracy: 0.9818 - lr: 7.4082e-04
Epoch 14/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0051 - accuracy: 0.9985 - val_loss: 0.1152 - val_accuracy: 0.9832 - lr: 6.7032e-04
Epoch 15/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0038 - accuracy: 0.9990 - val_loss: 0.1201 - val_accuracy: 0.9822 - lr: 6.0653e-04
Epoch 16/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0027 - accuracy: 0.9993 - val_loss: 0.1318 - val_accuracy: 0.9816 - lr: 5.4881e-04
Epoch 17/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0024 - accuracy: 0.9993 - val_loss: 0.1301 - val_accuracy: 0.9818 - lr: 4.9659e-04
Epoch 18/30
1719/1719 [==============================] - 4s 3ms/step - loss: 0.0015 - accuracy: 0.9995 - val_loss: 0.1337 - val_accuracy: 0.9824 - lr: 4.4933e-04
Epoch 19/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0015 - accuracy: 0.9996 - val_loss: 0.1403 - val_accuracy: 0.9814 - lr: 4.0657e-04
Epoch 20/30
1719/1719 [==============================] - 4s 2ms/step - loss: 0.0013 - accuracy: 0.9996 - val_loss: 0.1425 - val_accuracy: 0.9822 - lr: 3.6788e-04
Epoch 21/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0011 - accuracy: 0.9996 - val_loss: 0.1400 - val_accuracy: 0.9828 - lr: 3.3287e-04
Epoch 22/30
1719/1719 [==============================] - 3s 2ms/step - loss: 9.8663e-04 - accuracy: 0.9996 - val_loss: 0.1422 - val_accuracy: 0.9822 - lr: 3.0119e-04
Epoch 23/30
1719/1719 [==============================] - 4s 2ms/step - loss: 9.1887e-04 - accuracy: 0.9996 - val_loss: 0.1446 - val_accuracy: 0.9822 - lr: 2.7253e-04
Epoch 24/30
1719/1719 [==============================] - 3s 2ms/step - loss: 8.9183e-04 - accuracy: 0.9997 - val_loss: 0.1425 - val_accuracy: 0.9834 - lr: 2.4660e-04
Epoch 25/30
1719/1719 [==============================] - 3s 2ms/step - loss: 8.1258e-04 - accuracy: 0.9997 - val_loss: 0.1423 - val_accuracy: 0.9832 - lr: 2.2313e-04
Epoch 26/30
1719/1719 [==============================] - 4s 2ms/step - loss: 8.5554e-04 - accuracy: 0.9997 - val_loss: 0.1461 - val_accuracy: 0.9824 - lr: 2.0190e-04
Epoch 27/30
1719/1719 [==============================] - 3s 2ms/step - loss: 8.1142e-04 - accuracy: 0.9997 - val_loss: 0.1446 - val_accuracy: 0.9828 - lr: 1.8268e-04
Epoch 28/30
1719/1719 [==============================] - 3s 2ms/step - loss: 8.1000e-04 - accuracy: 0.9997 - val_loss: 0.1451 - val_accuracy: 0.9830 - lr: 1.6530e-04
Epoch 29/30
1719/1719 [==============================] - 3s 2ms/step - loss: 8.0523e-04 - accuracy: 0.9997 - val_loss: 0.1468 - val_accuracy: 0.9826 - lr: 1.4957e-04
Epoch 30/30
1719/1719 [==============================] - 3s 2ms/step - loss: 7.9760e-04 - accuracy: 0.9997 - val_loss: 0.1463 - val_accuracy: 0.9826 - lr: 1.3534e-04
###Markdown
RMSprop() also performs better than SGD() in terms of accuracy on the validation data
###Code
# Reference for the below model has been taken from page no 295 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Using the Adagrad optimiser
# Initialising a Sequential model
model_5 = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model_5.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons with relu as the activation function
model_5.add(keras.layers.Dense(300, activation="relu"))
# Adding a second Dense layer of 100 neurons with relu as the activation function
model_5.add(keras.layers.Dense(100, activation="relu"))
# Adding the output layer containing 10 neurons, one for each class label between 0-9, with sigmoid as the activation function
model_5.add(keras.layers.Dense(10, activation="sigmoid"))
# Writing a custom scheduler that exponentially reduces the learning rate after 10 epochs.
# source of the below function - https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler
def scheduler(epoch, lr):
if epoch < 10:
return lr
else:
return lr * np.math.exp(-0.1)
# Compiling the model
model_5.compile(optimizer=keras.optimizers.Adagrad(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data
history_adagrad = model_5.fit(X_train, y_train, epochs=30, validation_data=(X_val, y_val), callbacks=[keras.callbacks.LearningRateScheduler(scheduler)])
# Plotting the loss, accuracy, validation loss, validation accuracy
pd.DataFrame(history_adagrad.history).plot(figsize=(10, 8))
###Output
Epoch 1/30
1719/1719 [==============================] - 3s 2ms/step - loss: 1.4509 - accuracy: 0.6898 - val_loss: 0.6820 - val_accuracy: 0.8594 - lr: 0.0010
Epoch 2/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.5488 - accuracy: 0.8674 - val_loss: 0.4397 - val_accuracy: 0.8918 - lr: 0.0010
Epoch 3/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.4213 - accuracy: 0.8908 - val_loss: 0.3698 - val_accuracy: 0.9038 - lr: 0.0010
Epoch 4/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.3712 - accuracy: 0.9005 - val_loss: 0.3348 - val_accuracy: 0.9120 - lr: 0.0010
Epoch 5/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.3420 - accuracy: 0.9065 - val_loss: 0.3119 - val_accuracy: 0.9192 - lr: 0.0010
Epoch 6/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.3216 - accuracy: 0.9124 - val_loss: 0.2952 - val_accuracy: 0.9214 - lr: 0.0010
Epoch 7/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3057 - accuracy: 0.9162 - val_loss: 0.2822 - val_accuracy: 0.9262 - lr: 0.0010
Epoch 8/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2929 - accuracy: 0.9190 - val_loss: 0.2713 - val_accuracy: 0.9280 - lr: 0.0010
Epoch 9/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2823 - accuracy: 0.9221 - val_loss: 0.2624 - val_accuracy: 0.9308 - lr: 0.0010
Epoch 10/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2728 - accuracy: 0.9243 - val_loss: 0.2548 - val_accuracy: 0.9328 - lr: 0.0010
Epoch 11/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2649 - accuracy: 0.9262 - val_loss: 0.2479 - val_accuracy: 0.9342 - lr: 9.0484e-04
Epoch 12/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2585 - accuracy: 0.9283 - val_loss: 0.2429 - val_accuracy: 0.9358 - lr: 8.1873e-04
Epoch 13/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2530 - accuracy: 0.9301 - val_loss: 0.2384 - val_accuracy: 0.9366 - lr: 7.4082e-04
Epoch 14/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2487 - accuracy: 0.9315 - val_loss: 0.2349 - val_accuracy: 0.9370 - lr: 6.7032e-04
Epoch 15/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2448 - accuracy: 0.9321 - val_loss: 0.2318 - val_accuracy: 0.9370 - lr: 6.0653e-04
Epoch 16/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2416 - accuracy: 0.9327 - val_loss: 0.2295 - val_accuracy: 0.9372 - lr: 5.4881e-04
Epoch 17/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2389 - accuracy: 0.9334 - val_loss: 0.2269 - val_accuracy: 0.9392 - lr: 4.9659e-04
Epoch 18/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2365 - accuracy: 0.9342 - val_loss: 0.2250 - val_accuracy: 0.9394 - lr: 4.4933e-04
Epoch 19/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2344 - accuracy: 0.9350 - val_loss: 0.2232 - val_accuracy: 0.9398 - lr: 4.0657e-04
Epoch 20/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2325 - accuracy: 0.9354 - val_loss: 0.2217 - val_accuracy: 0.9398 - lr: 3.6788e-04
Epoch 21/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2310 - accuracy: 0.9357 - val_loss: 0.2204 - val_accuracy: 0.9400 - lr: 3.3287e-04
Epoch 22/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2296 - accuracy: 0.9363 - val_loss: 0.2193 - val_accuracy: 0.9402 - lr: 3.0119e-04
Epoch 23/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2284 - accuracy: 0.9365 - val_loss: 0.2184 - val_accuracy: 0.9404 - lr: 2.7253e-04
Epoch 24/30
1719/1719 [==============================] - 3s 1ms/step - loss: 0.2273 - accuracy: 0.9367 - val_loss: 0.2175 - val_accuracy: 0.9410 - lr: 2.4660e-04
Epoch 25/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2264 - accuracy: 0.9370 - val_loss: 0.2167 - val_accuracy: 0.9410 - lr: 2.2313e-04
Epoch 26/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2255 - accuracy: 0.9373 - val_loss: 0.2160 - val_accuracy: 0.9414 - lr: 2.0190e-04
Epoch 27/30
1719/1719 [==============================] - 3s 2ms/step - loss: 0.2248 - accuracy: 0.9374 - val_loss: 0.2154 - val_accuracy: 0.9414 - lr: 1.8268e-04
Epoch 28/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2242 - accuracy: 0.9377 - val_loss: 0.2149 - val_accuracy: 0.9416 - lr: 1.6530e-04
Epoch 29/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2236 - accuracy: 0.9376 - val_loss: 0.2144 - val_accuracy: 0.9414 - lr: 1.4957e-04
Epoch 30/30
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2231 - accuracy: 0.9379 - val_loss: 0.2140 - val_accuracy: 0.9418 - lr: 1.3534e-04
###Markdown
The Adagrad() optimiser has given the lowest accuracy so far amongst all the optimisers. Step 9 - Use EarlyStopping() to stop training when the monitored metric has stopped improving
###Code
# Reference for the below model has been taken from page no 312 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Initialising a Sequential model
model = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons with relu as the activation function
model.add(keras.layers.Dense(300, activation="relu"))
# Adding a second Dense layer of 100 neurons with relu as the activation function
model.add(keras.layers.Dense(100, activation="relu"))
# Adding the output layer containing 10 neurons, one for each class label between 0-9, with sigmoid as the activation function
model.add(keras.layers.Dense(10, activation="sigmoid"))
# Writing a custom scheduler that exponentially reduces the learning rate after 10 epochs.
# source of the below function - https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/LearningRateScheduler
def scheduler(epoch, lr):
if epoch < 10:
return lr
else:
return lr * np.math.exp(-0.1)
# Defining an early stopping instance with the validation loss as the value to be monitored and a patience level of three
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode="auto")
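# (EarlyStopping also accepts restore_best_weights=True to roll the model back to the best epoch's weights; it defaults to False)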
# Compiling the model
model.compile(optimizer=keras.optimizers.SGD(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data. Choosing the number of epochs equal to 100
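# NOTE: the LearningRateScheduler below is passed exponential_decay_fn (defined earlier in the notebook), so the scheduler defined just above is not actually used in this run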
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[early_stopping, keras.callbacks.LearningRateScheduler(exponential_decay_fn)])
###Output
Epoch 1/100
1719/1719 [==============================] - 4s 2ms/step - loss: 0.8857 - accuracy: 0.7811 - val_loss: 0.3586 - val_accuracy: 0.9030 - lr: 0.0089
Epoch 2/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.3227 - accuracy: 0.9080 - val_loss: 0.2786 - val_accuracy: 0.9200 - lr: 0.0079
Epoch 3/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2688 - accuracy: 0.9238 - val_loss: 0.2413 - val_accuracy: 0.9324 - lr: 0.0071
Epoch 4/100
1719/1719 [==============================] - 3s 1ms/step - loss: 0.2375 - accuracy: 0.9322 - val_loss: 0.2170 - val_accuracy: 0.9426 - lr: 0.0063
Epoch 5/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.2150 - accuracy: 0.9385 - val_loss: 0.1998 - val_accuracy: 0.9474 - lr: 0.0056
Epoch 6/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1983 - accuracy: 0.9439 - val_loss: 0.1858 - val_accuracy: 0.9496 - lr: 0.0050
Epoch 7/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1850 - accuracy: 0.9478 - val_loss: 0.1752 - val_accuracy: 0.9530 - lr: 0.0045
Epoch 8/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1749 - accuracy: 0.9498 - val_loss: 0.1684 - val_accuracy: 0.9548 - lr: 0.0040
Epoch 9/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1665 - accuracy: 0.9528 - val_loss: 0.1629 - val_accuracy: 0.9578 - lr: 0.0035
Epoch 10/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1598 - accuracy: 0.9548 - val_loss: 0.1589 - val_accuracy: 0.9592 - lr: 0.0032
Epoch 11/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1542 - accuracy: 0.9564 - val_loss: 0.1544 - val_accuracy: 0.9596 - lr: 0.0028
Epoch 12/100
1719/1719 [==============================] - 3s 1ms/step - loss: 0.1493 - accuracy: 0.9578 - val_loss: 0.1499 - val_accuracy: 0.9622 - lr: 0.0025
Epoch 13/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1453 - accuracy: 0.9588 - val_loss: 0.1494 - val_accuracy: 0.9598 - lr: 0.0022
Epoch 14/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1418 - accuracy: 0.9604 - val_loss: 0.1458 - val_accuracy: 0.9604 - lr: 0.0020
Epoch 15/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1389 - accuracy: 0.9613 - val_loss: 0.1424 - val_accuracy: 0.9620 - lr: 0.0018
Epoch 16/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1365 - accuracy: 0.9621 - val_loss: 0.1413 - val_accuracy: 0.9630 - lr: 0.0016
Epoch 17/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1342 - accuracy: 0.9631 - val_loss: 0.1395 - val_accuracy: 0.9636 - lr: 0.0014
Epoch 18/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1323 - accuracy: 0.9639 - val_loss: 0.1380 - val_accuracy: 0.9632 - lr: 0.0013
Epoch 19/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1305 - accuracy: 0.9640 - val_loss: 0.1378 - val_accuracy: 0.9636 - lr: 0.0011
Epoch 20/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1291 - accuracy: 0.9647 - val_loss: 0.1358 - val_accuracy: 0.9644 - lr: 1.0000e-03
Epoch 21/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1278 - accuracy: 0.9648 - val_loss: 0.1352 - val_accuracy: 0.9634 - lr: 8.9125e-04
Epoch 22/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1266 - accuracy: 0.9653 - val_loss: 0.1338 - val_accuracy: 0.9650 - lr: 7.9433e-04
Epoch 23/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1257 - accuracy: 0.9657 - val_loss: 0.1336 - val_accuracy: 0.9646 - lr: 7.0795e-04
Epoch 24/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1248 - accuracy: 0.9659 - val_loss: 0.1327 - val_accuracy: 0.9654 - lr: 6.3096e-04
Epoch 25/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1240 - accuracy: 0.9659 - val_loss: 0.1323 - val_accuracy: 0.9648 - lr: 5.6234e-04
Epoch 26/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1233 - accuracy: 0.9664 - val_loss: 0.1321 - val_accuracy: 0.9652 - lr: 5.0119e-04
Epoch 27/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1228 - accuracy: 0.9665 - val_loss: 0.1316 - val_accuracy: 0.9640 - lr: 4.4668e-04
Epoch 28/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1222 - accuracy: 0.9665 - val_loss: 0.1310 - val_accuracy: 0.9652 - lr: 3.9811e-04
Epoch 29/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1217 - accuracy: 0.9669 - val_loss: 0.1308 - val_accuracy: 0.9658 - lr: 3.5481e-04
Epoch 30/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1213 - accuracy: 0.9666 - val_loss: 0.1304 - val_accuracy: 0.9648 - lr: 3.1623e-04
Epoch 31/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1209 - accuracy: 0.9670 - val_loss: 0.1304 - val_accuracy: 0.9656 - lr: 2.8184e-04
Epoch 32/100
1719/1719 [==============================] - 3s 1ms/step - loss: 0.1206 - accuracy: 0.9669 - val_loss: 0.1300 - val_accuracy: 0.9658 - lr: 2.5119e-04
Epoch 33/100
1719/1719 [==============================] - 3s 1ms/step - loss: 0.1203 - accuracy: 0.9671 - val_loss: 0.1298 - val_accuracy: 0.9658 - lr: 2.2387e-04
Epoch 34/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1200 - accuracy: 0.9671 - val_loss: 0.1298 - val_accuracy: 0.9658 - lr: 1.9953e-04
Epoch 35/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1198 - accuracy: 0.9673 - val_loss: 0.1295 - val_accuracy: 0.9656 - lr: 1.7783e-04
Epoch 36/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1196 - accuracy: 0.9675 - val_loss: 0.1293 - val_accuracy: 0.9658 - lr: 1.5849e-04
Epoch 37/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1194 - accuracy: 0.9675 - val_loss: 0.1293 - val_accuracy: 0.9652 - lr: 1.4125e-04
Epoch 38/100
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1192 - accuracy: 0.9675 - val_loss: 0.1291 - val_accuracy: 0.9656 - lr: 1.2589e-04
Epoch 39/100
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1191 - accuracy: 0.9675 - val_loss: 0.1290 - val_accuracy: 0.9660 - lr: 1.1220e-04
Epoch 40/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1190 - accuracy: 0.9676 - val_loss: 0.1289 - val_accuracy: 0.9656 - lr: 1.0000e-04
Epoch 41/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1189 - accuracy: 0.9677 - val_loss: 0.1288 - val_accuracy: 0.9656 - lr: 8.9125e-05
Epoch 42/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1187 - accuracy: 0.9676 - val_loss: 0.1288 - val_accuracy: 0.9656 - lr: 7.9433e-05
Epoch 43/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1187 - accuracy: 0.9676 - val_loss: 0.1287 - val_accuracy: 0.9656 - lr: 7.0795e-05
Epoch 44/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1186 - accuracy: 0.9677 - val_loss: 0.1287 - val_accuracy: 0.9658 - lr: 6.3096e-05
Epoch 45/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1185 - accuracy: 0.9678 - val_loss: 0.1287 - val_accuracy: 0.9658 - lr: 5.6234e-05
Epoch 46/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1184 - accuracy: 0.9677 - val_loss: 0.1286 - val_accuracy: 0.9656 - lr: 5.0119e-05
Epoch 47/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1184 - accuracy: 0.9677 - val_loss: 0.1286 - val_accuracy: 0.9658 - lr: 4.4668e-05
Epoch 48/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1183 - accuracy: 0.9679 - val_loss: 0.1285 - val_accuracy: 0.9656 - lr: 3.9811e-05
Epoch 49/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1183 - accuracy: 0.9678 - val_loss: 0.1285 - val_accuracy: 0.9656 - lr: 3.5481e-05
Epoch 50/100
1719/1719 [==============================] - 3s 2ms/step - loss: 0.1182 - accuracy: 0.9677 - val_loss: 0.1285 - val_accuracy: 0.9656 - lr: 3.1623e-05
Epoch 51/100
1719/1719 [==============================] - 2s 1ms/step - loss: 0.1182 - accuracy: 0.9677 - val_loss: 0.1284 - val_accuracy: 0.9656 - lr: 2.8184e-05
###Markdown
We can see above that, with early stopping, the training stopped well before the requested 100 epochs (after epoch 51 in the log above) since the validation loss had stopped improving for a few epochs. Step 10 - create checkpoint. We shall use Adam() as the optimiser when creating a checkpoint of the model, since it has given the highest accuracy amongst all the optimisers used
###Code
# Reference for the below model has been taken from page no 312 of the book
# Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O’Reilly-Media-2019
# Initialising a Sequential model
model = keras.models.Sequential()
# Adding a Flatten layer that converts every 28x28 matrix into a 1D array input of 784 values
model.add(keras.layers.Flatten(input_shape=[28, 28]))
# Adding a Dense layer of 300 neurons with relu as the activation function
model.add(keras.layers.Dense(300, activation="relu"))
# Adding a second Dense layer of 100 neurons with relu as the activation function
model.add(keras.layers.Dense(100, activation="relu"))
# Adding the output layer containing 10 neurons, one for each class label between 0-9, with sigmoid as the activation function
model.add(keras.layers.Dense(10, activation="sigmoid"))
# Defining a function for exponential decay of the learning rate
def exponential_decay_fn(epoch, lr):
if epoch < 10:
return lr
else:
return lr * np.math.exp(-0.1)
# Defining an early stopping instance with the validation loss as the value to be monitored and a patience level of three
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode="auto")
# Creating a model checkpoint
checkpoint = keras.callbacks.ModelCheckpoint("best_classifier.h5", save_best_only=True)
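# (save_best_only=True keeps only the checkpoint from the epoch with the best monitored value, which is val_loss by default)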
# Compiling the model. Choosing Adam as the optimiser since it gives the best accuracy
model.compile(optimizer=keras.optimizers.Adam(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Fitting the model on the data
history_all_callbacks = model.fit(X_train, y_train, epochs=50, validation_data=(X_val, y_val), callbacks=[early_stopping, checkpoint, keras.callbacks.LearningRateScheduler(exponential_decay_fn)])
# Plotting the loss, accuracy, validation loss, validation accuracy
pd.DataFrame(history_all_callbacks.history).plot(figsize=(10, 8))
###Output
Epoch 1/50
1719/1719 [==============================] - 3s 1ms/step - loss: 0.2322 - accuracy: 0.9326 - val_loss: 0.1186 - val_accuracy: 0.9664 - lr: 0.0010
Epoch 2/50
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0955 - accuracy: 0.9703 - val_loss: 0.0900 - val_accuracy: 0.9750 - lr: 0.0010
Epoch 3/50
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0642 - accuracy: 0.9801 - val_loss: 0.0828 - val_accuracy: 0.9740 - lr: 0.0010
Epoch 4/50
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0473 - accuracy: 0.9845 - val_loss: 0.0872 - val_accuracy: 0.9760 - lr: 0.0010
Epoch 5/50
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0366 - accuracy: 0.9881 - val_loss: 0.0902 - val_accuracy: 0.9754 - lr: 0.0010
Epoch 6/50
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0312 - accuracy: 0.9896 - val_loss: 0.0801 - val_accuracy: 0.9800 - lr: 0.0010
Epoch 7/50
1719/1719 [==============================] - 3s 1ms/step - loss: 0.0256 - accuracy: 0.9913 - val_loss: 0.0845 - val_accuracy: 0.9786 - lr: 0.0010
Epoch 8/50
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0229 - accuracy: 0.9921 - val_loss: 0.0790 - val_accuracy: 0.9808 - lr: 0.0010
Epoch 9/50
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0200 - accuracy: 0.9934 - val_loss: 0.0864 - val_accuracy: 0.9794 - lr: 0.0010
Epoch 10/50
1719/1719 [==============================] - 3s 2ms/step - loss: 0.0177 - accuracy: 0.9941 - val_loss: 0.0875 - val_accuracy: 0.9802 - lr: 0.0010
Epoch 11/50
1719/1719 [==============================] - 2s 1ms/step - loss: 0.0138 - accuracy: 0.9953 - val_loss: 0.0843 - val_accuracy: 0.9808 - lr: 9.0484e-04
Epoch 00011: early stopping
###Markdown
The above training stopped after only 11 epochs, and we obtained an accuracy of 0.9808 on the validation data. Step 11 - report accuracy
###Code
# Loading the saved model
best_model = keras.models.load_model("best_classifier.h5")
# Calculating the accuracy on the test data
best_model.evaluate(X_test, y_test)
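# evaluate() returns [loss, accuracy] for the test set, in the order of the compiled loss and metrics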
###Output
313/313 [==============================] - 0s 937us/step - loss: 0.2817 - accuracy: 0.9069
|
part05_DQM_intro.ipynb | ###Markdown
Discrete Quadratic Model, or a Binary Quadratic Model with a disjoint set of one-hot constraints. Suppose there are $N$ discrete variables (or $N$ groups of binary variables). Each variable $x_i$ has $C_i$ cases. The equation below is the most general form of the energy for a DQM (up to a constant). $$\Large H = \sum_{i,k} a_{i,k} x_{i,k} + \sum_{i,k,j,l} w_{i,k,j,l} x_{i,k} x_{j,l}$$ $$\Large i, j \in \left\{0, 1, 2, ..., N - 1\right\}$$ $$\Large k \in \left\{0, 1, 2, ..., C_i - 1 \right\}$$ $$\Large l \in \left\{0, 1, 2, ..., C_j - 1 \right\}$$ The Hamiltonian above is subject to the following set of constraints. $$\Large \sum_{k=0}^{C_i - 1} x_{i,k} = 1 ~~~~~ \forall i$$ By definition, the coefficient $\large w_{i,k,i,l}$ has no effect on the energy because $$\Large x_{i,k}x_{i, l} = 0 ~~~~~ k\neq l $$ (in the Python implementation, this coefficient is undefined). The total number of binary variables is $$ N_b = \sum_i C_i $$
###Code
from dimod import DQM
DQM?
from dimod import ExactDQMSolver
ExactDQMSolver?
from dwave.system import LeapHybridDQMSampler
LeapHybridDQMSampler?
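# A minimal sketch (not from the original notebook) of building a tiny DQM with the Hamiltonian
# above and solving it exhaustively; the variable labels, case counts and bias values below are
# illustrative assumptions only.
example_dqm = DQM()
example_dqm.add_variable(2, label='u')                # discrete variable u with C_u = 2 cases
example_dqm.add_variable(3, label='v')                # discrete variable v with C_v = 3 cases
example_dqm.set_linear('u', [0.0, 1.0])               # a_{u,k} for k = 0, 1
example_dqm.set_linear('v', [0.5, -0.5, 2.0])         # a_{v,l} for l = 0, 1, 2
example_dqm.set_quadratic('u', 'v', {(0, 2): -1.0})   # w_{u,0,v,2}; couplings within one variable are undefined
example_sampleset = ExactDQMSolver().sample_dqm(example_dqm)  # brute force over the 2 * 3 case assignments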
###Output
_____no_output_____ |
1-Lessons/Lesson22/.ipynb_checkpoints/classification-checkpoint.ipynb | ###Markdown
Chronic kidney disease (CKD)
###Code
import pandas as pd

df = pd.read_csv("ckd.csv")
df
df = df[["Hemoglobin", "Blood Glucose Random", "White Blood Cell Count", "Class"]].copy()
df
df["Hemoglobin_su"] = (df["Hemoglobin"] - df["Hemoglobin"].mean()) / df["Hemoglobin"].std(ddof=0)
df["Glucose_su"] = (df["Blood Glucose Random"] - df["Blood Glucose Random"].mean()) / df["Blood Glucose Random"].std(ddof=0)
df["WhiteBCC_su"] = (df["White Blood Cell Count"] - df["White Blood Cell Count"].mean()) / df["White Blood Cell Count"].std(ddof=0)
df = df[["Hemoglobin_su", "Glucose_su", "WhiteBCC_su", "Class"]].copy()
df
import matplotlib.pyplot as plt
yes_ckd_df = df[df["Class"] == 1]
no_ckd_df = df[df["Class"] == 0]
plt.scatter(x=yes_ckd_df["Hemoglobin_su"], y=yes_ckd_df["Glucose_su"], label="YES CKD" )
plt.scatter(x=no_ckd_df["Hemoglobin_su"], y=no_ckd_df["Glucose_su"], label="NO CKD" )
plt.xlabel("Hemoglobin")
plt.ylabel("Glucose")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Checking new patient
###Code
new_patient = [0, 1.5]
yes_ckd_df = df[df["Class"] == 1]
no_ckd_df = df[df["Class"] == 0]
plt.scatter(x=new_patient[0], y=new_patient[1], color="red", label="Unknown")
plt.scatter(x=yes_ckd_df["Hemoglobin_su"], y=yes_ckd_df["Glucose_su"], label="YES CKD" )
plt.scatter(x=no_ckd_df["Hemoglobin_su"], y=no_ckd_df["Glucose_su"], label="NO CKD" )
plt.xlabel("Hemoglobin")
plt.ylabel("Glucose")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Nearest neighbor
###Code
new_point = [0, 1.5]
###Output
_____no_output_____
###Markdown
**Distance between new_point and labeled points**
###Code
import math as m
def euclide_distance(point1_x, point1_y, point2_x, point2_y):
temp = (point1_x - point2_x)**2 + (point1_y - point2_y)**2
return m.sqrt(temp)
distances_to_new_patient = []
for index, row in df.iterrows():
point1_x = row["Hemoglobin_su"]
point1_y = row["Glucose_su"]
distance = euclide_distance(point1_x, point1_y, new_point[0], new_point[1])
distances_to_new_patient.append(distance)
df["Distance"] = distances_to_new_patient
# obtain the 10 closest points
df = df.sort_values(["Distance"], ascending=True)
closest_points = df.head(10)
closest_points
# find the most common "Class" among the closest points
closest_points["Class"].mode().values[0]
new_patient = [0, 1.5]
yes_ckd_df = closest_points[closest_points["Class"] == 1]
no_ckd_df = closest_points[closest_points["Class"] == 0]
plt.scatter(x=new_patient[0], y=new_patient[1], color="red", label="Unknown")
plt.scatter(x=yes_ckd_df["Hemoglobin_su"], y=yes_ckd_df["Glucose_su"], label="YES CKD" )
plt.scatter(x=no_ckd_df["Hemoglobin_su"], y=no_ckd_df["Glucose_su"], label="NO CKD" )
plt.xlabel("Hemoglobin")
plt.ylabel("Glucose")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Decision boundary
###Code
def classify(reference_points, new_point, k_neighbors=1):
distances_to_new_patient = []
    for index, row in reference_points.iterrows():
point1_x = row["Hemoglobin_su"]
point1_y = row["Glucose_su"]
distance = euclide_distance(point1_x, point1_y, new_point[0], new_point[1])
distances_to_new_patient.append(distance)
reference_points["Distance"] = distances_to_new_patient
reference_points = reference_points.sort_values(["Distance"], ascending=True)
closest_points = reference_points.head(k_neighbors)
predicted_label = closest_points["Class"].mode().values[0]
return predicted_label
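# As an optional cross-check (assuming scikit-learn is available; not part of the original lesson),
# the same 10-nearest-neighbour prediction could be obtained with:
# from sklearn.neighbors import KNeighborsClassifier
# knn = KNeighborsClassifier(n_neighbors=10)
# knn.fit(df[["Hemoglobin_su", "Glucose_su"]], df["Class"])
# knn.predict([[0, 1.5]])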
import numpy as np
import matplotlib.pyplot as plt
yes_ckd_df = df[df["Class"] == 1]
no_ckd_df = df[df["Class"] == 0]
hemoglobins = np.arange(-2, 2, 0.1)
glucoses = np.arange(-2, 2, 0.1)
for h in hemoglobins:
for g in glucoses:
new_point = [h, g]
predicted_val = classify(df, new_point, k_neighbors=1)
if predicted_val == 1:
plt.scatter(x=h, y=g, color="blue", alpha=0.2)
else:
plt.scatter(x=h, y=g, color="orange", alpha=0.2)
plt.scatter(x=yes_ckd_df["Hemoglobin_su"], y=yes_ckd_df["Glucose_su"], color="blue", label="YES CKD" )
plt.scatter(x=no_ckd_df["Hemoglobin_su"], y=no_ckd_df["Glucose_su"], color="orange", label="NO CKD" )
plt.xlabel("Hemoglobin")
plt.ylabel("Glucose")
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/Result-Analyse/Modeling-Differences/i2b2-modeling-differences.ipynb | ###Markdown
Differences that modeling choices make to the baseline model on the i2b2 data. For reference, the command that was run within scripts/ was ```CUDA_VISIBLE_DEVICES= python main.py -- --dataset=i2b2 --preprocessing_type= --border_size=-1 --num_epoches=150 --lr_values 0.001 0.0001 0.00001 --lr_boundaries 60 120``` This was obtained after preliminary hyperparameter tuning; other options exist, such as --use_elmo
###Code
# command for the old data - just classification
# for reference, command that was run within scripts/ was ```CUDA_VISIBLE_DEVICES=<device_no> python main.py --<cross_validate/use_test> --dataset=i2b2 --preprocessing_type=<entity_blinding/punct_digit/punct_stop_digit> --num_epoches=100 --lr_values 0.001 0.0001 --lr_boundaries 70```
# This was gotten after preliminary hyperparameter tuning
from scipy.stats import ttest_rel
def paired_ttest(score1, score2):
all_three_macroF1_score1 = [x for x in zip(*score1)]
all_three_macroF1_score2 = [x for x in zip(*score2)]
ttests = [ttest_rel(macro_f1_score1, macro_f1_score2)
for macro_f1_score1, macro_f1_score2 in zip(all_three_macroF1_score1, all_three_macroF1_score2)]
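    # each element of ttests is a (statistic, pvalue) pair comparing one evaluation column across the two score lists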
print('8 way evaluation: \t', ttests[0])
print('2 way evaluation: \t', ttests[1])
print('Problem-Treatment: \t', ttests[2])
print('Problem-Test: \t\t', ttests[3])
print('Problem-Problem: \t\t', ttests[4])
###Output
_____no_output_____
###Markdown
First compare the cross validated score differences
###Code
# the commented out values are those for the old dataset
# baseline_test = (84.37, 68.76, 90.68, 90.6)
# # model ID 6198ab41-3183-40f3-9254-d86a2b26e4ed on gray - deleted (let's keep results in harrison)
# below is for the new model but with the border size of 50
# baseline_test = (62.83, 86.55, 50.1, 78.48, 47.64)
# model ID 7789e891-fb56-433f-9e4c-006d81a89802 on harrison
baseline_test = (59.75, 83.17, 52.42, 70.91, 54.75)
#for baseline model with ID b960aa6a-1ff1-4c76-897a-4b1d289f86eb
# (8way, 2way, Prob-Treat, Prob-Test, Prob-Prob)
# results on the cross validation reporting
baseline = [(68.75, 86.54, 62.35, 75.95, 68.24), (71.29, 87.1, 65.38, 78.26, 70.25),
(70.53, 87.05, 64.92, 77.36, 70.16), (69.66, 85.72, 64.75, 77.12, 66.44),
(70.26, 85.85, 64.99, 77.46, 68.4)]
# model ID cd087669-3124-4899-ae93-107abfaa13a6
# 70.10 +- 0.85 86.45 +- 0.58 64.48 +- 1.08 77.23 +- 0.75
# # # Still need to run this baseline
# # #baseline = currently running on harrison Feb 15, 2019
# # # temp baseline for now
# # # baseline = [(90.35, 84.26, 92.58, 92.86), (88.71, 77.25, 92.89, 93.27), (89.57, 81.2, 92.55, 93.16),
# # # (86.16, 75.21, 89.89, 91.82), (87.79, 78.66, 92.47, 89.47)]
# # baseline = [(89.65, 83.48, 91.88, 92.04), (88.47, 79.31, 91.69, 92.31), (90.52, 83.62, 92.59, 94.02),
# # (88.07, 78.79, 92.35, 90.35), (88.73, 81.67, 92.11, 90.52)]
# # # model ID de365f82-b85d-415a-acb5-c43d7e7f4040 on gray
# baseline = [(73.82, 88.97, 68.6, 83.79, 61.61), (73.7, 88.71, 63.07, 84.99, 65.5),
# (72.99, 88.88, 66.67, 81.54, 64.39), (72.01, 89.88, 57.96, 85.19, 64.79),
# (72.04, 88.15, 64.34, 83.54, 61.41)]
# # model ID 3244b20d-e82f-44f1-a459-46f66e132481 in models_to_keep data medg misc
elmo_model = [(72.08, 87.9, 65.25, 79.05, 73.17), (72.86, 87.93, 67.69, 78.3, 73.31),
(73.2, 88.03, 68.09, 79.65, 72.24), (71.19, 87.14, 63.98, 79.92, 69.93),
(73.34, 88.06, 66.54, 82.07, 71.43)]
# model ID d4bce62a-233c-4d6a-9ef4-2d088dea0a3b
# 72.53 +- 0.80 87.81 +- 0.34 66.31 +- 1.53 79.80 +- 1.27
# # below is with the PubMed model weights
# elmo_model = [(73.54, 89.67, 67.19, 83.25, 62.8), (76.66, 90.11, 70.09, 85.57, 68.1),
# (74.17, 90.16, 68.6, 83.55, 63.93), (74.85, 90.72, 66.67, 85.56, 64.68),
# (73.88, 88.41, 68.18, 84.65, 61.4)]
# # model ID 4c162539-5a8e-4c4b-bd91-e4bbf1e26dee
# # elmo_model = [(74.05, 89.41, 63.45, 85.94, 65.42), (72.51, 89.99, 63.57, 84.46, 61.61),
# # (74.97, 89.71, 69.42, 83.12, 66.96), (70.67, 87.77, 64.17, 81.65, 58.56),
# # (74.7, 90.83, 66.13, 84.97, 66.04)]
# # model ID a4ba512c-c0d2-4911-8eb5-1a236b4f2457
# # below is with the problematic folds
# # elmo_model = [(72.1, 89.16, 65.29, 82.14, 61.32), (51.91, 85.78, 42.93, 71.18, 0.0),
# # (49.7, 83.13, 44.59, 65.68, 0.0), (44.61, 84.64, 22.86, 64.25, 0.0),
# # (45.57, 84.01, 36.59, 60.35, 0.0)]
# # model ID 5a13415b-3f9c-4554-ad55-b150e64456ea -- need to delete
# # 52.78 +- 10.02 85.34 +- 2.09 42.45 +- 13.74 68.72 +- 7.56
# # Above indicates a problem with the way that the data has been split - because the std is too high
# # seed for splitting should be changed in this case.
piecewise_model = [(73.43, 89.22, 69.11, 80.08, 70.43), (74.36, 89.89, 71.91, 76.03, 75.86),
(75.37, 89.98, 73.56, 80.6, 70.27), (73.11, 89.05, 69.94, 79.0, 69.01),
(72.67, 88.3, 70.87, 79.74, 64.67)]
# model ID fb56fba5-e514-4d7c-aaa6-b39556755d4f
# 73.79 +- 0.97 89.29 +- 0.61 71.08 +- 1.55 79.09 +- 1.62
# piecewise_model = [(73.47, 89.54, 70.23, 80.0, 64.76), (76.0, 90.5, 67.47, 85.93, 67.86),
# (75.66, 89.97, 73.02, 83.38, 65.18),
# (74.41, 90.78, 66.4, 85.19, 64.81), (73.34, 89.11, 68.42, 83.92, 60.44)]
# # model ID 50f2975f-fb21-4805-b380-b305a1e04ca2
# #74.58 +- 1.09 89.98 +- 0.61 69.11 +- 2.33 83.68 +- 2.05
bert_CLS = [(65.83, 84.93, 58.67, 73.8, 66.22), (69.0, 86.03, 61.71, 76.11, 71.23),
(68.06, 85.43, 60.45, 76.96, 68.37), (66.97, 85.28, 59.53, 76.6, 65.54),
(66.98, 85.46, 60.19, 75.16, 66.24)]
# model ID 47bd09bf-af9e-4859-8942-b106d4731b04
# 67.37 +- 1.08 85.43 +- 0.36 60.11 +- 1.01 75.73 +- 1.14
bert_tokens = [(71.23, 87.51, 63.08, 79.57, 72.47), (72.91, 88.47, 65.78, 80.0, 74.23),
(73.24, 87.83, 67.68, 79.74, 73.14), (69.78, 86.21, 64.0, 77.54, 67.6),
(73.16, 87.81, 67.32, 80.78, 71.28)]
# model ID 061331e0-087c-46b0-b53e-7aab8ac87801
# 72.06 +- 1.36 87.57 +- 0.75 65.57 +- 1.80 79.53 +- 1.08
paired_ttest(baseline, piecewise_model)
paired_ttest(baseline, elmo_model)
paired_ttest(baseline, bert_CLS)
paired_ttest(baseline, bert_tokens)
paired_ttest(elmo_model, bert_tokens)
# elmo_model_general_big = [(75.32, 90.5, 72.43, 84.05, 62.2), (75.41, 90.31, 65.25, 85.71, 67.89),
# (74.58, 90.03, 67.5, 83.12, 66.99), (72.68, 90.52, 61.22, 84.8, 64.0),
# (73.52, 88.66, 69.02, 83.84, 60.44)]
# # model ID 750a3dd2-6719-43f5-ad01-12c234b4fda5
# # 74.30 +- 1.06 90.00 +- 0.69 67.08 +- 3.75 84.30 +- 0.88
###Output
_____no_output_____
###Markdown
Additional Experiments for i2b2 Entity blinding (+ Elmo) (+Bert)
###Code
# this is on the evaluation fold
entity_blinding_elmo = [(76.12, 88.88, 72.73, 77.35, 79.73), (78.88, 90.03, 74.54, 83.51, 78.77),
(78.26, 89.54, 74.79, 82.86, 76.77), (76.25, 88.7, 74.55, 77.49, 77.18),
(78.99, 89.67, 75.68, 82.86, 78.35)]
# model ID a484fac5-02c9-4005-8210-7c0b824b1d34
# 77.70 +- 1.26 89.36 +- 0.50 74.46 +- 0.96 80.81 +- 2.78
# entity_blinding_elmo = [(76.16, 90.24, 75.95, 82.05, 65.74), (77.29, 89.86, 73.21, 85.71, 66.67),
# (79.58, 90.93, 76.19, 86.22, 71.17), (80.19, 91.49, 77.92, 85.57, 73.21),
# (77.21, 89.43, 75.32, 84.03, 66.67)]
# #model ID 4f446314-3da7-43fd-bc98-d1c0507098bd
# # 78.09 +- 1.53 90.39 +- 0.74 75.72 +- 1.52 84.72 +- 1.52
# # this is with PubMed elmo
entity_blinding_bert_tokens = [(76.05, 88.24, 71.98, 79.32, 77.55), (77.24, 89.11, 73.64, 82.23, 75.42),
(76.61, 88.66, 73.22, 80.34, 76.19), (75.34, 88.31, 72.03, 78.45, 76.03),
(78.38, 88.84, 75.68, 83.65, 74.51)]
# model ID 32e95086-c338-4660-9d36-03c707601021
# 76.72 +- 1.04 88.63 +- 0.33 73.31 +- 1.35 80.80 +- 1.90
# execution time 32 hours
###Output
_____no_output_____
###Markdown
Entity blind + Piecewise pool (+Elmo) (+Bert tokens)
###Code
entity_blinding_piecewise_pool_elmo = [(79.05, 90.68, 74.0, 83.54, 80.41), (79.01, 90.62, 73.94, 83.33, 80.68),
(79.11, 90.13, 75.92, 83.58, 77.29), (79.46, 89.63, 76.95, 83.9, 76.61),
(80.41, 90.8, 77.58, 84.75, 78.26)]
# model ID 6e655ec8-3ec9-4c14-adc6-982974aa2cbb
# 79.41 +- 0.53 90.37 +- 0.44 75.68 +- 1.49 83.82 +- 0.50
entity_blinding_piecewise_pool_bert_tokens = [(78.37, 90.54, 73.05, 83.19, 79.86), (80.31, 90.86, 76.49, 83.68, 81.36),
(79.47, 89.93, 77.89, 82.16, 77.74), (78.31, 89.54, 75.45, 82.91, 75.79),
(81.11, 90.85, 79.43, 86.13, 75.84)]
# model ID 7e084293-d2a7-4033-8fe4-164beee8ffdf
# 79.51 +- 1.09 90.34 +- 0.53 76.46 +- 2.17 83.61 +- 1.35
###Output
_____no_output_____
###Markdown
Entity blind + piecewise pool
###Code
# this is on the cross val report mode
entity_blinding_piecewise_pool = [(76.34, 89.41, 71.94, 79.83, 78.15), (79.1, 90.52, 75.7, 82.25, 79.73),
(78.64, 89.59, 75.45, 83.9, 75.68), (77.37, 89.29, 74.51, 81.09, 76.29),
(79.17, 89.87, 78.75, 82.2, 75.08)]
# model ID b9128322-cbcf-4d5c-944b-e4fc26db38c4
# 78.12 +- 1.10 89.74 +- 0.44 75.27 +- 2.19 81.85 +- 1.35
# entity_blinding_piecewise_pool = [(76.23, 90.24, 76.73, 81.41, 66.67), (78.66, 90.37, 77.12, 85.57, 68.12),
# (80.56, 91.18, 79.49, 85.43, 72.89), (78.87, 90.65, 79.31, 85.35, 66.96),
# (77.38, 89.68, 74.4, 85.29, 66.37)]
# #model ID 03b9fe97-5692-47de-95b4-11afe90114ad
# # 78.34 +- 1.46 90.42 +- 0.49 77.41 +- 1.87 84.61 +- 1.60
###Output
_____no_output_____
###Markdown
Piecewise pool (+Elmo) (+Bert)
###Code
piecewise_pool_elmo = [(75.06, 89.8, 69.65, 81.86, 73.5), (74.22, 90.23, 69.77, 77.57, 76.66),
(75.79, 90.32, 72.34, 81.29, 73.1), (73.85, 89.88, 69.57, 79.83, 71.89),
(74.9, 89.28, 71.35, 82.22, 69.63)]
# model ID 1e21fcb0-2fd5-4edf-b317-68634c759c19
# 74.76 +- 0.68 89.90 +- 0.37 70.54 +- 1.12 80.55 +- 1.70
# piecewise_pool_elmo = [(75.14, 90.37, 71.21, 82.53, 66.02), (77.23, 91.26, 70.68, 85.14, 70.32),
# (77.14, 90.86, 72.73, 83.8, 70.14), (77.27, 91.93, 71.26, 87.23, 66.67),
# (72.77, 88.54, 67.15, 84.71, 58.18)]
# # model ID 0b105264-9ef7-4266-a7e5-f53d1d7d1099
# # 75.91 +- 1.76 90.59 +- 1.15 70.61 +- 1.86 84.68 +- 1.56
piecewise_pool_bert_tokens = [(74.28, 89.71, 67.7, 84.26, 69.86), (74.04, 90.08, 69.11, 78.13, 76.22),
(76.06, 90.52, 72.87, 81.55, 72.98), (73.64, 88.61, 71.54, 79.15, 68.33),
(75.32, 89.13, 72.12, 82.02, 70.38)]
# model ID 19af6aae-16ae-4440-af06-47b120c29d2b
# 74.67 +- 0.89 89.61 +- 0.68 70.67 +- 1.95 81.02 +- 2.17
###Output
_____no_output_____
###Markdown
Paired ttests
###Code
paired_ttest(elmo_model, entity_blinding_elmo)
paired_ttest(bert_tokens, entity_blinding_bert_tokens)
paired_ttest(entity_blinding_elmo, entity_blinding_bert_tokens)
paired_ttest(elmo_model, entity_blinding_piecewise_pool_elmo)
paired_ttest(bert_tokens, entity_blinding_piecewise_pool_bert_tokens)
paired_ttest(entity_blinding_piecewise_pool_elmo, entity_blinding_piecewise_pool_bert_tokens)
paired_ttest(piecewise_model, entity_blinding_piecewise_pool)
paired_ttest(elmo_model, piecewise_pool_elmo)
paired_ttest(bert_tokens, piecewise_pool_bert_tokens)
paired_ttest(piecewise_pool_elmo, piecewise_pool_bert_tokens)
###Output
8 way evaluation: Ttest_relResult(statistic=0.45556505757824095, pvalue=0.6723368568398465)
2 way evaluation: Ttest_relResult(statistic=1.1543756519376487, pvalue=0.31261871958740944)
Problem-Treatment: Ttest_relResult(statistic=-0.19781325353145715, pvalue=0.8528372108875544)
Problem-Test: Ttest_relResult(statistic=-0.8885335654210631, pvalue=0.4244580968715774)
Problem-Problem: Ttest_relResult(statistic=1.5267259360493228, pvalue=0.2015362091550892)
###Markdown
The piecewise pool model is better for i2b2. The elmo model does not seem statistically significantly different from the baseline model, but the above is with a pickle splitting seed of 2 rather than 5, which is the default. Test score results for the above are (all model IDs are in the shared NFS folder): (border size -1) ```(59.75, 83.17, 52.42, 70.91, 54.75)``` for the baseline model with ID b960aa6a-1ff1-4c76-897a-4b1d289f86eb; ```(60.85, 83.69, 52.34, 72.72, 57.08)``` for the piecewise pool model with model ID c1a272c2-0268-4641-bb7d-be7e32d3b836; ```(63.18, 84.54, 54.73, 74.89, 59.55)``` for the elmo model with model ID 2ef144cd-0d7d-4b01-942f-7b65380f9490. *** BERT (from clinical data - Emily's training): `(56.79, 81.91, 48.56, 69.52, 52.16)` for the baseline model with bert CLS simple bert appending (to the fixed size sentence rep) with model ID 1458f1db-0290-4d8e-97e7-d5c298cfb683. Another run (just to verify): `(56.36, 82.05, 47.46, 69.66, 52.22)` with model ID d67c42a6-9410-481f-ab37-17021261e32e. `(63.11, 84.91, 54.53, 75.62, 57.49)` for the baseline model with bert token level addition with model ID b5576118-9d6e-4b0a-948b-782705826a55
###Code
# Test score results for the above are (all model IDs the shared NFS folder): (with border size 50)
# ```(62.83, 86.55, 50.1, 78.48, 47.64)``` for baseline model with ID 7789e891-fb56-433f-9e4c-006d81a89802
# ```(66.73, 88.08, 54.74, 81.24, 51.28)``` for elmo model with model ID 63f1e537-da50-495c-be8f-fabd209a058c
# ```(64.67, 87.07, 53.88, 79.52, 47.58)``` for piecewise pool model with model ID 15344c2c-1f2a-4420-9000-83c2be452129
###Output
_____no_output_____
###Markdown
Additional experiments: `(70.46, 86.17, 61.92, 78.32, 71.67)` for the elmo model and entity blinding with ID 1df015ba-d906-42c0-b22a-1db930cfc9d6; `(70.62, 86.14, 60.95, 78.67, 73.94)` for the piecewise pool model and entity blinding with elmo, with ID d0b840dc-fcab-4144-9714-37e82f2b95ec; `(69.73, 85.44, 60.03, 77.19, 73.9)` for the entity blinding and piecewise pool model with ID b9bc6c62-5ca8-4aa5-98e8-61eb3536209c; `(63.19, 84.92, 54.13, 74.81, 61.66)` for the piecewise pool model and elmo with ID b6a9db36-b334-41b0-a103-ee01cde0f34c; `(70.56, 85.66, 61.68, 78.39, 72.34)` for the bert tokens model and entity blinding with ID fe40eb2f-52b5-45dd-94a2-16f84973effd; `(71.01, 86.26, 61.71, 79.1, 73.77)` for the bert tokens model with entity blinding and piecewise pooling with model ID ceffcfde-a039-4e5e-bae9-8176f3e99868; `(63.23, 85.45, 54.76, 75.03, 59.44)` for the bert tokens model with piecewise pooling with model ID 49c14cda-f3f3-4eb5-a77f-4860363cfbae
###Code
# with border size 50
# `(73.03, 88.79, 64.25, 84.19, 59.2)` for the elmo model and entity blinding with ID 63d9fda1-2931-4dec-b7e9-cfd56cae58e8
# `(73.38, 89.0, 64.75, 84.78, 58.5)` for the piecewise pool model and entity blinding with elmo and ID is eb55046d-7bdd-4fc7-9f0c-c40c9808e8a6
# `(72.75, 88.17, 65.95, 83.13, 58.59)` for the entity blinding and piecewise pool model with ID 7c46e59a-e335-44c5-90c3-ce4782ab2f66
# `(67.01, 88.05, 55.66, 81.75, 50.25)` for the piecewise pool model and elmo with ID 1e76f364-8509-4106-8280-6b862b920e70
# border size 50
# Elmo model with the embeddings of the large model returns a result of `(65.05, 87.62, 51.74, 80.6, 48.43)` with model ID 77cea5cb-ab0c-482d-b9f9-762b0eb1ee28
# ```(64.8, 87.02, 55.43, 78.23, 47.29)``` for elmo model with model ID fd25ca11-27fc-4b89-816e-22867aa586a6 for the old elmo model
###Output
_____no_output_____ |
build_data/Build Data GMO.ipynb | ###Markdown
Load Data from Quandl
###Code
file_key = open("../dev/archive/quandl_key.txt","r")
API_KEY = file_key.read()
file_key.close()
quandl.ApiConfig.api_key = API_KEY
start_date = '1991-10-01'
end_date = '2021-10-31'
sigs_ticks = ["MULTPL/SP500_DIV_YIELD_MONTH","MULTPL/SP500_EARNINGS_YIELD_MONTH","YC/USA10Y"]
sigs_names = ['DP','EP', 'US10Y']
sigs_info = pd.DataFrame({'Name':sigs_names,'Ticker':sigs_ticks}).set_index('Name')
signals = pd.DataFrame()
for idx,tick in enumerate(sigs_info['Ticker']):
temp = quandl.get(tick, start_date=start_date, end_date=end_date)
temp.columns = [sigs_info.index[idx]]
signals = signals.join(temp,rsuffix='_',how='outer')
# some monthly data reported at start of month--assume we do not have it until end of month
signals = signals.resample('M').last()
signals.columns.name = 'SP500 Multiples'
signals
spy_tick = 'EOD/SPY'
data = quandl.get(spy_tick, start_date=start_date, end_date=end_date)[['Adj_Close']]
spy = data.resample('M').last().pct_change()
spy.rename(columns={'Adj_Close':'SPY'},inplace=True)
rf_tick = 'YC/USA3M'
data = quandl.get(rf_tick, start_date=start_date, end_date=end_date)
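# convert the annualized 3-month yield (quoted in percent) to a monthly decimal rate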
rf = data.resample('M').last()/(12*100)
rf.rename(columns={'Rate':'US3M'},inplace=True)
gmo_tick = 'GMWAX'
data = yf.download(gmo_tick, start=start_date, end=end_date)['Adj Close']
gmo = data.resample('M').last().pct_change()
gmo.name = gmo_tick
gmo.dropna(inplace=True)
rets = spy.join(gmo,how='outer')
rets.dropna(axis=0,inplace=True,how='all')
rets
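# keep only the month-end dates shared by the signals, the returns, and the risk-free rate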
signals, rets = signals.align(rets,join='inner',axis=0)
rf, _ = rf.align(rets,join='inner',axis=0)
###Output
_____no_output_____
###Markdown
Save Data to Excel
###Code
with pd.ExcelWriter('gmo_analysis_data.xlsx') as writer:
sigs_info.to_excel(writer, sheet_name = 'descriptions')  # `info` was undefined in this excerpt; `sigs_info` holds the ticker descriptions
signals.to_excel(writer, sheet_name= 'signals')
rets.to_excel(writer, sheet_name='returns (total)')
rf.to_excel(writer, sheet_name='risk-free rate')
###Output
_____no_output_____ |
corona.ipynb | ###Markdown
###Code
# install chromium, its driver, and selenium
#!apt install chromium-chromedriver
#!pip install selenium
# set options to be headless, ..
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
# open it, go to a website, and get results
wd = webdriver.Chrome('chromedriver',options=options)
wd.get("https://www.google.com")
print(wd.title) # results
# divs = wd.find_elements_by_css_selector('div')
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys
import re
import time
import requests
import random
import os
import os.path
import csv
import datetime
import random
from getpass import getpass
import time
import random
from dateutil import parser
import calendar
from urllib.request import Request, urlopen
from fake_useragent import UserAgent
import pandas as pd
import matplotlib.pyplot as plt
import json
plt.style.use('fivethirtyeight')
import warnings
warnings.filterwarnings("ignore")
idx = datetime.date.today()
t0 = time.time()
#********************************************
topic = "coronavirus"
keyword = "Coronavirus India"
website = "https://www.google.com"
browser = webdriver.Chrome('chromedriver',options=options)
browser.get(website)
browser.maximize_window()
def writerows(rows, filename):
with open(filename, 'a', encoding='utf-8') as toWrite:
writer = csv.writer(toWrite)
writer.writerows([rows])
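# scrapePage: parse the current Google results page with BeautifulSoup and append one
# [keyword, rank, title, result metadata, snippet, link] row per organic result to the CSV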
def scrapePage(keyword,rank):
soup = BeautifulSoup(browser.page_source,"html.parser")
result_block = soup.find_all('div', attrs={'class': 'g'})
for r in result_block:
link = r.find('a', href=True)
title = r.find('h3')
description = r.find('span', attrs={'class': 'st'})
if r.find(class_="f"):
res = r.find(class_="f").text
else:
res = "-1"
if link and title and description:
link = link['href']
title = title.get_text().strip()
if description:
description = description.get_text().strip()
if link != '#':
rank += 1
row = [keyword,rank,title,res,description,link]
print(row)
print(30*"--")
writerows(row,outputFile)
outputFile = "C:/CART/{}_{}.csv".format(topic.title().replace(" ",""),idx)
print(outputFile)
time.sleep(random.randint(1,100)/50)
search = browser.find_element_by_class_name("gLFyf")
search.clear()
search.send_keys(keyword)
search.send_keys(Keys.RETURN);
time.sleep(random.randint(40,100)/50)
soup = BeautifulSoup(browser.page_source,"html.parser")
time.sleep(random.randint(40,100)/50)
browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(2)
count = 0
search = browser.find_element_by_class_name("gLFyf")
search.clear()
search.send_keys(keyword)
search.send_keys(Keys.RETURN);
time.sleep(random.randint(1,100)/50)
browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
noPages = int(soup.findAll("a", class_="fl")[-1].text)
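# step through the remaining result pages, scraping each one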
for i in range(noPages):
browser.find_element_by_id("pnnext").click()
time.sleep(random.randint(25,100)/25)
browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
count += 1
scrapePage(keyword,count)
# `df` is never built above in this excerpt; read the scraped rows back from the CSV
# (column names are assumed from the rows written by scrapePage)
df = pd.read_csv(outputFile, names=['Keyword', 'Rank', 'Title', 'Meta', 'Description', 'Link'])
df['Source'] = df['Link'].apply(lambda x: x.split("//")[1].split("/")[0])
excl = ['books.google.com.in']
df= df[~df['Source'].isin(excl)]
print(df.shape)
print(df['Source'].value_counts())
#df[df['Source']=="forum.lowyat.net"]
df.to_csv("C:/CART/{}_prep_{}.csv".format(topic.title().replace(" ",""),idx),encoding="utf-8",index=None)
df.head()
import nltk
from nltk.util import ngrams
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.collocations import *
from nltk.corpus import stopwords
from collections import Counter
# modules for generating the word cloud
from os import path, getcwd
from PIL import Image
from wordcloud import WordCloud, ImageColorGenerator
#tokenizer = nltk.data.load('nltk:tokenizers/punkt/english.pickle')
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
field ="Description"
df[field] = df[field].astype(str)
df[field] = df[field].apply(lambda x: x.lower())
print(len(df[field].str.cat(sep=', ')))
#All property Description
print("concat desc")
df[field] = df[field].astype(str)
df[field] = df[field].apply(lambda x: x.lower())
tw = df[field].str.cat(sep=', ')
print('concat length',len(tw))
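# strip URLs and non-alphabetic characters before tokenizing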
tw = re.sub(r'^https?:\/\/.*[\r\n]*', '', tw, flags=re.MULTILINE)
tw = re.sub(r'[^a-zA-Z\s]', ' ', tw)
tw = tw.replace("netflix","").replace("hahaha","").replace("hahah","").replace("haha","")
#tw = re.sub(r'[^a-zA-Z0-9\s]', ' ', tw)
#tw = re.sub(r'[\W_]', ' ', tw)
tok_text = tokenizer.tokenize(tw)
print('tokenize length',len(tok_text))
#remove stopwords
stopwords = nltk.corpus.stopwords.words('english')
malayStopWords =['di','pada','kat',"ke","ko","ye"]
word_set = set(w for w in tok_text if w.lower() not in stopwords)
print('length after english stop words',len(word_set))
word_set = set(w for w in word_set if w.lower() not in malayStopWords)
print('length after malay stop words',len(word_set))
print(len(word_set))
#get topwords and leastwords
# count over the full token stream (not the de-duplicated set), otherwise every word has a count of 1
word_count_dict = Counter(w.lower() for w in tok_text if w.lower() not in stopwords and w.lower() not in malayStopWords)
w = word_count_dict.most_common()
print(len(w))
w = pd.DataFrame(w)
w = w.rename(columns={0:'word',1:'count'})
desc = w[~w.word.str.match('^\-?(\d*\.?\d+|\d+\.?\d*)$')]
#remove top words
topWord = list(desc[:10]["word"])
print(list(desc[:100]["word"]))
#leastWord = list(desc[desc['count']==1]["word"])
#word_set = [x for x in word_set if x not in leastWord]
text = ' '.join(word_set).lower()
print("done.....................")
from wordcloud import WordCloud, STOPWORDS
# from imageio import imread  # only needed for the commented-out mask below; scipy.misc.imread was removed in newer SciPy releases
wordcloud = WordCloud(
width = 1500, height=1500,
stopwords=STOPWORDS,
background_color='black',
#mask=imread('C:/Users/Faizal/Anaconda3/my_map2/lib/images/train.png'),
).generate(text)
plt.figure(figsize=(14,14))
plt.title(keyword, color="grey", size=25, y=1.01)
plt.imshow(wordcloud)
plt.axis('off')
#plt.text(x = 0, y = 1450, fontsize = 15, alpha = 1,color = 'white', s = " www.redangpow.com ", backgroundcolor="grey")
plt.savefig('F:/RAP Cares/{}_{}.png'.format(topic.title().replace(" ",""),idx), transparent=True, bbox_inches='tight',papertype = 'a4')
plt.show()
import collections
ngr = ngrams(tok_text, 5)
ng = collections.Counter(ngr).most_common()
ng = pd.DataFrame(ng)
ng.columns = ["Phrase"," Count"]
ng["Phrase"] = ng["Phrase"].apply(lambda x: " ".join(x))
ng[:20]
ngr = ngrams(tok_text, 4)
ng = collections.Counter(ngr).most_common()
ng = pd.DataFrame(ng)
ng.columns = ["Phrase"," Count"]
ng["Phrase"] = ng["Phrase"].apply(lambda x: " ".join(x))
ng[:20]
ngr = ngrams(tok_text, 3)
ng = collections.Counter(ngr).most_common()
ng = pd.DataFrame(ng)
ng.columns = ["Phrase"," Count"]
ng["Phrase"] = ng["Phrase"].apply(lambda x: " ".join(x))
ng[:20]
ngr = ngrams(tok_text, 2)
ng = collections.Counter(ngr).most_common()
ng = pd.DataFrame(ng)
ng.columns = ["Phrase"," Count"]
ng["Phrase"] = ng["Phrase"].apply(lambda x: " ".join(x))
ng[:20]
ngr = ngrams(tok_text, 1)
ng = collections.Counter(ngr).most_common()
ng = pd.DataFrame(ng)
ng.columns = ["Phrase"," Count"]
ng["Phrase"] = ng["Phrase"].apply(lambda x: " ".join(x))
ng[:20]
###Output
_____no_output_____
###Markdown
###Code
import zlib
import lzma
cc = """1 attaaaggtt tataccttcc caggtaacaa accaaccaac tttcgatctc ttgtagatct
61 gttctctaaa cgaactttaa aatctgtgtg gctgtcactc ggctgcatgc ttagtgcact
121 cacgcagtat aattaataac taattactgt cgttgacagg acacgagtaa ctcgtctatc
181 ttctgcaggc tgcttacggt ttcgtccgtg ttgcagccga tcatcagcac atctaggttt
241 cgtccgggtg tgaccgaaag gtaagatgga gagccttgtc cctggtttca acgagaaaac
301 acacgtccaa ctcagtttgc ctgttttaca ggttcgcgac gtgctcgtac gtggctttgg
361 agactccgtg gaggaggtct tatcagaggc acgtcaacat cttaaagatg gcacttgtgg
421 cttagtagaa gttgaaaaag gcgttttgcc tcaacttgaa cagccctatg tgttcatcaa
481 acgttcggat gctcgaactg cacctcatgg tcatgttatg gttgagctgg tagcagaact
541 cgaaggcatt cagtacggtc gtagtggtga gacacttggt gtccttgtcc ctcatgtggg
601 cgaaatacca gtggcttacc gcaaggttct tcttcgtaag aacggtaata aaggagctgg
661 tggccatagt tacggcgccg atctaaagtc atttgactta ggcgacgagc ttggcactga
721 tccttatgaa gattttcaag aaaactggaa cactaaacat agcagtggtg ttacccgtga
781 actcatgcgt gagcttaacg gaggggcata cactcgctat gtcgataaca acttctgtgg
841 ccctgatggc taccctcttg agtgcattaa agaccttcta gcacgtgctg gtaaagcttc
901 atgcactttg tccgaacaac tggactttat tgacactaag aggggtgtat actgctgccg
961 tgaacatgag catgaaattg cttggtacac ggaacgttct gaaaagagct atgaattgca
1021 gacacctttt gaaattaaat tggcaaagaa atttgacacc ttcaatgggg aatgtccaaa
1081 ttttgtattt cccttaaatt ccataatcaa gactattcaa ccaagggttg aaaagaaaaa
1141 gcttgatggc tttatgggta gaattcgatc tgtctatcca gttgcgtcac caaatgaatg
1201 caaccaaatg tgcctttcaa ctctcatgaa gtgtgatcat tgtggtgaaa cttcatggca
1261 gacgggcgat tttgttaaag ccacttgcga attttgtggc actgagaatt tgactaaaga
1321 aggtgccact acttgtggtt acttacccca aaatgctgtt gttaaaattt attgtccagc
1381 atgtcacaat tcagaagtag gacctgagca tagtcttgcc gaataccata atgaatctgg
1441 cttgaaaacc attcttcgta agggtggtcg cactattgcc tttggaggct gtgtgttctc
1501 ttatgttggt tgccataaca agtgtgccta ttgggttcca cgtgctagcg ctaacatagg
1561 ttgtaaccat acaggtgttg ttggagaagg ttccgaaggt cttaatgaca accttcttga
1621 aatactccaa aaagagaaag tcaacatcaa tattgttggt gactttaaac ttaatgaaga
1681 gatcgccatt attttggcat ctttttctgc ttccacaagt gcttttgtgg aaactgtgaa
1741 aggtttggat tataaagcat tcaaacaaat tgttgaatcc tgtggtaatt ttaaagttac
1801 aaaaggaaaa gctaaaaaag gtgcctggaa tattggtgaa cagaaatcaa tactgagtcc
1861 tctttatgca tttgcatcag aggctgctcg tgttgtacga tcaattttct cccgcactct
1921 tgaaactgct caaaattctg tgcgtgtttt acagaaggcc gctataacaa tactagatgg
1981 aatttcacag tattcactga gactcattga tgctatgatg ttcacatctg atttggctac
2041 taacaatcta gttgtaatgg cctacattac aggtggtgtt gttcagttga cttcgcagtg
2101 gctaactaac atctttggca ctgtttatga aaaactcaaa cccgtccttg attggcttga
2161 agagaagttt aaggaaggtg tagagtttct tagagacggt tgggaaattg ttaaatttat
2221 ctcaacctgt gcttgtgaaa ttgtcggtgg acaaattgtc acctgtgcaa aggaaattaa
2281 ggagagtgtt cagacattct ttaagcttgt aaataaattt ttggctttgt gtgctgactc
2341 tatcattatt ggtggagcta aacttaaagc cttgaattta ggtgaaacat ttgtcacgca
2401 ctcaaaggga ttgtacagaa agtgtgttaa atccagagaa gaaactggcc tactcatgcc
2461 tctaaaagcc ccaaaagaaa ttatcttctt agagggagaa acacttccca cagaagtgtt
2521 aacagaggaa gttgtcttga aaactggtga tttacaacca ttagaacaac ctactagtga
2581 agctgttgaa gctccattgg ttggtacacc agtttgtatt aacgggctta tgttgctcga
2641 aatcaaagac acagaaaagt actgtgccct tgcacctaat atgatggtaa caaacaatac
2701 cttcacactc aaaggcggtg caccaacaaa ggttactttt ggtgatgaca ctgtgataga
2761 agtgcaaggt tacaagagtg tgaatatcac ttttgaactt gatgaaagga ttgataaagt
2821 acttaatgag aagtgctctg cctatacagt tgaactcggt acagaagtaa atgagttcgc
2881 ctgtgttgtg gcagatgctg tcataaaaac tttgcaacca gtatctgaat tacttacacc
2941 actgggcatt gatttagatg agtggagtat ggctacatac tacttatttg atgagtctgg
3001 tgagtttaaa ttggcttcac atatgtattg ttctttctac cctccagatg aggatgaaga
3061 agaaggtgat tgtgaagaag aagagtttga gccatcaact caatatgagt atggtactga
3121 agatgattac caaggtaaac ctttggaatt tggtgccact tctgctgctc ttcaacctga
3181 agaagagcaa gaagaagatt ggttagatga tgatagtcaa caaactgttg gtcaacaaga
3241 cggcagtgag gacaatcaga caactactat tcaaacaatt gttgaggttc aacctcaatt
3301 agagatggaa cttacaccag ttgttcagac tattgaagtg aatagtttta gtggttattt
3361 aaaacttact gacaatgtat acattaaaaa tgcagacatt gtggaagaag ctaaaaaggt
3421 aaaaccaaca gtggttgtta atgcagccaa tgtttacctt aaacatggag gaggtgttgc
3481 aggagcctta aataaggcta ctaacaatgc catgcaagtt gaatctgatg attacatagc
3541 tactaatgga ccacttaaag tgggtggtag ttgtgtttta agcggacaca atcttgctaa
3601 acactgtctt catgttgtcg gcccaaatgt taacaaaggt gaagacattc aacttcttaa
3661 gagtgcttat gaaaatttta atcagcacga agttctactt gcaccattat tatcagctgg
3721 tatttttggt gctgacccta tacattcttt aagagtttgt gtagatactg ttcgcacaaa
3781 tgtctactta gctgtctttg ataaaaatct ctatgacaaa cttgtttcaa gctttttgga
3841 aatgaagagt gaaaagcaag ttgaacaaaa gatcgctgag attcctaaag aggaagttaa
3901 gccatttata actgaaagta aaccttcagt tgaacagaga aaacaagatg ataagaaaat
3961 caaagcttgt gttgaagaag ttacaacaac tctggaagaa actaagttcc tcacagaaaa
4021 cttgttactt tatattgaca ttaatggcaa tcttcatcca gattctgcca ctcttgttag
4081 tgacattgac atcactttct taaagaaaga tgctccatat atagtgggtg atgttgttca
4141 agagggtgtt ttaactgctg tggttatacc tactaaaaag gctggtggca ctactgaaat
4201 gctagcgaaa gctttgagaa aagtgccaac agacaattat ataaccactt acccgggtca
4261 gggtttaaat ggttacactg tagaggaggc aaagacagtg cttaaaaagt gtaaaagtgc
4321 cttttacatt ctaccatcta ttatctctaa tgagaagcaa gaaattcttg gaactgtttc
4381 ttggaatttg cgagaaatgc ttgcacatgc agaagaaaca cgcaaattaa tgcctgtctg
4441 tgtggaaact aaagccatag tttcaactat acagcgtaaa tataagggta ttaaaataca
4501 agagggtgtg gttgattatg gtgctagatt ttacttttac accagtaaaa caactgtagc
4561 gtcacttatc aacacactta acgatctaaa tgaaactctt gttacaatgc cacttggcta
4621 tgtaacacat ggcttaaatt tggaagaagc tgctcggtat atgagatctc tcaaagtgcc
4681 agctacagtt tctgtttctt cacctgatgc tgttacagcg tataatggtt atcttacttc
4741 ttcttctaaa acacctgaag aacattttat tgaaaccatc tcacttgctg gttcctataa
4801 agattggtcc tattctggac aatctacaca actaggtata gaatttctta agagaggtga
4861 taaaagtgta tattacacta gtaatcctac cacattccac ctagatggtg aagttatcac
4921 ctttgacaat cttaagacac ttctttcttt gagagaagtg aggactatta aggtgtttac
4981 aacagtagac aacattaacc tccacacgca agttgtggac atgtcaatga catatggaca
5041 acagtttggt ccaacttatt tggatggagc tgatgttact aaaataaaac ctcataattc
5101 acatgaaggt aaaacatttt atgttttacc taatgatgac actctacgtg ttgaggcttt
5161 tgagtactac cacacaactg atcctagttt tctgggtagg tacatgtcag cattaaatca
5221 cactaaaaag tggaaatacc cacaagttaa tggtttaact tctattaaat gggcagataa
5281 caactgttat cttgccactg cattgttaac actccaacaa atagagttga agtttaatcc
5341 acctgctcta caagatgctt attacagagc aagggctggt gaagctgcta acttttgtgc
5401 acttatctta gcctactgta ataagacagt aggtgagtta ggtgatgtta gagaaacaat
5461 gagttacttg tttcaacatg ccaatttaga ttcttgcaaa agagtcttga acgtggtgtg
5521 taaaacttgt ggacaacagc agacaaccct taagggtgta gaagctgtta tgtacatggg
5581 cacactttct tatgaacaat ttaagaaagg tgttcagata ccttgtacgt gtggtaaaca
5641 agctacaaaa tatctagtac aacaggagtc accttttgtt atgatgtcag caccacctgc
5701 tcagtatgaa cttaagcatg gtacatttac ttgtgctagt gagtacactg gtaattacca
5761 gtgtggtcac tataaacata taacttctaa agaaactttg tattgcatag acggtgcttt
5821 acttacaaag tcctcagaat acaaaggtcc tattacggat gttttctaca aagaaaacag
5881 ttacacaaca accataaaac cagttactta taaattggat ggtgttgttt gtacagaaat
5941 tgaccctaag ttggacaatt attataagaa agacaattct tatttcacag agcaaccaat
6001 tgatcttgta ccaaaccaac catatccaaa cgcaagcttc gataatttta agtttgtatg
6061 tgataatatc aaatttgctg atgatttaaa ccagttaact ggttataaga aacctgcttc
6121 aagagagctt aaagttacat ttttccctga cttaaatggt gatgtggtgg ctattgatta
6181 taaacactac acaccctctt ttaagaaagg agctaaattg ttacataaac ctattgtttg
6241 gcatgttaac aatgcaacta ataaagccac gtataaacca aatacctggt gtatacgttg
6301 tctttggagc acaaaaccag ttgaaacatc aaattcgttt gatgtactga agtcagagga
6361 cgcgcaggga atggataatc ttgcctgcga agatctaaaa ccagtctctg aagaagtagt
6421 ggaaaatcct accatacaga aagacgttct tgagtgtaat gtgaaaacta ccgaagttgt
6481 aggagacatt atacttaaac cagcaaataa tagtttaaaa attacagaag aggttggcca
6541 cacagatcta atggctgctt atgtagacaa ttctagtctt actattaaga aacctaatga
6601 attatctaga gtattaggtt tgaaaaccct tgctactcat ggtttagctg ctgttaatag
6661 tgtcccttgg gatactatag ctaattatgc taagcctttt cttaacaaag ttgttagtac
6721 aactactaac atagttacac ggtgtttaaa ccgtgtttgt actaattata tgccttattt
6781 ctttacttta ttgctacaat tgtgtacttt tactagaagt acaaattcta gaattaaagc
6841 atctatgccg actactatag caaagaatac tgttaagagt gtcggtaaat tttgtctaga
6901 ggcttcattt aattatttga agtcacctaa tttttctaaa ctgataaata ttataatttg
6961 gtttttacta ttaagtgttt gcctaggttc tttaatctac tcaaccgctg ctttaggtgt
7021 tttaatgtct aatttaggca tgccttctta ctgtactggt tacagagaag gctatttgaa
7081 ctctactaat gtcactattg caacctactg tactggttct ataccttgta gtgtttgtct
7141 tagtggttta gattctttag acacctatcc ttctttagaa actatacaaa ttaccatttc
7201 atcttttaaa tgggatttaa ctgcttttgg cttagttgca gagtggtttt tggcatatat
7261 tcttttcact aggtttttct atgtacttgg attggctgca atcatgcaat tgtttttcag
7321 ctattttgca gtacatttta ttagtaattc ttggcttatg tggttaataa ttaatcttgt
7381 acaaatggcc ccgatttcag ctatggttag aatgtacatc ttctttgcat cattttatta
7441 tgtatggaaa agttatgtgc atgttgtaga cggttgtaat tcatcaactt gtatgatgtg
7501 ttacaaacgt aatagagcaa caagagtcga atgtacaact attgttaatg gtgttagaag
7561 gtccttttat gtctatgcta atggaggtaa aggcttttgc aaactacaca attggaattg
7621 tgttaattgt gatacattct gtgctggtag tacatttatt agtgatgaag ttgcgagaga
7681 cttgtcacta cagtttaaaa gaccaataaa tcctactgac cagtcttctt acatcgttga
7741 tagtgttaca gtgaagaatg gttccatcca tctttacttt gataaagctg gtcaaaagac
7801 ttatgaaaga cattctctct ctcattttgt taacttagac aacctgagag ctaataacac
7861 taaaggttca ttgcctatta atgttatagt ttttgatggt aaatcaaaat gtgaagaatc
7921 atctgcaaaa tcagcgtctg tttactacag tcagcttatg tgtcaaccta tactgttact
7981 agatcaggca ttagtgtctg atgttggtga tagtgcggaa gttgcagtta aaatgtttga
8041 tgcttacgtt aatacgtttt catcaacttt taacgtacca atggaaaaac tcaaaacact
8101 agttgcaact gcagaagctg aacttgcaaa gaatgtgtcc ttagacaatg tcttatctac
8161 ttttatttca gcagctcggc aagggtttgt tgattcagat gtagaaacta aagatgttgt
8221 tgaatgtctt aaattgtcac atcaatctga catagaagtt actggcgata gttgtaataa
8281 ctatatgctc acctataaca aagttgaaaa catgacaccc cgtgaccttg gtgcttgtat
8341 tgactgtagt gcgcgtcata ttaatgcgca ggtagcaaaa agtcacaaca ttgctttgat
8401 atggaacgtt aaagatttca tgtcattgtc tgaacaacta cgaaaacaaa tacgtagtgc
8461 tgctaaaaag aataacttac cttttaagtt gacatgtgca actactagac aagttgttaa
8521 tgttgtaaca acaaagatag cacttaaggg tggtaaaatt gttaataatt ggttgaagca
8581 gttaattaaa gttacacttg tgttcctttt tgttgctgct attttctatt taataacacc
8641 tgttcatgtc atgtctaaac atactgactt ttcaagtgaa atcataggat acaaggctat
8701 tgatggtggt gtcactcgtg acatagcatc tacagatact tgttttgcta acaaacatgc
8761 tgattttgac acatggttta gccagcgtgg tggtagttat actaatgaca aagcttgccc
8821 attgattgct gcagtcataa caagagaagt gggttttgtc gtgcctggtt tgcctggcac
8881 gatattacgc acaactaatg gtgacttttt gcatttctta cctagagttt ttagtgcagt
8941 tggtaacatc tgttacacac catcaaaact tatagagtac actgactttg caacatcagc
9001 ttgtgttttg gctgctgaat gtacaatttt taaagatgct tctggtaagc cagtaccata
9061 ttgttatgat accaatgtac tagaaggttc tgttgcttat gaaagtttac gccctgacac
9121 acgttatgtg ctcatggatg gctctattat tcaatttcct aacacctacc ttgaaggttc
9181 tgttagagtg gtaacaactt ttgattctga gtactgtagg cacggcactt gtgaaagatc
9241 agaagctggt gtttgtgtat ctactagtgg tagatgggta cttaacaatg attattacag
9301 atctttacca ggagttttct gtggtgtaga tgctgtaaat ttacttacta atatgtttac
9361 accactaatt caacctattg gtgctttgga catatcagca tctatagtag ctggtggtat
9421 tgtagctatc gtagtaacat gccttgccta ctattttatg aggtttagaa gagcttttgg
9481 tgaatacagt catgtagttg cctttaatac tttactattc cttatgtcat tcactgtact
9541 ctgtttaaca ccagtttact cattcttacc tggtgtttat tctgttattt acttgtactt
9601 gacattttat cttactaatg atgtttcttt tttagcacat attcagtgga tggttatgtt
9661 cacaccttta gtacctttct ggataacaat tgcttatatc atttgtattt ccacaaagca
9721 tttctattgg ttctttagta attacctaaa gagacgtgta gtctttaatg gtgtttcctt
9781 tagtactttt gaagaagctg cgctgtgcac ctttttgtta aataaagaaa tgtatctaaa
9841 gttgcgtagt gatgtgctat tacctcttac gcaatataat agatacttag ctctttataa
9901 taagtacaag tattttagtg gagcaatgga tacaactagc tacagagaag ctgcttgttg
9961 tcatctcgca aaggctctca atgacttcag taactcaggt tctgatgttc tttaccaacc
10021 accacaaacc tctatcacct cagctgtttt gcagagtggt tttagaaaaa tggcattccc
10081 atctggtaaa gttgagggtt gtatggtaca agtaacttgt ggtacaacta cacttaacgg
10141 tctttggctt gatgacgtag tttactgtcc aagacatgtg atctgcacct ctgaagacat
10201 gcttaaccct aattatgaag atttactcat tcgtaagtct aatcataatt tcttggtaca
10261 ggctggtaat gttcaactca gggttattgg acattctatg caaaattgtg tacttaagct
10321 taaggttgat acagccaatc ctaagacacc taagtataag tttgttcgca ttcaaccagg
10381 acagactttt tcagtgttag cttgttacaa tggttcacca tctggtgttt accaatgtgc
10441 tatgaggccc aatttcacta ttaagggttc attccttaat ggttcatgtg gtagtgttgg
10501 ttttaacata gattatgact gtgtctcttt ttgttacatg caccatatgg aattaccaac
10561 tggagttcat gctggcacag acttagaagg taacttttat ggaccttttg ttgacaggca
10621 aacagcacaa gcagctggta cggacacaac tattacagtt aatgttttag cttggttgta
10681 cgctgctgtt ataaatggag acaggtggtt tctcaatcga tttaccacaa ctcttaatga
10741 ctttaacctt gtggctatga agtacaatta tgaacctcta acacaagacc atgttgacat
10801 actaggacct ctttctgctc aaactggaat tgccgtttta gatatgtgtg cttcattaaa
10861 agaattactg caaaatggta tgaatggacg taccatattg ggtagtgctt tattagaaga
10921 tgaatttaca ccttttgatg ttgttagaca atgctcaggt gttactttcc aaagtgcagt
10981 gaaaagaaca atcaagggta cacaccactg gttgttactc acaattttga cttcactttt
11041 agttttagtc cagagtactc aatggtcttt gttctttttt ttgtatgaaa atgccttttt
11101 accttttgct atgggtatta ttgctatgtc tgcttttgca atgatgtttg tcaaacataa
11161 gcatgcattt ctctgtttgt ttttgttacc ttctcttgcc actgtagctt attttaatat
11221 ggtctatatg cctgctagtt gggtgatgcg tattatgaca tggttggata tggttgatac
11281 tagtttgtct ggttttaagc taaaagactg tgttatgtat gcatcagctg tagtgttact
11341 aatccttatg acagcaagaa ctgtgtatga tgatggtgct aggagagtgt ggacacttat
11401 gaatgtcttg acactcgttt ataaagttta ttatggtaat gctttagatc aagccatttc
11461 catgtgggct cttataatct ctgttacttc taactactca ggtgtagtta caactgtcat
11521 gtttttggcc agaggtattg tttttatgtg tgttgagtat tgccctattt tcttcataac
11581 tggtaataca cttcagtgta taatgctagt ttattgtttc ttaggctatt tttgtacttg
11641 ttactttggc ctcttttgtt tactcaaccg ctactttaga ctgactcttg gtgtttatga
11701 ttacttagtt tctacacagg agtttagata tatgaattca cagggactac tcccacccaa
11761 gaatagcata gatgccttca aactcaacat taaattgttg ggtgttggtg gcaaaccttg
11821 tatcaaagta gccactgtac agtctaaaat gtcagatgta aagtgcacat cagtagtctt
11881 actctcagtt ttgcaacaac tcagagtaga atcatcatct aaattgtggg ctcaatgtgt
11941 ccagttacac aatgacattc tcttagctaa agatactact gaagcctttg aaaaaatggt
12001 ttcactactt tctgttttgc tttccatgca gggtgctgta gacataaaca agctttgtga
12061 agaaatgctg gacaacaggg caaccttaca agctatagcc tcagagttta gttcccttcc
12121 atcatatgca gcttttgcta ctgctcaaga agcttatgag caggctgttg ctaatggtga
12181 ttctgaagtt gttcttaaaa agttgaagaa gtctttgaat gtggctaaat ctgaatttga
12241 ccgtgatgca gccatgcaac gtaagttgga aaagatggct gatcaagcta tgacccaaat
12301 gtataaacag gctagatctg aggacaagag ggcaaaagtt actagtgcta tgcagacaat
12361 gcttttcact atgcttagaa agttggataa tgatgcactc aacaacatta tcaacaatgc
12421 aagagatggt tgtgttccct tgaacataat acctcttaca acagcagcca aactaatggt
12481 tgtcatacca gactataaca catataaaaa tacgtgtgat ggtacaacat ttacttatgc
12541 atcagcattg tgggaaatcc aacaggttgt agatgcagat agtaaaattg ttcaacttag
12601 tgaaattagt atggacaatt cacctaattt agcatggcct cttattgtaa cagctttaag
12661 ggccaattct gctgtcaaat tacagaataa tgagcttagt cctgttgcac tacgacagat
12721 gtcttgtgct gccggtacta cacaaactgc ttgcactgat gacaatgcgt tagcttacta
12781 caacacaaca aagggaggta ggtttgtact tgcactgtta tccgatttac aggatttgaa
12841 atgggctaga ttccctaaga gtgatggaac tggtactatc tatacagaac tggaaccacc
12901 ttgtaggttt gttacagaca cacctaaagg tcctaaagtg aagtatttat actttattaa
12961 aggattaaac aacctaaata gaggtatggt acttggtagt ttagctgcca cagtacgtct
13021 acaagctggt aatgcaacag aagtgcctgc caattcaact gtattatctt tctgtgcttt
13081 tgctgtagat gctgctaaag cttacaaaga ttatctagct agtgggggac aaccaatcac
13141 taattgtgtt aagatgttgt gtacacacac tggtactggt caggcaataa cagttacacc
13201 ggaagccaat atggatcaag aatcctttgg tggtgcatcg tgttgtctgt actgccgttg
13261 ccacatagat catccaaatc ctaaaggatt ttgtgactta aaaggtaagt atgtacaaat
13321 acctacaact tgtgctaatg accctgtggg ttttacactt aaaaacacag tctgtaccgt
13381 ctgcggtatg tggaaaggtt atggctgtag ttgtgatcaa ctccgcgaac ccatgcttca
13441 gtcagctgat gcacaatcgt ttttaaacgg gtttgcggtg taagtgcagc ccgtcttaca
13501 ccgtgcggca caggcactag tactgatgtc gtatacaggg cttttgacat ctacaatgat
13561 aaagtagctg gttttgctaa attcctaaaa actaattgtt gtcgcttcca agaaaaggac
13621 gaagatgaca atttaattga ttcttacttt gtagttaaga gacacacttt ctctaactac
13681 caacatgaag aaacaattta taatttactt aaggattgtc cagctgttgc taaacatgac
13741 ttctttaagt ttagaataga cggtgacatg gtaccacata tatcacgtca acgtcttact
13801 aaatacacaa tggcagacct cgtctatgct ttaaggcatt ttgatgaagg taattgtgac
13861 acattaaaag aaatacttgt cacatacaat tgttgtgatg atgattattt caataaaaag
13921 gactggtatg attttgtaga aaacccagat atattacgcg tatacgccaa cttaggtgaa
13981 cgtgtacgcc aagctttgtt aaaaacagta caattctgtg atgccatgcg aaatgctggt
14041 attgttggtg tactgacatt agataatcaa gatctcaatg gtaactggta tgatttcggt
14101 gatttcatac aaaccacgcc aggtagtgga gttcctgttg tagattctta ttattcattg
14161 ttaatgccta tattaacctt gaccagggct ttaactgcag agtcacatgt tgacactgac
14221 ttaacaaagc cttacattaa gtgggatttg ttaaaatatg acttcacgga agagaggtta
14281 aaactctttg accgttattt taaatattgg gatcagacat accacccaaa ttgtgttaac
14341 tgtttggatg acagatgcat tctgcattgt gcaaacttta atgttttatt ctctacagtg
14401 ttcccaccta caagttttgg accactagtg agaaaaatat ttgttgatgg tgttccattt
14461 gtagtttcaa ctggatacca cttcagagag ctaggtgttg tacataatca ggatgtaaac
14521 ttacatagct ctagacttag ttttaaggaa ttacttgtgt atgctgctga ccctgctatg
14581 cacgctgctt ctggtaatct attactagat aaacgcacta cgtgcttttc agtagctgca
14641 cttactaaca atgttgcttt tcaaactgtc aaacccggta attttaacaa agacttctat
14701 gactttgctg tgtctaaggg tttctttaag gaaggaagtt ctgttgaatt aaaacacttc
14761 ttctttgctc aggatggtaa tgctgctatc agcgattatg actactatcg ttataatcta
14821 ccaacaatgt gtgatatcag acaactacta tttgtagttg aagttgttga taagtacttt
14881 gattgttacg atggtggctg tattaatgct aaccaagtca tcgtcaacaa cctagacaaa
14941 tcagctggtt ttccatttaa taaatggggt aaggctagac tttattatga ttcaatgagt
15001 tatgaggatc aagatgcact tttcgcatat acaaaacgta atgtcatccc tactataact
15061 caaatgaatc ttaagtatgc cattagtgca aagaatagag ctcgcaccgt agctggtgtc
15121 tctatctgta gtactatgac caatagacag tttcatcaaa aattattgaa atcaatagcc
15181 gccactagag gagctactgt agtaattgga acaagcaaat tctatggtgg ttggcacaac
15241 atgttaaaaa ctgtttatag tgatgtagaa aaccctcacc ttatgggttg ggattatcct
15301 aaatgtgata gagccatgcc taacatgctt agaattatgg cctcacttgt tcttgctcgc
15361 aaacatacaa cgtgttgtag cttgtcacac cgtttctata gattagctaa tgagtgtgct
15421 caagtattga gtgaaatggt catgtgtggc ggttcactat atgttaaacc aggtggaacc
15481 tcatcaggag atgccacaac tgcttatgct aatagtgttt ttaacatttg tcaagctgtc
15541 acggccaatg ttaatgcact tttatctact gatggtaaca aaattgccga taagtatgtc
15601 cgcaatttac aacacagact ttatgagtgt ctctatagaa atagagatgt tgacacagac
15661 tttgtgaatg agttttacgc atatttgcgt aaacatttct caatgatgat actctctgac
15721 gatgctgttg tgtgtttcaa tagcacttat gcatctcaag gtctagtggc tagcataaag
15781 aactttaagt cagttcttta ttatcaaaac aatgttttta tgtctgaagc aaaatgttgg
15841 actgagactg accttactaa aggacctcat gaattttgct ctcaacatac aatgctagtt
15901 aaacagggtg atgattatgt gtaccttcct tacccagatc catcaagaat cctaggggcc
15961 ggctgttttg tagatgatat cgtaaaaaca gatggtacac ttatgattga acggttcgtg
16021 tctttagcta tagatgctta cccacttact aaacatccta atcaggagta tgctgatgtc
16081 tttcatttgt acttacaata cataagaaag ctacatgatg agttaacagg acacatgtta
16141 gacatgtatt ctgttatgct tactaatgat aacacttcaa ggtattggga acctgagttt
16201 tatgaggcta tgtacacacc gcatacagtc ttacaggctg ttggggcttg tgttctttgc
16261 aattcacaga cttcattaag atgtggtgct tgcatacgta gaccattctt atgttgtaaa
16321 tgctgttacg accatgtcat atcaacatca cataaattag tcttgtctgt taatccgtat
16381 gtttgcaatg ctccaggttg tgatgtcaca gatgtgactc aactttactt aggaggtatg
16441 agctattatt gtaaatcaca taaaccaccc attagttttc cattgtgtgc taatggacaa
16501 gtttttggtt tatataaaaa tacatgtgtt ggtagcgata atgttactga ctttaatgca
16561 attgcaacat gtgactggac aaatgctggt gattacattt tagctaacac ctgtactgaa
16621 agactcaagc tttttgcagc agaaacgctc aaagctactg aggagacatt taaactgtct
16681 tatggtattg ctactgtacg tgaagtgctg tctgacagag aattacatct ttcatgggaa
16741 gttggtaaac ctagaccacc acttaaccga aattatgtct ttactggtta tcgtgtaact
16801 aaaaacagta aagtacaaat aggagagtac acctttgaaa aaggtgacta tggtgatgct
16861 gttgtttacc gaggtacaac aacttacaaa ttaaatgttg gtgattattt tgtgctgaca
16921 tcacatacag taatgccatt aagtgcacct acactagtgc cacaagagca ctatgttaga
16981 attactggct tatacccaac actcaatatc tcagatgagt tttctagcaa tgttgcaaat
17041 tatcaaaagg ttggtatgca aaagtattct acactccagg gaccacctgg tactggtaag
17101 agtcattttg ctattggcct agctctctac tacccttctg ctcgcatagt gtatacagct
17161 tgctctcatg ccgctgttga tgcactatgt gagaaggcat taaaatattt gcctatagat
17221 aaatgtagta gaattatacc tgcacgtgct cgtgtagagt gttttgataa attcaaagtg
17281 aattcaacat tagaacagta tgtcttttgt actgtaaatg cattgcctga gacgacagca
17341 gatatagttg tctttgatga aatttcaatg gccacaaatt atgatttgag tgttgtcaat
17401 gccagattac gtgctaagca ctatgtgtac attggcgacc ctgctcaatt acctgcacca
17461 cgcacattgc taactaaggg cacactagaa ccagaatatt tcaattcagt gtgtagactt
17521 atgaaaacta taggtccaga catgttcctc ggaacttgtc ggcgttgtcc tgctgaaatt
17581 gttgacactg tgagtgcttt ggtttatgat aataagctta aagcacataa agacaaatca
17641 gctcaatgct ttaaaatgtt ttataagggt gttatcacgc atgatgtttc atctgcaatt
17701 aacaggccac aaataggcgt ggtaagagaa ttccttacac gtaaccctgc ttggagaaaa
17761 gctgtcttta tttcacctta taattcacag aatgctgtag cctcaaagat tttgggacta
17821 ccaactcaaa ctgttgattc atcacagggc tcagaatatg actatgtcat attcactcaa
17881 accactgaaa cagctcactc ttgtaatgta aacagattta atgttgctat taccagagca
17941 aaagtaggca tactttgcat aatgtctgat agagaccttt atgacaagtt gcaatttaca
18001 agtcttgaaa ttccacgtag gaatgtggca actttacaag ctgaaaatgt aacaggactc
18061 tttaaagatt gtagtaaggt aatcactggg ttacatccta cacaggcacc tacacacctc
18121 agtgttgaca ctaaattcaa aactgaaggt ttatgtgttg acatacctgg catacctaag
18181 gacatgacct atagaagact catctctatg atgggtttta aaatgaatta tcaagttaat
18241 ggttacccta acatgtttat cacccgcgaa gaagctataa gacatgtacg tgcatggatt
18301 ggcttcgatg tcgaggggtg tcatgctact agagaagctg ttggtaccaa tttaccttta
18361 cagctaggtt tttctacagg tgttaaccta gttgctgtac ctacaggtta tgttgataca
18421 cctaataata cagatttttc cagagttagt gctaaaccac cgcctggaga tcaatttaaa
18481 cacctcatac cacttatgta caaaggactt ccttggaatg tagtgcgtat aaagattgta
18541 caaatgttaa gtgacacact taaaaatctc tctgacagag tcgtatttgt cttatgggca
18601 catggctttg agttgacatc tatgaagtat tttgtgaaaa taggacctga gcgcacctgt
18661 tgtctatgtg atagacgtgc cacatgcttt tccactgctt cagacactta tgcctgttgg
18721 catcattcta ttggatttga ttacgtctat aatccgttta tgattgatgt tcaacaatgg
18781 ggttttacag gtaacctaca aagcaaccat gatctgtatt gtcaagtcca tggtaatgca
18841 catgtagcta gttgtgatgc aatcatgact aggtgtctag ctgtccacga gtgctttgtt
18901 aagcgtgttg actggactat tgaatatcct ataattggtg atgaactgaa gattaatgcg
18961 gcttgtagaa aggttcaaca catggttgtt aaagctgcat tattagcaga caaattccca
19021 gttcttcacg acattggtaa ccctaaagct attaagtgtg tacctcaagc tgatgtagaa
19081 tggaagttct atgatgcaca gccttgtagt gacaaagctt ataaaataga agaattattc
19141 tattcttatg ccacacattc tgacaaattc acagatggtg tatgcctatt ttggaattgc
19201 aatgtcgata gatatcctgc taattccatt gtttgtagat ttgacactag agtgctatct
19261 aaccttaact tgcctggttg tgatggtggc agtttgtatg taaataaaca tgcattccac
19321 acaccagctt ttgataaaag tgcttttgtt aatttaaaac aattaccatt tttctattac
19381 tctgacagtc catgtgagtc tcatggaaaa caagtagtgt cagatataga ttatgtacca
19441 ctaaagtctg ctacgtgtat aacacgttgc aatttaggtg gtgctgtctg tagacatcat
19501 gctaatgagt acagattgta tctcgatgct tataacatga tgatctcagc tggctttagc
19561 ttgtgggttt acaaacaatt tgatacttat aacctctgga acacttttac aagacttcag
19621 agtttagaaa atgtggcttt taatgttgta aataagggac actttgatgg acaacagggt
19681 gaagtaccag tttctatcat taataacact gtttacacaa aagttgatgg tgttgatgta
19741 gaattgtttg aaaataaaac aacattacct gttaatgtag catttgagct ttgggctaag
19801 cgcaacatta aaccagtacc agaggtgaaa atactcaata atttgggtgt ggacattgct
19861 gctaatactg tgatctggga ctacaaaaga gatgctccag cacatatatc tactattggt
19921 gtttgttcta tgactgacat agccaagaaa ccaactgaaa cgatttgtgc accactcact
19981 gtcttttttg atggtagagt tgatggtcaa gtagacttat ttagaaatgc ccgtaatggt
20041 gttcttatta cagaaggtag tgttaaaggt ttacaaccat ctgtaggtcc caaacaagct
20101 agtcttaatg gagtcacatt aattggagaa gccgtaaaaa cacagttcaa ttattataag
20161 aaagttgatg gtgttgtcca acaattacct gaaacttact ttactcagag tagaaattta
20221 caagaattta aacccaggag tcaaatggaa attgatttct tagaattagc tatggatgaa
20281 ttcattgaac ggtataaatt agaaggctat gccttcgaac atatcgttta tggagatttt
20341 agtcatagtc agttaggtgg tttacatcta ctgattggac tagctaaacg ttttaaggaa
20401 tcaccttttg aattagaaga ttttattcct atggacagta cagttaaaaa ctatttcata
20461 acagatgcgc aaacaggttc atctaagtgt gtgtgttctg ttattgattt attacttgat
20521 gattttgttg aaataataaa atcccaagat ttatctgtag tttctaaggt tgtcaaagtg
20581 actattgact atacagaaat ttcatttatg ctttggtgta aagatggcca tgtagaaaca
20641 ttttacccaa aattacaatc tagtcaagcg tggcaaccgg gtgttgctat gcctaatctt
20701 tacaaaatgc aaagaatgct attagaaaag tgtgaccttc aaaattatgg tgatagtgca
20761 acattaccta aaggcataat gatgaatgtc gcaaaatata ctcaactgtg tcaatattta
20821 aacacattaa cattagctgt accctataat atgagagtta tacattttgg tgctggttct
20881 gataaaggag ttgcaccagg tacagctgtt ttaagacagt ggttgcctac gggtacgctg
20941 cttgtcgatt cagatcttaa tgactttgtc tctgatgcag attcaacttt gattggtgat
21001 tgtgcaactg tacatacagc taataaatgg gatctcatta ttagtgatat gtacgaccct
21061 aagactaaaa atgttacaaa agaaaatgac tctaaagagg gttttttcac ttacatttgt
21121 gggtttatac aacaaaagct agctcttgga ggttccgtgg ctataaagat aacagaacat
21181 tcttggaatg ctgatcttta taagctcatg ggacacttcg catggtggac agcctttgtt
21241 actaatgtga atgcgtcatc atctgaagca tttttaattg gatgtaatta tcttggcaaa
21301 ccacgcgaac aaatagatgg ttatgtcatg catgcaaatt acatattttg gaggaataca
21361 aatccaattc agttgtcttc ctattcttta tttgacatga gtaaatttcc ccttaaatta
21421 aggggtactg ctgttatgtc tttaaaagaa ggtcaaatca atgatatgat tttatctctt
21481 cttagtaaag gtagacttat aattagagaa aacaacagag ttgttatttc tagtgatgtt
21541 cttgttaaca actaaacgaa caatgtttgt ttttcttgtt ttattgccac tagtctctag
21601 tcagtgtgtt aatcttacaa ccagaactca attaccccct gcatacacta attctttcac
21661 acgtggtgtt tattaccctg acaaagtttt cagatcctca gttttacatt caactcagga
21721 cttgttctta cctttctttt ccaatgttac ttggttccat gctatacatg tctctgggac
21781 caatggtact aagaggtttg ataaccctgt cctaccattt aatgatggtg tttattttgc
21841 ttccactgag aagtctaaca taataagagg ctggattttt ggtactactt tagattcgaa
21901 gacccagtcc ctacttattg ttaataacgc tactaatgtt gttattaaag tctgtgaatt
21961 tcaattttgt aatgatccat ttttgggtgt ttattaccac aaaaacaaca aaagttggat
22021 ggaaagtgag ttcagagttt attctagtgc gaataattgc acttttgaat atgtctctca
22081 gccttttctt atggaccttg aaggaaaaca gggtaatttc aaaaatctta gggaatttgt
22141 gtttaagaat attgatggtt attttaaaat atattctaag cacacgccta ttaatttagt
22201 gcgtgatctc cctcagggtt tttcggcttt agaaccattg gtagatttgc caataggtat
22261 taacatcact aggtttcaaa ctttacttgc tttacataga agttatttga ctcctggtga
22321 ttcttcttca ggttggacag ctggtgctgc agcttattat gtgggttatc ttcaacctag
22381 gacttttcta ttaaaatata atgaaaatgg aaccattaca gatgctgtag actgtgcact
22441 tgaccctctc tcagaaacaa agtgtacgtt gaaatccttc actgtagaaa aaggaatcta
22501 tcaaacttct aactttagag tccaaccaac agaatctatt gttagatttc ctaatattac
22561 aaacttgtgc ccttttggtg aagtttttaa cgccaccaga tttgcatctg tttatgcttg
22621 gaacaggaag agaatcagca actgtgttgc tgattattct gtcctatata attccgcatc
22681 attttccact tttaagtgtt atggagtgtc tcctactaaa ttaaatgatc tctgctttac
22741 taatgtctat gcagattcat ttgtaattag aggtgatgaa gtcagacaaa tcgctccagg
22801 gcaaactgga aagattgctg attataatta taaattacca gatgatttta caggctgcgt
22861 tatagcttgg aattctaaca atcttgattc taaggttggt ggtaattata attacctgta
22921 tagattgttt aggaagtcta atctcaaacc ttttgagaga gatatttcaa ctgaaatcta
22981 tcaggccggt agcacacctt gtaatggtgt tgaaggtttt aattgttact ttcctttaca
23041 atcatatggt ttccaaccca ctaatggtgt tggttaccaa ccatacagag tagtagtact
23101 ttcttttgaa cttctacatg caccagcaac tgtttgtgga cctaaaaagt ctactaattt
23161 ggttaaaaac aaatgtgtca atttcaactt caatggttta acaggcacag gtgttcttac
23221 tgagtctaac aaaaagtttc tgcctttcca acaatttggc agagacattg ctgacactac
23281 tgatgctgtc cgtgatccac agacacttga gattcttgac attacaccat gttcttttgg
23341 tggtgtcagt gttataacac caggaacaaa tacttctaac caggttgctg ttctttatca
23401 ggatgttaac tgcacagaag tccctgttgc tattcatgca gatcaactta ctcctacttg
23461 gcgtgtttat tctacaggtt ctaatgtttt tcaaacacgt gcaggctgtt taataggggc
23521 tgaacatgtc aacaactcat atgagtgtga catacccatt ggtgcaggta tatgcgctag
23581 ttatcagact cagactaatt ctcctcggcg ggcacgtagt gtagctagtc aatccatcat
23641 tgcctacact atgtcacttg gtgcagaaaa ttcagttgct tactctaata actctattgc
23701 catacccaca aattttacta ttagtgttac cacagaaatt ctaccagtgt ctatgaccaa
23761 gacatcagta gattgtacaa tgtacatttg tggtgattca actgaatgca gcaatctttt
23821 gttgcaatat ggcagttttt gtacacaatt aaaccgtgct ttaactggaa tagctgttga
23881 acaagacaaa aacacccaag aagtttttgc acaagtcaaa caaatttaca aaacaccacc
23941 aattaaagat tttggtggtt ttaatttttc acaaatatta ccagatccat caaaaccaag
24001 caagaggtca tttattgaag atctactttt caacaaagtg acacttgcag atgctggctt
24061 catcaaacaa tatggtgatt gccttggtga tattgctgct agagacctca tttgtgcaca
24121 aaagtttaac ggccttactg ttttgccacc tttgctcaca gatgaaatga ttgctcaata
24181 cacttctgca ctgttagcgg gtacaatcac ttctggttgg acctttggtg caggtgctgc
24241 attacaaata ccatttgcta tgcaaatggc ttataggttt aatggtattg gagttacaca
24301 gaatgttctc tatgagaacc aaaaattgat tgccaaccaa tttaatagtg ctattggcaa
24361 aattcaagac tcactttctt ccacagcaag tgcacttgga aaacttcaag atgtggtcaa
24421 ccaaaatgca caagctttaa acacgcttgt taaacaactt agctccaatt ttggtgcaat
24481 ttcaagtgtt ttaaatgata tcctttcacg tcttgacaaa gttgaggctg aagtgcaaat
24541 tgataggttg atcacaggca gacttcaaag tttgcagaca tatgtgactc aacaattaat
24601 tagagctgca gaaatcagag cttctgctaa tcttgctgct actaaaatgt cagagtgtgt
24661 acttggacaa tcaaaaagag ttgatttttg tggaaagggc tatcatctta tgtccttccc
24721 tcagtcagca cctcatggtg tagtcttctt gcatgtgact tatgtccctg cacaagaaaa
24781 gaacttcaca actgctcctg ccatttgtca tgatggaaaa gcacactttc ctcgtgaagg
24841 tgtctttgtt tcaaatggca cacactggtt tgtaacacaa aggaattttt atgaaccaca
24901 aatcattact acagacaaca catttgtgtc tggtaactgt gatgttgtaa taggaattgt
24961 caacaacaca gtttatgatc ctttgcaacc tgaattagac tcattcaagg aggagttaga
25021 taaatatttt aagaatcata catcaccaga tgttgattta ggtgacatct ctggcattaa
25081 tgcttcagtt gtaaacattc aaaaagaaat tgaccgcctc aatgaggttg ccaagaattt
25141 aaatgaatct ctcatcgatc tccaagaact tggaaagtat gagcagtata taaaatggcc
25201 atggtacatt tggctaggtt ttatagctgg cttgattgcc atagtaatgg tgacaattat
25261 gctttgctgt atgaccagtt gctgtagttg tctcaagggc tgttgttctt gtggatcctg
25321 ctgcaaattt gatgaagacg actctgagcc agtgctcaaa ggagtcaaat tacattacac
25381 ataaacgaac ttatggattt gtttatgaga atcttcacaa ttggaactgt aactttgaag
25441 caaggtgaaa tcaaggatgc tactccttca gattttgttc gcgctactgc aacgataccg
25501 atacaagcct cactcccttt cggatggctt attgttggcg ttgcacttct tgctgttttt
25561 cagagcgctt ccaaaatcat aaccctcaaa aagagatggc aactagcact ctccaagggt
25621 gttcactttg tttgcaactt gctgttgttg tttgtaacag tttactcaca ccttttgctc
25681 gttgctgctg gccttgaagc cccttttctc tatctttatg ctttagtcta cttcttgcag
25741 agtataaact ttgtaagaat aataatgagg ctttggcttt gctggaaatg ccgttccaaa
25801 aacccattac tttatgatgc caactatttt ctttgctggc atactaattg ttacgactat
25861 tgtatacctt acaatagtgt aacttcttca attgtcatta cttcaggtga tggcacaaca
25921 agtcctattt ctgaacatga ctaccagatt ggtggttata ctgaaaaatg ggaatctgga
25981 gtaaaagact gtgttgtatt acacagttac ttcacttcag actattacca gctgtactca
26041 actcaattga gtacagacac tggtgttgaa catgttacct tcttcatcta caataaaatt
26101 gttgatgagc ctgaagaaca tgtccaaatt cacacaatcg acggttcatc cggagttgtt
26161 aatccagtaa tggaaccaat ttatgatgaa ccgacgacga ctactagcgt gcctttgtaa
26221 gcacaagctg atgagtacga acttatgtac tcattcgttt cggaagagac aggtacgtta
26281 atagttaata gcgtacttct ttttcttgct ttcgtggtat tcttgctagt tacactagcc
26341 atccttactg cgcttcgatt gtgtgcgtac tgctgcaata ttgttaacgt gagtcttgta
26401 aaaccttctt tttacgttta ctctcgtgtt aaaaatctga attcttctag agttcctgat
26461 cttctggtct aaacgaacta aatattatat tagtttttct gtttggaact ttaattttag
26521 ccatggcaga ttccaacggt actattaccg ttgaagagct taaaaagctc cttgaacaat
26581 ggaacctagt aataggtttc ctattcctta catggatttg tcttctacaa tttgcctatg
26641 ccaacaggaa taggtttttg tatataatta agttaatttt cctctggctg ttatggccag
26701 taactttagc ttgttttgtg cttgctgctg tttacagaat aaattggatc accggtggaa
26761 ttgctatcgc aatggcttgt cttgtaggct tgatgtggct cagctacttc attgcttctt
26821 tcagactgtt tgcgcgtacg cgttccatgt ggtcattcaa tccagaaact aacattcttc
26881 tcaacgtgcc actccatggc actattctga ccagaccgct tctagaaagt gaactcgtaa
26941 tcggagctgt gatccttcgt ggacatcttc gtattgctgg acaccatcta ggacgctgtg
27001 acatcaagga cctgcctaaa gaaatcactg ttgctacatc acgaacgctt tcttattaca
27061 aattgggagc ttcgcagcgt gtagcaggtg actcaggttt tgctgcatac agtcgctaca
27121 ggattggcaa ctataaatta aacacagacc attccagtag cagtgacaat attgctttgc
27181 ttgtacagta agtgacaaca gatgtttcat ctcgttgact ttcaggttac tatagcagag
27241 atattactaa ttattatgag gacttttaaa gtttccattt ggaatcttga ttacatcata
27301 aacctcataa ttaaaaattt atctaagtca ctaactgaga ataaatattc tcaattagat
27361 gaagagcaac caatggagat tgattaaacg aacatgaaaa ttattctttt cttggcactg
27421 ataacactcg ctacttgtga gctttatcac taccaagagt gtgttagagg tacaacagta
27481 cttttaaaag aaccttgctc ttctggaaca tacgagggca attcaccatt tcatcctcta
27541 gctgataaca aatttgcact gacttgcttt agcactcaat ttgcttttgc ttgtcctgac
27601 ggcgtaaaac acgtctatca gttacgtgcc agatcagttt cacctaaact gttcatcaga
27661 caagaggaag ttcaagaact ttactctcca atttttctta ttgttgcggc aatagtgttt
27721 ataacacttt gcttcacact caaaagaaag acagaatgat tgaactttca ttaattgact
27781 tctatttgtg ctttttagcc tttctgctat tccttgtttt aattatgctt attatctttt
27841 ggttctcact tgaactgcaa gatcataatg aaacttgtca cgcctaaacg aacatgaaat
27901 ttcttgtttt cttaggaatc atcacaactg tagctgcatt tcaccaagaa tgtagtttac
27961 agtcatgtac tcaacatcaa ccatatgtag ttgatgaccc gtgtcctatt cacttctatt
28021 ctaaatggta tattagagta ggagctagaa aatcagcacc tttaattgaa ttgtgcgtgg
28081 atgaggctgg ttctaaatca cccattcagt acatcgatat cggtaattat acagtttcct
28141 gtttaccttt tacaattaat tgccaggaac ctaaattggg tagtcttgta gtgcgttgtt
28201 cgttctatga agacttttta gagtatcatg acgttcgtgt tgttttagat ttcatctaaa
28261 cgaacaaact aaaatgtctg ataatggacc ccaaaatcag cgaaatgcac cccgcattac
28321 gtttggtgga ccctcagatt caactggcag taaccagaat ggagaacgca gtggggcgcg
28381 atcaaaacaa cgtcggcccc aaggtttacc caataatact gcgtcttggt tcaccgctct
28441 cactcaacat ggcaaggaag accttaaatt ccctcgagga caaggcgttc caattaacac
28501 caatagcagt ccagatgacc aaattggcta ctaccgaaga gctaccagac gaattcgtgg
28561 tggtgacggt aaaatgaaag atctcagtcc aagatggtat ttctactacc taggaactgg
28621 gccagaagct ggacttccct atggtgctaa caaagacggc atcatatggg ttgcaactga
28681 gggagccttg aatacaccaa aagatcacat tggcacccgc aatcctgcta acaatgctgc
28741 aatcgtgcta caacttcctc aaggaacaac attgccaaaa ggcttctacg cagaagggag
28801 cagaggcggc agtcaagcct cttctcgttc ctcatcacgt agtcgcaaca gttcaagaaa
28861 ttcaactcca ggcagcagta ggggaacttc tcctgctaga atggctggca atggcggtga
28921 tgctgctctt gctttgctgc tgcttgacag attgaaccag cttgagagca aaatgtctgg
28981 taaaggccaa caacaacaag gccaaactgt cactaagaaa tctgctgctg aggcttctaa
29041 gaagcctcgg caaaaacgta ctgccactaa agcatacaat gtaacacaag ctttcggcag
29101 acgtggtcca gaacaaaccc aaggaaattt tggggaccag gaactaatca gacaaggaac
29161 tgattacaaa cattggccgc aaattgcaca atttgccccc agcgcttcag cgttcttcgg
29221 aatgtcgcgc attggcatgg aagtcacacc ttcgggaacg tggttgacct acacaggtgc
29281 catcaaattg gatgacaaag atccaaattt caaagatcaa gtcattttgc tgaataagca
29341 tattgacgca tacaaaacat tcccaccaac agagcctaaa aaggacaaaa agaagaaggc
29401 tgatgaaact caagccttac cgcagagaca gaagaaacag caaactgtga ctcttcttcc
29461 tgctgcagat ttggatgatt tctccaaaca attgcaacaa tccatgagca gtgctgactc
29521 aactcaggcc taaactcatg cagaccacac aaggcagatg ggctatataa acgttttcgc
29581 ttttccgttt acgatatata gtctactctt gtgcagaatg aattctcgta actacatagc
29641 acaagtagat gtagttaact ttaatctcac atagcaatct ttaatcagtg tgtaacatta
29701 gggaggactt gaaagagcca ccacattttc accgaggcca cgcggagtac gatcgagtgt
29761 acagtgaaca atgctaggga gagctgccta tatggaagag ccctaatgtg taaaattaat
29821 tttagtagtg ctatccccat gtgattttaa tagcttctta ggagaatgac aaaaaaaaaa
29881 aaaaaaaaaa aaaaaaaaaa aaa
""".strip()
for s in " \n0123456789":
cc = cc.replace(s, "")
cc = cc.lower()
print(cc)
# Asn or Asp / B AAU, AAC; GAU, GAC
# Gln or Glu / Z CAA, CAG; GAA, GAG
# START AUG
tt = """Ala/A GCU,GCC,GCA,GCG
Ile/I AUU,AUC,AUA
Arg/R CGU,CGC,CGA,CGG,AGA,AGG
Leu/L CUU,CUC,CUA,CUG,UUA,UUG
Asn/N AAU,AAC
Lys/K AAA,AAG
Asp/D GAU,GAC
Met/M AUG
Phe/F UUU,UUC
Cys/C UGU,UGC
Pro/P CCU,CCC,CCA,CCG
Gln/Q CAA,CAG
Ser/S UCU,UCC,UCA,UCG,AGU,AGC
Glu/E GAA,GAG
Thr/T ACU,ACC,ACA,ACG
Trp/W UGG
Gly/G GGU,GGC,GGA,GGG
Tyr/Y UAU,UAC
His/H CAU,CAC
Val/V GUU,GUC,GUA,GUG
STOP UAA,UGA,UAG
""".strip()
dec = {}
for t in tt.split('\n'):
k, v = t.split(" ")
if "/" in k:
k = k.split("/")[-1].strip()
k = k.replace("STOP", "#")
v = v.replace(",", " ").replace(";", "").lower().replace('u', 't').split(" ")
for vv in v:
dec[vv] = k
print(dec)
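# quick sanity check on the codon table: the start codon maps to methionine and a stop codon to '#'
assert dec['atg'] == 'M' and dec['taa'] == '#'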
aa = []
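# translate each of the three forward reading frames; the frames end up concatenated in `aa`
# before being split at stop codons ('#') below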
for rf in range(3):
for i in range(rf, len(cc)-3,3):
aa.append(dec[cc[i:i+3]])
aa = ''.join(aa).split("#")
print(aa, end="\n")
# `aa` is a list of translated segments (split at stop codons); as a rough check,
# look at the longest segment, which should correspond to the longest ORF
print(max(aa, key=len))
###Output
_____no_output_____
###Markdown
Importing Necessary Libraries
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
import fbprophet # Necessary for forecasting.
###Output
_____no_output_____
###Markdown
Reading Dataset and Adding Some Necessary Columns
###Code
df = pd.read_csv('/content/drive/My Drive/tr/Covid19-Turkey.csv')
#Adding number of days column.
number_of_days = pd.DataFrame(np.arange(1,len(df.Date)+1,1))
a = {"Number Of Days": number_of_days.values}
df = df.assign(**a)
df.head()
#Adding daily deaths column.
yesterday_deaths = 0
Daily_deaths = []
for current_deaths in df['Total Deaths']:
if current_deaths>yesterday_deaths:
Daily_deaths.append(current_deaths-yesterday_deaths)
else :
Daily_deaths.append(0)
yesterday_deaths = current_deaths
Daily_deaths=pd.DataFrame(Daily_deaths)
df['Daily Deaths'] = Daily_deaths
#Adding daily recovered column.
yesterday_recovered = 0
Daily_recovered = []
for current_recovered in df['Total Recovered']:
if current_recovered>yesterday_recovered:
Daily_recovered.append(current_recovered-yesterday_recovered)
else :
Daily_recovered.append(0)
yesterday_recovered = current_recovered
Daily_recovered=pd.DataFrame(Daily_recovered)
df['Daily Recovered'] = Daily_recovered
###Output
_____no_output_____
###Markdown
Visualization **Total Cases by Days**
###Code
sns.lineplot(x="Number Of Days", y="Total Cases", data = df)
###Output
_____no_output_____
###Markdown
**Daily Cases by Days**
###Code
sns.lineplot(x="Number Of Days", y="Daily Cases", data = df)
###Output
_____no_output_____
###Markdown
**Total Deaths by Days**
###Code
sns.lineplot(x="Number Of Days", y="Total Deaths", data = df)
###Output
_____no_output_____
###Markdown
**Daily Deaths by Days**
###Code
sns.lineplot(x="Number Of Days", y="Daily Deaths", data = df)
###Output
_____no_output_____
###Markdown
**Daily Test Cases and Daily Cases by Days**
###Code
plt.plot(df['Number Of Days'],df['Daily Test Cases'],color ='blue',label ='Daily Test Cases')
plt.plot(df['Number Of Days'],df['Daily Cases'],color ='red',label='Daily Cases')
plt.legend()
plt.xlabel('Number Of Days')
plt.ylabel('Value')
###Output
_____no_output_____
###Markdown
**Total Cases, Daily Cases, and Daily Recovered by Days**
###Code
plt.plot(df['Number Of Days'],df['Daily Recovered'],color ='blue',label ='Daily Recovered')
plt.plot(df['Number Of Days'],df['Daily Cases'],color ='red',label='Daily Cases')
plt.plot(df['Number Of Days'],df['Total Cases'],color ='green',label ='Total Cases')
plt.legend()
plt.xlabel('Number Of Days')
plt.ylabel('Value')
###Output
_____no_output_____
###Markdown
Forecasting **Making Some New DataFrames from Dataset for Forecasting**
###Code
tc=df['Total Cases']
nod=df['Number Of Days']
date = df["Date"]
date = date.str.replace("/","-")
tc_nod = pd.DataFrame({"Total Cases": tc,"Date": date})
td=df['Total Deaths']
td_nod = pd.DataFrame({"Total Deaths": td,"Date": date})
tr=df['Total Recovered']
tr_nod = pd.DataFrame({"Total Recovered": tr,"Date": date})
dc=df['Daily Cases']
dc=pd.DataFrame({"Daily Cases": dc,"Date": date})
###Output
_____no_output_____
###Markdown
**Total Cases Forecasting**
###Code
tc_nod = tc_nod.rename(columns={'Date': 'ds', 'Total Cases': 'y'})
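# Prophet expects the training frame to have a date column named 'ds' and a value column named 'y' (hence the rename above)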
fbp1 = fbprophet.Prophet()
fbp1.fit(tc_nod)
future1 = fbp1.make_future_dataframe(periods=30,freq="M")
future1.tail()
forecast1 = fbp1.predict(future1)
forecast1[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig1 = fbp1.plot(forecast1)
plt.xlabel('Days')
plt.ylabel('Total Cases')
plt.ticklabel_format(style='plain', axis='y')
###Output
_____no_output_____
###Markdown
**Total Deaths Forecasting**
###Code
td_nod = td_nod.rename(columns={'Date': 'ds', 'Total Deaths': 'y'})
fbp2 = fbprophet.Prophet()
fbp2.fit(td_nod)
future2 = fbp2.make_future_dataframe(periods=30,freq="M")
future2.tail()
forecast2 = fbp2.predict(future2)
forecast2[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig2 = fbp2.plot(forecast2)
plt.xlabel('Date')
plt.ylabel('Total Deaths')
plt.ticklabel_format(style='plain', axis='y')
###Output
_____no_output_____
###Markdown
**Total Recovered Forecasting**
###Code
tr_nod = tr_nod.rename(columns={'Date': 'ds', 'Total Recovered': 'y'})
fbp3 = fbprophet.Prophet()
fbp3.fit(tr_nod)
future3 = fbp3.make_future_dataframe(periods=30,freq="M")
future3.tail()
forecast3 = fbp3.predict(future3)
forecast3[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig3 = fbp3.plot(forecast3)
plt.xlabel('Days')
plt.ylabel('Total Recovered')
plt.ticklabel_format(style='plain', axis='y')
###Output
_____no_output_____
###Markdown
**Daily Cases Forecasting**
###Code
dc = dc.rename(columns={'Date': 'ds', 'Daily Cases': 'y'})
fbp4 = fbprophet.Prophet()
fbp4.fit(dc)
future4 = fbp4.make_future_dataframe(periods=100,freq="D")
future4.tail()
forecast4 = fbp4.predict(future4)
forecast4[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig4 = fbp4.plot(forecast4)
plt.xlabel('Days')
plt.ylabel('Daily Cases')
###Output
_____no_output_____
###Markdown
Final Project * Starter Code * Pick Table and Plot * Finite Difference with Model and Plot * Integrate Model and Plot --- Starter Code---
###Code
import sys
import requests
import json
import re
import datetime
import bs4
import matplotlib.pyplot as plt
import scipy.optimize
import numpy as np
#
# Helper Functions
#
def __guess_date(datestr):
"""
parses strings of the form "Jan30" to python dates with year 2020
"""
for fmt in ["%b%d", "%b %d"]:
try:
return datetime.datetime.strptime(datestr+"-2020", fmt+"-%Y")
except:
pass
return datetime.datetime(2020, 1, 1, 0, 0)
def get_corona_data():
"""
returns a list of dictionaries with various corona data each of the form:
table = {
"name": "table name",
"title": "table title",
"subtitle": "table subtitle",
"x": ["Jan01", "Jan02", "Jan03", "Jan04", "Jan05"],
"series": [
{
"name": "name for this line",
"values": [1, 2, 4, 8, 16]
}
]
}
"""
url = "https://www.worldometers.info/coronavirus/coronavirus-cases/"
html = bs4.BeautifulSoup(requests.get(url).text, "html.parser")
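# the page embeds its chart data as JavaScript Highcharts.chart(name, config) calls; the
# regexes below pull out each chart's name and massage its config object into parseable JSON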
get_table = re.compile("Highcharts.chart\((.*?),(.*?)\);")
add_quotes = re.compile("([a-zA-Z0-9]+):")
remove_inner_quotes = re.compile("'([^'\"]*)\"?([a-zA-Z0-9]*)\"?(:?[^']*)'")
remove_last_comma = re.compile(",\s*\]")
remove_last_comma2 = re.compile(",\s*\}")
tables = list()
for tag in html.findAll("script"):
text = tag.text
if "Highcharts.chart(" in text:
match = get_table.search(text.replace("\n",""))
name = match.group(1)
name = name.replace("'","")
try:
datastr = match.group(2)
datastr = datastr.replace("d\\'Ivoire", "dIvoire")
jstr = add_quotes.sub("\"\\1\":", datastr)
jstr2 = remove_inner_quotes.sub("'\\1\\2\\3'", jstr)
jstr2 = remove_inner_quotes.sub("'\\1\\2\\3'", jstr2)
jstr2 = remove_inner_quotes.sub("\"\\1\\2\\3\"", jstr2)
jstr2 = jstr2.replace("'", '"')
jstr3 = remove_last_comma.sub("]", jstr2)
jstr3 = remove_last_comma2.sub("}", jstr3)
data = json.loads(jstr3)
if "xAxis" not in data:
continue
xdata = data["xAxis"]["categories"]
xdata = [__guess_date(datestr) for datestr in xdata]
series = list()
for line in data["series"]:
series.append({"name":line["name"], "values":line["data"]})
tables.append({
"name": name,
"title": data["title"]["text"],
"subtitle": data["subtitle"]["text"],
"x": xdata,
"series": series
})
except Exception as e:
msglen = 148 - len(name)
errmsg = str(e)
longmsg = len(errmsg) > msglen-3
errmsg = errmsg[:msglen]
if longmsg:
errmsg = errmsg[:-3] + "..."
print(f"warning ({name}):", errmsg, file=sys.stderr)
return tables
# show available data tables
tables = get_corona_data()
for t in tables:
print(f"{t['name']} - {t['title']}")
print("\n")
# pick the table with given name
TABLE_NAME = "coronavirus-cases-linear"
table = next(filter(lambda t: t["name"] == TABLE_NAME, tables))
# print some of the table data
print(table["title"])
y = table["series"][0]["values"]
print(np.array(y))
###Output
coronavirus-cases-linear - Total Cases
coronavirus - Daily New Cases
coronavirus-cases-growth - Growth Factor
coronavirus-cases-linear-outchina - Total Cases outside of China
coronavirus-cases-log-outchina - Total Cases outside of China
coronavirus-outchina - Daily New Cases outside of China
coronavirus-cases-growth-outchina - Growth Factor outside of China
graph-active-cases-total - Active Cases
graph-cured-total - Total Cured
graph-cured-daily - Daily Cured
cases-cured-daily - New Cases vs. New Recoveries
total-serious-linear - Total Serious and Critical Cases
total-serious-log - Total Serious and Critical Cases
deaths-cured-outcome - Outcome of total closed cases (recovery rate vs death rate)
Total Cases
[ 580 845 1317 2015 2800 4581 6058 7813 9823
11950 14553 17391 20630 24545 28266 31439 34876 37552
40553 43099 45134 59287 64438 67100 69197 71329 73332
75184 75700 76677 77673 78651 79205 80087 80828 81820
83112 84615 86604 88585 90443 93016 95314 98425 102050
106099 109991 114381 118948 126214 134509 145416 156475 169511
182431 198161 218843 244988 275680 305132 337612 379105 422940
471497 532491 597044 663805 724220 786006 859798 936851 1016948
1118684 1203505 1275007 1349051 1434167 1518614 1604252 1698881 1779842
1852365 1923937]
###Markdown
--- Pick Table and Plot---
###Code
# get total cases data
TABLE_NAME = "coronavirus-cases-linear"
table = next(filter(lambda t: t["name"] == TABLE_NAME, tables))
series = table["series"][0]
ylabel = series["name"]
title = table["title"]
x = table["x"]
y = series["values"]
# plot data
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot_date(x, y, label=ylabel)
ax.set(title=table["title"])
ax.set(xlabel="Date")
ax.set(ylabel="Cases")
ax.legend()
print("plotting covid data")
plt.show()
###Output
plotting covid data
###Markdown
--- Finite Difference with Model and Plot ---
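The cumulative case counts are first converted to daily new cases with a forward difference, and that daily series is then fit with a Gaussian bump (the same model as the `curve` lambda passed to `scipy.optimize.curve_fit` below):

$$\Delta y_i = y_{i+1} - y_i, \qquad \Delta y(x) \approx A\, e^{-\frac{(x-\mu)^2}{2 s^2}}$$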
###Code
# calc deriv
x2 = list(x)
x2.pop()
y2 = list()
oldy = y[0]
for newy in y[1:]:
y2.append(newy - oldy)
oldy = newy
# fit gaussian to deriv: y = A exp( -(x-mu)^2 / (2 s^2) )
offsetx = 30
x2 = x2[offsetx:]
y2 = y2[offsetx:]
shiftx = np.array([(xx - x2[0]).days for xx in x2])
curve = lambda x,A,s,mu: A * np.exp(-0.5*np.square(x-mu)/s**2)
(A, s, mu), cov = scipy.optimize.curve_fit(curve, shiftx, y2, p0=(9e6,10,100))
fity = A * np.exp(-0.5*np.square(shiftx-mu)/s**2)
print(f"y = A e^(-1/2*(x-mu)^2/s^2)\nA = {A}\ns = {s}\nmu = {mu}")
# plot deriv with fit
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot_date(x2, y2, label="cases")
ax.plot_date(x2, fity, "-", label="model e^(-x^2)")
ax.set(title="Cases by Day")
ax.set(xlabel="Date")
ax.set(ylabel="Cases")
ax.legend()
print("plotting covid data derivative")
plt.show()
###Output
y = A e^(-1/2*(x-mu)^2/s^2)
A = 85878.30512722685
s = 11.707166157289212
mu = 44.385944010651244
plotting covid data derivative
###Markdown
--- Integrate Model and Plot ---
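Integrating the fitted Gaussian over all days gives the total number of cases the model predicts; the code below reports this as the carrying capacity, and the `np.cumsum` of the model is the discrete analogue of the integral:

$$\int_{-\infty}^{\infty} A\, e^{-\frac{(x-\mu)^2}{2 s^2}}\, dx = A\, s\, \sqrt{2\pi}$$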
###Code
# plot orig with fit to future
newx2 = np.array([i for i in range(len(x) + 20)])
fity2 = A * np.exp(-0.5*np.square(newx2-mu)/s**2)
fity2 = np.cumsum(fity2)
newx = [x2[0] + datetime.timedelta(days=xx.item(0)) for xx in newx2]
cap = A*np.sqrt(2*np.pi)*s
print(f"carrying capacity: {cap}")
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot_date(x, y, label="cases")
ax.plot_date(newx, fity2, "-", label="integrated model")
ax.set(title=table["title"] + " Extrapolation")
ax.set(xlabel="Date")
ax.set(ylabel="Cases")
ax.legend()
print("plotting covid data with extrapolation")
plt.show()
###Output
carrying capacity: 2520142.9801302557
plotting covid data with extrapolation
###Markdown
Visualize df using Plotly (Optional)
###Code
# import plotly.express as px
# import datetime
# today_date = datetime.datetime.today().date().strftime("%d-%m-%Y")
# fig = px.choropleth(df, locations="iso_alpha",
# color="TotalCases",
# hover_name="Country",
# color_continuous_scale=px.colors.diverging.Portland,
# title='Daily Coronavirus Cases in the World [{}]'.format(today_date)\
# +' Source: <a href="https://www.worldometers.info/coronavirus/">Worldometers</a>',
# height=600,
# range_color=[0,1000],
# labels={'TotalCases':'Min Number of cases'})
# fig.show()
###Output
_____no_output_____
###Markdown
###Code
# packages to install go here
!pip install nglview
import numpy as np
import nglview as nv
import MDAnalysis as mda  # needed for the quaternion helper used below (may need its own pip install)
# https://www.ncbi.nlm.nih.gov/nuccore/NC_045512 nucleotides
cc = """1 attaaaggtt tataccttcc caggtaacaa accaaccaac tttcgatctc ttgtagatct
61 gttctctaaa cgaactttaa aatctgtgtg gctgtcactc ggctgcatgc ttagtgcact
121 cacgcagtat aattaataac taattactgt cgttgacagg acacgagtaa ctcgtctatc
181 ttctgcaggc tgcttacggt ttcgtccgtg ttgcagccga tcatcagcac atctaggttt
241 cgtccgggtg tgaccgaaag gtaagatgga gagccttgtc cctggtttca acgagaaaac
301 acacgtccaa ctcagtttgc ctgttttaca ggttcgcgac gtgctcgtac gtggctttgg
361 agactccgtg gaggaggtct tatcagaggc acgtcaacat cttaaagatg gcacttgtgg
421 cttagtagaa gttgaaaaag gcgttttgcc tcaacttgaa cagccctatg tgttcatcaa
481 acgttcggat gctcgaactg cacctcatgg tcatgttatg gttgagctgg tagcagaact
541 cgaaggcatt cagtacggtc gtagtggtga gacacttggt gtccttgtcc ctcatgtggg
601 cgaaatacca gtggcttacc gcaaggttct tcttcgtaag aacggtaata aaggagctgg
661 tggccatagt tacggcgccg atctaaagtc atttgactta ggcgacgagc ttggcactga
721 tccttatgaa gattttcaag aaaactggaa cactaaacat agcagtggtg ttacccgtga
781 actcatgcgt gagcttaacg gaggggcata cactcgctat gtcgataaca acttctgtgg
841 ccctgatggc taccctcttg agtgcattaa agaccttcta gcacgtgctg gtaaagcttc
901 atgcactttg tccgaacaac tggactttat tgacactaag aggggtgtat actgctgccg
961 tgaacatgag catgaaattg cttggtacac ggaacgttct gaaaagagct atgaattgca
1021 gacacctttt gaaattaaat tggcaaagaa atttgacacc ttcaatgggg aatgtccaaa
1081 ttttgtattt cccttaaatt ccataatcaa gactattcaa ccaagggttg aaaagaaaaa
1141 gcttgatggc tttatgggta gaattcgatc tgtctatcca gttgcgtcac caaatgaatg
1201 caaccaaatg tgcctttcaa ctctcatgaa gtgtgatcat tgtggtgaaa cttcatggca
1261 gacgggcgat tttgttaaag ccacttgcga attttgtggc actgagaatt tgactaaaga
1321 aggtgccact acttgtggtt acttacccca aaatgctgtt gttaaaattt attgtccagc
1381 atgtcacaat tcagaagtag gacctgagca tagtcttgcc gaataccata atgaatctgg
1441 cttgaaaacc attcttcgta agggtggtcg cactattgcc tttggaggct gtgtgttctc
1501 ttatgttggt tgccataaca agtgtgccta ttgggttcca cgtgctagcg ctaacatagg
1561 ttgtaaccat acaggtgttg ttggagaagg ttccgaaggt cttaatgaca accttcttga
1621 aatactccaa aaagagaaag tcaacatcaa tattgttggt gactttaaac ttaatgaaga
1681 gatcgccatt attttggcat ctttttctgc ttccacaagt gcttttgtgg aaactgtgaa
1741 aggtttggat tataaagcat tcaaacaaat tgttgaatcc tgtggtaatt ttaaagttac
1801 aaaaggaaaa gctaaaaaag gtgcctggaa tattggtgaa cagaaatcaa tactgagtcc
1861 tctttatgca tttgcatcag aggctgctcg tgttgtacga tcaattttct cccgcactct
1921 tgaaactgct caaaattctg tgcgtgtttt acagaaggcc gctataacaa tactagatgg
1981 aatttcacag tattcactga gactcattga tgctatgatg ttcacatctg atttggctac
2041 taacaatcta gttgtaatgg cctacattac aggtggtgtt gttcagttga cttcgcagtg
2101 gctaactaac atctttggca ctgtttatga aaaactcaaa cccgtccttg attggcttga
2161 agagaagttt aaggaaggtg tagagtttct tagagacggt tgggaaattg ttaaatttat
2221 ctcaacctgt gcttgtgaaa ttgtcggtgg acaaattgtc acctgtgcaa aggaaattaa
2281 ggagagtgtt cagacattct ttaagcttgt aaataaattt ttggctttgt gtgctgactc
2341 tatcattatt ggtggagcta aacttaaagc cttgaattta ggtgaaacat ttgtcacgca
2401 ctcaaaggga ttgtacagaa agtgtgttaa atccagagaa gaaactggcc tactcatgcc
2461 tctaaaagcc ccaaaagaaa ttatcttctt agagggagaa acacttccca cagaagtgtt
2521 aacagaggaa gttgtcttga aaactggtga tttacaacca ttagaacaac ctactagtga
2581 agctgttgaa gctccattgg ttggtacacc agtttgtatt aacgggctta tgttgctcga
2641 aatcaaagac acagaaaagt actgtgccct tgcacctaat atgatggtaa caaacaatac
2701 cttcacactc aaaggcggtg caccaacaaa ggttactttt ggtgatgaca ctgtgataga
2761 agtgcaaggt tacaagagtg tgaatatcac ttttgaactt gatgaaagga ttgataaagt
2821 acttaatgag aagtgctctg cctatacagt tgaactcggt acagaagtaa atgagttcgc
2881 ctgtgttgtg gcagatgctg tcataaaaac tttgcaacca gtatctgaat tacttacacc
2941 actgggcatt gatttagatg agtggagtat ggctacatac tacttatttg atgagtctgg
3001 tgagtttaaa ttggcttcac atatgtattg ttctttctac cctccagatg aggatgaaga
3061 agaaggtgat tgtgaagaag aagagtttga gccatcaact caatatgagt atggtactga
3121 agatgattac caaggtaaac ctttggaatt tggtgccact tctgctgctc ttcaacctga
3181 agaagagcaa gaagaagatt ggttagatga tgatagtcaa caaactgttg gtcaacaaga
3241 cggcagtgag gacaatcaga caactactat tcaaacaatt gttgaggttc aacctcaatt
3301 agagatggaa cttacaccag ttgttcagac tattgaagtg aatagtttta gtggttattt
3361 aaaacttact gacaatgtat acattaaaaa tgcagacatt gtggaagaag ctaaaaaggt
3421 aaaaccaaca gtggttgtta atgcagccaa tgtttacctt aaacatggag gaggtgttgc
3481 aggagcctta aataaggcta ctaacaatgc catgcaagtt gaatctgatg attacatagc
3541 tactaatgga ccacttaaag tgggtggtag ttgtgtttta agcggacaca atcttgctaa
3601 acactgtctt catgttgtcg gcccaaatgt taacaaaggt gaagacattc aacttcttaa
3661 gagtgcttat gaaaatttta atcagcacga agttctactt gcaccattat tatcagctgg
3721 tatttttggt gctgacccta tacattcttt aagagtttgt gtagatactg ttcgcacaaa
3781 tgtctactta gctgtctttg ataaaaatct ctatgacaaa cttgtttcaa gctttttgga
3841 aatgaagagt gaaaagcaag ttgaacaaaa gatcgctgag attcctaaag aggaagttaa
3901 gccatttata actgaaagta aaccttcagt tgaacagaga aaacaagatg ataagaaaat
3961 caaagcttgt gttgaagaag ttacaacaac tctggaagaa actaagttcc tcacagaaaa
4021 cttgttactt tatattgaca ttaatggcaa tcttcatcca gattctgcca ctcttgttag
4081 tgacattgac atcactttct taaagaaaga tgctccatat atagtgggtg atgttgttca
4141 agagggtgtt ttaactgctg tggttatacc tactaaaaag gctggtggca ctactgaaat
4201 gctagcgaaa gctttgagaa aagtgccaac agacaattat ataaccactt acccgggtca
4261 gggtttaaat ggttacactg tagaggaggc aaagacagtg cttaaaaagt gtaaaagtgc
4321 cttttacatt ctaccatcta ttatctctaa tgagaagcaa gaaattcttg gaactgtttc
4381 ttggaatttg cgagaaatgc ttgcacatgc agaagaaaca cgcaaattaa tgcctgtctg
4441 tgtggaaact aaagccatag tttcaactat acagcgtaaa tataagggta ttaaaataca
4501 agagggtgtg gttgattatg gtgctagatt ttacttttac accagtaaaa caactgtagc
4561 gtcacttatc aacacactta acgatctaaa tgaaactctt gttacaatgc cacttggcta
4621 tgtaacacat ggcttaaatt tggaagaagc tgctcggtat atgagatctc tcaaagtgcc
4681 agctacagtt tctgtttctt cacctgatgc tgttacagcg tataatggtt atcttacttc
4741 ttcttctaaa acacctgaag aacattttat tgaaaccatc tcacttgctg gttcctataa
4801 agattggtcc tattctggac aatctacaca actaggtata gaatttctta agagaggtga
4861 taaaagtgta tattacacta gtaatcctac cacattccac ctagatggtg aagttatcac
4921 ctttgacaat cttaagacac ttctttcttt gagagaagtg aggactatta aggtgtttac
4981 aacagtagac aacattaacc tccacacgca agttgtggac atgtcaatga catatggaca
5041 acagtttggt ccaacttatt tggatggagc tgatgttact aaaataaaac ctcataattc
5101 acatgaaggt aaaacatttt atgttttacc taatgatgac actctacgtg ttgaggcttt
5161 tgagtactac cacacaactg atcctagttt tctgggtagg tacatgtcag cattaaatca
5221 cactaaaaag tggaaatacc cacaagttaa tggtttaact tctattaaat gggcagataa
5281 caactgttat cttgccactg cattgttaac actccaacaa atagagttga agtttaatcc
5341 acctgctcta caagatgctt attacagagc aagggctggt gaagctgcta acttttgtgc
5401 acttatctta gcctactgta ataagacagt aggtgagtta ggtgatgtta gagaaacaat
5461 gagttacttg tttcaacatg ccaatttaga ttcttgcaaa agagtcttga acgtggtgtg
5521 taaaacttgt ggacaacagc agacaaccct taagggtgta gaagctgtta tgtacatggg
5581 cacactttct tatgaacaat ttaagaaagg tgttcagata ccttgtacgt gtggtaaaca
5641 agctacaaaa tatctagtac aacaggagtc accttttgtt atgatgtcag caccacctgc
5701 tcagtatgaa cttaagcatg gtacatttac ttgtgctagt gagtacactg gtaattacca
5761 gtgtggtcac tataaacata taacttctaa agaaactttg tattgcatag acggtgcttt
5821 acttacaaag tcctcagaat acaaaggtcc tattacggat gttttctaca aagaaaacag
5881 ttacacaaca accataaaac cagttactta taaattggat ggtgttgttt gtacagaaat
5941 tgaccctaag ttggacaatt attataagaa agacaattct tatttcacag agcaaccaat
6001 tgatcttgta ccaaaccaac catatccaaa cgcaagcttc gataatttta agtttgtatg
6061 tgataatatc aaatttgctg atgatttaaa ccagttaact ggttataaga aacctgcttc
6121 aagagagctt aaagttacat ttttccctga cttaaatggt gatgtggtgg ctattgatta
6181 taaacactac acaccctctt ttaagaaagg agctaaattg ttacataaac ctattgtttg
6241 gcatgttaac aatgcaacta ataaagccac gtataaacca aatacctggt gtatacgttg
6301 tctttggagc acaaaaccag ttgaaacatc aaattcgttt gatgtactga agtcagagga
6361 cgcgcaggga atggataatc ttgcctgcga agatctaaaa ccagtctctg aagaagtagt
6421 ggaaaatcct accatacaga aagacgttct tgagtgtaat gtgaaaacta ccgaagttgt
6481 aggagacatt atacttaaac cagcaaataa tagtttaaaa attacagaag aggttggcca
6541 cacagatcta atggctgctt atgtagacaa ttctagtctt actattaaga aacctaatga
6601 attatctaga gtattaggtt tgaaaaccct tgctactcat ggtttagctg ctgttaatag
6661 tgtcccttgg gatactatag ctaattatgc taagcctttt cttaacaaag ttgttagtac
6721 aactactaac atagttacac ggtgtttaaa ccgtgtttgt actaattata tgccttattt
6781 ctttacttta ttgctacaat tgtgtacttt tactagaagt acaaattcta gaattaaagc
6841 atctatgccg actactatag caaagaatac tgttaagagt gtcggtaaat tttgtctaga
6901 ggcttcattt aattatttga agtcacctaa tttttctaaa ctgataaata ttataatttg
6961 gtttttacta ttaagtgttt gcctaggttc tttaatctac tcaaccgctg ctttaggtgt
7021 tttaatgtct aatttaggca tgccttctta ctgtactggt tacagagaag gctatttgaa
7081 ctctactaat gtcactattg caacctactg tactggttct ataccttgta gtgtttgtct
7141 tagtggttta gattctttag acacctatcc ttctttagaa actatacaaa ttaccatttc
7201 atcttttaaa tgggatttaa ctgcttttgg cttagttgca gagtggtttt tggcatatat
7261 tcttttcact aggtttttct atgtacttgg attggctgca atcatgcaat tgtttttcag
7321 ctattttgca gtacatttta ttagtaattc ttggcttatg tggttaataa ttaatcttgt
7381 acaaatggcc ccgatttcag ctatggttag aatgtacatc ttctttgcat cattttatta
7441 tgtatggaaa agttatgtgc atgttgtaga cggttgtaat tcatcaactt gtatgatgtg
7501 ttacaaacgt aatagagcaa caagagtcga atgtacaact attgttaatg gtgttagaag
7561 gtccttttat gtctatgcta atggaggtaa aggcttttgc aaactacaca attggaattg
7621 tgttaattgt gatacattct gtgctggtag tacatttatt agtgatgaag ttgcgagaga
7681 cttgtcacta cagtttaaaa gaccaataaa tcctactgac cagtcttctt acatcgttga
7741 tagtgttaca gtgaagaatg gttccatcca tctttacttt gataaagctg gtcaaaagac
7801 ttatgaaaga cattctctct ctcattttgt taacttagac aacctgagag ctaataacac
7861 taaaggttca ttgcctatta atgttatagt ttttgatggt aaatcaaaat gtgaagaatc
7921 atctgcaaaa tcagcgtctg tttactacag tcagcttatg tgtcaaccta tactgttact
7981 agatcaggca ttagtgtctg atgttggtga tagtgcggaa gttgcagtta aaatgtttga
8041 tgcttacgtt aatacgtttt catcaacttt taacgtacca atggaaaaac tcaaaacact
8101 agttgcaact gcagaagctg aacttgcaaa gaatgtgtcc ttagacaatg tcttatctac
8161 ttttatttca gcagctcggc aagggtttgt tgattcagat gtagaaacta aagatgttgt
8221 tgaatgtctt aaattgtcac atcaatctga catagaagtt actggcgata gttgtaataa
8281 ctatatgctc acctataaca aagttgaaaa catgacaccc cgtgaccttg gtgcttgtat
8341 tgactgtagt gcgcgtcata ttaatgcgca ggtagcaaaa agtcacaaca ttgctttgat
8401 atggaacgtt aaagatttca tgtcattgtc tgaacaacta cgaaaacaaa tacgtagtgc
8461 tgctaaaaag aataacttac cttttaagtt gacatgtgca actactagac aagttgttaa
8521 tgttgtaaca acaaagatag cacttaaggg tggtaaaatt gttaataatt ggttgaagca
8581 gttaattaaa gttacacttg tgttcctttt tgttgctgct attttctatt taataacacc
8641 tgttcatgtc atgtctaaac atactgactt ttcaagtgaa atcataggat acaaggctat
8701 tgatggtggt gtcactcgtg acatagcatc tacagatact tgttttgcta acaaacatgc
8761 tgattttgac acatggttta gccagcgtgg tggtagttat actaatgaca aagcttgccc
8821 attgattgct gcagtcataa caagagaagt gggttttgtc gtgcctggtt tgcctggcac
8881 gatattacgc acaactaatg gtgacttttt gcatttctta cctagagttt ttagtgcagt
8941 tggtaacatc tgttacacac catcaaaact tatagagtac actgactttg caacatcagc
9001 ttgtgttttg gctgctgaat gtacaatttt taaagatgct tctggtaagc cagtaccata
9061 ttgttatgat accaatgtac tagaaggttc tgttgcttat gaaagtttac gccctgacac
9121 acgttatgtg ctcatggatg gctctattat tcaatttcct aacacctacc ttgaaggttc
9181 tgttagagtg gtaacaactt ttgattctga gtactgtagg cacggcactt gtgaaagatc
9241 agaagctggt gtttgtgtat ctactagtgg tagatgggta cttaacaatg attattacag
9301 atctttacca ggagttttct gtggtgtaga tgctgtaaat ttacttacta atatgtttac
9361 accactaatt caacctattg gtgctttgga catatcagca tctatagtag ctggtggtat
9421 tgtagctatc gtagtaacat gccttgccta ctattttatg aggtttagaa gagcttttgg
9481 tgaatacagt catgtagttg cctttaatac tttactattc cttatgtcat tcactgtact
9541 ctgtttaaca ccagtttact cattcttacc tggtgtttat tctgttattt acttgtactt
9601 gacattttat cttactaatg atgtttcttt tttagcacat attcagtgga tggttatgtt
9661 cacaccttta gtacctttct ggataacaat tgcttatatc atttgtattt ccacaaagca
9721 tttctattgg ttctttagta attacctaaa gagacgtgta gtctttaatg gtgtttcctt
9781 tagtactttt gaagaagctg cgctgtgcac ctttttgtta aataaagaaa tgtatctaaa
9841 gttgcgtagt gatgtgctat tacctcttac gcaatataat agatacttag ctctttataa
9901 taagtacaag tattttagtg gagcaatgga tacaactagc tacagagaag ctgcttgttg
9961 tcatctcgca aaggctctca atgacttcag taactcaggt tctgatgttc tttaccaacc
10021 accacaaacc tctatcacct cagctgtttt gcagagtggt tttagaaaaa tggcattccc
10081 atctggtaaa gttgagggtt gtatggtaca agtaacttgt ggtacaacta cacttaacgg
10141 tctttggctt gatgacgtag tttactgtcc aagacatgtg atctgcacct ctgaagacat
10201 gcttaaccct aattatgaag atttactcat tcgtaagtct aatcataatt tcttggtaca
10261 ggctggtaat gttcaactca gggttattgg acattctatg caaaattgtg tacttaagct
10321 taaggttgat acagccaatc ctaagacacc taagtataag tttgttcgca ttcaaccagg
10381 acagactttt tcagtgttag cttgttacaa tggttcacca tctggtgttt accaatgtgc
10441 tatgaggccc aatttcacta ttaagggttc attccttaat ggttcatgtg gtagtgttgg
10501 ttttaacata gattatgact gtgtctcttt ttgttacatg caccatatgg aattaccaac
10561 tggagttcat gctggcacag acttagaagg taacttttat ggaccttttg ttgacaggca
10621 aacagcacaa gcagctggta cggacacaac tattacagtt aatgttttag cttggttgta
10681 cgctgctgtt ataaatggag acaggtggtt tctcaatcga tttaccacaa ctcttaatga
10741 ctttaacctt gtggctatga agtacaatta tgaacctcta acacaagacc atgttgacat
10801 actaggacct ctttctgctc aaactggaat tgccgtttta gatatgtgtg cttcattaaa
10861 agaattactg caaaatggta tgaatggacg taccatattg ggtagtgctt tattagaaga
10921 tgaatttaca ccttttgatg ttgttagaca atgctcaggt gttactttcc aaagtgcagt
10981 gaaaagaaca atcaagggta cacaccactg gttgttactc acaattttga cttcactttt
11041 agttttagtc cagagtactc aatggtcttt gttctttttt ttgtatgaaa atgccttttt
11101 accttttgct atgggtatta ttgctatgtc tgcttttgca atgatgtttg tcaaacataa
11161 gcatgcattt ctctgtttgt ttttgttacc ttctcttgcc actgtagctt attttaatat
11221 ggtctatatg cctgctagtt gggtgatgcg tattatgaca tggttggata tggttgatac
11281 tagtttgtct ggttttaagc taaaagactg tgttatgtat gcatcagctg tagtgttact
11341 aatccttatg acagcaagaa ctgtgtatga tgatggtgct aggagagtgt ggacacttat
11401 gaatgtcttg acactcgttt ataaagttta ttatggtaat gctttagatc aagccatttc
11461 catgtgggct cttataatct ctgttacttc taactactca ggtgtagtta caactgtcat
11521 gtttttggcc agaggtattg tttttatgtg tgttgagtat tgccctattt tcttcataac
11581 tggtaataca cttcagtgta taatgctagt ttattgtttc ttaggctatt tttgtacttg
11641 ttactttggc ctcttttgtt tactcaaccg ctactttaga ctgactcttg gtgtttatga
11701 ttacttagtt tctacacagg agtttagata tatgaattca cagggactac tcccacccaa
11761 gaatagcata gatgccttca aactcaacat taaattgttg ggtgttggtg gcaaaccttg
11821 tatcaaagta gccactgtac agtctaaaat gtcagatgta aagtgcacat cagtagtctt
11881 actctcagtt ttgcaacaac tcagagtaga atcatcatct aaattgtggg ctcaatgtgt
11941 ccagttacac aatgacattc tcttagctaa agatactact gaagcctttg aaaaaatggt
12001 ttcactactt tctgttttgc tttccatgca gggtgctgta gacataaaca agctttgtga
12061 agaaatgctg gacaacaggg caaccttaca agctatagcc tcagagttta gttcccttcc
12121 atcatatgca gcttttgcta ctgctcaaga agcttatgag caggctgttg ctaatggtga
12181 ttctgaagtt gttcttaaaa agttgaagaa gtctttgaat gtggctaaat ctgaatttga
12241 ccgtgatgca gccatgcaac gtaagttgga aaagatggct gatcaagcta tgacccaaat
12301 gtataaacag gctagatctg aggacaagag ggcaaaagtt actagtgcta tgcagacaat
12361 gcttttcact atgcttagaa agttggataa tgatgcactc aacaacatta tcaacaatgc
12421 aagagatggt tgtgttccct tgaacataat acctcttaca acagcagcca aactaatggt
12481 tgtcatacca gactataaca catataaaaa tacgtgtgat ggtacaacat ttacttatgc
12541 atcagcattg tgggaaatcc aacaggttgt agatgcagat agtaaaattg ttcaacttag
12601 tgaaattagt atggacaatt cacctaattt agcatggcct cttattgtaa cagctttaag
12661 ggccaattct gctgtcaaat tacagaataa tgagcttagt cctgttgcac tacgacagat
12721 gtcttgtgct gccggtacta cacaaactgc ttgcactgat gacaatgcgt tagcttacta
12781 caacacaaca aagggaggta ggtttgtact tgcactgtta tccgatttac aggatttgaa
12841 atgggctaga ttccctaaga gtgatggaac tggtactatc tatacagaac tggaaccacc
12901 ttgtaggttt gttacagaca cacctaaagg tcctaaagtg aagtatttat actttattaa
12961 aggattaaac aacctaaata gaggtatggt acttggtagt ttagctgcca cagtacgtct
13021 acaagctggt aatgcaacag aagtgcctgc caattcaact gtattatctt tctgtgcttt
13081 tgctgtagat gctgctaaag cttacaaaga ttatctagct agtgggggac aaccaatcac
13141 taattgtgtt aagatgttgt gtacacacac tggtactggt caggcaataa cagttacacc
13201 ggaagccaat atggatcaag aatcctttgg tggtgcatcg tgttgtctgt actgccgttg
13261 ccacatagat catccaaatc ctaaaggatt ttgtgactta aaaggtaagt atgtacaaat
13321 acctacaact tgtgctaatg accctgtggg ttttacactt aaaaacacag tctgtaccgt
13381 ctgcggtatg tggaaaggtt atggctgtag ttgtgatcaa ctccgcgaac ccatgcttca
13441 gtcagctgat gcacaatcgt ttttaaacgg gtttgcggtg taagtgcagc ccgtcttaca
13501 ccgtgcggca caggcactag tactgatgtc gtatacaggg cttttgacat ctacaatgat
13561 aaagtagctg gttttgctaa attcctaaaa actaattgtt gtcgcttcca agaaaaggac
13621 gaagatgaca atttaattga ttcttacttt gtagttaaga gacacacttt ctctaactac
13681 caacatgaag aaacaattta taatttactt aaggattgtc cagctgttgc taaacatgac
13741 ttctttaagt ttagaataga cggtgacatg gtaccacata tatcacgtca acgtcttact
13801 aaatacacaa tggcagacct cgtctatgct ttaaggcatt ttgatgaagg taattgtgac
13861 acattaaaag aaatacttgt cacatacaat tgttgtgatg atgattattt caataaaaag
13921 gactggtatg attttgtaga aaacccagat atattacgcg tatacgccaa cttaggtgaa
13981 cgtgtacgcc aagctttgtt aaaaacagta caattctgtg atgccatgcg aaatgctggt
14041 attgttggtg tactgacatt agataatcaa gatctcaatg gtaactggta tgatttcggt
14101 gatttcatac aaaccacgcc aggtagtgga gttcctgttg tagattctta ttattcattg
14161 ttaatgccta tattaacctt gaccagggct ttaactgcag agtcacatgt tgacactgac
14221 ttaacaaagc cttacattaa gtgggatttg ttaaaatatg acttcacgga agagaggtta
14281 aaactctttg accgttattt taaatattgg gatcagacat accacccaaa ttgtgttaac
14341 tgtttggatg acagatgcat tctgcattgt gcaaacttta atgttttatt ctctacagtg
14401 ttcccaccta caagttttgg accactagtg agaaaaatat ttgttgatgg tgttccattt
14461 gtagtttcaa ctggatacca cttcagagag ctaggtgttg tacataatca ggatgtaaac
14521 ttacatagct ctagacttag ttttaaggaa ttacttgtgt atgctgctga ccctgctatg
14581 cacgctgctt ctggtaatct attactagat aaacgcacta cgtgcttttc agtagctgca
14641 cttactaaca atgttgcttt tcaaactgtc aaacccggta attttaacaa agacttctat
14701 gactttgctg tgtctaaggg tttctttaag gaaggaagtt ctgttgaatt aaaacacttc
14761 ttctttgctc aggatggtaa tgctgctatc agcgattatg actactatcg ttataatcta
14821 ccaacaatgt gtgatatcag acaactacta tttgtagttg aagttgttga taagtacttt
14881 gattgttacg atggtggctg tattaatgct aaccaagtca tcgtcaacaa cctagacaaa
14941 tcagctggtt ttccatttaa taaatggggt aaggctagac tttattatga ttcaatgagt
15001 tatgaggatc aagatgcact tttcgcatat acaaaacgta atgtcatccc tactataact
15061 caaatgaatc ttaagtatgc cattagtgca aagaatagag ctcgcaccgt agctggtgtc
15121 tctatctgta gtactatgac caatagacag tttcatcaaa aattattgaa atcaatagcc
15181 gccactagag gagctactgt agtaattgga acaagcaaat tctatggtgg ttggcacaac
15241 atgttaaaaa ctgtttatag tgatgtagaa aaccctcacc ttatgggttg ggattatcct
15301 aaatgtgata gagccatgcc taacatgctt agaattatgg cctcacttgt tcttgctcgc
15361 aaacatacaa cgtgttgtag cttgtcacac cgtttctata gattagctaa tgagtgtgct
15421 caagtattga gtgaaatggt catgtgtggc ggttcactat atgttaaacc aggtggaacc
15481 tcatcaggag atgccacaac tgcttatgct aatagtgttt ttaacatttg tcaagctgtc
15541 acggccaatg ttaatgcact tttatctact gatggtaaca aaattgccga taagtatgtc
15601 cgcaatttac aacacagact ttatgagtgt ctctatagaa atagagatgt tgacacagac
15661 tttgtgaatg agttttacgc atatttgcgt aaacatttct caatgatgat actctctgac
15721 gatgctgttg tgtgtttcaa tagcacttat gcatctcaag gtctagtggc tagcataaag
15781 aactttaagt cagttcttta ttatcaaaac aatgttttta tgtctgaagc aaaatgttgg
15841 actgagactg accttactaa aggacctcat gaattttgct ctcaacatac aatgctagtt
15901 aaacagggtg atgattatgt gtaccttcct tacccagatc catcaagaat cctaggggcc
15961 ggctgttttg tagatgatat cgtaaaaaca gatggtacac ttatgattga acggttcgtg
16021 tctttagcta tagatgctta cccacttact aaacatccta atcaggagta tgctgatgtc
16081 tttcatttgt acttacaata cataagaaag ctacatgatg agttaacagg acacatgtta
16141 gacatgtatt ctgttatgct tactaatgat aacacttcaa ggtattggga acctgagttt
16201 tatgaggcta tgtacacacc gcatacagtc ttacaggctg ttggggcttg tgttctttgc
16261 aattcacaga cttcattaag atgtggtgct tgcatacgta gaccattctt atgttgtaaa
16321 tgctgttacg accatgtcat atcaacatca cataaattag tcttgtctgt taatccgtat
16381 gtttgcaatg ctccaggttg tgatgtcaca gatgtgactc aactttactt aggaggtatg
16441 agctattatt gtaaatcaca taaaccaccc attagttttc cattgtgtgc taatggacaa
16501 gtttttggtt tatataaaaa tacatgtgtt ggtagcgata atgttactga ctttaatgca
16561 attgcaacat gtgactggac aaatgctggt gattacattt tagctaacac ctgtactgaa
16621 agactcaagc tttttgcagc agaaacgctc aaagctactg aggagacatt taaactgtct
16681 tatggtattg ctactgtacg tgaagtgctg tctgacagag aattacatct ttcatgggaa
16741 gttggtaaac ctagaccacc acttaaccga aattatgtct ttactggtta tcgtgtaact
16801 aaaaacagta aagtacaaat aggagagtac acctttgaaa aaggtgacta tggtgatgct
16861 gttgtttacc gaggtacaac aacttacaaa ttaaatgttg gtgattattt tgtgctgaca
16921 tcacatacag taatgccatt aagtgcacct acactagtgc cacaagagca ctatgttaga
16981 attactggct tatacccaac actcaatatc tcagatgagt tttctagcaa tgttgcaaat
17041 tatcaaaagg ttggtatgca aaagtattct acactccagg gaccacctgg tactggtaag
17101 agtcattttg ctattggcct agctctctac tacccttctg ctcgcatagt gtatacagct
17161 tgctctcatg ccgctgttga tgcactatgt gagaaggcat taaaatattt gcctatagat
17221 aaatgtagta gaattatacc tgcacgtgct cgtgtagagt gttttgataa attcaaagtg
17281 aattcaacat tagaacagta tgtcttttgt actgtaaatg cattgcctga gacgacagca
17341 gatatagttg tctttgatga aatttcaatg gccacaaatt atgatttgag tgttgtcaat
17401 gccagattac gtgctaagca ctatgtgtac attggcgacc ctgctcaatt acctgcacca
17461 cgcacattgc taactaaggg cacactagaa ccagaatatt tcaattcagt gtgtagactt
17521 atgaaaacta taggtccaga catgttcctc ggaacttgtc ggcgttgtcc tgctgaaatt
17581 gttgacactg tgagtgcttt ggtttatgat aataagctta aagcacataa agacaaatca
17641 gctcaatgct ttaaaatgtt ttataagggt gttatcacgc atgatgtttc atctgcaatt
17701 aacaggccac aaataggcgt ggtaagagaa ttccttacac gtaaccctgc ttggagaaaa
17761 gctgtcttta tttcacctta taattcacag aatgctgtag cctcaaagat tttgggacta
17821 ccaactcaaa ctgttgattc atcacagggc tcagaatatg actatgtcat attcactcaa
17881 accactgaaa cagctcactc ttgtaatgta aacagattta atgttgctat taccagagca
17941 aaagtaggca tactttgcat aatgtctgat agagaccttt atgacaagtt gcaatttaca
18001 agtcttgaaa ttccacgtag gaatgtggca actttacaag ctgaaaatgt aacaggactc
18061 tttaaagatt gtagtaaggt aatcactggg ttacatccta cacaggcacc tacacacctc
18121 agtgttgaca ctaaattcaa aactgaaggt ttatgtgttg acatacctgg catacctaag
18181 gacatgacct atagaagact catctctatg atgggtttta aaatgaatta tcaagttaat
18241 ggttacccta acatgtttat cacccgcgaa gaagctataa gacatgtacg tgcatggatt
18301 ggcttcgatg tcgaggggtg tcatgctact agagaagctg ttggtaccaa tttaccttta
18361 cagctaggtt tttctacagg tgttaaccta gttgctgtac ctacaggtta tgttgataca
18421 cctaataata cagatttttc cagagttagt gctaaaccac cgcctggaga tcaatttaaa
18481 cacctcatac cacttatgta caaaggactt ccttggaatg tagtgcgtat aaagattgta
18541 caaatgttaa gtgacacact taaaaatctc tctgacagag tcgtatttgt cttatgggca
18601 catggctttg agttgacatc tatgaagtat tttgtgaaaa taggacctga gcgcacctgt
18661 tgtctatgtg atagacgtgc cacatgcttt tccactgctt cagacactta tgcctgttgg
18721 catcattcta ttggatttga ttacgtctat aatccgttta tgattgatgt tcaacaatgg
18781 ggttttacag gtaacctaca aagcaaccat gatctgtatt gtcaagtcca tggtaatgca
18841 catgtagcta gttgtgatgc aatcatgact aggtgtctag ctgtccacga gtgctttgtt
18901 aagcgtgttg actggactat tgaatatcct ataattggtg atgaactgaa gattaatgcg
18961 gcttgtagaa aggttcaaca catggttgtt aaagctgcat tattagcaga caaattccca
19021 gttcttcacg acattggtaa ccctaaagct attaagtgtg tacctcaagc tgatgtagaa
19081 tggaagttct atgatgcaca gccttgtagt gacaaagctt ataaaataga agaattattc
19141 tattcttatg ccacacattc tgacaaattc acagatggtg tatgcctatt ttggaattgc
19201 aatgtcgata gatatcctgc taattccatt gtttgtagat ttgacactag agtgctatct
19261 aaccttaact tgcctggttg tgatggtggc agtttgtatg taaataaaca tgcattccac
19321 acaccagctt ttgataaaag tgcttttgtt aatttaaaac aattaccatt tttctattac
19381 tctgacagtc catgtgagtc tcatggaaaa caagtagtgt cagatataga ttatgtacca
19441 ctaaagtctg ctacgtgtat aacacgttgc aatttaggtg gtgctgtctg tagacatcat
19501 gctaatgagt acagattgta tctcgatgct tataacatga tgatctcagc tggctttagc
19561 ttgtgggttt acaaacaatt tgatacttat aacctctgga acacttttac aagacttcag
19621 agtttagaaa atgtggcttt taatgttgta aataagggac actttgatgg acaacagggt
19681 gaagtaccag tttctatcat taataacact gtttacacaa aagttgatgg tgttgatgta
19741 gaattgtttg aaaataaaac aacattacct gttaatgtag catttgagct ttgggctaag
19801 cgcaacatta aaccagtacc agaggtgaaa atactcaata atttgggtgt ggacattgct
19861 gctaatactg tgatctggga ctacaaaaga gatgctccag cacatatatc tactattggt
19921 gtttgttcta tgactgacat agccaagaaa ccaactgaaa cgatttgtgc accactcact
19981 gtcttttttg atggtagagt tgatggtcaa gtagacttat ttagaaatgc ccgtaatggt
20041 gttcttatta cagaaggtag tgttaaaggt ttacaaccat ctgtaggtcc caaacaagct
20101 agtcttaatg gagtcacatt aattggagaa gccgtaaaaa cacagttcaa ttattataag
20161 aaagttgatg gtgttgtcca acaattacct gaaacttact ttactcagag tagaaattta
20221 caagaattta aacccaggag tcaaatggaa attgatttct tagaattagc tatggatgaa
20281 ttcattgaac ggtataaatt agaaggctat gccttcgaac atatcgttta tggagatttt
20341 agtcatagtc agttaggtgg tttacatcta ctgattggac tagctaaacg ttttaaggaa
20401 tcaccttttg aattagaaga ttttattcct atggacagta cagttaaaaa ctatttcata
20461 acagatgcgc aaacaggttc atctaagtgt gtgtgttctg ttattgattt attacttgat
20521 gattttgttg aaataataaa atcccaagat ttatctgtag tttctaaggt tgtcaaagtg
20581 actattgact atacagaaat ttcatttatg ctttggtgta aagatggcca tgtagaaaca
20641 ttttacccaa aattacaatc tagtcaagcg tggcaaccgg gtgttgctat gcctaatctt
20701 tacaaaatgc aaagaatgct attagaaaag tgtgaccttc aaaattatgg tgatagtgca
20761 acattaccta aaggcataat gatgaatgtc gcaaaatata ctcaactgtg tcaatattta
20821 aacacattaa cattagctgt accctataat atgagagtta tacattttgg tgctggttct
20881 gataaaggag ttgcaccagg tacagctgtt ttaagacagt ggttgcctac gggtacgctg
20941 cttgtcgatt cagatcttaa tgactttgtc tctgatgcag attcaacttt gattggtgat
21001 tgtgcaactg tacatacagc taataaatgg gatctcatta ttagtgatat gtacgaccct
21061 aagactaaaa atgttacaaa agaaaatgac tctaaagagg gttttttcac ttacatttgt
21121 gggtttatac aacaaaagct agctcttgga ggttccgtgg ctataaagat aacagaacat
21181 tcttggaatg ctgatcttta taagctcatg ggacacttcg catggtggac agcctttgtt
21241 actaatgtga atgcgtcatc atctgaagca tttttaattg gatgtaatta tcttggcaaa
21301 ccacgcgaac aaatagatgg ttatgtcatg catgcaaatt acatattttg gaggaataca
21361 aatccaattc agttgtcttc ctattcttta tttgacatga gtaaatttcc ccttaaatta
21421 aggggtactg ctgttatgtc tttaaaagaa ggtcaaatca atgatatgat tttatctctt
21481 cttagtaaag gtagacttat aattagagaa aacaacagag ttgttatttc tagtgatgtt
21541 cttgttaaca actaaacgaa caatgtttgt ttttcttgtt ttattgccac tagtctctag
21601 tcagtgtgtt aatcttacaa ccagaactca attaccccct gcatacacta attctttcac
21661 acgtggtgtt tattaccctg acaaagtttt cagatcctca gttttacatt caactcagga
21721 cttgttctta cctttctttt ccaatgttac ttggttccat gctatacatg tctctgggac
21781 caatggtact aagaggtttg ataaccctgt cctaccattt aatgatggtg tttattttgc
21841 ttccactgag aagtctaaca taataagagg ctggattttt ggtactactt tagattcgaa
21901 gacccagtcc ctacttattg ttaataacgc tactaatgtt gttattaaag tctgtgaatt
21961 tcaattttgt aatgatccat ttttgggtgt ttattaccac aaaaacaaca aaagttggat
22021 ggaaagtgag ttcagagttt attctagtgc gaataattgc acttttgaat atgtctctca
22081 gccttttctt atggaccttg aaggaaaaca gggtaatttc aaaaatctta gggaatttgt
22141 gtttaagaat attgatggtt attttaaaat atattctaag cacacgccta ttaatttagt
22201 gcgtgatctc cctcagggtt tttcggcttt agaaccattg gtagatttgc caataggtat
22261 taacatcact aggtttcaaa ctttacttgc tttacataga agttatttga ctcctggtga
22321 ttcttcttca ggttggacag ctggtgctgc agcttattat gtgggttatc ttcaacctag
22381 gacttttcta ttaaaatata atgaaaatgg aaccattaca gatgctgtag actgtgcact
22441 tgaccctctc tcagaaacaa agtgtacgtt gaaatccttc actgtagaaa aaggaatcta
22501 tcaaacttct aactttagag tccaaccaac agaatctatt gttagatttc ctaatattac
22561 aaacttgtgc ccttttggtg aagtttttaa cgccaccaga tttgcatctg tttatgcttg
22621 gaacaggaag agaatcagca actgtgttgc tgattattct gtcctatata attccgcatc
22681 attttccact tttaagtgtt atggagtgtc tcctactaaa ttaaatgatc tctgctttac
22741 taatgtctat gcagattcat ttgtaattag aggtgatgaa gtcagacaaa tcgctccagg
22801 gcaaactgga aagattgctg attataatta taaattacca gatgatttta caggctgcgt
22861 tatagcttgg aattctaaca atcttgattc taaggttggt ggtaattata attacctgta
22921 tagattgttt aggaagtcta atctcaaacc ttttgagaga gatatttcaa ctgaaatcta
22981 tcaggccggt agcacacctt gtaatggtgt tgaaggtttt aattgttact ttcctttaca
23041 atcatatggt ttccaaccca ctaatggtgt tggttaccaa ccatacagag tagtagtact
23101 ttcttttgaa cttctacatg caccagcaac tgtttgtgga cctaaaaagt ctactaattt
23161 ggttaaaaac aaatgtgtca atttcaactt caatggttta acaggcacag gtgttcttac
23221 tgagtctaac aaaaagtttc tgcctttcca acaatttggc agagacattg ctgacactac
23281 tgatgctgtc cgtgatccac agacacttga gattcttgac attacaccat gttcttttgg
23341 tggtgtcagt gttataacac caggaacaaa tacttctaac caggttgctg ttctttatca
23401 ggatgttaac tgcacagaag tccctgttgc tattcatgca gatcaactta ctcctacttg
23461 gcgtgtttat tctacaggtt ctaatgtttt tcaaacacgt gcaggctgtt taataggggc
23521 tgaacatgtc aacaactcat atgagtgtga catacccatt ggtgcaggta tatgcgctag
23581 ttatcagact cagactaatt ctcctcggcg ggcacgtagt gtagctagtc aatccatcat
23641 tgcctacact atgtcacttg gtgcagaaaa ttcagttgct tactctaata actctattgc
23701 catacccaca aattttacta ttagtgttac cacagaaatt ctaccagtgt ctatgaccaa
23761 gacatcagta gattgtacaa tgtacatttg tggtgattca actgaatgca gcaatctttt
23821 gttgcaatat ggcagttttt gtacacaatt aaaccgtgct ttaactggaa tagctgttga
23881 acaagacaaa aacacccaag aagtttttgc acaagtcaaa caaatttaca aaacaccacc
23941 aattaaagat tttggtggtt ttaatttttc acaaatatta ccagatccat caaaaccaag
24001 caagaggtca tttattgaag atctactttt caacaaagtg acacttgcag atgctggctt
24061 catcaaacaa tatggtgatt gccttggtga tattgctgct agagacctca tttgtgcaca
24121 aaagtttaac ggccttactg ttttgccacc tttgctcaca gatgaaatga ttgctcaata
24181 cacttctgca ctgttagcgg gtacaatcac ttctggttgg acctttggtg caggtgctgc
24241 attacaaata ccatttgcta tgcaaatggc ttataggttt aatggtattg gagttacaca
24301 gaatgttctc tatgagaacc aaaaattgat tgccaaccaa tttaatagtg ctattggcaa
24361 aattcaagac tcactttctt ccacagcaag tgcacttgga aaacttcaag atgtggtcaa
24421 ccaaaatgca caagctttaa acacgcttgt taaacaactt agctccaatt ttggtgcaat
24481 ttcaagtgtt ttaaatgata tcctttcacg tcttgacaaa gttgaggctg aagtgcaaat
24541 tgataggttg atcacaggca gacttcaaag tttgcagaca tatgtgactc aacaattaat
24601 tagagctgca gaaatcagag cttctgctaa tcttgctgct actaaaatgt cagagtgtgt
24661 acttggacaa tcaaaaagag ttgatttttg tggaaagggc tatcatctta tgtccttccc
24721 tcagtcagca cctcatggtg tagtcttctt gcatgtgact tatgtccctg cacaagaaaa
24781 gaacttcaca actgctcctg ccatttgtca tgatggaaaa gcacactttc ctcgtgaagg
24841 tgtctttgtt tcaaatggca cacactggtt tgtaacacaa aggaattttt atgaaccaca
24901 aatcattact acagacaaca catttgtgtc tggtaactgt gatgttgtaa taggaattgt
24961 caacaacaca gtttatgatc ctttgcaacc tgaattagac tcattcaagg aggagttaga
25021 taaatatttt aagaatcata catcaccaga tgttgattta ggtgacatct ctggcattaa
25081 tgcttcagtt gtaaacattc aaaaagaaat tgaccgcctc aatgaggttg ccaagaattt
25141 aaatgaatct ctcatcgatc tccaagaact tggaaagtat gagcagtata taaaatggcc
25201 atggtacatt tggctaggtt ttatagctgg cttgattgcc atagtaatgg tgacaattat
25261 gctttgctgt atgaccagtt gctgtagttg tctcaagggc tgttgttctt gtggatcctg
25321 ctgcaaattt gatgaagacg actctgagcc agtgctcaaa ggagtcaaat tacattacac
25381 ataaacgaac ttatggattt gtttatgaga atcttcacaa ttggaactgt aactttgaag
25441 caaggtgaaa tcaaggatgc tactccttca gattttgttc gcgctactgc aacgataccg
25501 atacaagcct cactcccttt cggatggctt attgttggcg ttgcacttct tgctgttttt
25561 cagagcgctt ccaaaatcat aaccctcaaa aagagatggc aactagcact ctccaagggt
25621 gttcactttg tttgcaactt gctgttgttg tttgtaacag tttactcaca ccttttgctc
25681 gttgctgctg gccttgaagc cccttttctc tatctttatg ctttagtcta cttcttgcag
25741 agtataaact ttgtaagaat aataatgagg ctttggcttt gctggaaatg ccgttccaaa
25801 aacccattac tttatgatgc caactatttt ctttgctggc atactaattg ttacgactat
25861 tgtatacctt acaatagtgt aacttcttca attgtcatta cttcaggtga tggcacaaca
25921 agtcctattt ctgaacatga ctaccagatt ggtggttata ctgaaaaatg ggaatctgga
25981 gtaaaagact gtgttgtatt acacagttac ttcacttcag actattacca gctgtactca
26041 actcaattga gtacagacac tggtgttgaa catgttacct tcttcatcta caataaaatt
26101 gttgatgagc ctgaagaaca tgtccaaatt cacacaatcg acggttcatc cggagttgtt
26161 aatccagtaa tggaaccaat ttatgatgaa ccgacgacga ctactagcgt gcctttgtaa
26221 gcacaagctg atgagtacga acttatgtac tcattcgttt cggaagagac aggtacgtta
26281 atagttaata gcgtacttct ttttcttgct ttcgtggtat tcttgctagt tacactagcc
26341 atccttactg cgcttcgatt gtgtgcgtac tgctgcaata ttgttaacgt gagtcttgta
26401 aaaccttctt tttacgttta ctctcgtgtt aaaaatctga attcttctag agttcctgat
26461 cttctggtct aaacgaacta aatattatat tagtttttct gtttggaact ttaattttag
26521 ccatggcaga ttccaacggt actattaccg ttgaagagct taaaaagctc cttgaacaat
26581 ggaacctagt aataggtttc ctattcctta catggatttg tcttctacaa tttgcctatg
26641 ccaacaggaa taggtttttg tatataatta agttaatttt cctctggctg ttatggccag
26701 taactttagc ttgttttgtg cttgctgctg tttacagaat aaattggatc accggtggaa
26761 ttgctatcgc aatggcttgt cttgtaggct tgatgtggct cagctacttc attgcttctt
26821 tcagactgtt tgcgcgtacg cgttccatgt ggtcattcaa tccagaaact aacattcttc
26881 tcaacgtgcc actccatggc actattctga ccagaccgct tctagaaagt gaactcgtaa
26941 tcggagctgt gatccttcgt ggacatcttc gtattgctgg acaccatcta ggacgctgtg
27001 acatcaagga cctgcctaaa gaaatcactg ttgctacatc acgaacgctt tcttattaca
27061 aattgggagc ttcgcagcgt gtagcaggtg actcaggttt tgctgcatac agtcgctaca
27121 ggattggcaa ctataaatta aacacagacc attccagtag cagtgacaat attgctttgc
27181 ttgtacagta agtgacaaca gatgtttcat ctcgttgact ttcaggttac tatagcagag
27241 atattactaa ttattatgag gacttttaaa gtttccattt ggaatcttga ttacatcata
27301 aacctcataa ttaaaaattt atctaagtca ctaactgaga ataaatattc tcaattagat
27361 gaagagcaac caatggagat tgattaaacg aacatgaaaa ttattctttt cttggcactg
27421 ataacactcg ctacttgtga gctttatcac taccaagagt gtgttagagg tacaacagta
27481 cttttaaaag aaccttgctc ttctggaaca tacgagggca attcaccatt tcatcctcta
27541 gctgataaca aatttgcact gacttgcttt agcactcaat ttgcttttgc ttgtcctgac
27601 ggcgtaaaac acgtctatca gttacgtgcc agatcagttt cacctaaact gttcatcaga
27661 caagaggaag ttcaagaact ttactctcca atttttctta ttgttgcggc aatagtgttt
27721 ataacacttt gcttcacact caaaagaaag acagaatgat tgaactttca ttaattgact
27781 tctatttgtg ctttttagcc tttctgctat tccttgtttt aattatgctt attatctttt
27841 ggttctcact tgaactgcaa gatcataatg aaacttgtca cgcctaaacg aacatgaaat
27901 ttcttgtttt cttaggaatc atcacaactg tagctgcatt tcaccaagaa tgtagtttac
27961 agtcatgtac tcaacatcaa ccatatgtag ttgatgaccc gtgtcctatt cacttctatt
28021 ctaaatggta tattagagta ggagctagaa aatcagcacc tttaattgaa ttgtgcgtgg
28081 atgaggctgg ttctaaatca cccattcagt acatcgatat cggtaattat acagtttcct
28141 gtttaccttt tacaattaat tgccaggaac ctaaattggg tagtcttgta gtgcgttgtt
28201 cgttctatga agacttttta gagtatcatg acgttcgtgt tgttttagat ttcatctaaa
28261 cgaacaaact aaaatgtctg ataatggacc ccaaaatcag cgaaatgcac cccgcattac
28321 gtttggtgga ccctcagatt caactggcag taaccagaat ggagaacgca gtggggcgcg
28381 atcaaaacaa cgtcggcccc aaggtttacc caataatact gcgtcttggt tcaccgctct
28441 cactcaacat ggcaaggaag accttaaatt ccctcgagga caaggcgttc caattaacac
28501 caatagcagt ccagatgacc aaattggcta ctaccgaaga gctaccagac gaattcgtgg
28561 tggtgacggt aaaatgaaag atctcagtcc aagatggtat ttctactacc taggaactgg
28621 gccagaagct ggacttccct atggtgctaa caaagacggc atcatatggg ttgcaactga
28681 gggagccttg aatacaccaa aagatcacat tggcacccgc aatcctgcta acaatgctgc
28741 aatcgtgcta caacttcctc aaggaacaac attgccaaaa ggcttctacg cagaagggag
28801 cagaggcggc agtcaagcct cttctcgttc ctcatcacgt agtcgcaaca gttcaagaaa
28861 ttcaactcca ggcagcagta ggggaacttc tcctgctaga atggctggca atggcggtga
28921 tgctgctctt gctttgctgc tgcttgacag attgaaccag cttgagagca aaatgtctgg
28981 taaaggccaa caacaacaag gccaaactgt cactaagaaa tctgctgctg aggcttctaa
29041 gaagcctcgg caaaaacgta ctgccactaa agcatacaat gtaacacaag ctttcggcag
29101 acgtggtcca gaacaaaccc aaggaaattt tggggaccag gaactaatca gacaaggaac
29161 tgattacaaa cattggccgc aaattgcaca atttgccccc agcgcttcag cgttcttcgg
29221 aatgtcgcgc attggcatgg aagtcacacc ttcgggaacg tggttgacct acacaggtgc
29281 catcaaattg gatgacaaag atccaaattt caaagatcaa gtcattttgc tgaataagca
29341 tattgacgca tacaaaacat tcccaccaac agagcctaaa aaggacaaaa agaagaaggc
29401 tgatgaaact caagccttac cgcagagaca gaagaaacag caaactgtga ctcttcttcc
29461 tgctgcagat ttggatgatt tctccaaaca attgcaacaa tccatgagca gtgctgactc
29521 aactcaggcc taaactcatg cagaccacac aaggcagatg ggctatataa acgttttcgc
29581 ttttccgttt acgatatata gtctactctt gtgcagaatg aattctcgta actacatagc
29641 acaagtagat gtagttaact ttaatctcac atagcaatct ttaatcagtg tgtaacatta
29701 gggaggactt gaaagagcca ccacattttc accgaggcca cgcggagtac gatcgagtgt
29761 acagtgaaca atgctaggga gagctgccta tatggaagag ccctaatgtg taaaattaat
29821 tttagtagtg ctatccccat gtgattttaa tagcttctta ggagaatgac aaaaaaaaaa
29881 aaaaaaaaaa aaaaaaaaaa aaa
""".strip()
for s in " \n0123456789":
cc = cc.replace(s, "")
# This is raw nucleotides
# Asn or Asp / B AAU, AAC; GAU, GAC
# Gln or Glu / Z CAA, CAG; GAA, GAG
# START AUG
tt = """Ala / A GCU, GCC, GCA, GCG
Ile / I AUU, AUC, AUA
Arg / R CGU, CGC, CGA, CGG; AGA, AGG
Leu / L CUU, CUC, CUA, CUG; UUA, UUG
Asn / N AAU, AAC
Lys / K AAA, AAG
Asp / D GAU, GAC
Met / M AUG
Phe / F UUU, UUC
Cys / C UGU, UGC
Pro / P CCU, CCC, CCA, CCG
Gln / Q CAA, CAG
Ser / S UCU, UCC, UCA, UCG; AGU, AGC
Glu / E GAA, GAG
Thr / T ACU, ACC, ACA, ACG
Trp / W UGG
Gly / G GGU, GGC, GGA, GGG
Tyr / Y UAU, UAC
His / H CAU, CAC
Val / V GUU, GUC, GUA, GUG
STOP UAA, UGA, UAG
""".strip()
dec = {}
for t in tt.split("\n"):
k = t[:len("Val / V")].strip()
v = t[len("Val / V "):]
if '/' in k:
k = k.split("/")[-1].strip()
k = k.replace("STOP", "*")
v = v.replace(",", "").replace(";", "").lower().replace("u", "t").split(" ")
for vv in v:
if vv in dec:
print("dup", vv)
dec[vv.strip()] = k
# the protein conversions
def translate(x, protein=False):
x = x.lower()
aa = []
for i in range(0, len(x)-2, 3):
aa.append(dec[x[i:i+3]])
aa= ''.join(aa)
if(protein):
if aa[0]!= "M" or aa[-1] != "*":
print("BAD PROTEIN")
print(aa)
return None
aa = aa[:-1]
return aa
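# Quick illustration of translate() on a toy fragment (assuming the codon table above parsed as
# intended): "atg" maps to M (methionine, the start codon) and "gct" to A (alanine), so
# translate("atggct") returns "MA". With protein=True the result must additionally start with M
# and end with a stop ("*"), which is stripped before returning.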
corona = {}
corona['untranslated_region'] = cc[0:265]
corona['orf1a'] = translate(cc[266-1:13483], True)
corona['orf1b'] = translate(cc[13468-1:21555], False).strip("*") # chop off the stop, note this doesn't have a start
corona['spike_glycoprotein'] = translate(cc[21563-1:25384], True)
corona['orf3a'] = translate(cc[25393-1:26220], True)
corona['envelope_protein'] = translate(cc[26245-1:26472], True) # also known as small membrane
corona['membrane_glycoprotein'] = translate(cc[26523-1:27191], True)
corona['orf6'] = translate(cc[27202-1:27387], True)
corona['orf7a'] = translate(cc[27394-1:27759], True)
corona['orf7b'] = translate(cc[27756-1:27887], True) # is this one real?
corona['orf8'] = translate(cc[27894-1:28259], True)
corona['nucleocapsid_phosphoprotein'] = translate(cc[28274-1:29533], True)
corona['orf10'] = translate(cc[29558-1:29674], True)
print(corona)
orf6 = corona['orf6']
# Note: orf6 is just the translated amino-acid string. nv.show_mdanalysis expects an MDAnalysis
# Universe/AtomGroup (e.g. loaded from a PDB structure of ORF6), so a real structure would have
# to be loaded here for the viewer to render anything.
view = nv.show_mdanalysis(orf6)
view.add_unitcell()
# orient and zoom the camera before displaying
view.control.rotate(
    mda.lib.transformations.quaternion_from_euler(
        -np.pi/2, np.pi/2, np.pi/6, 'rzyz').tolist())
view.control.zoom(-0.3)
view.show()
###Output
_____no_output_____
###Markdown
Corona Genome Analysis Let's start by retrieving the complete genome of the coronavirus. The records are extracted from the Wuhan region. Source: https://www.ncbi.nlm.nih.gov/nuccore/NC_045512
> Orthocoronavirinae, in the family Coronaviridae, order Nidovirales, and realm Riboviria. They are enveloped viruses with a positive-sense single-stranded RNA genome and a nucleocapsid of helical symmetry. This is wrapped in an icosahedral protein shell. The genome size of coronaviruses ranges from approximately 26 to 32 kilobases, one of the largest among RNA viruses. They have characteristic club-shaped spikes that project from their surface, which in electron micrographs create an image reminiscent of the solar corona, from which their name derives.

> **Basic Information:** Coronavirus is a single-stranded RNA virus (DNA is double-stranded). RNA polymers are made up of nucleotides. These nucleotides have three parts: 1) a five-carbon ribose sugar, 2) a phosphate molecule, and 3) one of four nitrogenous bases: adenine (a), guanine (g), cytosine (c), or uracil (u) / thymine (t).
> Thymine is found in DNA and uracil in RNA, but for the following analysis you can consider (u) and (t) to be analogous.
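As a warm-up before handling the full genome string below, here is a minimal sketch (on a tiny made-up fragment, not the real sequence) of the u -> t normalization described above plus a base-composition count:

```python
from collections import Counter

toy_rna = "auggcuuaa"                # hypothetical short RNA fragment
toy_dna = toy_rna.replace("u", "t")  # treat uracil as thymine, as noted above
print(toy_dna)                       # atggcttaa
print(Counter(toy_dna))              # frequency of each nucleotide base
```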
###Code
corona = """
1 attaaaggtt tataccttcc caggtaacaa accaaccaac tttcgatctc ttgtagatct
61 gttctctaaa cgaactttaa aatctgtgtg gctgtcactc ggctgcatgc ttagtgcact
121 cacgcagtat aattaataac taattactgt cgttgacagg acacgagtaa ctcgtctatc
181 ttctgcaggc tgcttacggt ttcgtccgtg ttgcagccga tcatcagcac atctaggttt
241 cgtccgggtg tgaccgaaag gtaagatgga gagccttgtc cctggtttca acgagaaaac
301 acacgtccaa ctcagtttgc ctgttttaca ggttcgcgac gtgctcgtac gtggctttgg
361 agactccgtg gaggaggtct tatcagaggc acgtcaacat cttaaagatg gcacttgtgg
421 cttagtagaa gttgaaaaag gcgttttgcc tcaacttgaa cagccctatg tgttcatcaa
481 acgttcggat gctcgaactg cacctcatgg tcatgttatg gttgagctgg tagcagaact
541 cgaaggcatt cagtacggtc gtagtggtga gacacttggt gtccttgtcc ctcatgtggg
601 cgaaatacca gtggcttacc gcaaggttct tcttcgtaag aacggtaata aaggagctgg
661 tggccatagt tacggcgccg atctaaagtc atttgactta ggcgacgagc ttggcactga
721 tccttatgaa gattttcaag aaaactggaa cactaaacat agcagtggtg ttacccgtga
781 actcatgcgt gagcttaacg gaggggcata cactcgctat gtcgataaca acttctgtgg
841 ccctgatggc taccctcttg agtgcattaa agaccttcta gcacgtgctg gtaaagcttc
901 atgcactttg tccgaacaac tggactttat tgacactaag aggggtgtat actgctgccg
961 tgaacatgag catgaaattg cttggtacac ggaacgttct gaaaagagct atgaattgca
1021 gacacctttt gaaattaaat tggcaaagaa atttgacacc ttcaatgggg aatgtccaaa
1081 ttttgtattt cccttaaatt ccataatcaa gactattcaa ccaagggttg aaaagaaaaa
1141 gcttgatggc tttatgggta gaattcgatc tgtctatcca gttgcgtcac caaatgaatg
1201 caaccaaatg tgcctttcaa ctctcatgaa gtgtgatcat tgtggtgaaa cttcatggca
1261 gacgggcgat tttgttaaag ccacttgcga attttgtggc actgagaatt tgactaaaga
1321 aggtgccact acttgtggtt acttacccca aaatgctgtt gttaaaattt attgtccagc
1381 atgtcacaat tcagaagtag gacctgagca tagtcttgcc gaataccata atgaatctgg
1441 cttgaaaacc attcttcgta agggtggtcg cactattgcc tttggaggct gtgtgttctc
1501 ttatgttggt tgccataaca agtgtgccta ttgggttcca cgtgctagcg ctaacatagg
1561 ttgtaaccat acaggtgttg ttggagaagg ttccgaaggt cttaatgaca accttcttga
1621 aatactccaa aaagagaaag tcaacatcaa tattgttggt gactttaaac ttaatgaaga
1681 gatcgccatt attttggcat ctttttctgc ttccacaagt gcttttgtgg aaactgtgaa
1741 aggtttggat tataaagcat tcaaacaaat tgttgaatcc tgtggtaatt ttaaagttac
1801 aaaaggaaaa gctaaaaaag gtgcctggaa tattggtgaa cagaaatcaa tactgagtcc
1861 tctttatgca tttgcatcag aggctgctcg tgttgtacga tcaattttct cccgcactct
1921 tgaaactgct caaaattctg tgcgtgtttt acagaaggcc gctataacaa tactagatgg
1981 aatttcacag tattcactga gactcattga tgctatgatg ttcacatctg atttggctac
2041 taacaatcta gttgtaatgg cctacattac aggtggtgtt gttcagttga cttcgcagtg
2101 gctaactaac atctttggca ctgtttatga aaaactcaaa cccgtccttg attggcttga
2161 agagaagttt aaggaaggtg tagagtttct tagagacggt tgggaaattg ttaaatttat
2221 ctcaacctgt gcttgtgaaa ttgtcggtgg acaaattgtc acctgtgcaa aggaaattaa
2281 ggagagtgtt cagacattct ttaagcttgt aaataaattt ttggctttgt gtgctgactc
2341 tatcattatt ggtggagcta aacttaaagc cttgaattta ggtgaaacat ttgtcacgca
2401 ctcaaaggga ttgtacagaa agtgtgttaa atccagagaa gaaactggcc tactcatgcc
2461 tctaaaagcc ccaaaagaaa ttatcttctt agagggagaa acacttccca cagaagtgtt
2521 aacagaggaa gttgtcttga aaactggtga tttacaacca ttagaacaac ctactagtga
2581 agctgttgaa gctccattgg ttggtacacc agtttgtatt aacgggctta tgttgctcga
2641 aatcaaagac acagaaaagt actgtgccct tgcacctaat atgatggtaa caaacaatac
2701 cttcacactc aaaggcggtg caccaacaaa ggttactttt ggtgatgaca ctgtgataga
2761 agtgcaaggt tacaagagtg tgaatatcac ttttgaactt gatgaaagga ttgataaagt
2821 acttaatgag aagtgctctg cctatacagt tgaactcggt acagaagtaa atgagttcgc
2881 ctgtgttgtg gcagatgctg tcataaaaac tttgcaacca gtatctgaat tacttacacc
2941 actgggcatt gatttagatg agtggagtat ggctacatac tacttatttg atgagtctgg
3001 tgagtttaaa ttggcttcac atatgtattg ttctttctac cctccagatg aggatgaaga
3061 agaaggtgat tgtgaagaag aagagtttga gccatcaact caatatgagt atggtactga
3121 agatgattac caaggtaaac ctttggaatt tggtgccact tctgctgctc ttcaacctga
3181 agaagagcaa gaagaagatt ggttagatga tgatagtcaa caaactgttg gtcaacaaga
3241 cggcagtgag gacaatcaga caactactat tcaaacaatt gttgaggttc aacctcaatt
3301 agagatggaa cttacaccag ttgttcagac tattgaagtg aatagtttta gtggttattt
3361 aaaacttact gacaatgtat acattaaaaa tgcagacatt gtggaagaag ctaaaaaggt
3421 aaaaccaaca gtggttgtta atgcagccaa tgtttacctt aaacatggag gaggtgttgc
3481 aggagcctta aataaggcta ctaacaatgc catgcaagtt gaatctgatg attacatagc
3541 tactaatgga ccacttaaag tgggtggtag ttgtgtttta agcggacaca atcttgctaa
3601 acactgtctt catgttgtcg gcccaaatgt taacaaaggt gaagacattc aacttcttaa
3661 gagtgcttat gaaaatttta atcagcacga agttctactt gcaccattat tatcagctgg
3721 tatttttggt gctgacccta tacattcttt aagagtttgt gtagatactg ttcgcacaaa
3781 tgtctactta gctgtctttg ataaaaatct ctatgacaaa cttgtttcaa gctttttgga
3841 aatgaagagt gaaaagcaag ttgaacaaaa gatcgctgag attcctaaag aggaagttaa
3901 gccatttata actgaaagta aaccttcagt tgaacagaga aaacaagatg ataagaaaat
3961 caaagcttgt gttgaagaag ttacaacaac tctggaagaa actaagttcc tcacagaaaa
4021 cttgttactt tatattgaca ttaatggcaa tcttcatcca gattctgcca ctcttgttag
4081 tgacattgac atcactttct taaagaaaga tgctccatat atagtgggtg atgttgttca
4141 agagggtgtt ttaactgctg tggttatacc tactaaaaag gctggtggca ctactgaaat
4201 gctagcgaaa gctttgagaa aagtgccaac agacaattat ataaccactt acccgggtca
4261 gggtttaaat ggttacactg tagaggaggc aaagacagtg cttaaaaagt gtaaaagtgc
4321 cttttacatt ctaccatcta ttatctctaa tgagaagcaa gaaattcttg gaactgtttc
4381 ttggaatttg cgagaaatgc ttgcacatgc agaagaaaca cgcaaattaa tgcctgtctg
4441 tgtggaaact aaagccatag tttcaactat acagcgtaaa tataagggta ttaaaataca
4501 agagggtgtg gttgattatg gtgctagatt ttacttttac accagtaaaa caactgtagc
4561 gtcacttatc aacacactta acgatctaaa tgaaactctt gttacaatgc cacttggcta
4621 tgtaacacat ggcttaaatt tggaagaagc tgctcggtat atgagatctc tcaaagtgcc
4681 agctacagtt tctgtttctt cacctgatgc tgttacagcg tataatggtt atcttacttc
4741 ttcttctaaa acacctgaag aacattttat tgaaaccatc tcacttgctg gttcctataa
4801 agattggtcc tattctggac aatctacaca actaggtata gaatttctta agagaggtga
4861 taaaagtgta tattacacta gtaatcctac cacattccac ctagatggtg aagttatcac
4921 ctttgacaat cttaagacac ttctttcttt gagagaagtg aggactatta aggtgtttac
4981 aacagtagac aacattaacc tccacacgca agttgtggac atgtcaatga catatggaca
5041 acagtttggt ccaacttatt tggatggagc tgatgttact aaaataaaac ctcataattc
5101 acatgaaggt aaaacatttt atgttttacc taatgatgac actctacgtg ttgaggcttt
5161 tgagtactac cacacaactg atcctagttt tctgggtagg tacatgtcag cattaaatca
5221 cactaaaaag tggaaatacc cacaagttaa tggtttaact tctattaaat gggcagataa
5281 caactgttat cttgccactg cattgttaac actccaacaa atagagttga agtttaatcc
5341 acctgctcta caagatgctt attacagagc aagggctggt gaagctgcta acttttgtgc
5401 acttatctta gcctactgta ataagacagt aggtgagtta ggtgatgtta gagaaacaat
5461 gagttacttg tttcaacatg ccaatttaga ttcttgcaaa agagtcttga acgtggtgtg
5521 taaaacttgt ggacaacagc agacaaccct taagggtgta gaagctgtta tgtacatggg
5581 cacactttct tatgaacaat ttaagaaagg tgttcagata ccttgtacgt gtggtaaaca
5641 agctacaaaa tatctagtac aacaggagtc accttttgtt atgatgtcag caccacctgc
5701 tcagtatgaa cttaagcatg gtacatttac ttgtgctagt gagtacactg gtaattacca
5761 gtgtggtcac tataaacata taacttctaa agaaactttg tattgcatag acggtgcttt
5821 acttacaaag tcctcagaat acaaaggtcc tattacggat gttttctaca aagaaaacag
5881 ttacacaaca accataaaac cagttactta taaattggat ggtgttgttt gtacagaaat
5941 tgaccctaag ttggacaatt attataagaa agacaattct tatttcacag agcaaccaat
6001 tgatcttgta ccaaaccaac catatccaaa cgcaagcttc gataatttta agtttgtatg
6061 tgataatatc aaatttgctg atgatttaaa ccagttaact ggttataaga aacctgcttc
6121 aagagagctt aaagttacat ttttccctga cttaaatggt gatgtggtgg ctattgatta
6181 taaacactac acaccctctt ttaagaaagg agctaaattg ttacataaac ctattgtttg
6241 gcatgttaac aatgcaacta ataaagccac gtataaacca aatacctggt gtatacgttg
6301 tctttggagc acaaaaccag ttgaaacatc aaattcgttt gatgtactga agtcagagga
6361 cgcgcaggga atggataatc ttgcctgcga agatctaaaa ccagtctctg aagaagtagt
6421 ggaaaatcct accatacaga aagacgttct tgagtgtaat gtgaaaacta ccgaagttgt
6481 aggagacatt atacttaaac cagcaaataa tagtttaaaa attacagaag aggttggcca
6541 cacagatcta atggctgctt atgtagacaa ttctagtctt actattaaga aacctaatga
6601 attatctaga gtattaggtt tgaaaaccct tgctactcat ggtttagctg ctgttaatag
6661 tgtcccttgg gatactatag ctaattatgc taagcctttt cttaacaaag ttgttagtac
6721 aactactaac atagttacac ggtgtttaaa ccgtgtttgt actaattata tgccttattt
6781 ctttacttta ttgctacaat tgtgtacttt tactagaagt acaaattcta gaattaaagc
6841 atctatgccg actactatag caaagaatac tgttaagagt gtcggtaaat tttgtctaga
6901 ggcttcattt aattatttga agtcacctaa tttttctaaa ctgataaata ttataatttg
6961 gtttttacta ttaagtgttt gcctaggttc tttaatctac tcaaccgctg ctttaggtgt
7021 tttaatgtct aatttaggca tgccttctta ctgtactggt tacagagaag gctatttgaa
7081 ctctactaat gtcactattg caacctactg tactggttct ataccttgta gtgtttgtct
7141 tagtggttta gattctttag acacctatcc ttctttagaa actatacaaa ttaccatttc
7201 atcttttaaa tgggatttaa ctgcttttgg cttagttgca gagtggtttt tggcatatat
7261 tcttttcact aggtttttct atgtacttgg attggctgca atcatgcaat tgtttttcag
7321 ctattttgca gtacatttta ttagtaattc ttggcttatg tggttaataa ttaatcttgt
7381 acaaatggcc ccgatttcag ctatggttag aatgtacatc ttctttgcat cattttatta
7441 tgtatggaaa agttatgtgc atgttgtaga cggttgtaat tcatcaactt gtatgatgtg
7501 ttacaaacgt aatagagcaa caagagtcga atgtacaact attgttaatg gtgttagaag
7561 gtccttttat gtctatgcta atggaggtaa aggcttttgc aaactacaca attggaattg
7621 tgttaattgt gatacattct gtgctggtag tacatttatt agtgatgaag ttgcgagaga
7681 cttgtcacta cagtttaaaa gaccaataaa tcctactgac cagtcttctt acatcgttga
7741 tagtgttaca gtgaagaatg gttccatcca tctttacttt gataaagctg gtcaaaagac
7801 ttatgaaaga cattctctct ctcattttgt taacttagac aacctgagag ctaataacac
7861 taaaggttca ttgcctatta atgttatagt ttttgatggt aaatcaaaat gtgaagaatc
7921 atctgcaaaa tcagcgtctg tttactacag tcagcttatg tgtcaaccta tactgttact
7981 agatcaggca ttagtgtctg atgttggtga tagtgcggaa gttgcagtta aaatgtttga
8041 tgcttacgtt aatacgtttt catcaacttt taacgtacca atggaaaaac tcaaaacact
8101 agttgcaact gcagaagctg aacttgcaaa gaatgtgtcc ttagacaatg tcttatctac
8161 ttttatttca gcagctcggc aagggtttgt tgattcagat gtagaaacta aagatgttgt
8221 tgaatgtctt aaattgtcac atcaatctga catagaagtt actggcgata gttgtaataa
8281 ctatatgctc acctataaca aagttgaaaa catgacaccc cgtgaccttg gtgcttgtat
8341 tgactgtagt gcgcgtcata ttaatgcgca ggtagcaaaa agtcacaaca ttgctttgat
8401 atggaacgtt aaagatttca tgtcattgtc tgaacaacta cgaaaacaaa tacgtagtgc
8461 tgctaaaaag aataacttac cttttaagtt gacatgtgca actactagac aagttgttaa
8521 tgttgtaaca acaaagatag cacttaaggg tggtaaaatt gttaataatt ggttgaagca
8581 gttaattaaa gttacacttg tgttcctttt tgttgctgct attttctatt taataacacc
8641 tgttcatgtc atgtctaaac atactgactt ttcaagtgaa atcataggat acaaggctat
8701 tgatggtggt gtcactcgtg acatagcatc tacagatact tgttttgcta acaaacatgc
8761 tgattttgac acatggttta gccagcgtgg tggtagttat actaatgaca aagcttgccc
8821 attgattgct gcagtcataa caagagaagt gggttttgtc gtgcctggtt tgcctggcac
8881 gatattacgc acaactaatg gtgacttttt gcatttctta cctagagttt ttagtgcagt
8941 tggtaacatc tgttacacac catcaaaact tatagagtac actgactttg caacatcagc
9001 ttgtgttttg gctgctgaat gtacaatttt taaagatgct tctggtaagc cagtaccata
9061 ttgttatgat accaatgtac tagaaggttc tgttgcttat gaaagtttac gccctgacac
9121 acgttatgtg ctcatggatg gctctattat tcaatttcct aacacctacc ttgaaggttc
9181 tgttagagtg gtaacaactt ttgattctga gtactgtagg cacggcactt gtgaaagatc
9241 agaagctggt gtttgtgtat ctactagtgg tagatgggta cttaacaatg attattacag
9301 atctttacca ggagttttct gtggtgtaga tgctgtaaat ttacttacta atatgtttac
9361 accactaatt caacctattg gtgctttgga catatcagca tctatagtag ctggtggtat
9421 tgtagctatc gtagtaacat gccttgccta ctattttatg aggtttagaa gagcttttgg
9481 tgaatacagt catgtagttg cctttaatac tttactattc cttatgtcat tcactgtact
9541 ctgtttaaca ccagtttact cattcttacc tggtgtttat tctgttattt acttgtactt
9601 gacattttat cttactaatg atgtttcttt tttagcacat attcagtgga tggttatgtt
9661 cacaccttta gtacctttct ggataacaat tgcttatatc atttgtattt ccacaaagca
9721 tttctattgg ttctttagta attacctaaa gagacgtgta gtctttaatg gtgtttcctt
9781 tagtactttt gaagaagctg cgctgtgcac ctttttgtta aataaagaaa tgtatctaaa
9841 gttgcgtagt gatgtgctat tacctcttac gcaatataat agatacttag ctctttataa
9901 taagtacaag tattttagtg gagcaatgga tacaactagc tacagagaag ctgcttgttg
9961 tcatctcgca aaggctctca atgacttcag taactcaggt tctgatgttc tttaccaacc
10021 accacaaacc tctatcacct cagctgtttt gcagagtggt tttagaaaaa tggcattccc
10081 atctggtaaa gttgagggtt gtatggtaca agtaacttgt ggtacaacta cacttaacgg
10141 tctttggctt gatgacgtag tttactgtcc aagacatgtg atctgcacct ctgaagacat
10201 gcttaaccct aattatgaag atttactcat tcgtaagtct aatcataatt tcttggtaca
10261 ggctggtaat gttcaactca gggttattgg acattctatg caaaattgtg tacttaagct
10321 taaggttgat acagccaatc ctaagacacc taagtataag tttgttcgca ttcaaccagg
10381 acagactttt tcagtgttag cttgttacaa tggttcacca tctggtgttt accaatgtgc
10441 tatgaggccc aatttcacta ttaagggttc attccttaat ggttcatgtg gtagtgttgg
10501 ttttaacata gattatgact gtgtctcttt ttgttacatg caccatatgg aattaccaac
10561 tggagttcat gctggcacag acttagaagg taacttttat ggaccttttg ttgacaggca
10621 aacagcacaa gcagctggta cggacacaac tattacagtt aatgttttag cttggttgta
10681 cgctgctgtt ataaatggag acaggtggtt tctcaatcga tttaccacaa ctcttaatga
10741 ctttaacctt gtggctatga agtacaatta tgaacctcta acacaagacc atgttgacat
10801 actaggacct ctttctgctc aaactggaat tgccgtttta gatatgtgtg cttcattaaa
10861 agaattactg caaaatggta tgaatggacg taccatattg ggtagtgctt tattagaaga
10921 tgaatttaca ccttttgatg ttgttagaca atgctcaggt gttactttcc aaagtgcagt
10981 gaaaagaaca atcaagggta cacaccactg gttgttactc acaattttga cttcactttt
11041 agttttagtc cagagtactc aatggtcttt gttctttttt ttgtatgaaa atgccttttt
11101 accttttgct atgggtatta ttgctatgtc tgcttttgca atgatgtttg tcaaacataa
11161 gcatgcattt ctctgtttgt ttttgttacc ttctcttgcc actgtagctt attttaatat
11221 ggtctatatg cctgctagtt gggtgatgcg tattatgaca tggttggata tggttgatac
11281 tagtttgtct ggttttaagc taaaagactg tgttatgtat gcatcagctg tagtgttact
11341 aatccttatg acagcaagaa ctgtgtatga tgatggtgct aggagagtgt ggacacttat
11401 gaatgtcttg acactcgttt ataaagttta ttatggtaat gctttagatc aagccatttc
11461 catgtgggct cttataatct ctgttacttc taactactca ggtgtagtta caactgtcat
11521 gtttttggcc agaggtattg tttttatgtg tgttgagtat tgccctattt tcttcataac
11581 tggtaataca cttcagtgta taatgctagt ttattgtttc ttaggctatt tttgtacttg
11641 ttactttggc ctcttttgtt tactcaaccg ctactttaga ctgactcttg gtgtttatga
11701 ttacttagtt tctacacagg agtttagata tatgaattca cagggactac tcccacccaa
11761 gaatagcata gatgccttca aactcaacat taaattgttg ggtgttggtg gcaaaccttg
11821 tatcaaagta gccactgtac agtctaaaat gtcagatgta aagtgcacat cagtagtctt
11881 actctcagtt ttgcaacaac tcagagtaga atcatcatct aaattgtggg ctcaatgtgt
11941 ccagttacac aatgacattc tcttagctaa agatactact gaagcctttg aaaaaatggt
12001 ttcactactt tctgttttgc tttccatgca gggtgctgta gacataaaca agctttgtga
12061 agaaatgctg gacaacaggg caaccttaca agctatagcc tcagagttta gttcccttcc
12121 atcatatgca gcttttgcta ctgctcaaga agcttatgag caggctgttg ctaatggtga
12181 ttctgaagtt gttcttaaaa agttgaagaa gtctttgaat gtggctaaat ctgaatttga
12241 ccgtgatgca gccatgcaac gtaagttgga aaagatggct gatcaagcta tgacccaaat
12301 gtataaacag gctagatctg aggacaagag ggcaaaagtt actagtgcta tgcagacaat
12361 gcttttcact atgcttagaa agttggataa tgatgcactc aacaacatta tcaacaatgc
12421 aagagatggt tgtgttccct tgaacataat acctcttaca acagcagcca aactaatggt
12481 tgtcatacca gactataaca catataaaaa tacgtgtgat ggtacaacat ttacttatgc
12541 atcagcattg tgggaaatcc aacaggttgt agatgcagat agtaaaattg ttcaacttag
12601 tgaaattagt atggacaatt cacctaattt agcatggcct cttattgtaa cagctttaag
12661 ggccaattct gctgtcaaat tacagaataa tgagcttagt cctgttgcac tacgacagat
12721 gtcttgtgct gccggtacta cacaaactgc ttgcactgat gacaatgcgt tagcttacta
12781 caacacaaca aagggaggta ggtttgtact tgcactgtta tccgatttac aggatttgaa
12841 atgggctaga ttccctaaga gtgatggaac tggtactatc tatacagaac tggaaccacc
12901 ttgtaggttt gttacagaca cacctaaagg tcctaaagtg aagtatttat actttattaa
12961 aggattaaac aacctaaata gaggtatggt acttggtagt ttagctgcca cagtacgtct
13021 acaagctggt aatgcaacag aagtgcctgc caattcaact gtattatctt tctgtgcttt
13081 tgctgtagat gctgctaaag cttacaaaga ttatctagct agtgggggac aaccaatcac
13141 taattgtgtt aagatgttgt gtacacacac tggtactggt caggcaataa cagttacacc
13201 ggaagccaat atggatcaag aatcctttgg tggtgcatcg tgttgtctgt actgccgttg
13261 ccacatagat catccaaatc ctaaaggatt ttgtgactta aaaggtaagt atgtacaaat
13321 acctacaact tgtgctaatg accctgtggg ttttacactt aaaaacacag tctgtaccgt
13381 ctgcggtatg tggaaaggtt atggctgtag ttgtgatcaa ctccgcgaac ccatgcttca
13441 gtcagctgat gcacaatcgt ttttaaacgg gtttgcggtg taagtgcagc ccgtcttaca
13501 ccgtgcggca caggcactag tactgatgtc gtatacaggg cttttgacat ctacaatgat
13561 aaagtagctg gttttgctaa attcctaaaa actaattgtt gtcgcttcca agaaaaggac
13621 gaagatgaca atttaattga ttcttacttt gtagttaaga gacacacttt ctctaactac
13681 caacatgaag aaacaattta taatttactt aaggattgtc cagctgttgc taaacatgac
13741 ttctttaagt ttagaataga cggtgacatg gtaccacata tatcacgtca acgtcttact
13801 aaatacacaa tggcagacct cgtctatgct ttaaggcatt ttgatgaagg taattgtgac
13861 acattaaaag aaatacttgt cacatacaat tgttgtgatg atgattattt caataaaaag
13921 gactggtatg attttgtaga aaacccagat atattacgcg tatacgccaa cttaggtgaa
13981 cgtgtacgcc aagctttgtt aaaaacagta caattctgtg atgccatgcg aaatgctggt
14041 attgttggtg tactgacatt agataatcaa gatctcaatg gtaactggta tgatttcggt
14101 gatttcatac aaaccacgcc aggtagtgga gttcctgttg tagattctta ttattcattg
14161 ttaatgccta tattaacctt gaccagggct ttaactgcag agtcacatgt tgacactgac
14221 ttaacaaagc cttacattaa gtgggatttg ttaaaatatg acttcacgga agagaggtta
14281 aaactctttg accgttattt taaatattgg gatcagacat accacccaaa ttgtgttaac
14341 tgtttggatg acagatgcat tctgcattgt gcaaacttta atgttttatt ctctacagtg
14401 ttcccaccta caagttttgg accactagtg agaaaaatat ttgttgatgg tgttccattt
14461 gtagtttcaa ctggatacca cttcagagag ctaggtgttg tacataatca ggatgtaaac
14521 ttacatagct ctagacttag ttttaaggaa ttacttgtgt atgctgctga ccctgctatg
14581 cacgctgctt ctggtaatct attactagat aaacgcacta cgtgcttttc agtagctgca
14641 cttactaaca atgttgcttt tcaaactgtc aaacccggta attttaacaa agacttctat
14701 gactttgctg tgtctaaggg tttctttaag gaaggaagtt ctgttgaatt aaaacacttc
14761 ttctttgctc aggatggtaa tgctgctatc agcgattatg actactatcg ttataatcta
14821 ccaacaatgt gtgatatcag acaactacta tttgtagttg aagttgttga taagtacttt
14881 gattgttacg atggtggctg tattaatgct aaccaagtca tcgtcaacaa cctagacaaa
14941 tcagctggtt ttccatttaa taaatggggt aaggctagac tttattatga ttcaatgagt
15001 tatgaggatc aagatgcact tttcgcatat acaaaacgta atgtcatccc tactataact
15061 caaatgaatc ttaagtatgc cattagtgca aagaatagag ctcgcaccgt agctggtgtc
15121 tctatctgta gtactatgac caatagacag tttcatcaaa aattattgaa atcaatagcc
15181 gccactagag gagctactgt agtaattgga acaagcaaat tctatggtgg ttggcacaac
15241 atgttaaaaa ctgtttatag tgatgtagaa aaccctcacc ttatgggttg ggattatcct
15301 aaatgtgata gagccatgcc taacatgctt agaattatgg cctcacttgt tcttgctcgc
15361 aaacatacaa cgtgttgtag cttgtcacac cgtttctata gattagctaa tgagtgtgct
15421 caagtattga gtgaaatggt catgtgtggc ggttcactat atgttaaacc aggtggaacc
15481 tcatcaggag atgccacaac tgcttatgct aatagtgttt ttaacatttg tcaagctgtc
15541 acggccaatg ttaatgcact tttatctact gatggtaaca aaattgccga taagtatgtc
15601 cgcaatttac aacacagact ttatgagtgt ctctatagaa atagagatgt tgacacagac
15661 tttgtgaatg agttttacgc atatttgcgt aaacatttct caatgatgat actctctgac
15721 gatgctgttg tgtgtttcaa tagcacttat gcatctcaag gtctagtggc tagcataaag
15781 aactttaagt cagttcttta ttatcaaaac aatgttttta tgtctgaagc aaaatgttgg
15841 actgagactg accttactaa aggacctcat gaattttgct ctcaacatac aatgctagtt
15901 aaacagggtg atgattatgt gtaccttcct tacccagatc catcaagaat cctaggggcc
15961 ggctgttttg tagatgatat cgtaaaaaca gatggtacac ttatgattga acggttcgtg
16021 tctttagcta tagatgctta cccacttact aaacatccta atcaggagta tgctgatgtc
16081 tttcatttgt acttacaata cataagaaag ctacatgatg agttaacagg acacatgtta
16141 gacatgtatt ctgttatgct tactaatgat aacacttcaa ggtattggga acctgagttt
16201 tatgaggcta tgtacacacc gcatacagtc ttacaggctg ttggggcttg tgttctttgc
16261 aattcacaga cttcattaag atgtggtgct tgcatacgta gaccattctt atgttgtaaa
16321 tgctgttacg accatgtcat atcaacatca cataaattag tcttgtctgt taatccgtat
16381 gtttgcaatg ctccaggttg tgatgtcaca gatgtgactc aactttactt aggaggtatg
16441 agctattatt gtaaatcaca taaaccaccc attagttttc cattgtgtgc taatggacaa
16501 gtttttggtt tatataaaaa tacatgtgtt ggtagcgata atgttactga ctttaatgca
16561 attgcaacat gtgactggac aaatgctggt gattacattt tagctaacac ctgtactgaa
16621 agactcaagc tttttgcagc agaaacgctc aaagctactg aggagacatt taaactgtct
16681 tatggtattg ctactgtacg tgaagtgctg tctgacagag aattacatct ttcatgggaa
16741 gttggtaaac ctagaccacc acttaaccga aattatgtct ttactggtta tcgtgtaact
16801 aaaaacagta aagtacaaat aggagagtac acctttgaaa aaggtgacta tggtgatgct
16861 gttgtttacc gaggtacaac aacttacaaa ttaaatgttg gtgattattt tgtgctgaca
16921 tcacatacag taatgccatt aagtgcacct acactagtgc cacaagagca ctatgttaga
16981 attactggct tatacccaac actcaatatc tcagatgagt tttctagcaa tgttgcaaat
17041 tatcaaaagg ttggtatgca aaagtattct acactccagg gaccacctgg tactggtaag
17101 agtcattttg ctattggcct agctctctac tacccttctg ctcgcatagt gtatacagct
17161 tgctctcatg ccgctgttga tgcactatgt gagaaggcat taaaatattt gcctatagat
17221 aaatgtagta gaattatacc tgcacgtgct cgtgtagagt gttttgataa attcaaagtg
17281 aattcaacat tagaacagta tgtcttttgt actgtaaatg cattgcctga gacgacagca
17341 gatatagttg tctttgatga aatttcaatg gccacaaatt atgatttgag tgttgtcaat
17401 gccagattac gtgctaagca ctatgtgtac attggcgacc ctgctcaatt acctgcacca
17461 cgcacattgc taactaaggg cacactagaa ccagaatatt tcaattcagt gtgtagactt
17521 atgaaaacta taggtccaga catgttcctc ggaacttgtc ggcgttgtcc tgctgaaatt
17581 gttgacactg tgagtgcttt ggtttatgat aataagctta aagcacataa agacaaatca
17641 gctcaatgct ttaaaatgtt ttataagggt gttatcacgc atgatgtttc atctgcaatt
17701 aacaggccac aaataggcgt ggtaagagaa ttccttacac gtaaccctgc ttggagaaaa
17761 gctgtcttta tttcacctta taattcacag aatgctgtag cctcaaagat tttgggacta
17821 ccaactcaaa ctgttgattc atcacagggc tcagaatatg actatgtcat attcactcaa
17881 accactgaaa cagctcactc ttgtaatgta aacagattta atgttgctat taccagagca
17941 aaagtaggca tactttgcat aatgtctgat agagaccttt atgacaagtt gcaatttaca
18001 agtcttgaaa ttccacgtag gaatgtggca actttacaag ctgaaaatgt aacaggactc
18061 tttaaagatt gtagtaaggt aatcactggg ttacatccta cacaggcacc tacacacctc
18121 agtgttgaca ctaaattcaa aactgaaggt ttatgtgttg acatacctgg catacctaag
18181 gacatgacct atagaagact catctctatg atgggtttta aaatgaatta tcaagttaat
18241 ggttacccta acatgtttat cacccgcgaa gaagctataa gacatgtacg tgcatggatt
18301 ggcttcgatg tcgaggggtg tcatgctact agagaagctg ttggtaccaa tttaccttta
18361 cagctaggtt tttctacagg tgttaaccta gttgctgtac ctacaggtta tgttgataca
18421 cctaataata cagatttttc cagagttagt gctaaaccac cgcctggaga tcaatttaaa
18481 cacctcatac cacttatgta caaaggactt ccttggaatg tagtgcgtat aaagattgta
18541 caaatgttaa gtgacacact taaaaatctc tctgacagag tcgtatttgt cttatgggca
18601 catggctttg agttgacatc tatgaagtat tttgtgaaaa taggacctga gcgcacctgt
18661 tgtctatgtg atagacgtgc cacatgcttt tccactgctt cagacactta tgcctgttgg
18721 catcattcta ttggatttga ttacgtctat aatccgttta tgattgatgt tcaacaatgg
18781 ggttttacag gtaacctaca aagcaaccat gatctgtatt gtcaagtcca tggtaatgca
18841 catgtagcta gttgtgatgc aatcatgact aggtgtctag ctgtccacga gtgctttgtt
18901 aagcgtgttg actggactat tgaatatcct ataattggtg atgaactgaa gattaatgcg
18961 gcttgtagaa aggttcaaca catggttgtt aaagctgcat tattagcaga caaattccca
19021 gttcttcacg acattggtaa ccctaaagct attaagtgtg tacctcaagc tgatgtagaa
19081 tggaagttct atgatgcaca gccttgtagt gacaaagctt ataaaataga agaattattc
19141 tattcttatg ccacacattc tgacaaattc acagatggtg tatgcctatt ttggaattgc
19201 aatgtcgata gatatcctgc taattccatt gtttgtagat ttgacactag agtgctatct
19261 aaccttaact tgcctggttg tgatggtggc agtttgtatg taaataaaca tgcattccac
19321 acaccagctt ttgataaaag tgcttttgtt aatttaaaac aattaccatt tttctattac
19381 tctgacagtc catgtgagtc tcatggaaaa caagtagtgt cagatataga ttatgtacca
19441 ctaaagtctg ctacgtgtat aacacgttgc aatttaggtg gtgctgtctg tagacatcat
19501 gctaatgagt acagattgta tctcgatgct tataacatga tgatctcagc tggctttagc
19561 ttgtgggttt acaaacaatt tgatacttat aacctctgga acacttttac aagacttcag
19621 agtttagaaa atgtggcttt taatgttgta aataagggac actttgatgg acaacagggt
19681 gaagtaccag tttctatcat taataacact gtttacacaa aagttgatgg tgttgatgta
19741 gaattgtttg aaaataaaac aacattacct gttaatgtag catttgagct ttgggctaag
19801 cgcaacatta aaccagtacc agaggtgaaa atactcaata atttgggtgt ggacattgct
19861 gctaatactg tgatctggga ctacaaaaga gatgctccag cacatatatc tactattggt
19921 gtttgttcta tgactgacat agccaagaaa ccaactgaaa cgatttgtgc accactcact
19981 gtcttttttg atggtagagt tgatggtcaa gtagacttat ttagaaatgc ccgtaatggt
20041 gttcttatta cagaaggtag tgttaaaggt ttacaaccat ctgtaggtcc caaacaagct
20101 agtcttaatg gagtcacatt aattggagaa gccgtaaaaa cacagttcaa ttattataag
20161 aaagttgatg gtgttgtcca acaattacct gaaacttact ttactcagag tagaaattta
20221 caagaattta aacccaggag tcaaatggaa attgatttct tagaattagc tatggatgaa
20281 ttcattgaac ggtataaatt agaaggctat gccttcgaac atatcgttta tggagatttt
20341 agtcatagtc agttaggtgg tttacatcta ctgattggac tagctaaacg ttttaaggaa
20401 tcaccttttg aattagaaga ttttattcct atggacagta cagttaaaaa ctatttcata
20461 acagatgcgc aaacaggttc atctaagtgt gtgtgttctg ttattgattt attacttgat
20521 gattttgttg aaataataaa atcccaagat ttatctgtag tttctaaggt tgtcaaagtg
20581 actattgact atacagaaat ttcatttatg ctttggtgta aagatggcca tgtagaaaca
20641 ttttacccaa aattacaatc tagtcaagcg tggcaaccgg gtgttgctat gcctaatctt
20701 tacaaaatgc aaagaatgct attagaaaag tgtgaccttc aaaattatgg tgatagtgca
20761 acattaccta aaggcataat gatgaatgtc gcaaaatata ctcaactgtg tcaatattta
20821 aacacattaa cattagctgt accctataat atgagagtta tacattttgg tgctggttct
20881 gataaaggag ttgcaccagg tacagctgtt ttaagacagt ggttgcctac gggtacgctg
20941 cttgtcgatt cagatcttaa tgactttgtc tctgatgcag attcaacttt gattggtgat
21001 tgtgcaactg tacatacagc taataaatgg gatctcatta ttagtgatat gtacgaccct
21061 aagactaaaa atgttacaaa agaaaatgac tctaaagagg gttttttcac ttacatttgt
21121 gggtttatac aacaaaagct agctcttgga ggttccgtgg ctataaagat aacagaacat
21181 tcttggaatg ctgatcttta taagctcatg ggacacttcg catggtggac agcctttgtt
21241 actaatgtga atgcgtcatc atctgaagca tttttaattg gatgtaatta tcttggcaaa
21301 ccacgcgaac aaatagatgg ttatgtcatg catgcaaatt acatattttg gaggaataca
21361 aatccaattc agttgtcttc ctattcttta tttgacatga gtaaatttcc ccttaaatta
21421 aggggtactg ctgttatgtc tttaaaagaa ggtcaaatca atgatatgat tttatctctt
21481 cttagtaaag gtagacttat aattagagaa aacaacagag ttgttatttc tagtgatgtt
21541 cttgttaaca actaaacgaa caatgtttgt ttttcttgtt ttattgccac tagtctctag
21601 tcagtgtgtt aatcttacaa ccagaactca attaccccct gcatacacta attctttcac
21661 acgtggtgtt tattaccctg acaaagtttt cagatcctca gttttacatt caactcagga
21721 cttgttctta cctttctttt ccaatgttac ttggttccat gctatacatg tctctgggac
21781 caatggtact aagaggtttg ataaccctgt cctaccattt aatgatggtg tttattttgc
21841 ttccactgag aagtctaaca taataagagg ctggattttt ggtactactt tagattcgaa
21901 gacccagtcc ctacttattg ttaataacgc tactaatgtt gttattaaag tctgtgaatt
21961 tcaattttgt aatgatccat ttttgggtgt ttattaccac aaaaacaaca aaagttggat
22021 ggaaagtgag ttcagagttt attctagtgc gaataattgc acttttgaat atgtctctca
22081 gccttttctt atggaccttg aaggaaaaca gggtaatttc aaaaatctta gggaatttgt
22141 gtttaagaat attgatggtt attttaaaat atattctaag cacacgccta ttaatttagt
22201 gcgtgatctc cctcagggtt tttcggcttt agaaccattg gtagatttgc caataggtat
22261 taacatcact aggtttcaaa ctttacttgc tttacataga agttatttga ctcctggtga
22321 ttcttcttca ggttggacag ctggtgctgc agcttattat gtgggttatc ttcaacctag
22381 gacttttcta ttaaaatata atgaaaatgg aaccattaca gatgctgtag actgtgcact
22441 tgaccctctc tcagaaacaa agtgtacgtt gaaatccttc actgtagaaa aaggaatcta
22501 tcaaacttct aactttagag tccaaccaac agaatctatt gttagatttc ctaatattac
22561 aaacttgtgc ccttttggtg aagtttttaa cgccaccaga tttgcatctg tttatgcttg
22621 gaacaggaag agaatcagca actgtgttgc tgattattct gtcctatata attccgcatc
22681 attttccact tttaagtgtt atggagtgtc tcctactaaa ttaaatgatc tctgctttac
22741 taatgtctat gcagattcat ttgtaattag aggtgatgaa gtcagacaaa tcgctccagg
22801 gcaaactgga aagattgctg attataatta taaattacca gatgatttta caggctgcgt
22861 tatagcttgg aattctaaca atcttgattc taaggttggt ggtaattata attacctgta
22921 tagattgttt aggaagtcta atctcaaacc ttttgagaga gatatttcaa ctgaaatcta
22981 tcaggccggt agcacacctt gtaatggtgt tgaaggtttt aattgttact ttcctttaca
23041 atcatatggt ttccaaccca ctaatggtgt tggttaccaa ccatacagag tagtagtact
23101 ttcttttgaa cttctacatg caccagcaac tgtttgtgga cctaaaaagt ctactaattt
23161 ggttaaaaac aaatgtgtca atttcaactt caatggttta acaggcacag gtgttcttac
23221 tgagtctaac aaaaagtttc tgcctttcca acaatttggc agagacattg ctgacactac
23281 tgatgctgtc cgtgatccac agacacttga gattcttgac attacaccat gttcttttgg
23341 tggtgtcagt gttataacac caggaacaaa tacttctaac caggttgctg ttctttatca
23401 ggatgttaac tgcacagaag tccctgttgc tattcatgca gatcaactta ctcctacttg
23461 gcgtgtttat tctacaggtt ctaatgtttt tcaaacacgt gcaggctgtt taataggggc
23521 tgaacatgtc aacaactcat atgagtgtga catacccatt ggtgcaggta tatgcgctag
23581 ttatcagact cagactaatt ctcctcggcg ggcacgtagt gtagctagtc aatccatcat
23641 tgcctacact atgtcacttg gtgcagaaaa ttcagttgct tactctaata actctattgc
23701 catacccaca aattttacta ttagtgttac cacagaaatt ctaccagtgt ctatgaccaa
23761 gacatcagta gattgtacaa tgtacatttg tggtgattca actgaatgca gcaatctttt
23821 gttgcaatat ggcagttttt gtacacaatt aaaccgtgct ttaactggaa tagctgttga
23881 acaagacaaa aacacccaag aagtttttgc acaagtcaaa caaatttaca aaacaccacc
23941 aattaaagat tttggtggtt ttaatttttc acaaatatta ccagatccat caaaaccaag
24001 caagaggtca tttattgaag atctactttt caacaaagtg acacttgcag atgctggctt
24061 catcaaacaa tatggtgatt gccttggtga tattgctgct agagacctca tttgtgcaca
24121 aaagtttaac ggccttactg ttttgccacc tttgctcaca gatgaaatga ttgctcaata
24181 cacttctgca ctgttagcgg gtacaatcac ttctggttgg acctttggtg caggtgctgc
24241 attacaaata ccatttgcta tgcaaatggc ttataggttt aatggtattg gagttacaca
24301 gaatgttctc tatgagaacc aaaaattgat tgccaaccaa tttaatagtg ctattggcaa
24361 aattcaagac tcactttctt ccacagcaag tgcacttgga aaacttcaag atgtggtcaa
24421 ccaaaatgca caagctttaa acacgcttgt taaacaactt agctccaatt ttggtgcaat
24481 ttcaagtgtt ttaaatgata tcctttcacg tcttgacaaa gttgaggctg aagtgcaaat
24541 tgataggttg atcacaggca gacttcaaag tttgcagaca tatgtgactc aacaattaat
24601 tagagctgca gaaatcagag cttctgctaa tcttgctgct actaaaatgt cagagtgtgt
24661 acttggacaa tcaaaaagag ttgatttttg tggaaagggc tatcatctta tgtccttccc
24721 tcagtcagca cctcatggtg tagtcttctt gcatgtgact tatgtccctg cacaagaaaa
24781 gaacttcaca actgctcctg ccatttgtca tgatggaaaa gcacactttc ctcgtgaagg
24841 tgtctttgtt tcaaatggca cacactggtt tgtaacacaa aggaattttt atgaaccaca
24901 aatcattact acagacaaca catttgtgtc tggtaactgt gatgttgtaa taggaattgt
24961 caacaacaca gtttatgatc ctttgcaacc tgaattagac tcattcaagg aggagttaga
25021 taaatatttt aagaatcata catcaccaga tgttgattta ggtgacatct ctggcattaa
25081 tgcttcagtt gtaaacattc aaaaagaaat tgaccgcctc aatgaggttg ccaagaattt
25141 aaatgaatct ctcatcgatc tccaagaact tggaaagtat gagcagtata taaaatggcc
25201 atggtacatt tggctaggtt ttatagctgg cttgattgcc atagtaatgg tgacaattat
25261 gctttgctgt atgaccagtt gctgtagttg tctcaagggc tgttgttctt gtggatcctg
25321 ctgcaaattt gatgaagacg actctgagcc agtgctcaaa ggagtcaaat tacattacac
25381 ataaacgaac ttatggattt gtttatgaga atcttcacaa ttggaactgt aactttgaag
25441 caaggtgaaa tcaaggatgc tactccttca gattttgttc gcgctactgc aacgataccg
25501 atacaagcct cactcccttt cggatggctt attgttggcg ttgcacttct tgctgttttt
25561 cagagcgctt ccaaaatcat aaccctcaaa aagagatggc aactagcact ctccaagggt
25621 gttcactttg tttgcaactt gctgttgttg tttgtaacag tttactcaca ccttttgctc
25681 gttgctgctg gccttgaagc cccttttctc tatctttatg ctttagtcta cttcttgcag
25741 agtataaact ttgtaagaat aataatgagg ctttggcttt gctggaaatg ccgttccaaa
25801 aacccattac tttatgatgc caactatttt ctttgctggc atactaattg ttacgactat
25861 tgtatacctt acaatagtgt aacttcttca attgtcatta cttcaggtga tggcacaaca
25921 agtcctattt ctgaacatga ctaccagatt ggtggttata ctgaaaaatg ggaatctgga
25981 gtaaaagact gtgttgtatt acacagttac ttcacttcag actattacca gctgtactca
26041 actcaattga gtacagacac tggtgttgaa catgttacct tcttcatcta caataaaatt
26101 gttgatgagc ctgaagaaca tgtccaaatt cacacaatcg acggttcatc cggagttgtt
26161 aatccagtaa tggaaccaat ttatgatgaa ccgacgacga ctactagcgt gcctttgtaa
26221 gcacaagctg atgagtacga acttatgtac tcattcgttt cggaagagac aggtacgtta
26281 atagttaata gcgtacttct ttttcttgct ttcgtggtat tcttgctagt tacactagcc
26341 atccttactg cgcttcgatt gtgtgcgtac tgctgcaata ttgttaacgt gagtcttgta
26401 aaaccttctt tttacgttta ctctcgtgtt aaaaatctga attcttctag agttcctgat
26461 cttctggtct aaacgaacta aatattatat tagtttttct gtttggaact ttaattttag
26521 ccatggcaga ttccaacggt actattaccg ttgaagagct taaaaagctc cttgaacaat
26581 ggaacctagt aataggtttc ctattcctta catggatttg tcttctacaa tttgcctatg
26641 ccaacaggaa taggtttttg tatataatta agttaatttt cctctggctg ttatggccag
26701 taactttagc ttgttttgtg cttgctgctg tttacagaat aaattggatc accggtggaa
26761 ttgctatcgc aatggcttgt cttgtaggct tgatgtggct cagctacttc attgcttctt
26821 tcagactgtt tgcgcgtacg cgttccatgt ggtcattcaa tccagaaact aacattcttc
26881 tcaacgtgcc actccatggc actattctga ccagaccgct tctagaaagt gaactcgtaa
26941 tcggagctgt gatccttcgt ggacatcttc gtattgctgg acaccatcta ggacgctgtg
27001 acatcaagga cctgcctaaa gaaatcactg ttgctacatc acgaacgctt tcttattaca
27061 aattgggagc ttcgcagcgt gtagcaggtg actcaggttt tgctgcatac agtcgctaca
27121 ggattggcaa ctataaatta aacacagacc attccagtag cagtgacaat attgctttgc
27181 ttgtacagta agtgacaaca gatgtttcat ctcgttgact ttcaggttac tatagcagag
27241 atattactaa ttattatgag gacttttaaa gtttccattt ggaatcttga ttacatcata
27301 aacctcataa ttaaaaattt atctaagtca ctaactgaga ataaatattc tcaattagat
27361 gaagagcaac caatggagat tgattaaacg aacatgaaaa ttattctttt cttggcactg
27421 ataacactcg ctacttgtga gctttatcac taccaagagt gtgttagagg tacaacagta
27481 cttttaaaag aaccttgctc ttctggaaca tacgagggca attcaccatt tcatcctcta
27541 gctgataaca aatttgcact gacttgcttt agcactcaat ttgcttttgc ttgtcctgac
27601 ggcgtaaaac acgtctatca gttacgtgcc agatcagttt cacctaaact gttcatcaga
27661 caagaggaag ttcaagaact ttactctcca atttttctta ttgttgcggc aatagtgttt
27721 ataacacttt gcttcacact caaaagaaag acagaatgat tgaactttca ttaattgact
27781 tctatttgtg ctttttagcc tttctgctat tccttgtttt aattatgctt attatctttt
27841 ggttctcact tgaactgcaa gatcataatg aaacttgtca cgcctaaacg aacatgaaat
27901 ttcttgtttt cttaggaatc atcacaactg tagctgcatt tcaccaagaa tgtagtttac
27961 agtcatgtac tcaacatcaa ccatatgtag ttgatgaccc gtgtcctatt cacttctatt
28021 ctaaatggta tattagagta ggagctagaa aatcagcacc tttaattgaa ttgtgcgtgg
28081 atgaggctgg ttctaaatca cccattcagt acatcgatat cggtaattat acagtttcct
28141 gtttaccttt tacaattaat tgccaggaac ctaaattggg tagtcttgta gtgcgttgtt
28201 cgttctatga agacttttta gagtatcatg acgttcgtgt tgttttagat ttcatctaaa
28261 cgaacaaact aaaatgtctg ataatggacc ccaaaatcag cgaaatgcac cccgcattac
28321 gtttggtgga ccctcagatt caactggcag taaccagaat ggagaacgca gtggggcgcg
28381 atcaaaacaa cgtcggcccc aaggtttacc caataatact gcgtcttggt tcaccgctct
28441 cactcaacat ggcaaggaag accttaaatt ccctcgagga caaggcgttc caattaacac
28501 caatagcagt ccagatgacc aaattggcta ctaccgaaga gctaccagac gaattcgtgg
28561 tggtgacggt aaaatgaaag atctcagtcc aagatggtat ttctactacc taggaactgg
28621 gccagaagct ggacttccct atggtgctaa caaagacggc atcatatggg ttgcaactga
28681 gggagccttg aatacaccaa aagatcacat tggcacccgc aatcctgcta acaatgctgc
28741 aatcgtgcta caacttcctc aaggaacaac attgccaaaa ggcttctacg cagaagggag
28801 cagaggcggc agtcaagcct cttctcgttc ctcatcacgt agtcgcaaca gttcaagaaa
28861 ttcaactcca ggcagcagta ggggaacttc tcctgctaga atggctggca atggcggtga
28921 tgctgctctt gctttgctgc tgcttgacag attgaaccag cttgagagca aaatgtctgg
28981 taaaggccaa caacaacaag gccaaactgt cactaagaaa tctgctgctg aggcttctaa
29041 gaagcctcgg caaaaacgta ctgccactaa agcatacaat gtaacacaag ctttcggcag
29101 acgtggtcca gaacaaaccc aaggaaattt tggggaccag gaactaatca gacaaggaac
29161 tgattacaaa cattggccgc aaattgcaca atttgccccc agcgcttcag cgttcttcgg
29221 aatgtcgcgc attggcatgg aagtcacacc ttcgggaacg tggttgacct acacaggtgc
29281 catcaaattg gatgacaaag atccaaattt caaagatcaa gtcattttgc tgaataagca
29341 tattgacgca tacaaaacat tcccaccaac agagcctaaa aaggacaaaa agaagaaggc
29401 tgatgaaact caagccttac cgcagagaca gaagaaacag caaactgtga ctcttcttcc
29461 tgctgcagat ttggatgatt tctccaaaca attgcaacaa tccatgagca gtgctgactc
29521 aactcaggcc taaactcatg cagaccacac aaggcagatg ggctatataa acgttttcgc
29581 ttttccgttt acgatatata gtctactctt gtgcagaatg aattctcgta actacatagc
29641 acaagtagat gtagttaact ttaatctcac atagcaatct ttaatcagtg tgtaacatta
29701 gggaggactt gaaagagcca ccacattttc accgaggcca cgcggagtac gatcgagtgt
29761 acagtgaaca atgctaggga gagctgccta tatggaagag ccctaatgtg taaaattaat
29821 tttagtagtg ctatccccat gtgattttaa tagcttctta ggagaatgac aaaaaaaaaa
29881 aaaaaaaaaa aaaaaaaaaa aaa"""
###Output
_____no_output_____
###Markdown
> **The genome string above can be replaced in this notebook by saving the sequence to disk as a plain-text file and reading it back in, as done in the next cell.**
###Code
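# Added sketch (not in the original notebook): the markdown note above suggests saving
# the genome to a plain-text file and reading it back. A minimal round trip looks like
# this; 'covid_genome_demo.txt' and the tiny placeholder string are illustrative only.
toy_genome = "attaaaggtttataccttcc"
with open('covid_genome_demo.txt', 'w') as f:
    f.write(toy_genome)
with open('covid_genome_demo.txt', 'r') as f:
    assert f.read() == toy_genome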
with open('/Users/pranjal27bhardwaj/Desktop/Corona main/covid_genome.txt', 'r') as file:
corona = file.read()
corona
###Output
_____no_output_____
###Markdown
We remove all the digits, spaces and newlines from the genome so that only the string of A, T, G and C bases remains, using the `replace` function:
###Code
for a in " \n0123456789":
corona = corona.replace(a, "")
corona
###Output
_____no_output_____
###Markdown
Number of bases, i.e. nucleotides, the molecules that make up RNA and DNA:
###Code
len(corona)
###Output
_____no_output_____
###Markdown
Kolmogorov complexity Estimating the information content of the virus by compressing the genome of the coronavirus. Compression gives an upper bound on the 'Kolmogorov complexity' (only upper-bounding is possible; lower-bounding is not). Compressing using zlib
###Code
import zlib
len(zlib.compress(corona.encode("utf-8")))
## in Python 3+ the string must be encoded (here as UTF-8) before compressing
###Output
_____no_output_____
###Markdown
The above result means the RNA of the coronavirus can contain at most 8858 bytes of information. This is just an upper bound: the coronavirus cannot contain more than 8858 bytes of information, but the true content may be lower. Here we used the zlib method of compression; we can look for better compressors such as LZMA and see if we can compress it a little more. Compressing further using lzma
###Code
import lzma
lzc = lzma.compress(corona.encode("utf-8"))
len(lzc)
###Output
_____no_output_____
###Markdown
So the RNA of the coronavirus can contain at most 8308 bytes of information. This is again just an upper bound, but a tighter one, hence LZMA is the better compressor. How do we extract information from this genome? The genome contains the information about the proteins it can make, and these proteins determine the characteristics of the cell in which they are produced. So we need to extract information about the proteins, and to do that we must know how proteins are formed from the genetic material, i.e. DNA/RNA.> **Learning before applying:** RNA and DNA encode proteins. This is how proteins are formed from DNA: in DNA, A-T/U and G-C form pairs. This pair formation happens because the chemical structures of A, T/U, G and C are such that A and T are attracted to each other by 2 hydrogen bonds, while G and C are attracted to each other by 3 hydrogen bonds; A-C and G-T cannot form such stable bonds. > What happens during protein formation is:> An enzyme called 'RNA polymerase' breaks these hydrogen bonds over a small region, takes one strand of the DNA and forms its corresponding paired RNA. This process happens inside the nucleus of the cell. We call the RNA generated 'mRNA' or 'messenger RNA' because this RNA comes out of the nucleus and acts like a message to the ribosome, which generates proteins accordingly. This process of generating mRNA is called **transcription.** The ribosome then reads the mRNA in sets of 3 bases; a set of 3 bases is called a codon. Codons determine the amino acids: depending on the codon read by the ribosome, tRNA (transfer RNA) brings the appropriate amino acid, and these amino acids are linked by peptide bonds to form a chain called a *polypeptide chain*. At the other end of the ribosome the tRNA is freed and can go fetch another amino acid. > *Note:* Amino acids are organic compounds that contain amine (-NH2) and carboxyl (-COOH) functional groups. There are 20 standard amino acids and 2 non-standard ones. Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food. Here is the table of codons and their corresponding amino acids. 'Met' is usually the starting amino acid, i.e. 'AUG' marks the start of the mRNA coding sequence and is hence called the *start codon*; 'UAA', 'UGA' and 'UAG' are *stop codons* as they mark the end of the polypeptide chain, so that a new chain starts from the next codon. > This process of generating chains of amino acids is called **translation.** A very long chain of amino acids is called a *protein*. In summary, we can understand the process as: since the coronavirus only has RNA, transcription will not occur and only translation will happen. So what we now need to write is *a translation function* which takes corona's genome as input and gives back all the polypeptide chains that could be formed from that genome. For that, we first need a dictionary of codons. The following codon string is copied from 'Genetic code' on Wikipedia (https://en.wikipedia.org/wiki/DNA_codon_table).
###Code
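# Added illustration (not in the original notebook): translating a tiny RNA fragment by
# hand with a minimal subset of the codon table defined below, to make the codon -> amino
# acid idea concrete. 'AUG' is the start codon (Met, M) and 'UAA' is one of the stop codons (*).
toy_rna = "AUGGCUUAA"
toy_table = {"AUG": "M", "GCU": "A", "UAA": "*"}
print("".join(toy_table[toy_rna[i:i+3]] for i in range(0, len(toy_rna), 3)))  # -> MA*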
# Asn or Asp / B AAU, AAC; GAU, GAC
# Gln or Glu / Z CAA, CAG; GAA, GAG
# START AUG
## Separated from the table because these duplicates were creating problems
codons = """
Ala / A GCU, GCC, GCA, GCG
Ile / I AUU, AUC, AUA
Arg / R CGU, CGC, CGA, CGG; AGA, AGG, AGR;
Leu / L CUU, CUC, CUA, CUG; UUA, UUG, UUR;
Asn / N AAU, AAC
Lys / K AAA, AAG
Asp / D GAU, GAC
Met / M AUG
Phe / F UUU, UUC
Cys / C UGU, UGC
Pro / P CCU, CCC, CCA, CCG
Gln / Q CAA, CAG
Ser / S UCU, UCC, UCA, UCG; AGU, AGC;
Glu / E GAA, GAG
Thr / T ACU, ACC, ACA, ACG
Trp / W UGG
Gly / G GGU, GGC, GGA, GGG
Tyr / Y UAU, UAC
His / H CAU, CAC
Val / V GUU, GUC, GUA, GUG
STOP UAA, UGA, UAG""".strip()
for t in codons.split('\n'):
print(t.split('\t'))
###Output
['Ala / A', 'GCU, GCC, GCA, GCG']
['Ile / I', 'AUU, AUC, AUA']
['Arg / R', 'CGU, CGC, CGA, CGG; AGA, AGG, AGR;']
['Leu / L', 'CUU, CUC, CUA, CUG; UUA, UUG, UUR;']
['Asn / N', 'AAU, AAC']
['Lys / K', 'AAA, AAG']
['Asp / D', 'GAU, GAC']
['Met / M', 'AUG']
['Phe / F', 'UUU, UUC']
['Cys / C', 'UGU, UGC']
['Pro / P', 'CCU, CCC, CCA, CCG']
['Gln / Q', 'CAA, CAG']
['Ser / S', 'UCU, UCC, UCA, UCG; AGU, AGC;']
['Glu / E', 'GAA, GAG']
['Thr / T', 'ACU, ACC, ACA, ACG']
['Trp / W', 'UGG']
['Gly / G', 'GGU, GGC, GGA, GGG']
['Tyr / Y', 'UAU, UAC']
['His / H', 'CAU, CAC']
['Val / V', 'GUU, GUC, GUA, GUG']
['STOP', 'UAA, UGA, UAG']
###Markdown
> **To make this more readable we'll turn it into a decoder dictionary, and build the decoder for DNA. We also convert "U" to "T" in the list, because the coronavirus is an RNA virus but the genome above is written with DNA letters, so we want a DNA-only dictionary.**
###Code
##decoder dictionary
dec = {}
for t in codons.split('\n'):
k, v = t.split('\t')
if '/' in k:
k = k.split('/')[-1].strip()
k = k.replace("STOP", "*")
v = v.replace(",", "").replace(";", "").lower().replace("u", "t").split(" ")
for vv in v:
if vv in dec:
print("duplicate", vv)
dec[vv] = k
dec
###Output
_____no_output_____
###Markdown
We had to add the duplicate check because AUG appears in multiple places: it can be seen under both "Met" and "START", which was creating a problem in translation.
###Code
len(set(dec.values()))
###Output
_____no_output_____
###Markdown
> This means we have 21 entries in our decoder, which can be verified against the following chart: there are only 20 amino acids, and here 'STOP' is the 21st entry, which shows that the decoder works well. > Now, decoding the genome can be done in one of three possible ways. These 3 ways are called 'reading frames'. In molecular biology, a reading frame is a way of dividing the sequence of nucleotides in a nucleic acid (DNA or RNA) molecule into a set of consecutive, non-overlapping triplets.
###Code
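# Added illustration (not in the original notebook): the three reading frames of a
# sequence are simply the same string read in triplets starting at offsets 0, 1 and 2.
toy_seq = "atggcctaa"
print([toy_seq[f:] for f in range(3)])  # each offset yields a different set of codons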
def translation(x, isProtein = False):
    # read the sequence three bases (one codon) at a time and map each codon to its amino acid
    aa = []
    for i in range(0, len(x)-2, 3):
        aa.append(dec[x[i:i+3]])
    aa = ''.join(aa)
    if isProtein:
        # a proper protein must start with Met ('M') and end with a stop codon ('*')
        if aa[0] != "M" or aa[-1] != "*":
            print("BAD PROTEIN!")
            return None
        aa = aa[:-1]
    return aa
aa = translation(corona[0:]) + translation(corona[1:]) + translation(corona[2:])
## Concatenating the translations of all three reading frames (refer to the note on reading codons above)
aa
###Output
_____no_output_____
###Markdown
Polypeptides>In molecular biology, a reading frame is a way of dividing the sequence of nucleotides in a nucleic acid (DNA or RNA) molecule into a set of consecutive, non-overlapping triplets. Where these triplets equate to amino acids or stop signals during translation, they are called codons.>A polypeptide is a longer, continuous, and unbranched peptide chain of up to fifty amino acids. Hence, peptides fall under the broad chemical classes of biological oligomers and polymers, alongside nucleic acids, oligosaccharides, polysaccharides, and others.>When a polypeptide contains more than fifty amino acids it is known as a protein. Proteins consist of one or more polypeptides arranged in a biologically functional way, often bound to ligands such as coenzymes and cofactors, or to another protein or other macromolecule such as DNA or RNA, or to complex macromolecular assemblies.
###Code
polypeptides = aa.split("*")
polypeptides
len(polypeptides)
long_polypep_chains = list(filter(lambda x: len(x) > 100, aa.split("*")))
long_polypep_chains
len(long_polypep_chains)
###Output
_____no_output_____
###Markdown
This is the genome organisation of SARS-CoV-2. _(Genome organisation is the linear order of genetic material (DNA/RNA) and its division into segments performing some specific function.)_ > Note: ORF stands for 'Open Reading Frame', a reading frame in which the protein starts with M and ends with *. Source: https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome_coronavirus_2#Phylogenetics_and_taxonomy Let's see if we can extract all the segments mentioned there. We will refer to the following source again. Source: https://www.ncbi.nlm.nih.gov/nuccore/NC_045512 Also, if you look at the following genome organisation of SARS-CoV (the old coronavirus), you will notice that the structure is very similar to SARS-CoV-2. _(Ignore the detailing given in the structure.)_
###Code
with open('/Users/pranjal27bhardwaj/Desktop/corona/sars_cov2_data _c/genome/sars_cov2_genome.txt', 'r') as file:
corona = file.read()
for s in "\n01234567789 ":
corona = corona.replace(s, "")
# https://www.ncbi.nlm.nih.gov/protein/1802476803 -
# Orf1a polyprotein, found in Sars-Cov-2 (new Covid 19)
orf1a_poly_v2 = translation(corona[265:13483], True)
orf1a_poly_v2
# https://www.uniprot.org/uniprot/A7J8L3
# Orf1a polyprotein, found in Sars-Cov
with open('sars_cov2_data/proteins_copy/orf1a.txt', 'r') as file:
orf1a_poly_v1 = file.read().replace('\n', '')
orf1a_poly_v1
len(orf1a_poly_v1), len(orf1a_poly_v2)
###Output
_____no_output_____
###Markdown
> Usually orf1b is not studied alone but together with orf1a, so we need to look at 'orf1ab'. Still, just to confirm that the length of orf1b is 2595 aa, here we compute the length of orf1b in SARS-CoV-2.
###Code
# For orf1b_v1, refer - https://www.uniprot.org/uniprot/A0A0A0QGJ0
orf1b_poly_v2 = translation(corona[13467:21555])
# Length calculated from the first 'M'. The last base is *, so an extra -1 for that.
len(orf1b_poly_v2) - orf1b_poly_v2.find('M') - 1
# https://www.ncbi.nlm.nih.gov/protein/1796318597 -
# Orf1ab polyprotein - found in Sars-cov-2
orf1ab_poly_v2 = translation(corona[265:13468]) + translation(corona[13467:21555])
# https://www.uniprot.org/uniprot/A7J8L2
# Orf1ab polyprotein - found in Sars-cov
with open('sars_cov2_data/proteins_copy/orf1ab.txt', 'r') as file:
    orf1ab_poly_v1 = file.read().replace('\n', '')
len(orf1ab_poly_v2), len(orf1ab_poly_v1)
###Output
_____no_output_____
###Markdown
> So by now, we have extracted Orf1a and Orf1b RNA segments.
###Code
# https://www.ncbi.nlm.nih.gov/protein/1796318598
# Spike glycoprotein - found in Sars-cov-2
spike_pro_v2 = translation(corona[21562:25384], True)
# https://www.ncbi.nlm.nih.gov/Structure/pdb/6VXX CLOSED FORM of glycoprotein (structure of glycoprotein before delivering the payload)
# https://www.ncbi.nlm.nih.gov/Structure/pdb/6VYB OPEN FORM of glycoprotein (structure of glycoprotein after delivering the payload)
spike_pro_v2
cn3 = open('/Users/pranjal27bhardwaj/Desktop/Corona main/mmdb_6VXX.cn3', 'rb').read()
###Output
_____no_output_____
###Markdown
> The spike (S) glycoprotein has catalytic properties and is responsible for attacking the host's cells, which lets the virus multiply. The infection begins when the viral spike (S) glycoprotein attaches to its complementary host cell receptor, which usually is ACE2.
###Code
# https://www.uniprot.org/uniprot/P59594
# Spike glycoprotein - found in Sars-cov
with open('sars_cov2_data/proteins_copy/spike.txt', 'r') as file:
spike_v1 = file.read().replace('\n', '')
len(spike_pro_v2), len(spike_v1)
import nglview
view = nglview.show_pdbid("6VXX") # load "3pqr" from RCSB PDB and display viewer widget
view
import nglview
view = nglview.show_pdbid("6VYB") # load "3pqr" from RCSB PDB and display viewer widget
view
# https://www.ncbi.nlm.nih.gov/gene/43740569
# orf3a protein found in Sars-cov-2.
orf3a_pro_v2 = translation(corona[25392:26220], True)
# https://www.uniprot.org/uniprot/J9TEM7
with open('sars_cov2_data/proteins_copy/orf3a.txt', 'r') as file:
orf3a_pro_v1 = file.read().replace('\n', '')
len(orf3a_pro_v2), len(orf3a_pro_v1)
###Output
_____no_output_____
###Markdown
By now we have observed that there is very little difference in the corresponding protein lengths of SARS-CoV and SARS-CoV-2. **So, can we say that there isn't much difference between the proteins of the two viruses?** Well, **not really.** The reason is that the lengths of the two proteins are not an accurate measure of how dissimilar they are. That raises a different question. Q. 3 How different is the protein of this novel coronavirus compared to the older one? The answer is **the edit distance.** In computational linguistics and computer science, edit distance is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other. In bioinformatics, it can be used to quantify the similarity of DNA sequences, which can be viewed as strings of the letters A, C, G and T. Source: https://en.wikipedia.org/wiki/Edit_distance Let's calculate the edit distance between the genomes of the two versions of the coronavirus. Source of the complete genome of the old coronavirus: https://www.ncbi.nlm.nih.gov/nuccore/30271926
###Code
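# Added illustration (not part of the original analysis): a plain dynamic-programming
# Levenshtein (edit) distance on two toy strings, to make the metric concrete before
# applying the `editdistance` package to the full genomes below.
def toy_edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution (free if equal)
        prev = cur
    return prev[-1]
print(toy_edit_distance("kitten", "sitting"))  # 3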
with open('sars_cov_data/genome/sars_cov_genome.txt', 'r') as file:
sars_cov = file.read()
print(sars_cov)
for s in "\n01234567789 ":
sars_cov = sars_cov.replace(s, "")
sars_cov
import lzma
lzc_v1 = lzma.compress(sars_cov.encode("utf-8"))
len(lzc_v1)
len(lzc_v1) - len(lzc)
len(corona) - len(sars_cov)
import editdistance
editdistance.eval(sars_cov, corona)
###Output
_____no_output_____
###Markdown
From this, we can see that the novel coronavirus differs from the old coronavirus much more than expected. Now that we know the difference between two DNA/RNA sequences is measured by computing the edit distance, we can simply finish extracting the other proteins. > Cross-verifying the length of the envelope protein, which is supposed to be 75 aa
###Code
# https://www.ncbi.nlm.nih.gov/gene/43740570 - Envelope protein in Cov-2
envelope_pro_v2 = translation(corona[26244:26472], True)
len(envelope_pro_v2)
###Output
_____no_output_____
###Markdown
> Cross-verifying the length of the membrane protein, which is supposed to be 222 aa
###Code
# https://www.ncbi.nlm.nih.gov/gene/43740571 - Membrane Glycoprotein in Cov-2
membrane_pro_v2 = translation(corona[26522:27191], True)
len(membrane_pro_v2)
###Output
_____no_output_____
###Markdown
> Cross-verifying the length of the ORF6 protein, which is supposed to be 61 aa
###Code
# https://www.ncbi.nlm.nih.gov/gene/43740572 - Orf6 in Cov-2
orf6_pro_v2 = translation(corona[27201:27387], True)
len(orf6_pro_v2)
###Output
_____no_output_____
###Markdown
> Cross-verifying the length of the ORF7a protein, which is supposed to be 121 aa
###Code
# https://www.ncbi.nlm.nih.gov/gene/43740573 - orf7a in Cov-2
orf7a_pro = translation(corona[27393:27759], True)
len(orf7a_pro)
###Output
_____no_output_____
###Markdown
> Cross-verifying the length of the ORF7b protein, which is supposed to be 43 aa
###Code
# https://www.ncbi.nlm.nih.gov/gene/43740574 - orf7b in Cov-2
orf7b_pro = translation(corona[27755:27887], True)
len(orf7b_pro)
###Output
_____no_output_____
###Markdown
> Cross-verifying the length of the ORF8 protein, which is supposed to be 121 aa
###Code
# https://www.ncbi.nlm.nih.gov/gene/43740577 - orf8 in Cov-2
orf8_pro = translation(corona[27893:28259], True)
len(orf8_pro)
###Output
_____no_output_____
###Markdown
> Cross-verifying the length of the ORF10 protein, which is supposed to be 38 aa
###Code
# https://www.ncbi.nlm.nih.gov/gene/43740576 - orf10 in Cov-2
orf10_pro = translation(corona[29557:29674], True)
len(orf10_pro)
###Output
_____no_output_____
###Markdown
COVID-19 Quick demo for Pytraj and NGLView
###Code
import pytraj as pt
import nglview as nv
traj = pt.load(nv.datafiles.TRR, top=nv.datafiles.PDB)
view = nv.show_pytraj(traj)
view
import nglview
view = nglview.show_pdbid("3pqr") # load "3pqr" from RCSB PDB and display viewer widget
view
# import autoreload
# ?autoreload
%load_ext autoreload
%autoreload 2
import lzma
# sarscov2_basePair
from data.sarscov2_data import sarscov2_basePair
sarscov2_basePair_length = len(sarscov2_basePair)
sarscov2_cost_to_synthesize = sarscov2_basePair_length * 0.10
# Kolmogorov complexity - compression gives an upper bound
sarscov2_basePair_compressed_length = len(lzma.compress(sarscov2_basePair.encode("utf-8")))  # ~8.4kb
# sars
from data.sarscov2_data import sars
sars_length = len(sars)
sars_cost_to_synthesize = sars_length * 0.10
# Kolmogorov complexity - compression gives an upper bound
sars_compressed_length = len(lzma.compress(sars.encode("utf-8")))  # ~8.4kb
# sarscov2_basePair # genome_1
# sars # genome_2
from lib_sarscov2 import *
# Translation from nitrogenous bases to amino acids
# translate(sarscov2_basePair)
###Output
_____no_output_____
###Markdown
Here, Region, Name, Gender, Designation, Married, Children, Occupation, Cases 1/M, Deaths 1/M, Insurance, Salary and FT/MONTH are unnecessary labels, hence we won't consider them for our model and simply discard them.
###Code
data.drop(data.columns[[2,16]],axis=1,inplace=True)
data
data.describe()
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
labelencoder=LabelEncoder()
data['cardiological pressure']=labelencoder.fit_transform(data['cardiological pressure'])
data['comorbidity']=labelencoder.fit_transform(data['comorbidity'])
data['Pulmonary score']=labelencoder.fit_transform(data['Pulmonary score'])
data
import seaborn as sns
sns.heatmap(data.corr())
data.corr()
###Output
_____no_output_____
###Markdown
We can see the correlation between the attributes from the above correlation analysis and can draw some conclusions: 1) Coma score, cardiological pressure, diuresis, HBB, platelets, d-dimer, heart rate and HDL cholesterol are positively correlated with the infection probability of a person. 2) Charlson index, blood glucose, heart rate, comorbidity and age are negatively correlated. 3) The positive correlations imply that an increase in any of those variables increases the probability of a person being infected, whereas the negatively correlated variables behave in the opposite way. 4) A decrease in the negatively correlated variables leads to an increase in infection probability.
###Code
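# Added sketch: one direct way to see the signed correlations summarised above is to sort
# every attribute's correlation with the target column ('Infect_Prob', the target used
# later in this notebook).
print(data.corr()['Infect_Prob'].sort_values(ascending=False))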
data
# @hidden_cell
# The following code contains the credentials for a file in your IBM Cloud Object Storage.
# You might want to remove those credentials before you share your notebook.
credentials_1 = {
'IAM_SERVICE_ID': 'iam-ServiceId-d8d9281c-21e0-4c5a-9aa4-e2960a7e92db',
'IBM_API_KEY_ID': 'IokstnztqKsH5yoCaSfRkeRIGsBywOYluG4VWH7vVpXj',
'ENDPOINT': 'https://s3.eu-geo.objectstorage.service.networklayer.com',
'IBM_AUTH_ENDPOINT': 'https://iam.eu-gb.bluemix.net/oidc/token',
'BUCKET': 'corona-donotdelete-pr-aratehacjfsy4z',
'FILE': 'Test_dataset.xlsx'
}
client_91de2c84d9734c0cabf21ace8a543c7a = ibm_boto3.client(service_name='s3',
ibm_api_key_id='IokstnztqKsH5yoCaSfRkeRIGsBywOYluG4VWH7vVpXj',
ibm_auth_endpoint="https://iam.eu-gb.bluemix.net/oidc/token",
config=Config(signature_version='oauth'),
endpoint_url=('https://s3.eu-geo.objectstorage.service.networklayer.com'))
body = client_91de2c84d9734c0cabf21ace8a543c7a.get_object(Bucket='corona-donotdelete-pr-aratehacjfsy4z',Key='Test_dataset.xlsx')['Body']
# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType( __iter__, body )
test = pd.read_excel(body)
test.head()
test.drop(test.columns[[1,2,3]],axis=1,inplace=True)
test['cardiological pressure']=labelencoder.fit_transform(test['cardiological pressure'])
test['comorbidity']=labelencoder.fit_transform(test['comorbidity'])
test['Pulmonary score']=labelencoder.fit_transform(test['Pulmonary score'])
import statsmodels.api as sm
X=data[["Coma score","Pulmonary score","cardiological pressure","Diuresis","Platelets","HBB","d-dimer","HDL cholesterol"]]
y=data["Infect_Prob"]
model = sm.OLS(y, X).fit()
predictions = model.predict(X)
model.summary()
Xq=test[["Coma score","Pulmonary score","cardiological pressure","Diuresis","Platelets","HBB","d-dimer","HDL cholesterol"]]
predictions1=model.predict(Xq)
predictions1
f=open("corona.txt","w+")
for each in predictions1:
f.write(str(each))
print(each)
predictions
###Output
_____no_output_____ |
University of Washington - Machine Learning Specialization/University of Washington - Machine Learning Foundations A Case Study Approach/week3/FND03-NB01.ipynb | ###Markdown
Analyze Product Sentiment
###Code
import turicreate
import os
###Output
_____no_output_____
###Markdown
Read product review data
###Code
d = os.getcwd() #Gets the current working directory
os.chdir("..")
products = turicreate.SFrame('./data/amazon_baby.sframe/m_bfaa91c17752f745.frame_idx')
###Output
_____no_output_____
###Markdown
Explore data
###Code
products
products.groupby('name',operations={'count':turicreate.aggregate.COUNT()}).sort('count',ascending=False)
###Output
_____no_output_____
###Markdown
Examine the reviews for the most-reviewed product
###Code
giraffe_reviews = products[products['name']=='Vulli Sophie the Giraffe Teether']
giraffe_reviews
len(giraffe_reviews)
giraffe_reviews['rating'].show()
###Output
_____no_output_____
###Markdown
Building a sentiment classifier Build word count vectors
###Code
products['word_count'] = turicreate.text_analytics.count_words(products['review'])
products
###Output
_____no_output_____
###Markdown
Define what is positive and negative sentiment
###Code
products['rating'].show()
#ignore all 3* reviews
products = products[products['rating']!= 3]
#positive sentiment = 4-star or 5-star reviews
products['sentiment'] = products['rating'] >= 4
products
products['sentiment'].show()
###Output
_____no_output_____
###Markdown
Train our sentiment classifier
###Code
train_data,test_data = products.random_split(.8,seed=0)
sentiment_model = turicreate.logistic_classifier.create(train_data,target='sentiment', features=['word_count'], validation_set=test_data)
###Output
_____no_output_____
###Markdown
Apply the sentiment classifier to better understand the Giraffe reviews
###Code
products['predicted_sentiment'] = sentiment_model.predict(products, output_type = 'probability')
products
giraffe_reviews = products[products['name']== 'Vulli Sophie the Giraffe Teether']
giraffe_reviews
###Output
_____no_output_____
###Markdown
Sort the Giraffe reviews according to predicted sentiment
###Code
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews
giraffe_reviews.tail()
###Output
_____no_output_____
###Markdown
Show the most positive reviews
###Code
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
###Output
_____no_output_____
###Markdown
Most negative reviews
###Code
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
# some selected words to measure the data
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
# count how many times the customers mentioned each of the selected words
def getWordCount(data, word):
return int(data[word]) if word in data else 0
# create new column with every selected words and count
for word in selected_words:
products[word] = products['word_count'].apply(lambda x:getWordCount(x,word))
dicts = {}
for word in selected_words:
if word not in dicts:
dicts[word] = products[word].sum()
dicts
print('Max:', max(dicts, key=dicts.get))
print('Min:', min(dicts, key=dicts.get))
train_data,test_data = products.random_split(.8, seed=0)
features=selected_words
selected_words_model = turicreate.logistic_classifier.create(train_data,target='sentiment', features=features, validation_set=test_data)
selected_words_model
products['predicted_selected_words'] = selected_words_model.predict(products, output_type = 'probability')
products.print_rows(num_rows=30)
products[products['name']== 'Baby Trend Diaper Champ'].sort('predicted_sentiment', ascending=False)[0]
selected_words_model.evaluate(test_data)
sentiment_model.evaluate(test_data)
test_data[test_data['rating'] >3].num_rows()/test_data.num_rows()
###Output
_____no_output_____ |
tutorials/notebook/cx_site_chart_examples/violin_1.ipynb | ###Markdown
Example: CanvasXpress violin Chart No. 1 This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at: https://www.canvasxpress.org/examples/violin-1.html This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function. Everything required for the chart to render is included in the code below. Simply run the code block.
###Code
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="violin1",
data={
"y": {
"smps": [
"Var1",
"Var2",
"Var3",
"Var4",
"Var5",
"Var6",
"Var7",
"Var8",
"Var9",
"Var10",
"Var11",
"Var12",
"Var13",
"Var14",
"Var15",
"Var16",
"Var17",
"Var18",
"Var19",
"Var20",
"Var21",
"Var22",
"Var23",
"Var24",
"Var25",
"Var26",
"Var27",
"Var28",
"Var29",
"Var30",
"Var31",
"Var32",
"Var33",
"Var34",
"Var35",
"Var36",
"Var37",
"Var38",
"Var39",
"Var40",
"Var41",
"Var42",
"Var43",
"Var44",
"Var45",
"Var46",
"Var47",
"Var48",
"Var49",
"Var50",
"Var51",
"Var52",
"Var53",
"Var54",
"Var55",
"Var56",
"Var57",
"Var58",
"Var59",
"Var60"
],
"data": [
[
4.2,
11.5,
7.3,
5.8,
6.4,
10,
11.2,
11.2,
5.2,
7,
16.5,
16.5,
15.2,
17.3,
22.5,
17.3,
13.6,
14.5,
18.8,
15.5,
23.6,
18.5,
33.9,
25.5,
26.4,
32.5,
26.7,
21.5,
23.3,
29.5,
15.2,
21.5,
17.6,
9.7,
14.5,
10,
8.2,
9.4,
16.5,
9.7,
19.7,
23.3,
23.6,
26.4,
20,
25.2,
25.8,
21.2,
14.5,
27.3,
25.5,
26.4,
22.4,
24.5,
24.8,
30.9,
26.4,
27.3,
29.4,
23
]
],
"vars": [
"len"
]
},
"x": {
"supp": [
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ"
],
"order": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"dose": [
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2
]
}
},
config={
"axisAlgorithm": "rPretty",
"axisTickScaleFontFactor": 1.8,
"axisTitleFontStyle": "bold",
"axisTitleScaleFontFactor": 1.8,
"background": "white",
"backgroundType": "window",
"backgroundWindow": "#E5E5E5",
"graphOrientation": "vertical",
"graphType": "Boxplot",
"groupingFactors": [
"dose"
],
"guides": "solid",
"guidesColor": "white",
"showBoxplotIfViolin": False,
"showLegend": False,
"showViolinBoxplot": True,
"smpLabelRotate": 90,
"smpLabelScaleFontFactor": 1.8,
"smpTitle": "dose",
"smpTitleFontStyle": "bold",
"smpTitleScaleFontFactor": 1.8,
"theme": "CanvasXpress",
"title": "The Effect of Vitamin C on Tooth Growth in Guinea Pigs",
"violinScale": "area",
"xAxis2Show": False,
"xAxisMinorTicks": False,
"xAxisTickColor": "white",
"xAxisTitle": "len"
},
width=613,
height=613,
events=CXEvents(),
after_render=[],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="violin_1.html")
###Output
_____no_output_____ |
examples/04_Binary_Classification_Varying_Parameters.ipynb | ###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy, Recall
df = pd.read_csv('data/adult/adult.csv.zip')
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
#binary target
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although doing so is not needed to use the package)
###Code
wide_cols = ['education', 'relationship','workclass','occupation','native_country','gender']
crossed_cols = [('education', 'occupation'), ('native_country', 'occupation')]
cat_embed_cols = [('education',16), ('relationship',8), ('workclass',16), ('occupation',16),('native_country',16)]
continuous_cols = ["age","hours_per_week"]
target_col = 'income_label'
# TARGET
target = df[target_col].values
# WIDE
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = wide_preprocessor.fit_transform(df)
# DEEP
tab_preprocessor = TabPreprocessor(embed_cols=cat_embed_cols, continuous_cols=continuous_cols)
X_tab = tab_preprocessor.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_tab)
print(X_tab.shape)
###Output
[[ 1. 1. 1. ... 1. -0.99512893
-0.03408696]
[ 2. 2. 1. ... 1. -0.04694151
0.77292975]
[ 3. 2. 2. ... 1. -0.77631645
-0.03408696]
...
[ 2. 4. 1. ... 1. 1.41180837
-0.03408696]
[ 2. 1. 1. ... 1. -1.21394141
-1.64812038]
[ 2. 5. 7. ... 1. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
?TabMlp
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)
# We can add dropout and batchnorm to the dense layers, as well as chose the order of the operations
deeptabular = TabMlp(column_idx=tab_preprocessor.column_idx,
mlp_hidden_dims=[64,32],
mlp_dropout=[0.5, 0.5],
mlp_batchnorm=True,
mlp_linear_first = True,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=continuous_cols)
model = WideDeep(wide=wide, deeptabular=deeptabular)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.03)
deep_opt = RAdam(model.deeptabular.parameters(), lr=0.01)
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
###Markdown
the component-dependent settings must be passed as dictionaries, while general settings are simply lists
###Code
# Component-dependent settings as Dict
optimizers = {'wide': wide_opt, 'deeptabular':deep_opt}
schedulers = {'wide': wide_sch, 'deeptabular':deep_sch}
initializers = {'wide': KaimingNormal, 'deeptabular':XavierNormal}
# General settings as List
callbacks = [LRHistory(n_epochs=10), EarlyStopping, ModelCheckpoint(filepath='model_weights/wd_out')]
metrics = [Accuracy, Recall]
trainer = Trainer(model,
objective='binary',
optimizers=optimizers,
lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics
)
trainer.fit(X_wide=X_wide, X_tab=X_tab, target=target, n_epochs=10, batch_size=256, val_split=0.2)
###Output
epoch 1: 100%|██████████| 153/153 [00:03<00:00, 42.78it/s, loss=0.562, metrics={'acc': 0.7779, 'rec': 0.488}]
valid: 100%|██████████| 39/39 [00:00<00:00, 54.81it/s, loss=0.374, metrics={'acc': 0.8363, 'rec': 0.5684}]
epoch 2: 100%|██████████| 153/153 [00:03<00:00, 44.03it/s, loss=0.373, metrics={'acc': 0.8277, 'rec': 0.5535}]
valid: 100%|██████████| 39/39 [00:00<00:00, 108.54it/s, loss=0.359, metrics={'acc': 0.8361, 'rec': 0.5915}]
epoch 3: 100%|██████████| 153/153 [00:03<00:00, 41.40it/s, loss=0.354, metrics={'acc': 0.8354, 'rec': 0.5686}]
valid: 100%|██████████| 39/39 [00:00<00:00, 100.84it/s, loss=0.355, metrics={'acc': 0.8378, 'rec': 0.5346}]
epoch 4: 100%|██████████| 153/153 [00:03<00:00, 43.49it/s, loss=0.346, metrics={'acc': 0.8381, 'rec': 0.5653}]
valid: 100%|██████████| 39/39 [00:00<00:00, 117.29it/s, loss=0.352, metrics={'acc': 0.8388, 'rec': 0.5633}]
epoch 5: 100%|██████████| 153/153 [00:03<00:00, 39.83it/s, loss=0.343, metrics={'acc': 0.8396, 'rec': 0.5669}]
valid: 100%|██████████| 39/39 [00:00<00:00, 115.86it/s, loss=0.351, metrics={'acc': 0.8388, 'rec': 0.6074}]
epoch 6: 100%|██████████| 153/153 [00:03<00:00, 41.32it/s, loss=0.342, metrics={'acc': 0.8406, 'rec': 0.5758}]
valid: 100%|██████████| 39/39 [00:00<00:00, 110.53it/s, loss=0.35, metrics={'acc': 0.84, 'rec': 0.5834}]
epoch 7: 100%|██████████| 153/153 [00:03<00:00, 40.08it/s, loss=0.341, metrics={'acc': 0.8407, 'rec': 0.5664}]
valid: 100%|██████████| 39/39 [00:00<00:00, 108.04it/s, loss=0.35, metrics={'acc': 0.8399, 'rec': 0.5924}]
epoch 8: 100%|██████████| 153/153 [00:03<00:00, 40.74it/s, loss=0.341, metrics={'acc': 0.8397, 'rec': 0.573}]
valid: 100%|██████████| 39/39 [00:00<00:00, 103.97it/s, loss=0.35, metrics={'acc': 0.8404, 'rec': 0.5881}]
epoch 9: 100%|██████████| 153/153 [00:03<00:00, 41.83it/s, loss=0.341, metrics={'acc': 0.8407, 'rec': 0.571}]
valid: 100%|██████████| 39/39 [00:00<00:00, 112.66it/s, loss=0.35, metrics={'acc': 0.8398, 'rec': 0.595}]
epoch 10: 100%|██████████| 153/153 [00:03<00:00, 41.73it/s, loss=0.341, metrics={'acc': 0.8404, 'rec': 0.5751}]
valid: 100%|██████████| 39/39 [00:00<00:00, 111.89it/s, loss=0.35, metrics={'acc': 0.8389, 'rec': 0.5787}]
###Markdown
You see that, among many methods and attributes we have the `history` and `lr_history` attributes
###Code
print(trainer.history)
print(trainer.lr_history)
###Output
{'lr_wide_0': [0.03, 0.03, 0.03, 0.003, 0.003, 0.003, 0.00030000000000000003, 0.00030000000000000003, 0.00030000000000000003, 3.0000000000000004e-05], 'lr_deeptabular_0': [0.01, 0.01, 0.01, 0.01, 0.01, 0.001, 0.001, 0.001, 0.001, 0.001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. And I guess one now has a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `Trainer` comes with what is perhaps a useful method that I intend to deprecate in favor of `Tab2Vec`. This method, called `get_embeddings`, is designed to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`
###Code
trainer.get_embeddings(col_name='education', cat_encoding_dict=tab_preprocessor.label_encoder.encoding_dict)
###Output
/Users/javier/Projects/pytorch-widedeep/pytorch_widedeep/training/trainer.py:794: DeprecationWarning: 'get_embeddings' will be deprecated in the next release. Please consider using 'Tab2vec' instead
DeprecationWarning,
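###Markdown
As a rough, hedged sketch of the `Tab2Vec` alternative mentioned above (the exact constructor and `transform` signatures may differ between releases, so treat the call below as an assumption rather than the definitive API):
###Code
# Hypothetical sketch: vectorize a few rows with Tab2Vec instead of get_embeddings.
# The constructor arguments and the transform() signature are assumptions.
from pytorch_widedeep import Tab2Vec

t2v = Tab2Vec(model=model, tab_preprocessor=tab_preprocessor)
X_vec = t2v.transform(df.sample(5))  # dense representation: embeddings + continuous cols
print(X_vec.shape)
###Output
_____no_output_____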
###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy, Recall
df = pd.read_csv("data/adult/adult.csv.zip")
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
# binary target
df["income_label"] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop("income", axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data. Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although this is not needed in order to use the package).
###Code
wide_cols = [
"education",
"relationship",
"workclass",
"occupation",
"native_country",
"gender",
]
crossed_cols = [("education", "occupation"), ("native_country", "occupation")]
cat_embed_cols = [
("education", 16),
("relationship", 8),
("workclass", 16),
("occupation", 16),
("native_country", 16),
]
continuous_cols = ["age", "hours_per_week"]
target_col = "income_label"
# TARGET
target = df[target_col].values
# WIDE
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = wide_preprocessor.fit_transform(df)
# DEEP
tab_preprocessor = TabPreprocessor(
embed_cols=cat_embed_cols, continuous_cols=continuous_cols
)
X_tab = tab_preprocessor.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_tab)
print(X_tab.shape)
###Output
[[ 1. 1. 1. ... 1. -0.99512893
-0.03408696]
[ 2. 2. 1. ... 1. -0.04694151
0.77292975]
[ 3. 2. 2. ... 1. -0.77631645
-0.03408696]
...
[ 2. 4. 1. ... 1. 1.41180837
-0.03408696]
[ 2. 1. 1. ... 1. -1.21394141
-1.64812038]
[ 2. 5. 7. ... 1. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
# ?TabMlp
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)
# We can add dropout and batchnorm to the dense layers, as well as choose the order of the operations
deeptabular = TabMlp(
column_idx=tab_preprocessor.column_idx,
mlp_hidden_dims=[64, 32],
mlp_dropout=[0.5, 0.5],
mlp_batchnorm=True,
mlp_linear_first=True,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=continuous_cols,
)
model = WideDeep(wide=wide, deeptabular=deeptabular)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model. Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.03)
deep_opt = RAdam(model.deeptabular.parameters(), lr=0.01)
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
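###Markdown
As a quick aside, `StepLR` multiplies the learning rate by its default `gamma=0.1` every `step_size` epochs, which is what produces the decay pattern recorded in `lr_history` later in this notebook. A minimal standalone check in plain PyTorch, using a dummy optimizer:
###Code
# Standalone check of StepLR's default decay factor (gamma=0.1)
dummy_opt = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=0.03)
dummy_sch = torch.optim.lr_scheduler.StepLR(dummy_opt, step_size=3)
for epoch in range(10):
    print(epoch, dummy_opt.param_groups[0]["lr"])
    dummy_opt.step()
    dummy_sch.step()
###Output
_____no_output_____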
###Markdown
The component-dependent settings must be passed as dictionaries, while general settings are simply lists.
###Code
# Component-dependent settings as Dict
optimizers = {"wide": wide_opt, "deeptabular": deep_opt}
schedulers = {"wide": wide_sch, "deeptabular": deep_sch}
initializers = {"wide": KaimingNormal, "deeptabular": XavierNormal}
# General settings as List
callbacks = [
LRHistory(n_epochs=10),
EarlyStopping,
ModelCheckpoint(filepath="model_weights/wd_out"),
]
metrics = [Accuracy, Recall]
trainer = Trainer(
model,
objective="binary",
optimizers=optimizers,
lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics,
)
trainer.fit(
X_wide=X_wide,
X_tab=X_tab,
target=target,
n_epochs=10,
batch_size=256,
val_split=0.2,
)
###Output
epoch 1: 100%|██████████| 153/153 [00:03<00:00, 42.78it/s, loss=0.562, metrics={'acc': 0.7779, 'rec': 0.488}]
valid: 100%|██████████| 39/39 [00:00<00:00, 54.81it/s, loss=0.374, metrics={'acc': 0.8363, 'rec': 0.5684}]
epoch 2: 100%|██████████| 153/153 [00:03<00:00, 44.03it/s, loss=0.373, metrics={'acc': 0.8277, 'rec': 0.5535}]
valid: 100%|██████████| 39/39 [00:00<00:00, 108.54it/s, loss=0.359, metrics={'acc': 0.8361, 'rec': 0.5915}]
epoch 3: 100%|██████████| 153/153 [00:03<00:00, 41.40it/s, loss=0.354, metrics={'acc': 0.8354, 'rec': 0.5686}]
valid: 100%|██████████| 39/39 [00:00<00:00, 100.84it/s, loss=0.355, metrics={'acc': 0.8378, 'rec': 0.5346}]
epoch 4: 100%|██████████| 153/153 [00:03<00:00, 43.49it/s, loss=0.346, metrics={'acc': 0.8381, 'rec': 0.5653}]
valid: 100%|██████████| 39/39 [00:00<00:00, 117.29it/s, loss=0.352, metrics={'acc': 0.8388, 'rec': 0.5633}]
epoch 5: 100%|██████████| 153/153 [00:03<00:00, 39.83it/s, loss=0.343, metrics={'acc': 0.8396, 'rec': 0.5669}]
valid: 100%|██████████| 39/39 [00:00<00:00, 115.86it/s, loss=0.351, metrics={'acc': 0.8388, 'rec': 0.6074}]
epoch 6: 100%|██████████| 153/153 [00:03<00:00, 41.32it/s, loss=0.342, metrics={'acc': 0.8406, 'rec': 0.5758}]
valid: 100%|██████████| 39/39 [00:00<00:00, 110.53it/s, loss=0.35, metrics={'acc': 0.84, 'rec': 0.5834}]
epoch 7: 100%|██████████| 153/153 [00:03<00:00, 40.08it/s, loss=0.341, metrics={'acc': 0.8407, 'rec': 0.5664}]
valid: 100%|██████████| 39/39 [00:00<00:00, 108.04it/s, loss=0.35, metrics={'acc': 0.8399, 'rec': 0.5924}]
epoch 8: 100%|██████████| 153/153 [00:03<00:00, 40.74it/s, loss=0.341, metrics={'acc': 0.8397, 'rec': 0.573}]
valid: 100%|██████████| 39/39 [00:00<00:00, 103.97it/s, loss=0.35, metrics={'acc': 0.8404, 'rec': 0.5881}]
epoch 9: 100%|██████████| 153/153 [00:03<00:00, 41.83it/s, loss=0.341, metrics={'acc': 0.8407, 'rec': 0.571}]
valid: 100%|██████████| 39/39 [00:00<00:00, 112.66it/s, loss=0.35, metrics={'acc': 0.8398, 'rec': 0.595}]
epoch 10: 100%|██████████| 153/153 [00:03<00:00, 41.73it/s, loss=0.341, metrics={'acc': 0.8404, 'rec': 0.5751}]
valid: 100%|██████████| 39/39 [00:00<00:00, 111.89it/s, loss=0.35, metrics={'acc': 0.8389, 'rec': 0.5787}]
###Markdown
You can see that, among the many methods and attributes, we have the `history` and `lr_history` attributes.
###Code
print(trainer.history)
print(trainer.lr_history)
###Output
{'lr_wide_0': [0.03, 0.03, 0.03, 0.003, 0.003, 0.003, 0.00030000000000000003, 0.00030000000000000003, 0.00030000000000000003, 3.0000000000000004e-05], 'lr_deeptabular_0': [0.01, 0.01, 0.01, 0.01, 0.01, 0.001, 0.001, 0.001, 0.001, 0.001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. By now you should have a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `WideDeep` class comes with what is perhaps a useful method, one that I intend to deprecate in favor of `Tab2Vec`. This method, called `get_embeddings`, is designed to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`.
###Code
trainer.get_embeddings(
col_name="education", cat_encoding_dict=tab_preprocessor.label_encoder.encoding_dict
)
###Output
/Users/javier/Projects/pytorch-widedeep/pytorch_widedeep/training/trainer.py:794: DeprecationWarning: 'get_embeddings' will be deprecated in the next release. Please consider using 'Tab2vec' instead
DeprecationWarning,
###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep.preprocessing import WidePreprocessor, DensePreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import Accuracy, Recall
df = pd.read_csv('data/adult/adult.csv.zip')
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
# binary target
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data. Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although this is not needed in order to use the package).
###Code
wide_cols = ['education', 'relationship','workclass','occupation','native_country','gender']
crossed_cols = [('education', 'occupation'), ('native_country', 'occupation')]
cat_embed_cols = [('education',16), ('relationship',8), ('workclass',16), ('occupation',16),('native_country',16)]
continuous_cols = ["age","hours_per_week"]
target_col = 'income_label'
# TARGET
target = df[target_col].values
# WIDE
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = preprocess_wide.fit_transform(df)
# DEEP
preprocess_deep = DensePreprocessor(embed_cols=cat_embed_cols, continuous_cols=continuous_cols)
X_deep = preprocess_deep.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_deep)
print(X_deep.shape)
###Output
[[ 0. 0. 0. ... 0. -0.99512893
-0.03408696]
[ 1. 1. 0. ... 0. -0.04694151
0.77292975]
[ 2. 1. 1. ... 0. -0.77631645
-0.03408696]
...
[ 1. 3. 0. ... 0. 1.41180837
-0.03408696]
[ 1. 0. 0. ... 0. -1.21394141
-1.64812038]
[ 1. 4. 6. ... 0. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)
# We can add dropout and batchnorm to the dense layers
deepdense = DeepDense(hidden_layers=[64,32], dropout=[0.5, 0.5], batchnorm=True,
deep_column_idx=preprocess_deep.deep_column_idx,
embed_input=preprocess_deep.embeddings_input,
continuous_cols=continuous_cols)
model = WideDeep(wide=wide, deepdense=deepdense)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model. Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.03)
deep_opt = RAdam(model.deepdense.parameters(), lr=0.01)
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
###Markdown
The component-dependent settings must be passed as dictionaries, while general settings are simply lists.
###Code
# Component-dependent settings as Dict
optimizers = {'wide': wide_opt, 'deepdense':deep_opt}
schedulers = {'wide': wide_sch, 'deepdense':deep_sch}
initializers = {'wide': KaimingNormal, 'deepdense':XavierNormal}
# General settings as List
callbacks = [LRHistory(n_epochs=10), EarlyStopping, ModelCheckpoint(filepath='model_weights/wd_out')]
metrics = [Accuracy, Recall]
model.compile(method='binary', optimizers=optimizers, lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics)
model.fit(X_wide=X_wide, X_deep=X_deep, target=target, n_epochs=10, batch_size=256, val_split=0.2)
dir(model)
###Output
_____no_output_____
###Markdown
You can see that, among the many methods and attributes, we have the `history` and `lr_history` attributes.
###Code
model.history.epoch
print(model.history._history)
print(model.lr_history)
###Output
{'lr_wide_0': [0.03, 0.03, 0.03, 0.003, 0.003, 0.003, 0.00030000000000000003, 0.00030000000000000003, 0.00030000000000000003, 3.0000000000000004e-05], 'lr_deepdense_0': [0.01, 0.01, 0.01, 0.01, 0.01, 0.001, 0.001, 0.001, 0.001, 0.001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. By now you should have a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `WideDeep` class comes with a useful method to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`.
###Code
model.get_embeddings(col_name='education', cat_encoding_dict=preprocess_deep.label_encoder.encoding_dict)
###Output
_____no_output_____
###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep.preprocessing import WidePreprocessor, DensePreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import Accuracy, Recall
df = pd.read_csv('data/adult/adult.csv.zip')
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
# binary target
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data. Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although this is not needed in order to use the package).
###Code
wide_cols = ['education', 'relationship','workclass','occupation','native_country','gender']
crossed_cols = [('education', 'occupation'), ('native_country', 'occupation')]
cat_embed_cols = [('education',16), ('relationship',8), ('workclass',16), ('occupation',16),('native_country',16)]
continuous_cols = ["age","hours_per_week"]
target_col = 'income_label'
# TARGET
target = df[target_col].values
# WIDE
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = preprocess_wide.fit_transform(df)
# DEEP
preprocess_deep = DensePreprocessor(embed_cols=cat_embed_cols, continuous_cols=continuous_cols)
X_deep = preprocess_deep.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_deep)
print(X_deep.shape)
###Output
[[ 0. 0. 0. ... 0. -0.99512893
-0.03408696]
[ 1. 1. 0. ... 0. -0.04694151
0.77292975]
[ 2. 1. 1. ... 0. -0.77631645
-0.03408696]
...
[ 1. 3. 0. ... 0. 1.41180837
-0.03408696]
[ 1. 0. 0. ... 0. -1.21394141
-1.64812038]
[ 1. 4. 6. ... 0. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
wide = Wide(wide_dim=X_wide.shape[1], pred_dim=1)
# We can add dropout and batchnorm to the dense layers
deepdense = DeepDense(hidden_layers=[64,32], dropout=[0.5, 0.5], batchnorm=True,
deep_column_idx=preprocess_deep.deep_column_idx,
embed_input=preprocess_deep.embeddings_input,
continuous_cols=continuous_cols)
model = WideDeep(wide=wide, deepdense=deepdense)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model. Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters())
deep_opt = RAdam(model.deepdense.parameters())
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
###Markdown
The component-dependent settings must be passed as dictionaries, while general settings are simply lists.
###Code
# Component-dependent settings as Dict
optimizers = {'wide': wide_opt, 'deepdense':deep_opt}
schedulers = {'wide': wide_sch, 'deepdense':deep_sch}
initializers = {'wide': KaimingNormal, 'deepdense':XavierNormal}
# General settings as List
callbacks = [LRHistory(n_epochs=10), EarlyStopping, ModelCheckpoint(filepath='model_weights/wd_out')]
metrics = [Accuracy, Recall]
model.compile(method='binary', optimizers=optimizers, lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics)
model.fit(X_wide=X_wide, X_deep=X_deep, target=target, n_epochs=10, batch_size=256, val_split=0.2)
dir(model)
###Output
_____no_output_____
###Markdown
You can see that, among the many methods and attributes, we have the `history` and `lr_history` attributes.
###Code
model.history.epoch
print(model.history._history)
print(model.lr_history)
###Output
{'lr_wide_0': [0.001, 0.001, 0.001, 0.0001, 0.0001, 0.0001, 1e-05, 1e-05, 1e-05, 1.0000000000000002e-06], 'lr_deepdense_0': [0.001, 0.001, 0.001, 0.001, 0.001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. By now you should have a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `WideDeep` class comes with a useful method to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`.
###Code
model.get_embeddings(col_name='education', cat_encoding_dict=preprocess_deep.label_encoder.encoding_dict)
###Output
_____no_output_____
###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy, Recall
df = pd.read_csv('data/adult/adult.csv.zip')
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
#binary target
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data. Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although this is not needed in order to use the package).
###Code
wide_cols = ['education', 'relationship','workclass','occupation','native_country','gender']
crossed_cols = [('education', 'occupation'), ('native_country', 'occupation')]
cat_embed_cols = [('education',16), ('relationship',8), ('workclass',16), ('occupation',16),('native_country',16)]
continuous_cols = ["age","hours_per_week"]
target_col = 'income_label'
# TARGET
target = df[target_col].values
# WIDE
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = wide_preprocessor.fit_transform(df)
# DEEP
tab_preprocessor = TabPreprocessor(embed_cols=cat_embed_cols, continuous_cols=continuous_cols)
X_tab = tab_preprocessor.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_tab)
print(X_tab.shape)
###Output
[[ 1. 1. 1. ... 1. -0.99512893
-0.03408696]
[ 2. 2. 1. ... 1. -0.04694151
0.77292975]
[ 3. 2. 2. ... 1. -0.77631645
-0.03408696]
...
[ 2. 4. 1. ... 1. 1.41180837
-0.03408696]
[ 2. 1. 1. ... 1. -1.21394141
-1.64812038]
[ 2. 5. 7. ... 1. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
?TabMlp
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)
# We can add dropout and batchnorm to the dense layers, as well as choose the order of the operations
deeptabular = TabMlp(column_idx=tab_preprocessor.column_idx,
mlp_hidden_dims=[64,32],
mlp_dropout=[0.5, 0.5],
mlp_batchnorm=True,
mlp_linear_first = True,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=continuous_cols)
model = WideDeep(wide=wide, deeptabular=deeptabular)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model. Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.03)
deep_opt = RAdam(model.deeptabular.parameters(), lr=0.01)
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
###Markdown
The component-dependent settings must be passed as dictionaries, while general settings are simply lists.
###Code
# Component-dependent settings as Dict
optimizers = {'wide': wide_opt, 'deeptabular':deep_opt}
schedulers = {'wide': wide_sch, 'deeptabular':deep_sch}
initializers = {'wide': KaimingNormal, 'deeptabular':XavierNormal}
# General settings as List
callbacks = [LRHistory(n_epochs=10), EarlyStopping, ModelCheckpoint(filepath='model_weights/wd_out')]
metrics = [Accuracy, Recall]
trainer = Trainer(model,
objective='binary',
optimizers=optimizers,
lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics
)
trainer.fit(X_wide=X_wide, X_tab=X_tab, target=target, n_epochs=10, batch_size=256, val_split=0.2)
###Output
epoch 1: 100%|██████████| 153/153 [00:03<00:00, 40.76it/s, loss=0.605, metrics={'acc': 0.7653, 'rec': 0.5005}]
valid: 100%|██████████| 39/39 [00:00<00:00, 70.56it/s, loss=0.37, metrics={'acc': 0.8295, 'rec': 0.5646}]
epoch 2: 100%|██████████| 153/153 [00:03<00:00, 42.82it/s, loss=0.37, metrics={'acc': 0.8298, 'rec': 0.5627}]
valid: 100%|██████████| 39/39 [00:00<00:00, 116.22it/s, loss=0.355, metrics={'acc': 0.8372, 'rec': 0.6206}]
epoch 3: 100%|██████████| 153/153 [00:03<00:00, 41.82it/s, loss=0.354, metrics={'acc': 0.8338, 'rec': 0.5612}]
valid: 100%|██████████| 39/39 [00:00<00:00, 116.42it/s, loss=0.35, metrics={'acc': 0.8395, 'rec': 0.5804}]
epoch 4: 100%|██████████| 153/153 [00:03<00:00, 42.66it/s, loss=0.345, metrics={'acc': 0.8382, 'rec': 0.5658}]
valid: 100%|██████████| 39/39 [00:00<00:00, 115.17it/s, loss=0.35, metrics={'acc': 0.8379, 'rec': 0.6048}]
epoch 5: 100%|██████████| 153/153 [00:03<00:00, 42.11it/s, loss=0.343, metrics={'acc': 0.8391, 'rec': 0.5681}]
valid: 100%|██████████| 39/39 [00:00<00:00, 115.60it/s, loss=0.347, metrics={'acc': 0.84, 'rec': 0.595}]
epoch 6: 100%|██████████| 153/153 [00:03<00:00, 41.32it/s, loss=0.341, metrics={'acc': 0.8398, 'rec': 0.5748}]
valid: 100%|██████████| 39/39 [00:00<00:00, 109.95it/s, loss=0.347, metrics={'acc': 0.8404, 'rec': 0.5855}]
epoch 7: 100%|██████████| 153/153 [00:03<00:00, 41.79it/s, loss=0.34, metrics={'acc': 0.8413, 'rec': 0.5746}]
valid: 100%|██████████| 39/39 [00:00<00:00, 108.11it/s, loss=0.347, metrics={'acc': 0.8395, 'rec': 0.5898}]
epoch 8: 100%|██████████| 153/153 [00:03<00:00, 41.09it/s, loss=0.341, metrics={'acc': 0.8395, 'rec': 0.5744}]
valid: 100%|██████████| 39/39 [00:00<00:00, 99.26it/s, loss=0.347, metrics={'acc': 0.8404, 'rec': 0.5877}]
epoch 9: 100%|██████████| 153/153 [00:03<00:00, 41.33it/s, loss=0.34, metrics={'acc': 0.8409, 'rec': 0.573}]
valid: 100%|██████████| 39/39 [00:00<00:00, 108.59it/s, loss=0.347, metrics={'acc': 0.8399, 'rec': 0.5778}]
epoch 10: 100%|██████████| 153/153 [00:03<00:00, 40.06it/s, loss=0.34, metrics={'acc': 0.8413, 'rec': 0.5718}]
valid: 100%|██████████| 39/39 [00:00<00:00, 104.13it/s, loss=0.347, metrics={'acc': 0.8395, 'rec': 0.577}]
###Markdown
You can see that, among the many methods and attributes, we have the `history` and `lr_history` attributes.
###Code
print(trainer.history)
print(trainer.lr_history)
###Output
{'lr_wide_0': [0.03, 0.03, 0.03, 0.003, 0.003, 0.003, 0.00030000000000000003, 0.00030000000000000003, 0.00030000000000000003, 3.0000000000000004e-05], 'lr_deeptabular_0': [0.01, 0.01, 0.01, 0.01, 0.01, 0.001, 0.001, 0.001, 0.001, 0.001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. By now you should have a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `WideDeep` class comes with a useful method to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`.
###Code
trainer.get_embeddings(col_name='education', cat_encoding_dict=tab_preprocessor.label_encoder.encoding_dict)
###Output
_____no_output_____
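###Markdown
Finally, a short sketch of inference with the fitted `Trainer` (assuming `predict` mirrors the keyword arguments of `fit`; treat the exact signature as an assumption rather than the definitive API):
###Code
# Hedged sketch: class predictions from the trained model; the keyword arguments
# are assumed to mirror those of trainer.fit
preds = trainer.predict(X_wide=X_wide, X_tab=X_tab)
print(preds[:10])
###Output
_____no_output_____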
###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy, Recall
df = pd.read_csv('data/adult/adult.csv.zip')
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
#binary target
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data. Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although this is not needed in order to use the package).
###Code
wide_cols = ['education', 'relationship','workclass','occupation','native_country','gender']
crossed_cols = [('education', 'occupation'), ('native_country', 'occupation')]
cat_embed_cols = [('education',16), ('relationship',8), ('workclass',16), ('occupation',16),('native_country',16)]
continuous_cols = ["age","hours_per_week"]
target_col = 'income_label'
# TARGET
target = df[target_col].values
# WIDE
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = wide_preprocessor.fit_transform(df)
# DEEP
tab_preprocessor = TabPreprocessor(embed_cols=cat_embed_cols, continuous_cols=continuous_cols)
X_tab = tab_preprocessor.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_tab)
print(X_tab.shape)
###Output
[[ 1. 1. 1. ... 1. -0.99512893
-0.03408696]
[ 2. 2. 1. ... 1. -0.04694151
0.77292975]
[ 3. 2. 2. ... 1. -0.77631645
-0.03408696]
...
[ 2. 4. 1. ... 1. 1.41180837
-0.03408696]
[ 2. 1. 1. ... 1. -1.21394141
-1.64812038]
[ 2. 5. 7. ... 1. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
?TabMlp
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)
# We can add dropout and batchnorm to the dense layers, as well as choose the order of the operations
deeptabular = TabMlp(column_idx=tab_preprocessor.column_idx,
mlp_hidden_dims=[64,32],
mlp_dropout=[0.5, 0.5],
mlp_batchnorm=True,
mlp_linear_first = True,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=continuous_cols)
model = WideDeep(wide=wide, deeptabular=deeptabular)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model. Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.03)
deep_opt = RAdam(model.deeptabular.parameters(), lr=0.01)
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
###Markdown
The component-dependent settings must be passed as dictionaries, while general settings are simply lists.
###Code
# Component-dependent settings as Dict
optimizers = {'wide': wide_opt, 'deeptabular':deep_opt}
schedulers = {'wide': wide_sch, 'deeptabular':deep_sch}
initializers = {'wide': KaimingNormal, 'deeptabular':XavierNormal}
# General settings as List
callbacks = [LRHistory(n_epochs=10), EarlyStopping, ModelCheckpoint(filepath='model_weights/wd_out')]
metrics = [Accuracy, Recall]
trainer = Trainer(model,
objective='binary',
optimizers=optimizers,
lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics
)
trainer.fit(X_wide=X_wide, X_tab=X_tab, target=target, n_epochs=10, batch_size=256, val_split=0.2)
###Output
epoch 1: 100%|██████████| 153/153 [00:03<00:00, 47.06it/s, loss=0.667, metrics={'acc': 0.7471, 'rec': 0.4645}]
valid: 100%|██████████| 39/39 [00:00<00:00, 109.05it/s, loss=0.384, metrics={'acc': 0.8328, 'rec': 0.5701}]
epoch 2: 100%|██████████| 153/153 [00:03<00:00, 47.77it/s, loss=0.384, metrics={'acc': 0.8241, 'rec': 0.56}]
valid: 100%|██████████| 39/39 [00:00<00:00, 103.34it/s, loss=0.363, metrics={'acc': 0.8354, 'rec': 0.5838}]
epoch 3: 100%|██████████| 153/153 [00:02<00:00, 51.60it/s, loss=0.359, metrics={'acc': 0.8338, 'rec': 0.5657}]
valid: 100%|██████████| 39/39 [00:00<00:00, 116.99it/s, loss=0.357, metrics={'acc': 0.8365, 'rec': 0.5719}]
epoch 4: 100%|██████████| 153/153 [00:03<00:00, 50.11it/s, loss=0.349, metrics={'acc': 0.8376, 'rec': 0.5608}]
valid: 100%|██████████| 39/39 [00:00<00:00, 114.85it/s, loss=0.355, metrics={'acc': 0.8374, 'rec': 0.595}]
epoch 5: 100%|██████████| 153/153 [00:03<00:00, 45.94it/s, loss=0.347, metrics={'acc': 0.8377, 'rec': 0.5624}]
valid: 100%|██████████| 39/39 [00:00<00:00, 119.71it/s, loss=0.355, metrics={'acc': 0.8384, 'rec': 0.6091}]
epoch 6: 100%|██████████| 153/153 [00:03<00:00, 47.17it/s, loss=0.346, metrics={'acc': 0.8377, 'rec': 0.5655}]
valid: 100%|██████████| 39/39 [00:00<00:00, 122.04it/s, loss=0.352, metrics={'acc': 0.8403, 'rec': 0.589}]
epoch 7: 100%|██████████| 153/153 [00:03<00:00, 50.84it/s, loss=0.344, metrics={'acc': 0.8402, 'rec': 0.573}]
valid: 100%|██████████| 39/39 [00:00<00:00, 122.60it/s, loss=0.352, metrics={'acc': 0.8394, 'rec': 0.5796}]
epoch 8: 100%|██████████| 153/153 [00:03<00:00, 47.53it/s, loss=0.343, metrics={'acc': 0.8393, 'rec': 0.5696}]
valid: 100%|██████████| 39/39 [00:00<00:00, 112.23it/s, loss=0.352, metrics={'acc': 0.839, 'rec': 0.5808}]
epoch 9: 100%|██████████| 153/153 [00:03<00:00, 45.08it/s, loss=0.343, metrics={'acc': 0.8405, 'rec': 0.5746}]
valid: 100%|██████████| 39/39 [00:00<00:00, 81.40it/s, loss=0.351, metrics={'acc': 0.839, 'rec': 0.586}]
epoch 10: 100%|██████████| 153/153 [00:04<00:00, 34.82it/s, loss=0.343, metrics={'acc': 0.8408, 'rec': 0.5745}]
valid: 100%|██████████| 39/39 [00:00<00:00, 96.99it/s, loss=0.351, metrics={'acc': 0.8387, 'rec': 0.5761}]
###Markdown
You can see that, among the many methods and attributes, we have the `history` and `lr_history` attributes.
###Code
print(trainer.history)
print(trainer.lr_history)
###Output
{'lr_wide_0': [0.03, 0.03, 0.03, 0.003, 0.003, 0.003, 0.00030000000000000003, 0.00030000000000000003, 0.00030000000000000003, 3.0000000000000004e-05], 'lr_deeptabular_0': [0.01, 0.01, 0.01, 0.01, 0.01, 0.001, 0.001, 0.001, 0.001, 0.001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. By now you should have a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `WideDeep` class comes with a useful method to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`.
###Code
trainer.get_embeddings(col_name='education', cat_encoding_dict=tab_preprocessor.label_encoder.encoding_dict)
###Output
_____no_output_____
###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep.preprocessing import WidePreprocessor, DeepPreprocessor
from pytorch_widedeep.models import Wide, DeepDense, WideDeep
from pytorch_widedeep.metrics import BinaryAccuracy
df = pd.read_csv('data/adult/adult.csv.zip')
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
# binary target
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data. Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although this is not needed in order to use the package).
###Code
wide_cols = ['education', 'relationship','workclass','occupation','native_country','gender']
crossed_cols = [('education', 'occupation'), ('native_country', 'occupation')]
cat_embed_cols = [('education',16), ('relationship',8), ('workclass',16), ('occupation',16),('native_country',16)]
continuous_cols = ["age","hours_per_week"]
target_col = 'income_label'
# TARGET
target = df[target_col].values
# WIDE
preprocess_wide = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = preprocess_wide.fit_transform(df)
# DEEP
preprocess_deep = DeepPreprocessor(embed_cols=cat_embed_cols, continuous_cols=continuous_cols)
X_deep = preprocess_deep.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_deep)
print(X_deep.shape)
###Output
[[ 0. 0. 0. ... 0. -0.99512893
-0.03408696]
[ 1. 1. 0. ... 0. -0.04694151
0.77292975]
[ 2. 1. 1. ... 0. -0.77631645
-0.03408696]
...
[ 1. 3. 0. ... 0. 1.41180837
-0.03408696]
[ 1. 0. 0. ... 0. -1.21394141
-1.64812038]
[ 1. 4. 6. ... 0. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
wide = Wide(wide_dim=X_wide.shape[1], output_dim=1)
# We can add dropout and batchnorm to the dense layers
deepdense = DeepDense(hidden_layers=[64,32], dropout=[0.5, 0.5], batchnorm=True,
deep_column_idx=preprocess_deep.deep_column_idx,
embed_input=preprocess_deep.embeddings_input,
continuous_cols=continuous_cols)
model = WideDeep(wide=wide, deepdense=deepdense)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model. Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters())
deep_opt = RAdam(model.deepdense.parameters())
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
###Markdown
The component-dependent settings must be passed as dictionaries, while general settings are simply lists.
###Code
# Component-dependent settings as Dict
optimizers = {'wide': wide_opt, 'deepdense':deep_opt}
schedulers = {'wide': wide_sch, 'deepdense':deep_sch}
initializers = {'wide': KaimingNormal, 'deepdense':XavierNormal}
# General settings as List
callbacks = [LRHistory(n_epochs=10), EarlyStopping, ModelCheckpoint(filepath='model_weights/wd_out')]
metrics = [BinaryAccuracy]
model.compile(method='binary', optimizers=optimizers, lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics)
model.fit(X_wide=X_wide, X_deep=X_deep, target=target, n_epochs=10, batch_size=256, val_split=0.2)
dir(model)
###Output
_____no_output_____
###Markdown
You can see that, among the many methods and attributes, we have the `history` and `lr_history` attributes.
###Code
model.history.epoch
print(model.history._history)
print(model.lr_history)
###Output
{'lr_wide_0': [0.001, 0.001, 0.001, 0.0001, 0.0001, 0.0001, 1.0000000000000003e-05, 1.0000000000000003e-05, 1.0000000000000003e-05, 1.0000000000000002e-06], 'lr_deepdense_0': [0.001, 0.001, 0.001, 0.001, 0.001, 0.0001, 0.0001, 0.0001, 0.0001, 0.0001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. By now you should have a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `WideDeep` class comes with a useful method to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`.
###Code
model.get_embeddings(col_name='education', cat_encoding_dict=preprocess_deep.encoding_dict)
###Output
_____no_output_____
###Markdown
Binary Classification with different optimizers, schedulers, etc. In this notebook we will use the Adult Census dataset. Download the data from [here](https://www.kaggle.com/wenruliu/adult-income-dataset/downloads/adult.csv/2).
###Code
import numpy as np
import pandas as pd
import torch
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import WidePreprocessor, TabPreprocessor
from pytorch_widedeep.models import Wide, TabMlp, WideDeep
from pytorch_widedeep.metrics import Accuracy, Recall
df = pd.read_csv('data/adult/adult.csv.zip')
df.head()
# For convenience, we'll replace '-' with '_'
df.columns = [c.replace("-", "_") for c in df.columns]
# binary target
df['income_label'] = (df["income"].apply(lambda x: ">50K" in x)).astype(int)
df.drop('income', axis=1, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the data. Have a look at notebooks one and two if you want to get a good understanding of the next few lines of code (although this is not needed in order to use the package).
###Code
wide_cols = ['education', 'relationship','workclass','occupation','native_country','gender']
crossed_cols = [('education', 'occupation'), ('native_country', 'occupation')]
cat_embed_cols = [('education',16), ('relationship',8), ('workclass',16), ('occupation',16),('native_country',16)]
continuous_cols = ["age","hours_per_week"]
target_col = 'income_label'
# TARGET
target = df[target_col].values
# WIDE
wide_preprocessor = WidePreprocessor(wide_cols=wide_cols, crossed_cols=crossed_cols)
X_wide = wide_preprocessor.fit_transform(df)
# DEEP
tab_preprocessor = TabPreprocessor(embed_cols=cat_embed_cols, continuous_cols=continuous_cols)
X_tab = tab_preprocessor.fit_transform(df)
print(X_wide)
print(X_wide.shape)
print(X_tab)
print(X_tab.shape)
###Output
[[ 1. 1. 1. ... 1. -0.99512893
-0.03408696]
[ 2. 2. 1. ... 1. -0.04694151
0.77292975]
[ 3. 2. 2. ... 1. -0.77631645
-0.03408696]
...
[ 2. 4. 1. ... 1. 1.41180837
-0.03408696]
[ 2. 1. 1. ... 1. -1.21394141
-1.64812038]
[ 2. 5. 7. ... 1. 0.97418341
-0.03408696]]
(48842, 7)
###Markdown
As you can see, you can run a wide and deep model in just a few lines of code. Let's now see how to use `WideDeep` with varying parameters. 2.1 Dropout and Batchnorm
###Code
?TabMlp
wide = Wide(wide_dim=np.unique(X_wide).shape[0], pred_dim=1)
# We can add dropout and batchnorm to the dense layers, as well as choose the order of the operations
deeptabular = TabMlp(column_idx=tab_preprocessor.column_idx,
mlp_hidden_dims=[64,32],
mlp_dropout=[0.5, 0.5],
mlp_batchnorm=True,
mlp_linear_first = True,
embed_input=tab_preprocessor.embeddings_input,
continuous_cols=continuous_cols)
model = WideDeep(wide=wide, deeptabular=deeptabular)
model
###Output
_____no_output_____
###Markdown
We can use different initializers, optimizers and learning rate schedulers for each `branch` of the model. Optimizers, LR schedulers, Initializers and Callbacks
###Code
from pytorch_widedeep.initializers import KaimingNormal, XavierNormal
from pytorch_widedeep.callbacks import ModelCheckpoint, LRHistory, EarlyStopping
from pytorch_widedeep.optim import RAdam
# Optimizers
wide_opt = torch.optim.Adam(model.wide.parameters(), lr=0.03)
deep_opt = RAdam(model.deeptabular.parameters(), lr=0.01)
# LR Schedulers
wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3)
deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5)
###Output
_____no_output_____
###Markdown
The component-dependent settings must be passed as dictionaries, while general settings are simply lists.
###Code
# Component-dependent settings as Dict
optimizers = {'wide': wide_opt, 'deeptabular':deep_opt}
schedulers = {'wide': wide_sch, 'deeptabular':deep_sch}
initializers = {'wide': KaimingNormal, 'deeptabular':XavierNormal}
# General settings as List
callbacks = [LRHistory(n_epochs=10), EarlyStopping, ModelCheckpoint(filepath='model_weights/wd_out')]
metrics = [Accuracy, Recall]
trainer = Trainer(model,
objective='binary',
optimizers=optimizers,
lr_schedulers=schedulers,
initializers=initializers,
callbacks=callbacks,
metrics=metrics
)
trainer.fit(X_wide=X_wide, X_tab=X_tab, target=target, n_epochs=10, batch_size=256, val_split=0.2)
###Output
epoch 1: 100%|██████████| 153/153 [00:03<00:00, 46.93it/s, loss=0.597, metrics={'acc': 0.7751, 'rec': 0.4646}]
valid: 100%|██████████| 39/39 [00:00<00:00, 115.54it/s, loss=0.365, metrics={'acc': 0.7871, 'rec': 0.4839}]
epoch 2: 100%|██████████| 153/153 [00:03<00:00, 48.61it/s, loss=0.373, metrics={'acc': 0.8258, 'rec': 0.5525}]
valid: 100%|██████████| 39/39 [00:00<00:00, 126.36it/s, loss=0.354, metrics={'acc': 0.8282, 'rec': 0.5622}]
epoch 3: 100%|██████████| 153/153 [00:03<00:00, 46.11it/s, loss=0.356, metrics={'acc': 0.8329, 'rec': 0.5595}]
valid: 100%|██████████| 39/39 [00:00<00:00, 114.20it/s, loss=0.351, metrics={'acc': 0.8343, 'rec': 0.5672}]
epoch 4: 100%|██████████| 153/153 [00:03<00:00, 45.97it/s, loss=0.346, metrics={'acc': 0.8371, 'rec': 0.574}]
valid: 100%|██████████| 39/39 [00:00<00:00, 107.73it/s, loss=0.349, metrics={'acc': 0.8374, 'rec': 0.5691}]
epoch 5: 100%|██████████| 153/153 [00:03<00:00, 46.22it/s, loss=0.345, metrics={'acc': 0.8384, 'rec': 0.571}]
valid: 100%|██████████| 39/39 [00:00<00:00, 115.02it/s, loss=0.348, metrics={'acc': 0.8387, 'rec': 0.567}]
epoch 6: 100%|██████████| 153/153 [00:03<00:00, 46.08it/s, loss=0.344, metrics={'acc': 0.8397, 'rec': 0.5702}]
valid: 100%|██████████| 39/39 [00:00<00:00, 114.94it/s, loss=0.347, metrics={'acc': 0.8398, 'rec': 0.5666}]
epoch 7: 100%|██████████| 153/153 [00:03<00:00, 48.03it/s, loss=0.342, metrics={'acc': 0.8404, 'rec': 0.5692}]
valid: 100%|██████████| 39/39 [00:00<00:00, 120.91it/s, loss=0.347, metrics={'acc': 0.8405, 'rec': 0.5672}]
epoch 8: 100%|██████████| 153/153 [00:03<00:00, 46.60it/s, loss=0.342, metrics={'acc': 0.8408, 'rec': 0.573}]
valid: 100%|██████████| 39/39 [00:00<00:00, 117.20it/s, loss=0.347, metrics={'acc': 0.8408, 'rec': 0.5705}]
epoch 9: 100%|██████████| 153/153 [00:03<00:00, 47.54it/s, loss=0.34, metrics={'acc': 0.8417, 'rec': 0.5744}]
valid: 100%|██████████| 39/39 [00:00<00:00, 116.07it/s, loss=0.346, metrics={'acc': 0.8419, 'rec': 0.5733}]
epoch 10: 100%|██████████| 153/153 [00:03<00:00, 47.99it/s, loss=0.341, metrics={'acc': 0.8413, 'rec': 0.5786}]
valid: 100%|██████████| 39/39 [00:00<00:00, 112.61it/s, loss=0.346, metrics={'acc': 0.8416, 'rec': 0.5763}]
###Markdown
You can see that, among the many methods and attributes, we have the `history` and `lr_history` attributes.
###Code
print(trainer.history)
print(trainer.lr_history)
###Output
{'lr_wide_0': [0.03, 0.03, 0.03, 0.003, 0.003, 0.003, 0.00030000000000000003, 0.00030000000000000003, 0.00030000000000000003, 3.0000000000000004e-05], 'lr_deeptabular_0': [0.01, 0.01, 0.01, 0.01, 0.01, 0.001, 0.001, 0.001, 0.001, 0.001]}
###Markdown
We can see that the learning rate effectively decreases by a factor of 0.1 (the default) after the corresponding `step_size`. Note that the keys of the dictionary have a suffix `_0`. This is because if you pass different parameter groups to the torch optimizers, these will also be recorded. We'll see this in the `Regression` notebook. By now you should have a good idea of how to use the package. Before we leave this notebook, it is worth mentioning that the `WideDeep` class comes with a useful method to "rescue" the learned embeddings. For example, let's say I want to use the embeddings learned for the different levels of the categorical feature `education`.
###Code
trainer.get_embeddings(col_name='education', cat_encoding_dict=tab_preprocessor.label_encoder.encoding_dict)
###Output
_____no_output_____ |
2019_06_03/Style_Transform.ipynb | ###Markdown
Define the image input and output
###Code
def image_loader(image_name,imsize):
"""图片load函数
"""
# 转换图片大小
loader = transforms.Compose([
transforms.Resize(imsize), # scale imported image
transforms.ToTensor()]) # transform it into a torch tensor
image = Image.open(image_name)
# fake batch dimension required to fit network's input dimensions
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
def image_util(img_size=512,style_img="./images/picasso.jpg", content_img="./images/dancing.jpg"):
"""返回style_image和content_image
需要保证两张图片的大小是一样的
"""
imsize = img_size if torch.cuda.is_available() else 128 # use small size if no gpu
# Load the images (use the smaller fallback size when no GPU is available)
style_img = image_loader(image_name=style_img, imsize=imsize)
content_img = image_loader(image_name=content_img, imsize=imsize)
# Check that both images loaded with the same size
print("Style Image Size:{}".format(style_img.size()))
print("Content Image Size:{}".format(content_img.size()))
assert style_img.size() == content_img.size(), \
"we need to import style and content images of the same size"
return style_img, content_img
###Output
_____no_output_____
###Markdown
Define the Content Loss
###Code
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# we 'detach' the target content from the tree used
# to dynamically compute the gradient: this is a stated value,
# not a variable. Otherwise the forward method of the criterion
# will throw an error.
self.target = target.detach()
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
###Output
_____no_output_____
###Markdown
Define the Style Loss
###Code
# First, define the Gram matrix
def gram_matrix(input):
a, b, c, d = input.size() # a=batch size(=1)
# b=number of feature maps
# (c,d)=dimensions of a f. map (N=c*d)
features = input.view(a * b, c * d)  # reshape F_XL into \hat F_XL
G = torch.mm(features, features.t()) # compute the gram product
# print(G)
# Normalize the Gram matrix by dividing by the total number of elements
return G.div(a * b * c * d)
x_input = torch.from_numpy(np.array([[[[1,2],[3,4]],[[5,6],[7,8]],[[9,10],[11,12]]]])).float()
x_input.size()
gram_matrix(x_input)
# Now we can define the style loss
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
###Output
_____no_output_____
###Markdown
Modifications based on the pretrained VGG-19 network
###Code
# -------------------
# Model input normalization
# Because the original VGG network normalizes its input images, the Normalization module below must be placed as the first layer of the new network
# -------------------
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view the mean and std to make them [C x 1 x 1] so that they can
# directly work with image Tensor of shape [B x C x H x W].
# B is batch size. C is number of channels. H is height and W is width.
self.mean = mean.view(-1, 1, 1)
self.std = std.view(-1, 1, 1)
def forward(self, img):
# normalize img
return (img - self.mean) / self.std
# --------------------------------
# Modify the network structure to build the style-transfer network
# --------------------------------
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers,
style_layers):
# Make a copy of the cnn
cnn = copy.deepcopy(cnn)
# normalization module
normalization = Normalization(normalization_mean, normalization_std).to(device)
# just in order to have iterable access to the lists of content/style
# losses
content_losses = []
style_losses = []
# assuming that cnn is a nn.Sequential, so we make a new nn.Sequential
# to put in modules that are supposed to be activated sequentially
# Layers will then be added to this model one by one
model = nn.Sequential(normalization)
i = 0 # increment every time we see a conv
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# The in-place version doesn't play very nicely with the ContentLoss
# and StyleLoss we insert below. So we replace with out-of-place
# ones here.
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# add content loss:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# add style loss:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# now we trim off the layers after the last content and style losses
# We only need layers up to the last style/content loss; everything after can be dropped
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
# Return the modified model together with the lists of style and content losses
return model, style_losses, content_losses
###Output
_____no_output_____
###Markdown
Define the optimizer
###Code
def get_input_optimizer(input_img):
# Gradient descent is performed on the input image itself
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
###Output
_____no_output_____
###Markdown
Define the training function
###Code
def run_style_transfer(cnn, normalization_mean, normalization_std, content_img, style_img, input_img, content_layers,style_layers, num_steps=300, style_weight=1000000, content_weight=1):
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn, normalization_mean, normalization_std, style_img, content_img, content_layers, style_layers)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
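# Wrap the iteration counter in a list so the closure below can mutate it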
run = [0]
while run[0] <= num_steps:
def closure():
# correct the values of updated input image
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img)  # forward pass
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
style_score *= style_weight
content_score *= content_weight
# The total loss is the sum of the style and content losses
loss = style_score + content_score
loss.backward()  # backward pass
# Periodically report the loss values
run[0] += 1
if run[0] % 50 == 0:
print("run {}:".format(run))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
# Take an optimization step
optimizer.step(closure)
# a last correction...
# Clamp the pixel values back into the [0, 1] range
input_img.data.clamp_(0, 1)
return input_img
###Output
_____no_output_____
###Markdown
Start training
###Code
# Load the content image and the style image
style_img,content_img = image_util(img_size=444,style_img="./images/style/rose.jpg", content_img="./images/content/face.jpg")
# Use the content image as the starting input image
input_img = content_img.clone()
# Load the pretrained model
cnn = models.vgg19(pretrained=True).features.to(device).eval()
# Normalization statistics used by the pretrained model
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# Layers at which the content and style losses are computed
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
# Run the style transfer
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std, content_img, style_img, input_img, content_layers=content_layers_default, style_layers=style_layers_default, num_steps=300, style_weight=100000, content_weight=1)
###Output
Style Image Size:torch.Size([1, 3, 444, 444])
Content Image Size:torch.Size([1, 3, 444, 444])
Building the style transfer model..
Optimizing..
run [50]:
Style Loss : 83.327301 Content Loss: 28.212976
run [100]:
Style Loss : 24.913506 Content Loss: 28.910002
run [150]:
Style Loss : 12.124184 Content Loss: 28.101280
run [200]:
Style Loss : 5.490695 Content Loss: 27.439909
run [250]:
Style Loss : 4.143858 Content Loss: 27.175915
run [300]:
Style Loss : 5.784199 Content Loss: 26.932138
###Markdown
Display the image
###Code
image = output.cpu().clone()
image = image.squeeze(0)
unloader = transforms.ToPILImage()
unloader(image)
###Output
_____no_output_____ |
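###Markdown
To keep the result, the PIL image returned by `unloader` can be written to disk (the output file name below is just an example):
###Code
# Save the stylized image; the file name is arbitrary
result = unloader(image)
result.save("stylized_output.jpg")
###Output
_____no_output_____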
example.ipynb | ###Markdown
Generate primers without temperature restrictions
###Code
generator = PrimersGenerator(length=20, gc_percentage=0.6)
primers = generator.generate_primers()
primers
###Output
_____no_output_____
###Markdown
Generate primers with temperature restrictions
###Code
generator = PrimersGenerator(length=20, gc_percentage=0.3, min_temperature=35, max_temperature=40)
primers = generator.generate_primers()
primers
###Output
_____no_output_____
###Markdown
Generate a given number of primers
###Code
generator = PrimersGenerator(length=20, gc_percentage=0.5, number_of_primers=5)
primers = generator.generate_primers()
primers
###Output
_____no_output_____
###Markdown
Give an impossible task
###Code
generator = PrimersGenerator(length=20, gc_percentage=0.5, min_temperature=0, max_temperature=5)
primers = generator.generate_primers()
primers
###Output
_____no_output_____
###Markdown
Make sure primers are not found in some organism
###Code
generator = PrimersGenerator(
length=20,
gc_percentage=0.3,
number_of_primers=100
)
primers = generator.generate_primers()
print(f"{len(primers)} primers generated")
###Output
100 primers generated
###Markdown
Remote search against the whole NT database. This will make sure that generated primers are not found in any known organism. Be careful: a remote search for a huge number of primers takes a **long** time!
###Code
filtered_primers = filter_primers_by_blast(primers, remote=True)
print(f"{len(filtered_primers)} primers left")
filtered_primers
###Output
_____no_output_____
###Markdown
Using a local Blast database:
###Code
HG_38_BLAST_DB_PATH = "/home/vladimir/Documents/Science/data/hg38_blast_db/hg38.fa"
generator = PrimersGenerator(
length=20,
gc_percentage=0.7,
number_of_primers=10000,
min_temperature=55,
max_temperature=75
)
primers = generator.generate_primers()
print(f"{len(primers)} primers generated")
not_human_primers = filter_primers_by_blast(primers, blast_db_path=HG_38_BLAST_DB_PATH)
print(f"{len(not_human_primers)} primers are not found in human genome")
not_human_primers
generator = PrimersGenerator(
length=20,
gc_percentage=0.3,
number_of_primers=1000
)
primers = generator.generate_primers()
print(f"{len(primers)} primers generated")
filter_primers_by_blast(primers, blast_db_path=HG_38_BLAST_DB_PATH)
###Output
1000 primers generated
###Markdown
Phase 1: have the transformers example running
###Code
# Imports from the HuggingFace transformers library
from transformers import AutoTokenizer, TFAutoModel
def hello_world_tranformers_example():
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(**inputs)
print(outputs)
###Output
_____no_output_____
###Markdown
Phase 2: modify the transformers example to run in Chinese
###Code
def chinese_transformers_example():
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = TFAutoModel.from_pretrained("bert-base-multilingual-uncased")
inputs = tokenizer("你好!", return_tensors="tf")
outputs = model(**inputs)
print(outputs)
###Output
_____no_output_____
###Markdown
Phase 3: modify the transformers example to run as a text generator (English first)
###Code
# EncoderDecoderModel also comes from the transformers library
from transformers import EncoderDecoderModel
def text_generation_with_bert_example():
sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
summary_text = 'This is the first sentence.'
input_ids = tokenizer(summary_text, add_special_tokens=False, return_tensors="pt").input_ids
outputs = sentence_fuser.generate(input_ids)
print('Generated {cnt} pieces.'.format(cnt=len(outputs)))
print(tokenizer.decode(outputs[0]))
###Output
_____no_output_____
###Markdown
Phase 4: train text generator with custom text corpus. Phase 5: train text generator with Chinese text corpus. Phase 6: use Chinese in the text generator. TODO(tianhaoz95): think about the next steps to build a comment generator. The main program to test things out:
###Code
# hello_world_tranformers_example()
# chinese_transformers_example()
text_generation_with_bert_example()
###Output
Generated 1 pieces.
This is the first sentence. In fact, the is
###Markdown
Example: ML regression on hyperspectral dataset with Python
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Load data. Repository: https://github.com/felixriese/hyperspectral-soilmoisture-dataset
###Code
# load dataframe
path = "https://raw.githubusercontent.com/felixriese/hyperspectral-soilmoisture-dataset/master/soilmoisture_dataset.csv"
df = pd.read_csv(path, index_col=0)
# get hyperspectral bands:
hypbands = []
for col in df.columns:
try:
int(col)
except Exception:
continue
hypbands.append(col)
# split dataset
X_train, X_test, y_train, y_test = train_test_split(
df[hypbands], df["soil_moisture"],
test_size=0.5, random_state=42, shuffle=True)
###Output
_____no_output_____
###Markdown
Regression
###Code
lg = LinearRegression()
lg.fit(X_train, y_train)
lg.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Bahamas RGB
###Code
# First, create a tile server from raster file
b_client = examples.get_bahamas()
# Create ipyleaflet tile layer from that server
t = get_leaflet_tile_layer(b_client)
# Create ipyleaflet map, add tile layer, and display
m = Map(center=b_client.center(), zoom=b_client.default_zoom)
m.add_layer(t)
m
###Output
_____no_output_____
###Markdown
Multiband Landsat Compare
###Code
# First, create a tile server from raster file
landsat_client = examples.get_landsat()
# Create 2 tile layers from same raster viewing different bands
l = get_leaflet_tile_layer(landsat_client, band=[7, 5, 4])
r = get_leaflet_tile_layer(landsat_client, band=[5, 3, 2])
# Make the ipyleaflet map
m = Map(center=landsat_client.center(), zoom=landsat_client.default_zoom)
control = SplitMapControl(left_layer=l, right_layer=r)
m.add_control(control)
m.add_control(ScaleControl(position='bottomleft'))
m.add_control(FullScreenControl())
m
###Output
_____no_output_____
###Markdown
Non-geospatial image
###Code
client = examples.get_pelvis()
# Image layer that fetches tiles in image coordinates
image_layer = get_leaflet_tile_layer(client)
# Make the ipyleaflet map
m = Map(crs=projections.Simple, # no projection
basemap=image_layer, # basemap is the source image
min_zoom=0, max_zoom=client.max_zoom, zoom=0, # handle zoom defaults
)
m
###Output
_____no_output_____
###Markdown
Example operations
###Code
import data as d
import matplotlib.pyplot as plt
import numpy as np
import ants
###Output
_____no_output_____
###Markdown
Defining the dataset object:
###Code
dataset=d.dataset('/media/Olowoo/Work/CR/Test_task')
###Output
_____no_output_____
###Markdown
But first, let's see if the general parameters of the scans make sense:
###Code
dataset.check_scan_params()
###Output
Average spacing [ 1.5 2.4000001 10. 0.3125 ]
spacing standard deviation [0. 0. 0. 0.]
spacing median: [ 1.5 2.4000001 10. 0.3125 ]
spacing mode: [[[ 1.5 2.4000001 10. 0.3125 ]]]
Outliers:
No spacing outliers found
Average shape [128. 80. 17. 158.]
shape standard deviation [0. 0. 0. 0.]
shape median: [128. 80. 17. 158.]
shape mode: [[[128 80 17 158]]]
Outliers:
No shape outliers found
###Markdown
Let's now try those different sets of parameters and see how they perform for motion correction. To save time, we only use 3 samples.
###Code
alignment_parameters =[{'type_of_transform':'BOLDRigid',
'aff_sampling':32,
'aff_random_sampling_rate':0.2,
'aff_iterations':(100, 500, 50),
'aff_smoothing_sigmas':(2, 1, 0),
'aff_shrink_factors':(4, 2, 1)
},
{'type_of_transform':'BOLDRigid',
'aff_sampling':16,
'aff_random_sampling_rate':0.2,
'aff_iterations':(50, 25, 10),
'aff_smoothing_sigmas':(2, 1, 0),
'aff_shrink_factors':(4, 2, 1)
}]
results = dataset.calibrate_motion_correction(alignment_parameters, range(3))
print(results)
###Output
[(0.30117741271947757, 40.789008696873985), (0.3272586393912759, 36.928400913874306)]
###Markdown
It seems the second method is faster, but it has lower performance (a larger standardized error, the first number in the tuple). Since we want this demonstration to go fast, let's pick that one.
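If one wanted to pick a configuration programmatically instead of by eye, a minimal sketch (assuming `results` is the list of (standardized error, runtime in seconds) tuples returned above) could rank the candidates on either entry:
###Code
# Sketch only: rank the candidate parameter sets returned by calibrate_motion_correction
# by speed and by accuracy, using the (standardized_error, runtime_seconds) tuples.
fastest_idx = min(range(len(results)), key=lambda i: results[i][1])
most_accurate_idx = min(range(len(results)), key=lambda i: results[i][0])
print(f"fastest: parameter set {fastest_idx}, most accurate: parameter set {most_accurate_idx}")
###Output
_____no_output_____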
###Code
dataset.default_motion_correction = alignment_parameters[1]
motionc_err = dataset.motioncorrect_all()
###Output
_____no_output_____
###Markdown
We get a score for every volume in each 4D data series. Let's average them and plot a histogram of the error metric for each series:
###Code
plt.hist(np.array(motionc_err).mean(1))
###Output
_____no_output_____
###Markdown
We can check which series has the largest average error, although the errors all seem evenly distributed within an interval. It would be:
###Code
np.argmax(np.array(motionc_err).mean(1))
###Output
_____no_output_____
###Markdown
And we could access it just like this:
###Code
dataset[5]
###Output
_____no_output_____
###Markdown
Now for the registration parameters: we can again explore different configurations, and use mutual information as a quality metric. We changed the sign in the dataset module to be consistent with the previous error measure, where bigger is worse.
###Code
registration_parameters=[{'type_of_transform': 'Similarity',
'aff_sampling': 16,
'aff_random_sampling_rate': 0.4,
'aff_iterations': (500, 100, 10),
'aff_smoothing_sigmas': (2, 1, 0),
'aff_shrink_factors': (4, 2, 1),
},
{'type_of_transform': 'Similarity',
'aff_sampling': 32,
'aff_random_sampling_rate': 0.2,
'aff_iterations': (5000, 3000, 3000),
'aff_smoothing_sigmas': (2, 1, 0),
'aff_shrink_factors': (4, 2, 1),
}]
registration_info = dataset.calibrate_registration(registration_parameters)
print(registration_info)
###Output
[(0.3699080974835123, 1.541954795519511), (0.40917784845693883, 1.7413692077000935)]
###Markdown
This is a lot faster: we are really only aligning two volumes.
###Code
dataset.default_registration = registration_parameters[0]
reg_errs = dataset.register_all()
plt.hist(reg_errs)
###Output
_____no_output_____
###Markdown
Let's see the least-aligned one:
###Code
def rescale_intensity(x):
# This will just help us with visualizing two images
# on the same scale
a = x-x.mean()
return a/a.max()
index = np.argmax(reg_errs)
template = rescale_intensity(ants.image_read(dataset.template))
shifted = rescale_intensity(ants.image_read(dataset.regscans[index]))
comparison = np.concatenate([template[:,:,9].T, shifted[:,:,9,60].T],1)
plt.imshow(comparison)
###Output
_____no_output_____
###Markdown
Let's do the same with the best aligned one:
###Code
index = np.argmin(reg_errs)
template = rescale_intensity(ants.image_read(dataset.template))
shifted = rescale_intensity(ants.image_read(dataset.regscans[index]))
comparison = np.concatenate([template[:,:,9].T, shifted[:,:,9,60].T],1)
plt.imshow(comparison)
###Output
_____no_output_____
###Markdown
CLEVR-MRT example dataset visualisation
###Code
import json
import torch
from skimage.io import imread
import numpy as np
from torch import nn
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Extracting dataset Because the dataset is quite massive, we will play around with a small sample of it here. Download `clevr-mrt-v2-sample.tar.gz` from and extract its contents. The 'v2' here refers to the version of the dataset described in Table X of the original paper.
###Code
%%bash
cd data
tar -xvzf clevr-mrt-v2-sample.tar.gz >/dev/null 2>&1
%%bash
ls -lt data/clevr-mrt-v2-sample/
###Output
total 0
drwxr-xr-x 5 beckhamc staff 160 25 Dec 13:48 metadata
drwxr-xr-x 4 beckhamc staff 128 25 Dec 13:47 train-val
###Markdown
When we extract the folder, we will see two directories: `train-val` and `metadata` (for the full dataset, there will also be a held-out test set called `test`). The `train-val` directory consists of many subfolders that are identified by indices. These indices don't carry any meaning and are simply byproducts of generating the dataset in parallel when we did this internally. Essentially, each subfolder is a batch of scenes and questions, and all of the indices together comprise the entire dataset.
###Code
%%bash
ls -lt data/clevr-mrt-v2-sample/train-val
###Output
total 0
drwxr-xr-x 6 beckhamc staff 192 25 Dec 13:47 217
drwxr-xr-x 6 beckhamc staff 192 25 Dec 13:44 120
###Markdown
Examining one of these indices will look familiar if you've played with the original Clevr dataset:
###Code
%%bash
ls data/clevr-mrt-v2-sample/train-val/217
DATADIR="data/clevr-mrt-v2-sample/train-val/"
###Output
_____no_output_____
###Markdown
----- Scenes Let's read in the scenes.json of these indices.
###Code
scenes = json.loads(open("data/clevr-mrt-v2-sample/train-val/217/scenes.json").read())
#print(scenes.keys())
len(scenes['scenes']) # there are 100 scenes per index
scenes.keys()
###Output
_____no_output_____
###Markdown
Each scene consists of 20 cameras, randomly sampled in a 360 degree arc. `cc` refers to the 'canonical camera', which is what the corresponding questions (in `questions.json`) are posed with respect to.
###Code
scenes['scenes'][0].keys()
###Output
_____no_output_____
###Markdown
Let us visualise a random subset (16) of these cameras.
###Code
cam_names = list(scenes['scenes'][0].keys())
np.random.shuffle(cam_names)
cam_names_subset = list(cam_names)[0:16]
cam_names_subset
scenes['scenes'][0][ cam_names_subset[0] ]['image_filename']
###Output
_____no_output_____
###Markdown
We can see that image filenames are of the form `CLEVR_train-clevr-kiwi-spatial_sX_Y.jpg`, where X is a unique scene identifier and Y denotes the camera number. If Y is `cc`, this means it is the canonical camera (the viewpoint which questions are posed with respect to).
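As a quick, purely illustrative sketch of this naming convention (the filename below is an example string, not read from disk), the scene identifier and camera name can be recovered by splitting on underscores, which is exactly what the dataset class further down relies on:
###Code
# Hypothetical filename following the convention described above
fname = "CLEVR_train-clevr-kiwi-spatial_s002400_cam7.jpg"
parts = fname.rsplit(".", 1)[0].split("_")  # drop the extension, split on underscores
scene_id = parts[-2]   # e.g. 's002400'
camera = parts[-1]     # e.g. 'cam7', or 'cc' for the canonical camera
print(scene_id, camera)
###Output
_____no_output_____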
###Code
plt.figure(figsize=(16,10))
plt.tight_layout()
for j in range(16):
plt.subplot(4,4,j+1)
img = imread("data/clevr-mrt-v2-sample/train-val/217/images/%s" % \
scenes['scenes'][0][ cam_names_subset[j] ]['image_filename'].replace(".png",".jpg"))
plt.imshow(img)
plt.axis('off')
plt.title(cam_names_subset[j])
###Output
_____no_output_____
###Markdown
Here is the canonical camera. While the canonical camera is fixed to the same position (i.e. the same 3D world coordinates) for every scene in the dataset, in principle its position should not be easy to determine by eyeballing the images, since we made an effort to ensure that there were no landmarks in the dataset (e.g. directional lighting) that could give hints as to what its position is. In other words, unless you have the 3D coordinates of the camera pertaining to the image, it should not be possible to determine the canonical camera's coordinates.
###Code
img_canonical = imread("data/clevr-mrt-v2-sample/train-val/217/images/%s" % \
scenes['scenes'][0]['cc']['image_filename'].replace(".png",".jpg"))
plt.imshow(img_canonical)
###Output
_____no_output_____
###Markdown
----- Questions
###Code
questions = json.loads(open("{}/217/questions.json".format(DATADIR)).read())
questions.keys()
q1 = questions['questions'][0]
q1['question']
q1['answer']
q1['image_filename']
for question in questions['questions']:
if "CLEVR_train-clevr-kiwi-spatial_s021700" in question['image_filename']:
print("Q:", question['question'])
print(" A:", question['answer'])
###Output
Q: Are there the same number of large brown balls in front of the yellow rubber cylinder and small green rubber cubes?
A: False
Q: Is the number of big brown objects that are behind the tiny metal ball less than the number of small cubes?
A: True
Q: Are there more large brown matte things that are in front of the tiny red metal cylinder than matte cubes?
A: False
Q: Are there an equal number of tiny yellow things that are in front of the small cube and tiny cyan shiny things that are to the right of the small brown sphere?
A: True
Q: Are there fewer small metal cylinders behind the small red object than brown cubes on the left side of the tiny metallic sphere?
A: False
Q: Is the number of yellow matte cubes that are behind the small red cylinder greater than the number of yellow blocks that are in front of the large brown cube?
A: False
Q: There is a brown ball that is in front of the yellow cylinder; is it the same size as the tiny yellow block?
A: False
Q: There is a yellow rubber cylinder; is it the same size as the brown rubber sphere in front of the cyan cylinder?
A: False
Q: Is the size of the brown rubber thing behind the small yellow matte cylinder the same as the cylinder that is behind the brown block?
A: False
Q: Does the tiny cylinder that is behind the big brown block have the same color as the tiny cube?
A: True
###Markdown
------- PyTorch dataset These json files are not used directly with the corresponding dataset class in PyTorch. The `metadata` folder contains preprocessed versions of these files in h5py format:
###Code
%%bash
ls data/clevr-mrt-v2-sample/metadata
# Courtesy of: https://github.com/ethanjperez/film
def invert_dict(d):
return {v: k for k, v in d.items()}
# Courtesy of: https://github.com/ethanjperez/film
def load_vocab(path):
with open(path, 'r') as f:
vocab = json.load(f)
vocab['question_idx_to_token'] = invert_dict(vocab['question_token_to_idx'])
#vocab['program_idx_to_token'] = invert_dict(vocab['program_token_to_idx'])
vocab['answer_idx_to_token'] = invert_dict(vocab['answer_token_to_idx'])
# Sanity check: make sure <NULL>, <START>, and <END> are consistent
assert vocab['question_token_to_idx']['<NULL>'] == 0
assert vocab['question_token_to_idx']['<START>'] == 1
assert vocab['question_token_to_idx']['<END>'] == 2
#assert vocab['program_token_to_idx']['<NULL>'] == 0
#assert vocab['program_token_to_idx']['<START>'] == 1
#assert vocab['program_token_to_idx']['<END>'] == 2
return vocab
import glob
import h5py
import os
from PIL import Image
class ClevrMrtDataset(Dataset):
def __init__(self,
root_images,
root_meta,
transforms_=None,
mode='train',
canonical_only=False):
self.CAM_NAMES = ["cam{}".format(j) for j in range(20)]
subfolders = glob.glob("%s/*" % root_images)
self.root_images = root_images
self.root_meta = root_meta
self.transform = transforms.Compose(transforms_)
self.canonical_only = canonical_only
self.vocab = load_vocab("%s/vocab.json" % root_meta)
if mode not in ['train', 'val', 'test']:
raise Exception("mode must be either train or val or test (got %s)" % mode)
self.mode = mode
# This holds every question and for all intents
# and purposes is the _length_ of this dataset.
# In order to map a question to its scene we
# must parse its filename and use id_to_scene
# in order to go from question to camera views.
if mode == 'train':
h5 = h5py.File("%s/train_questions.h5" % root_meta, "r")
elif mode == 'val':
h5 = h5py.File("%s/valid_questions.h5" % root_meta, "r")
else:
h5 = h5py.File("%s/test_questions.h5" % root_meta, "r")
self.answers = h5['answers'][:]
self.image_filenames = [ x.decode('utf-8') for x in h5['image_filenames'][:] ]
self.template_filenames = [x.decode('utf-8') for x in h5['template_filenames'][:] ]
self.questions = h5['questions'][:]
self.question_strs = h5['question_strs'][:]
assert len(self.answers) == len(self.image_filenames) == len(self.questions)
# Construct an internal dictionary, `id_to_scene`, which
# maps from a scene id to the scene metadata.
id_to_scene = {}
n_questions = 0
for subfolder in subfolders:
q_file = "%s/questions.json" % subfolder
s_file = "%s/scenes.json" % subfolder
if not os.path.exists(q_file) or not os.path.exists(s_file):
print("ERROR: skip:", subfolder)
continue
q_json = json.loads(open(q_file).read())
s_json = json.loads(open(s_file).read())
n_questions += len(q_json['questions'])
# Collect scenes first.
for idx, scene in enumerate(s_json['scenes']):
# Add subfolder to scene dict
for key in scene:
scene[key]['subfolder'] = os.path.basename(subfolder)
this_scene_cc = scene['cc']
# e.g. 's002400'
this_basename = this_scene_cc['image_filename'].split("_")[-2]
# Map the basename e.g. s002400
# to its dictionary of camera views.
id_to_scene[this_basename] = scene
self.id_to_scene = id_to_scene
self.mode = mode
def open_img_and_transform(self, path):
img = Image.open(path).convert('RGB')
img = self.transform(img)
return img
def __getitem__(self, index):
# Ok, grab the metadata
this_q = torch.from_numpy(self.questions[index]).long()
this_answer = torch.LongTensor([self.answers[index]])
this_filename_cc = self.image_filenames[index]
this_id = this_filename_cc.split("_")[-2]
this_template_filename = self.template_filenames[index]
# A dictionary of keys consisting of camera
# views.
scene_from_id = self.id_to_scene[this_id]
subfolder = scene_from_id['cc']['subfolder']
if self.canonical_only:
cam_names = ["cc"]
else:
cam_names = self.CAM_NAMES
# Select a random camera, this will be image 1
rnd_cam_name = cam_names[ np.random.randint(0, len(cam_names)) ]
img_filename = this_filename_cc.replace("_cc", "_"+rnd_cam_name).\
replace(".png", ".jpg")
this_img_path = "%s/%s/images/%s" % \
(self.root_images, subfolder, img_filename)
img = self.open_img_and_transform(this_img_path)
# Select a random camera, this will be image 2.
rnd_cam_name2 = cam_names[ np.random.randint(0, len(cam_names)) ]
img_filename2 = this_filename_cc.replace("_cc", "_"+rnd_cam_name2).\
replace(".png", ".jpg")
this_img_path2 = "%s/%s/images/%s" % \
(self.root_images, subfolder, img_filename2)
img2 = self.open_img_and_transform(this_img_path2)
# Get camera coordinates of image 1.
this_cam = torch.FloatTensor(
scene_from_id[rnd_cam_name]['cam_params'])
# Get camera coordinates of image 2.
this_cam2 = torch.FloatTensor(
scene_from_id[rnd_cam_name2]['cam_params'])
return img, img2, this_q, this_cam, this_cam2, this_answer
def __len__(self):
return len(self.questions)
# Instantiate the ClevrMrtDataset defined above on the sample data.
ds = ClevrMrtDataset(
root_images="data/clevr-mrt-v2-sample/train-val",
root_meta="data/clevr-mrt-v2-sample/metadata",
mode='train',
transforms_=[
transforms.Resize((224,224)),
transforms.ToTensor()
]
)
# The length of the dataset is how many questions.
# There are roughly ~10 questions per scene.
len(ds)
###Output
_____no_output_____
###Markdown
Visualising data loader
###Code
loader = DataLoader(ds, batch_size=8, shuffle=True)
for x1, x2, q, cam1, cam2, answer in loader:
break
x1.shape, x2.shape
q.shape, cam1.shape, cam2.shape
plt.imshow(x1[0].numpy().swapaxes(0,1).swapaxes(1,2))
plt.imshow(x2[0].numpy().swapaxes(0,1).swapaxes(1,2))
###Output
_____no_output_____
###Markdown
Example usage. Consider the following problem:$$\begin{align*}\text{minimize} & & x_1 + x_2 + \max (0, x_1^2 + x_2^2 - 4), \\\text{s.t.} & & -5 \le x_1 \le 5, -5 \le x_2 \le 5.\end{align*}$$The problem is based on Example 7.1 of [Andrzej Ruszczyński's 'Nonlinear Optimization'](https://press.princeton.edu/books/hardcover/9780691119151/nonlinear-optimization). Solve the problem by the proximal bundle method.
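For reference, the subgradient returned by the code below follows directly from this objective: writing $v(x) = x_1^2 + x_2^2 - 4$, we have$$g(x) = \begin{cases}(1 + 2x_1,\; 1 + 2x_2)^\top & \text{if } v(x) > 0,\\(1,\; 1)^\top & \text{otherwise,}\end{cases}$$which is a valid subgradient everywhere (on the boundary $v(x) = 0$, both branches belong to the subdifferential of $f$).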
###Code
import numpy as np
def f(x: np.ndarray):
r"""Calculate objective value and subgradient vector"""
v = x[0] ** 2 + x[1] ** 2 - 4
obj = x[0] + x[1] + max(0, v)
if v > 0:
g = np.array([1 + 2 * x[0], 1 + 2 * x[1]], dtype=float)  # use float: np.float was removed in recent NumPy
else:
g = np.array([1, 1])
return obj, g
from bundle import ProximalBundleMethod as PBM
p = PBM(
n=2, # dimension of x
sense=min # This problem is minimization.
)
p.custom_constraints = [p.x >= -5, p.x <= 5]
# initial guess
x = np.array([2, -2], dtype=float)  # use float: np.float was removed in recent NumPy
for i in range(20):
obj, g = f(x)
print(i, x, obj)
x = p.step(obj, x, g)
###Output
0 [ 2. -2.] 4.0
1 [-3. 1.] 3.999999999999994
2 [-0.5 -0.5] -1.0
3 [ 0.5 -2.5] 0.5000000000000013
4 [-0.75 -2.25] -1.375
5 [-1.24166667 -1.725 ] -2.449305555555557
6 [-1.35513003 -1.49336856] -2.7819715336493314
7 [-1.39481123 -1.43508642] -2.8249262463433364
8 [-1.40837719 -1.42016909] -2.8281397208796584
9 [-1.41249182 -1.41594549] -2.828402539864622
10 [-1.41518487 -1.41326074] -2.8283914730141833
11 [-1.41405397 -1.41438172] -2.8284114083026886
12 [-1.4152527 -1.41318373] -2.8284079878455497
13 [-1.41486466 -1.41356912] -2.828414117182251
14 [-1.41546281 -1.41297054] -2.8284126506044105
15 [-1.4152505 -1.41318163] -2.828415829945838
16 [-1.41549608 -1.41293546] -2.828415783176273
17 [-1.41536737 -1.41306351] -2.8284175958852336
18 [-1.41539081 -1.41303961] -2.8284183418771613
19 [-1.4153363 -1.41309374] -2.82841929085044
###Markdown
Example script of the MSO myelination model. > Ben-Zheng Li et al., Predicting the Influence of Axon Myelination on Sound Localization Precision Using a Spiking Neural Network Model of Auditory Brainstem, 2022
###Code
from encoding import generate_auditory_nerve_input
from simulation import run_all_simulations
from decoding import run_decoding_analyses
from visualization import run_data_visualization
###Output
_____no_output_____
###Markdown
Encode sound wave auditory nerve input with ITDs
###Code
generate_auditory_nerve_input()
###Output
>> Generating pure tone sound wave
>> Generating auditory nerve responses; processing seed: 1 out of 2
> generating ITD dataframe
> Stimulus duration: 99.0 sec
> encoding auditory nerve responses
> encoded right-ear responses; time spent: 0:00:22.284610
> encoded left-ear responses; time spent: 0:00:22.004273
> saved as input/ANF_spikes_ITD_f_300_size_1000_seed_0.pckl
>> Generating auditory nerve responses; processing seed: 2 out of 2
> generating ITD dataframe
> Stimulus duration: 99.0 sec
> encoding auditory nerve responses
> encoded right-ear responses; time spent: 0:00:22.049894
> encoded left-ear responses; time spent: 0:00:22.105197
> saved as input/ANF_spikes_ITD_f_300_size_1000_seed_1.pckl
###Markdown
Simulate spiking neural network model of MSO
###Code
run_all_simulations()
###Output
>> Start simulating seed 1 out of 2
> loading auditory nerve input: input/ANF_spikes_ITD_f_300_size_1000_seed_0.pckl
> duration to be simulated: 99.0 sec
> enable multiprocessing; detected CPU count: 6
> creating NeuronGroups
> creating Synapses
> initializing network
> start running simulation
> generating cpp code
Starting simulation at t=0 s for duration 99 s
10.9912 s (11%) simulated in 10s, estimated 1m 20s remaining.
22.0309 s (22%) simulated in 20s, estimated 1m 10s remaining.
32.7001 s (33%) simulated in 30s, estimated 1m 1s remaining.
42.9763 s (43%) simulated in 40s, estimated 52s remaining.
53.5084 s (54%) simulated in 50s, estimated 43s remaining.
64.5488 s (65%) simulated in 1m 0s, estimated 32s remaining.
75.9175 s (76%) simulated in 1m 10s, estimated 21s remaining.
86.1492 s (87%) simulated in 1m 20s, estimated 12s remaining.
97.1822 s (98%) simulated in 1m 30s, estimated 2s remaining.
99 s (100%) simulated in 1m 31s
** total time: 0:01:48.353462
> exporting data
> saved data: data/data_MSO_SNN_seed_0.pckl
> reinitializing simulation core
> complete
>> Start simulating seed 2 out of 2
> loading auditory nerve input: input/ANF_spikes_ITD_f_300_size_1000_seed_1.pckl
> duration to be simulated: 99.0 sec
> enable multiprocessing; detected CPU count: 6
> creating NeuronGroups
> creating Synapses
> initializing network
> start running simulation
> generating cpp code
Starting simulation at t=0 s for duration 99 s
10.7882 s (10%) simulated in 10s, estimated 1m 22s remaining.
21.2483 s (21%) simulated in 20s, estimated 1m 13s remaining.
31.015 s (31%) simulated in 30s, estimated 1m 6s remaining.
41.2942 s (41%) simulated in 40s, estimated 56s remaining.
52.1571 s (52%) simulated in 50s, estimated 45s remaining.
62.9015 s (63%) simulated in 1m 0s, estimated 34s remaining.
73.769 s (74%) simulated in 1m 10s, estimated 24s remaining.
84.2793 s (85%) simulated in 1m 20s, estimated 14s remaining.
95.0153 s (95%) simulated in 1m 30s, estimated 4s remaining.
99 s (100%) simulated in 1m 33s
** total time: 0:01:52.044542
> exporting data
> saved data: data/data_MSO_SNN_seed_1.pckl
> reinitializing simulation core
> complete
###Markdown
Decode ITDs from simulated MSO responses
###Code
run_decoding_analyses()
###Output
>> Decoding dataset 1 out of 2
> loading simulation data: data/data_MSO_SNN_seed_1.pckl
> computing spike counts
> decoding dataset
** decoding accuracy = 0.9636363636363636
** mean squared error = 4.43
** accuracy at 10.0 us ITD = 1.0
> saved results: result/Decoding_data_MSO_SNN_seed_1.pckl
** time spent: 0:00:34.101064
>> Decoding dataset 2 out of 2
> loading simulation data: data/data_MSO_SNN_seed_0.pckl
> computing spike counts
> decoding dataset
** decoding accuracy = 0.9545454545454546
** mean squared error = 5.45
** accuracy at 10.0 us ITD = 1.0
> saved results: result/Decoding_data_MSO_SNN_seed_0.pckl
** time spent: 0:00:34.288617
###Markdown
Data analysis and figure plotting
###Code
run_data_visualization()
###Output
>> Compute peak parameters and plot figures
> loading ITD dataset
> start computing ITD tuning curve parameters
> processing file: result/Decoding_data_MSO_SNN_seed_0.pckl
> processing file: result/Decoding_data_MSO_SNN_seed_1.pckl
###Markdown
Example: kMeans. In this notebook, we will go through two examples of how to use the class KMeans. We will first apply it to a toy example using our own generated data. Then, we will use it to cluster flowers in the iris dataset.
###Code
# Import useful libraries
import numpy as np
import matplotlib.pyplot as plt
from kMeans import KMeans
# Specific to Example II
import pandas as pd
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix
###Output
_____no_output_____
###Markdown
Example I - Toy example with randomly generated data. In this example, we generate data from three different multivariate Gaussian distributions, all with the same covariance structure. Then, we perform the k-means algorithm using the class KMeans. Finally, we plot the predicted clusters together with the ground truth.
###Code
# Generate data from three multivariate (2-dimensional) Gaussian distributions.
n=100
mu = np.array([[3,3], [0,0], [4, 0]])
data = np.concatenate((np.random.randn(n,2) + mu[0], np.random.randn(n,2) + mu[1], np.random.randn(n,2) + mu[2]))
# We set the random seed to ensure reproducibility
np.random.seed(123123123)
# Create an instance of the class KMeans by specifying the number of clusters (k=3) and the observations (X=data).
model = KMeans(k=3, X=data)
# We update the hyper-parameter distance function to the manhattan distance.
model.update_h_params({'dist_f' : 'L1_norm'})
# Run the k-means algorithm to find the clusters
_, y = model.fit()
# Visualize the clusters
plt.figure()
# Predicted clusters
plt.subplot(121)
plt.scatter(data[:,0], data[:,1], c=y)
plt.title("Predicted clusters")
# True clusters
plt.subplot(122)
plt.scatter(data[:,0], data[:,1], c=np.repeat([0,1,2], n))
plt.title("True clusters")
###Output
Converged to a solution after 6 iterations!
###Markdown
Example II. In this example, we use the class KMeans to cluster the flowers in the well-known Iris dataset. For convenience, we use scikit-learn to load the dataset. Further, we use a scree plot (a plot of the within-sum-of-squares against the number of clusters k) to choose the number of clusters, i.e., a hyper-parameter tuning strategy. More specifically, we will use the rule of thumb: look for the elbow in the scree plot and choose the corresponding k.
###Code
# Load the iris dataset using scikit-learn.
iris = datasets.load_iris()
features = iris['data']
feature_names = iris['feature_names']
labels = iris['target']
labels_names = iris['target_names']
# Load data into a pandas dataframe for convenience
data = pd.DataFrame(features, columns=feature_names)
# Initial data analysis
print(f"We have {data.shape[0]} observations on {data.shape[1]} features.")
data.head()
plt.figure()
# Create histogram of each feature
for i in range(4):
plt.subplot(2,2,i+1)
plt.hist(data.iloc[:,i])
plt.ylabel("Frequency")
plt.xlabel("cm")
plt.title(data.columns[i])
# Increase space between subplots for better layout
plt.subplots_adjust(wspace=0.5, hspace=0.7)
###Output
_____no_output_____
###Markdown
I believe that in ordinary machine learning projects, a substantial portion of the work should go into the initial data analysis. However, we will let this complete our data analysis, since that is not the goal of this example notebook. We have seen that we have four features measured on each flower, and that there are no obviously wrong measurements, i.e., all lengths are positive and reasonable. We continue with clustering. Clustering belongs to unsupervised machine learning, which means that no labels are accessible. However, in this example we have labels available, but we will only use them at the end to compare our predicted clusters with the "ground truth". Next, we perform hyper-parameter tuning of the number of clusters k. We will do that by clustering the data for k=2, 3, ..., K, where K is an arbitrarily chosen number; we choose K=10. For each k, we will perform 10 clusterings (train 10 models) with different starts, because k-means is not guaranteed to find the optimal solution, only a local optimum. Hence, the final solution depends on the initial start. By doing 10 different starts we increase the probability of finding the optimal clustering.
###Code
# Settings
nr_iterations = 1000
nr_starts = 10
K = 10
# Save the within-sum-of-squares so we can compare different k
WSS = np.zeros((1,K-1))
for k in range(2,K+1):
# Print to follow progression
print(f"Iteration {k-1} of {K-1} iterations.")
wss = np.zeros((1, nr_starts))
for start in range(nr_starts):
# Create a KMeans object and fit it directly, we need to transform the dataframe to a numpy array
wss_out, _ = KMeans(k=k, X=data.values, verbose=False, h_params={'n_iter' : nr_iterations}, random_state=np.random.randint(1000000)).fit()
wss[0,start] = wss_out[-1]
# Save lowest within-sum-of-squares for each k
WSS[0,k-2] = np.nanmin(wss)
# Create scree plot
plt.figure()
plt.title("Scree Plot")
plt.plot(np.arange(K-1) + 2, WSS[0])
plt.xlabel("Number of Clusters (k)")
plt.ylabel("Within Sum of Squares")
###Output
_____no_output_____
###Markdown
It is hard to find a distinct elbow in the above Scree Plot. If this was a pure machine learning problem, one should also use the average silhouette method to choose k. But since this is only an example and we have access to the real number of groups (which is 3), we choose k=3 so that we can evaluate the clusters. Next, we retrain 10 models with k=3 and plot the within sum of squares for each model to find the optimal one.
###Code
# Settings
nr_iterations = 1000
nr_starts = 10
k = 3
# Save the within-sum-of-squares so we can compare different k
wss = np.zeros((1, nr_starts))
predictions = []
# Set random seed to enable reproducibility
np.random.seed(27)
for start in range(nr_starts):
# Create a KMeans object and fit it directly, we need to transform the dataframe to a numpy array
wss_out, predictions_out = KMeans(k=k, X=data.values, verbose=False, h_params={'n_iter' : nr_iterations}).fit()
wss[0,start] = wss_out[-1]
predictions.append(predictions_out)
# Plotting the within-sum-of-squares and marking the minimum
plt.plot(np.arange(1, wss.shape[1]+1), wss[0])
plt.xlabel("Start #")
plt.ylabel("Within-Sum-of-Squares")
plt.scatter(np.argmin(wss)+1, wss[0][np.argmin(wss)], c="red")
###Output
_____no_output_____
###Markdown
We can see that the second model yielded the lowest wss as shown by the red dot. Now, we plot the predicted clusters versus the true clusters using the first two principal components.
###Code
pca = PCA(n_components=2)
pca.fit(data)
pcs = pca.transform(data)
# Predicted clusters
plt.subplot(121)
plt.scatter(pcs[:,0], pcs[:,1], c=predictions[np.argmin(wss[0])])
plt.title("Predicted Clusters")
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
# True clusters
plt.subplot(122)
plt.scatter(pcs[:,0], pcs[:,1], c=labels)
plt.title("True Clusters")
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.subplots_adjust(wspace=0.7)
###Output
_____no_output_____
###Markdown
We see that the predicted clusters correspond quite well to the true clusters. Finally, we create a confusion matrix which we can use to calculate the overall accuracy, just as a measure of the performance of the k-means algorithm on the Iris dataset. Once again, please remember that in true unsupervised machine learning we have no ground truth, so we cannot calculate any accuracy.
###Code
conf_m = confusion_matrix(labels, predictions[np.argmin(wss[0])])
print(conf_m)
print(f"We have an accuracy of {(50 + 48 + 36)/sum(sum(conf_m))*100:.2f}%")
###Output
We have an accuracy of 89.33%
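###Markdown
As an optional extra (not part of the original example): since k-means cluster indices are arbitrary, the accuracy can also be computed without reading the confusion matrix by eye, by first matching predicted clusters to true labels with the Hungarian algorithm. A minimal sketch, assuming `conf_m` from the cell above and that SciPy is available:
###Code
from scipy.optimize import linear_sum_assignment
# Find the cluster-to-label permutation that maximizes the matched counts,
# then compute the overall accuracy from the matched entries.
row_ind, col_ind = linear_sum_assignment(-conf_m)
accuracy = conf_m[row_ind, col_ind].sum() / conf_m.sum()
print(f"Accuracy after optimal cluster-to-label matching: {accuracy*100:.2f}%")
###Output
_____no_output_____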
###Markdown
Example of how to create erzsol input files and run erzsol
###Code
# import libraries
import pandas as pd
import numpy as np
import erzsol3Py as erz
import os
###Output
_____no_output_____
###Markdown
Create the model and write it to the .mod file used by ERZSOL3. Note that ERZSOL3 can take either layer thicknesses or the depth of the layers as input. You need to specify which one is given by setting the layer_mode parameter.
###Code
# velocities in km/s, density in kg/m3, depth in km
vp = [2.7, 3.3, 3.8] # P-velocity km/s
vs = [1.8, 2.1, 2.5] # S-velocity in km/s
rho = [2.7, 3.1, 3.2] # density in kg/m3
dz = [1.0, 2.0, 5.0] # thickness of each layer in km -> set layer_mode=0 in writeModFile
# depth of layers
# depth = [0.0, 1.0, 3.0] # depth in km -> set layer_mode=1 in writeModFile function
mod_fn = 'example.mod'
erz.writeModFile(vp, vs, rho, layers=dz, layer_mode=0, model_name='myModel', erzsol3_mod_file=mod_fn, nr=1)
# View created mod file
! cat example.mod
###Output
myModel
3 0
1 2.700 1.800 2.70 1.000 0.000 0.000
1 3.300 2.100 3.10 2.000 0.000 0.000
1 3.800 2.500 3.20 5.000 0.000 0.000
###Markdown
Define receiver and source locations. The shape of the receiver locations should be (3, number_of_receivers); the shape of the source location is (1, 3).
###Code
receivers = np.array([[0,0,0],[1,1,0],[2,2,0],[3,3,0],[4,4,0]]) # km (cartesian coordinate system)
#print(receivers)
receivers = receivers.T # transpose to shape (3, n_receivers)
print(receivers.shape)
source_coord = np.array([2, 2, 3.5])
dst_fn = 'example.dst'
erz.writeDstFile(receivers, source_coord, dst_fn)
# view created dst file
! cat example.dst
# Plot source and receiver locatios
import matplotlib.pyplot as plt
plt.scatter(receivers[0,:], receivers[1,:], marker='v', color='k')
plt.scatter(source_coord[0], source_coord[1], marker='*', color='r', s=200, alpha=0.6)
plt.xlabel('x [km]')
plt.ylabel('y [km]')
plt.legend(['receivers', 'source'])
###Output
_____no_output_____
###Markdown
Write the cmd file. The function requires many inputs:
###Code
cmd_fn = 'example.cmd' # name of cmd file to create
out_fn = 'example.tx.z'# name of output seismogram file that will be created by ERZSOL3
srf_cond = "HS" # surface condition
ntpts = 2048 # Number of time-samples in output seismograms
dt = 0.002 # time step in s
MT = np.array([[0,10,10],[10,10,0],[10,0,10]]) # moment tensor (isotropic source defined here)
sz = source_coord[2] # depth of the source, best to get it from source_coord used to create dst file to avoid modelling errors
dom_freq = 10.0 # center frequency of the source
low_f_taper = (0.125, 0.25) # low frequency taper
high_f_taper = (60.0, 75.0) # high frequency taper
min_slow = 0.0001 # minimum slowness s/km
max_slow = 0.7 # maximum slowness s/km
erz.writeCmdFile(cmd_fn, out_fn, mod_fn, dst_fn,
srf_cond, ntpts, dt, MT, sz, dom_freq,
low_f_taper, high_f_taper, min_slow, max_slow)
###Output
_____no_output_____
###Markdown
Now that all necessary input files are defined, ERZSOL3 can be run
###Code
# Run erzsol3
erzBin = '/Users/nvinard/ErzsolOriginal/bin/erzsol3' # path to erzsol3
cmd = erzBin + ' < example.cmd'
os.system(cmd)
! cat example.cmd
f_cmd = open('example.cmd', 'r')
lines = f_cmd.read().splitlines()
f_cmd.close()
int(lines[12].split(' ')[0])
###Output
_____no_output_____
###Markdown
Read erzsol output and plot result
###Code
data = erz.readErzsol3('example.tx.z', 'example.cmd')
data.shape
#erz.wiggle(data)
plt.figure(1)
erz.wiggle(data[0,:,:].T)
plt.figure(2)
erz.wiggle(data[1,:,:].T)
plt.figure(3)
erz.wiggle(data[2,:,:].T)
###Output
_____no_output_____
###Markdown
Teachers training. In this example, we will train one teacher for each of the following datasets: *BC2GM*, *BC5CDR-chem*, *NCBI-disease*.
###Code
!python train_teacher.py \
--data_dir 'data/BC2GM' \
--model_name_or_path 'dmis-lab/biobert-base-cased-v1.1' \
--output_dir 'models/Teachers/BC2GM' \
--logging_dir 'models/Teachers/BC2GM' \
--save_steps 10000
!python train_teacher.py \
--data_dir 'data/BC5CDR-chem' \
--model_name_or_path 'dmis-lab/biobert-base-cased-v1.1' \
--output_dir 'models/Teachers/BC5CDR-chem' \
--logging_dir 'models/Teachers/BC5CDR-chem' \
--save_steps 10000
!python train_teacher.py \
--data_dir 'data/NCBI-disease' \
--model_name_or_path 'dmis-lab/biobert-base-cased-v1.1' \
--output_dir 'models/Teachers/NCBI-disease' \
--logging_dir 'models/Teachers/NCBI-disease' \
--save_steps 10000
###Output
2021-03-29 16:49:08.663349: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Namespace(data_dir='data/NCBI-disease', do_eval=True, do_predict=True, do_train=True, evaluation_strategy='epoch', logging_dir='models/Teachers/NCBI-disease', logging_steps=100, max_seq_length=128, model_name_or_path='dmis-lab/biobert-base-cased-v1.1', num_train_epochs=3, output_dir='models/Teachers/NCBI-disease', per_device_train_batch_size=32, save_steps=10000, seed=1)
03/29/2021 16:49:10 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
03/29/2021 16:49:10 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=models/Teachers/NCBI-disease, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluation_strategy=IntervalStrategy.EPOCH, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=models/Teachers/NCBI-disease, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=100, save_strategy=IntervalStrategy.STEPS, save_steps=10000, save_total_limit=None, no_cuda=False, seed=1, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=100, dataloader_num_workers=0, past_index=-1, run_name=models/Teachers/NCBI-disease, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=1)
03/29/2021 16:49:12 - INFO - filelock - Lock 139759777279312 acquired on data/NCBI-disease/cached_train_dev_BertTokenizer_128.lock
03/29/2021 16:49:12 - INFO - src.data_handling.DataHandlers - Loading features from cached file data/NCBI-disease/cached_train_dev_BertTokenizer_128
03/29/2021 16:49:12 - INFO - filelock - Lock 139759777279312 released on data/NCBI-disease/cached_train_dev_BertTokenizer_128.lock
03/29/2021 16:49:12 - INFO - filelock - Lock 139759777718224 acquired on data/NCBI-disease/cached_test_BertTokenizer_128.lock
03/29/2021 16:49:12 - INFO - src.data_handling.DataHandlers - Loading features from cached file data/NCBI-disease/cached_test_BertTokenizer_128
03/29/2021 16:49:12 - INFO - filelock - Lock 139759777718224 released on data/NCBI-disease/cached_test_BertTokenizer_128.lock
Some weights of the model checkpoint at dmis-lab/biobert-base-cased-v1.1 were not used when initializing BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.decoder.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForTokenClassification were not initialized from the model checkpoint at dmis-lab/biobert-base-cased-v1.1 and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py:836: FutureWarning: `model_path` is deprecated and will be removed in a future version. Use `resume_from_checkpoint` instead.
FutureWarning,
{'loss': 0.1303, 'learning_rate': 4.16247906197655e-05, 'epoch': 0.5}
{'eval_loss': 0.048421476036310196, 'eval_accuracy_score': 0.9833748621379845, 'eval_precision': 0.8003802281368821, 'eval_recall': 0.8770833333333333, 'eval_f1': 0.8369781312127236, 'eval_runtime': 11.5173, 'eval_samples_per_second': 81.616, 'epoch': 1.0}
{'loss': 0.0449, 'learning_rate': 3.324958123953099e-05, 'epoch': 1.01}
{'loss': 0.0221, 'learning_rate': 2.4874371859296484e-05, 'epoch': 1.51}
{'eval_loss': 0.04439317435026169, 'eval_accuracy_score': 0.9848862383072587, 'eval_precision': 0.8467741935483871, 'eval_recall': 0.875, 'eval_f1': 0.8606557377049181, 'eval_runtime': 11.5254, 'eval_samples_per_second': 81.559, 'epoch': 2.0}
{'loss': 0.0187, 'learning_rate': 1.6499162479061976e-05, 'epoch': 2.01}
{'loss': 0.0093, 'learning_rate': 8.123953098827471e-06, 'epoch': 2.51}
{'eval_loss': 0.05127991735935211, 'eval_accuracy_score': 0.9853764143621584, 'eval_precision': 0.8384236453201971, 'eval_recall': 0.8864583333333333, 'eval_f1': 0.8617721518987341, 'eval_runtime': 11.5073, 'eval_samples_per_second': 81.687, 'epoch': 3.0}
{'train_runtime': 442.8327, 'train_samples_per_second': 1.348, 'epoch': 3.0}
100% 597/597 [07:22<00:00, 1.35it/s]
###Markdown
Global datasets. We need the aggregated datasets for the *teachers* in order to obtain their predictions over the whole set of data. Furthermore, we need the aggregated dataset with the teachers' labels to train our Student.
###Code
!python generate_global_datasets.py
###Output
Namespace(data_path='data')
Generating file: data/GLOBAL/BC2GM/train.tsv
Generating file: data/GLOBAL/BC5CDR-chem/train.tsv
Generating file: data/GLOBAL/NCBI-disease/train.tsv
Generating file: data/GLOBAL/BC2GM/dev.tsv
Generating file: data/GLOBAL/BC5CDR-chem/dev.tsv
Generating file: data/GLOBAL/NCBI-disease/dev.tsv
Generating file: data/GLOBAL/BC2GM/train_dev.tsv
Generating file: data/GLOBAL/BC5CDR-chem/train_dev.tsv
Generating file: data/GLOBAL/NCBI-disease/train_dev.tsv
Generating file: data/GLOBAL/BC2GM/test.tsv
Generating file: data/GLOBAL/BC5CDR-chem/test.tsv
Generating file: data/GLOBAL/NCBI-disease/test.tsv
Generating file: data/GLOBAL/Student/train.tsv
Generating file: data/GLOBAL/Student/dev.tsv
Generating file: data/GLOBAL/Student/train_dev.tsv
Generating file: data/GLOBAL/Student/test.tsv
###Markdown
Offline generation of the teachers' distribution. We obtain the output distribution of each teacher. The $i$-th teacher outputs the probabilities $p_B^i, p_I^i, p_O^i$, $i = \{1,...,k\}$, $k$ being the number of teachers. We have to aggregate them into a distribution with $2k+1$ labels ($B$ and $I$ for each teacher, plus the global $O$):- $P_{Bi} = p_B^i \prod_{j\ne i}{\big(p_I^j + p_O^j\big)}$, - $P_{Ii} = p_I^i \prod_{j\ne i}{\big(p_I^j + p_O^j\big)}$, - $P_{O} = \prod_i{p_O^i}$
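As a minimal, purely illustrative sketch of this aggregation for a single token (the array layout and numbers below are assumptions for illustration, not the script's actual implementation), with `probs[i]` holding the $i$-th teacher's $(p_B^i, p_I^i, p_O^i)$:
###Code
import numpy as np

# Illustrative probabilities for k = 3 teachers; each row is (p_B, p_I, p_O) for one teacher.
probs = np.array([[0.70, 0.20, 0.10],
                  [0.10, 0.10, 0.80],
                  [0.05, 0.05, 0.90]])
k = probs.shape[0]

P_B = np.empty(k)
P_I = np.empty(k)
for i in range(k):
    # product over the other teachers of (p_I^j + p_O^j)
    others = np.prod([probs[j, 1] + probs[j, 2] for j in range(k) if j != i])
    P_B[i] = probs[i, 0] * others
    P_I[i] = probs[i, 1] * others
P_O = np.prod(probs[:, 2])

aggregated = np.concatenate([P_B, P_I, [P_O]])  # the 2k+1 aggregated labels
print(aggregated)
###Output
_____no_output_____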
###Code
!python generate_teachers_distributions.py \
--data_dir 'data' \
--teachers_dir 'models/Teachers' \
--model_name_or_path 'dmis-lab/biobert-base-cased-v1.1'
###Output
Streaming output truncated to the last 5000 lines.
39% 1489/3823 [02:48<04:24, 8.81it/s][A[A
39% 1490/3823 [02:48<04:24, 8.82it/s][A[A
39% 1491/3823 [02:48<04:23, 8.83it/s][A[A
39% 1492/3823 [02:48<04:23, 8.83it/s][A[A
39% 1493/3823 [02:48<04:23, 8.84it/s][A[A
39% 1494/3823 [02:48<04:23, 8.83it/s][A[A
39% 1495/3823 [02:48<04:23, 8.84it/s][A[A
39% 1496/3823 [02:48<04:23, 8.84it/s][A[A
39% 1497/3823 [02:49<04:23, 8.83it/s][A[A
39% 1498/3823 [02:49<04:23, 8.82it/s][A[A
39% 1499/3823 [02:49<04:23, 8.82it/s][A[A
39% 1500/3823 [02:49<04:23, 8.82it/s][A[A
39% 1501/3823 [02:49<04:22, 8.83it/s][A[A
39% 1502/3823 [02:49<04:22, 8.84it/s][A[A
39% 1503/3823 [02:49<04:22, 8.84it/s][A[A
39% 1504/3823 [02:49<04:22, 8.85it/s][A[A
39% 1505/3823 [02:49<04:21, 8.85it/s][A[A
39% 1506/3823 [02:50<04:22, 8.84it/s][A[A
39% 1507/3823 [02:50<04:22, 8.82it/s][A[A
39% 1508/3823 [02:50<04:22, 8.82it/s][A[A
39% 1509/3823 [02:50<04:22, 8.83it/s][A[A
39% 1510/3823 [02:50<04:21, 8.83it/s][A[A
40% 1511/3823 [02:50<04:21, 8.83it/s][A[A
40% 1512/3823 [02:50<04:21, 8.82it/s][A[A
40% 1513/3823 [02:50<04:21, 8.82it/s][A[A
40% 1514/3823 [02:50<04:21, 8.83it/s][A[A
40% 1515/3823 [02:51<04:21, 8.83it/s][A[A
40% 1516/3823 [02:51<04:21, 8.81it/s][A[A
40% 1517/3823 [02:51<04:21, 8.82it/s][A[A
40% 1518/3823 [02:51<04:21, 8.83it/s][A[A
40% 1519/3823 [02:51<04:21, 8.82it/s][A[A
40% 1520/3823 [02:51<04:20, 8.83it/s][A[A
40% 1521/3823 [02:51<04:20, 8.84it/s][A[A
40% 1522/3823 [02:51<04:20, 8.84it/s][A[A
40% 1523/3823 [02:51<04:20, 8.84it/s][A[A
40% 1524/3823 [02:52<04:20, 8.82it/s][A[A
40% 1525/3823 [02:52<04:20, 8.81it/s][A[A
40% 1526/3823 [02:52<04:20, 8.81it/s][A[A
40% 1527/3823 [02:52<04:21, 8.79it/s][A[A
40% 1528/3823 [02:52<04:20, 8.79it/s][A[A
40% 1529/3823 [02:52<04:20, 8.80it/s][A[A
40% 1530/3823 [02:52<04:20, 8.81it/s][A[A
40% 1531/3823 [02:52<04:20, 8.81it/s][A[A
40% 1532/3823 [02:53<04:20, 8.80it/s][A[A
40% 1533/3823 [02:53<04:19, 8.81it/s][A[A
40% 1534/3823 [02:53<04:19, 8.82it/s][A[A
40% 1535/3823 [02:53<04:19, 8.83it/s][A[A
40% 1536/3823 [02:53<04:18, 8.84it/s][A[A
40% 1537/3823 [02:53<04:19, 8.82it/s][A[A
40% 1538/3823 [02:53<04:18, 8.83it/s][A[A
40% 1539/3823 [02:53<04:18, 8.83it/s][A[A
40% 1540/3823 [02:53<04:18, 8.83it/s][A[A
40% 1541/3823 [02:54<04:18, 8.84it/s][A[A
40% 1542/3823 [02:54<04:18, 8.82it/s][A[A
40% 1543/3823 [02:54<04:18, 8.82it/s][A[A
40% 1544/3823 [02:54<04:17, 8.84it/s][A[A
40% 1545/3823 [02:54<04:17, 8.84it/s][A[A
40% 1546/3823 [02:54<04:18, 8.82it/s][A[A
40% 1547/3823 [02:54<04:17, 8.82it/s][A[A
40% 1548/3823 [02:54<04:17, 8.83it/s][A[A
41% 1549/3823 [02:54<04:17, 8.83it/s][A[A
41% 1550/3823 [02:55<04:17, 8.82it/s][A[A
41% 1551/3823 [02:55<04:17, 8.83it/s][A[A
41% 1552/3823 [02:55<04:17, 8.84it/s][A[A
41% 1553/3823 [02:55<04:17, 8.83it/s][A[A
41% 1554/3823 [02:55<04:16, 8.84it/s][A[A
41% 1555/3823 [02:55<04:16, 8.84it/s][A[A
41% 1556/3823 [02:55<04:16, 8.84it/s][A[A
41% 1557/3823 [02:55<04:16, 8.83it/s][A[A
41% 1558/3823 [02:55<04:16, 8.83it/s][A[A
41% 1559/3823 [02:56<04:16, 8.83it/s][A[A
41% 1560/3823 [02:56<04:16, 8.82it/s][A[A
41% 1561/3823 [02:56<04:16, 8.82it/s][A[A
41% 1562/3823 [02:56<04:16, 8.83it/s][A[A
41% 1563/3823 [02:56<04:15, 8.83it/s][A[A
41% 1564/3823 [02:56<04:15, 8.83it/s][A[A
41% 1565/3823 [02:56<04:15, 8.83it/s][A[A
41% 1566/3823 [02:56<04:15, 8.84it/s][A[A
41% 1567/3823 [02:56<04:15, 8.84it/s][A[A
41% 1568/3823 [02:57<04:15, 8.83it/s][A[A
41% 1569/3823 [02:57<04:14, 8.84it/s][A[A
41% 1570/3823 [02:57<04:14, 8.84it/s][A[A
41% 1571/3823 [02:57<04:14, 8.84it/s][A[A
41% 1572/3823 [02:57<04:15, 8.82it/s][A[A
41% 1573/3823 [02:57<04:15, 8.80it/s][A[A
41% 1574/3823 [02:57<04:15, 8.80it/s][A[A
41% 1575/3823 [02:57<04:15, 8.78it/s][A[A
41% 1576/3823 [02:57<04:15, 8.80it/s][A[A
41% 1577/3823 [02:58<04:14, 8.81it/s][A[A
41% 1578/3823 [02:58<04:14, 8.82it/s][A[A
41% 1579/3823 [02:58<04:14, 8.83it/s][A[A
41% 1580/3823 [02:58<04:14, 8.83it/s][A[A
41% 1581/3823 [02:58<04:14, 8.81it/s][A[A
41% 1582/3823 [02:58<04:14, 8.82it/s][A[A
41% 1583/3823 [02:58<04:13, 8.83it/s][A[A
41% 1584/3823 [02:58<04:13, 8.84it/s][A[A
41% 1585/3823 [02:59<04:13, 8.84it/s][A[A
41% 1586/3823 [02:59<04:13, 8.82it/s][A[A
42% 1587/3823 [02:59<04:13, 8.83it/s][A[A
42% 1588/3823 [02:59<04:13, 8.82it/s][A[A
42% 1589/3823 [02:59<04:13, 8.82it/s][A[A
42% 1590/3823 [02:59<04:12, 8.83it/s][A[A
42% 1591/3823 [02:59<04:12, 8.84it/s][A[A
42% 1592/3823 [02:59<04:12, 8.84it/s][A[A
42% 1593/3823 [02:59<04:11, 8.85it/s][A[A
42% 1594/3823 [03:00<04:11, 8.85it/s][A[A
42% 1595/3823 [03:00<04:12, 8.84it/s][A[A
42% 1596/3823 [03:00<04:11, 8.85it/s][A[A
42% 1597/3823 [03:00<04:11, 8.84it/s][A[A
42% 1598/3823 [03:00<04:11, 8.84it/s][A[A
42% 1599/3823 [03:00<04:11, 8.83it/s][A[A
42% 1600/3823 [03:00<04:11, 8.82it/s][A[A
42% 1601/3823 [03:00<04:11, 8.82it/s][A[A
42% 1602/3823 [03:00<04:11, 8.82it/s][A[A
42% 1603/3823 [03:01<04:11, 8.83it/s][A[A
42% 1604/3823 [03:01<04:11, 8.84it/s][A[A
42% 1605/3823 [03:01<04:11, 8.84it/s][A[A
42% 1606/3823 [03:01<04:10, 8.85it/s][A[A
42% 1607/3823 [03:01<04:10, 8.84it/s][A[A
42% 1608/3823 [03:01<04:10, 8.85it/s][A[A
42% 1609/3823 [03:01<04:10, 8.85it/s][A[A
42% 1610/3823 [03:01<04:10, 8.85it/s][A[A
42% 1611/3823 [03:01<04:09, 8.85it/s][A[A
42% 1612/3823 [03:02<04:09, 8.86it/s][A[A
42% 1613/3823 [03:02<04:09, 8.85it/s][A[A
42% 1614/3823 [03:02<04:09, 8.85it/s][A[A
42% 1615/3823 [03:02<04:09, 8.85it/s][A[A
42% 1616/3823 [03:02<04:09, 8.85it/s][A[A
42% 1617/3823 [03:02<04:09, 8.85it/s][A[A
42% 1618/3823 [03:02<04:09, 8.85it/s][A[A
42% 1619/3823 [03:02<04:09, 8.85it/s][A[A
42% 1620/3823 [03:02<04:09, 8.85it/s][A[A
42% 1621/3823 [03:03<04:09, 8.84it/s][A[A
42% 1622/3823 [03:03<04:08, 8.84it/s][A[A
42% 1623/3823 [03:03<04:08, 8.84it/s][A[A
42% 1624/3823 [03:03<04:08, 8.84it/s][A[A
43% 1625/3823 [03:03<04:08, 8.84it/s][A[A
43% 1626/3823 [03:03<04:08, 8.83it/s][A[A
43% 1627/3823 [03:03<04:08, 8.83it/s][A[A
43% 1628/3823 [03:03<04:08, 8.84it/s][A[A
43% 1629/3823 [03:03<04:08, 8.83it/s][A[A
43% 1630/3823 [03:04<04:08, 8.82it/s][A[A
43% 1631/3823 [03:04<04:08, 8.83it/s][A[A
43% 1632/3823 [03:04<04:08, 8.83it/s][A[A
43% 1633/3823 [03:04<04:07, 8.84it/s][A[A
43% 1634/3823 [03:04<04:07, 8.84it/s][A[A
43% 1635/3823 [03:04<04:07, 8.84it/s][A[A
43% 1636/3823 [03:04<04:07, 8.83it/s][A[A
43% 1637/3823 [03:04<04:07, 8.83it/s][A[A
43% 1638/3823 [03:05<04:07, 8.82it/s][A[A
43% 1639/3823 [03:05<04:07, 8.82it/s][A[A
43% 1640/3823 [03:05<04:07, 8.81it/s][A[A
43% 1641/3823 [03:05<04:07, 8.82it/s][A[A
43% 1642/3823 [03:05<04:07, 8.83it/s][A[A
43% 1643/3823 [03:05<04:07, 8.82it/s][A[A
43% 1644/3823 [03:05<04:07, 8.80it/s][A[A
43% 1645/3823 [03:05<04:07, 8.80it/s][A[A
43% 1646/3823 [03:05<04:07, 8.81it/s][A[A
43% 1647/3823 [03:06<04:06, 8.82it/s][A[A
43% 1648/3823 [03:06<04:06, 8.82it/s][A[A
43% 1649/3823 [03:06<04:06, 8.83it/s][A[A
43% 1650/3823 [03:06<04:05, 8.84it/s][A[A
43% 1651/3823 [03:06<04:05, 8.84it/s][A[A
43% 1652/3823 [03:06<04:05, 8.84it/s][A[A
43% 1653/3823 [03:06<04:05, 8.83it/s][A[A
43% 1654/3823 [03:06<04:05, 8.83it/s][A[A
43% 1655/3823 [03:06<04:05, 8.84it/s][A[A
43% 1656/3823 [03:07<04:05, 8.84it/s][A[A
43% 1657/3823 [03:07<04:04, 8.84it/s][A[A
43% 1658/3823 [03:07<04:04, 8.84it/s][A[A
43% 1659/3823 [03:07<04:04, 8.85it/s][A[A
43% 1660/3823 [03:07<04:04, 8.85it/s][A[A
43% 1661/3823 [03:07<04:04, 8.84it/s][A[A
43% 1662/3823 [03:07<04:04, 8.84it/s][A[A
43% 1663/3823 [03:07<04:04, 8.84it/s][A[A
44% 1664/3823 [03:07<04:04, 8.85it/s][A[A
44% 1665/3823 [03:08<04:03, 8.85it/s][A[A
44% 1666/3823 [03:08<04:03, 8.84it/s][A[A
44% 1667/3823 [03:08<04:03, 8.84it/s][A[A
44% 1668/3823 [03:08<04:03, 8.84it/s][A[A
44% 1669/3823 [03:08<04:03, 8.83it/s][A[A
44% 1670/3823 [03:08<04:03, 8.84it/s][A[A
44% 1671/3823 [03:08<04:03, 8.84it/s][A[A
44% 1672/3823 [03:08<04:03, 8.84it/s][A[A
44% 1673/3823 [03:08<04:03, 8.84it/s][A[A
44% 1674/3823 [03:09<04:03, 8.84it/s][A[A
44% 1675/3823 [03:09<04:02, 8.84it/s][A[A
44% 1676/3823 [03:09<04:02, 8.84it/s][A[A
44% 1677/3823 [03:09<04:02, 8.84it/s][A[A
44% 1678/3823 [03:09<04:02, 8.84it/s][A[A
44% 1679/3823 [03:09<04:02, 8.83it/s][A[A
44% 1680/3823 [03:09<04:02, 8.82it/s][A[A
44% 1681/3823 [03:09<04:02, 8.82it/s][A[A
44% 1682/3823 [03:09<04:02, 8.83it/s][A[A
44% 1683/3823 [03:10<04:02, 8.83it/s][A[A
44% 1684/3823 [03:10<04:02, 8.83it/s][A[A
44% 1685/3823 [03:10<04:01, 8.84it/s][A[A
44% 1686/3823 [03:10<04:01, 8.84it/s][A[A
44% 1687/3823 [03:10<04:01, 8.84it/s][A[A
44% 1688/3823 [03:10<04:01, 8.84it/s][A[A
44% 1689/3823 [03:10<04:01, 8.84it/s][A[A
44% 1690/3823 [03:10<04:01, 8.84it/s][A[A
44% 1691/3823 [03:11<04:01, 8.84it/s][A[A
44% 1692/3823 [03:11<04:01, 8.83it/s][A[A
44% 1693/3823 [03:11<04:01, 8.83it/s][A[A
44% 1694/3823 [03:11<04:01, 8.83it/s][A[A
44% 1695/3823 [03:11<04:00, 8.84it/s][A[A
44% 1696/3823 [03:11<04:00, 8.84it/s][A[A
44% 1697/3823 [03:11<04:00, 8.84it/s][A[A
44% 1698/3823 [03:11<04:00, 8.85it/s][A[A
44% 1699/3823 [03:11<03:59, 8.85it/s][A[A
44% 1700/3823 [03:12<03:59, 8.86it/s][A[A
44% 1701/3823 [03:12<03:59, 8.85it/s][A[A
45% 1702/3823 [03:12<03:59, 8.85it/s][A[A
45% 1703/3823 [03:12<03:59, 8.86it/s][A[A
45% 1704/3823 [03:12<03:59, 8.86it/s][A[A
45% 1705/3823 [03:12<03:59, 8.85it/s][A[A
45% 1706/3823 [03:12<03:59, 8.84it/s][A[A
45% 1707/3823 [03:12<03:59, 8.83it/s][A[A
45% 1708/3823 [03:12<03:59, 8.83it/s][A[A
45% 1709/3823 [03:13<03:59, 8.83it/s][A[A
45% 1710/3823 [03:13<03:59, 8.83it/s][A[A
45% 1711/3823 [03:13<03:58, 8.84it/s][A[A
45% 1712/3823 [03:13<03:58, 8.84it/s][A[A
45% 1713/3823 [03:13<03:58, 8.84it/s][A[A
45% 1714/3823 [03:13<03:58, 8.83it/s][A[A
45% 1715/3823 [03:13<03:58, 8.83it/s][A[A
45% 1716/3823 [03:13<03:58, 8.83it/s][A[A
45% 1717/3823 [03:13<03:58, 8.83it/s][A[A
45% 1718/3823 [03:14<03:58, 8.83it/s][A[A
45% 1719/3823 [03:14<03:58, 8.84it/s][A[A
45% 1720/3823 [03:14<03:58, 8.83it/s][A[A
45% 1721/3823 [03:14<03:57, 8.84it/s][A[A
45% 1722/3823 [03:14<03:57, 8.83it/s][A[A
45% 1723/3823 [03:14<03:57, 8.83it/s][A[A
45% 1724/3823 [03:14<03:57, 8.82it/s][A[A
45% 1725/3823 [03:14<03:57, 8.83it/s][A[A
45% 1726/3823 [03:14<03:57, 8.83it/s][A[A
45% 1727/3823 [03:15<03:57, 8.84it/s][A[A
45% 1728/3823 [03:15<03:56, 8.85it/s][A[A
45% 1729/3823 [03:15<03:56, 8.85it/s][A[A
45% 1730/3823 [03:15<03:56, 8.85it/s][A[A
45% 1731/3823 [03:15<03:56, 8.85it/s][A[A
45% 1732/3823 [03:15<03:56, 8.84it/s][A[A
45% 1733/3823 [03:15<03:56, 8.83it/s][A[A
45% 1734/3823 [03:15<03:56, 8.84it/s][A[A
45% 1735/3823 [03:15<03:56, 8.84it/s][A[A
45% 1736/3823 [03:16<03:56, 8.84it/s][A[A
45% 1737/3823 [03:16<03:55, 8.84it/s][A[A
45% 1738/3823 [03:16<03:55, 8.85it/s][A[A
45% 1739/3823 [03:16<03:55, 8.85it/s][A[A
46% 1740/3823 [03:16<03:55, 8.83it/s][A[A
46% 1741/3823 [03:16<03:55, 8.84it/s][A[A
46% 1742/3823 [03:16<03:55, 8.84it/s][A[A
46% 1743/3823 [03:16<03:55, 8.84it/s][A[A
46% 1744/3823 [03:16<03:54, 8.85it/s][A[A
46% 1745/3823 [03:17<03:55, 8.84it/s][A[A
46% 1746/3823 [03:17<03:55, 8.84it/s][A[A
46% 1747/3823 [03:17<03:55, 8.83it/s][A[A
46% 1748/3823 [03:17<03:54, 8.83it/s][A[A
46% 1749/3823 [03:17<03:54, 8.83it/s][A[A
46% 1750/3823 [03:17<03:54, 8.83it/s][A[A
46% 1751/3823 [03:17<03:54, 8.82it/s][A[A
46% 1752/3823 [03:17<03:54, 8.82it/s][A[A
46% 1753/3823 [03:18<03:54, 8.82it/s][A[A
46% 1754/3823 [03:18<03:54, 8.82it/s][A[A
46% 1755/3823 [03:18<03:54, 8.82it/s][A[A
46% 1756/3823 [03:18<03:54, 8.83it/s][A[A
46% 1757/3823 [03:18<03:53, 8.84it/s][A[A
46% 1758/3823 [03:18<03:53, 8.84it/s][A[A
46% 1759/3823 [03:18<03:53, 8.84it/s][A[A
46% 1760/3823 [03:18<03:53, 8.82it/s][A[A
46% 1761/3823 [03:18<03:53, 8.82it/s][A[A
46% 1762/3823 [03:19<03:53, 8.81it/s][A[A
46% 1763/3823 [03:19<03:53, 8.82it/s][A[A
46% 1764/3823 [03:19<03:53, 8.82it/s][A[A
46% 1765/3823 [03:19<03:52, 8.84it/s][A[A
46% 1766/3823 [03:19<03:52, 8.84it/s][A[A
46% 1767/3823 [03:19<03:52, 8.84it/s][A[A
46% 1768/3823 [03:19<03:52, 8.84it/s][A[A
46% 1769/3823 [03:19<03:52, 8.84it/s][A[A
46% 1770/3823 [03:19<03:52, 8.84it/s][A[A
46% 1771/3823 [03:20<03:52, 8.84it/s][A[A
46% 1772/3823 [03:20<03:52, 8.82it/s][A[A
46% 1773/3823 [03:20<03:52, 8.81it/s][A[A
46% 1774/3823 [03:20<03:52, 8.81it/s][A[A
46% 1775/3823 [03:20<03:52, 8.81it/s][A[A
46% 1776/3823 [03:20<03:52, 8.81it/s][A[A
46% 1777/3823 [03:20<03:52, 8.82it/s][A[A
47% 1778/3823 [03:20<03:51, 8.83it/s][A[A
47% 1779/3823 [03:20<03:51, 8.83it/s][A[A
47% 1780/3823 [03:21<03:51, 8.82it/s][A[A
47% 1781/3823 [03:21<03:51, 8.82it/s][A[A
47% 1782/3823 [03:21<03:51, 8.81it/s][A[A
47% 1783/3823 [03:21<03:51, 8.81it/s][A[A
47% 1784/3823 [03:21<03:51, 8.81it/s][A[A
47% 1785/3823 [03:21<03:51, 8.82it/s][A[A
47% 1786/3823 [03:21<03:51, 8.82it/s][A[A
47% 1787/3823 [03:21<03:51, 8.81it/s][A[A
47% 1788/3823 [03:21<03:50, 8.82it/s][A[A
47% 1789/3823 [03:22<03:50, 8.82it/s][A[A
47% 1790/3823 [03:22<03:50, 8.82it/s][A[A
47% 1791/3823 [03:22<03:50, 8.81it/s][A[A
47% 1792/3823 [03:22<03:50, 8.81it/s][A[A
47% 1793/3823 [03:22<03:50, 8.81it/s][A[A
47% 1794/3823 [03:22<03:50, 8.81it/s][A[A
47% 1795/3823 [03:22<03:49, 8.83it/s][A[A
47% 1796/3823 [03:22<03:49, 8.82it/s][A[A
47% 1797/3823 [03:23<03:49, 8.84it/s][A[A
47% 1798/3823 [03:23<03:48, 8.85it/s][A[A
47% 1799/3823 [03:23<03:49, 8.84it/s][A[A
47% 1800/3823 [03:23<03:48, 8.83it/s][A[A
47% 1801/3823 [03:23<03:48, 8.84it/s][A[A
47% 1802/3823 [03:23<03:48, 8.85it/s][A[A
47% 1803/3823 [03:23<03:48, 8.84it/s][A[A
47% 1804/3823 [03:23<03:48, 8.83it/s][A[A
47% 1805/3823 [03:23<03:48, 8.83it/s][A[A
47% 1806/3823 [03:24<03:48, 8.84it/s][A[A
47% 1807/3823 [03:24<03:48, 8.84it/s][A[A
47% 1808/3823 [03:24<03:48, 8.83it/s][A[A
47% 1809/3823 [03:24<03:48, 8.81it/s][A[A
47% 1810/3823 [03:24<03:49, 8.79it/s][A[A
47% 1811/3823 [03:24<03:48, 8.79it/s][A[A
47% 1812/3823 [03:24<03:49, 8.75it/s][A[A
47% 1813/3823 [03:24<03:49, 8.76it/s][A[A
47% 1814/3823 [03:24<03:48, 8.78it/s][A[A
47% 1815/3823 [03:25<03:48, 8.80it/s][A[A
48% 1816/3823 [03:25<03:47, 8.81it/s][A[A
48% 1817/3823 [03:25<03:47, 8.81it/s][A[A
48% 1818/3823 [03:25<03:47, 8.80it/s][A[A
48% 1819/3823 [03:25<03:47, 8.81it/s][A[A
48% 1820/3823 [03:25<03:47, 8.81it/s][A[A
48% 1821/3823 [03:25<03:47, 8.80it/s][A[A
48% 1822/3823 [03:25<03:47, 8.81it/s][A[A
48% 1823/3823 [03:25<03:47, 8.81it/s][A[A
48% 1824/3823 [03:26<03:46, 8.81it/s][A[A
48% 1825/3823 [03:26<03:46, 8.80it/s][A[A
48% 1826/3823 [03:26<03:47, 8.79it/s][A[A
48% 1827/3823 [03:26<03:46, 8.80it/s][A[A
48% 1828/3823 [03:26<03:46, 8.82it/s][A[A
48% 1829/3823 [03:26<03:46, 8.82it/s][A[A
48% 1830/3823 [03:26<03:46, 8.82it/s][A[A
48% 1831/3823 [03:26<03:45, 8.83it/s][A[A
48% 1832/3823 [03:26<03:45, 8.84it/s][A[A
48% 1833/3823 [03:27<03:45, 8.84it/s][A[A
48% 1834/3823 [03:27<03:45, 8.84it/s][A[A
48% 1835/3823 [03:27<03:44, 8.84it/s][A[A
48% 1836/3823 [03:27<03:44, 8.84it/s][A[A
48% 1837/3823 [03:27<03:44, 8.84it/s][A[A
48% 1838/3823 [03:27<03:44, 8.84it/s][A[A
48% 1839/3823 [03:27<03:45, 8.81it/s][A[A
48% 1840/3823 [03:27<03:45, 8.81it/s][A[A
48% 1841/3823 [03:27<03:44, 8.81it/s][A[A
48% 1842/3823 [03:28<03:44, 8.82it/s][A[A
48% 1843/3823 [03:28<03:44, 8.83it/s][A[A
48% 1844/3823 [03:28<03:44, 8.83it/s][A[A
48% 1845/3823 [03:28<03:43, 8.84it/s][A[A
48% 1846/3823 [03:28<03:43, 8.83it/s][A[A
48% 1847/3823 [03:28<03:44, 8.82it/s][A[A
48% 1848/3823 [03:28<03:44, 8.81it/s][A[A
48% 1849/3823 [03:28<03:43, 8.82it/s][A[A
48% 1850/3823 [03:29<03:43, 8.82it/s][A[A
48% 1851/3823 [03:29<03:43, 8.82it/s][A[A
48% 1852/3823 [03:29<03:43, 8.82it/s][A[A
48% 1853/3823 [03:29<03:43, 8.82it/s][A[A
48% 1854/3823 [03:29<03:43, 8.82it/s][A[A
49% 1855/3823 [03:29<03:43, 8.82it/s][A[A
49% 1856/3823 [03:29<03:42, 8.82it/s][A[A
49% 1857/3823 [03:29<03:43, 8.82it/s][A[A
49% 1858/3823 [03:29<03:42, 8.81it/s][A[A
49% 1859/3823 [03:30<03:42, 8.82it/s][A[A
49% 1860/3823 [03:30<03:42, 8.82it/s][A[A
49% 1861/3823 [03:30<03:42, 8.83it/s][A[A
49% 1862/3823 [03:30<03:42, 8.83it/s][A[A
49% 1863/3823 [03:30<03:41, 8.83it/s][A[A
49% 1864/3823 [03:30<03:41, 8.83it/s][A[A
49% 1865/3823 [03:30<03:41, 8.83it/s][A[A
49% 1866/3823 [03:30<03:41, 8.83it/s][A[A
49% 1867/3823 [03:30<03:41, 8.83it/s][A[A
49% 1868/3823 [03:31<03:41, 8.83it/s][A[A
49% 1869/3823 [03:31<03:41, 8.83it/s][A[A
49% 1870/3823 [03:31<03:41, 8.83it/s][A[A
49% 1871/3823 [03:31<03:41, 8.83it/s][A[A
49% 1872/3823 [03:31<03:40, 8.83it/s][A[A
49% 1873/3823 [03:31<03:40, 8.82it/s][A[A
49% 1874/3823 [03:31<03:41, 8.82it/s][A[A
49% 1875/3823 [03:31<03:40, 8.83it/s][A[A
49% 1876/3823 [03:31<03:40, 8.84it/s][A[A
49% 1877/3823 [03:32<03:40, 8.84it/s][A[A
49% 1878/3823 [03:32<03:40, 8.84it/s][A[A
49% 1879/3823 [03:32<03:40, 8.83it/s][A[A
49% 1880/3823 [03:32<03:40, 8.83it/s][A[A
49% 1881/3823 [03:32<03:40, 8.82it/s][A[A
49% 1882/3823 [03:32<03:39, 8.83it/s][A[A
49% 1883/3823 [03:32<03:40, 8.81it/s][A[A
49% 1884/3823 [03:32<03:39, 8.81it/s][A[A
49% 1885/3823 [03:32<03:39, 8.83it/s][A[A
49% 1886/3823 [03:33<03:39, 8.83it/s][A[A
49% 1887/3823 [03:33<03:39, 8.82it/s][A[A
49% 1888/3823 [03:33<03:39, 8.82it/s][A[A
49% 1889/3823 [03:33<03:39, 8.82it/s][A[A
49% 1890/3823 [03:33<03:38, 8.84it/s][A[A
49% 1891/3823 [03:33<03:38, 8.84it/s][A[A
49% 1892/3823 [03:33<03:38, 8.85it/s][A[A
50% 1893/3823 [03:33<03:38, 8.84it/s][A[A
50% 1894/3823 [03:33<03:38, 8.83it/s][A[A
50% 1895/3823 [03:34<03:38, 8.84it/s][A[A
50% 1896/3823 [03:34<03:38, 8.84it/s][A[A
50% 1897/3823 [03:34<03:37, 8.84it/s][A[A
50% 1898/3823 [03:34<03:38, 8.83it/s][A[A
50% 1899/3823 [03:34<03:37, 8.84it/s][A[A
50% 1900/3823 [03:34<03:37, 8.83it/s][A[A
50% 1901/3823 [03:34<03:37, 8.83it/s][A[A
50% 1902/3823 [03:34<03:37, 8.84it/s][A[A
50% 1903/3823 [03:35<03:37, 8.83it/s][A[A
50% 1904/3823 [03:35<03:37, 8.83it/s][A[A
50% 1905/3823 [03:35<03:37, 8.83it/s][A[A
50% 1906/3823 [03:35<03:36, 8.84it/s][A[A
50% 1907/3823 [03:35<03:36, 8.84it/s][A[A
50% 1908/3823 [03:35<03:36, 8.85it/s][A[A
50% 1909/3823 [03:35<03:36, 8.85it/s][A[A
50% 1910/3823 [03:35<03:36, 8.84it/s][A[A
50% 1911/3823 [03:35<03:36, 8.84it/s][A[A
50% 1912/3823 [03:36<03:36, 8.84it/s][A[A
50% 1913/3823 [03:36<03:36, 8.83it/s][A[A
50% 1914/3823 [03:36<03:36, 8.84it/s][A[A
50% 1915/3823 [03:36<03:35, 8.84it/s][A[A
50% 1916/3823 [03:36<03:35, 8.84it/s][A[A
50% 1917/3823 [03:36<03:35, 8.84it/s][A[A
50% 1918/3823 [03:36<03:35, 8.85it/s][A[A
50% 1919/3823 [03:36<03:35, 8.85it/s][A[A
50% 1920/3823 [03:36<03:35, 8.85it/s][A[A
50% 1921/3823 [03:37<03:34, 8.85it/s][A[A
50% 1922/3823 [03:37<03:34, 8.86it/s][A[A
50% 1923/3823 [03:37<03:34, 8.86it/s][A[A
50% 1924/3823 [03:37<03:34, 8.86it/s][A[A
50% 1925/3823 [03:37<03:34, 8.86it/s][A[A
50% 1926/3823 [03:37<03:34, 8.85it/s][A[A
50% 1927/3823 [03:37<03:34, 8.83it/s][A[A
50% 1928/3823 [03:37<03:34, 8.83it/s][A[A
50% 1929/3823 [03:37<03:34, 8.83it/s][A[A
50% 1930/3823 [03:38<03:34, 8.82it/s][A[A
51% 1931/3823 [03:38<03:34, 8.83it/s][A[A
51% 1932/3823 [03:38<03:33, 8.84it/s][A[A
51% 1933/3823 [03:38<03:33, 8.85it/s][A[A
51% 1934/3823 [03:38<03:33, 8.84it/s][A[A
51% 1935/3823 [03:38<03:33, 8.84it/s][A[A
51% 1936/3823 [03:38<03:33, 8.83it/s][A[A
51% 1937/3823 [03:38<03:33, 8.83it/s][A[A
51% 1938/3823 [03:38<03:33, 8.82it/s][A[A
51% 1939/3823 [03:39<03:33, 8.82it/s][A[A
51% 1940/3823 [03:39<03:33, 8.82it/s][A[A
51% 1941/3823 [03:39<03:33, 8.83it/s][A[A
51% 1942/3823 [03:39<03:32, 8.84it/s][A[A
51% 1943/3823 [03:39<03:32, 8.84it/s][A[A
51% 1944/3823 [03:39<03:33, 8.82it/s][A[A
51% 1945/3823 [03:39<03:32, 8.82it/s][A[A
51% 1946/3823 [03:39<03:32, 8.83it/s][A[A
51% 1947/3823 [03:39<03:32, 8.84it/s][A[A
51% 1948/3823 [03:40<03:31, 8.84it/s][A[A
51% 1949/3823 [03:40<03:31, 8.85it/s][A[A
51% 1950/3823 [03:40<03:31, 8.85it/s][A[A
51% 1951/3823 [03:40<03:31, 8.83it/s][A[A
51% 1952/3823 [03:40<03:31, 8.84it/s][A[A
51% 1953/3823 [03:40<03:31, 8.83it/s][A[A
51% 1954/3823 [03:40<03:31, 8.83it/s][A[A
51% 1955/3823 [03:40<03:31, 8.84it/s][A[A
51% 1956/3823 [03:41<03:31, 8.84it/s][A[A
51% 1957/3823 [03:41<03:30, 8.85it/s][A[A
51% 1958/3823 [03:41<03:30, 8.85it/s][A[A
51% 1959/3823 [03:41<03:30, 8.85it/s][A[A
51% 1960/3823 [03:41<03:30, 8.85it/s][A[A
51% 1961/3823 [03:41<03:30, 8.84it/s][A[A
51% 1962/3823 [03:41<03:30, 8.85it/s][A[A
51% 1963/3823 [03:41<03:30, 8.85it/s][A[A
51% 1964/3823 [03:41<03:29, 8.85it/s][A[A
51% 1965/3823 [03:42<03:29, 8.86it/s][A[A
51% 1966/3823 [03:42<03:29, 8.85it/s][A[A
51% 1967/3823 [03:42<03:30, 8.83it/s][A[A
51% 1968/3823 [03:42<03:30, 8.83it/s][A[A
52% 1969/3823 [03:42<03:30, 8.82it/s][A[A
52% 1970/3823 [03:42<03:29, 8.82it/s][A[A
52% 1971/3823 [03:42<03:29, 8.83it/s][A[A
52% 1972/3823 [03:42<03:29, 8.84it/s][A[A
52% 1973/3823 [03:42<03:29, 8.84it/s][A[A
52% 1974/3823 [03:43<03:29, 8.84it/s][A[A
52% 1975/3823 [03:43<03:28, 8.85it/s][A[A
52% 1976/3823 [03:43<03:28, 8.85it/s][A[A
52% 1977/3823 [03:43<03:28, 8.86it/s][A[A
52% 1978/3823 [03:43<03:28, 8.85it/s][A[A
52% 1979/3823 [03:43<03:28, 8.84it/s][A[A
52% 1980/3823 [03:43<03:28, 8.84it/s][A[A
52% 1981/3823 [03:43<03:28, 8.84it/s][A[A
52% 1982/3823 [03:43<03:28, 8.84it/s][A[A
52% 1983/3823 [03:44<03:28, 8.83it/s][A[A
52% 1984/3823 [03:44<03:28, 8.84it/s][A[A
52% 1985/3823 [03:44<03:28, 8.83it/s][A[A
52% 1986/3823 [03:44<03:27, 8.84it/s][A[A
52% 1987/3823 [03:44<03:27, 8.85it/s][A[A
52% 1988/3823 [03:44<03:27, 8.85it/s][A[A
52% 1989/3823 [03:44<03:27, 8.85it/s][A[A
52% 1990/3823 [03:44<03:27, 8.85it/s][A[A
52% 1991/3823 [03:44<03:27, 8.84it/s][A[A
52% 1992/3823 [03:45<03:27, 8.84it/s][A[A
52% 1993/3823 [03:45<03:27, 8.83it/s][A[A
52% 1994/3823 [03:45<03:27, 8.83it/s][A[A
52% 1995/3823 [03:45<03:26, 8.84it/s][A[A
52% 1996/3823 [03:45<03:26, 8.84it/s][A[A
52% 1997/3823 [03:45<03:26, 8.85it/s][A[A
52% 1998/3823 [03:45<03:26, 8.85it/s][A[A
52% 1999/3823 [03:45<03:26, 8.84it/s][A[A
52% 2000/3823 [03:45<03:26, 8.85it/s][A[A
52% 2001/3823 [03:46<03:25, 8.85it/s][A[A
52% 2002/3823 [03:46<03:26, 8.84it/s][A[A
52% 2003/3823 [03:46<03:25, 8.84it/s][A[A
52% 2004/3823 [03:46<03:25, 8.84it/s][A[A
52% 2005/3823 [03:46<03:25, 8.84it/s][A[A
52% 2006/3823 [03:46<03:25, 8.83it/s][A[A
52% 2007/3823 [03:46<03:26, 8.81it/s][A[A
53% 2008/3823 [03:46<03:25, 8.82it/s][A[A
53% 2009/3823 [03:47<03:25, 8.83it/s][A[A
53% 2010/3823 [03:47<03:25, 8.83it/s][A[A
53% 2011/3823 [03:47<03:25, 8.84it/s][A[A
53% 2012/3823 [03:47<03:24, 8.85it/s][A[A
53% 2013/3823 [03:47<03:24, 8.86it/s][A[A
53% 2014/3823 [03:47<03:24, 8.85it/s][A[A
53% 2015/3823 [03:47<03:24, 8.84it/s][A[A
53% 2016/3823 [03:47<03:24, 8.84it/s][A[A
53% 2017/3823 [03:47<03:24, 8.84it/s][A[A
53% 2018/3823 [03:48<03:24, 8.84it/s][A[A
53% 2019/3823 [03:48<03:24, 8.84it/s][A[A
53% 2020/3823 [03:48<03:24, 8.83it/s][A[A
53% 2021/3823 [03:48<03:24, 8.83it/s][A[A
53% 2022/3823 [03:48<03:24, 8.82it/s][A[A
53% 2023/3823 [03:48<03:24, 8.82it/s][A[A
53% 2024/3823 [03:48<03:23, 8.83it/s][A[A
53% 2025/3823 [03:48<03:23, 8.84it/s][A[A
53% 2026/3823 [03:48<03:23, 8.83it/s][A[A
53% 2027/3823 [03:49<03:23, 8.83it/s][A[A
53% 2028/3823 [03:49<03:23, 8.84it/s][A[A
53% 2029/3823 [03:49<03:22, 8.84it/s][A[A
53% 2030/3823 [03:49<03:22, 8.84it/s][A[A
53% 2031/3823 [03:49<03:23, 8.82it/s][A[A
53% 2032/3823 [03:49<03:22, 8.83it/s][A[A
53% 2033/3823 [03:49<03:22, 8.83it/s][A[A
53% 2034/3823 [03:49<03:22, 8.83it/s][A[A
53% 2035/3823 [03:49<03:22, 8.84it/s][A[A
53% 2036/3823 [03:50<03:22, 8.84it/s][A[A
53% 2037/3823 [03:50<03:21, 8.85it/s][A[A
53% 2038/3823 [03:50<03:21, 8.85it/s][A[A
53% 2039/3823 [03:50<03:21, 8.85it/s][A[A
53% 2040/3823 [03:50<03:21, 8.85it/s][A[A
53% 2041/3823 [03:50<03:21, 8.85it/s][A[A
53% 2042/3823 [03:50<03:21, 8.85it/s][A[A
53% 2043/3823 [03:50<03:20, 8.86it/s][A[A
53% 2044/3823 [03:50<03:20, 8.86it/s][A[A
53% 2045/3823 [03:51<03:20, 8.85it/s][A[A
54% 2046/3823 [03:51<03:20, 8.84it/s][A[A
54% 2047/3823 [03:51<03:21, 8.83it/s][A[A
54% 2048/3823 [03:51<03:20, 8.84it/s][A[A
54% 2049/3823 [03:51<03:20, 8.84it/s][A[A
54% 2050/3823 [03:51<03:20, 8.84it/s][A[A
54% 2051/3823 [03:51<03:20, 8.84it/s][A[A
54% 2052/3823 [03:51<03:20, 8.84it/s][A[A
54% 2053/3823 [03:51<03:20, 8.84it/s][A[A
54% 2054/3823 [03:52<03:20, 8.84it/s][A[A
54% 2055/3823 [03:52<03:20, 8.83it/s][A[A
54% 2056/3823 [03:52<03:19, 8.84it/s][A[A
54% 2057/3823 [03:52<03:19, 8.84it/s][A[A
54% 2058/3823 [03:52<03:19, 8.84it/s][A[A
54% 2059/3823 [03:52<03:19, 8.84it/s][A[A
54% 2060/3823 [03:52<03:19, 8.84it/s][A[A
54% 2061/3823 [03:52<03:19, 8.85it/s][A[A
54% 2062/3823 [03:53<03:19, 8.85it/s][A[A
54% 2063/3823 [03:53<03:19, 8.84it/s][A[A
54% 2064/3823 [03:53<03:18, 8.85it/s][A[A
54% 2065/3823 [03:53<03:18, 8.85it/s][A[A
54% 2066/3823 [03:53<03:18, 8.85it/s][A[A
54% 2067/3823 [03:53<03:18, 8.85it/s][A[A
54% 2068/3823 [03:53<03:18, 8.85it/s][A[A
54% 2069/3823 [03:53<03:18, 8.84it/s][A[A
54% 2070/3823 [03:53<03:18, 8.84it/s][A[A
54% 2071/3823 [03:54<03:18, 8.83it/s][A[A
54% 2072/3823 [03:54<03:18, 8.83it/s][A[A
54% 2073/3823 [03:54<03:18, 8.82it/s][A[A
54% 2074/3823 [03:54<03:17, 8.84it/s][A[A
54% 2075/3823 [03:54<03:17, 8.84it/s][A[A
54% 2076/3823 [03:54<03:17, 8.84it/s][A[A
54% 2077/3823 [03:54<03:17, 8.84it/s][A[A
54% 2078/3823 [03:54<03:17, 8.84it/s][A[A
54% 2079/3823 [03:54<03:17, 8.83it/s][A[A
54% 2080/3823 [03:55<03:17, 8.84it/s][A[A
54% 2081/3823 [03:55<03:17, 8.84it/s][A[A
54% 2082/3823 [03:55<03:16, 8.85it/s][A[A
54% 2083/3823 [03:55<03:16, 8.85it/s][A[A
55% 2084/3823 [03:55<03:16, 8.85it/s][A[A
55% 2085/3823 [03:55<03:16, 8.84it/s][A[A
55% 2086/3823 [03:55<03:16, 8.82it/s][A[A
55% 2087/3823 [03:55<03:16, 8.82it/s][A[A
55% 2088/3823 [03:55<03:16, 8.82it/s][A[A
55% 2089/3823 [03:56<03:16, 8.84it/s][A[A
55% 2090/3823 [03:56<03:16, 8.83it/s][A[A
55% 2091/3823 [03:56<03:15, 8.84it/s][A[A
55% 2092/3823 [03:56<03:15, 8.84it/s][A[A
55% 2093/3823 [03:56<03:15, 8.84it/s][A[A
55% 2094/3823 [03:56<03:15, 8.85it/s][A[A
55% 2095/3823 [03:56<03:15, 8.84it/s][A[A
55% 2096/3823 [03:56<03:15, 8.85it/s][A[A
55% 2097/3823 [03:56<03:15, 8.85it/s][A[A
55% 2098/3823 [03:57<03:14, 8.85it/s][A[A
55% 2099/3823 [03:57<03:14, 8.85it/s][A[A
55% 2100/3823 [03:57<03:14, 8.85it/s][A[A
55% 2101/3823 [03:57<03:14, 8.85it/s][A[A
55% 2102/3823 [03:57<03:14, 8.85it/s][A[A
55% 2103/3823 [03:57<03:14, 8.84it/s][A[A
55% 2104/3823 [03:57<03:14, 8.83it/s][A[A
55% 2105/3823 [03:57<03:14, 8.83it/s][A[A
55% 2106/3823 [03:57<03:14, 8.84it/s][A[A
55% 2107/3823 [03:58<03:14, 8.84it/s][A[A
55% 2108/3823 [03:58<03:13, 8.84it/s][A[A
55% 2109/3823 [03:58<03:13, 8.85it/s][A[A
55% 2110/3823 [03:58<03:13, 8.84it/s][A[A
55% 2111/3823 [03:58<03:13, 8.84it/s][A[A
55% 2112/3823 [03:58<03:13, 8.83it/s][A[A
55% 2113/3823 [03:58<03:13, 8.83it/s][A[A
55% 2114/3823 [03:58<03:13, 8.83it/s][A[A
55% 2115/3823 [03:59<03:13, 8.84it/s][A[A
55% 2116/3823 [03:59<03:13, 8.83it/s][A[A
55% 2117/3823 [03:59<03:13, 8.84it/s][A[A
55% 2118/3823 [03:59<03:13, 8.83it/s][A[A
55% 2119/3823 [03:59<03:13, 8.82it/s][A[A
55% 2120/3823 [03:59<03:13, 8.82it/s][A[A
55% 2121/3823 [03:59<03:12, 8.83it/s][A[A
56% 2122/3823 [03:59<03:12, 8.82it/s][A[A
56% 2123/3823 [03:59<03:12, 8.82it/s][A[A
56% 2124/3823 [04:00<03:12, 8.82it/s][A[A
56% 2125/3823 [04:00<03:12, 8.80it/s][A[A
56% 2126/3823 [04:00<03:12, 8.81it/s][A[A
56% 2127/3823 [04:00<03:12, 8.81it/s][A[A
56% 2128/3823 [04:00<03:12, 8.82it/s][A[A
56% 2129/3823 [04:00<03:11, 8.83it/s][A[A
56% 2130/3823 [04:00<03:11, 8.84it/s][A[A
56% 2131/3823 [04:00<03:11, 8.85it/s][A[A
56% 2132/3823 [04:00<03:11, 8.85it/s][A[A
56% 2133/3823 [04:01<03:10, 8.85it/s][A[A
56% 2134/3823 [04:01<03:10, 8.85it/s][A[A
56% 2135/3823 [04:01<03:10, 8.84it/s][A[A
56% 2136/3823 [04:01<03:10, 8.84it/s][A[A
56% 2137/3823 [04:01<03:10, 8.83it/s][A[A
56% 2138/3823 [04:01<03:10, 8.82it/s][A[A
56% 2139/3823 [04:01<03:10, 8.82it/s][A[A
56% 2140/3823 [04:01<03:10, 8.83it/s][A[A
56% 2141/3823 [04:01<03:10, 8.83it/s][A[A
56% 2142/3823 [04:02<03:10, 8.83it/s][A[A
56% 2143/3823 [04:02<03:10, 8.83it/s][A[A
56% 2144/3823 [04:02<03:10, 8.83it/s][A[A
56% 2145/3823 [04:02<03:09, 8.84it/s][A[A
56% 2146/3823 [04:02<03:09, 8.84it/s][A[A
56% 2147/3823 [04:02<03:09, 8.84it/s][A[A
56% 2148/3823 [04:02<03:09, 8.85it/s][A[A
56% 2149/3823 [04:02<03:09, 8.84it/s][A[A
56% 2150/3823 [04:02<03:09, 8.84it/s][A[A
56% 2151/3823 [04:03<03:09, 8.84it/s][A[A
56% 2152/3823 [04:03<03:09, 8.84it/s][A[A
56% 2153/3823 [04:03<03:08, 8.84it/s][A[A
56% 2154/3823 [04:03<03:08, 8.84it/s][A[A
56% 2155/3823 [04:03<03:08, 8.84it/s][A[A
56% 2156/3823 [04:03<03:08, 8.84it/s][A[A
56% 2157/3823 [04:03<03:08, 8.85it/s][A[A
56% 2158/3823 [04:03<03:08, 8.85it/s][A[A
56% 2159/3823 [04:03<03:08, 8.84it/s][A[A
57% 2160/3823 [04:04<03:08, 8.84it/s][A[A
57% 2161/3823 [04:04<03:07, 8.84it/s][A[A
57% 2162/3823 [04:04<03:07, 8.85it/s][A[A
57% 2163/3823 [04:04<03:07, 8.85it/s][A[A
57% 2164/3823 [04:04<03:07, 8.85it/s][A[A
57% 2165/3823 [04:04<03:07, 8.84it/s][A[A
57% 2166/3823 [04:04<03:07, 8.83it/s][A[A
57% 2167/3823 [04:04<03:07, 8.82it/s][A[A
57% 2168/3823 [04:04<03:07, 8.83it/s][A[A
57% 2169/3823 [04:05<03:07, 8.83it/s][A[A
57% 2170/3823 [04:05<03:07, 8.83it/s][A[A
57% 2171/3823 [04:05<03:07, 8.82it/s][A[A
57% 2172/3823 [04:05<03:07, 8.83it/s][A[A
57% 2173/3823 [04:05<03:06, 8.83it/s][A[A
57% 2174/3823 [04:05<03:06, 8.82it/s][A[A
57% 2175/3823 [04:05<03:06, 8.81it/s][A[A
57% 2176/3823 [04:05<03:06, 8.83it/s][A[A
57% 2177/3823 [04:06<03:06, 8.83it/s][A[A
57% 2178/3823 [04:06<03:06, 8.84it/s][A[A
57% 2179/3823 [04:06<03:05, 8.85it/s][A[A
57% 2180/3823 [04:06<03:05, 8.85it/s][A[A
57% 2181/3823 [04:06<03:05, 8.85it/s][A[A
57% 2182/3823 [04:06<03:05, 8.85it/s][A[A
57% 2183/3823 [04:06<03:05, 8.85it/s][A[A
57% 2184/3823 [04:06<03:05, 8.85it/s][A[A
57% 2185/3823 [04:06<03:05, 8.85it/s][A[A
57% 2186/3823 [04:07<03:05, 8.85it/s][A[A
57% 2187/3823 [04:07<03:05, 8.84it/s][A[A
57% 2188/3823 [04:07<03:05, 8.84it/s][A[A
57% 2189/3823 [04:07<03:04, 8.84it/s][A[A
57% 2190/3823 [04:07<03:04, 8.84it/s][A[A
57% 2191/3823 [04:07<03:04, 8.84it/s][A[A
57% 2192/3823 [04:07<03:04, 8.83it/s][A[A
57% 2193/3823 [04:07<03:04, 8.83it/s][A[A
57% 2194/3823 [04:07<03:04, 8.84it/s][A[A
57% 2195/3823 [04:08<03:04, 8.83it/s][A[A
57% 2196/3823 [04:08<03:04, 8.84it/s][A[A
57% 2197/3823 [04:08<03:04, 8.83it/s][A[A
57% 2198/3823 [04:08<03:04, 8.83it/s][A[A
58% 2199/3823 [04:08<03:03, 8.84it/s][A[A
58% 2200/3823 [04:08<03:03, 8.82it/s][A[A
58% 2201/3823 [04:08<03:04, 8.81it/s][A[A
58% 2202/3823 [04:08<03:03, 8.81it/s][A[A
58% 2203/3823 [04:08<03:03, 8.81it/s][A[A
58% 2204/3823 [04:09<03:03, 8.81it/s][A[A
58% 2205/3823 [04:09<03:03, 8.81it/s][A[A
58% 2206/3823 [04:09<03:03, 8.80it/s][A[A
58% 2207/3823 [04:09<03:03, 8.80it/s][A[A
58% 2208/3823 [04:09<03:03, 8.80it/s][A[A
58% 2209/3823 [04:09<03:03, 8.81it/s][A[A
58% 2210/3823 [04:09<03:02, 8.82it/s][A[A
58% 2211/3823 [04:09<03:02, 8.83it/s][A[A
58% 2212/3823 [04:09<03:02, 8.83it/s][A[A
58% 2213/3823 [04:10<03:02, 8.82it/s][A[A
58% 2214/3823 [04:10<03:02, 8.80it/s][A[A
58% 2215/3823 [04:10<03:02, 8.81it/s][A[A
58% 2216/3823 [04:10<03:02, 8.82it/s][A[A
58% 2217/3823 [04:10<03:01, 8.82it/s][A[A
58% 2218/3823 [04:10<03:01, 8.83it/s][A[A
58% 2219/3823 [04:10<03:01, 8.83it/s][A[A
58% 2220/3823 [04:10<03:01, 8.82it/s][A[A
58% 2221/3823 [04:11<03:01, 8.82it/s][A[A
58% 2222/3823 [04:11<03:01, 8.82it/s][A[A
58% 2223/3823 [04:11<03:01, 8.82it/s][A[A
58% 2224/3823 [04:11<03:01, 8.82it/s][A[A
58% 2225/3823 [04:11<03:01, 8.81it/s][A[A
58% 2226/3823 [04:11<03:01, 8.82it/s][A[A
58% 2227/3823 [04:11<03:00, 8.82it/s][A[A
58% 2228/3823 [04:11<03:00, 8.83it/s][A[A
58% 2229/3823 [04:11<03:00, 8.83it/s][A[A
58% 2230/3823 [04:12<03:00, 8.82it/s][A[A
58% 2231/3823 [04:12<03:00, 8.82it/s][A[A
58% 2232/3823 [04:12<03:00, 8.83it/s][A[A
58% 2233/3823 [04:12<03:00, 8.83it/s][A[A
58% 2234/3823 [04:12<03:00, 8.82it/s][A[A
58% 2235/3823 [04:12<02:59, 8.83it/s][A[A
58% 2236/3823 [04:12<02:59, 8.83it/s][A[A
59% 2237/3823 [04:12<02:59, 8.83it/s][A[A
59% 2238/3823 [04:12<02:59, 8.83it/s][A[A
59% 2239/3823 [04:13<02:59, 8.84it/s][A[A
59% 2240/3823 [04:13<02:59, 8.83it/s][A[A
59% 2241/3823 [04:13<02:59, 8.82it/s][A[A
59% 2242/3823 [04:13<02:59, 8.80it/s][A[A
59% 2243/3823 [04:13<03:00, 8.76it/s][A[A
59% 2244/3823 [04:13<02:59, 8.77it/s][A[A
59% 2245/3823 [04:13<02:59, 8.79it/s][A[A
59% 2246/3823 [04:13<02:59, 8.80it/s][A[A
59% 2247/3823 [04:13<02:58, 8.81it/s][A[A
59% 2248/3823 [04:14<02:58, 8.82it/s][A[A
59% 2249/3823 [04:14<02:58, 8.83it/s][A[A
59% 2250/3823 [04:14<02:58, 8.83it/s][A[A
59% 2251/3823 [04:14<02:58, 8.82it/s][A[A
59% 2252/3823 [04:14<02:58, 8.81it/s][A[A
59% 2253/3823 [04:14<02:58, 8.81it/s][A[A
59% 2254/3823 [04:14<02:58, 8.81it/s][A[A
59% 2255/3823 [04:14<02:57, 8.82it/s][A[A
59% 2256/3823 [04:14<02:57, 8.83it/s][A[A
59% 2257/3823 [04:15<02:57, 8.84it/s][A[A
59% 2258/3823 [04:15<02:56, 8.84it/s][A[A
59% 2259/3823 [04:15<02:56, 8.85it/s][A[A
59% 2260/3823 [04:15<02:56, 8.84it/s][A[A
59% 2261/3823 [04:15<02:57, 8.82it/s][A[A
59% 2262/3823 [04:15<02:57, 8.82it/s][A[A
59% 2263/3823 [04:15<02:56, 8.82it/s][A[A
59% 2264/3823 [04:15<02:56, 8.83it/s][A[A
59% 2265/3823 [04:15<02:56, 8.83it/s][A[A
59% 2266/3823 [04:16<02:56, 8.83it/s][A[A
59% 2267/3823 [04:16<02:56, 8.82it/s][A[A
59% 2268/3823 [04:16<02:56, 8.81it/s][A[A
59% 2269/3823 [04:16<02:56, 8.79it/s][A[A
59% 2270/3823 [04:16<02:56, 8.80it/s][A[A
59% 2271/3823 [04:16<02:55, 8.85it/s][A[A
59% 2272/3823 [04:16<02:55, 8.85it/s][A[A
59% 2273/3823 [04:16<02:55, 8.85it/s][A[A
59% 2274/3823 [04:17<02:54, 8.86it/s][A[A
60% 2275/3823 [04:17<02:54, 8.85it/s][A[A
60% 2276/3823 [04:17<02:54, 8.86it/s][A[A
60% 2277/3823 [04:17<02:54, 8.86it/s][A[A
60% 2278/3823 [04:17<02:54, 8.85it/s][A[A
60% 2279/3823 [04:17<02:54, 8.84it/s][A[A
60% 2280/3823 [04:17<02:54, 8.84it/s][A[A
60% 2281/3823 [04:17<02:54, 8.84it/s][A[A
60% 2282/3823 [04:17<02:54, 8.84it/s][A[A
60% 2283/3823 [04:18<02:54, 8.84it/s][A[A
60% 2284/3823 [04:18<02:54, 8.82it/s][A[A
60% 2285/3823 [04:18<02:54, 8.83it/s][A[A
60% 2286/3823 [04:18<02:54, 8.83it/s][A[A
60% 2287/3823 [04:18<02:54, 8.81it/s][A[A
60% 2288/3823 [04:18<02:54, 8.81it/s][A[A
60% 2289/3823 [04:18<02:53, 8.82it/s][A[A
60% 2290/3823 [04:18<02:53, 8.83it/s][A[A
60% 2291/3823 [04:18<02:53, 8.83it/s][A[A
60% 2292/3823 [04:19<02:53, 8.81it/s][A[A
60% 2293/3823 [04:19<02:53, 8.81it/s][A[A
60% 2294/3823 [04:19<02:53, 8.82it/s][A[A
60% 2295/3823 [04:19<02:53, 8.83it/s][A[A
60% 2296/3823 [04:19<02:52, 8.83it/s][A[A
60% 2297/3823 [04:19<02:52, 8.83it/s][A[A
60% 2298/3823 [04:19<02:52, 8.84it/s][A[A
60% 2299/3823 [04:19<02:52, 8.84it/s][A[A
60% 2300/3823 [04:19<02:52, 8.83it/s][A[A
60% 2301/3823 [04:20<02:52, 8.84it/s][A[A
60% 2302/3823 [04:20<02:52, 8.84it/s][A[A
60% 2303/3823 [04:20<02:51, 8.84it/s][A[A
60% 2304/3823 [04:20<02:51, 8.84it/s][A[A
60% 2305/3823 [04:20<02:51, 8.84it/s][A[A
60% 2306/3823 [04:20<02:51, 8.83it/s][A[A
60% 2307/3823 [04:20<02:51, 8.83it/s][A[A
60% 2308/3823 [04:20<02:51, 8.82it/s][A[A
60% 2309/3823 [04:20<02:51, 8.82it/s][A[A
60% 2310/3823 [04:21<02:51, 8.83it/s][A[A
60% 2311/3823 [04:21<02:51, 8.83it/s][A[A
60% 2312/3823 [04:21<02:51, 8.84it/s][A[A
61% 2313/3823 [04:21<02:50, 8.84it/s][A[A
61% 2314/3823 [04:21<02:50, 8.84it/s][A[A
61% 2315/3823 [04:21<02:50, 8.82it/s][A[A
61% 2316/3823 [04:21<02:51, 8.80it/s][A[A
61% 2317/3823 [04:21<02:51, 8.80it/s][A[A
61% 2318/3823 [04:21<02:51, 8.80it/s][A[A
61% 2319/3823 [04:22<02:50, 8.80it/s][A[A
61% 2320/3823 [04:22<02:50, 8.81it/s][A[A
61% 2321/3823 [04:22<02:50, 8.82it/s][A[A
61% 2322/3823 [04:22<02:50, 8.82it/s][A[A
61% 2323/3823 [04:22<02:49, 8.83it/s][A[A
61% 2324/3823 [04:22<02:49, 8.82it/s][A[A
61% 2325/3823 [04:22<02:49, 8.83it/s][A[A
61% 2326/3823 [04:22<02:49, 8.83it/s][A[A
61% 2327/3823 [04:23<02:49, 8.84it/s][A[A
61% 2328/3823 [04:23<02:49, 8.84it/s][A[A
61% 2329/3823 [04:23<02:48, 8.84it/s][A[A
61% 2330/3823 [04:23<02:48, 8.84it/s][A[A
61% 2331/3823 [04:23<02:48, 8.83it/s][A[A
61% 2332/3823 [04:23<02:49, 8.81it/s][A[A
61% 2333/3823 [04:23<02:49, 8.81it/s][A[A
61% 2334/3823 [04:23<02:48, 8.81it/s][A[A
61% 2335/3823 [04:23<02:48, 8.82it/s][A[A
61% 2336/3823 [04:24<02:48, 8.83it/s][A[A
61% 2337/3823 [04:24<02:48, 8.84it/s][A[A
61% 2338/3823 [04:24<02:48, 8.84it/s][A[A
61% 2339/3823 [04:24<02:48, 8.83it/s][A[A
61% 2340/3823 [04:24<02:48, 8.82it/s][A[A
61% 2341/3823 [04:24<02:47, 8.84it/s][A[A
61% 2342/3823 [04:24<02:47, 8.83it/s][A[A
61% 2343/3823 [04:24<02:47, 8.83it/s][A[A
61% 2344/3823 [04:24<02:47, 8.83it/s][A[A
61% 2345/3823 [04:25<02:47, 8.83it/s][A[A
61% 2346/3823 [04:25<02:47, 8.83it/s][A[A
61% 2347/3823 [04:25<02:47, 8.83it/s][A[A
61% 2348/3823 [04:25<02:47, 8.83it/s][A[A
61% 2349/3823 [04:25<02:46, 8.83it/s][A[A
61% 2350/3823 [04:25<02:46, 8.83it/s][A[A
61% 2351/3823 [04:25<02:47, 8.81it/s][A[A
62% 2352/3823 [04:25<02:47, 8.80it/s][A[A
62% 2353/3823 [04:25<02:46, 8.81it/s][A[A
62% 2354/3823 [04:26<02:46, 8.81it/s][A[A
62% 2355/3823 [04:26<02:46, 8.81it/s][A[A
62% 2356/3823 [04:26<02:46, 8.82it/s][A[A
62% 2357/3823 [04:26<02:46, 8.83it/s][A[A
62% 2358/3823 [04:26<02:45, 8.84it/s][A[A
62% 2359/3823 [04:26<02:45, 8.83it/s][A[A
62% 2360/3823 [04:26<02:45, 8.82it/s][A[A
62% 2361/3823 [04:26<02:45, 8.83it/s][A[A
62% 2362/3823 [04:26<02:45, 8.83it/s][A[A
62% 2363/3823 [04:27<02:45, 8.84it/s][A[A
62% 2364/3823 [04:27<02:45, 8.84it/s][A[A
62% 2365/3823 [04:27<02:45, 8.84it/s][A[A
62% 2366/3823 [04:27<02:44, 8.84it/s][A[A
62% 2367/3823 [04:27<02:44, 8.84it/s][A[A
62% 2368/3823 [04:27<02:44, 8.84it/s][A[A
62% 2369/3823 [04:27<02:44, 8.84it/s][A[A
62% 2370/3823 [04:27<02:44, 8.84it/s][A[A
62% 2371/3823 [04:27<02:44, 8.84it/s][A[A
62% 2372/3823 [04:28<02:44, 8.83it/s][A[A
62% 2373/3823 [04:28<02:44, 8.81it/s][A[A
62% 2374/3823 [04:28<02:44, 8.82it/s][A[A
62% 2375/3823 [04:28<02:44, 8.83it/s][A[A
62% 2376/3823 [04:28<02:43, 8.83it/s][A[A
62% 2377/3823 [04:28<02:43, 8.83it/s][A[A
62% 2378/3823 [04:28<02:43, 8.83it/s][A[A
62% 2379/3823 [04:28<02:43, 8.84it/s][A[A
62% 2380/3823 [04:29<02:43, 8.82it/s][A[A
62% 2381/3823 [04:29<02:43, 8.81it/s][A[A
62% 2382/3823 [04:29<02:43, 8.82it/s][A[A
62% 2383/3823 [04:29<02:43, 8.82it/s][A[A
62% 2384/3823 [04:29<02:43, 8.82it/s][A[A
62% 2385/3823 [04:29<02:43, 8.82it/s][A[A
62% 2386/3823 [04:29<02:43, 8.81it/s][A[A
62% 2387/3823 [04:29<02:42, 8.81it/s][A[A
62% 2388/3823 [04:29<02:42, 8.81it/s][A[A
62% 2389/3823 [04:30<02:42, 8.82it/s][A[A
63% 2390/3823 [04:30<02:42, 8.82it/s][A[A
63% 2391/3823 [04:30<02:42, 8.81it/s][A[A
63% 2392/3823 [04:30<02:42, 8.82it/s][A[A
63% 2393/3823 [04:30<02:42, 8.82it/s][A[A
63% 2394/3823 [04:30<02:42, 8.82it/s][A[A
63% 2395/3823 [04:30<02:42, 8.81it/s][A[A
63% 2396/3823 [04:30<02:41, 8.81it/s][A[A
63% 2397/3823 [04:30<02:41, 8.81it/s][A[A
63% 2398/3823 [04:31<02:41, 8.82it/s][A[A
63% 2399/3823 [04:31<02:41, 8.82it/s][A[A
63% 2400/3823 [04:31<02:41, 8.82it/s][A[A
63% 2401/3823 [04:31<02:41, 8.82it/s][A[A
63% 2402/3823 [04:31<02:41, 8.82it/s][A[A
63% 2403/3823 [04:31<02:41, 8.81it/s][A[A
63% 2404/3823 [04:31<02:41, 8.80it/s][A[A
63% 2405/3823 [04:31<02:41, 8.80it/s][A[A
63% 2406/3823 [04:31<02:40, 8.81it/s][A[A
63% 2407/3823 [04:32<02:40, 8.83it/s][A[A
63% 2408/3823 [04:32<02:40, 8.83it/s][A[A
63% 2409/3823 [04:32<02:40, 8.83it/s][A[A
63% 2410/3823 [04:32<02:40, 8.82it/s][A[A
63% 2411/3823 [04:32<02:40, 8.82it/s][A[A
63% 2412/3823 [04:32<02:40, 8.81it/s][A[A
63% 2413/3823 [04:32<02:40, 8.80it/s][A[A
63% 2414/3823 [04:32<02:39, 8.81it/s][A[A
63% 2415/3823 [04:32<02:39, 8.82it/s][A[A
63% 2416/3823 [04:33<02:39, 8.83it/s][A[A
63% 2417/3823 [04:33<02:39, 8.84it/s][A[A
63% 2418/3823 [04:33<02:38, 8.84it/s][A[A
63% 2419/3823 [04:33<02:38, 8.84it/s][A[A
63% 2420/3823 [04:33<02:38, 8.84it/s][A[A
63% 2421/3823 [04:33<02:38, 8.84it/s][A[A
63% 2422/3823 [04:33<02:38, 8.84it/s][A[A
63% 2423/3823 [04:33<02:38, 8.84it/s][A[A
63% 2424/3823 [04:34<02:38, 8.84it/s][A[A
63% 2425/3823 [04:34<02:38, 8.83it/s][A[A
63% 2426/3823 [04:34<02:38, 8.83it/s][A[A
63% 2427/3823 [04:34<02:38, 8.83it/s][A[A
64% 2428/3823 [04:34<02:38, 8.83it/s][A[A
64% 2429/3823 [04:34<02:37, 8.83it/s][A[A
64% 2430/3823 [04:34<02:37, 8.83it/s][A[A
64% 2431/3823 [04:34<02:37, 8.83it/s][A[A
64% 2432/3823 [04:34<02:37, 8.84it/s][A[A
64% 2433/3823 [04:35<02:37, 8.84it/s][A[A
64% 2434/3823 [04:35<02:37, 8.84it/s][A[A
64% 2435/3823 [04:35<02:37, 8.84it/s][A[A
64% 2436/3823 [04:35<02:36, 8.84it/s][A[A
64% 2437/3823 [04:35<02:36, 8.83it/s][A[A
64% 2438/3823 [04:35<02:37, 8.81it/s][A[A
64% 2439/3823 [04:35<02:37, 8.81it/s][A[A
64% 2440/3823 [04:35<02:36, 8.82it/s][A[A
64% 2441/3823 [04:35<02:36, 8.81it/s][A[A
64% 2442/3823 [04:36<02:36, 8.82it/s][A[A
64% 2443/3823 [04:36<02:36, 8.82it/s][A[A
64% 2444/3823 [04:36<02:36, 8.82it/s][A[A
64% 2445/3823 [04:36<02:36, 8.81it/s][A[A
64% 2446/3823 [04:36<02:36, 8.82it/s][A[A
64% 2447/3823 [04:36<02:35, 8.82it/s][A[A
64% 2448/3823 [04:36<02:35, 8.83it/s][A[A
64% 2449/3823 [04:36<02:35, 8.84it/s][A[A
64% 2450/3823 [04:36<02:35, 8.84it/s][A[A
64% 2451/3823 [04:37<02:35, 8.85it/s][A[A
64% 2452/3823 [04:37<02:35, 8.83it/s][A[A
64% 2453/3823 [04:37<02:35, 8.84it/s][A[A
64% 2454/3823 [04:37<02:34, 8.84it/s][A[A
64% 2455/3823 [04:37<02:34, 8.84it/s][A[A
64% 2456/3823 [04:37<02:34, 8.85it/s][A[A
64% 2457/3823 [04:37<02:34, 8.85it/s][A[A
64% 2458/3823 [04:37<02:34, 8.84it/s][A[A
64% 2459/3823 [04:37<02:34, 8.84it/s][A[A
64% 2460/3823 [04:38<02:34, 8.83it/s][A[A
64% 2461/3823 [04:38<02:34, 8.84it/s][A[A
64% 2462/3823 [04:38<02:33, 8.84it/s][A[A
64% 2463/3823 [04:38<02:33, 8.84it/s][A[A
64% 2464/3823 [04:38<02:33, 8.85it/s][A[A
64% 2465/3823 [04:38<02:33, 8.83it/s][A[A
65% 2466/3823 [04:38<02:33, 8.83it/s][A[A
65% 2467/3823 [04:38<02:33, 8.83it/s][A[A
65% 2468/3823 [04:38<02:33, 8.82it/s][A[A
65% 2469/3823 [04:39<02:33, 8.82it/s][A[A
65% 2470/3823 [04:39<02:33, 8.83it/s][A[A
65% 2471/3823 [04:39<02:32, 8.84it/s][A[A
65% 2472/3823 [04:39<02:32, 8.84it/s][A[A
65% 2473/3823 [04:39<02:32, 8.84it/s][A[A
65% 2474/3823 [04:39<02:32, 8.84it/s][A[A
65% 2475/3823 [04:39<02:32, 8.85it/s][A[A
65% 2476/3823 [04:39<02:32, 8.85it/s][A[A
65% 2477/3823 [04:40<02:32, 8.84it/s][A[A
65% 2478/3823 [04:40<02:32, 8.83it/s][A[A
65% 2479/3823 [04:40<02:32, 8.83it/s][A[A
65% 2480/3823 [04:40<02:32, 8.83it/s][A[A
65% 2481/3823 [04:40<02:31, 8.83it/s][A[A
65% 2482/3823 [04:40<02:31, 8.84it/s][A[A
65% 2483/3823 [04:40<02:31, 8.84it/s][A[A
65% 2484/3823 [04:40<02:31, 8.84it/s][A[A
65% 2485/3823 [04:40<02:31, 8.84it/s][A[A
65% 2486/3823 [04:41<02:31, 8.85it/s][A[A
65% 2487/3823 [04:41<02:30, 8.85it/s][A[A
65% 2488/3823 [04:41<02:30, 8.85it/s][A[A
65% 2489/3823 [04:41<02:30, 8.85it/s][A[A
65% 2490/3823 [04:41<02:30, 8.85it/s][A[A
65% 2491/3823 [04:41<02:30, 8.85it/s][A[A
65% 2492/3823 [04:41<02:30, 8.83it/s][A[A
65% 2493/3823 [04:41<02:30, 8.83it/s][A[A
65% 2494/3823 [04:41<02:30, 8.83it/s][A[A
65% 2495/3823 [04:42<02:30, 8.82it/s][A[A
65% 2496/3823 [04:42<02:30, 8.82it/s][A[A
65% 2497/3823 [04:42<02:30, 8.83it/s][A[A
65% 2498/3823 [04:42<02:30, 8.82it/s][A[A
65% 2499/3823 [04:42<02:29, 8.83it/s][A[A
65% 2500/3823 [04:42<02:29, 8.82it/s][A[A
65% 2501/3823 [04:42<02:29, 8.83it/s][A[A
65% 2502/3823 [04:42<02:29, 8.83it/s][A[A
65% 2503/3823 [04:42<02:29, 8.84it/s][A[A
65% 2504/3823 [04:43<02:29, 8.84it/s][A[A
66% 2505/3823 [04:43<02:29, 8.84it/s][A[A
66% 2506/3823 [04:43<02:28, 8.85it/s][A[A
66% 2507/3823 [04:43<02:28, 8.85it/s][A[A
66% 2508/3823 [04:43<02:28, 8.84it/s][A[A
66% 2509/3823 [04:43<02:28, 8.84it/s][A[A
66% 2510/3823 [04:43<02:28, 8.84it/s][A[A
66% 2511/3823 [04:43<02:28, 8.85it/s][A[A
66% 2512/3823 [04:43<02:28, 8.85it/s][A[A
66% 2513/3823 [04:44<02:28, 8.84it/s][A[A
66% 2514/3823 [04:44<02:28, 8.84it/s][A[A
66% 2515/3823 [04:44<02:27, 8.84it/s][A[A
66% 2516/3823 [04:44<02:27, 8.83it/s][A[A
66% 2517/3823 [04:44<02:27, 8.83it/s][A[A
66% 2518/3823 [04:44<02:27, 8.83it/s][A[A
66% 2519/3823 [04:44<02:27, 8.84it/s][A[A
66% 2520/3823 [04:44<02:27, 8.84it/s][A[A
66% 2521/3823 [04:44<02:27, 8.85it/s][A[A
66% 2522/3823 [04:45<02:27, 8.85it/s][A[A
66% 2523/3823 [04:45<02:26, 8.85it/s][A[A
66% 2524/3823 [04:45<02:26, 8.84it/s][A[A
66% 2525/3823 [04:45<02:26, 8.85it/s][A[A
66% 2526/3823 [04:45<02:26, 8.84it/s][A[A
66% 2527/3823 [04:45<02:26, 8.83it/s][A[A
66% 2528/3823 [04:45<02:26, 8.81it/s][A[A
66% 2529/3823 [04:45<02:26, 8.82it/s][A[A
66% 2530/3823 [04:46<02:26, 8.82it/s][A[A
66% 2531/3823 [04:46<02:26, 8.83it/s][A[A
66% 2532/3823 [04:46<02:26, 8.83it/s][A[A
66% 2533/3823 [04:46<02:26, 8.83it/s][A[A
66% 2534/3823 [04:46<02:25, 8.83it/s][A[A
66% 2535/3823 [04:46<02:25, 8.83it/s][A[A
66% 2536/3823 [04:46<02:25, 8.84it/s][A[A
66% 2537/3823 [04:46<02:25, 8.85it/s][A[A
66% 2538/3823 [04:46<02:25, 8.85it/s][A[A
66% 2539/3823 [04:47<02:25, 8.84it/s][A[A
66% 2540/3823 [04:47<02:25, 8.82it/s][A[A
66% 2541/3823 [04:47<02:25, 8.83it/s][A[A
66% 2542/3823 [04:47<02:24, 8.84it/s][A[A
67% 2543/3823 [04:47<02:24, 8.84it/s][A[A
67% 2544/3823 [04:47<02:24, 8.84it/s][A[A
67% 2545/3823 [04:47<02:24, 8.84it/s][A[A
67% 2546/3823 [04:47<02:24, 8.83it/s][A[A
67% 2547/3823 [04:47<02:24, 8.82it/s][A[A
67% 2548/3823 [04:48<02:24, 8.81it/s][A[A
67% 2549/3823 [04:48<02:24, 8.82it/s][A[A
67% 2550/3823 [04:48<02:24, 8.83it/s][A[A
67% 2551/3823 [04:48<02:23, 8.84it/s][A[A
67% 2552/3823 [04:48<02:23, 8.84it/s][A[A
67% 2553/3823 [04:48<02:23, 8.83it/s][A[A
67% 2554/3823 [04:48<02:23, 8.82it/s][A[A
67% 2555/3823 [04:48<02:23, 8.83it/s][A[A
67% 2556/3823 [04:48<02:23, 8.80it/s][A[A
67% 2557/3823 [04:49<02:23, 8.79it/s][A[A
67% 2558/3823 [04:49<02:23, 8.81it/s][A[A
67% 2559/3823 [04:49<02:23, 8.82it/s][A[A
67% 2560/3823 [04:49<02:23, 8.83it/s][A[A
67% 2561/3823 [04:49<02:23, 8.82it/s][A[A
67% 2562/3823 [04:49<02:23, 8.80it/s][A[A
67% 2563/3823 [04:49<02:22, 8.81it/s][A[A
67% 2564/3823 [04:49<02:22, 8.81it/s][A[A
67% 2565/3823 [04:49<02:22, 8.81it/s][A[A
67% 2566/3823 [04:50<02:22, 8.80it/s][A[A
67% 2567/3823 [04:50<02:22, 8.81it/s][A[A
67% 2568/3823 [04:50<02:22, 8.82it/s][A[A
67% 2569/3823 [04:50<02:22, 8.83it/s][A[A
67% 2570/3823 [04:50<02:21, 8.83it/s][A[A
67% 2571/3823 [04:50<02:21, 8.82it/s][A[A
67% 2572/3823 [04:50<02:21, 8.83it/s][A[A
67% 2573/3823 [04:50<02:21, 8.82it/s][A[A
67% 2574/3823 [04:50<02:21, 8.82it/s][A[A
67% 2575/3823 [04:51<02:21, 8.82it/s][A[A
67% 2576/3823 [04:51<02:21, 8.82it/s][A[A
67% 2577/3823 [04:51<02:21, 8.83it/s][A[A
67% 2578/3823 [04:51<02:21, 8.82it/s][A[A
67% 2579/3823 [04:51<02:21, 8.82it/s][A[A
67% 2580/3823 [04:51<02:20, 8.82it/s][A[A
68% 2581/3823 [04:51<02:20, 8.83it/s][A[A
68% 2582/3823 [04:51<02:20, 8.83it/s][A[A
68% 2583/3823 [04:52<02:20, 8.83it/s][A[A
68% 2584/3823 [04:52<02:20, 8.84it/s][A[A
68% 2585/3823 [04:52<02:20, 8.84it/s][A[A
68% 2586/3823 [04:52<02:20, 8.83it/s][A[A
68% 2587/3823 [04:52<02:20, 8.83it/s][A[A
68% 2588/3823 [04:52<02:19, 8.83it/s][A[A
68% 2589/3823 [04:52<02:19, 8.83it/s][A[A
68% 2590/3823 [04:52<02:19, 8.83it/s][A[A
68% 2591/3823 [04:52<02:19, 8.84it/s][A[A
68% 2592/3823 [04:53<02:19, 8.83it/s][A[A
68% 2593/3823 [04:53<02:19, 8.83it/s][A[A
68% 2594/3823 [04:53<02:19, 8.83it/s][A[A
68% 2595/3823 [04:53<02:19, 8.82it/s][A[A
68% 2596/3823 [04:53<02:19, 8.83it/s][A[A
68% 2597/3823 [04:53<02:18, 8.84it/s][A[A
68% 2598/3823 [04:53<02:18, 8.84it/s][A[A
68% 2599/3823 [04:53<02:18, 8.84it/s][A[A
68% 2600/3823 [04:53<02:18, 8.84it/s][A[A
68% 2601/3823 [04:54<02:18, 8.84it/s][A[A
68% 2602/3823 [04:54<02:18, 8.83it/s][A[A
68% 2603/3823 [04:54<02:18, 8.83it/s][A[A
68% 2604/3823 [04:54<02:17, 8.84it/s][A[A
68% 2605/3823 [04:54<02:17, 8.83it/s][A[A
68% 2606/3823 [04:54<02:18, 8.82it/s][A[A
68% 2607/3823 [04:54<02:17, 8.83it/s][A[A
68% 2608/3823 [04:54<02:17, 8.83it/s][A[A
68% 2609/3823 [04:54<02:17, 8.83it/s][A[A
68% 2610/3823 [04:55<02:17, 8.83it/s][A[A
68% 2611/3823 [04:55<02:17, 8.83it/s][A[A
68% 2612/3823 [04:55<02:17, 8.83it/s][A[A
68% 2613/3823 [04:55<02:17, 8.83it/s][A[A
68% 2614/3823 [04:55<02:16, 8.83it/s][A[A
68% 2615/3823 [04:55<02:16, 8.84it/s][A[A
68% 2616/3823 [04:55<02:16, 8.84it/s][A[A
68% 2617/3823 [04:55<02:16, 8.85it/s][A[A
68% 2618/3823 [04:55<02:16, 8.84it/s][A[A
69% 2619/3823 [04:56<02:16, 8.83it/s][A[A
69% 2620/3823 [04:56<02:16, 8.83it/s][A[A
69% 2621/3823 [04:56<02:16, 8.83it/s][A[A
69% 2622/3823 [04:56<02:15, 8.83it/s][A[A
69% 2623/3823 [04:56<02:15, 8.83it/s][A[A
69% 2624/3823 [04:56<02:15, 8.83it/s][A[A
69% 2625/3823 [04:56<02:15, 8.83it/s][A[A
69% 2626/3823 [04:56<02:15, 8.83it/s][A[A
69% 2627/3823 [04:56<02:15, 8.83it/s][A[A
69% 2628/3823 [04:57<02:15, 8.82it/s][A[A
69% 2629/3823 [04:57<02:15, 8.81it/s][A[A
69% 2630/3823 [04:57<02:15, 8.81it/s][A[A
69% 2631/3823 [04:57<02:15, 8.81it/s][A[A
69% 2632/3823 [04:57<02:15, 8.81it/s][A[A
69% 2633/3823 [04:57<02:15, 8.81it/s][A[A
69% 2634/3823 [04:57<02:14, 8.82it/s][A[A
69% 2635/3823 [04:57<02:14, 8.82it/s][A[A
69% 2636/3823 [04:58<02:14, 8.83it/s][A[A
69% 2637/3823 [04:58<02:14, 8.83it/s][A[A
69% 2638/3823 [04:58<02:14, 8.84it/s][A[A
69% 2639/3823 [04:58<02:13, 8.84it/s][A[A
69% 2640/3823 [04:58<02:13, 8.84it/s][A[A
69% 2641/3823 [04:58<02:13, 8.82it/s][A[A
69% 2642/3823 [04:58<02:13, 8.82it/s][A[A
69% 2643/3823 [04:58<02:13, 8.83it/s][A[A
69% 2644/3823 [04:58<02:13, 8.81it/s][A[A
69% 2645/3823 [04:59<02:13, 8.82it/s][A[A
69% 2646/3823 [04:59<02:13, 8.82it/s][A[A
69% 2647/3823 [04:59<02:13, 8.82it/s][A[A
69% 2648/3823 [04:59<02:13, 8.82it/s][A[A
69% 2649/3823 [04:59<02:13, 8.82it/s][A[A
69% 2650/3823 [04:59<02:12, 8.82it/s][A[A
69% 2651/3823 [04:59<02:12, 8.82it/s][A[A
69% 2652/3823 [04:59<02:12, 8.84it/s][A[A
69% 2653/3823 [04:59<02:12, 8.84it/s][A[A
69% 2654/3823 [05:00<02:12, 8.83it/s][A[A
69% 2655/3823 [05:00<02:12, 8.83it/s][A[A
69% 2656/3823 [05:00<02:12, 8.83it/s][A[A
70% 2657/3823 [05:00<02:12, 8.82it/s][A[A
70% 2658/3823 [05:00<02:12, 8.82it/s][A[A
70% 2659/3823 [05:00<02:11, 8.82it/s][A[A
70% 2660/3823 [05:00<02:11, 8.82it/s][A[A
70% 2661/3823 [05:00<02:11, 8.81it/s][A[A
70% 2662/3823 [05:00<02:11, 8.82it/s][A[A
70% 2663/3823 [05:01<02:11, 8.82it/s][A[A
70% 2664/3823 [05:01<02:11, 8.83it/s][A[A
70% 2665/3823 [05:01<02:11, 8.83it/s][A[A
70% 2666/3823 [05:01<02:11, 8.83it/s][A[A
70% 2667/3823 [05:01<02:10, 8.83it/s][A[A
70% 2668/3823 [05:01<02:10, 8.82it/s][A[A
70% 2669/3823 [05:01<02:10, 8.81it/s][A[A
70% 2670/3823 [05:01<02:10, 8.81it/s][A[A
70% 2671/3823 [05:01<02:10, 8.80it/s][A[A
70% 2672/3823 [05:02<02:10, 8.80it/s][A[A
70% 2673/3823 [05:02<02:10, 8.79it/s][A[A
100% 3822/3823 [07:12<00:00, 8.79it/s]
Aggregating distributions...
100% 3823/3823 [15:59<00:00, 3.98it/s]
100% 3823/3823 [24:43<00:00, 2.58it/s]
100% 3823/3823 [07:17<00:00, 8.74it/s]
###Markdown
Student Training
###Code
!python train_student.py \
--data_dir 'data/GLOBAL/Student' \
--model_name_or_path 'dmis-lab/biobert-base-cased-v1.1' \
--output_dir 'models/Student' \
--logging_dir 'models/Student' \
--save_steps 956
###Output
_____no_output_____
###Markdown
---
###Code
import time

import ipywidgets as w  # assumed alias for the `w.` widget calls below
# `wr` (the reactive-node helper used below) is assumed to be imported earlier in the notebook

def f1(a):
    # slow doubling: returns a + a after a short delay
    time.sleep(2)
    return a + a

def f2(a, b):
    # slow concatenation of the string forms of both inputs
    a = str(a)
    b = str(b)
    time.sleep(2)
    return a + b
in_widget1 = w.Text()
in_widget2 = w.Text()
output_widget1 = w.Output()
first_node = wr.Node(
args=[in_widget1],
f=f1
)
second_node = wr.Node(
args=[in_widget2],
f=f1
)
third_node = wr.Node(
args=[first_node, second_node],
f=f2,
display_widget=output_widget1
)
container_widget = w.VBox([in_widget1, in_widget2, output_widget1])
wr.display(container_widget, debug=True)
###Output
_____no_output_____
###Markdown
Standard way
###Code
from math import sqrt  # assumed here to keep the cell self-contained; sqrt may already be imported earlier

result = sqrt(125)
result
###Output
_____no_output_____
###Markdown
`:=` does not work as a bare top-level statement in Python
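In plain Python (3.8+) the walrus operator is only valid inside an expression, so wrapping the assignment in parentheses works without any extension; a minimal standard-library example:

```python
from math import sqrt

# legal in any Python >= 3.8: the assignment expression is parenthesized
(result := sqrt(125))
print(result)  # 11.180339887498949
```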
###Code
result := sqrt(125)
###Output
_____no_output_____
###Markdown
But it does with the ipywalrus extension!
###Code
%load_ext ipywalrus
result := sqrt(125)
###Output
_____no_output_____
###Markdown
Example: recognizing Russian speech with a trained model
###Code
import tensorflow as tf
import numpy as np
import os
from IPython.display import Audio
import scipy.io.wavfile as wav
from python_speech_features import fbank, mfcc
from keras.layers import LSTM, Dense, Convolution1D
from keras.models import Sequential
from keras.layers.wrappers import TimeDistributed, Bidirectional
vocabulary = { 'а': 1,
'б': 2,
'в': 3,
'г': 4,
'д': 5,
'е': 6,
'ё': 7,
'ж': 8,
'з': 9,
'и': 10,
'й': 11,
'к': 12,
'л': 13,
'м': 14,
'н': 15,
'о': 16,
'п': 17,
'р': 18,
'с': 19,
'т': 20,
'у': 21,
'ф': 22,
'х': 23,
'ц': 24,
'ч': 25,
'ш': 26,
'щ': 27,
'ъ': 28,
'ы': 29,
'ь': 30,
'э': 31,
'ю': 32,
'я': 33}
inv_mapping = dict(zip(vocabulary.values(), vocabulary.keys()))
inv_mapping[34] = '<пробел>'  # token 34 is the word separator ('<пробел>' means '<space>')
def decode_single(session, test_input):
    """Pad the MFCC features of one utterance, run CTC greedy decoding and print the recognized text."""
    z = np.zeros((30, 13))           # 30 frames of zero padding, 13 MFCC coefficients each
    zz = np.vstack((test_input, z))
val_feed = {
input_X: np.asarray([zz]),
seq_lens: np.asarray([len(test_input)])
}
# Decoding
d = session.run(decoded[0], feed_dict=val_feed)
dense_decoded = tf.sparse_tensor_to_dense(d, default_value=-1).eval(session=session)
seq = [s for s in dense_decoded[0] if s != -1]
ret=decode(d, inv_mapping )
for i in range(len(ret)):
print(str(ret[i])),
print('')
def decode(d, mapping):
    """Convert a sparse CTC decoder output into strings using the index-to-character mapping."""
shape = d.dense_shape
batch_size = shape[0]
ans = np.zeros(shape=shape, dtype=int)
    seq_lengths = np.zeros(shape=(batch_size, ), dtype=int)  # builtin int: the np.int alias was removed in newer numpy
for ind, val in zip(d.indices, d.values):
ans[ind[0], ind[1]] = val
seq_lengths[ind[0]] = max(seq_lengths[ind[0]], ind[1] + 1)
ret = []
for i in range(batch_size):
ret.append("".join(map(lambda s: mapping[s], ans[i, :seq_lengths[i]])))
return ret
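# For reference, the inverse mapping turns label indices back into Cyrillic characters, e.g.
#   ''.join(inv_mapping[i] for i in [17, 18, 10, 3, 6, 20]) -> 'привет'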
###Output
_____no_output_____
###Markdown
The model
###Code
graph = tf.Graph()
with graph.as_default():
input_X = tf.placeholder(tf.float32, shape=[None, None, 13],name="input_X")
labels = tf.sparse_placeholder(tf.int32)
seq_lens = tf.placeholder(tf.int32, shape=[None],name="seq_lens")
model = Sequential()
model.add(Bidirectional(LSTM(128, return_sequences=True, implementation=2), input_shape=(None, 13)))
model.add(Bidirectional(LSTM(128, return_sequences=True, implementation=2)))
model.add(TimeDistributed(Dense(len(inv_mapping) + 2)))
final_seq_lens = seq_lens
logits = model(input_X)
logits = tf.transpose(logits, [1, 0, 2])
ctc_loss = tf.reduce_mean(tf.nn.ctc_loss(labels, logits, final_seq_lens))
    # tf.nn.ctc_greedy_decoder merges repeated characters by default (merge_repeated=True)
decoded, log_prob = tf.nn.ctc_greedy_decoder(logits, final_seq_lens)
ler = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0], tf.int32), labels))
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(ctc_loss)
###Output
_____no_output_____
###Markdown
Download a test WAV file with a male voice
###Code
WAVE_OUTPUT_FILENAME = 'test.wav'
sample_rate, X1= wav.read(WAVE_OUTPUT_FILENAME)
# The recording says: "Через несколько лет путешествие на Марс будет не более сложно, чем перелёт из Москвы в Берлин."
# ("In a few years, a trip to Mars will be no more difficult than a flight from Moscow to Berlin.")
Audio(data=X1, rate=sample_rate)
###Output
_____no_output_____
###Markdown
Extract MFCC features from the file
###Code
fs, audio = wav.read(WAVE_OUTPUT_FILENAME)
features = mfcc(audio, samplerate=fs, lowfreq=50)
mean_scale = np.mean(features, axis=0)
std_scale = np.std(features, axis=0)
features = (features - mean_scale[np.newaxis, :]) / std_scale[np.newaxis, :]
seq_len = features.shape[0]
###Output
_____no_output_____
###Markdown
Recognize the speech with the pre-trained model
###Code
with tf.Session(graph=graph) as session:
saver = tf.train.Saver(tf.global_variables())
snapshot = "ctc"
checkpoint = tf.train.latest_checkpoint(checkpoint_dir="checkpoint1")
last_epoch = 0
if checkpoint:
print("[i] LOADING checkpoint " + checkpoint)
try:
saver.restore(session, checkpoint)
except:
print("[!] incompatible checkpoint, restarting from 0")
else:
        # Initialize the weights and biases
tf.global_variables_initializer().run()
decode_single(session, features)
###Output
[i] LOADING checkpoint checkpoint1/ctc.ckpt-699
INFO:tensorflow:Restoring parameters from checkpoint1/ctc.ckpt-699
<пробел>через<пробел>несколько<пробел>лет<пробел>путешествие<пробел>на<пробел>марс<пробел>будет<пробел>не<пробел>более<пробел>сложно<пробел>чем<пробел>перелёт<пробел>из<пробел>москвы<пробел>в<пробел>берлин<пробел>
###Markdown
Test: recognizing a female voice speaking Russian
###Code
WAVE_OUTPUT_FILENAME = 'ru_test.wav'
# The recording says: "Покалывало грудь, стучала кровь в виски, но дышалось легко, воздух был тонок и сух"
# ("My chest tingled, blood pounded in my temples, but breathing was easy; the air was thin and dry.")
sample_rate, X1 = wav.read(WAVE_OUTPUT_FILENAME)
Audio(data=X1, rate=sample_rate)
fs, audio = wav.read(WAVE_OUTPUT_FILENAME)
features = mfcc(audio, samplerate=fs, lowfreq=50)
mean_scale = np.mean(features, axis=0)
std_scale = np.std(features, axis=0)
features = (features - mean_scale[np.newaxis, :]) / std_scale[np.newaxis, :]
seq_len = features.shape[0]
with tf.Session(graph=graph) as session:
saver = tf.train.Saver(tf.global_variables())
snapshot = "ctc"
checkpoint = tf.train.latest_checkpoint(checkpoint_dir="checkpoint1")
last_epoch = 0
if checkpoint:
print("[i] LOADING checkpoint " + checkpoint)
try:
saver.restore(session, checkpoint)
except:
print("[!] incompatible checkpoint, restarting from 0")
else:
        # Initialize the weights and biases
tf.global_variables_initializer().run()
decode_single(session, features)
###Output
[i] LOADING checkpoint checkpoint1/ctc.ckpt-699
INFO:tensorflow:Restoring parameters from checkpoint1/ctc.ckpt-699
<пробел>покалывало<пробел>грудь<пробел>стучала<пробел>кровь<пробел>в<пробел>виски<пробел>но<пробел>дышалось<пробел>легко<пробел>воздух<пробел>был<пробел>тонок<пробел>и<пробел>сух<пробел>
###Markdown
Testing the acoustic model with microphone input
###Code
import pyaudio
import wave
# and IPython.display for audio output
import IPython.display
from scipy.io import wavfile
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 16000
RECORD_SECONDS = 3  # recording duration in seconds
WAVE_OUTPUT_FILENAME = 'mikr.wav'
###Output
_____no_output_____
###Markdown
Record from the microphone into a WAV file
###Code
p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK)
print("* ЗАПИСЬ С МИКРОФОНА")
frames = []
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
frames.append(data)
print("* КОНЕЦ ЗАПИСИ")
stream.stop_stream()
stream.close()
p.terminate()
wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
wf.close()
fs, audio = wav.read(WAVE_OUTPUT_FILENAME)
features = mfcc(audio, samplerate=fs, lowfreq=50)
mean_scale = np.mean(features, axis=0)
std_scale = np.std(features, axis=0)
features = (features - mean_scale[np.newaxis, :]) / std_scale[np.newaxis, :]
seq_len = features.shape[0]
sample_rate, X1= wavfile.read(WAVE_OUTPUT_FILENAME)
# Play it back!
IPython.display.Audio(data=X1, rate=sample_rate)
with tf.Session(graph=graph) as session:
saver = tf.train.Saver(tf.global_variables())
snapshot = "ctc"
checkpoint = tf.train.latest_checkpoint(checkpoint_dir="checkpoint1")
last_epoch = 0
if checkpoint:
print("[i] LOADING checkpoint " + checkpoint)
try:
saver.restore(session, checkpoint)
except:
print("[!] incompatible checkpoint, restarting from 0")
else:
        # Initialize the weights and biases
tf.global_variables_initializer().run()
decode_single(session, features)
###Output
[i] LOADING checkpoint checkpoint1/ctc.ckpt-699
INFO:tensorflow:Restoring parameters from checkpoint1/ctc.ckpt-699
<пробел>в<пробел>ключих<пробел>свет<пробел>в<пробел>гростинной<пробел>
###Markdown
Reducing variability in along-tract analysis with diffusion profile realignment

In this example, we will load up 150 streamlines from a synthetic dataset. They are however unaligned, so we will simulate different subjects by truncating their endpoints, realign everything together and only keep the sections where at least 75% of the bundles are overlapping.

At the end, we show how to draw and overlay p-values from a statistical test (or any other values really) over the shadow bundles as in the paper.
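Condensed, the whole workflow of this notebook comes down to a handful of calls from `dpr.register`. The sketch below only previews the steps that the following cells perform one at a time, with the same file names and parameters; the flip step is only needed when subjects do not already share a starting point:

```python
import pickle

import numpy as np

from dpr.register import align_bundles, flip_fibers, resample_bundles_to_same, truncate

bundles_cut = np.loadtxt('datasets/bundles_cut.txt')          # subjects x along-tract metric, nan padded
with open('datasets/coordinates.pkl', 'rb') as f:
    coordinates = pickle.load(f)                              # per-subject xyz coordinates

bundles_cut = flip_fibers(bundles_cut, coordinates)           # make every subject 'go the same way'
aligned, shifts = align_bundles(bundles_cut)                  # the realignment itself
aligned = truncate(aligned, mode=75)                          # keep sections with at least 75% overlap
resampled = resample_bundles_to_same(aligned, num_points=50)  # common number of points for plotting/stats
```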
###Code
# These are the main functions we will need; there are a few more that might be useful for finer grained control inside dpr/register.py
from dpr.register import align_bundles, resample_bundles_to_same, flip_fibers, truncate
# This is the drawing function
from dpr.utils import draw_fancy_graph
# This contains a few functions to load up the data from text files
# but we won't need them in this example as I did it already
from dpr.utils import read_per_line, strip_first_col, strip_header
###Output
_____no_output_____
###Markdown
A few imports needed for this example
###Code
import numpy as np
import matplotlib.pyplot as plt
import pickle
from time import time
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load up the data
+ This will most likely be the most difficult step, getting your data in the right format
+ You need to load everything as a 2D array of size (number of subjects x along-tract metric) in the same coordinate system
    + More on that a bit later, but every subject will need to have the same starting and ending coordinate system for everything to make sense

1. Loading up the data
###Code
bundles = np.loadtxt('datasets/bundles.txt')
bundles_cut = np.loadtxt('datasets/bundles_cut.txt')
with open('datasets/coordinates.pkl', 'rb') as f:
coordinates = pickle.load(f)
###Output
_____no_output_____
###Markdown
+ Each bundle is represented as a line in a 2D array
    + There are 150 bundles with the longest having 208 points
    + Shorter elements are represented using nans to pad them to the same size
+ The coordinates are represented as a list of 150 elements, which contains 2D arrays of coordinates in x, y and z
    + They are not strictly needed for the algorithm, but allow us to draw figures using the original coordinates
    + They are also needed to make sure fiber bundles all share the same point of origin in the first place
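If your own along-tract profiles come as variable-length arrays, the nan-padded layout described above can be assembled with plain numpy; a minimal sketch, where `profiles` and `subject_files` are placeholder names rather than part of this dataset:

```python
import numpy as np

def pad_to_2d(profiles):
    """Stack variable-length 1D profiles into one 2D array, padding shorter ones with nans."""
    longest = max(len(p) for p in profiles)
    out = np.full((len(profiles), longest), np.nan)
    for i, p in enumerate(profiles):
        out[i, :len(p)] = p
    return out

# profiles = [np.loadtxt(f) for f in subject_files]  # one 1D metric profile per subject
# bundles = pad_to_2d(profiles)                      # shape: (number of subjects, longest profile)
```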
###Code
print('Shape of the bundles: {}, number of bundles: {} and shape of the coordinates: {}'.format(bundles.shape, len(coordinates), coordinates[0].shape))
###Output
Shape of the bundles: (150, 208), number of bundles: 150 and shape of the coordinates: (184, 3)
###Markdown
We first need to ensure everything is at the same starting point and 'going the same way'
+ For that we need the xyz coordinates also
+ This may already be taken care of by your software, for example ExploreDTI already keeps all subjects in the same coordinate system when the metrics are extracted
###Code
flipped_bundles = flip_fibers(bundles, coordinates)
flipped_bundles_cut = flip_fibers(bundles_cut, coordinates)
###Output
_____no_output_____
###Markdown
We assume that every subject uses the same coordinate system, else we would be realigning 3D coordinates which do not even match
+ The top row for the original bundles has a coordinate mismatch, which we fixed by flipping everything in the same way on the bottom
+ Note how it is not immediately obvious on the cut bundles at first
###Code
all_bundles = bundles, bundles_cut, flipped_bundles, flipped_bundles_cut
f, axs = plt.subplots(2, 2, figsize=(12,10), sharex=True, sharey=True)
for bund, ax in zip(all_bundles, axs.ravel()):
for b in bund:
ax.plot(b, alpha=0.15, color='gray');
for ax in axs[1]:
ax.set_xlabel('Coordinate')
for ax in axs[:,0]:
ax.set_ylabel('AFD')
axs[0,0].set_title('Original bundles')
axs[1,0].set_title('Flipped, but not aligned, bundles')
axs[0,1].set_title('Cut bundles')
axs[1,1].set_title('Flipped, but not aligned, cut bundles')
###Output
_____no_output_____
###Markdown
2. The realignment itself And now we realign everything+ The function returns both the aligned bundles and the shift (in number of points) applied to each of them+ A positive value means a shift to the right and a negative value is for a shift to the left (a quick summary of the returned shifts is sketched right after the run below)
###Code
start = time()
aligned_bundles_cut, shifts = align_bundles(bundles_cut)
print('Total runtime was {} seconds'.format(time() - start))
###Output
Total runtime was 4.3635642528533936 seconds
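###Markdown
As a quick sanity check (a sketch, not part of the original pipeline), the returned shifts can be summarized directly since they are plain integers in units of points
###Code
shifts_arr = np.asarray(shifts)
print('Shifts range from {} to {}, median shift is {}'.format(shifts_arr.min(), shifts_arr.max(), np.median(shifts_arr)))
###Output
_____no_output_____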
###Markdown
At this point we would be basically done and can save all of that to a text file if we want. + However we can do some more processing to only select regions of the bundles which have enough overlapping subjects. We now plot the realigned bundles, but we first remove all of the useless padding
###Code
aligned_bundles_cut = truncate(aligned_bundles_cut, mode='longest')
all_bundles = bundles, bundles_cut, aligned_bundles_cut
f, axs = plt.subplots(1, 3, figsize=(18,5), sharex=True, sharey=True)
for bund, ax in zip(all_bundles, axs.ravel()):
for b in bund:
ax.plot(b, alpha=0.15, color='gray');
ax.set_xlabel('Coordinate')
axs[0].set_ylabel('AFD')
axs[0].set_title('Original bundles')
axs[1].set_title('Cut bundles')
axs[2].set_title('Realigned cut bundles')
###Output
_____no_output_____
###Markdown
We can now remove more padding and keep only relevant portions+ Remember that after realignment, the intrinsic coordinates of each bundle are different and we need to keep track of them to draw the bundles correctly+ Here we resample everything to 50 points and keep only portions where at least 75% of the bundles are overlapping
###Code
bundles_truncated = truncate(aligned_bundles_cut, mode=75)
num_points = 50
resampled_cut = resample_bundles_to_same(bundles_truncated, num_points=num_points)
all_bundles = bundles, bundles_cut, bundles_truncated, resampled_cut
# They all have a different number of points, but we can keep track of their relative positioning
# with the shift matrix and the original coordinates in xyz if needed
for idx, bund in enumerate(all_bundles, start=1):
print('Shape of bundle no. {}: {}'.format(idx, bund.shape))
endpoints = np.isfinite(bundles_truncated).sum(axis=0)
f, axs = plt.subplots(2, 2, figsize=(12,8), sharex=True, sharey=True)
for bund, ax in zip(all_bundles, axs.ravel()):
for idx, b in enumerate(bund):
end = endpoints[idx]
# this line ensures that when we draw the bundles, they all have the same coordinates
    # even if they have a different number of points
coords = np.linspace(0, end, num=len(b), endpoint=True)
ax.plot(coords, b, alpha=0.15, color='gray');
axs[1,0].set_xlabel('Coordinate')
axs[1,1].set_xlabel('Coordinate')
axs[0,0].set_ylabel('AFD')
axs[1,0].set_ylabel('AFD')
axs[0,0].set_title('Original bundles')
axs[0,1].set_title('Cut bundles')
axs[1,0].set_title('Realigned and truncated bundles')
axs[1,1].set_title('Realigned and resampled bundles')
###Output
_____no_output_____
###Markdown
Remember when extracting averaged metrics that missing portions are represented with NaNs, so we must take that into account with specialized functions that ignore them+ This is because NaNs get propagated, showing only the portions where all of the subjects would be overlapping
###Code
means = np.mean(resampled_cut, axis=0), np.nanmean(resampled_cut, axis=0)
stds = np.std(resampled_cut, axis=0), np.nanstd(resampled_cut, axis=0)
labels = 'Normal mean', 'Mean excluding missing portions'
fig, axes = plt.subplots(2, 1, sharex=True, sharey=True, figsize=(8, 12))
for ax, mean, std, label in zip(axes, means, stds, labels):
ax.fill_between(range(len(mean)), mean - std, mean + std, alpha=0.5)
ax.plot(mean, color='r', label=label)
ax.set_xlim(0, len(mean))
ax.set_ylim(0, None)
ax.legend(loc='lower right', fontsize=12)
ax.set(ylabel='AFD', xlabel='Coordinates')
###Output
_____no_output_____
###Markdown
And that's pretty much it; if we want, we can also store the realigned metrics in a text file for further processing in your environment of choice, such as R for example+ Remember that NaNs indicate coordinate locations where a given subject is not present during further processing (a sketch of reading the saved files back follows below)
###Code
# we keep 5 decimals, this is what the fmt option does
# Each line is a subject and each column is a point in the along-tract analysis
np.savetxt('bundle_realigned_truncated.txt', bundles_truncated, fmt='%1.5f')
np.savetxt('bundle_realigned_resampled.txt', resampled_cut, fmt='%1.5f')
###Output
_____no_output_____
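###Markdown
To pick these text files up again later (a minimal sketch), remember to use NaN-aware functions when summarizing, since np.savetxt writes the missing portions as 'nan'
###Code
reloaded = np.loadtxt('bundle_realigned_resampled.txt')
print(reloaded.shape)
print(np.nanmean(reloaded, axis=0)[:5])
###Output
_____no_output_____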
###Markdown
3. Better visualisation - how to overlay results with p-values In this section we now plot the p-values over all of the bundles in a neat and easy to analyse fashion. To do so, we will load up some data from the HCP dataset along with p-values I previously computed (a sketch of how such per-coordinate p-values could be obtained follows after the loading cell below). We need+ Two coordinates to draw for each streamline (e.g. all x and z points)+ A set of truncated coordinates between ROIs to draw (in blue by default)+ The coordinates of the representative streamline (in green by default)+ Something to overlay on the representative streamline (the p-values) Once again the hardest part will likely be to load your data into a bunch of arrays, but here is how to do it for a text file of xyz coordinates. The command line version is in the scripts folder, called **dpr_make_fancy_graph**; be sure to check it for quick drawing instead of running code every time
###Code
# This is a list of all the x y z points of each streamlines forming our bundle of interest
all_coords = read_per_line('datasets/af_left_coordinates.txt')
truncated_coords = read_per_line('datasets/af_left_truncated_coordinates.txt')
# These ones are only a single streamline, so nothing special is required to load them
average_coords = np.loadtxt('datasets/af_left_average_coordinates.txt')
pval_realigned = np.loadtxt('datasets/af_left_pval_realigned.txt')
pval_unaligned = np.loadtxt('datasets/af_left_pval_unaligned.txt')
# Split everything into smaller list for the functions
x_coords = [all_coords[i][:,0] for i in range(len(all_coords))]
y_coords = [all_coords[i][:,1] for i in range(len(all_coords))]
z_coords = [all_coords[i][:,2] for i in range(len(all_coords))]
x_coords_truncated = [truncated_coords[i][:,0] for i in range(len(truncated_coords))]
y_coords_truncated = [truncated_coords[i][:,1] for i in range(len(truncated_coords))]
z_coords_truncated = [truncated_coords[i][:,2] for i in range(len(truncated_coords))]
x_coords_representative = average_coords[:,0]
y_coords_representative = average_coords[:,1]
z_coords_representative = average_coords[:,2]
###Output
_____no_output_____
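###Markdown
The p-values loaded above were computed beforehand; here is a minimal, self-contained sketch (using synthetic groups rather than the HCP data, and assuming SciPy is available) of how per-coordinate p-values could be obtained from two groups of realigned profiles with a NaN-aware t-test
###Code
from scipy.stats import ttest_ind
# Hypothetical example: two groups of 20 subjects with 50-point profiles (NaNs are handled by nan_policy='omit')
rng = np.random.RandomState(0)
group_a = rng.normal(0.0, 1.0, size=(20, 50))
group_b = rng.normal(0.2, 1.0, size=(20, 50))
tvals, pvals = ttest_ind(group_a, group_b, axis=0, nan_policy='omit')
print(pvals.shape)
###Output
_____no_output_____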
###Markdown
And now we can draw everything using a convenient function
###Code
fig1, ax1 = draw_fancy_graph(pval_realigned, x_coords, z_coords, x_coords_truncated, z_coords_truncated, x_coords_representative, z_coords_representative,
coord1_label='X', coord2_label='Z')
fig2, ax2 = draw_fancy_graph(pval_unaligned, x_coords, z_coords, x_coords_truncated, z_coords_truncated, x_coords_representative, z_coords_representative,
coord1_label='X', coord2_label='Z', title='p-values before realignment')
# Save the graph, the script dpr_make_fancy_graph will also do that for you
fig1.savefig('pvals_overlay.png', dpi=150, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
This is only a quick function where a few tweaks are available, but feel free to check the inner workings of the function to suit it to your own tastes+ It might be easier to directly copy and edit the function for more advanced uses, but you can also change a few parameters to draw different views as shown below + Remember that this is a 2D projection, so it might take a few tries to look good depending on the two axes you wish to view
###Code
# Here the Y and Z axis are not really informative compared to the X and Z axis from the example above
fig, ax = draw_fancy_graph(pval_realigned, y_coords, z_coords, y_coords_truncated, z_coords_truncated, y_coords_representative, z_coords_representative,
coord1_label='my first axis', coord2_label='my second axis', pval_threshold=0.5, title='My cool title')
###Output
_____no_output_____
###Markdown
Example NotebookHere's a simple test of the environment using Pandas, NumPy, Jupyter, etc., based on the [*10 Minutes to pandas*](https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html) tutorial.
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
First, a series with a `NaN` included:
###Code
s = pd.Series([1, 3, 5, np.nan, 6, 8])
s
###Output
_____no_output_____
###Markdown
Then some dates:
###Code
dates = pd.date_range('20130101', periods=6)
dates
###Output
_____no_output_____
###Markdown
Now a simple dataframe:
###Code
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
df
###Output
_____no_output_____
###Markdown
Import data Now let's import data from SoundPrint.co:
###Code
df = pd.read_csv("soundcheck.csv")
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Notes from Greg Scott @ SoundPrint: There are ~11,700 SoundChecks of NYC Restaurants in this file, organized as follows. For Columns: - *Venue ID* -- the developer insisted on making the venue names anonymous, but we can retrieve a specific venue name should you wish; the same venue ID means more than one soundcheck was taken for that venue, which makes for more robust data - *Restaurant Type* -- could be interesting for segmented data analysis - *Zip code* -- could be good for seeing sound levels by Manhattan neighborhoods - for the *Avg*, *Min*, *Max* sound levels: Avg is what most people care about, but those with _Hyperacusis_ - sensitivity to loud or sudden bursts of noise - care about Max - *Day of the week* -- there could be some interesting day-of-the-week trends for sound levels (are places louder on weekend days? a quick groupby sketch follows below) - *Timestamp* -- sound levels by time of day could also be useful.
###Code
%matplotlib inline
d2 = df["max_decibels"]
ax = d2.plot.hist(bins=50)
###Output
_____no_output_____
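###Markdown
Following Greg's note about day-of-the-week trends, here is a minimal sketch; the `day_of_week` column name is an assumption and may differ in soundcheck.csv
###Code
# 'day_of_week' is an assumed column name; adjust it to the actual header in soundcheck.csv
if 'day_of_week' in df.columns:
    print(df.groupby('day_of_week')['max_decibels'].mean().sort_values(ascending=False))
else:
    print('Adjust the column name to match soundcheck.csv:', list(df.columns))
###Output
_____no_output_____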
###Markdown
Example Usage of `canvasutils` Interactive Widget-based Submission of Assignment
###Code
from canvasutils.submit import submit, convert_notebook
api_url = "https://canvas.instructure.com/"
course_code = 2313167
convert_notebook("example.ipynb", to_format="html") # optional method to convert to html
submit(course_code, api_url=api_url, token=False)
###Output
Please paste your token here and then hit enter:
###Markdown
Interactive Text-based Submission of Assignment This mode is for users that don't want to use the interactive widgets above or don't have the necessary dependencies installed.
###Code
submit(course_code, api_url=api_url, token=False, no_widgets=True)
###Output
Please paste your token here and then hit enter:
###Markdown
Example for toy data Create toy data
###Code
import numpy as np
import pandas as pd
np.random.seed(12345)
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
df = pd.DataFrame(np.random.rand(500, 5), columns=list('abcde'))
W = np.array([[1, -2, 3, -2, 1], [-2, 1, 3, -2, 1]])
b = np.array([0.2, -0.1])
margins = df.dot(W.transpose()).add(b)
p = margins[0].map(sigmoid)
l = np.exp(margins[1])
df['time'] = l.map(lambda l:np.clip(np.random.exponential(l, 1)[0], 0, 1))
df['label'] = pd.concat([p, l], axis=1).apply(lambda r:np.round(r[0] * (1. - np.exp(-r[1]))), axis=1)
df.to_parquet('toy.parquet')
###Output
_____no_output_____
###Markdown
Train & Test
###Code
from pyspark.ml import Pipeline
from pyspark.ml.feature import RFormula
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from dfm.classification import DelayedFeedbackClassifier
raw = sqlContext.read.parquet('toy.parquet')
raw.printSchema()
train, test = raw.randomSplit([0.9, 0.1], seed=12345)
formula = RFormula(formula='label ~ a + b + c + d + e')
dfc = DelayedFeedbackClassifier(timeCol='time', regParam=0.01)
pipeline = Pipeline(stages=[formula, dfc])
model = pipeline.fit(train)
predictions = model.transform(test)
eval = BinaryClassificationEvaluator()
eval.evaluate(predictions)
###Output
_____no_output_____
###Markdown
Example of Data Analysis with DCD Hub Data First, we import the Python SDK
###Code
from dcd.entities.thing import Thing
###Output
_____no_output_____
###Markdown
We provide the thing ID and access token (replace with yours)
###Code
from dotenv import load_dotenv
import os
load_dotenv()
THING_ID = os.environ['THING_ID']
THING_TOKEN = os.environ['THING_TOKEN']
###Output
_____no_output_____
###Markdown
We instantiate a Thing with its credential, then we fetch its details
###Code
my_thing = Thing(thing_id=THING_ID, token=THING_TOKEN)
my_thing.read()
###Output
INFO:dcd:things:my-test-thing-556e:Initialising MQTT connection for Thing 'dcd:things:my-test-thing-556e'
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): dwd.tudelft.nl:443
DEBUG:urllib3.connectionpool:https://dwd.tudelft.nl:443 "GET /api/things/dcd:things:my-test-thing-556e HTTP/1.1" 200 12420
###Markdown
What does a Thing look like?
###Code
my_thing.to_json()
###Output
_____no_output_____
###Markdown
Which property do we want to explore and over which time frame?
###Code
from datetime import datetime
# What dates?
START_DATE = "2019-10-08 21:17:00"
END_DATE = "2019-11-08 21:25:00"
DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
from_ts = datetime.timestamp(datetime.strptime(START_DATE, DATE_FORMAT)) * 1000
to_ts = datetime.timestamp(datetime.strptime(END_DATE, DATE_FORMAT)) * 1000
###Output
_____no_output_____
###Markdown
Let's find this property and read the data.
###Code
PROPERTY_NAME = "Accelerometer"
my_property = my_thing.find_property_by_name(PROPERTY_NAME)
my_property.read(from_ts, to_ts)
###Output
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): dwd.tudelft.nl:443
DEBUG:urllib3.connectionpool:https://dwd.tudelft.nl:443 "GET /api/things/dcd:things:my-test-thing-556e/properties/-4208?from=1570562220000.0&to=1573244700000.0 HTTP/1.1" 200 74885
###Markdown
How many data point did we get?
###Code
print(len(my_property.values))
###Output
818
###Markdown
Display values
###Code
my_property.values
###Output
_____no_output_____
###Markdown
From CSV
###Code
from numpy import genfromtxt
import pandas as pd
data = genfromtxt('data.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
Plot some charts with Matplotlib In this example we plot a histogram and the distribution of all values and dimensions.
###Code
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.pyplot import figure
from numpy import ma
data = np.array(my_property.values)
figure(num=None, figsize=(15, 5))
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')), columns = ['x', 'y', 'z'])
data_frame
t = data_frame.index
plt.plot(t, data_frame.x, t, data_frame.y, t, data_frame.z)
plt.hist(data[:,1:])
plt.show()
###Output
DEBUG:matplotlib.font_manager:findfont: Matching :family=sans-serif:style=normal:variant=normal:weight=normal:stretch=normal:size=10.0.
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Serif' (DejaVuSerif-Bold.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'cmmi10' (cmmi10.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Serif' (DejaVuSerif-Italic.ttf) italic normal 400 normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans' (DejaVuSans-Bold.ttf) normal normal bold normal>) = 0.33499999999999996
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans Mono' (DejaVuSansMono-Bold.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Serif' (DejaVuSerif.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans Mono' (DejaVuSansMono.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans Display' (DejaVuSansDisplay.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'cmb10' (cmb10.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Serif' (DejaVuSerif-BoldItalic.ttf) italic normal bold normal>) = 11.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeOneSym' (STIXSizOneSymReg.ttf) normal normal regular normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans Mono' (DejaVuSansMono-Oblique.ttf) oblique normal 400 normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXNonUnicode' (STIXNonUniIta.ttf) italic normal 400 normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeFourSym' (STIXSizFourSymBol.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'cmss10' (cmss10.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeFourSym' (STIXSizFourSymReg.ttf) normal normal regular normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeTwoSym' (STIXSizTwoSymReg.ttf) normal normal regular normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'cmsy10' (cmsy10.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'cmr10' (cmr10.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans' (DejaVuSans-BoldOblique.ttf) oblique normal bold normal>) = 1.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'cmtt10' (cmtt10.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'cmex10' (cmex10.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXNonUnicode' (STIXNonUniBolIta.ttf) italic normal bold normal>) = 11.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Serif Display' (DejaVuSerifDisplay.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXGeneral' (STIXGeneral.ttf) normal normal regular normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans' (DejaVuSans.ttf) normal normal 400 normal>) = 0.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXNonUnicode' (STIXNonUniBol.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans Mono' (DejaVuSansMono-BoldOblique.ttf) oblique normal bold normal>) = 11.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeFiveSym' (STIXSizFiveSymReg.ttf) normal normal regular normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeTwoSym' (STIXSizTwoSymBol.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXGeneral' (STIXGeneralBol.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeOneSym' (STIXSizOneSymBol.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXNonUnicode' (STIXNonUni.ttf) normal normal regular normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXGeneral' (STIXGeneralItalic.ttf) italic normal 400 normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXGeneral' (STIXGeneralBolIta.ttf) italic normal bold normal>) = 11.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DejaVu Sans' (DejaVuSans-Oblique.ttf) oblique normal 400 normal>) = 1.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeThreeSym' (STIXSizThreeSymReg.ttf) normal normal regular normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'STIXSizeThreeSym' (STIXSizThreeSymBol.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Bell MT' (BELLI.TTF) italic normal 400 normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Informal Roman' (INFROMAN.TTF) normal normal roman normal>) = 10.145
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Perpetua' (PERI____.TTF) italic normal 400 normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Cooper Black' (COOPBL.TTF) normal normal black normal>) = 10.525
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Matura MT Script Capitals' (MATURASC.TTF) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Tw Cen MT' (TCB_____.TTF) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Tw Cen MT Condensed Extra Bold' (TCCEB.TTF) normal normal bold condensed>) = 10.535
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Georgia Pro' (GeorgiaPro-LightItalic.ttf) italic normal light normal>) = 11.24
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Bodoni MT' (BOD_CBI.TTF) italic normal bold condensed>) = 11.535
DEBUG:matplotlib.font_manager:findfont: score(<Font 'SimHei' (simhei.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Segoe Script' (segoesc.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Eras Demi ITC' (ERASDEMI.TTF) normal normal demi normal>) = 10.24
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Centaur' (CENTAUR.TTF) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'DengXian' (Dengb.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Verdana Pro' (VerdanaPro-BoldItalic.ttf) italic normal bold normal>) = 11.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Gill Sans Nova' (GillSansCondBoNova.ttf) normal normal bold condensed>) = 10.535
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Book Antiqua' (ANTQUAI.TTF) italic normal book normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'MV Boli' (mvboli.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Corbel' (corbelb.ttf) normal normal bold normal>) = 10.335
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Gill Sans MT' (GILI____.TTF) italic normal 400 normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Franklin Gothic Medium Cond' (FRAMDCN.TTF) normal normal medium condensed>) = 10.344999999999999
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Ink Free' (Inkfree.ttf) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Algerian' (ALGER.TTF) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Bookman Old Style' (BOOKOSBI.TTF) italic normal book normal>) = 11.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Blackadder ITC' (ITCBLKAD.TTF) normal normal black normal>) = 10.525
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Lucida Fax' (LFAXDI.TTF) italic normal demibold normal>) = 11.24
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Old English Text MT' (OLDENGL.TTF) normal normal 400 normal>) = 10.05
DEBUG:matplotlib.font_manager:findfont: score(<Font 'Verdana Pro' (VerdanaPro-Bold.ttf) normal normal bold normal>) = 10.335
###Markdown
Generate statistics with NumPy and Pandas
###Code
np.min(data[:,1:4], axis=0)
from scipy.stats import kurtosis, skew
skew(data[:,1:4])
###Output
_____no_output_____
###Markdown
You can select a column (slice) of data, or a subset of data. In the example below we select the first 10 rows and columns 1 onwards (i.e. skipping the first column, which represents the time).
###Code
data[:10,1:]
###Output
_____no_output_____
###Markdown
Out of the box, Pandas gives you some statistics; do not forget to convert your array into a DataFrame.
###Code
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')))
pd.DataFrame.describe(data_frame)
data_frame.rolling(10).std()
###Output
_____no_output_____
###Markdown
Rolling / Sliding Window To apply statistics on a sliding (or rolling) window, we can use the rolling() function of a data frame. In the examples below, we roll with a 2-second time window to apply a std() and with a window of 100 data points to apply a skew()
###Code
rolling2s = data_frame.rolling('2s').std()
plt.plot(rolling2s)
plt.show()
rolling100_data_points = data_frame.rolling(100).skew()
plt.plot(rolling100_data_points)
plt.show()
###Output
_____no_output_____
###Markdown
Zero Crossing
###Code
plt.hist(np.where(np.diff(np.sign(data[:,1]))))
plt.show()
###Output
_____no_output_____
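###Markdown
A complementary scalar summary (a sketch, not part of the original notebook): counting the zero crossings of the first acceleration axis directly
###Code
zero_crossings = np.sum(np.diff(np.sign(data[:, 1])) != 0)
print('Zero crossings on the x axis:', zero_crossings)
###Output
_____no_output_____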
###Markdown
Chinese Whispers for Python This is an implementation of the [Chinese Whispers](https://doi.org/10.3115/1654758.1654774) graph clustering algorithm in Python. Version Information
###Code
from chinese_whispers import __version__ as cw_version
from networkx import __version__ as nx_version
from matplotlib import __version__ as plt_version
print('Chinese Whispers {}'.format(cw_version))
print('NetworkX {}'.format(nx_version))
print('matplotlib {}'.format(plt_version))
###Output
_____no_output_____
###Markdown
Clustering
###Code
import networkx as nx
from chinese_whispers import chinese_whispers, aggregate_clusters
G = nx.karate_club_graph()
# Perform clustering of G, parameters weighting and seed can be omitted
chinese_whispers(G, weighting='top', seed=1337)
# Print the clusters in the descending order of size
print('ID\tCluster\n')
for label, cluster in sorted(aggregate_clusters(G).items(), key=lambda e: len(e[1]), reverse=True):
print('{}\t{}\n'.format(label, cluster))
###Output
_____no_output_____
###Markdown
Visualization
###Code
import matplotlib.pyplot as plt
colors = [1. / G.nodes[node]['label'] for node in G.nodes()]
nx.draw_networkx(G, cmap=plt.get_cmap('jet'), node_color=colors, font_color='white')
###Output
_____no_output_____
###Markdown
Sample data:
###Code
import vega_datasets
seattle_temps = vega_datasets.data.seattle_temps()
seattle_temp_extrema = (seattle_temps
.set_index('date')
.resample('W')
.apply(['min', 'max', 'mean'])
.temp
.reset_index()
.melt(id_vars='date', var_name='var', value_name='temp')
)
seattle_temp_extrema.head()
###Output
_____no_output_____
###Markdown
To visualize dataframes like this, Altair is very concise:
###Code
import altair as alt
alt.Chart(seattle_temp_extrema).mark_line().encode(x='date', y='temp', color='var')
###Output
_____no_output_____
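###Markdown
A typical variation of the same chart (just a sketch to illustrate what usually gets edited): swap the mark type while keeping the same encodings
###Code
alt.Chart(seattle_temp_extrema).mark_point().encode(x='date', y='temp', color='var')
###Output
_____no_output_____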
###Markdown
When working in the notebook, I find myself frequently copy-pasting this cell around, modifying the encoding, mark type, and the name of the dataframe. `autovega` is a helper tool that speeds up simple plotting workflows like this using Jupyter widgets.
###Code
import autovega
###Output
_____no_output_____
###Markdown
Wrap the `display_dataframe` function around a dataframe to render a GUI for choosing one of several plot types and encodings:
###Code
autovega.display_dataframe(seattle_temp_extrema)
###Output
_____no_output_____
###Markdown
To make this the default behavior, call `register_renderer`. Then autovega will be the default display formatter for all Pandas dataframes.
###Code
autovega.register_renderer()
seattle_temp_extrema
###Output
_____no_output_____
###Markdown
A small demo of background generator [should work in both Python 2 and Python 3]
###Code
from __future__ import print_function
from prefetch_generator import BackgroundGenerator, background,__doc__
print(__doc__)
###your super-mega data iterator
import numpy as np
import time
def iterate_minibatches(n_batches, batch_size=10):
for b_i in range(n_batches):
time.sleep(0.1) #here it could read file or SQL-get or do some math
X = np.random.normal(size=[batch_size,20])
y = np.random.randint(0,2,size=batch_size)
yield X,y
###Output
_____no_output_____
###Markdown
regular mode
###Code
%%time
#tqdm made in china
print('/'+'-'*42+' Progress Bar ' + '-'*42 + '\\')
for b_x,b_y in iterate_minibatches(50):
#training
time.sleep(0.1) #here it could use GPU for example
print('!',end=" ")
print()
###Output
/------------------------------------------ Progress Bar ------------------------------------------\
! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
CPU times: user 100 ms, sys: 20 ms, total: 120 ms
Wall time: 10.1 s
###Markdown
with prefetch
###Code
%%time
print('/'+'-'*42+' Progress Bar ' + '-'*42 + '\\')
for b_x,b_y in BackgroundGenerator(iterate_minibatches(50)):
#training
time.sleep(0.1) #here it could use some GPU
print('!',end=" ")
print()
###Output
/------------------------------------------ Progress Bar ------------------------------------------\
! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
CPU times: user 68 ms, sys: 16 ms, total: 84 ms
Wall time: 5.14 s
###Markdown
Same with decorator
###Code
###your super-mega data iterator again, now with background decorator
import numpy as np
import time
@background(max_prefetch=3)
def bg_iterate_minibatches(n_batches, batch_size=10):
for b_i in range(n_batches):
time.sleep(0.1) #here it could read file or SQL-get or do some math
X = np.random.normal(size=[batch_size,20])
y = np.random.randint(0,2,size=batch_size)
yield X,y
%%time
print('/'+'-'*42+' Progress Bar ' + '-'*42 + '\\')
for b_x,b_y in bg_iterate_minibatches(50):
#training
time.sleep(0.1)#you guessed it
print('!',end=" ")
print()
###Output
/------------------------------------------ Progress Bar ------------------------------------------\
! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
CPU times: user 56 ms, sys: 20 ms, total: 76 ms
Wall time: 5.14 s
###Markdown
Neuromorphic Computing Course 0. Example Code Download the program and move it up one directory.
###Code
# Delete everything in the content (current) directory on google colab
!rm -rf /content/* || echo rm -rf /content/* failed
# Clone git repo, change the branch and move it up by one level in the folder hierarchy
!git clone https://gitlab.socsci.ru.nl/snnsimulator/simsnn.git
!mv ./simsnn ./simsnnn
!mv ./simsnnn/* ./
!rm -rf simsnnn || echo rm -rf simsnnn failed
###Output
_____no_output_____
###Markdown
Creating a programmed neuron.
###Code
from simsnn.core.networks import Network
from simsnn.core.simulators import Simulator
# Create the network and the simulator object
net = Network()
sim = Simulator(net)
# Create a programmed neuron, that spikes on times 1 and 3,
# does not repeat it's programming and has the ID "pn".
programmed_neuron = net.createInputTrain(train=[0,1,0,1], loop=False, ID="pn")
# Add all neurons to the raster
sim.raster.addTarget(programmed_neuron)
# Add all neurons to the multimeter
sim.multimeter.addTarget(programmed_neuron)
# Run the simulation for 10 rounds, enable the plotting of the raster,
# the multimeter and the network structure.
sim.run(steps=10, plotting=True)
###Output
_____no_output_____
###Markdown
Do you understand what is going on? Connecting two neurons with a synapse.
###Code
from simsnn.core.networks import Network
from simsnn.core.simulators import Simulator
net = Network()
sim = Simulator(net)
programmed_neuron = net.createInputTrain(train=[0,1,0,1], loop=False, ID="pn")
# Create a LIF neuron, with a membrane voltage threshold of 1,
# a post spike reset value of 0 and no voltage decay (m=1).
lif_neuron = net.createLIF(ID="ln", thr=1, V_reset=0, m=1)
# Create a Synapse, between the programmed neuron and the LIF neuron,
# with a voltage weight of 1 and a delay of 1.
net.createSynapse(pre=programmed_neuron, post=lif_neuron, ID="pn-ln", w=1, d=1)
sim.raster.addTarget([programmed_neuron, lif_neuron])
sim.multimeter.addTarget([programmed_neuron, lif_neuron])
sim.run(steps=10, plotting=True)
###Output
_____no_output_____
###Markdown
Note how the LIF neuron does not ever seem to get any voltage. This is just an artifact of the timing of the voltage measurement. The voltages are measured at the end of every discrete timestep. When a LIF neuron spikes, its voltage will be reset to the V_reset value, which is 0 in this case. Creating an endlessly spiking neuron
###Code
from simsnn.core.networks import Network
from simsnn.core.simulators import Simulator
net = Network()
sim = Simulator(net)
# Create a neuron that has threshold of 4, a post spike reset value of 0,
# no voltage decay and a constant input current of 1
lif_neuron = net.createLIF(ID="ln", thr=4, V_reset=0, m=1, I_e=1)
sim.raster.addTarget([lif_neuron])
sim.multimeter.addTarget([lif_neuron])
sim.run(steps=10, plotting=True)
###Output
_____no_output_____
###Markdown
Using the unified query interface
###Code
# instantiate API object, will query metadata
r = UNFCCCApiReader()
# access metadata
r.parties
r.gases
# for obtaining information from the database, use r.query()
r.query?
# Note that only the "party_code" parameter is mandatory, gases can be left empty to query for all gases
r.query(party_code='AFG')
# the result is returned in a pandas DataFrame. Note that sometimes, unknown categories are returned (ex. "unkown category nr. 10503") and
# data points can have a numberValue and/or a stringValue such as "NO", "NE", or "C"
# Querying Annex-I parties for all gases leads to large queries which take a relatively long time to process
r.query(party_code='DEU')
# If you don't need all the information, it is beneficial to query for single gases only
# or use the more specialized query interface (see below)
r.query(party_code='DEU', gases=['N₂O'])
###Output
_____no_output_____
###Markdown
Using the specialized query interfaces for finer control
###Code
# API objects for annexOne and nonAnnexOne parties are available
r.annex_one_reader
r.non_annex_one_reader
# access metadata
r.annex_one_reader.parties
# other available metadata
#r.annex_one_reader.years
#r.annex_one_reader.classifications
#r.annex_one_reader.gases
#r.annex_one_reader.units
#r.annex_one_reader.conversion_factors
# categories and measures are available in hierarchies
#r.annex_one_reader.category_tree
#r.annex_one_reader.measure_tree
# for easier viewing, use the associated methods; note the id in brackets that you need if you want to query for a specific category/measure
#r.annex_one_reader.show_measure_hierarchy()
r.annex_one_reader.show_category_hierarchy()
# for obtaining information from the database, use query()
r.annex_one_reader.query?
# Fine-grained control is possible
# Ex. query for german net emissions/removals of CO₂ in the category 5.A.1.a
# You have to provide categories and measures using IDs, because names are not necessarily unique
r.annex_one_reader.query(party_codes=['DEU'], category_ids=[9839], gases=['CO₂'], measure_ids=[10460])
df = r.query(party_code='AFG')
df.reset_index?
###Output
_____no_output_____
###Markdown
We create 3 nice flattened tables; with some definitions we may be able to generalise this. I am a big fan :) For more products we just need to combine the identifier with the attributes
###Code
attributes = Magic(response.dot_dict.GetMatchingProductResult.Product.AttributeSets.ItemAttributes)
ids = Magic(response.dot_dict.GetMatchingProductResult.Product.Identifiers)
relationships = Magic(response.dot_dict.GetMatchingProductResult.Product.Relationships)
pd.DataFrame(relationships.rowdata)
pd.DataFrame(attributes.dictdata, [1]).T
pd.DataFrame(ids.dictdata, [1])
###Output
_____no_output_____
###Markdown
If we try the complete XML, it's a lot uglier. But you can try it and it will hopefully do something like the above.
###Code
complete = Magic(response.dot_dict)
pd.DataFrame(complete.dictdata, [1]).T
pd.DataFrame(Magic(response.dot_dict).rowdata)
###Output
_____no_output_____
###Markdown
How to use radon.py
###Code
import torch
import matplotlib.pyplot as plt
import radon_transformation.radon as radon
# install scikit-image (import sys first so we can call pip for the current interpreter)
import sys
!{sys.executable} -m pip install scikit-image
# for dataset
from skimage.data import shepp_logan_phantom
###Output
###Markdown
Load Data First load your data. In this example we simulate a dataset with batch size 5. The device can be either "cpu" or "cuda". Please be aware that calculations on the GPU are much faster than on the CPU.
###Code
device = "cuda"
batchsize = 5
# load example image
image = shepp_logan_phantom()
# transform to tensor and simulate a dataset with batchsize 5
image = torch.tensor(image).float().to(device)
image = image[None, None].repeat(batchsize, 1, 1, 1)
plt.imshow(image[0,0].cpu())
plt.colorbar()
plt.title("Input Image")
plt.show()
###Output
_____no_output_____
###Markdown
Apply Radon Transformation and FBP Apply the radon transformation and filtered backprojection. You can change the number of projection angles by changing the value of *n_angles*.
###Code
# setting
n_angles = 1000
image_size = image.shape[-1]
# get operators
radon_op, fbp_op = radon.get_operators(n_angles=n_angles, image_size=image_size, circle=True, device=device)
# apply radon transformation
sino = radon_op(image)
plt.imshow(sino[0,0].cpu())
plt.colorbar()
plt.title("Sinogram")
plt.show()
# apply filtered backprojection
reconstructed = fbp_op(sino)
plt.imshow(reconstructed[0,0].cpu())
plt.colorbar()
plt.title("Reconstruction")
plt.show()
###Output
_____no_output_____
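###Markdown
As a quick numerical sanity check (a sketch, not part of the original example), the reconstruction can be compared to the input image
###Code
mse = torch.mean((reconstructed - image) ** 2)
print('Reconstruction MSE: {:.6f}'.format(mse.item()))
###Output
_____no_output_____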
###Markdown
###Code
%tensorflow_version 1.x
from google.colab import drive
ROOT = '/content/drive'
drive.mount(ROOT)
%cd '/content/drive/My Drive/Colab Notebooks/CTCModel'
!tar xvf ./seqDigits.pkl.tar.gz
import sys
sys.path.append('/content/drive/My Drive/Colab Notebooks/CTCModel')
from keras.layers import TimeDistributed, Activation, Dense, Input, Bidirectional, LSTM, Masking, GaussianNoise
from keras.optimizers import Adam
from CTCModel import CTCModel
import pickle
from keras.preprocessing import sequence
import numpy as np
def create_network(nb_features, nb_labels, padding_value):
# Define the network architecture
input_data = Input(name='input', shape=(None, nb_features)) # nb_features = image height
masking = Masking(mask_value=padding_value)(input_data)
noise = GaussianNoise(0.01)(masking)
blstm = Bidirectional(LSTM(128, return_sequences=True, dropout=0.1))(noise)
blstm = Bidirectional(LSTM(128, return_sequences=True, dropout=0.1))(blstm)
blstm = Bidirectional(LSTM(128, return_sequences=True, dropout=0.1))(blstm)
dense = TimeDistributed(Dense(nb_labels + 1, name="dense"))(blstm)
outrnn = Activation('softmax', name='softmax')(dense)
network = CTCModel([input_data], [outrnn])
network.compile(Adam(lr=0.0001))
return network
(x_train, y_train), (x_test, y_test) = pickle.load(open('./seqDigits.pkl', 'rb'))
nb_labels = 10 # number of labels (10, this is digits)
batch_size = 32 # size of the batch that are considered
padding_value = 255 # value for padding input observations
nb_epochs = 10 # number of training epochs
nb_train = len(x_train)
nb_test = len(x_test)
nb_features = len(x_train[0][0])
# create list of input lengths
x_train_len = np.asarray([len(x_train[i]) for i in range(nb_train)])
x_test_len = np.asarray([len(x_test[i]) for i in range(nb_test)])
y_train_len = np.asarray([len(y_train[i]) for i in range(nb_train)])
y_test_len = np.asarray([len(y_test[i]) for i in range(nb_test)])
# pad inputs
x_train_pad = sequence.pad_sequences(x_train, value=float(padding_value), dtype='float32',
padding="post", truncating='post')
x_test_pad = sequence.pad_sequences(x_test, value=float(padding_value), dtype='float32',
padding="post", truncating='post')
y_train_pad = sequence.pad_sequences(y_train, value=float(nb_labels),
dtype='float32', padding="post")
y_test_pad = sequence.pad_sequences(y_test, value=float(nb_labels),
dtype='float32', padding="post")
# define a recurrent network using CTCModel
network = create_network(nb_features, nb_labels, padding_value)
# CTC training
network.fit(x=[x_train_pad, y_train_pad, x_train_len, y_train_len], y=np.zeros(nb_train), \
batch_size=batch_size, epochs=nb_epochs)
# Evaluation: loss, label error rate and sequence error rate are requested
eval = network.evaluate(x=[x_test_pad, y_test_pad, x_test_len, y_test_len],\
batch_size=batch_size, metrics=['loss', 'ler', 'ser'])
# predict label sequences
pred = network.predict([x_test_pad, x_test_len], batch_size=batch_size, max_value=padding_value)
for i in range(10): # print the 10 first predictions
print("Prediction :", [j for j in pred[i] if j!=-1], " -- Label : ", y_test[i]) #
###Output
_____no_output_____
###Markdown
Setup...
###Code
# utility functions
import shutil
from cassandra.cluster import Cluster
import pandas as pd
def rm_folder(path):
shutil.rmtree(path, ignore_errors=True)
CASSANDRA_CLUSTER = Cluster(['127.0.0.1'], port=9042)
CASSANDRA_SESSION = CASSANDRA_CLUSTER.connect()
def cassandra_query(query):
return pd.DataFrame(list(CASSANDRA_SESSION.execute(query)))
# testing online feature store on cassandra container (make cassandra-up)
cassandra_query("SELECT * FROM system_schema.tables WHERE keyspace_name = 'feature_store'")
# setup spark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
conf = SparkConf().setAll(
[
("spark.sql.session.timeZone", "UTC"),
("spark.sql.sources.partitionOverwriteMode", "dynamic"),
]
)
spark = (
SparkSession.builder.config(conf=conf)
.appName("legiti-challenge")
.getOrCreate()
)
###Output
_____no_output_____
###Markdown
UserOrdersPipeline Example Showing several interval executions for the pipeline that creates the feature set `user_orders` from the `user` entity
###Code
# create pipeline from declaration
from legiti_challenge.feature_store_pipelines.user import UserOrdersPipeline
user_orders_pipeline = UserOrdersPipeline()
# clean local historical feature store table
user_orders_path = "data/feature_store/historical/user/user_orders"
rm_folder(user_orders_path)
# backfilling all historical data until 2020-05-10
user_orders_pipeline.run(end_date="2020-05-10")
# showing historical feature store results
spark.read.parquet(user_orders_path).orderBy("timestamp").toPandas()
# showing online feature store results
spark.table("online_feature_store__user_orders").orderBy("timestamp").toPandas()
# daily run for the date 2020-05-11
user_orders_pipeline.run_for_date("2020-05-11")
# showing historical feature store results
spark.read.parquet(user_orders_path).orderBy("timestamp").toPandas()
###Output
_____no_output_____
###Markdown
3 new records were added to the table with feature states calculated just for the 2020-05-11 date. Records from the other table partitions were not touched.
###Code
# showing online feature store results
spark.table("online_feature_store__user_orders").orderBy("timestamp").toPandas()
# daily run for the date 2020-05-12
user_orders_pipeline.run_for_date("2020-05-12")
# showing historical feature store results
spark.read.parquet(user_orders_path).orderBy("timestamp").toPandas()
# showing online feature store results
spark.table("online_feature_store__user_orders").orderBy("timestamp").toPandas()
# daily run for the date 2020-05-13
user_orders_pipeline.run_for_date("2020-05-13")
# showing historical feature store results
spark.read.parquet(user_orders_path).orderBy("timestamp").toPandas()
# showing online feature store results
spark.table("online_feature_store__user_orders").orderBy("timestamp").toPandas()
# daily run for the date 2020-05-14
user_orders_pipeline.run_for_date("2020-05-14")
# showing historical feature store results
spark.read.parquet(user_orders_path).orderBy("timestamp").toPandas()
# showing online feature store results
spark.table("online_feature_store__user_orders").orderBy("timestamp").toPandas()
# backfilling from 2020-05-15 to 2020-07-17, this way completing all the data time line.
user_orders_pipeline.run(start_date="2020-05-15", end_date="2020-07-17")
# showing historical feature store results
spark.read.parquet(user_orders_path).orderBy("timestamp").toPandas()
# showing online feature store results
spark.table("online_feature_store__user_orders").orderBy("timestamp").toPandas()
###Output
_____no_output_____
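###Markdown
A quick way to double-check the incremental behavior described above (a sketch relying only on the `timestamp` column already shown): count historical records per day and confirm that only the newly processed dates gained rows
###Code
from pyspark.sql import functions as F
(spark.read.parquet(user_orders_path)
    .groupBy(F.to_date("timestamp").alias("day"))
    .count()
    .orderBy("day")
    .show())
###Output
_____no_output_____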
###Markdown
UserChargebacksPipeline Example Showing the pipeline run over the whole dataset timeline for the feature set `user_chargebacks` from the `user` entity
###Code
# create pipeline from declaration
from legiti_challenge.feature_store_pipelines.user import UserChargebacksPipeline
user_chargebacks_pipeline = UserChargebacksPipeline()
# clean local historical feature store table
user_chargebacks_path = "data/feature_store/historical/user/user_chargebacks"
rm_folder(user_chargebacks_path)
# backfilling all historical data until 2020-07-17
user_chargebacks_pipeline.run(end_date="2020-07-17")
# showing historical feature store results
spark.read.parquet(user_chargebacks_path).orderBy("timestamp").toPandas()
# showing online feature store results
spark.table("online_feature_store__user_chargebacks").orderBy("timestamp").toPandas()
###Output
_____no_output_____
###Markdown
Creating the AwesomeDataset Enriching order events with features from both feature sets
###Code
from legiti_challenge.dataset_pipelines import AwesomeDatasetPipeline
awesome_dataset_pipeline = AwesomeDatasetPipeline()
# creating dataset
awesome_dataset_pipeline.run()
# showing created CSV dataset
awesome_dataset_path = "data/datasets/awesome_dataset"
spark.read.option("header", True).csv(awesome_dataset_path).orderBy("timestamp").toPandas()
###Output
_____no_output_____
###Markdown
Example
###Code
import numpy as np
import pandas as pd
from SWRsimulation.SWRsimulationCE import SWRsimulationCE
# load Damodaran data from pickle
RETURN_FILE = 'histretSP'
def load_returns():
return pd.read_pickle('%s.pickle' % RETURN_FILE)
download_df = load_returns()
return_df = download_df.iloc[:, [0, 3, 12]]
return_df.columns=['stocks', 'bonds', 'cpi']
return_df
# calculate real returns
# should adjust CPI to year-ending also but leave it for now (seems to be annual avg index vs prev year avg)
real_return_df = return_df.copy()
# real_return_df.loc[1948:, 'cpi'] = cpi_test['cpi_fred']
# adjust returns for inflation
real_return_df['stocks'] = (1 + real_return_df['stocks']) / (1 + real_return_df['cpi']) - 1
real_return_df['bonds'] = (1 + real_return_df['bonds']) / (1 + real_return_df['cpi']) - 1
real_return_df.drop('cpi', axis=1, inplace=True)
real_return_df.to_pickle('real_return_df.pickle')
real_return_df
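# Quick illustrative check of the inflation adjustment above (not part of the original notebook):
# a 10% nominal return under 3% inflation is a real return of (1 + 0.10) / (1 + 0.03) - 1, about 6.8%
print((1 + 0.10) / (1 + 0.03) - 1)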
N_RET_YEARS = 30
FIXED_PCT = 3.5
VARIABLE_PCT = 1.0
FLOOR_PCT = 0.0
ALLOC_STOCKS = 0.75
ALLOC_BONDS = 0.25
GAMMA = 1.0
s = SWRsimulationCE({
'simulation': {'returns_df': real_return_df,
'n_ret_years': N_RET_YEARS,
},
'allocation': {'asset_weights': np.array([ALLOC_STOCKS, ALLOC_BONDS])},
'withdrawal': {'fixed_pct': FIXED_PCT,
'variable_pct': VARIABLE_PCT,
'floor_pct': FLOOR_PCT,
},
'evaluation': {'gamma': GAMMA},
'visualization': {'histogram': True,
'chart_1' : {'title': 'Years to Exhaustion by Retirement Year',
'annotation': "Fixed spend %.1f, Variable spend %.1f, stocks %.1f%%" % (FIXED_PCT,
VARIABLE_PCT,
100 * ALLOC_STOCKS)
},
'chart_2' : {'title': 'Spending By Retirement Year',
},
'chart_3' : {'title': 'Portfolio Value By Retirement Year',
},
}
})
s.simulate()
print(s)
s.visualize()
###Output
_____no_output_____
###Markdown
pyiron example notebook This is a placeholder example notebook running an atomistic LAMMPS job.
###Code
from pyiron_src import Project
pr = Project("projects/example")
job = pr.create.job.Lammps('lmp')
job.structure = pr.create.structure.bulk('Al', cubic=True)
job.run()
job.output.energy_pot
pr.remove_jobs_silently(recursive=True)
pr.remove(enable=True)
###Output
_____no_output_____
###Markdown
PDF is garbage In this example, we are looking for a link to some source code: [`http://prodege.jgi-psf.org//downloads/src`](http://prodege.jgi-psf.org//downloads/src). However, in the PDF, the URL is line wrapped, so the `src` is lost.
###Code
urlre = re.compile( '(?P<url>https?://[^\s]+)' )
for page in doc :
print urlre.findall( page )
###Output
[]
['http://prodege.jgi-psf.org//downloads/', 'http://prodege.jgi-psf.org,']
[]
['http://img.jgi.', 'http://www.nature.com/ismej)']
###Markdown
PDF is garbage, continued If we remove line breaks to fix URLs that have been wrapped, we discover that the visible line breaks in the document do not correspond to actual line breaks in the represented text. The result is random garbage.
###Code
urlre = re.compile( '(?P<url>https?://[^\s]+)' )
for page in doc :
print urlre.findall( page.replace('\n','') )
###Output
[]
['http://prodege.jgi-psf.org//downloads/availablerun', 'http://prodege.jgi-psf.org,which']
[]
['http://img.jgi.Cell', 'http://creativecommons.org/licenses/by/4.0/the', 'http://www.nature.com/ismej)The']
###Markdown
Nope. At this point, the author elects to flip a table. Let's try looking at the HTML version. I'll swipe some code from [Dive into Python](http://www.diveintopython.net/) here, because finding URLs in an HTML document is what is known as a "Solved Problem."
###Code
from sgmllib import SGMLParser
class URLLister(SGMLParser):
def reset(self):
SGMLParser.reset(self)
self.urls = []
def start_a(self, attrs):
href = [v for k, v in attrs if k=='href']
if href:
self.urls.extend(href)
def get_urls_from(url):
url_list = []
import urllib
usock = urllib.urlopen(url)
parser = URLLister()
parser.feed(usock.read())
usock.close()
parser.close()
map(url_list.append,
[item for item in parser.urls if item.startswith(('http', 'ftp', 'www'))])
return url_list
###Output
_____no_output_____
###Markdown
Here are all the URLs in the document...
###Code
urls = get_urls_from('http://www.nature.com/ismej/journal/v10/n1/full/ismej2015100a.html')
urls
###Output
_____no_output_____
###Markdown
Bleh. That is mostly links in the references, ads and navigation cruft from the journal's content mismanagement system. Because their system is heinously *ad hoc*, there is no base URL. So, we're forced to use an *ad hoc* exclusion list.
###Code
excluded = [ 'http://www.nature.com',
'http://dx.doi.org',
'http://www.ncbi.nlm.nih.gov',
'http://creativecommons.org',
'https://s100.copyright.com',
'http://mts-isme.nature.com',
'http://www.isme-microbes.org',
'http://ad.doubleclick.net',
'http://mse.force.com',
'http://links.isiglobalnet2.com',
'http://www.readcube.com',
'http://chemport.cas.org',
'http://publicationethics.org/',
'http://www.natureasia.com/'
]
def novel_url( url ) :
for excluded_url in excluded :
if url.startswith( excluded_url ) :
return False
return True
filter( novel_url, urls )
###Output
_____no_output_____
###Markdown
Much better. Now, let's see if these exist...
###Code
import requests
for url in filter( novel_url, urls ) :
request = requests.get( url )
if request.status_code == 200:
print 'Good : ', url
else:
print 'Fail : ', url
###Output
Good : http://prodege.jgi-psf.org//downloads/src
Good : http://prodege.jgi-psf.org
Fail : http://img.jgi.doe.gov/w/doc/SingleCellDataDecontamination.pdf
###Markdown
Looks like this will work, though we'll need to make a hand-curated list of excluded URLs. Otherwise, the counts of dead links could be badly skewed by any issues within the journal's content mismanagement system, ad servers and other irrelevant crud. Walking through Zotero Let's try walking through the publications in a Zotero library...
###Code
from pyzotero import zotero
api_key = open( 'zotero_api_key.txt' ).read().strip()
library_id = open( 'zotero_api_userID.txt' ).read().strip()
library_type = 'group'
group_id = '405341' # microBE.net group ID
zot = zotero.Zotero(group_id, library_type, api_key)
items = zot.top(limit=5)
# we've retrieved the latest five top-level items in our library
# we can print each item's item type and ID
for item in items:
#print('Item: %s | Key: %s') % (item['data']['itemType'], item['data']['key'])
print item['data']['key'], ':', item['data']['title']
###Output
QC9BAHIK : ProDeGe: a computational protocol for fully automated decontamination of genomes
E7S5UR96 : In search of non-photosynthetic Cyanobacteria
T9GDRBT5 : Evidence-based recommendations on storing and handling specimens for analyses of insect microbiota
BJJUJW48 : Cautionary tale of using 16S rRNA gene sequence similarity values in identification of human-associated bacterial species
QD3JS59Z : ConStrains identifies microbial strains in metagenomic datasets
###Markdown
So far so good. Let's have a look at the `url` attribute...
###Code
for item in items:
print item['data']['key'], ':', item['data']['url']
###Output
QC9BAHIK : http://www.nature.com/ismej/journal/v10/n1/full/ismej2015100a.html
E7S5UR96 : http://espace.library.uq.edu.au/view/UQ:368958
T9GDRBT5 : https://peerj.com/articles/1190
BJJUJW48 :
QD3JS59Z : http://www.nature.com/nbt/journal/v33/n10/full/nbt.3319.html
###Markdown
Well, it looks like not all resources have URLs. Let's try looping over some of these and extracting links...
###Code
for item in items:
paper_url = item['data']['url']
if paper_url.startswith( 'http' ) :
link_urls = get_urls_from( paper_url )
print item['data']['key']
for url in filter( novel_url, link_urls ) :
print ' ', url
###Output
QC9BAHIK
http://prodege.jgi-psf.org//downloads/src
http://prodege.jgi-psf.org
http://img.jgi.doe.gov/w/doc/SingleCellDataDecontamination.pdf
E7S5UR96
http://www.uq.edu.au/
http://www.uq.edu.au/
http://www.uq.edu.au/contacts/
http://www.uq.edu.au/study/
http://www.uq.edu.au/maps/
http://www.uq.edu.au/news/
http://www.uq.edu.au/events/
http://www.library.uq.edu.au/
http://my.uq.edu.au/
http://ezproxy.library.uq.edu.au/login?url=http://dx.doi.org/10.14264/uql.2015.855
http://espace.library.uq.edu.au/list/author/Soo%2C+Rochelle+Melissa/
http://espace.library.uq.edu.au/list/?cat=quick_filter&search_keys%5Bcore_70%5D=School of Chemistry and Molecular Biosciences
http://espace.library.uq.edu.au/list/subject/452051/
http://espace.library.uq.edu.au/list/subject/452105/
http://espace.library.uq.edu.au/list/?cat=quick_filter&search_keys%5B0%5D=Melainabacteria
http://espace.library.uq.edu.au/list/?cat=quick_filter&search_keys%5B0%5D=Cyanobacteria
http://espace.library.uq.edu.au/list/?cat=quick_filter&search_keys%5B0%5D=Metabolism
http://scholar.google.com/scholar?q=intitle:"In search of non-photosynthetic Cyanobacteria"
http://www.uq.edu.au/
http://www.uq.edu.au/ipswich/
http://www.uq.edu.au/gatton/
http://www.uq.edu.au/about/herston-campus
http://www.uq.edu.au/maps/
http://www.universitiesaustralia.edu.au/
http://www.universitas21.com/
http://www.edx.org/
http://www.go8.edu.au/
http://www.uq.edu.au/terms-of-use/
http://www.uq.edu.au/rti/
http://www.library.uq.edu.au/feedback/add
http://www.uq.edu.au/about/cricos-link
http://www.uq.edu.au/omc/media
http://www.pf.uq.edu.au/emerg.html
https://www.facebook.com/uniofqld
http://twitter.com/uqnewsonline
http://www.flickr.com/photos/uqnews/sets/
http://instagram.com/uniofqld
https://www.youtube.com/universityqueensland
http://vimeo.com/uq
http://www.uq.edu.au/itunes/
http://www.linkedin.com/edu/school?id=10238
http://www.alumni.uq.edu.au/giving
http://www.uq.edu.au/departments/
http://www.uq.edu.au/uqjobs/
http://www.uq.edu.au/contacts/
http://www.uq.edu.au/services/
http://www.uq.edu.au/uqanswers/
http://fez.library.uq.edu.au/
T9GDRBT5
https://peerj.com/blog/
http://www.mendeley.com/import/?doi=10.7717/peerj.1190
http://twitter.com/share?url=https%3A%2F%2Fpeerj.com%2Farticles%2F1190%2F&via=thePeerJ&text=Storage%20methods%20and%20insect%20microbiota&related=
http://www.facebook.com/sharer.php?u=https%3A%2F%2Fpeerj.com%2Farticles%2F1190%2F
https://plus.google.com/share?url=https%3A%2F%2Fpeerj.com%2Farticles%2F1190%2F
http://twitter.com/share?url=https%3A%2F%2Fpeerj.com%2Farticles%2F1190%2F&via=thePeerJ&text=Storage%20methods%20and%20insect%20microbiota&related=
http://www.facebook.com/sharer.php?u=https%3A%2F%2Fpeerj.com%2Farticles%2F1190%2F
https://plus.google.com/share?url=https%3A%2F%2Fpeerj.com%2Farticles%2F1190%2F
https://doi.org/10.7717/peerj.1190
https://doi.org/10.7717/peerj.1190
https://doi.org/10.1146%2Fannurev.ento.49.061802.123416
https://doi.org/10.1111%2F1574-6976.12025
https://doi.org/10.1146%2Fannurev-ento-010814-020822
https://doi.org/10.1038%2Fnrmicro3382
https://doi.org/10.1016%2F0305-1978%2893%2990012-G
https://doi.org/10.1046%2Fj.1365-294x.1999.00795.x
https://doi.org/10.1111%2Fj.1570-7458.2006.00451.x
https://doi.org/10.1071%2FIS12067
https://doi.org/10.1371%2Fjournal.pone.0061218
https://doi.org/10.1371%2Fjournal.pone.0086995
https://doi.org/10.1111%2Fmec.12209
https://doi.org/10.1371%2Fjournal.pone.0079061
https://doi.org/10.1603%2F0022-2585-41.3.340
https://doi.org/10.1111%2Fmec.12611
https://doi.org/10.1111%2Fj.1365-294X.2012.05752.x
https://doi.org/10.1111%2Fj.1574-6968.2010.01965.x
https://doi.org/10.1371%2Fjournal.pone.0070460
https://scholar.google.com/scholar_lookup?title=Tissue%20storage%20and%20primer%20selection%20influence%20pyrosequencing-based%20inferences%20of%20diversity%20and%20community%20composition%20of%20endolichenic%20and%20endophytic%20fungi&author=U%E2%80%99Ren&publication_year=2014
https://doi.org/10.1186%2F1471-2180-14-103
https://doi.org/10.1073%2Fpnas.1319284111
https://doi.org/10.1128%2FAEM.01886-10
https://doi.org/10.1371%2Fjournal.pone.0086995
https://doi.org/10.1111%2Fmec.12611
https://doi.org/10.1111%2F1755-0998.12331
https://doi.org/10.1111%2Fj.1574-6968.2010.01965.x
https://doi.org/10.1371%2Fjournal.pone.0070460
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-1-2x.jpg
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-1-full.png
https://doi.org/10.7717/peerj.1190/fig-1
https://doi.org/10.1371%2Fjournal.pone.0086995
https://doi.org/10.1002%2Fmbo3.216
https://doi.org/10.1071%2FIS12067
https://doi.org/10.1007%2Fs13127-010-0012-4
https://doi.org/10.1111%2Fele.12282
https://doi.org/10.1038%2Fismej.2012.8
https://doi.org/10.1038%2Fnmeth.2604
https://doi.org/10.1098%2Frspb.2014.1988
https://doi.org/10.1128%2FAEM.00062-07
https://doi.org/10.1038%2Fismej.2011.139
https://scholar.google.com/scholar_lookup?title=R:%20a%20language%20and%20environment%20for%20statistical%20computing&author=&publication_year=2013
http://CRAN.R-project.org/package=vegan
http://had.co.nz/ggplot2/book
https://doi.org/10.1111%2Fj.1442-9993.2001.01070.pp.x
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-2-2x.jpg
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-2-full.png
https://doi.org/10.7717/peerj.1190/fig-2
https://doi.org/10.1111%2Fj.1365-294X.2012.05752.x
https://doi.org/10.1371%2Fjournal.pone.0061218
https://doi.org/10.1128%2FAEM.01226-14
https://doi.org/10.1111%2Fj.1574-6968.2010.01965.x
https://scholar.google.com/scholar_lookup?title=Tissue%20storage%20and%20primer%20selection%20influence%20pyrosequencing-based%20inferences%20of%20diversity%20and%20community%20composition%20of%20endolichenic%20and%20endophytic%20fungi&author=U%E2%80%99Ren&publication_year=2014
https://doi.org/10.1186%2F1471-2180-14-103
https://doi.org/10.1073%2Fpnas.1319284111
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-3-2x.jpg
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-3-full.png
https://doi.org/10.7717/peerj.1190/fig-3
https://doi.org/10.7717/peerj.1190/table-1
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-4-2x.jpg
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/fig-4-full.png
https://doi.org/10.7717/peerj.1190/fig-4
https://doi.org/10.1073%2Fpnas.1405838111
https://doi.org/10.1016%2F0020-1790%2885%2990020-4
https://doi.org/10.1073%2Fpnas.0807920105
https://doi.org/10.1371%2Fjournal.pone.0061218
https://doi.org/10.1371%2Fjournal.pone.0086995
https://doi.org/10.1111%2Fmec.12611
https://doi.org/10.7717/peerj.1190/supp-1
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/FigS1.pdf
https://doi.org/10.7717/peerj.1190/supp-2
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/FigS2.pdf
https://doi.org/10.7717/peerj.1190/supp-3
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/FigS3.pdf
https://doi.org/10.7717/peerj.1190/supp-4
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/Table_S1.xlsx
https://doi.org/10.7717/peerj.1190/supp-5
https://dfzljdn9uc3pi.cloudfront.net/2015/1190/1/Table_S2.docx
https://doi.org/10.1111%2Fj.1442-9993.2001.01070.pp.x
https://doi.org/10.1111%2Fele.12282
https://doi.org/10.1603%2F0022-2585-41.3.340
https://doi.org/10.1038%2Fismej.2012.8
https://doi.org/10.1038%2Fnrmicro3382
https://doi.org/10.1111%2Fj.1365-294X.2012.05752.x
https://doi.org/10.1146%2Fannurev.ento.49.061802.123416
https://doi.org/10.1186%2F1471-2180-14-103
https://doi.org/10.1146%2Fannurev-ento-010814-020822
https://doi.org/10.1038%2Fnmeth.2604
https://doi.org/10.1111%2F1574-6976.12025
https://doi.org/10.1371%2Fjournal.pone.0079061
https://doi.org/10.1073%2Fpnas.0807920105
https://doi.org/10.1073%2Fpnas.1319284111
https://doi.org/10.1046%2Fj.1365-294x.1999.00795.x
https://doi.org/10.1371%2Fjournal.pone.0086995
https://doi.org/10.1371%2Fjournal.pone.0061218
https://doi.org/10.1111%2Fmec.12209
https://doi.org/10.1073%2Fpnas.1405838111
https://doi.org/10.1111%2Fj.1574-6968.2010.01965.x
https://doi.org/10.1111%2Fj.1570-7458.2006.00451.x
https://doi.org/10.1038%2Fismej.2011.139
https://doi.org/10.1071%2FIS12067
https://doi.org/10.1007%2Fs13127-010-0012-4
http://CRAN.R-project.org/package=vegan
http://CRAN.R-project.org/package=vegan
https://doi.org/10.1016%2F0305-1978%2893%2990012-G
https://doi.org/10.1098%2Frspb.2014.1988
https://scholar.google.com/scholar_lookup?title=R:%20a%20language%20and%20environment%20for%20statistical%20computing&author=&publication_year=2013
https://doi.org/10.1128%2FAEM.01886-10
https://doi.org/10.1371%2Fjournal.pone.0070460
https://doi.org/10.1002%2Fmbo3.216
https://doi.org/10.1111%2Fmec.12611
https://doi.org/10.1016%2F0020-1790%2885%2990020-4
https://scholar.google.com/scholar_lookup?title=Tissue%20storage%20and%20primer%20selection%20influence%20pyrosequencing-based%20inferences%20of%20diversity%20and%20community%20composition%20of%20endolichenic%20and%20endophytic%20fungi&author=U%E2%80%99Ren&publication_year=2014
https://doi.org/10.1128%2FAEM.00062-07
http://had.co.nz/ggplot2/book
http://had.co.nz/ggplot2/book
https://doi.org/10.1111%2F1755-0998.12331
https://doi.org/10.1128%2FAEM.01226-14
http://www.mendeley.com/import/?doi=10.7717/peerj.1190
https://www.facebook.com/
http://www.lib.noaa.gov/noaa_research.xml
http://www.microbiomedigest.com/
http://www.tobinhammer.com/publications-and-twitter-feed.html
https://m.facebook.com/
http://m.facebook.com
http://m.facebook.com/
http://www.facebook.com/Gertruda
http://www.traackr.com/
http://apps.webofknowledge.com.proxy2.library.illinois.edu/full_record.do
http://apps.webofknowledge.com/Search.do
http://apps.webofknowledge.com/full_record.do
http://plus.url.google.com/url
http://2015.maintenance.academicanalytics.com/PersonQuadrants/PersonQuadrants
http://adobe.com/apollo
http://apps.webofknowledge.com.ezproxy2.library.arizona.edu/summary.do
http://apps.webofknowledge.com/summary.do
http://feedly.com/i/category/Open%20Access
http://l.facebook.com/l.php
http://scholar.glgoo.org/scholar
http://scholar.google.com.sci-hub.io/
http://scholar.google.com.secure.sci-hub.io/scholar
http://search.aol.com/aol/search
http://sfx.kcl.ac.uk/kings
http://sfx.unimi.it/unimi
http://sfxhosted.exlibrisgroup.com/emu
http://www.ask.com/web
http://www.sciencedirect.com/science/article/pii/030519789390012G
http://www.scopus.com/record/display.uri
http://www.scopus.com/results/citedbyresults.url
http://www.sogou.com/
https://blu182.mail.live.com/
https://dx.doi.org/10.7717/peerj.1190/supp-3
https://exchange.ou.edu/owa/redir.aspx
https://l.facebook.com/l.php
https://login.ezproxy.lib.utexas.edu/connect
https://outlook.caltech.edu/owa/redir.aspx
https://peerj.freshdesk.com/helpdesk/tickets/14030
https://plus.google.com/
https://plus.url.google.com/url
https://scholar-google-com-au.ezproxy2.library.usyd.edu.au/
https://scholar-google-com.ezproxy.library.wisc.edu/
https://scholar-google-com.proxy.lib.fsu.edu/scholar_lookup
https://scholar-google-com.proxy2.library.illinois.edu
https://squirrel.science.ru.nl/src/read_body.php
https://weboutlook.du.edu/owa/redir.aspx
https://www.facebook.com
https://twitter.com/share
https://peerj.com/blog/
http://twitter.com/thePeerJ/
http://facebook.com/thePeerJ/
https://plus.google.com/+Peerj
http://www.linkedin.com/company/peerj
http://www.pinterest.com/thepeerj/boards/
###Markdown
Clearly, we need to expand the excluded URL list. And we need to match domains, not URLs.
###Code
excluded = [ 'nature.com',
'doi.org',
'ncbi.nlm.nih.gov',
'creativecommons.org',
'copyright.com',
'isme-microbes.org',
'doubleclick.net',
'force.com',
'isiglobalnet2.com',
'readcube.com',
'cas.org',
'publicationethics.org',
'natureasia.com',
'uq.edu.au',
'edx.org',
'facebook.com',
'instagram.com',
'youtube.com',
'flickr.com',
'twitter.com',
'go8.edu.au',
'google.com',
'vimeo.com',
'peerj.com',
'mendeley.com',
'cloudfront.net',
'webofknowledge.com',
'sciencedirect.com',
'aol.com',
'pinterest.com',
'scopus.com',
'live.com',
'exlibrisgroup.com',
'usyd.edu.au',
'academicanalytics.com',
'microbiomedigest.com',
'ask.com',
'sogou.com',
'ou.com',
'du.edu',
'ru.nl',
'freshdesk.com',
'caltech.edu',
'traackr.com',
'adobe.com',
'linkedin.com',
'feedly.com',
'google.co.uk',
'glgoo.org',
'library.wisc.edu',
'lib.fsu.edu',
'library.illinois.edu',
'exchange.ou.edu',
'lib.noaa.gov',
'innocentive.com',
'sfx.kcl.ac.uk',
'sfx.unimi.it',
'lib.utexas.edu',
'orcid.org',
]
def novel_url( url ) :
    # crude substring match against the excluded list; a domain-based variant is sketched below
    for excluded_url in excluded :
        if excluded_url in url :
            return False
    return True
###Output
_____no_output_____
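###Markdown
The note above says we should match domains rather than raw URL substrings. A minimal sketch of what that could look like (illustrative only, not the filter actually used in this notebook; it reuses the `excluded` list defined above as a set of domains):
###Code
try:
    from urlparse import urlparse        # Python 2
except ImportError:
    from urllib.parse import urlparse    # Python 3

excluded_domains = set( excluded )

def novel_url_by_domain( url ) :
    # compare the host (and its parent domains) against the excluded set,
    # so e.g. 'espace.library.uq.edu.au' is caught by 'uq.edu.au'
    host = urlparse( url ).netloc.lower().split( ':' )[0]
    return not any( host == d or host.endswith( '.' + d ) for d in excluded_domains )
###Output
_____no_output_____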
###Markdown
This excluded list is getting sloppy as the author slowly lapses into a vegetative state, but we'll push on anyway.
###Code
for item in items:
paper_url = item['data']['url']
if paper_url.startswith( 'http' ) :
try :
link_urls = get_urls_from( paper_url )
print item['data']['key']
for url in list(set(filter( novel_url, link_urls ))) :
print ' ', url
except IOError :
print item['data']['key'], 'FAILED'
###Output
QC9BAHIK
http://img.jgi.doe.gov/w/doc/SingleCellDataDecontamination.pdf
http://prodege.jgi-psf.org
http://prodege.jgi-psf.org//downloads/src
E7S5UR96 FAILED
T9GDRBT5
http://had.co.nz/ggplot2/book
http://CRAN.R-project.org/package=vegan
http://www.tobinhammer.com/publications-and-twitter-feed.html
QD3JS59Z
https://bitbucket.org/luo-chengwei/constrains
http://hmpdacc.org/resources/tools_protocols.php
###Markdown
Some journals aggressively ban and throttle IPs, so this process gets slow and awful, but it works. Let's check these for dead links...
###Code
for item in items:
paper_url = item['data']['url']
if paper_url.startswith( 'http' ) :
try :
link_urls = get_urls_from( paper_url )
print item['data']['key']
for url in list(set(filter( novel_url, link_urls ))) :
request = requests.get( url )
if request.status_code == 200:
print ' Good : ', url
else:
print ' Fail : ', url
except IOError :
print item['data']['key'], 'FAILED'
###Output
QC9BAHIK
Fail : http://img.jgi.doe.gov/w/doc/SingleCellDataDecontamination.pdf
Good : http://prodege.jgi-psf.org
Good : http://prodege.jgi-psf.org//downloads/src
E7S5UR96 FAILED
T9GDRBT5
Fail : http://had.co.nz/ggplot2/book
Good : http://CRAN.R-project.org/package=vegan
Good : http://www.tobinhammer.com/publications-and-twitter-feed.html
QD3JS59Z
Good : https://bitbucket.org/luo-chengwei/constrains
Good : http://hmpdacc.org/resources/tools_protocols.php
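###Markdown
Since some journals throttle or ban aggressive clients, an illustrative (not used above) way to be gentler is to add a timeout and a short pause between requests:
###Code
import time
import requests

def check_url( url, delay = 1.0, timeout = 10 ) :
    # pause between requests and bound how long we wait, so throttled hosts don't stall the loop
    time.sleep( delay )
    try :
        return requests.get( url, timeout = timeout ).status_code == 200
    except requests.RequestException :
        return False
###Output
_____no_output_____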
###Markdown
***

Latest update: 2020-07-26

Density Estimation using Markov Chains

Andrea De Simone, Alessandro Morandini

Please cite: arXiv:20XX.XXXX

***
###Code
#import the estimator class and some other modules
from MCDE import MCDensityEstimator
import numpy as np
import scipy
import math
import random
%matplotlib inline
###Output
_____no_output_____
###Markdown
1D example
###Code
# Generate a sample of 300 points from a bimodal (sum of two normal distributions)
N_points=300
rv1 = scipy.stats.norm(1,1)
rv2 = scipy.stats.norm(8,2)
Xa=rv1.rvs(size=int(N_points/2), random_state=4)
Xb=rv2.rvs(size=int(N_points/2), random_state=2)
X=np.hstack([Xa,Xb])
# optimization step
loss=[]
bw_range=np.linspace(0.2,1,9)
for bw in bw_range:
DE = MCDensityEstimator(bw=bw, interpolation_method='linear', weight_func='gaussian')
DE.fit(X)
loss.append(-np.sum(np.log(DE.pdf)))
loss=np.array(loss)
opt_bw=bw_range[np.argmin(loss)]
print('The optimal bandwidth is '+str(opt_bw))
# Here we estimate the underlying PDF, bw has been found at the previous point
DE = MCDensityEstimator(bw=0.5, interpolation_method='linear')
DE.fit(X)
import matplotlib.pyplot as plt
x = np.linspace(X.min(),X.max(),500)
est = DE.evaluate_pdf(x)
true=0.5*rv1.pdf(x)+0.5*rv2.pdf(x)
fig, axs = plt.subplots(figsize=(9, 6))
plt.plot(x, true, label='True pdf')
plt.plot(x, est, label='MCDE estimate')
plt.xlim([X.min(),X.max()])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2D example
###Code
# Generate a sample of 1000 points from the sum of two 2D gaussians with non-diagonal covariance
N_points=1000
rv1 = scipy.stats.multivariate_normal(mean=[3,8], cov=[[1,-1.5],[-1.5,4]])
rv2 = scipy.stats.multivariate_normal(mean=[8,3], cov=[[4,1.5],[1.5,1]])
Xa=rv1.rvs(size=int(N_points/2), random_state=1)
Xb=rv2.rvs(size=int(N_points/2), random_state=2)
X=np.vstack([Xa,Xb])
# Here we estimate the underlying PDF, bw has been fixed by us for this problem
DE = MCDensityEstimator(bw=0.3, interpolation_method='linear')
DE.fit(X)
# the estimate has been obtained, if we want to look at marginal distributions
# we need to perform some integrations along x and y
import mcint
# define the sampler and the volume necessary to integrate
def sampler(X, a, axis):
while True:
r = random.uniform(X.min(),X.max())
if axis=='x':
gen_list=[r,a]
elif axis=='y':
gen_list=[a,r]
yield (gen_list)
def volume( X ):
vol = ( X.max() - X.min() )
return( vol )
# this is the true marginal along both x and y
def marginal(x):
return 0.5*6.7*1e-5*np.exp(2*x-0.125*x**2)+0.5*4.4e-3*np.exp(3*x-0.5*x**2)
# in the following we make our marginal estimate, a priori different along x and y
x=np.linspace(0,14,56)
margx=[]
margy=[]
for point in x:
integral, _ = mcint.integrate(DE.evaluate_pdf,
sampler(X,point,'y'),
measure=volume(X),
n=5000)
margx.append(integral)
integral, _ = mcint.integrate(DE.evaluate_pdf,
sampler(X,point,'x'),
measure=volume(X),
n=5000)
margy.append(integral)
# here we plot the true marginal distributions
# to be compared with our estimates
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig= plt.figure(figsize=(12,12))
ax= fig.add_subplot(111, projection= '3d')
ax.plot(X[:,0],X[:,1],'b+',zdir='z',zs=0, label='Sample points')
ax.plot(x, marginal(x), 'g', zdir='x', zs=0, label='Correct marginal')
ax.plot(x, margx, 'r', zdir='x', zs=0, label='Estimated marginal')
ax.plot(x, marginal(x), 'g', zdir='y', zs=14)
ax.plot(x, margy, 'r', zdir='y', zs=14)
ax.set_xlabel('x', fontsize=20, labelpad=10)
ax.set_ylabel('y', fontsize=20, labelpad=10)
ax.set_zlabel('marginal PDF', fontsize=20, labelpad=16)
ax.set_xlim([x.min(), x.max()])
ax.set_ylim([x.min(), x.max()])
plt.legend(fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Testing CPI with arrays of dates 10-22-18
###Code
import cpi
import pandas as pd
import numpy as np
from datetime import date
###Output
_____no_output_____
###Markdown
Using `pandas.date_range()` we can create two series of weekly dates and using `numpy.random.randint()` we can create a set of corresponding _income_ values.

We use `freq = 'W'` so that there are many distinct weekly dates but only about a quarter as many distinct months, as an example of data that could be found _in the wild_.
###Code
first_date = pd.date_range(end = '2018-01-01', periods = 3000, freq = 'W')
second_date = pd.date_range(start = '1930-01-01', periods = 3000, freq = 'W')
incomes = np.random.randint(low = 1500, high = 200_000, size = 3000)
###Output
_____no_output_____
###Markdown
From there we can construct our working dataframe
###Code
df = pd.DataFrame(columns=['date_from', 'date_to', 'incomes'])
df['date_from'] = second_date
df['date_to'] = first_date
df['incomes'] = incomes
df.head()
###Output
_____no_output_____
###Markdown
***

`CPI` works in a simple fashion:

1. Look for `year_or_month` for when the values are from and retrieve its ___source_index___.
2. Look for `to` for the ___target_index___ to inflate the values _to_.
3. `return (value * target_index) / float(source_index)`

The simplicity of the conversion makes `CPI` useful when `value` is a pandas series or numpy array. The goal is to be able to provide a pandas series or numpy array with dates as well.

Here's an example of a work-around:
###Code
# source_index
cpi_values_source = {}
dates_from = df['date_from'].astype(str).str[:7] # 1234-56
for item in dates_from.unique():
# retrieve all values and store them in a dict()
y_m = str(item).split("-")
y_m = [int(y_m[0]), int(y_m[1])] # year and month
target_date = date(y_m[0], y_m[1], 1)
cpi_values_source[item] = cpi.get(target_date)
# Map those values to another series
source_index = dates_from.map(cpi_values_source)
# targe_index
cpi_values_target = {}
dates_to = df['date_to'].astype(str).str[:7]
for item in dates_to.unique():
# retrieve all values and store them in a dict()
y_m = str(item).split("-")
y_m = [int(y_m[0]), int(y_m[1])] # year and month
target_date = date(y_m[0], y_m[1], 1)
cpi_values_target[item] = cpi.get(target_date)
# Map those values to another series
target_index = dates_to.map(cpi_values_target)
df['inflated'] = df['incomes'] * target_index / source_index
df.head()
###Output
_____no_output_____
###Markdown
***

The value here is that even if you have 6000 different weekly observations you only have around ($6000 / 4 =$) 1500 different months, and so you should only call `cpi.get()` 1500 times, not 6000. A much more common case would be to have a set of weekly observations and to _inflate_ them all to the most up-to-date index.
###Code
cpi.update()
cpi.LATEST_MONTH
date_from = pd.date_range(end = '2018-01-01', periods = 3000, freq = 'W')
incomes = np.random.randint(low = 1500, high = 200_000, size = 3000)
df = pd.DataFrame(columns=['date_from', 'date_to', 'incomes'])
df['date_from'] = date_from
df['date_to'] = "2018-09-01"
df['incomes'] = incomes
df.head()
###Output
_____no_output_____
###Markdown
Currently, the example from `CPI` documentation for working with pandas uses the `.apply()` method.
###Code
df['YEAR'] = df['date_from'].dt.year # prepping for CPI README example
%%timeit
df['ADJUSTED'] = df.apply(lambda x: cpi.inflate(x['incomes'], x['YEAR']), axis=1)
df[['YEAR', 'incomes', 'ADJUSTED']].head()
###Output
_____no_output_____
###Markdown
Using this _new_ method, not only can we do this for each month (not just year) but it is also a bit faster.
###Code
%%timeit
# source_index
cpi_values_source = {}
dates_from = df['date_from'].astype(str).str[:7] # 1234-56
for item in dates_from.unique():
# retrieve all values and store them in a dict()
y_m = str(item).split("-")
y_m = [int(y_m[0]), int(y_m[1])] # year and month
target_date = date(y_m[0], y_m[1], 1)
cpi_values_source[item] = cpi.get(target_date)
# Map those values to another series
source_index = dates_from.map(cpi_values_source)
target_index = cpi.get(date(2018,9,1))
df['ADJUSTED_2'] = df['incomes'] * target_index / source_index
df[['date_from', 'incomes', 'ADJUSTED_2']].head()
###Output
_____no_output_____
###Markdown
ChestX-Ray 14 Dataset
###Code
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
from src.cxr14 import CXR14
(ds_train, ds_val, ds_test), ds_info = tfds.load(
'cx_r14',
split=['train', 'val', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True,
)
print(ds_info)
print(ds_info.metadata)
###Output
tfds.core.DatasetInfo(
name='cx_r14',
full_name='cx_r14/1.1.0',
description="""
"ChestX-ray dataset comprises 112,120 frontal-view X-ray images of 30,805 unique patients with
the text-mined fourteen disease image labels (where each image can have multi-labels), mined
from the associated radiological reports using natural language processing. Fourteen common
thoracic pathologies include Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema,
Emphysema, Fibrosis, Effusion, Pneumonia, Pleural_thickening, Cardiomegaly, Nodule, Mass and
Hernia, which is an extension of the 8 common disease patterns listed in our CVPR2017 paper.
Note that original radiology reports (associated with these chest x-ray studies) are not
meant to be publicly shared for many reasons. The text-mined disease labels are expected to
have accuracy >90%."
""",
homepage='https://nihcc.app.box.com/v/ChestXray-NIHCC',
data_path='/home/tmarkmann/tensorflow_datasets/cx_r14/1.1.0',
download_size=41.98 GiB,
dataset_size=41.97 GiB,
features=FeaturesDict({
'image': Image(shape=(None, None, 3), dtype=tf.uint8),
'label': Sequence(ClassLabel(shape=(), dtype=tf.int64, num_classes=2)),
'name': Text(shape=(), dtype=tf.string),
}),
supervised_keys=('image', 'label'),
disable_shuffling=False,
splits={
'test': <SplitInfo num_examples=1518, num_shards=8>,
'train': <SplitInfo num_examples=104266, num_shards=512>,
'val': <SplitInfo num_examples=6336, num_shards=32>,
},
citation="""@article{DBLP:journals/corr/WangPLLBS17,
author = {Xiaosong Wang and
Yifan Peng and
Le Lu and
Zhiyong Lu and
Mohammadhadi Bagheri and
Ronald M. Summers},
title = {ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on
Weakly-Supervised Classification and Localization of Common Thorax
Diseases},
journal = {CoRR},
volume = {abs/1705.02315},
year = {2017},
url = {http://arxiv.org/abs/1705.02315},
eprinttype = {arXiv},
eprint = {1705.02315},
timestamp = {Thu, 03 Oct 2019 13:13:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/WangPLLBS17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}""",
)
{'class_weights': [{'0': 0.1032743176107264, '1': 0.8967256823892736}, {'0': 0.024590950070013235, '1': 0.9754090499299868}, {'0': 0.11874436537318013, '1': 0.8812556346268199}, {'0': 0.17674026048759903, '1': 0.823259739512401}, {'0': 0.05132066061803464, '1': 0.9486793393819654}, {'0': 0.05654767613603667, '1': 0.9434523238639633}, {'0': 0.012736654326434312, '1': 0.9872633456735657}, {'0': 0.04782958970325897, '1': 0.952170410296741}, {'0': 0.04120230947768208, '1': 0.9587976905223179}, {'0': 0.020505246197226323, '1': 0.9794947538027736}, {'0': 0.022826232904302458, '1': 0.9771737670956976}, {'0': 0.015076822741833388, '1': 0.9849231772581666}, {'0': 0.030201599754474135, '1': 0.9697984002455259}, {'0': 0.002004488519747569, '1': 0.9979955114802525}]}
###Markdown
Simple Build Pipeline
###Code
def preproc_img(image, label):
image = tf.image.resize(image, [224, 224])
return tf.cast(image, tf.float32) / 255., label
ds_train = ds_train.map(
preproc_img, num_parallel_calls=tf.data.AUTOTUNE)
#ds_train = ds_train.shuffle(buffer_size=1000)
ds_train = ds_train.batch(8)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(
preproc_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(8)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)
###Output
_____no_output_____
###Markdown
Benchmark
###Code
tfds.benchmark(ds_train, batch_size=8)
###Output
_____no_output_____
###Markdown
Visualization
###Code
import matplotlib.pyplot as plt
import numpy as np
#tfds.show_examples(ds_train, ds_info)
def show(image, label):
plt.figure()
plt.imshow(image)
plt.title(np.array2string(label.numpy(), separator=','))
plt.axis('off')
for image, label in ds_train.take(1).unbatch():
show(image, label)
###Output
_____no_output_____
###Markdown
Train
###Code
model = tf.keras.models.Sequential([
layers.Conv2D(16, 3, padding='same', activation='relu', input_shape=(224, 224, 3)),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(14, activation='sigmoid')
])
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
    # 14 independent sigmoid outputs form a multi-label problem, so use binary cross-entropy
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
metrics=[tf.keras.metrics.AUC(curve='ROC',multi_label=True, num_labels=14, from_logits=False)],
)
model.summary()
model.fit(
ds_train,
epochs=6,
validation_data=ds_test,
)
###Output
_____no_output_____
###Markdown
XGBoost Regression with TensorFlow Pooling and Loss

Intro

Consider a setting where features are available on the Individual level and predictions are required on the Individual level as well, but the target is available only for Groups of Individuals.

![picture](img/arch.png)

Predictions of XGBoost on the Individual level will be pooled to the Group level using a custom TensorFlow function. The same function uses one of the TensorFlow losses to calculate the final scalar loss by comparing the Group-level target with the predictions pooled to the Group level.

The goal is to provide a decorator which turns the mentioned TensorFlow pooling and loss function into an XGBoost custom objective function, such that the whole aggregation and the calculation of the 1st and 2nd order derivatives are done seamlessly during XGBoost training (a generic sketch of this idea follows the imports below).
###Code
import numpy as np
import xgboost as xgb
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
from tf2xgb import get_ragged_nested_index_lists, gen_random_dataset, xgb_tf_loss
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
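###Markdown
Before using the package's own `xgb_tf_loss` decorator below, here is a minimal sketch of the underlying idea (an illustration only, not the `tf2xgb` implementation): a TensorFlow loss can be wrapped into an XGBoost custom objective by letting nested `tf.GradientTape`s supply the per-row gradient and a diagonal-Hessian approximation. The actual decorator additionally maps the flat individual-level predictions into the per-group cube described above before applying the pooling and the loss.
###Code
# Illustrative sketch only (assumes eager TensorFlow 2, imported above); names are not part of the tf2xgb API.
def tf_loss_as_xgb_objective(tf_loss_fn, target):
    y = tf.constant(target, dtype=tf.float64)

    def objective(preds, dtrain):
        p = tf.Variable(preds, dtype=tf.float64)
        with tf.GradientTape() as outer_tape:
            with tf.GradientTape() as inner_tape:
                loss = tf.reduce_sum(tf_loss_fn(y, p))   # scalar loss over all rows
            grad = inner_tape.gradient(loss, p)           # first derivatives, one per row
        # row sums of the Hessian; for losses that decompose per row this is the diagonal
        hess = outer_tape.gradient(grad, p)
        return grad.numpy(), hess.numpy()

    return objective
###Output
_____no_output_____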
###Markdown
Dummy Input Dataset

Let's generate random "observed" data, including targets on the Individual level. Then, add aggregated targets on the Subgroup and Group levels. In the end, we will be able to compare estimates based on the Individual-level targets, which are not available in practice in the example above, with the estimates based on the Subgroup- and Group-level targets.

Note that the aggregation from the Individual to the Subgroup level is MAX, and the aggregation from the Subgroup to the Group level is SUM in this example. For instance, if a group has two subgroups whose individual-level targets are {1, 3} and {2}, the subgroup targets are 3 and 2, and the group target is 3 + 2 = 5.
###Code
N = 100000
N_TEST = 10000
N_SUBGRP = N//2
N_GRP = N_SUBGRP//2
BETA_TRUE = [2,1,0,0,0]
SIGMA = 1
# main data frame with features X, subgroup IDs subgrp_id and group ID grp_id;
# target y is NOT observable on the individual level in real data,
# we have it here to be able to simulate target on group level
# and to be able to compared result of the estimate on the group-level
# target with the estimate on the individual level.
df_train = gen_random_dataset(N, N_SUBGRP, N_GRP, BETA_TRUE, SIGMA)
df_test = gen_random_dataset(N_TEST, 0, 0, BETA_TRUE, SIGMA)
df_train.head()
X_train = np.asarray(df_train['X'].to_list())
y_train = np.asarray(df_train['y'].to_list())
X_test = np.asarray(df_test['X'].to_list())
y_test = np.asarray(df_test['y'].to_list())
###Output
_____no_output_____
###Markdown
Calculate simulated target `y` on the level of `subgrp_id` (by max pooling of individual-level `y`'s) and `grp_id` (by sum of `subgrp_id`-level `y`'s).
###Code
df_train_subgrp_y = (df_train
.groupby('subgrp_id')
.agg({'y':np.max, 'grp_id':max})
.reset_index()
)
df_train_grp_y = (df_train_subgrp_y
.groupby('grp_id')
.agg({'y':np.sum})
.reset_index()
)
df_train_subgrp_inds = get_ragged_nested_index_lists(df_train, ['subgrp_id'])
df_train_grp_inds = get_ragged_nested_index_lists(df_train, ['grp_id', 'subgrp_id'])
###Output
_____no_output_____
###Markdown
Custom TF Pooling and Loss Functions
###Code
@xgb_tf_loss(df_train_subgrp_inds.sort_values(by=['subgrp_id'])['_row_'].to_list(),
df_train_subgrp_y.sort_values(by=['subgrp_id'])['y'].to_numpy())
def xgb_subgrp_obj_fn_from_tf(target, preds_cube):
"""Custom TF Pooling and Loss function.
This example function performs max pooling from the individual
level to subgroups.
The function takes appropriate care of missing values in preds_cube.
Inputs:
= target: 1D tensor with target on the level of groups
= preds_cube: ND tensor with predictions on the individual level;
the first dimension is that of groups, the other dimensions reflect
sub-groups on different levels and individual observations
(target.shape[0] == preds_cube.shape[0];
preds_cube.shape[-1] == max # indiv observations per the most detailed
sub-group).
Missing values are denoted by np.nan and have to be taken care of in
this function body. They occur simply because preds_cube
has typically much more elements that the original flat predictions
vector from XGBoost.
Output: scalar tensor reflecting MEAN of losses over all dimensions.
This is the output of e.g. tf.keras.losses.mean_squared_error().
The mean is translated to SUM later in tf_d_loss() because of the
compatibility with XGB custom objective function.
"""
x = preds_cube
# replace NaNs with -Inf: neutral value for reduce_max()
x = tf.where(tf.math.is_nan(x), tf.constant(-np.inf, dtype=x.dtype), x)
x = tf.math.reduce_max(x, axis=-1)
l = tf.keras.losses.mean_squared_error(target, x)
return l
@xgb_tf_loss(df_train_grp_inds.sort_values(by=['grp_id'])['_row_'].to_list(),
df_train_grp_y.sort_values(by=['grp_id'])['y'].to_numpy())
def xgb_grp_obj_fn_from_tf(target, preds_cube):
"""Custom TF Pooling and Loss function.
This example function performs first max pooling from the individual
level to subgroups, and second sum of subgroups to groups.
The function takes appropriate care of missing values in preds_cube.
Inputs:
= target: 1D tensor with target on the level of groups
= preds_cube: ND tensor with predictions on the individual level;
the first dimension is that of groups, the other dimensions reflect
sub-groups on different levels and individual observations
(target.shape[0] == preds_cube.shape[0];
preds_cube.shape[-1] == max # indiv observations per the most detailed
sub-group)
Missing values are denoted by np.nan and have to be taken care of in
this function body. They occur simply because preds_cube
has typically much more elements that the original flat predictions
vector from XGBoost.
Output: scalar tensor reflecting MEAN of losses over all dimensions.
This is the output of e.g. tf.keras.losses.mean_squared_error().
The mean is translated to SUM later in tf_d_loss() because of the
compatibility with XGB custom objective function.
"""
x = preds_cube
# replace NaNs with -Inf: neutral value for reduce_max()
x = tf.where(tf.math.is_nan(x), tf.constant(-np.inf, dtype=x.dtype), x)
x = tf.math.reduce_max(x, axis=-1)
# replace (-)Inf's (=missing values from reduce_max()) with 0's:
# neutral value for reduce_sum()
x = tf.where(tf.math.is_inf(x), tf.constant(0, dtype=x.dtype), x)
x = tf.math.reduce_sum(x, axis=-1)
l = tf.keras.losses.mean_squared_error(target, x)
return l
###Output
_____no_output_____
###Markdown
Estimation
###Code
dtest = xgb.DMatrix(X_test)
%%time
# labels on Group level are inputs of grouped_objective(),
# they are not part of dtrain DMatrix
dtrain_subgrp = xgb.DMatrix(X_train)
regr_subgrp = xgb.train({'tree_method': 'hist', 'seed': 1994}, # any other tree method is fine.
dtrain=dtrain_subgrp,
num_boost_round=10,
obj=xgb_subgrp_obj_fn_from_tf)
# predictions are on Individual level despite the target on Group level
y_subgrp = regr_subgrp.predict(dtest)
%%time
# labels on Group level are inputs of grouped_objective(),
# they are not part of dtrain DMatrix
dtrain_grp = xgb.DMatrix(X_train)
regr_grp = xgb.train({'tree_method': 'hist', 'seed': 1994}, # any other tree method is fine.
dtrain=dtrain_grp,
num_boost_round=10,
obj=xgb_grp_obj_fn_from_tf)
# predictions are on Individual level despite the target on Group level
y_grp = regr_grp.predict(dtest)
dtrain_indiv = xgb.DMatrix(X_train, label=y_train)
regr_indiv = xgb.train({'tree_method': 'hist', 'seed': 1994}, # any other tree method is fine.
dtrain=dtrain_indiv,
num_boost_round=10
)
y_indiv = regr_indiv.predict(dtest)
###Output
_____no_output_____
###Markdown
Results

First, plot the true values vs the predictions of the models on the Individual level to see the prediction accuracy:
###Code
print(f"MSE of individual predictions based on grp_id-pooled targets : "
f"{mean_squared_error(y_test, y_grp)}")
print(f"MSE of individual predictions based on subgrp_id-pooled targets: "
f"{mean_squared_error(y_test, y_subgrp)}")
print(f"MSE of individual predictions based on individual targets : "
f"{mean_squared_error(y_test, y_indiv)}")
plt.figure()
plt.scatter(y_test, y_grp, color="red", label="grp_id pooled targets", linewidth=2)
plt.scatter(y_test, y_subgrp, color="blue", label="subgrp_id pooled targets", linewidth=2)
plt.scatter(y_test, y_indiv, color="green", label="individual targets", linewidth=2)
plt.xlabel("true")
plt.ylabel("pred")
plt.title("XGBoost Regression: true vs predicted values")
plt.legend()
plt.show()
###Output
MSE of individual predictions based on grp_id-pooled targets : 1.104760634211889
MSE of individual predictions based on subgrp_id-pooled targets: 1.0666397738650766
MSE of individual predictions based on individual targets : 1.033071328607703
###Markdown
The predictions based on targets at different levels (individual, subgroup, group) are similarly precise when compared to the true individual-level target values. Note that an MSE below 1 is impossible to reach because of the unit standard error in the simulated data.

In the ideal case, the predictions based on the Subgroup- and Group-level targets would be equal to the predictions based on the Individual-level target. Let's check how similar these predictions are:
###Code
print(mean_squared_error(y_indiv, y_subgrp))
print(mean_squared_error(y_indiv, y_grp))
plt.figure()
plt.scatter(y_indiv, y_grp, color="red", label="grp_id pooled targets", linewidth=2)
plt.scatter(y_indiv, y_subgrp, color="green", label="subgrp_id pooled targets", linewidth=2)
plt.xlabel("y_indiv")
plt.ylabel("y_subgrp, y_grp")
plt.title("XGBoost Regression: predictions on individual vs gruped targets")
plt.legend()
plt.show()
###Output
0.038243078
0.06462737
###Markdown
This notebook contains an example of how to use the `taxbrain` Python package
###Code
from taxbrain import TaxBrain
reform_url = "https://raw.githubusercontent.com/PSLmodels/Tax-Calculator/master/taxcalc/reforms/Larson2019.json"
###Output
_____no_output_____
###Markdown
Static Reform

After importing the `TaxBrain` class from the `taxbrain` package, we initiate an instance of the class by specifying the start and end year of the analysis, which microdata to use, and a policy reform. Additional arguments can be used to specify economic assumptions and individual behavioral elasticities.

Once the class has been initiated, the `run()` method will handle executing each model.
###Code
tb_static = TaxBrain(2019, 2028, use_cps=True, reform=reform_url)
tb_static.run()
###Output
_____no_output_____
###Markdown
Once the calculators have been run, you can produce a number of tables, including a weighted total of a given variable each year under both current law and the user reform.
###Code
print("Combined Tax Liability Over the Budget Window")
tb_static.weighted_totals("combined")
###Output
Combined Tax Liability Over the Budget Window
###Markdown
If you are interested in a detailed look at the reform's effects, you can produce a differences table for a given year.
###Code
print("Differences Table")
tb_static.differences_table(2019, "weighted_deciles", "combined")
###Output
Differences Table
###Markdown
You can run a partial-equilibrium dynamic simulation by initiating the TaxBrain instance exactly as you would for the static reform, but with your behavioral assumptions specified.
###Code
tb_dynamic = TaxBrain(2019, 2028, use_cps=True, reform=reform_url,
behavior={"sub": 0.25})
tb_dynamic.run()
###Output
_____no_output_____
###Markdown
Once that finishes running, we can produce the same weighted total table as we did with the static run.
###Code
print("Partial Equilibrium - Combined Tax Liability")
tb_dynamic.weighted_totals("combined")
###Output
Partial Equilibrium - Combined Tax Liability
###Markdown
Or we can produce a distribution table to see details on the effects of the reform.
###Code
print("Distribution Table")
tb_dynamic.distribution_table(2019, "weighted_deciles", "expanded_income", "reform")
###Output
Distribution Table
###Markdown
**VAE for Breast Cancer Dataset**
###Code
from sklearn.datasets import load_breast_cancer
# Import dataset
breast_cancer_dataset = load_breast_cancer()
data = breast_cancer_dataset['data']
labels = breast_cancer_dataset['target']
target_names = breast_cancer_dataset['target_names']
feature_names = breast_cancer_dataset['feature_names']
# Split test/train
data_train = data[:500,]
labels_train = labels[:500,]
data_test = data[500:,]
labels_test = labels[500:,]
# Print out key stats
print(f'Number of data samples: {data.shape[0]}')
print(f'Number of features: {len(feature_names)}')
# Preliminaries
import tensorflow as tf
from tensorflow.layers import dense
import numpy as np
seed = 11
def accuracy(guesses, labels):
return np.mean([g == l for g, l in zip(guesses, labels)])
###Output
_____no_output_____
###Markdown
**Define Supervised Graph**
###Code
g_super = tf.Graph()
with g_super.as_default():
# Set tf seed
tf.set_random_seed(seed)
# Inputs
x_super = tf.placeholder(tf.float32, shape=[None, 30], name='x')
y_super = tf.placeholder(tf.int32, shape=[None,], name='y')
# Model
logits = dense(dense(inputs=x_super, activation='relu', units=30), activation=None, units=2)
y_hat_super = tf.argmax(logits, 1)
# Loss
cost = tf.losses.sparse_softmax_cross_entropy(logits=logits, labels=y_super)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
# Summaries
tf.summary.scalar("Total_Loss", cost)
merged_super = tf.summary.merge_all()
# Saver
supervised_saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
**Run Supervised Training on Supervised Graph**
###Code
np.random.seed(seed)
with tf.Session(graph=g_super) as sess:
# Initialize variables and saver and Tensorboard writer
writer = tf.summary.FileWriter('./supervised', g_super)
tf.global_variables_initializer().run()
for step in range(2501):
# Generate training batches
indexes = np.random.randint(low=0, high=data_train.shape[0]-1, size=250)
feed_dict = {x_super: data_train[indexes], y_super: labels_train[indexes]}
# Training iteration
summary, y_hatt, _ = sess.run([merged_super, y_hat_super, optimizer], feed_dict=feed_dict)
if step % 500 == 0:
print(f'Accuracy of supervised model on train set at {step} iterations: {accuracy(y_hatt, labels[indexes])}')
writer.add_summary(summary=summary, global_step=step)
# Save model weights
save_path = supervised_saver.save(sess, "./supervised/model.ckpt")
print("Model saved in path: %s" % save_path)
###Output
Accuracy of supervised model on train set at 0 iterations: 0.592
Accuracy of supervised model on train set at 500 iterations: 0.888
Accuracy of supervised model on train set at 1000 iterations: 0.932
Accuracy of supervised model on train set at 1500 iterations: 0.944
Accuracy of supervised model on train set at 2000 iterations: 0.98
Accuracy of supervised model on train set at 2500 iterations: 0.96
Model saved in path: ./supervised/model.ckpt
###Markdown
**Evaluate Trained Supervised Model on Test Set**
###Code
with tf.Session(graph=g_super) as sess:
# Initialize variables and Tensorboard writer
tf.global_variables_initializer().run()
supervised_saver.restore(sess, "./supervised/model.ckpt")
guesses = sess.run([y_hat_super], feed_dict={x_super: data_test})[0]
print(f'Accuracy of supervised model on test set: {accuracy(guesses, labels_test)}')
###Output
INFO:tensorflow:Restoring parameters from ./supervised/model.ckpt
Accuracy of supervised model on test set: 0.9855072463768116
###Markdown
**Define Semi-Supervised Graph**
###Code
g_semi = tf.Graph()
with g_semi.as_default():
# Set tf seed
tf.set_random_seed(seed)
# Inputs
x_semi = tf.placeholder(tf.float32, shape=[None, 30], name='x')
y_semi = tf.placeholder(tf.int32, shape=[None,], name='y')
eps = tf.placeholder(tf.float32, shape=[None, 10], name='eps')
# Unsupervised Model
with tf.variable_scope("unsupervised"):
mu = dense(inputs=x_semi, activation='relu', units=10)
sigma = dense(inputs=x_semi, activation='relu', units=10)
z = mu + sigma * eps
x_hat = dense(dense(inputs=z, units=10), units=30)
# Supervised Model ON TOP of Unsupervised Latent Variables
with tf.variable_scope("supervised"):
logits = dense(inputs=z, activation=None, units=2)
y_hat_semi = tf.argmax(logits, 1)
# Unsupervised Loss
unsupervised_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "unsupervised")
recon = tf.reduce_sum(tf.squared_difference(x_semi, x_hat))
vae = -0.5 * tf.reduce_sum(1.0 - tf.square(mu) - tf.square(sigma) + 2.0 * tf.log(sigma + 1e-8))
vae_cost = tf.reduce_sum(recon + 0.01 * vae)
vae_optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(vae_cost, var_list=unsupervised_vars)
# Supervised Loss
supervised_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "supervised")
super_cost = tf.losses.sparse_softmax_cross_entropy(logits=logits, labels=y_semi)
super_optimizer = tf.train.AdamOptimizer(learning_rate=8e-3).minimize(super_cost, var_list=supervised_vars)
# Summaries
tf.summary.scalar("Unsuper_Vae_loss", vae)
tf.summary.scalar("Unsuper_Recon_loss", recon)
tf.summary.scalar("Unsuper_Total_loss", vae_cost)
tf.summary.scalar("Super_Total_loss", super_cost)
merged_semi = tf.summary.merge_all()
# Saver
semi_saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
**Run Unsupervised Training on Semi-Supervised Graph**
###Code
np.random.seed(seed)
with tf.Session(graph=g_semi) as sess:
# Initialize variables and Tensorboard writer
writer = tf.summary.FileWriter('./vae', g_semi)
tf.global_variables_initializer().run()
for step in range(10001):
# Generate training batches
epsilon = np.random.normal(size=(250, 10))
indexes = np.random.randint(low=0, high=data_train.shape[0]-1, size=250)
feed_dict = {x_semi: data_train[indexes], y_semi: labels_train[indexes], eps: epsilon}
# Training iteration
summary, y_hatt, _ = sess.run([merged_semi, y_hat_semi, vae_optimizer], feed_dict=feed_dict)
if step % 1000 == 0:
print(f'Accuracy of unsupervised model on train set at {step} iterations: {accuracy(y_hatt, labels[indexes])}')
writer.add_summary(summary=summary, global_step=step)
save_path = semi_saver.save(sess, "./vae/model.ckpt")
print("Model saved in path: %s" % save_path)
###Output
Accuracy of unsupervised model on train set at 0 iterations: 0.572
Accuracy of unsupervised model on train set at 1000 iterations: 0.632
Accuracy of unsupervised model on train set at 2000 iterations: 0.768
Accuracy of unsupervised model on train set at 3000 iterations: 0.336
Accuracy of unsupervised model on train set at 4000 iterations: 0.344
Accuracy of unsupervised model on train set at 5000 iterations: 0.416
Accuracy of unsupervised model on train set at 6000 iterations: 0.368
Accuracy of unsupervised model on train set at 7000 iterations: 0.42
Accuracy of unsupervised model on train set at 8000 iterations: 0.356
Accuracy of unsupervised model on train set at 9000 iterations: 0.472
Accuracy of unsupervised model on train set at 10000 iterations: 0.36
Model saved in path: ./vae/model.ckpt
###Markdown
**Run Supervised Training on Semi-Supervised Graph**
###Code
np.random.seed(seed)
with tf.Session(graph=g_semi) as sess:
# Initialize variables and Tensorboard writer
writer = tf.summary.FileWriter('./semisupervised', g_semi)
tf.global_variables_initializer().run()
semi_saver.restore(sess, "./vae/model.ckpt")
for step in range(2501):
# Generate training batches
epsilon = np.random.normal(size=(250, 10))
indexes = np.random.randint(low=0, high=data_train.shape[0]-1, size=250)
feed_dict = {x_semi: data_train[indexes], y_semi: labels_train[indexes], eps: epsilon}
# Training iteration
summary, y_hatt, _ = sess.run([merged_semi, y_hat_semi, super_optimizer], feed_dict=feed_dict)
if step % 500 == 0:
print(f'Accuracy of supervised model on train set at {step} iterations: {accuracy(y_hatt, labels[indexes])}')
writer.add_summary(summary=summary, global_step=step)
save_path = semi_saver.save(sess, "./semisupervised/model.ckpt")
print("Model saved in path: %s" % save_path)
###Output
INFO:tensorflow:Restoring parameters from ./vae/model.ckpt
Accuracy of supervised model on train set at 0 iterations: 0.416
Accuracy of supervised model on train set at 500 iterations: 0.944
Accuracy of supervised model on train set at 1000 iterations: 0.94
Accuracy of supervised model on train set at 1500 iterations: 0.908
Accuracy of supervised model on train set at 2000 iterations: 0.96
Accuracy of supervised model on train set at 2500 iterations: 0.92
Model saved in path: ./semisupervised/model.ckpt
###Markdown
**Evaluate Trained Semi-Supervised Model on Test Set**
###Code
with tf.Session(graph=g_semi) as sess:
# Initialize variables and Tensorboard writer
tf.global_variables_initializer().run()
semi_saver.restore(sess, "./semisupervised/model2.ckpt")
epsilon = np.random.normal(size=(data_test.shape[0], 10))
guesses = sess.run([y_hat_semi], feed_dict={x_semi: data_test, eps: epsilon})[0]
print(f'Accuracy of semi-supervised model on test set: {accuracy(guesses, labels_test)}')
###Output
INFO:tensorflow:Restoring parameters from ./semisupervised/model2.ckpt
Accuracy of semi-supervised model on test set: 0.9855072463768116
###Markdown
This notebook presents an example of using the moltr package for multi-objective learning to rank. It consists of two sections. In the first section, we compare our custom objective implementation with the original one from LightGBM. Specifically, we check that it produces similar results and has a similar runtime.

In the second section, we use the custom objective to build a LambdaMART model optimising a combination of two NDCG-type metrics.
###Code
import warnings
warnings.simplefilter("ignore")
import numpy as np
import pandas as pd
import lightgbm as lgb
from matplotlib import pyplot as plt
from moltr.lambdaobj import get_gradients
from moltr.calculator import Calculator, MIN_SIGMOID_ARG, MAX_SIGMOID_ARG
###Output
_____no_output_____
###Markdown
Generating Data
###Code
np.random.seed(0)
def generate_data(n_positions, coef, n_requests):
"""
This function is used for simulating the data. We generate n_requests result pages
with n_positions positions each. We simulate interactions of two types. A logistic
regression model is used for generating interactions of each type. The coefficients
are provided via the coef parameter. This parameter must be a matrix with
two rows. The number of features is inferred from the number of its columns.
Features are simulated as standard normal random variables.
    :param n_positions: the number of positions on each result page
:param coef: a matrix defining the two logistic regression models for generating
interactions
:param n_requests: the number of requests/queries/result pages
:returns: a pandas.DataFrame having n_requests * n_positions rows
and the following columns:
request_id,
feature_1, ..., feature_m (where m is coef.shape[1]),
        i_1 and i_2 (interaction indicators - one for each interaction type)
"""
n_features = coef.shape[1]
feature_names = ["feature_%i" % i for i in range(1, n_features + 1)]
data = pd.DataFrame(
np.concatenate(
[
np.repeat(range(n_requests), n_positions)[:, None],
np.random.normal(0, 1, (n_requests * n_positions, n_features))
],
axis=1
),
columns=["request_id"] + feature_names
)
for i in range(2):
z = np.dot(data[feature_names].values, coef[i, :]) - 4.0
data[f"i_{i + 1}"] = np.random.binomial(1, 1 / (1 + np.exp(-z)))
return data
def drop_requests_with_no_interactions(data, interaction_col):
interaction_requests = set(data.loc[data[interaction_col] > 0].request_id)
return data.loc[data.request_id.isin(interaction_requests)]
COEF = np.array(
[
[1.0, -1.0, 1.0],
[-1.0, 1.0, 1.0]
]
)
N_POSITIONS = 32
MAX_NDCG_POS = 10
N_TRAIN = 10000
N_VALIDATION = 1000
train_data = generate_data(N_POSITIONS, COEF, N_TRAIN)
validation_data = generate_data(N_POSITIONS, COEF, N_VALIDATION)
###Output
_____no_output_____
###Markdown
Custom Objective Code
###Code
class DatasetWithCalculator(lgb.Dataset):
def __init__(self, *args, **kwargs):
lgb.Dataset.__init__(self, *args, **kwargs)
self.calculator = Calculator(self.label, self.get_group(), MAX_NDCG_POS)
def lambdamart_objective(preds, dataset):
groups = dataset.get_group()
if len(groups) == 0:
raise Error("Group/query data should not be empty.")
else:
grad = np.zeros(len(preds))
hess = np.zeros(len(preds))
get_gradients(np.ascontiguousarray(dataset.label, dtype=np.double),
np.ascontiguousarray(preds),
len(preds),
np.ascontiguousarray(groups),
np.ascontiguousarray(dataset.calculator.query_boundaries),
len(dataset.calculator.query_boundaries) - 1,
np.ascontiguousarray(dataset.calculator.discounts),
np.ascontiguousarray(dataset.calculator.inverse_max_dcgs),
np.ascontiguousarray(dataset.calculator.sigmoids),
len(dataset.calculator.sigmoids),
MIN_SIGMOID_ARG,
MAX_SIGMOID_ARG,
dataset.calculator.sigmoid_idx_factor,
np.ascontiguousarray(grad),
np.ascontiguousarray(hess))
return grad, hess
###Output
_____no_output_____
###Markdown
Section 1. Training with a Custom Objective
###Code
train_data_1 = drop_requests_with_no_interactions(train_data, "i_1")
train_dataset_1 = DatasetWithCalculator(
train_data_1.drop(["request_id", "i_1", "i_2"], axis=1),
label=train_data_1.i_1,
group=[N_POSITIONS] * train_data_1.request_id.nunique(),
free_raw_data=False
)
lgb_params = {
"num_trees": 10,
"objective": "lambdarank",
"max_position": MAX_NDCG_POS,
"metric": "ndcg",
"eval_at": MAX_NDCG_POS
}
def fit_original(dataset, verbose_eval=True):
lgb.train(
params=lgb_params,
train_set=dataset,
valid_sets=[dataset],
verbose_eval=verbose_eval
)
fit_original(train_dataset_1)
%timeit -r 100 fit_original(train_dataset_1, False)
def fit_custom_objective(dataset, verbose_eval=True):
lgb.train(
params=lgb_params,
train_set=dataset,
valid_sets=[dataset],
verbose_eval=verbose_eval,
fobj=lambdamart_objective
)
fit_custom_objective(train_dataset_1)
%timeit -r 100 fit_custom_objective(train_dataset_1, False)
###Output
328 ms ± 73.7 ms per loop (mean ± std. dev. of 100 runs, 1 loop each)
###Markdown
Section 2. Optimising a Combination of Metrics

We will use two NDCG metrics, one for each interaction type.
###Code
class DatasetWithTwoLabels(lgb.Dataset):
def __init__(self, label_2, alpha, *args, **kwargs):
lgb.Dataset.__init__(self, *args, **kwargs)
assert(len(self.label) == len(label_2))
self.label_1 = self.label
self.label_2 = label_2
self.calculator_1 = Calculator(self.label_1, self.get_group(), MAX_NDCG_POS)
self.calculator_2 = Calculator(self.label_2, self.get_group(), MAX_NDCG_POS)
self.alpha = alpha
def set_alpha(self, alpha):
self.alpha = alpha
def get_grad_hess(labels, preds, groups, calculator):
grad = np.zeros(len(preds))
hess = np.zeros(len(preds))
get_gradients(np.ascontiguousarray(labels, dtype=np.double),
np.ascontiguousarray(preds),
len(preds),
np.ascontiguousarray(groups),
np.ascontiguousarray(calculator.query_boundaries),
len(calculator.query_boundaries) - 1,
np.ascontiguousarray(calculator.discounts),
np.ascontiguousarray(calculator.inverse_max_dcgs),
np.ascontiguousarray(calculator.sigmoids),
len(calculator.sigmoids),
MIN_SIGMOID_ARG,
MAX_SIGMOID_ARG,
calculator.sigmoid_idx_factor,
np.ascontiguousarray(grad),
np.ascontiguousarray(hess))
return grad, hess
def combined_objective(preds, dataset):
groups = dataset.get_group()
if len(groups) == 0:
raise Error("Group/query data should not be empty.")
else:
grad_1, hess_1 = get_grad_hess(
dataset.label_1, preds, groups, dataset.calculator_1
)
grad_2, hess_2 = get_grad_hess(
dataset.label_2, preds, groups, dataset.calculator_2
)
alpha = dataset.alpha
return alpha * grad_1 + (1 - alpha) * grad_2, alpha * hess_1 + (1 - alpha) * hess_2
def fit_combined_objective(dataset, alpha):
dataset.set_alpha(alpha)
return lgb.train(
params=lgb_params,
train_set=dataset,
fobj=combined_objective
)
train_data_12 = drop_requests_with_no_interactions(
drop_requests_with_no_interactions(train_data, "i_1"),
"i_2"
)
validation_data_12 = drop_requests_with_no_interactions(
drop_requests_with_no_interactions(validation_data, "i_1"),
"i_2"
)
train_dataset = DatasetWithTwoLabels(
data=train_data_12.drop(["request_id", "i_1", "i_2"], axis=1),
label=train_data_12.i_1,
label_2=train_data_12.i_2,
alpha=1.0,
group=[N_POSITIONS] * train_data_12.request_id.nunique(),
free_raw_data=False
)
validation_dataset = DatasetWithTwoLabels(
data=validation_data_12.drop(["request_id", "i_1", "i_2"], axis=1),
label=validation_data_12.i_1,
label_2=validation_data_12.i_2,
alpha=1.0,
group=[N_POSITIONS] * validation_data_12.request_id.nunique(),
free_raw_data=False
)
###Output
Computing inverse_max_dcg-s..
Computing sigmoids..
Computing inverse_max_dcg-s..
Computing sigmoids..
Computing inverse_max_dcg-s..
Computing sigmoids..
Computing inverse_max_dcg-s..
Computing sigmoids..
###Markdown
Now we fit the combination of the two NDCG metrics for different values of alpha.
###Code
lgb_params = {
"num_trees": 100,
"objective": "lambdarank",
"max_position": MAX_NDCG_POS,
"metric": "ndcg",
"eval_at": MAX_NDCG_POS
}
alpha_values = np.arange(0.0, 1.1, 0.1)
ndcg_arr_1 = []
ndcg_arr_2 = []
for alpha in alpha_values:
m = fit_combined_objective(train_dataset, alpha)
ndcg_arr_1.append(
validation_dataset.calculator_1.compute_ndcg(m.predict(validation_dataset.data))
)
ndcg_arr_2.append(
validation_dataset.calculator_2.compute_ndcg(m.predict(validation_dataset.data))
)
def plot_point(x, y, text, marker, offset_x=0, offset_y=0):
handles = ax.scatter(x, y, marker=marker, color="k")
if text is not None:
ax.annotate(
text,
(x, y),
(x + offset_x, y + offset_y)
)
return handles
fig, ax = plt.subplots(figsize=(10, 5))
ax.set_xlim([0.0, 0.8])
ax.set_ylim([0.0, 0.8])
handles = []
for alpha, ndcg_1, ndcg_2 in zip(alpha_values, ndcg_arr_1, ndcg_arr_2):
h = plot_point(ndcg_1, ndcg_2, f"$\\alpha={alpha:.2f}$", "s", 0.01, 0.01)
handles.append(h)
ax.set_xlabel("NDCG_1");
ax.set_ylabel("NDCG_2");
###Output
_____no_output_____
###Markdown
Install the package using `pip install inequalipy`.

Import the package:
###Code
import inequalipy as ineq
###Output
_____no_output_____
###Markdown
Randomly create a distribution:
###Code
import numpy as np
a = np.random.normal(5,1,100)
weights = np.ones(len(a), dtype=int)
# weights = np.random.randint(0,100,len(a), dtype=int)
###Output
_____no_output_____
###Markdown
Gini Index
###Code
# our function
ineq.gini(a)
# our function with weights (of ones)
ineq.gini(a, weights)
ineq.gini
# Pysal's gini coefficient
import inequality as pysal
pysal.gini._gini(a)
# Grasia's gini coefficient
from example import grasia
grasia.gini(a)/100
###Output
_____no_output_____
###Markdown
Atkinson
###Code
# our function
ineq.atkinson.index(a, 0.5)
from example import ineqpy
ineqpy.atkinson(a,e=0.5)
###Output
_____no_output_____
###Markdown
Kolm-Pollak
###Code
# our function
ineq.kolmpollak.ede(a, epsilon=0.5)
###Output
_____no_output_____
###Markdown
Spatial stratification process

In this example, we do a grid stratification. At this step, you need to decide the spatial granularity. Since this example uses a grid stratification, we need to decide the length of each side of a grid cell. In the following example, we keep this length as 1 km.
###Code
# Here, we keep a cellSide of length 1 km (the first argument)
spatial = GridStratification(1, 77.58467674255371, 12.958180959662695, 77.60617733001709, 12.977167633046893)
spatial.stratify()
###Output
_____no_output_____
###Markdown
Now, `spatial.input_geojson` returns the GeoJSON containing the strata (along with stratum ID). Below, we print the first stratum that was generated. If desired, you can store this GeoJSON using the in-built Python `json` library.
###Code
spatial.input_geojson['features'][0]
###Output
_____no_output_____
###Markdown
Data loading process

In this step, we upload the vehicle mobility data to a [MongoDB](https://docs.mongodb.com/) database. You need to take care of a few things here:

1. You must ensure that you have a MongoDB server (local or remote) running before you continue with this process.
2. The input CSV file must contain the following columns: vehicle_id, timestamp, latitude, longitude.
3. You will need to decide upon a `temporal_granularity` (in seconds). In this example, we use a temporal granularity of 1 hour (= 3600 seconds).
4. Decide the database name and a collection name (inside that database) that you want to upload your data to.
###Code
dataloader = CSVDataLoader('sample_mobility_data.csv', 3600,
anonymize_data=False,
mongo_uri='mongodb://localhost:27017/',
db_name='modulo',
collection_name='mobility_data')
###Output
_____no_output_____
###Markdown
At this point, if you want, you can check your MongoDB database using a [MongoDB GUI](https://www.mongodb.com/products/compass). You should see your data uploaded in the database.

Now, we need to compute the stratum ID that each vehicle mobility datum falls into. Similarly, we also need to calculate the temporal ID that each datum falls into. Think of the temporal ID as referring to a "time bucket" of length `temporal_granularity` (see the sketch after the next cell). Both these methods return the number of records that were updated with the `stratum_id` and the `temporal_id` respectively.
###Code
dataloader.compute_stratum_id_and_update_db(spatial)
dataloader.compute_temporal_id_and_update_db()
###Output
_____no_output_____
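###Markdown
As a rough illustration of the "time bucket" idea mentioned above (this is not MODULO's internal implementation; it assumes the `timestamp` column holds Unix seconds), each record's temporal ID can be thought of as the integer division of its timestamp by the chosen granularity:
###Code
import pandas as pd

# hypothetical sketch with a 1-hour (3600 s) granularity
timestamps = pd.Series([1589389201, 1589392799, 1589392801])
temporal_ids = (timestamps // 3600).astype(int)
print(temporal_ids.tolist())  # the first two fall into the same hour-long bucket, the third into the next
###Output
_____no_output_____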
###Markdown
You can use the following helper function to fetch the vehicle mobility data stored in the database. This function will return the stored values as a Pandas DataFrame, which you can conveniently use to do any checks, operations, analysis, etc.
###Code
df = dataloader.fetch_data()
df.head()
###Output
_____no_output_____
###Markdown
Vehicle Selection

Now, we can finally use the available algorithms to select the desired number of vehicles. In the following example, we assume that we want to choose 2 vehicles.

The vehicle selection ("training") process requires the vehicle mobility data from the database. We use another helper method in `DataLoader` to fetch this data as a Pandas DataFrame.
###Code
selection_df = dataloader.fetch_data_for_vehicle_selection()
# Using greedy
greedy = GreedyVehicleSelector(2, selection_df, 1589389199)
selected_vehicles = greedy.train()
greedy.test(selected_vehicles)
# Using max-points
maxpoints = MaxPointsVehicleSelector(2, selection_df, 1589389199)
selected_vehicles = maxpoints.train()
maxpoints.test(selected_vehicles)
# Using random
random_algo = RandomVehicleSelector(2, selection_df, 1589389199)
selected_vehicles = random_algo.train()
random_algo.test(selected_vehicles)
###Output
_____no_output_____
###Markdown
Examples
###Code
def normalize(tensor,
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]):
dtype = tensor.dtype
mean = torch.as_tensor(mean, dtype=dtype, device=tensor.device)
std = torch.as_tensor(std, dtype=dtype, device=tensor.device)
tensor.sub_(mean[None, :, None, None]).div_(std[None, :, None, None])
return tensor
# PATH variables
PATH = os.getcwd() + '/'
dataset = PATH + 'samples/'
save_path = PATH + 'results/'
# Dataset loader for sample images
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
])
sample_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(dataset, transform=transform),
batch_size=1,
shuffle=False)
# Check CUDA
cuda = torch.cuda.is_available()
device = torch.device("cuda" if cuda else "cpu")
def get_visualization(predictions, **kwargs):
# Print Top-5 predictions
prob = torch.softmax(predictions, dim=1)
class_indices = predictions.data.topk(5, dim=1)[1][0].tolist()
max_str_len = 0
class_names = []
for cls_idx in class_indices:
class_names.append(CLS2IDX[cls_idx])
if len(CLS2IDX[cls_idx]) > max_str_len:
max_str_len = len(CLS2IDX[cls_idx])
print('Top 5 classes:')
for cls_idx in class_indices:
output_string = '\t{} : {}'.format(cls_idx, CLS2IDX[cls_idx])
output_string += ' ' * (max_str_len - len(CLS2IDX[cls_idx])) + '\t\t'
output_string += 'value = {:.3f}\t prob = {:.1f}%'.format(out[0, cls_idx], 100 * prob[0, cls_idx])
print(output_string)
return model.AGF(**kwargs)
# Load model
model = vgg19(pretrained=True).to(device)
model.eval();
###Output
_____no_output_____
###Markdown
Dog-Cat
###Code
dog_cat_image = imageio.imread('samples/dog-cat.JPEG')
dog_cat_tensor = torch.tensor(dog_cat_image).permute(2, 0, 1).unsqueeze(0).to(device).float() / 255
norm_data = normalize(dog_cat_tensor.clone())
out = model(norm_data)
# Compute Dog (top) Class Attribution Map
dog = get_visualization(out)
cat = get_visualization(out, class_id=[282])
dog = (render.hm_to_rgb(dog[0, 0].data.cpu().numpy(), scaling=3, sigma=1, cmap='seismic') * 255).astype(np.uint8)
cat = (render.hm_to_rgb(cat[0, 0].data.cpu().numpy(), scaling=3, sigma=1, cmap='seismic') * 255).astype(np.uint8)
fig, axs = plt.subplots(1, 3)
axs[0].imshow(dog_cat_image);
axs[0].axis('off');
axs[1].imshow(dog);
axs[1].axis('off');
axs[2].imshow(cat);
axs[2].axis('off');
###Output
Top 5 classes:
243 : bull mastiff value = 11.861 prob = 46.1%
245 : French bulldog value = 10.980 prob = 19.1%
254 : pug, pug-dog value = 10.928 prob = 18.1%
242 : boxer value = 9.964 prob = 6.9%
281 : tabby, tabby cat value = 8.008 prob = 1.0%
Top 5 classes:
243 : bull mastiff value = 11.861 prob = 46.1%
245 : French bulldog value = 10.980 prob = 19.1%
254 : pug, pug-dog value = 10.928 prob = 18.1%
242 : boxer value = 9.964 prob = 6.9%
281 : tabby, tabby cat value = 8.008 prob = 1.0%
###Markdown
Visualize Folder
###Code
number_of_samples = len(sample_loader)
fig, axs = plt.subplots(2, number_of_samples, figsize=(13,3))
for batch_idx, (data, target) in enumerate(sample_loader):
data, target = data.to(device).requires_grad_(), target.to(device)
# Input image
image = data[0, :, :, :].cpu()
_img = np.uint8(image.data.cpu().numpy() * 255).transpose(1, 2, 0)
axs[0, batch_idx].imshow(_img)
axs[0, batch_idx].axis('off')
# Input data
norm_data = normalize(data.clone())
out = model(norm_data)
# Compute Class Attribution Map
cam = get_visualization(out,
lmd=10,
no_fx=False,
no_m=False,
gradcam=False,
no_reg=False,
no_fgx=False,
no_a=False)
# Render CAM
filename = save_path + str(batch_idx + 1)
filename_new = filename
maps = (render.hm_to_rgb(cam[0, 0].data.cpu().numpy(), scaling=3, sigma=1, cmap='seismic') * 255).astype(np.uint8)
maps = cv2.resize(maps, (224, 224))
# Visualization
axs[1, batch_idx].imshow(maps)
axs[1, batch_idx].axis('off')
###Output
Top 5 classes:
266 : miniature poodle value = 13.294 prob = 42.7%
265 : toy poodle value = 13.094 prob = 35.0%
194 : Dandie Dinmont, Dandie Dinmont terrier value = 12.266 prob = 15.3%
267 : standard poodle value = 10.614 prob = 2.9%
219 : cocker spaniel, English cocker spaniel, cocker value = 9.832 prob = 1.3%
Top 5 classes:
237 : miniature pinscher value = 15.276 prob = 63.7%
165 : black-and-tan coonhound value = 13.767 prob = 14.1%
234 : Rottweiler value = 13.466 prob = 10.4%
236 : Doberman, Doberman pinscher value = 12.648 prob = 4.6%
434 : bath towel value = 12.396 prob = 3.6%
Top 5 classes:
185 : Norfolk terrier value = 18.597 prob = 81.4%
186 : Norwich terrier value = 16.289 prob = 8.1%
192 : cairn, cairn terrier value = 16.120 prob = 6.8%
194 : Dandie Dinmont, Dandie Dinmont terrier value = 14.278 prob = 1.1%
193 : Australian terrier value = 13.752 prob = 0.6%
Top 5 classes:
222 : kuvasz value = 11.401 prob = 56.0%
207 : golden retriever value = 10.396 prob = 20.5%
257 : Great Pyrenees value = 9.741 prob = 10.7%
208 : Labrador retriever value = 8.083 prob = 2.0%
539 : doormat, welcome mat value = 7.626 prob = 1.3%
Top 5 classes:
62 : rock python, rock snake, Python sebae value = 19.571 prob = 65.0%
65 : sea snake value = 18.233 prob = 17.1%
58 : water snake value = 17.417 prob = 7.5%
54 : hognose snake, puff adder, sand viper value = 16.661 prob = 3.5%
67 : diamondback, diamondback rattlesnake, Crotalus adamanteus value = 15.890 prob = 1.6%
Top 5 classes:
230 : Shetland sheepdog, Shetland sheep dog, Shetland value = 17.126 prob = 72.9%
231 : collie value = 16.131 prob = 26.9%
169 : borzoi, Russian wolfhound value = 9.504 prob = 0.0%
157 : papillon value = 9.180 prob = 0.0%
193 : Australian terrier value = 8.454 prob = 0.0%
Top 5 classes:
967 : espresso value = 17.052 prob = 39.4%
809 : soup bowl value = 16.588 prob = 24.8%
969 : eggnog value = 16.166 prob = 16.3%
968 : cup value = 15.925 prob = 12.8%
441 : beer glass value = 13.830 prob = 1.6%
Top 5 classes:
340 : zebra value = 14.657 prob = 77.9%
386 : African elephant, Loxodonta africana value = 13.169 prob = 17.6%
101 : tusker value = 10.352 prob = 1.1%
385 : Indian elephant, Elephas maximus value = 10.275 prob = 1.0%
354 : Arabian camel, dromedary, Camelus dromedarius value = 10.032 prob = 0.8%
###Markdown
Optimized Kalman Filter: A TutorialThis notebook presents an explained demonstration of the Optimized Kalman Filter package (okf), using the guiding example of the Simple Lidar problem.Familiarity with the Kalman Filter algorithm is assumed. We use the following notations for the KF model (where $\omega\sim N(0,Q)$ and $\nu\sim N(0,R)$):$$ X_{t+1} = F\cdot X_t + \omega $$$$ Z_{t} = H\cdot X_t + \nu $$Contents:* A minimal working example in one cell* The Simple Lidar problem: an introduction* Preparing the data* Creating a model* Training* Testing & analysis
###Code
# Auto reload
%reload_ext autoreload
%autoreload 2
import pickle as pkl
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import okf
from okf.example import simple_lidar_simulator as SIM
from okf.example import simple_lidar_model as LID
# Set wide notebook
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
display(HTML("<style>.output_result { max-width:90% !important; }</style>"))
###Output
_____no_output_____
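###Markdown
For concreteness, the dynamics and observation models used in the simple lidar example below are the standard constant-velocity matrices (matching the model arguments printed later in this notebook). The sketch below shows what `LID.get_F()` and `LID.get_H()` return.
###Code
import torch

# Constant-velocity dynamics on the state (x, y, vx, vy)
F = torch.tensor([[1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]], dtype=torch.double)

# The sensor observes position only: (x, y, vx, vy) -> (x, y)
H = torch.tensor([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]], dtype=torch.double)
###Output
_____no_output_____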
###Markdown
A minimal working example in one cellAfter the minimal working example, we repeat all the steps below, explain them in detail, and demonstrate more features (e.g. analysis tools).Note the format of the data needed for OKF train and test - 2 lists of n_targets each:* X[i] = a numpy array of type double and shape (n_time_steps(target i), state_dimensions).* Z[i] = a numpy array of type double and shape (n_time_steps(target i), observation_dimensions).
###Code
%%time
# Simulate data for the simple lidar example, and convert it to the required format
X, Z = SIM.simulate_data(fpath='data/simple_lidar_data.pkl')
X, Z = SIM.get_trainable_data(X, Z)
print('Data:')
print(f'Simulated states:\ta {type(X)} of {len(X):d} targets, each is a {type(X[0])} of shape (n_time_steps, {X[0].shape[1]}).')
print(f'Simulated observations:\ta {type(Z)} of {len(Z):d} targets, each is a {type(Z[0])} of shape (n_time_steps, {Z[0].shape[1]}).')
# Split to train/test data
n_train = int(0.7*len(X))
Ztrain, Xtrain = Z[:n_train], X[:n_train]
Ztest, Xtest = Z[n_train:], X[n_train:]
# Define model
lidar_model_args = dict(
dim_x = 4, # the number of entries in a state
dim_z = 2, # the number of entries in an observation
init_z2x = LID.initial_observation_to_state, # a function that receives the first observation and returns the first state-estimate
F = LID.get_F(), # the dynamics model: a pytorch tensor of type double and shape (dim_x, dim_x)
H = LID.get_H(), # the observation model: a pytorch tensor of type double and shape (dim_z, dim_x); or a function (see below)
loss_fun = LID.loss_fun(), # function(predicted_x, true_x) used as loss for training and evaluation
model_files_path = 'models', # directory in which to save the model
)
print('---------------\nModel arguments:\n', lidar_model_args)
baseline_model = okf.OKF(**lidar_model_args, optimize=False, model_name='KF')
model = okf.OKF(**lidar_model_args)
# Train
okf.train(baseline_model, Ztrain, Xtrain)
print('---------------\nBaseline KF model training (noise estimation) done.')
okf.train(model, Ztrain, Xtrain, verbose=1)
# Test
print('---------------\nTest loss:')
baseline_loss = okf.test_model(baseline_model, Ztest, Xtest, loss_fun=LID.loss_fun())
loss = okf.test_model(model, Ztest, Xtest, loss_fun=LID.loss_fun())
print(f'KF (baseline):\t{baseline_loss:.0f}')
print(f'OKF:\t{loss:.0f}')
print('---------------')
###Output
Data:
Simulated states: a <class 'list'> of 1000 targets, each is a <class 'numpy.ndarray'> of shape (n_time_steps, 4).
Simulated observations: a <class 'list'> of 1000 targets, each is a <class 'numpy.ndarray'> of shape (n_time_steps, 2).
---------------
Model arguments:
{'dim_x': 4, 'dim_z': 2, 'init_z2x': <function initial_observation_to_state at 0x000001C615C1C598>, 'F': tensor([[1., 0., 1., 0.],
[0., 1., 0., 1.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]], dtype=torch.float64), 'H': tensor([[1., 0., 0., 0.],
[0., 1., 0., 0.]], dtype=torch.float64), 'loss_fun': <function loss_fun.<locals>.<lambda> at 0x000001C615C381E0>, 'model_files_path': 'models'}
---------------
Baseline KF model training (noise estimation) done.
Training OKF:
samples=595(t)+105(v)=700; batch_size=10; iterations=1(e)x59(b)=59.
[OKF] Training done (29 [s])
best valid loss: 767; no early stopping: 1 epochs, 59 batches, 59 total iterations.
---------------
Test loss:
KF (baseline): 808
OKF: 674
---------------
Wall time: 43.4 s
###Markdown
------------------------------ The Simple Lidar problem: an introduction Our example problem is a simulation of a simple lidar system: there is a single lidar sensor in a constant, known location, and we iteratively receive new measurements of the target location from the sensor.Our simulator generates the data as 2 lists of dataframes:* X[i] = a dataframe of the i'th target states.* Z[i] = a dataframe of the i'th target observations.
###Code
# True = load existing data; False = generate new data (takes a few seconds)
ONLY_LOAD = False
%%time
if ONLY_LOAD:
X, Z = SIM.load_data(fpath='data/simple_lidar_data.pkl')
else:
X, Z = SIM.simulate_data(fpath='data/simple_lidar_data.pkl')
print(f'Simulated states:\ta {type(X)} of {len(X):d} targets, each is a {type(X[0])} of shape (n_time_steps, {X[0].shape[1]}).')
print(f'Simulated observations:\ta {type(Z)} of {len(Z):d} targets, each is a {type(Z[0])} of shape (n_time_steps, {Z[0].shape[1]}).')
X[0].head()
Z[0].head()
SIM.display_data(X, Z);
###Output
_____no_output_____
###Markdown
Prepare the dataThe data format used in the okf package is 2 lists of n_targets:* X[i] = a numpy array of type double and shape (n_time_steps(target i), **state_dimensions**).* Z[i] = a numpy array of type double and shape (n_time_steps(target i), **observation_dimensions**).In our example, we just need to convert the dataframes into numpy arrays, as implemented in the simulator module:
###Code
X, Z = SIM.get_trainable_data(X, Z)
print(f'Simulated states:\ta {type(X)} of {len(X):d} targets, each is a {type(X[0])} of shape (n_time_steps, {X[0].shape[1]}).')
print(f'Simulated observations:\ta {type(Z)} of {len(Z):d} targets, each is a {type(Z[0])} of shape (n_time_steps, {Z[0].shape[1]}).')
print(len(X))
X[0][:5, :]
print(len(Z))
Z[0][:5, :]
###Output
1000
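###Markdown
If you bring your own data instead of the simulator, the conversion boils down to turning each per-target DataFrame into a double-precision numpy array. The sketch below uses dummy DataFrames as a stand-in for your own pipeline and assumes each DataFrame holds only the state or observation columns, in order.
###Code
import numpy as np
import pandas as pd

# Dummy per-target DataFrames (replace with your own data)
X_dfs = [pd.DataFrame(np.random.randn(100, 4)) for _ in range(3)]
Z_dfs = [pd.DataFrame(np.random.randn(100, 2)) for _ in range(3)]

# Convert to the format expected by okf: lists of numpy arrays of type double
X_own = [df.to_numpy(dtype=np.double) for df in X_dfs]
Z_own = [df.to_numpy(dtype=np.double) for df in Z_dfs]
###Output
_____no_output_____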
###Markdown
Split to train/test dataNote that in the terminology of machine learning, Z is the sequential input and X is the sequential output.
###Code
n_train = int(0.7*len(X))
Ztrain, Xtrain = Z[:n_train], X[:n_train]
Ztest, Xtest = Z[n_train:], X[n_train:]
###Output
_____no_output_____
###Markdown
Model Prepare the model configurationTo create a model, we need to define the state & observation dimensions; the dynamics & observation models; and the function that initializes the state according to the first observation in a new trajectory.All these are provided by the `okf.example.simple_lidar_model.py` module:
###Code
lidar_model_args = dict(
dim_x = 4, # the number of entries in a state
dim_z = 2, # the number of entries in an observation
init_z2x = LID.initial_observation_to_state, # a function that receives the first observation and returns the first state-estimate
F = LID.get_F(), # the dynamics model: a pytorch tensor of type double and shape (dim_x, dim_x)
H = LID.get_H(), # the observation model: a pytorch tensor of type double and shape (dim_z, dim_x); or a function (see below)
loss_fun=LID.loss_fun(), # function(predicted_x, true_x) used as loss for training and evaluation
model_files_path = 'models', # directory in which to save the model
)
print('Dynamics model (F) & observation model (H):')
lidar_model_args['F'], lidar_model_args['H']
###Output
Dynamics model (F) & observation model (H):
###Markdown
Create the modelsIn this example, we will train and test the following models:1. **KF**: A standard KF tuned by noise estimation.2. **OKF**: A KF optimized wrt the MSE of the location estimates.
###Code
models = [
okf.OKF(model_name='KF', optimize=False, **lidar_model_args),
okf.OKF(model_name='OKF', optimize=True, **lidar_model_args),
]
###Output
_____no_output_____
###Markdown
Handling a non-linear model (not needed in our example)**Q**: What if the model is not linear, i.e., F or H is not a constant matrix?**A**: okf supports non-linear models - you simply need to provide a function instead of a matrix.For example, if the sensor is a 2D Doppler radar, and you want to use the recent observation to approximate the non-linear observation model, you have to pass the following `H` to OKF:def H(x, z): :param x: the current state estimate (unused in this case; could be used instead of z to approximate H). :param z: the current observation. The resulting observation model is a linear transformation (3x4 matrix): x,y,vx,vy -> x,y,Doppler. r = np.sqrt(z[0]**2+z[1]**2) return torch.tensor([ [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, z[0]/r, z[1]/r], ], dtype=torch.double) (a cleaned-up, runnable version of this function is shown after the training cell below) Train We can call `okf.train()` for every model or `okf.train_models()` once for all models:
###Code
%%time
res_per_iter, res_per_sample = okf.train_models(models, Ztrain, Xtrain, verbose=2)
###Output
Training OKF:
samples=595(t)+105(v)=700; batch_size=10; iterations=1(e)x59(b)=59.
[OKF] 01.0001/01.0059: train_RMSE=28.61, valid_RMSE=26.45 | 2 [s]
[OKF] 01.0031/01.0059: train_RMSE=26.47, valid_RMSE=24.44 | 16 [s]
[OKF] 01.0059/01.0059: train_RMSE=25.44, valid_RMSE=23.99 | 28 [s]
[OKF] Epoch 1/1 (28 [s])
[OKF] Training done (28 [s])
best valid loss: 598; no early stopping: 1 epochs, 59 batches, 59 total iterations.
Wall time: 31.5 s
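###Markdown
For reference, here is the non-linear observation-model function from the note above, reformatted as runnable code. It is taken directly from that description and is not needed for the simple lidar example.
###Code
import numpy as np
import torch

def H(x, z):
    # :param x: the current state estimate (unused in this case; could be used instead of z to approximate H).
    # :param z: the current observation.
    # The resulting observation model is a linear transformation (3x4 matrix): x,y,vx,vy -> x,y,Doppler.
    r = np.sqrt(z[0]**2 + z[1]**2)
    return torch.tensor([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, z[0]/r, z[1]/r],
    ], dtype=torch.double)
###Output
_____no_output_____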
###Markdown
Only optimized models appear in the training monitor, but all models appear in the final results:
###Code
print('Models in training monitor:')
print(np.unique(res_per_iter.model))
res_per_iter.head()
print('Models in final results:')
print(np.unique(res_per_sample.model))
res_per_sample.head()
###Output
Models in final results:
['KF' 'OKF']
###Markdown
OKF training summary:
###Code
ax = okf.utils.Axes(1, 1, axsize=(8,4))[0]
sns.lineplot(data=res_per_iter[res_per_iter.model=='OKF'], x='t', hue='group', y='RMSE', ax=ax)
okf.utils.labels(ax, 'training iteration', 'RMSE');
###Output
_____no_output_____
###Markdown
Parameters inspectionWe can display the learned parameters Q,R.In our example, we see that the optimization learned to decrease the parameters of the observation noise R: the values in the bottom-right heatmap are lower than in the top-right.In our paper, we explain how this is caused by the choice of coordinates, which makes the noise autocorrelated.This issue is easy to miss, and without using the optimizer, the user might not even be aware of the sub-optimality of the model.
###Code
for m in models:
plt.figure()
m.display_params()
###Output
_____no_output_____
###Markdown
Test
###Code
# Reloading the models is not necessary if the tests are run immediately after training
for m in models:
m.load_model()
###Output
_____no_output_____
###Markdown
Test every model over all the test data, and concatenate all the results to a single data-frame:
###Code
%%time
test_res = pd.DataFrame()
for m in models:
test_res = pd.concat((test_res, okf.test_model(m, Ztest, Xtest, detailed=True, loss_fun=LID.loss_fun())))
test_res.head()
###Output
Wall time: 9.88 s
###Markdown
Various visualizations are supported.In our example, it is clear that the optimization achieves significantly more accurate state estimations than noise estimation (lower filtering errors).
###Code
%%time
okf.analyze_test_results(test_res);
###Output
Wall time: 2.95 s
###Markdown
Tracking visualizationNote that in our example problem, the targets begin relatively close to the sensor and their distance tends to grow in time. Thus, the observation errors, which are simulated i.i.d. in polar coordinates, tend to increase in Cartesian coordinates. This also explains the figure above of the error vs. time-step.
###Code
# Note:
# - (xdim, ydim) determine which state-dimensions to plot - in our case the (x,y) location components.
# - show_observations=True shows observations in addition to states, using the same (xdim, ydim).
okf.display_tracking(models, Ztest, Xtest, n=4, t_min=10, xdim=0, ydim=1, show_observations=True);
###Output
_____no_output_____
###Markdown
Dirac on a Graph Mark Hale
###Code
from dirac_graph import *
from dirac_graph import DifferentialOperators
from igraph import Graph
from igraph.drawing import plot
import numpy as np
np.set_printoptions(linewidth=100)
###Output
_____no_output_____
###Markdown
Graph
###Code
g = Graph(directed=True) # directed to track edge orientations
g.add_vertices([1,2,3,4,5,6,7])
edge_list = [(2,1),(3,1),(3,2),(4,2),(4,3),(5,3),(6,4),(6,5),(7,4)]
g.add_edges([(i-1,j-1) for i,j in edge_list])
plot(g, bbox=(250,250), vertex_label=g.vs['name'])
###Output
_____no_output_____
###Markdown
Cliques
###Code
clqs = cliques_by_dim(g)
for i,cs in enumerate(clqs):
print("{0}-vertex cliques".format(i+1))
for c in cs:
print("\t{0}".format(', '.join([str(g.vs[v]['name']) for v in c])))
###Output
1-vertex cliques
1
2
3
4
5
6
7
2-vertex cliques
2, 1
3, 1
3, 2
4, 2
4, 3
5, 3
6, 4
7, 4
6, 5
3-vertex cliques
1, 2, 3
2, 3, 4
###Markdown
Dirac
###Code
D = dirac(clqs)
D
# eigenvalues
sorted(np.linalg.eigvalsh(D))
###Output
_____no_output_____
###Markdown
Laplacian
###Code
L = D@D
L
# eigenvalues
sorted(np.linalg.eigvalsh(L))
###Output
_____no_output_____
###Markdown
Cohomology
###Code
d = exterior_d(D)
assert np.allclose(d@d, 0)
dstar = adjoint_d(D)
L_ = [subspace(L, i, clqs) for i in range(len(clqs))]
###Output
_____no_output_____
###Markdown
Direct calculation
###Code
# coboundary operators
d_ = [subspace(D, i+1, clqs, i) for i in range(len(clqs)-1)]
ker_ = []
im_ = []
for d_i in d_:
im, ker = im_ker(d_i)
im_.append(im)
ker_.append(ker)
for i in range(len(clqs)):
if i < len(ker_):
dim_ker = ker_[i].shape[1]
dim_im = im_[i-1].shape[1] if i > 0 else 0
b = dim_ker - dim_im
else:
b = 0
print("b_{0} = {1}\n".format(i, b))
###Output
b_0 = 1
b_1 = 1
b_2 = 0
###Markdown
Using Hodge theory
###Code
cohom_groups = cohomology_groups(L_)
for i, H in enumerate(cohom_groups):
print("b_{0} = {1}, H^{0} spanned by\n{2}\n".format(i, len(H), H))
###Output
b_0 = 1, H^0 spanned by
[array([0.37796447, 0.37796447, 0.37796447, 0.37796447, 0.37796447, 0.37796447, 0.37796447])]
b_1 = 1, H^1 spanned by
[array([ 6.56532164e-02, -6.56532164e-02, -1.31306433e-01, 1.96959649e-01, 3.28266082e-01,
-5.25225731e-01, 5.25225731e-01, -1.11022302e-16, -5.25225731e-01])]
b_2 = 0, H^2 spanned by
[]
###Markdown
Differential operators$$\mathrm{curl}\circ\mathrm{grad} = 0$$$$\mathrm{div}\circ\mathrm{curl}^* = 0$$
###Code
diff_ops = DifferentialOperators(clqs)
assert np.allclose(diff_ops.curl@diff_ops.grad, 0)
assert np.allclose(diff_ops.div@diff_ops.cocurl, 0)
###Output
_____no_output_____
###Markdown
Supersymmetry
###Code
Y = gamma(clqs)
assert np.allclose(Y@D + D@Y, 0)
assert np.isclose(np.trace(Y@L), 0)
###Output
_____no_output_____
###Markdown
Gravity$$d^* \mathbf{F} = \rho$$Let $\mathbf{F} = d V$, where $V$ is the gravitational potential, then$$d^* d V = \rho$$As $d^* V = 0$, have$$L_0 V = \rho$$
###Code
# put some mass on a few vertices
# cannot be in H^0 (which basically requires rho to be 0 on average)
rho = np.zeros(clqs.dim(0))
rho[0] = 1
rho[6] = -1
# uncomment to fix-up an invalid rho
# rho = remove_kernel(rho, cohom_groups[0])
# print("Corrected rho: {0}".format(rho))
for basis_vec in cohom_groups[0]:
assert np.allclose(0, np.dot(basis_vec, rho)), "Not outside the kernel of L (H^0)"
V = np.linalg.pinv(L_[0])@rho
V_D = dirac_space(V, 0, clqs)
F = d@V_D
# sanity checks
rho_D = dirac_space(rho, 0, clqs)
assert np.allclose(rho_D, dstar@F), "F is not a solution for rho"
assert np.allclose(rho_D, dstar@d@V_D), "V is not a solution for rho"
g.vs['V'] = get_vertex_values(V, clqs)
g.es['F'] = get_edge_values(F, clqs, g)
plot(g, bbox=(400,300), margin=20, vertex_label=["{0}\n{1:.3f}".format(v['name'], v['V']) for v in g.vs], edge_label=["{0:.3f}".format(e['F']) for e in g.es], vertex_size=40, vertex_color=255, vertex_frame_color='grey')
###Output
_____no_output_____
###Markdown
Electromagnetism$$d F = 0,$$$$d^* F = \mathbf{j}$$Let $F = d \mathbf{A}$, where $A$ is the electromagnetic potential, then$$d^* d \mathbf{A} = \mathbf{j}$$In the Coulomb gauge, $d^* \mathbf{A}=0$, have$$L_1 \mathbf{A} = \mathbf{j}$$
###Code
# setup two current loops as an example
# must be built from non-zero eigenvalue eigenvectors
j = np.zeros(clqs.dim(1))
j[0] = -0.75
j[1] = 0.75
j[2] = -1.5
j[3] = 0.75
j[4] = -0.75
# uncomment to fix-up invalid j
# j = remove_kernel(j, cohom_groups[1])
# print("Corrected j: {0}".format(j))
for basis_vec in cohom_groups[1]:
assert np.allclose(0, np.dot(basis_vec, j)), "Not outside the kernel of L (H^1)"
A = np.linalg.pinv(L_[1])@j
A_D = dirac_space(A, 1, clqs)
F=d@A_D
# sanity checks
# verify Coulomb gauge
assert np.allclose(0, dstar@A_D), "Doesn't satisfy Coulomb gauge condition"
j_D = dirac_space(j, 1, clqs)
assert np.allclose(j_D, dstar@F), "F is not a solution for j"
assert np.allclose(j_D, dstar@d@A_D), "A is not a solution for j"
g.es['j'] = get_edge_values(j, clqs, g)
g.es['A'] = get_edge_values(A, clqs, g)
plot(g, bbox=(400,300), margin=20, vertex_label=g.vs['name'], edge_label=["{0}\n{1:.3f}".format(e['j'], e['A']) for e in g.es], vertex_color=255, vertex_frame_color='grey')
# F 2-form values
for k, v in get_2form_values(F, clqs).items():
print("{0}: {1}".format(', '.join([str(g.vs[i]['name']) for i in k]), v))
###Output
1, 2, 3: 0.7499999999999996
2, 3, 4: 0.7499999999999991
###Markdown
Get some test data.
###Code
n = 200
X = np.random.rand(n, n).astype(np.float32)
###Output
_____no_output_____
###Markdown
Initialize DistanceMatrix object and calculate the distance matrix.
###Code
DM = DistanceMatrix()
DM.calculate_distmatrix(X)
###Output
_____no_output_____
###Markdown
Get specific value in the distance matrix.
###Code
DM.get_similarity(10,2)
cosine_similarity(X)[10,2]
###Output
_____no_output_____
###Markdown
Retrieve the flattened (lower-triangle) distance matrix and compare it to Sklearn's version.
###Code
SKlearn_under = cosine_similarity(X)[np.tril_indices(n, k=-1)]
under_dist = DM.get_distance_matrix(fullMatrix=False)
np.allclose(np.sort(under_dist), np.sort(SKlearn_under))
###Output
_____no_output_____
###Markdown
It is possible to retrieve the full distance matrix if necessary.
###Code
SKlearn_full = cosine_similarity(X)
DM_full = DM.get_distance_matrix(fullMatrix=True)
np.allclose(SKlearn_full, DM_full)
###Output
_____no_output_____
###Markdown
A small demo of background generator[should work in both python2 and python3]
###Code
from __future__ import print_function
from prefetch_generator import BackgroundGenerator, background,__doc__
print(__doc__)
###your super-mega data iterator
import numpy as np
import time
def iterate_minibatches(n_batches, batch_size=10):
for b_i in range(n_batches):
time.sleep(0.1) #here it could read file or SQL-get or do some math
X = np.random.normal(size=[batch_size,20])
y = np.random.randint(0,2,size=batch_size)
yield X,y
###Output
_____no_output_____
###Markdown
regular mode
###Code
%%time
#tqdm made in china
print('/'+'-'*42+' Progress Bar ' + '-'*42 + '\\')
for b_x,b_y in iterate_minibatches(50):
#training
time.sleep(0.1) #here it could use GPU for example
print('!',end=" ")
print()
###Output
/------------------------------------------ Progress Bar ------------------------------------------\
! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
CPU times: user 100 ms, sys: 20 ms, total: 120 ms
Wall time: 10.1 s
###Markdown
with prefetch
###Code
%%time
print('/'+'-'*42+' Progress Bar ' + '-'*42 + '\\')
for b_x,b_y in BackgroundGenerator(iterate_minibatches(50)):
#training
time.sleep(0.1) #here it could use some GPU
print('!',end=" ")
print()
###Output
/------------------------------------------ Progress Bar ------------------------------------------\
! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
CPU times: user 68 ms, sys: 16 ms, total: 84 ms
Wall time: 5.14 s
###Markdown
Same with decorator
###Code
###your super-mega data iterator again, now with background decorator
import numpy as np
import time
@background(max_prefetch=3)
def bg_iterate_minibatches(n_batches, batch_size=10):
for b_i in range(n_batches):
time.sleep(0.1) #here it could read file or SQL-get or do some math
X = np.random.normal(size=[batch_size,20])
y = np.random.randint(0,2,size=batch_size)
yield X,y
%%time
print('/'+'-'*42+' Progress Bar ' + '-'*42 + '\\')
for b_x,b_y in bg_iterate_minibatches(50):
#training
time.sleep(0.1)#you guessed it
print('!',end=" ")
print()
###Output
/------------------------------------------ Progress Bar ------------------------------------------\
! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
CPU times: user 56 ms, sys: 20 ms, total: 76 ms
Wall time: 5.14 s
###Markdown
Example: Learning the Fisher-KPP equations from simulated dataIn this notebook, we demonstrate using the `pdel` module I have coded up to learn equations from simulated data in two dimensions for the Fisher-KPP equation: $$\partial_t \rho = r \rho (\rho_0-\rho) + D (\partial_{xx} \rho + \partial_{yy} \rho),$$where $r$ and $D$ are the reaction/growth and diffusion parameters, respectively. PDEs of the above type appear in diverse chemical and biological reaction-diffusion systems. For the provided data, the parameters are set to $r = 1, \rho_0 = 1$, and $D = 0.01$.
###Code
import os
import sys
import numpy as np
import h5py
import sys
import matplotlib as mpl
import matplotlib.pyplot as plt
import pdel.pdelearn as pdel
from pdel.funcs import *
from tqdm import tqdm_notebook as tqdm
import glob
from tqdm import tqdm
import logging
import importlib
%matplotlib inline
#set environment variables
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["NUMEXPR_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"
os.environ['HDF5_USE_FILE_LOCKING']='FALSE'
###Output
_____no_output_____
###Markdown
Set data paths and the learning parameters
###Code
data_path = 'data'
filename = 'data.h5'
f = h5py.File('%s/%s' %(data_path,filename), 'r')
desc = list(f['tasks'])
num_data = int(5e3)
num_cores = 2
n_folds = 4
n_repeats = 25
add_noise = True
std_dev = 0.05
w0 = 0 #weighting, set as 1 to switch it on
order = 3 #polynomial order upto which to generate the features
stab_thresh = 0.8 #threshold for stability selection
num_iters = 100 #max number of iterations
algo = 'stridge' #algorithm to use
nlam1 = 20 #number of values of the first hyperparameter
nlam2 = 1 #number of values of the second hyperparameter
run_name = 'noise%0.2f-%s' %(std_dev, algo)
if algo != 'stridge':
nlam2 = 1
seed0 = np.random.randint(0,100)
pdel_path = '%s/PDElearn' %(data_path)
save_path = '%s/PDElearn/%s' %(data_path, run_name)
if os.path.exists(pdel_path) == False: os.mkdir(pdel_path)
if os.path.exists(save_path) == False: os.mkdir(save_path)
###Output
_____no_output_____
###Markdown
Start logging
###Code
importlib.reload(logging)
logging.basicConfig(format='%(asctime)s:%(name)s:%(levelname)s: %(message)s',
level=logging.INFO, \
handlers=[logging.FileHandler(save_path + '/log.out', mode = 'w'), \
logging.StreamHandler()])
logger = logging.getLogger(__name__)
logger.info('Logging Started for run: %s' %(run_name))
###Output
2021-02-24 22:51:07,425:__main__:INFO: Logging Started for run: noise0.05-stridge
###Markdown
Load data for the field and its derivatives That is, $\rho, \partial_t \rho, \partial_x \rho, \partial_{yy} \rho,$ etc.
###Code
# create a dictionary of all the variables
data_raw = {key: None for key in desc}
data_cv = {key: None for key in desc}
nt, nx, ny = np.array(f['tasks']['rho']).shape
x = np.array(f['scales']['x']['1.0'])
y = np.array(f['scales']['y']['1.0'])
t = np.array(f['scales']['sim_time'])
ttt, xxx, yyy = np.meshgrid(t,x,y, indexing='ij')
#radius within which to consider the data
r0 = 0.75
inds = np.where((xxx**2 + yyy**2) < r0**2)
num_data = min([num_data, len(inds[0])])
rand= np.random.RandomState(seed=seed0)
rand_inds = rand.choice(len(inds[0]), num_data, replace=False)
for key in desc:
data_raw[key] = np.array(f['tasks'][key])
temp = np.expand_dims(data_raw[key][inds],axis=1)
data_cv[key] = temp[rand_inds, :]
#add noise to the simulated data
if add_noise:
for key in desc:
data_cv[key] = data_cv[key]*np.random.normal(1, std_dev, (num_data, 1))
data_raw[key] = data_raw[key]*np.random.normal(1, std_dev, data_raw[key].shape)
data_raw['1'] = np.ones_like(data_raw['rho'])
data_cv['1'] = np.ones_like(data_cv[desc[0]])
n = data_cv['1'].shape[0]
###Output
_____no_output_____
###Markdown
Plot the data
###Code
plt.figure(figsize=(8,3), dpi=300)
ind_t = 30
ind_x = 64
plt.subplot(121)
plt.pcolormesh(x, y, data_raw['rho'][ind_t, :, :], cmap='plasma')
plt.colorbar()
plt.axvline(0, linestyle='--', color='w')
plt.title(r'Density $\rho$ at $t=%0.2f$' %(t[ind_t]))
plt.xlabel('x')
plt.ylabel('y')
plt.subplot(122)
plt.pcolormesh(t, y, data_raw['rho'][:, ind_x, ].T, cmap='plasma')
plt.colorbar()
plt.title(r'Density $\rho$ at $x=%0.2f$' %(x[ind_x]))
plt.xlabel('t')
plt.ylabel('y')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Make a video of the dynamics
###Code
from matplotlib.animation import FFMpegWriter
metadata = dict(title='Fields Movie', artist='Matplotlib', comment='Movie support!')
writer = FFMpegWriter(fps=15, metadata=metadata)
dpi = 300
rho = data_raw['rho']*np.random.normal(1, std_dev, data_raw['rho'].shape)
rho_max_lim = np.max(rho)
rho_min_lim = np.min(rho)
fig = plt.figure(figsize=(5,4))
ax = plt.axes()
img = plt.pcolormesh(x,y,rho[0, :, :], vmin=rho_min_lim, vmax=rho_max_lim, cmap='plasma'); plt.colorbar()
step = 20
skip = 0
with writer.saving(fig, "%s/rho_movie.mp4" %(save_path), dpi):
for count, i in enumerate(tqdm(range(0,nt,skip+1))):
img.set_array(rho[i, :, :][:-1,:-1].ravel())
ax.set_title('t=%0.2f' %(t[i]))
writer.grab_frame()
###Output
2021-02-24 22:40:53,379:matplotlib.animation:INFO: MovieWriter.run: running command: ['ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '1500x1200', '-pix_fmt', 'rgba', '-r', '15', '-loglevel', 'quiet', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-metadata', 'title=Fields Movie', '-metadata', 'artist=Matplotlib', '-metadata', 'comment=Movie support!', '-y', 'data/PDElearn/noise0.05-stridge/rho_movie.mp4']
100%|██████████| 60/60 [00:04<00:00, 13.58it/s]
###Markdown
Here, we define the base features/library terms to be includedOther library terms are generated by multiplying these features with powers of $\rho$ as shown later.
###Code
#define features
#(include a * between raw features that need to be multiplied)
features = ['1', 'rho_x', 'rho_y', 'rho_xx', 'rho_yy', 'rho_xy']
###Output
_____no_output_____
###Markdown
Generate pdel model by providing it with data and the training labels
###Code
#reload the module
importlib.reload(pdel)
model = pdel.PDElearn('rho', 'rho_t', features, data_cv, poly_order=order, \
print_flag = False, sparse_algo=algo, \
path=save_path)
print('The features are shown below:\n')
print(model.Theta_desc)
###Output
The features are shown below:
['rho^1', 'rho^2', 'rho^3', 'rho_x', 'rho^1 rho_x', 'rho^2 rho_x', 'rho^3 rho_x', 'rho_y', 'rho^1 rho_y', 'rho^2 rho_y', 'rho^3 rho_y', 'rho_xx', 'rho^1 rho_xx', 'rho^2 rho_xx', 'rho^3 rho_xx', 'rho_yy', 'rho^1 rho_yy', 'rho^2 rho_yy', 'rho^3 rho_yy', 'rho_xy', 'rho^1 rho_xy', 'rho^2 rho_xy', 'rho^3 rho_xy']
###Markdown
Perform cross-validation after defining the hyperparameter interval range
###Code
#hyper-parameters to sweep
lambda_min, lambda_max = get_lambda_lims(*scale_X_y(model.Theta, model.ft), 0.1)
lam1_arr = np.logspace(np.log10(lambda_min), np.log10(lambda_max), nlam1)
lam2_arr = np.logspace(-2, 3, nlam2)
#cross validate
model.run_cross_validation(lam1_arr, lam2_arr, n_cores=num_cores, \
n_folds=n_folds, n_repeats=n_repeats, maxit=num_iters, plot_folds=True);
#find the relaxed intersection set of the learned PDEs
model.find_intersection_of_folds(thresh=stab_thresh, plot_hist=False);
###Output
2021-02-24 23:03:08,878:pdel.pdelearn:INFO: Spase solver selected: stridge
2021-02-24 23:03:08,882:pdel.pdelearn:INFO: Running cross validation: 4 folds, 25 repeats
100%|██████████| 100/100 [00:14<00:00, 7.33it/s]
2021-02-24 23:03:56,794:pdel.pdelearn:INFO: Cross Validation done!
2021-02-24 23:03:56,795:pdel.pdelearn:INFO: Finding the intersection set of PDEs from the folds!
###Markdown
Find the Pareto front and print the PDEsAs we see below, the correct PDE is discovered. Due to the combination of cross-validation and construction of the Pareto front based on the test area, we are left with only one PDE, which is the true one.
###Code
model.find_pareto(plot_fig=True)
model.print_pdes(model.pareto_coeffs, model.pareto_errors, score=model.pareto_scores, \
complexity=model.pareto_complexity, file_name_end='intersect')
###Output
Log(loss) = 0.084829
score = 1.000
complexity = 4.000
rho_t = (0.99076)rho^1
+ (-0.97966)rho^2
+ (0.01014)rho_xx
+ (0.00995)rho_yy
###Markdown
Use stability selection to observe the stability of the features and choose models based on a thresholdThis is yet another way to choose models. It provides a graphical way to investigate which terms are 'stable'. We look at the 'stability score' of each term, which is the fraction of folds in which the term was found. Through the stability plot, we can see that the most relevant terms jump to a stability score of 1 as the hyperparameter (threshold in the STRidge algorithm) is decreased. Along this stability path, all the unique PDEs above the stability threshold are printed.
###Code
#stability selection
coeffs_all, error_all, _, complexity = model.select_stable_components(thresh=stab_thresh, plot_stab=True)
model.print_pdes(coeffs_all, error_all, complexity=complexity, file_name_end='stability')
###Output
Log(loss) = 1.371029
complexity = 3.000
rho_t = (1.15840)rho^1
+ (-1.43255)rho^2
+ (0.00976)rho_yy
Log(loss) = -0.281876
complexity = 4.000
rho_t = (0.99076)rho^1
+ (-0.97966)rho^2
+ (0.01014)rho_xx
+ (0.00995)rho_yy
###Markdown
Goal:The primary goal of this example script is to showcase the tools available in the bmpmod package using mock data. The mock data is produced by randomly sampling the density and temperature profiles models published in Vikhlinin+06 for a sample of clusters (Vikhlinin, A., et al. 2006, ApJ, 640, 691). A secondary goal of this example is thus to also explore how the backwards mass modeling process used in the bmpmod package compares to the forward fitting results of Vikhlinin+. The mock profiles generated here allow for a flexible choice in noise and radial sampling rate, which enables an exploration of how these quantities affect the output of the backwards-fitting process. There is also some flexibility built into the bmpmod package that can be additionally tested such as allowing for the stellar mass of the central galaxy to be included (or not included) in the model of total gravitating mass. If the stellar mass profile of the BCG is toggled on, the values for the BCG effective radius Re are pulled from the 2MASS catalog values for a de Vaucouleurs fit to K-band data . After generating the mock temperature and density profiles, the script walks the user through performing the backwards-fitting mass modelling analysis which can be summarized as fitting the below $T_{\mathrm{model}}$ expression to the observed temperature profile by constraining the parameters in the total gravitating mass model $M_{\mathrm{tot}}$.$kT_{\mathrm{model}}(R) = \frac{kT(R_{\mathrm{ref}}) \ n_{e}(R_{\mathrm{ref}})}{n_{e}(R)} -\frac{\mu m_{p} G}{n_{e}(R)}\int_{R_{\mathrm{ref}}}^R \frac{n_{e}(r) M_{\mathrm{grav}}(r)}{r^2} dr$The output of the bmpmod analysis includes a parametric model fit to the gas denisty profile, a non-parametric model fit to the temperature profile, the total mass profile and its associated parameters describing the profile (e.g., the NFW c, Rs), and the contributions of different mass components (i.e., DM, gas, stars) to the total mass profile.This tutorial will go over: 1. Generating mock gas density and temperature data2. Fiting the gas density profile with a parametric model3. Maximum likelihood mass profile parameter estimation 4. MCMC mass profile parameter estimation5. Plotting and summarizing the results A note on usage:Any of the clusters in Vikhlinin+06 are options to be used to generate randomly sampled temperature and density profiles. The full list of clusters is as follows: Vikhlinin+ clusters: [A133, A262, A383, A478, A907, A1413, A1795, A1991, A2029, A2390, RXJ1159+5531, MKW4, USGCS152] After selecting one of these clusters, this example script will automatically generate the cluster and profile data in the proper format to be used by the bmpmod modules. If you have your own data you would like to analyze with the bmpmod package, please see the included template.py file.
###Code
#select any cluster ID from the Vikhlinin+ paper
clusterID='A383'
###Output
_____no_output_____
###Markdown
1. Generate mock gas density and temperature profiles To generate the mock profiles, the density and temperature models defined in Tables 2 and 3 of Vikhlinin+06 are sampled. The sampling of the models occurs in equally log-spaced radial bins with the number of bins set by N_ne and N_temp in gen_mock_data(). At each radial point, the density and temperature values are randomly sampled from a Gaussian distribution centered on the model value and with standard deviation equal to noise_ne and noise_temp multiplied by the model value for density or temperature.Args for gen_mock_data(): N_ne: the number of gas density profile data points N_temp: the number of temperature profile data points noise_ne: the percent noise on the density values noise_temp: the percent noise on the temperature values refindex: index into profile where Tmodel = Tspec incl_mstar: include stellar mass of the central galaxy in the model for total gravitating mass incl_mgas: include gas mass of ICM in the model for total gravitating mass
###Code
clustermeta, ne_data, tspec_data, nemodel_vikhlinin, tmodel_vikhlinin \
= gen_mock_data(clusterID=clusterID,
N_ne=30,
N_temp=10,
noise_ne=0.10,
noise_temp=0.05,
refindex=-1,
incl_mstar=1,
incl_mgas=1)
###Output
_____no_output_____
###Markdown
Now let's take a look at the returns... while these are generated automatically here, if you use your own data, things should be in a similar form.
###Code
# clustermeta:
# dictionary that stores relevant properties of cluster
# (i.e., name, redshift, bcg_re: the effective radius of the central galaxy in kpc,
# bcg_sersc_n: the sersic index of the central galaxy)
# as well as selections for analysis
# (i.e., incl_mstar, incl_mgas, refindex as input previously)
clustermeta
#ne_data: dictionary that stores the mock "observed" gas density profile
ne_data[:3]
#tspec_data: dictionary that store the mock "observed" temperature profile
tspec_data[:3]
###Output
_____no_output_____
###Markdown
Let's take a look at how our mock profiles compare to the model we're sampling from ...
###Code
fig1 = plt.figure(1, (12, 4))
ax = fig1.add_subplot(1, 2, 1)
'''
mock gas denisty profile
'''
# plot Vikhlinin+06 density model
xplot = np.logspace(np.log10(min(ne_data['radius'])), np.log10(max(ne_data['radius'])), 1000)
plt.loglog(xplot, vikhlinin_neprof(nemodel_vikhlinin, xplot), 'k')
plt.xlim(xmin=min(ne_data['radius']))
# plot sampled density data
plt.errorbar(ne_data['radius'], ne_data['ne'],
xerr=[ne_data['radius_lowerbound'], ne_data['radius_upperbound']],
yerr=ne_data['ne_err'], marker='o', markersize=2, linestyle='none', color='b')
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
plt.xlabel('r [kpc]')
plt.ylabel('$n_{e}$ [cm$^{-3}$]')
'''
mock temperature profile
'''
ax = fig1.add_subplot(1, 2, 2)
# plot Vikhlinin+06 temperature model
xplot = np.logspace(np.log10(min(tspec_data['radius'])), np.log10(max(tspec_data['radius'])), 1000)
plt.semilogx(xplot, vikhlinin_tprof(tmodel_vikhlinin, xplot), 'k-')
# plot sampled temperature data
plt.errorbar(tspec_data['radius'], tspec_data['tspec'],
xerr=[tspec_data['radius_lowerbound'], tspec_data['radius_upperbound']],
yerr=[tspec_data['tspec_lowerbound'], tspec_data['tspec_upperbound']],
marker='o', linestyle='none', color='b')
plt.xlabel('r [kpc]')
plt.ylabel('kT [keV]')
###Output
_____no_output_____
###Markdown
2. Fitting the gas density profile with a parametric model To determine the best-fitting gas density model, bmpmod has the option of fitting the four following $n_{e}$ models through the Levenberg-Marquardt optimization method. "single\_beta": $n_{e} = n_{e,0} \ (1+(r/r_{c})^{2})^{-\frac{3}{2}\beta}$"cusped\_beta": $n_{e} = n_{e,0} \ (r/r_{c})^{-\alpha} \ (1+(r/r_{c})^{2})^{-\frac{3}{2}\beta+\frac{1}{2}\alpha}$"double\_beta\_tied": $n_{e} = n_{e,1}(n_{e,0,1}, r_{c,1}, \beta)+n_{e,2}(n_{e,0,2}, r_{c,2}, \beta)$"double\_beta": $n_{e} = n_{e,1}(n_{e,0,1}, r_{c,1}, \beta_1)+n_{e,2}(n_{e,0,2}, r_{c,2}, \beta_2)$All four models can be fit and compared using the find_nemodeltype() function. A selected model must then be chosen for the following mass profile analysis with the fitne() function.
###Code
#suppress verbose log info from sherpa
logger = logging.getLogger("sherpa")
logger.setLevel(logging.ERROR)
#fit all four ne models and return the model with the lowest reduced chi-squared as nemodeltype
nemodeltype, fig=find_nemodeltype(ne_data=ne_data,
tspec_data=tspec_data,
optplt=1)
print 'model with lowest reduced chi-squared:', nemodeltype
###Output
bmpmod/mod_gasdensity.py:71: RuntimeWarning: divide by zero encountered in power
* ((1.+((x/rc)**2.))**((-3.*beta/2.)+(alpha/2.))) # [cm^-3]
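###Markdown
For reference, the 'single_beta' model above can be written as the following function. This is only a sketch based on the formula shown above, not bmpmod's internal implementation.
###Code
import numpy as np

def ne_single_beta(r, ne0, rc, beta):
    # n_e(r) = ne0 * (1 + (r/rc)^2)^(-3*beta/2)
    return ne0 * (1. + (r / rc)**2.)**(-3. * beta / 2.)
###Output
_____no_output_____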
###Markdown
*Note*: while the function find_nemodeltype() returns the model type producing the lowest reduced chi-squared fit, it may be better to choose a simpler model with fewer free-parameters if the reduced chi-squared values are similar
###Code
# Turn on logging for sherpa to see details of fit
logger = logging.getLogger("sherpa")
logger.setLevel(logging.INFO)
# Find the parameters and errors of the selected gas density model
nemodel=fitne(ne_data=ne_data,tspec_data=tspec_data,nemodeltype=str(nemodeltype)) #[cm^-3]
#nemodel stores all the useful information from the fit to the gas density profile
print nemodel.keys()
###Output
['parmins', 'nefit', 'dof', 'parmaxes', 'rchisq', 'chisq', 'parvals', 'parnames', 'type']
###Markdown
3. Maximum likelihood estimation of mass profile free-parameters The maximum likelihood method can be used to perform an initial estimation of the free-parameters in the cluster mass profile model. The free parameters in the mass model, which will be returned in this estimation, are:- the mass concentration $c$ of the NFW profile used to model the DM halo, - the scale radius $R_s$ of the NFW profile- optionally, the log of the normalization of the Sersic model $\rho_{\star,0}$ used to model the stellar mass profile of the central galaxyThe maximum likelihood estimation is performed using a Gaussian log-likelihood function of the form:$\ln(p) = -\frac{1}{2} \sum_{n} \left[\frac{(T_{\mathrm{spec},n} - T_{\mathrm{model},n})^{2}}{\sigma_{T_{\mathrm{spec},n}}^{2}} + \ln (2 \pi \sigma_{T_{\mathrm{spec},n}}^{2}) \right]$
###Code
ml_results = fit_ml(ne_data, tspec_data, nemodel, clustermeta)
###Output
MLE results
MLE: c= 3.301457611753752
MLE: rs= 313.06947333137026
MLE: normsersic= 7.087350990470565
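###Markdown
For reference, the Gaussian log-likelihood given above corresponds to something like the sketch below; the actual likelihood used by fit_ml() lives inside bmpmod, and the variable names here are illustrative.
###Code
import numpy as np

def gaussian_log_likelihood(tspec, tspec_err, tmodel):
    # ln(p) = -0.5 * sum[ (Tspec - Tmodel)^2 / sigma^2 + ln(2*pi*sigma^2) ]
    return -0.5 * np.sum((tspec - tmodel)**2. / tspec_err**2.
                         + np.log(2. * np.pi * tspec_err**2.))
###Output
_____no_output_____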
###Markdown
bmpmod uses these maximum likelihood results to initialize the walkers in the MCMC chain... 4. MCMC estimation of mass profile model parameters Here the emcee python package is implemented to estimate the free-parameters of the mass model through the MCMC algorithm. bmpmod utilizes the ensemble sampler from emcee, and initializes the walkers in narrow Gaussian distribution about the parameter values returned from the maximum likelihood analysis.Returns of fit_mcmc(): samples - the marginalized posterior distribution sampler - the sampler class output by emcee
###Code
#fit for the mass model and temperature profile model through MCMC
samples, sampler = fit_mcmc(ne_data=ne_data,
tspec_data=tspec_data,
nemodel=nemodel,
ml_results=ml_results,
clustermeta=clustermeta,
Ncores=3,
Nwalkers=50,
Nsteps=50,
Nburnin=15)
###Output
MCMC progress: 10.0%
MCMC progress: 20.0%
MCMC progress: 30.0%
MCMC progress: 40.0%
MCMC progress: 50.0%
MCMC progress: 60.0%
MCMC progress: 70.0%
MCMC progress: 80.0%
MCMC progress: 90.0%
MCMC progress: 100.0%
autocorrelation time: autocorrelation time cannot be calculated
###Markdown
*Note*: Nburnin should be longer than the autocorrelation time 4.1 analysis of the marginalized MCMC distributionWe also want to calculate the radius of the cluster $R_{500}$ and the mass (total, DM, gas, stars) within this radius. The auxiliary calculations are taken care of in samples_aux() for each step of the MCMC chain.
###Code
# calculate R500 and M(R500) for each step of MCMC chain
samples_aux = calc_posterior_mcmc(samples=samples,
nemodel=nemodel,
clustermeta=clustermeta,
Ncores=1)
###Output
_____no_output_____
###Markdown
From the marginalized MCMC distribution, we can calculate the free-parameter and auxiliary parameter (R500, M500) values as the median of the distribution with confidence intervals defined by the 16th and 84th percentiles. With samples_results() we combine all output parameter values and their upper and lower 1$\sigma$ error bounds.
###Code
# combine all MCMC results
mcmc_results = samples_results(samples=samples,
samples_aux=samples_aux,
clustermeta=clustermeta)
for key in mcmc_results.keys():
print 'MCMC: '+str(key)+' = '+str(mcmc_results[str(key)])
#Corner plot of marginalized posterior distribution of free params from MCMC
fig1 = plt_mcmc_freeparam(mcmc_results=mcmc_results,
samples=samples,
sampler=sampler,
tspec_data=tspec_data,
clustermeta=clustermeta)
###Output
WARNING:root:Too few points to create valid contours
###Markdown
5. Summary plot
###Code
# Summary plot: density profile, temperature profile, mass profile
fig2, ax1, ax2 = plt_summary(ne_data=ne_data,
tspec_data=tspec_data,
nemodel=nemodel,
mcmc_results=mcmc_results,
clustermeta=clustermeta)
# add vikhlinin model to density plot
xplot = np.logspace(np.log10(min(ne_data['radius'])), np.log10(max(ne_data['radius'])), 1000)
ax1.plot(xplot, vikhlinin_neprof(nemodel_vikhlinin, xplot), 'k')
#plt.xlim(xmin=min(ne_data['radius']))
# add viklinin model to temperature plot
xplot = np.logspace(np.log10(min(tspec_data['radius'])), np.log10(max(tspec_data['radius'])), 1000)
ax2.plot(xplot, vikhlinin_tprof(tmodel_vikhlinin, xplot), 'k-')
###Output
_____no_output_____
###Markdown
Clean scraped Amazon data with the 'CleaningAmazonData' package
###Code
# Importing Libraries
import pandas as pd
import numpy as np
import os
from CleaningAmazonData import CleanDescriptionFile, CleanReviewFile
# Creating dataframe for description and review table
path = r'C:\Users\Lajar\OneDrive\CrowdDoing\Research\Revised_data\Scrapy_Data'
desc_df = pd.read_csv(os.path.join(path,'1_st_johns_wort_description.csv'))
review_df = pd.read_csv(os.path.join(path,'1_st_johns_wort_review.csv'))
desc_df.shape, review_df.shape
###Output
_____no_output_____
###Markdown
Cleaning Description file
###Code
# Check for missing Value
desc_df.isnull().sum()
# Create instance of CleanDescriptionFile
cdf = CleanDescriptionFile(check_ASIN = True, add_category = True)
# Find invalid ASIN if any in description file
invalid_ASIN = cdf.check_ASIN_validity(desc_df)
###Output
2003 ref=sr_1_1158?dchild=1&keywords=st+johns+wort&...
dtype: object
###Markdown
Note: Analyse the invalid_ASIN array. Try to correct it if possible or remove rows.
###Code
# transform raw description df to cleaned and featured df
cleaned_desc_df = cdf.transform(desc_df)
cleaned_desc_df.head(2)
###Output
_____no_output_____
###Markdown
Note: The resulting dataframe is the cleaned dataframe with the 'Category' feature added.
###Code
cleaned_desc_df.shape
cleaned_desc_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Cleaning Review File
###Code
# Create instance of CleanReviewFile
crf = CleanReviewFile(check_ASIN = True, add_ProcessedText = True)
# Check for invalid ASIN if any
invalid_ASIN = crf.check_ASIN_validity(review_df)
# transform raw review_df to cleaned_review_df with additional feature 'ProcessedText'
cleaned_review_df = crf.transform(review_df)
cleaned_review_df.ProcessedText
###Output
_____no_output_____
###Markdown
Integration in sklearn pipeline
###Code
from sklearn.pipeline import Pipeline
pipe = Pipeline([('CleanDescriptionFile',CleanDescriptionFile(check_ASIN = True, add_category = True))])
df = pipe.transform(desc_df) # pipe.fit_transform(desc_df)
df.head(2)
pipe = Pipeline([('CleanReviewFile',CleanReviewFile(check_ASIN = True, add_ProcessedText = True))])
df = pipe.transform(review_df) # pipe.fit_transform(desc_df)
df.head(2)
###Output
_____no_output_____
###Markdown
Provide paths to the data. You have to prepare 2 files. The first one, **counts**, contains the normalized (but not scaled!) counts matrix (rows are genes, columns are barcodes). The second file contains **meta information** about cells. It must have 2 columns: the first one with cell IDs (e.g. barcodes), and the second with cell labels (e.g. cell types). A sketch of writing such a meta file from a pandas DataFrame follows the preview below.
###Code
counts_path = Path("data/pbmc3k_counts.tsv")
meta_path = Path("data/pbmc3k_meta.txt")
cpdb_output_path = Path("data/cpdb_output")
!head data/pbmc3k_meta.txt
###Output
Cell cell_type
AAACATACAACCAC-1 RPS expressing
AAACATTGAGCTAC-1 B
AAACATTGATCAGC-1 CD4 T
AAACCGTGCTTCCG-1 CD14 Monocytes
AAACCGTGTATGCG-1 NK
AAACGCACTGGTAC-1 CD4 T
AAACGCTGACCAGT-1 CD8 T
AAACGCTGGTTCTT-1 CD8 T
AAACGCTGTAGCCA-1 CD8 T
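###Markdown
If your cell annotations live in a pandas DataFrame, a meta file in the format previewed above can be written out as in the sketch below. This is only an illustration: it assumes a tab-separated file, uses the two column names shown above, and the barcodes, labels and output path are placeholders.
###Code
import pandas as pd

# Placeholder annotations; replace with your own barcodes and labels
meta = pd.DataFrame({
    'Cell': ['AAACATACAACCAC-1', 'AAACATTGAGCTAC-1'],
    'cell_type': ['RPS expressing', 'B'],
})
# Assumes a tab-separated meta file; the path here is just an example
meta.to_csv('data/my_meta.txt', sep='\t', index=False)
###Output
_____no_output_____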
###Markdown
Running CellPhoneDB
###Code
cpdb_launcher = CellPhoneDBLauncher(
meta_file_path=meta_path,
counts_file_path=counts_path,
output_path=cpdb_output_path, # CellPhoneDB saves its output to the files
project_name="pbmc3k",
counts_data="gene_name"
)
cpdb_launcher.run()
###Output
Running command cellphonedb method statistical_analysis --counts-data=gene_name --project-name=pbmc3k --threshold=0.1 --result-precision=3 --output-path=data/cpdb_output --output-format=txt --means-result-name=means --significant-means-result-name=significant_means --deconvoluted-result-name=deconvoluted --debug-seed=-1 --pvalue=0.05 --pvalues-result-name=pvalues --iterations=1000 --threads=-1 --verbose data/pbmc3k_meta.txt data/pbmc3k_counts.tsv
[ ][APP][29/08/21-23:48:04][WARNING] Latest local available version is `v2.0.0`, using it
[ ][APP][29/08/21-23:48:04][WARNING] User selected downloaded database `v2.0.0` is available, using it
[ ][CORE][29/08/21-23:48:04][INFO] Initializing SqlAlchemy CellPhoneDB Core
[ ][CORE][29/08/21-23:48:04][INFO] Using custom database at /home/vladimir/.cpdb/releases/v2.0.0/cellphone.db
[ ][APP][29/08/21-23:48:04][INFO] Launching Method cpdb_statistical_analysis_local_method_launcher
[ ][APP][29/08/21-23:48:04][INFO] Launching Method _set_paths
[ ][APP][29/08/21-23:48:04][WARNING] Output directory (/home/vladimir/PycharmProjects/communiquer/data/cpdb_output/pbmc3k) exist and is not empty. Result can overwrite old results
[ ][APP][29/08/21-23:48:04][INFO] Launching Method _load_meta_counts
[ ][CORE][29/08/21-23:48:06][INFO] Launching Method cpdb_statistical_analysis_launcher
[ ][CORE][29/08/21-23:48:06][INFO] Using Default thread number: 4
[ ][CORE][29/08/21-23:48:06][INFO] Launching Method _counts_validations
[ ][CORE][29/08/21-23:48:07][INFO] [Cluster Statistical Analysis] Threshold:0.1 Iterations:1000 Debug-seed:-1 Threads:4 Precision:3
[ ][CORE][29/08/21-23:48:07][INFO] Running Real Analysis
[ ][CORE][29/08/21-23:48:07][INFO] Running Statistical Analysis
[ ][CORE][29/08/21-23:48:38][INFO] Building Pvalues result
[ ][CORE][29/08/21-23:48:40][INFO] Building results
###Markdown
Read the output and display it
###Code
cpdb_launcher.read_output(convert_to_cellchat_format=True)
cpdb_launcher.pvalues_df
cpdb_launcher.means_df.head()
cpdb_launcher.count_significant_interactions()
cpdb_launcher.counts_df
cpdb_launcher.visualise_interactions()
cpdb_launcher.dotplot_counts()
###Output
_____no_output_____
###Markdown
Running CellChat
###Code
cellchat_launcher = CellChatLauncher(counts_file_path=counts_path, meta_file_path=meta_path)
cellchat_launcher.run()
cellchat_launcher.read_output()
cellchat_launcher.weights_df
###Output
_____no_output_____
###Markdown
CellChat has a different output format. Instead of one big table it has a separate table for each interaction
###Code
cellchat_launcher.weights_df
cellchat_launcher.pvalues_dfs
cellchat_launcher.probabilities_dfs
###Output
_____no_output_____
###Markdown
Let's take a look at one of the tables
###Code
cellchat_launcher.pvalues_dfs["ESAM_ESAM"]
###Output
_____no_output_____
###Markdown
This format is useful in some cases, e.g. when we want to build a chord diagram. That's why we set `convert_to_cellchat_format=True` when running `read_output()` for the CellPhoneDB launcher. So you can also get a matrix for every interaction for CellPhoneDB:
###Code
cpdb_launcher.pvalues_dfs["ESAM_ESAM"]
###Output
_____no_output_____
###Markdown
As you can see, CellPhoneDB predicts an ESAM-ESAM interaction between megakaryocytes, and CellChat predicts it for the RPS expressing cell cluster
###Code
cellchat_launcher.counts_df
cellchat_launcher.visualise_interactions()
cellchat_launcher.dotplot_counts()
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from math import sin, pi
import warnings
from qbstyles import mpl_style
###Output
_____no_output_____
###Markdown
Choose between interactive and static plots:
###Code
# interactive plots:
# %matplotlib notebook
# static plots:
%matplotlib inline
###Output
_____no_output_____
###Markdown
Test plot definitions:
###Code
# LINE PLOT
def line_plot(ax):
rng = np.random.RandomState(4)
x = np.linspace(0, 10, 500)
y = np.cumsum(rng.randn(500, 4), 0)
ax.set_title('Line Graph')
ax.set_xlabel('— Time')
ax.set_ylabel('— Random values')
ax.legend(['Bitcoin', 'Ethereum', 'Dollar', 'Oil'])
ax.set_xlim([0, 10])
ax.set_ylim([-20, 60])
ax.plot(x, y)
# SCATTER PLOT
def scatter_plot(ax):
rng = np.random.RandomState(4)
x = np.linspace(0.6, pi-0.6, 100)
y = [sin(x) + rng.rand() - 0.5 for x in x]
t = np.linspace(-1, pi+0.2, 300)
z = [0.5*sin(x*5) + rng.rand() - 0.5 for x in t]
ax.set_title('Scatter Plot')
ax.set_xlabel('— space')
ax.set_ylabel('— altitude')
ax.legend(['sun', 'mountain'])
plt.xlim([-0.2, pi+0.2])
plt.ylim([-1.6, 1.6])
ax.scatter(x, y, s=100, alpha=.6, color='C2')
ax.scatter(t, z, s=100, alpha=.6, marker='^', color='C1')
# DISTRIBUTIONS
def distribution_plot(ax):
np.random.seed(2)
data = np.random.multivariate_normal((0, 0), [(5, 2), (2, 2)], size=2000)
data[:, 1] = np.add(data[:, 1], 2)
ax.set_title('Distribution Plot')
ax.set_xlabel('— Density')
ax.set_ylabel('— Random values')
ax.set_xlim([-10, 10])
ax.set_ylim([0, 0.31])
    # suppress seaborn FutureWarnings and UserWarnings
warnings.simplefilter(action='ignore', category=(FutureWarning, UserWarning))
for col in range(2):
sns.distplot(data[:, col], ax=ax, color='C' + str(col+3))
# POLAR PLOT
def polar_plot(ax):
r = np.arange(0, 3.0, 0.01)
theta = 2 * pi * r
ax.plot(theta, r)
ax.plot(0.5 * theta, r, ls='--')
ax.set_title("Polar Axis Plot")
###Output
_____no_output_____
###Markdown
Plot all in a subplot
###Code
def plot():
fig, axes = plt.subplots(2, 2, figsize=(15, 10))
line_plot(axes[0, 0])
scatter_plot(axes[0, 1])
distribution_plot(axes[1, 0])
ax = plt.subplot(2, 2, 4, projection='polar')
polar_plot(ax)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Use QB's style:
###Code
mpl_style()
plot()
mpl_style(dark=False)
plot()
###Output
_____no_output_____
###Markdown
PDBe Aggregated API - A step-by-step example
This Jupyter Notebook provides step-by-step instructions for querying the PDBe Aggregated API and retrieving information on predicted binding sites, macromolecular interaction interfaces and observed ligands for the protein Thrombin, using the Python 3 programming language.
Step 1 - Import necessary dependencies
In order to query the API, import the `requests` library.
###Code
import requests
###Output
_____no_output_____
###Markdown
Step 2 - Choose a UniProt accession and the necessary API endpoints
All the API endpoints have keys that the users must provide. For this example, we will use API endpoints that are keyed on a UniProt accession. The UniProt accession of Thrombin is "P00734".
For this example, we are interested in functional annotations of Thrombin which are provided to PDBe-KB [1] by consortium partner resources such as P2rank [2] and canSAR [3]. We are also interested in all the macromolecular interaction interface residues of Thrombin, as calculated by the PDBe PISA service [4], and all the observed ligand binding sites, as calculated by Arpeggio [5]. In order to retrieve this (and any other) information, users should study the documentation page of the PDBe Aggregated API.
We set the variables below for the UniProt accession of Thrombin, and the API endpoint URLs we will use.
###Code
ACCESSION = "P00734"
ANNOTATIONS_URL = f"https://www.ebi.ac.uk/pdbe/graph-api/uniprot/annotations/{ACCESSION}"
INTERACTIONS_URL = f"https://www.ebi.ac.uk/pdbe/graph-api/uniprot/interface_residues/{ACCESSION}"
LIGANDS_URL = f"https://www.ebi.ac.uk/pdbe/graph-api/uniprot/ligand_sites/{ACCESSION}"
###Output
_____no_output_____
###Markdown
Step 3 - Define helper functions
We will define a few helper functions to avoid code repetition when retrieving data from the API.
###Code
def get_data(accession, url):
"""
Helper function to get the data from an API endpoint using an accession as key
:param accession: String; a UniProt accession
:param url: String; a URL to an API endpoint
:return: Response object or None
"""
try:
return requests.get(url)
    except requests.exceptions.RequestException as err:
print("There was an error while retrieving the data: %s" % err)
def parse_data(data):
"""
Helper function to parse a response object as JSON
:param data: Response object; data to be parsed
:return: JSON object or None
"""
# Check if the status code is 200 and raise error if not
if data.status_code == 200:
return data.json()
else:
raise ValueError('No data received')
###Output
_____no_output_____
###Markdown
Step 4 - Get annotations data
We will use the annotations API endpoint (defined as `ANNOTATIONS_URL`) to get the functional annotations for Thrombin (defined as `ACCESSION`).
###Code
annotations_data = parse_data(get_data(ACCESSION, ANNOTATIONS_URL))
###Output
_____no_output_____
###Markdown
We then filter the data for the predicted binding sites annotations provided by P2rank and canSAR.
###Code
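# Collect the residue indices annotated as predicted ligand-binding sites by P2rank and canSAR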
all_predicted_ligand_binding_residues = list()
for provider_data in annotations_data[ACCESSION]["data"]:
if provider_data["accession"] in ["p2rank", "cansar"]:
residues = [x["startIndex"] for x in provider_data["residues"]]
all_predicted_ligand_binding_residues.extend(residues)
###Output
_____no_output_____
###Markdown
These are the residues which are annotated as predicted ligand binding sites:
###Code
print(all_predicted_ligand_binding_residues)
###Output
[136, 237, 246, 251, 265, 273, 324, 329, 330, 331, 332, 333, 334, 336, 372, 383, 386, 388, 389, 390, 391, 396, 400, 406, 407, 410, 413, 414, 415, 417, 434, 436, 459, 493, 506, 507, 510, 511, 530, 541, 549, 565, 566, 568, 572, 574, 585, 589, 590, 591, 596, 597, 605, 613, 615, 617]
###Markdown
Step 5 - Get interaction interfaces data
We will use the interaction interfaces API endpoint (defined as `INTERACTIONS_URL`) to get all the macromolecular interaction interface residues of Thrombin (defined as `ACCESSION`).
###Code
interactions_data = parse_data(get_data(ACCESSION, INTERACTIONS_URL))
###Output
_____no_output_____
###Markdown
We then list the macromolecular interaction partners of Thrombin:
###Code
interaction_partner_names = list()
for item in interactions_data[ACCESSION]["data"]:
interaction_partner_names.append(item["name"])
print(interaction_partner_names)
###Output
['Prothrombin', 'Hirudin variant-1', 'Proteinase-activated receptor 1', 'Other', 'DNA', 'Tsetse thrombin inhibitor', 'Hirudin variant-2 (Fragment)', 'Hirudin-2', 'Salivary anti-thrombin peptide anophelin', 'Thrombomodulin', 'Heparin cofactor 2', 'Thrombin inhibitor madanin 1', 'Antithrombin-III', 'Staphylocoagulase (Fragment)', 'Thrombininhibitor', 'AGAP008004-PA', 'Pancreatic trypsin inhibitor', 'Uncharacterized protein avahiru', 'RNA', 'Fibrinogen alpha chain', 'Glia-derived nexin', 'Fibrinogen gamma chain', 'Hirudin-2B', 'Variegin', 'Proteinase-activated receptor 4', 'Plasma serine protease inhibitor', 'Hirudin-3A', 'Vitamin K-dependent protein C', 'Platelet glycoprotein Ib alpha chain', 'Hirullin-P18', 'BIVALIRUDIN C-terminus fragment', 'Coagulation factor V', "Hirudin-2'", "Hirudin-3B'", 'D-phenylalanyl-L-prolyl-N~5~-[amino(iminio)methyl]-D-ornithyl-L-cysteinamide', 'Kininogen-1', 'D-phenylalanyl-L-prolyl-N~5~-[amino(iminio)methyl]-D-ornithyl-D-threoninamide', 'Hirudin-PA', "Hirudin-3A'", 'Hirudin-3', 'D-phenylalanyl-L-prolyl-N~5~-[amino(iminio)methyl]-D-ornithyl-L-isoleucinamide', 'BIVALIRUDIN N-terminus fragment', 'Hirudin-2A', 'AERUGINOSIN 298-A']
###Markdown
We can see it has many interaction partners, and several of them are variants of Hirudin, a natural inhibitor of Thrombin. We will use `Hirudin variant-1` for the next steps of this example.
Step 6 - Compare the interaction interface residues between Thrombin and Hirudin (variant-1)
We compare the predicted ligand binding site residues with the interaction interface residues of Thrombin that interact with Hirudin (variant 1).
###Code
interface_residues_with_hirudin = list()
for item in interactions_data[ACCESSION]["data"]:
if item["name"] == "Hirudin variant-1":
interacting_residues = [x["startIndex"] for x in item["residues"] if x["startIndex"] in all_predicted_ligand_binding_residues]
interface_residues_with_hirudin.extend(interacting_residues)
###Output
_____no_output_____
###Markdown
We can see that there are 9 residues found in the region between GLU388 and GLY591 which both interact with Hirudin and are predicted to bind small molecules:
###Code
print(interface_residues_with_hirudin)
###Output
[388, 406, 434, 541, 565, 566, 568, 589, 591]
###Markdown
Summary of the results so far
Using the PDBe Aggregated API we could retrieve all the residues of Thrombin which are predicted to bind small molecules. We then retrieved the data on macromolecular interactions between Thrombin and other proteins/peptides. We could see that Thrombin interacts with several variants of Hirudin. Next, we compared the predicted ligand binding sites with the interaction interface residues and saw that there is a region on the sequence of Thrombin where several potential target residues can be found.
Step 7 - Retrieving observed ligand binding sites
Next, we retrieve all the binding sites using the ligand sites API endpoint (defined as `LIGANDS_URL`) to get all the ligand binding residues of Thrombin (defined as `ACCESSION`).
###Code
ligands_data = parse_data(get_data(ACCESSION, LIGANDS_URL))
ligand_list = list()
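# Record each ligand at most once: stop at the first of its residues that falls in the Hirudin interface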
for ligand in ligands_data[ACCESSION]["data"]:
for residue in ligand["residues"]:
if residue["startIndex"] in interface_residues_with_hirudin:
ligand_list.append(ligand["accession"])
break
###Output
_____no_output_____
###Markdown
Finally, we compare the ligands found in the PDB with the annotations and interaction interfaces we have collated in the previous steps, and we find that there are indeed many small molecules, such as TYS, MRD and P6G, that interact with the Thrombin residues which form the macromolecular interaction interface with Hirudin (variant-1).
###Code
print("There are %i ligands observed in PDB that bind to this " % len(ligand_list))
print("These are the Chemical Componant identifiers of the ligands:")
print(ligand_list)
###Output
These are the Chemical Component identifiers of the ligands:
['8K2', 'FQI', 'TYS', 'DPN', '71F', 'BAM', 'WCE', 'HBD', 'OJK', 'DKQ', '02N', 'Y4L', 'SZ4', 'C2A', 'ABN', 'APA', 'BEN', 'ESI', 'PRL', 'BT3', 'BT2', 'BZT', 'C2D', 'BAI', 'BAH', 'BAB', '897', '896', '501', '4ND', 'R11', 'DKK', 'I26', 'I25', 'I50', 'C1M', '382', 'L03', '121', 'BMZ', '130', '696', '132', '166', '167', 'GR1', 'L02', 'CR9', 'D6Y', 'NLI', '120', '81A', 'C02', 'C7M', 'C5M', 'C4M', 'C3M', 'UIR', 'UIB', 'F25', 'ESH', '348', 'UIP', 'FSN', 'SHY', 'R56', '0IT', 'L86', 'T76', '1ZV', 'MRQ', 'ODB', 'G44', 'QQW', 'QQE', 'N6H', 'QQK', 'QQT', 'QQ5', 'QQN', 'BT1', 'BPP', 'T42', 'MUQ', '0NW', 'GR4', 'ALZ', 'SJR', 'C24', '165', '2OJ', '2FN', '00R', 'IH3', 'MUZ', 'GAH', 'T19', 'PHW', 'PHV', '34P', 'P05', 'GOZ', 'M6Q', 'LXW', 'MJK', '3SP', 'O5Z', 'J5K', '99P', 'P97', 'CDO', 'B03', 'B01', 'MM9', 'M6S', 'M4Z', 'MVF', 'MEL', 'M67', '45S', 'S49', 'S00', 'S04', '46U', '45U', 'KDQ', 'M32', 'EU5', 'BJA', 'S28', 'M41', 'M34', 'WX5', 'TIF', 'S29', 'M31', '2TS', 'S54', '13U', '12U', '11U', '10U', '53U', '51U', 'K73', '37U', '22U', '27U', 'N6L', '32U', 'MD8', '64U', '6OV', '6TH', '50U', '91U', '71U', '49U', '177', '176', '23U', '33U', '062', '16U', '24U', '19U', '26U', '21U', '31U', '29U', '02P', 'B04', '163', '10P', '98P', '162', '06P', '6XS', '7R9', 'QPW', '9MU', 'BM9', '00N', '0ZE', '00Q', 'MJH', 'MDL', 'PPX', 'SN3', 'BLI', 'J3I', '0BM', '110', 'MIN', 'IH1', '00P', 'IH2', 'RA4', '14A', 'CDA', '170', 'CDD', 'T15', 'CDB', 'M18', 'UNB', 'L17', '1TS', 'S33', '9MQ', 'N5N', '9MX', '4CP', '2CE', 'CCR', 'MIU', '0ZI', 'MIT', '15U', 'MID', 'BM2', 'RA8', 'I11', '1Z0', '6V2', 'UET', '3ZD', 'D6J', '894', '701', '895', '5CB', '0IV', '0KV', 'OSC', '0E7', 'AZL', 'T16', 'DFK', 'DI2', '0ZJ', 'T29', 'DP7', 'DI5', 'DI4', '0G7', 'DI3', '0G6', 'QWE', '00L', '00K', 'NA9', 'N12', 'IGN', '44U', 'MKY', 'T87', 'MRZ', 'PRO', '157', 'F05', 'C1D', 'GR3', 'GOL', 'P6G', 'TFA', 'DFP', 'RB', 'IN2', 'ACY', '0GJ', 'IOD', 'ZN', 'CL', 'EDO', 'MPD', 'PO4', 'DMS', 'SO4']
###Markdown
Overdamped Langevin equation
$$\gamma \frac{dx(t)}{dt} = -\frac{dU(x)}{dx} + F_{st}^{\alpha}(t)$$
The alpha-stable distribution may be reduced to the Normal distribution by setting the $\alpha$ parameter equal to __2__. Here
$\gamma$ -- friction coefficient
$U(x)$ -- stochastic potential with a given autocorrelation function
$F_{st}^{\alpha}(t)$ -- stochastic force drawn from an alpha-stable distribution
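For orientation, below is a minimal Euler-Maruyama sketch of this equation for the Gaussian case ($\alpha = 2$), using an assumed harmonic potential $U(x) = x^2/2$ and unit noise intensity. It only illustrates the discretization; the `sls` solver used below additionally handles the correlated random potential and general $\alpha$-stable noise.
```python
# Minimal sketch only: assumed harmonic potential and Gaussian (alpha = 2) noise,
# not the sls solver used in this notebook.
import numpy as np

def euler_maruyama(x0=0.0, gamma=1.0, dt=2e-4, n_steps=10000, noise_scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    dU = lambda x_: x_  # derivative of the assumed potential U(x) = x^2 / 2
    for n in range(n_steps):
        drift = -dU(x[n]) / gamma                                  # deterministic force term
        kick = noise_scale * np.sqrt(dt) * rng.standard_normal()   # Gaussian increment
        x[n + 1] = x[n] + drift * dt + kick
    return x

trajectory = euler_maruyama()
```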
###Code
# Parameters for the Langevin equation (LE) solver
dt = 2e-4
dx = 2e-2
t_steps = 10000
t_sol = np.arange(0, (t_steps + 1) * dt, dt)
x_lim = dx*(2**18)
n_attempts = 10000
alpha = 2.0
U0= 1.0
K_alpha = 1.0
# LE solution
x_sol = sls.solve_le_corr_alpha_euler_periodic(dt, dx, t_steps, x_lim, sls.acf_polynomial,
n_attempts=n_attempts, alpha=alpha, U0=U0, K_alpha=K_alpha)
%matplotlib inline
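# Log-log plot of the ensemble-averaged MSD; a fitted slope close to 1 indicates normal diffusion,
# while other values indicate anomalous diffusion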
plt.plot(np.log(t_sol), np.log(sls.calculate_eamsd(x_sol)));
plt.ylabel('Mean square displacement')
plt.xlabel('$t, time$')
plt.text(-5, -1, str('slope = ') + str(np.polyfit(np.log(t_sol[1:]), np.log(sls.calculate_eamsd(x_sol)[1:]), 1)[0]));
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: RuntimeWarning: divide by zero encountered in log
###Markdown
First passage time calculations
###Code
t_fpt = sls.calculate_fpt(t_sol, x_sol, dx_barrier=1.)
fpt_bins = plt.hist(t_fpt[t_fpt < 1000], bins = 100)
fpt_probs = fpt_bins[0] / t_fpt.size
fpt_bins_centers = (fpt_bins[1][:-1] + fpt_bins[1][1:]) / 2
plt.xlabel('First passage time, s')
plt.ylabel('Counts')
###Output
_____no_output_____
###Markdown
Filtering SAM/BAM files by percent identity or percent of matched sequence
Tools to filter alignments in SAM/BAM files by percent identity or percent of matched sequence. Percent identity is computed as:
$$PI = 100 \frac{N_m}{N_m + N_i}$$
where $N_m$ is the number of matches and $N_i$ is the number of mismatches. Percent of matched sequence is computed as:
$$PM = 100 \frac{N_m}{L}$$
where $L$ corresponds to the query sequence length.
NOTES
BAM/SAM files must contain [MD tags](https://github.com/vsbuffalo/devnotes/wiki/The-MD-Tag-in-BAM-Files) to be able to filter by percent identity. Aligners such as [BWA](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2705234/) add MD tags to each queried sequence in a BAM file. MD tags can also be generated with [samtools](http://www.htslib.org/doc/samtools-calmd.html).
Dependencies
1. [Samtools](http://www.htslib.org/)
2. [Pysam](https://pysam.readthedocs.io/en/latest/api.html)
Installation
```pip3 install filtersam```
TODO
1. Make it command line callable
2. Perhaps a good idea (if possible) to add a specific tag to the BAM/SAM file containing the computed percent identity
3. Include several definitions of percent identity and/or let the user define one
Usage
This package contains two main functions, ```filterSAMbyIdentity``` and ```filterSAMbyPercentMatched```, to filter BAM files by percent identity or percent of matched sequence, respectively. To exemplify its usage, let's filter a BAM file by percent identity and percent of matched sequence.
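As a quick, hypothetical sanity check of the two definitions above (the counts below are made up for illustration only):
```python
# Hypothetical counts for a single alignment, purely to illustrate the formulas
n_matches = 95       # N_m
n_mismatches = 5     # N_i
query_length = 120   # L

percent_identity = 100 * n_matches / (n_matches + n_mismatches)  # 95.0
percent_matched = 100 * n_matches / query_length                 # ~79.2
print(percent_identity, percent_matched)
```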
###Code
from filtersam.filtersam import filterSAMbyIdentity, filterSAMbyPercentMatched
# Filter alignments with percent identity greater or equal to 95%
filterSAMbyIdentity(input_path='ERS491274.bam',
output_path='ERS491274_PI95.bam',
identity_cutoff=95)
# Filter alignments with percent of matched sequence greater or equal to 50%
filterSAMbyPercentMatched(input_path='ERS491274.bam',
output_path='ERS491274_PM50.bam',
matched_cutoff=50)
###Output
_____no_output_____
###Markdown
Parallelizing filtersam
Filtering large BAM files can take a while. However, ```filtersam``` can be parallelized with an additional python package: [parallelbam](https://pypi.org/project/parallelbam/). Effectively, ```parallelbam``` splits a large BAM file into chunks and calls ```filtersam``` in dedicated processes for each one of them. Let's try this out: we will parallelize the above operation in 8 processes.
###Code
from parallelbam.parallelbam import parallelizeBAMoperation, getNumberOfReads
# Filter alignments with percent identity greater or equal to 95% in parallel
parallelizeBAMoperation('ERS491274.bam',
callback=filterSAMbyIdentity,
callback_additional_args=[95],
n_processes=8,
output_dir='ERS491274_PI95_parallel.bam')
###Output
_____no_output_____
###Markdown
We can further check if the filtered bam files produced in a single process and in parallel contain the same number of segments with the function ```getNumberOfReads``` of parallelbam.
###Code
# Number of segments in the original bam
getNumberOfReads('ERS491274.bam')
# Number of segments in the single-process PI-filtered bam file
getNumberOfReads('ERS491274_PI95.bam')
# Number of segments in the parallelized PI-filtered bam file
getNumberOfReads('ERS491274_PI95_parallel.bam')
###Output
_____no_output_____
###Markdown
Notebook A: Generation of omics data for the wild type (WT) strain
This notebook uses the OMG library to create time series of synthetic "experimental" data (transcriptomics, proteomics, metabolomics, fluxomics, cell density, external metabolites) that will be used to demonstrate the use of ICE and EDD. These data will also be the base for creating similar data for bioengineered strains.
Tested using the **biodesign_3.7** kernel on jprime.lbl.gov (see the github repository for kernel details).
Inputs and outputs
Required file to run this notebook:
- A modified E. coli model with the isoprenol pathway added to it (`iJO1366_MVA.json` file in the `../data/models` directory)
Files generated by running this notebook for import into EDD:
- `EDD_experiment_description_file_WT.csv`
- `EDD_OD_WT.csv`
- `EDD_external_metabolites_WT.csv`
- `EDD_transcriptomics_WT.csv`
- `EDD_proteomics_WTSM.csv`
- `EDD_metabolomics_WTSM.csv`
- `EDD_fluxomics_WT.csv`
The files are stored in the user-defined directory.
Setup
Clone the git repository with the `OMG` library: `git clone https://github.com/JBEI/OMG.git` or pull the latest version.
Importing needed libraries:
###Code
import sys
#sys.path.insert(1, '../../OMG')
#sys.path.append('../')
import omg
from plot_multiomics import *
import cobra
###Output
_____no_output_____
###Markdown
User parameters
###Code
user_params = {
'host': 'ecoli', # ecoli or ropacus supported
'modelfile': './sample_files/iJO1366_MVA.json', # GSM host model file location
'cerevisiae_modelfile': './sample_files/iMM904.json', # GSM pathway donor model file location
'timestart': 0.0, # Start and end for time in time series
'timestop': 8.0,
'numtimepoints': 9, # Number of time points
'mapping_file': './sample_files/inchikey_to_cid.txt', # Maps of metabolite inchikey to pubchem compound id (cid)
'output_file_path': './data/output/', # Folder for output files
'edd_omics_file_path': './data/output/edd/', # Folder for EDD output files
'numreactions': 8, # Number of total reactions to be bioengineered
'ext_metabolites': { # Initial concentrations (in mMol) of external metabolites
'glc__D_e': 22.203,
'nh4_e': 18.695,
'pi_e': 69.454,
'so4_e': 2.0,
'mg2_e': 2.0,
'k_e': 21.883,
'na1_e': 103.7,
'cl_e': 27.25,
'isoprenol_e': 0.0,
'ac_e': 0.0,
'for_e': 0.0,
'lac__D_e': 0.0,
'etoh_e': 0.0
},
'initial_OD': 0.01,
'BIOMASS_REACTION_ID': 'BIOMASS_Ec_iJO1366_core_53p95M' # Biomass reaction in host GSM
}
###Output
_____no_output_____
###Markdown
Using the OMG library to create synthetic multiomics data 1) Getting and preparing the metabolic model First we obtain the metabolic model:
###Code
file_name = user_params['modelfile']
model = cobra.io.load_json_model(file_name)
###Output
_____no_output_____
###Markdown
We now add minimum flux constraints for production of isoprenol and formate, and we limit oxygen intake:
###Code
iso = 'EX_isoprenol_e'
iso_cons = model.problem.Constraint(model.reactions.EX_isoprenol_e.flux_expression,lb = 0.20)
model.add_cons_vars(iso_cons)
for_cons = model.problem.Constraint(model.reactions.EX_for_e.flux_expression,lb = 0.10)
model.add_cons_vars(for_cons)
o2_cons = model.problem.Constraint(model.reactions.EX_o2_e.flux_expression,lb = -8.0)
model.add_cons_vars(o2_cons)
###Output
_____no_output_____
###Markdown
And then we constrain several central carbon metabolism fluxes to more realistic upper and lower bounds:
###Code
CC_rxn_names = ['ACCOAC','MDH','PTAr','CS','ACACT1r','PPC','PPCK','PFL']
for reaction in CC_rxn_names:
reaction_constraint = model.problem.Constraint(model.reactions.get_by_id(reaction).flux_expression,lb = -1.0,ub = 1.0)
model.add_cons_vars(reaction_constraint)
###Output
_____no_output_____
###Markdown
2) Obtaining fluxomics time series
First create the time grid for the simulation:
###Code
t0 = user_params['timestart']
tf = user_params['timestop']
points = user_params['numtimepoints']
tspan, delt = np.linspace(t0, tf, points, dtype='float64', retstep=True)
grid = (tspan, delt)
###Output
_____no_output_____
###Markdown
We then use this model to obtain the time series for fluxes, OD and external metabolites, by solving the model for each time point:
###Code
solution_TS, model_TS, cell, Emets, Erxn2Emet = \
omg.get_flux_time_series(model, user_params['ext_metabolites'], grid, user_params)
###Output
0.0 optimal 0.5363612610171448
1.0 optimal 0.5363612610171448
2.0 optimal 0.5363612610171448
3.0 optimal 0.5363612610171448
4.0 optimal 0.5363612610171448
5.0 optimal 0.5363612610171448
6.0 optimal 0.5363612610171448
7.0 optimal 0.5363612610171448
8.0 optimal 0.5363612610171448
###Markdown
These are the external metabolite concentrations as a function of time:
###Code
Emets
plot_DO_extmets(cell, Emets[['glc__D_e','isoprenol_e','ac_e','for_e','lac__D_e','etoh_e']])
###Output
_____no_output_____
###Markdown
3) Use fluxomics data to obtain the rest of the multiomics data
We now obtain the multiomics data for each time point:
###Code
proteomics_timeseries = {}
transcriptomics_timeseries = {}
metabolomics_timeseries = {}
metabolomics_oldids_timeseries = {}
fluxomics_timeseries = {}
# By setting the old_ids flag to True, we get two time series for metabolomics data: one with Pubchem CIDs and one with BIGG ids.
# Setting the old_ids flag to False returns only three dictionaries: proteomics, transcriptomics, metabolomics
for t in tspan:
fluxomics_timeseries[t] = solution_TS[t].fluxes.to_dict()
(proteomics_timeseries[t], transcriptomics_timeseries[t],
metabolomics_timeseries[t], metabolomics_oldids_timeseries[t]) = omg.get_multiomics(model,
solution_TS[t],
user_params['mapping_file'],
old_ids=True)
###Output
_____no_output_____
###Markdown
4) Write the multiomics, cell concentration and external metabolites data into output files
EDD data output
First write the experiment description files needed for input (the `label` argument appends a suffix to the end of the file name):
###Code
omg.write_experiment_description_file(user_params['edd_omics_file_path'], line_name='WT', label='_WT')
###Output
_____no_output_____
###Markdown
Write OD data:
###Code
omg.write_OD_data(cell, user_params['edd_omics_file_path'], line_name='WT', label='_WT')
###Output
_____no_output_____
###Markdown
Write external metabolites:
###Code
omg.write_external_metabolite(Emets, user_params['edd_omics_file_path'], line_name='WT', label='_WT')
###Output
_____no_output_____
###Markdown
Write multiomics data:
###Code
omg.write_omics_files(fluxomics_timeseries, 'fluxomics', user_params, line_name='WT', label='_WT')
omg.write_omics_files(proteomics_timeseries, 'proteomics', user_params, line_name='WT', label='_WT')
omg.write_omics_files(transcriptomics_timeseries, 'transcriptomics', user_params, line_name='WT', label='_WT')
omg.write_omics_files(metabolomics_timeseries, 'metabolomics', user_params, line_name='WT', label='_WT')
###Output
_____no_output_____
###Markdown
We will also write a small version of the multiomics data with a subset of proteins, transcripts and metabolites:
###Code
genesSM = ['b0180','b2708','b3197','b1094','b2224','b3256','b2316','b3255','b0185','b1101']
proteinsSM = ['P17115','P76461','P0ABD5','P00893','P15639','P0AC44','P0A6I6','P0A9M8']
metabolitesSM = ['CID:1549101','CID:175','CID:164533','CID:15938965','CID:21604863','CID:15939608','CID:27284','CID:1038','CID:16741146','CID:1778309']
transcriptomics_timeseriesSM ={}
proteomics_timeseriesSM ={}
metabolomics_timeseriesSM ={}
for t in tspan:
transcriptomics_timeseriesSM[t] = {gene: transcriptomics_timeseries[t][gene] for gene in genesSM}
proteomics_timeseriesSM[t] = {protein: proteomics_timeseries[t][protein] for protein in proteinsSM}
metabolomics_timeseriesSM[t] = {metab: metabolomics_timeseries[t][metab] for metab in metabolitesSM}
omg.write_omics_files(proteomics_timeseriesSM, 'proteomics' , user_params, line_name='WT', label='_WTSM')
omg.write_omics_files(transcriptomics_timeseriesSM,'transcriptomics', user_params, line_name='WT', label='_WTSM')
omg.write_omics_files(metabolomics_timeseriesSM, 'metabolomics' , user_params, line_name='WT', label='_WTSM')
###Output
_____no_output_____
###Markdown
Despike
This notebook provides an example of how to use the [`despike`](https://pypi.org/project/despike/) package to remove spikes in 2D images.
Description
The spikes in 2D images correspond to high-energy pixels generated by cosmic rays, sensor noise or dead pixels. They tend to have values very different from the rest of their neighbors. To find them, we use a moving box (5×5 pixels by default) on the image and compare the mean/median of this sub-image to the central pixel. If the value is `n` (3 by default) times larger than the observed standard deviation, we use the median value of the surrounding pixels (8 pixels by default) to replace the spike.
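To make the idea concrete, here is a minimal NumPy sketch of that moving-window test. It is only an illustration of the approach described above, not the package's actual implementation, and it deliberately ignores the image border.
```python
import numpy as np

def find_spikes_naive(img, box=5, n_sigma=3):
    """Flag pixels that deviate strongly from their local neighbourhood (sketch only)."""
    half = box // 2
    spikes = np.zeros(img.shape, dtype=bool)
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            window = img[i - half:i + half + 1, j - half:j + half + 1].ravel()
            neighbours = np.delete(window, window.size // 2)  # drop the central pixel
            if abs(img[i, j] - neighbours.mean()) > n_sigma * neighbours.std():
                spikes[i, j] = True
    # A cleaning step would then replace each flagged pixel with the median
    # of its 8 direct neighbours, which is what the package's clean step does.
    return spikes
```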
###Code
import numpy as np
import matplotlib.pyplot as plt
import despike
# Load some data
img = np.loadtxt('tests/data/img.dat')
plt.figure(figsize=(7,7))
plt.imshow(img)
plt.title('Original image')
plt.show()
# Search the location of spikes in the image
spikes = despike.spikes(img)
plt.figure(figsize=(7,7))
plt.imshow(spikes, cmap=plt.get_cmap('gray'))
plt.title('Spikes locations')
plt.show()
# Clean the image from spikes
clean_img = despike.clean(img)
f, (ax0, ax1) = plt.subplots(1, 2, sharey=True, figsize=(14,7))
ax0.imshow(img)
ax1.imshow(clean_img)
ax0.set_title('Original image')
ax1.set_title('Despiked image')
plt.show()
###Output
_____no_output_____
###Markdown
Other filtering
This module also provides mean and median filters on the image to globally remove the outlier pixels:
###Code
# Mean filtering
from despike.mean import mask, mean
mean_mask = mask(img)
mean_img = mean(img)
f, (ax0, ax1, ax2) = plt.subplots(1, 3, sharey=True, figsize=(21,7))
ax0.imshow(img)
ax1.imshow(mean_mask, cmap=plt.get_cmap('gray'))
ax2.imshow(mean_img)
ax0.set_title('Original image')
ax1.set_title('Mean mask')
ax2.set_title('Mean image')
plt.show()
# Median filtering
from despike.median import mask, median
median_mask = mask(img)
median_img = median(img)
f, (ax0, ax1, ax2) = plt.subplots(1, 3, sharey=True, figsize=(21,7))
ax0.imshow(img)
ax1.imshow(median_mask, cmap=plt.get_cmap('gray'))
ax2.imshow(median_img)
ax0.set_title('Original image')
ax1.set_title('Median mask')
ax2.set_title('Median image')
plt.show()
###Output
_____no_output_____
###Markdown
NRE-based Camera Pose Estimation
In this notebook, we show a simple running example of how to perform NRE-based camera pose estimation given a pair of query-database images, along with the visible 3D points. For demonstration purposes, we use a pre-assembled SfM validation sample from Megadepth.
1. Setup
###Code
from lib.estimator import NREEstimator
estimator = NREEstimator("weights/coarse.pth", "weights/fine.pth", "config.yml")
###Output
_____no_output_____
###Markdown
2. Load sample data and run
###Code
import numpy as np
sample = np.load("assets/sample/sample.npz")
pose = estimator.localize(
source_image=sample["source_image"],
target_image=sample["target_image"],
p3D=sample["p3D"],
source_P=sample["source_P"],
source_K=sample["source_K"],
target_K=sample["target_K"],
source_dist=sample["source_dist"],
target_dist=sample["target_dist"],
)
###Output
_____no_output_____
###Markdown
3. Visualize prediction
###Code
from lib.tools.viz import display_localization
display_localization(sample, pose, sample["target_P"]).show("svg", width=800, height=550)
# For an interactive display use:
# display_localization(sample, pose, sample["target_P"]).show()
###Output
_____no_output_____
###Markdown
Running MTCNN
###Code
from mtcnn import MTCNN
import cv2
image_path = 'ivan.jpg'
img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
detector = MTCNN()
detections = detector.detect_faces(img)
detections
###Output
_____no_output_____
###Markdown
Filtering detections with confidence greater than the confidence threshold and plotting detections
###Code
import matplotlib.pyplot as plt
img_with_dets = img.copy()
min_conf = 0.9
for det in detections:
if det['confidence'] >= min_conf:
x, y, width, height = det['box']
keypoints = det['keypoints']
cv2.rectangle(img_with_dets, (x,y), (x+width,y+height), (0,155,255), 2)
cv2.circle(img_with_dets, (keypoints['left_eye']), 2, (0,155,255), 2)
cv2.circle(img_with_dets, (keypoints['right_eye']), 2, (0,155,255), 2)
cv2.circle(img_with_dets, (keypoints['nose']), 2, (0,155,255), 2)
cv2.circle(img_with_dets, (keypoints['mouth_left']), 2, (0,155,255), 2)
cv2.circle(img_with_dets, (keypoints['mouth_right']), 2, (0,155,255), 2)
plt.figure(figsize = (10,10))
plt.imshow(img_with_dets)
plt.axis('off')
###Output
_____no_output_____
###Markdown
This notebook provides a basic example of using the `blg_strain` package to calculate the magnetoelectric susceptibility for strained bilayer graphene.
Strained Lattice
###Code
from blg_strain.lattice import StrainedLattice
sl = StrainedLattice(eps=0.01, theta=0)
sl.calculate()
###Output
_____no_output_____
###Markdown
Below is a plot of the Brillouin zone (black hexagon) and location of the K/K' points (red markers), which do not coincide with the high-symmetry points of the Brillouin zone.
###Code
fig = plt.figure()
axes = [fig.add_subplot(x) for x in (121, 222, 224)]
for ax in axes:
sl.plot_bz(ax)
ax.set_aspect(1)
w = 0.02
axes[1].set_xlim(sl.K[0] - w, sl.K[0] + w)
axes[1].set_ylim(sl.K[1] - w, sl.K[1] + w)
axes[2].set_xlim(sl.Kp[0] - w, sl.Kp[0] + w)
axes[2].set_ylim(sl.Kp[1] - w, sl.Kp[1] + w)
###Output
_____no_output_____
###Markdown
Band Structure
###Code
from blg_strain.bands import BandStructure
bs = BandStructure(sl=sl, window=0.1, Delta=0.01)
bs.calculate(Nkx=200, Nky=200)
###Output
_____no_output_____
###Markdown
Below are plots of the energy, one component of the wavefunction, Berry curvature, and orbital magnetic moment in regions of momentum space surrounding the K and K' valleys.
###Code
fig, axes = plt.subplots(2, 4, figsize=(14, 7))
pcolormesh_kwargs = dict(cmap='cividis', shading='gouraud')
contour_kwargs = dict(colors='k', linewidths=0.5, linestyles='solid')
n = 2 # Band index
m = 1 # component of wavefunction
for i, (axK, axKp, A) in enumerate(zip(axes[0,:],
axes[1,:],
[bs.E[n], bs.Psi[n,m,:,:].real, bs.Omega[n], bs.Mu[n]])):
# K
axK.pcolormesh(bs.Kxa, bs.Kya, A, **pcolormesh_kwargs)
axK.contour(bs.Kxa, bs.Kya, A, **contour_kwargs)
# K'
if i >= 2: # Omega and Mu
A = -A
axKp.pcolormesh(-bs.Kxa, -bs.Kya, A, **pcolormesh_kwargs)
axKp.contour(-bs.Kxa, -bs.Kya, A, **contour_kwargs)
for ax in axes.flatten():
ax.set_xticks([])
ax.set_yticks([])
ax.set_aspect(1)
axes[0,0].set_title('Conduction band energy')
axes[0,1].set_title(f'Component {m} of wavefunction')
axes[0,2].set_title('Berry curvature')
axes[0,3].set_title('Orbital magnetic moment')
axes[0,0].set_ylabel('$K$', rotation=0, labelpad=30, fontsize=16, va='center')
axes[1,0].set_ylabel('$K\'$', rotation=0, labelpad=30, fontsize=16, va='center')
###Output
_____no_output_____
###Markdown
Filled bands
###Code
from blg_strain.bands import FilledBands
fb = FilledBands(bs=bs, EF=0.01)
fb.calculate(Nkx=500, Nky=500)
###Output
_____no_output_____
###Markdown
Below is a plot of the $x$ component of magnetoelectric susceptibility as a function of doping (carrier density) for the band structure illustrated above.
###Code
EFs = np.linspace(0, 0.015, 100)
ns = np.empty_like(EFs)
alphas = np.empty_like(EFs)
for i, EF in enumerate(EFs):
fb = FilledBands(bs=bs, EF=EF)
fb.calculate(500, 500)
ns[i] = fb.n
alphas[i] = fb.alpha[0]
fig, ax = plt.subplots()
ax.plot(ns/1e16, alphas)
ax.set_xlabel('Carrier density ($10^{12}$ cm$^{-2}$)')
ax.set_ylabel('Magnetoelectric coefficient (a.u.)')
###Output
_____no_output_____
###Markdown
Saving and Loading
###Code
base_path = 'example'
sl.save(base_path)
bs.save()
fb.save()
sl_path = '/'.join((base_path, 'StrainedLattice_eps0.010_theta0.000_Run0'))
sl = StrainedLattice.load(sl_path + '.h5')
bs_path = '/'.join((sl_path, 'BandStructure_Nkx200_Nky200_Delta10.000'))
bs = BandStructure.load(bs_path + '.h5')
fb_path = '/'.join((bs_path, 'FilledBands_Nkx500_Nky500_EF15.000'))
fb = FilledBands.load(fb_path + '.h5')
###Output
_____no_output_____
###Markdown
Create and load "summary" file
###Code
from blg_strain.utils.saver import load
Deltas, EFs, ns, Ds, alphas = load(sl_path)
Deltas, EFs, ns, Ds, alphas
###Output
_____no_output_____
###Markdown
Example notebook
###Code
import sisld
import numpy as np
from math import sin, cos, atan2
# define some hamiltonian
# function have to depend on k, and return an N x N array
@sisld.alias
def some_hamiltonian(k, u=0.1) -> np.ndarray:
sx = np.array([
[0, 1],
[ 1, 0]
])
sy = np.array([
[0, -1j],
[1j, 0]
])
sz = np.array([
[1, 0],
[0, -1]
])
h = sin(k[0])*sx + sin(k[1])*sy + (u + cos(k[0]) + cos(k[1]))*sz
return np.kron(h, np.eye(2))
# standard z2 calculation on the xy plane
res = sisld.getz2(h=some_hamiltonian, eta=True)
print(res.z2)
# chern calculation
res = sisld.getz2(h=some_hamiltonian, grid=10, chern=True, plane="xy")
print(res.chern) # this is a bad result
res.plot(); # if you have black circles it indicates bad hamiltonian/calculation
# define a sphere
s = sisld.Sphere(grid=6)
s.plot();
# This calculates the chern invariant on the sphere
res = sisld.getz2(h=some_hamiltonian, chern=True, shape=s)
res.plot();
# half-open surface
shape = sisld.FreeShape(grid=5, shape_type="half")
shape.plot();
def cubeToCilinder(k):
k = sisld.rotate(vector=k, degree=90) # rotates the points by 90 degree
phi = atan2(k[1]-.5, k[0]-.5)
x = cos(phi)
y = sin(phi)
z = k[2]
    return sisld.resize(np.array([x, y, z]), 1/10)  # resizes the surface
# transform the shape
shape.deform(cubeToCilinder)
shape.plot();
res = sisld.getz2(some_hamiltonian, shape=shape, chern=True)
print(res.chern)
res.plot();
###Output
2.0
###Markdown
Using SignalRecognition and cwt_learner
This library provides a software implementation of a method to recognize events in signals. If you use this work in research, please cite this article:
>[G. Subramani, D. Rakita, H. Wang, J. Black, M. Zinn and M. Gleicher, "Recognizing actions during tactile manipulations through force sensing," 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, 2017, pp. 4386-4393. doi: 10.1109/IROS.2017.8206302](http://reach.wisc.edu/recognizing-actions-during-tactile-manipulations-through-force-sensing/)
Background
This software provides libraries necessary to transform signals into the __Continuous Wavelet Transform__ domain for the purpose of applying Machine Learning algorithms on the signals. __This can be used to detect events/signal shapes in time domain signals (containing multiple channels) by training on a few labeled examples.__ It can determine the type and timing of these events in the signals.
Note: The documentation and the library use signals and channels interchangeably. 'Signals' is an n-dimensional time series. A channel is one of these dimensions. For example, a force torque sensor would have 6 channels: 3 force channels and 3 torque channels. In the libraries, a signal-bundle consists of multiple signals/channels. In retrospect, it might have been better to call a signal-bundle a signal and a signal a channel. When in doubt, please print out/plot the data to see for yourself.
`cwt_learner` is the class that implements the signal recognition software. `signal_data_base` is a convenience library to store and manage many time domain signals and their corresponding labels. You don't need this to use `cwt_learner`.
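As a purely illustrative aside (not part of this library's API), the snippet below computes a generic continuous wavelet transform of a toy one-dimensional signal with the PyWavelets package, assuming it is installed. It just shows the kind of time-scale representation on which such classifiers can operate.
```python
import numpy as np
import pywt
import matplotlib.pyplot as plt

# Toy signal: a slow oscillation with a short, localised "event"
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t)
signal[500:520] += 2.0

scales = np.arange(1, 64)
coefficients, frequencies = pywt.cwt(signal, scales, 'morl')

plt.imshow(np.abs(coefficients), aspect='auto', cmap='viridis')
plt.xlabel('time sample')
plt.ylabel('scale index')
plt.title('|CWT| of a toy signal: the event shows up as a localised band')
plt.show()
```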
###Code
from cwt_learner.wavelet_feature_engineering import CWT_learner
from signal_data_base import SignalDB
import matplotlib.pyplot as plt
from plot_generator import plotResult_colorbars
###Output
_____no_output_____
###Markdown
The signals are loaded from a database.
###Code
sdb = SignalDB('JLego', path='./sample_data/')
training_data_ = sdb.get_labeleddata()
###Output
_____no_output_____
###Markdown
`training_data_` contains an array of `LabeledData`. Please see the file labelSelectedData.py for more information. Please print out `training_data_[0].labels` and `training_data_[0].signal_bundle.signals` to understand what they are. The `LabeledData` object allows associating multiple signal channels and their corresponding labels together. In the real world,
###Code
print('training_data_ is a data set')
print('Number of training data trials = ', len(training_data_))
print("Looking closely at one trial:\n")
print("Number of time domain samples of first trial ", len(training_data_[0].labels))
print("Events in first trial ", list(set(training_data_[0].labels)))
print(len(training_data_[0].signal_bundle.signals),
      " time domain floating point signals reside in training_data_[0].signal_bundle.signals as a ",
      type(training_data_[0].signal_bundle.signals))
plt.figure()
plt.subplot(2,1,1)
plt.title("A trial consisting of multiple channels in the signal and its corresponding labels over time \
(color coded).")
for channel in training_data_[0].signal_bundle.signals:
plt.plot(channel)
plt.subplot(2,1,2)
labels = training_data_[0].labels
plotResult_colorbars(labels, range(len(labels)))
plt.show()
###Output
_____no_output_____
###Markdown
Now we create a `cwt_learner` object and add training data to it. `cwt_learn.add_training_data` takes an array of signals and their corresponding labels.
###Code
cwt_learn = CWT_learner(signal_indices = [0,1,2,3])
training_data = training_data_[0:8]
testing_data = training_data_[8:10]
for ld in training_data:
    labels = [label.split(' ')[0] for label in ld.labels]
cwt_learn.add_training_data(ld.signal_bundle.signals,labels)
###Output
_____no_output_____
###Markdown
The default ML algorithm is `sklearn.neural_network.MLPClassifier`, but you can change this by providing an argument for the classifier in `CWT_learner.train(self, classifier=MLPClassifier())`.
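For example, assuming any scikit-learn-style classifier is accepted by that argument, one could train with a random forest instead:
```python
from sklearn.ensemble import RandomForestClassifier

cwt_learn.train(classifier=RandomForestClassifier(n_estimators=100))
```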
###Code
cwt_learn.train()
labels = cwt_learn.fit(testing_data[0].signal_bundle.signals)
# Plotting
plt.figure()
plt.subplot(18,1,1)
plt.title("Training Data")
for ii in range(0,8):
ax = plt.subplot(18,1,2*ii + 1)
plt.plot(training_data[ii].signal_bundle.signals[0])
ax.get_yaxis().set_visible(False)
plt.subplot(18, 1, 2*ii + 2)
plotResult_colorbars(training_data[ii].labels, range(len(training_data[ii].labels)))
plt.show()
plt.figure()
ax = plt.subplot(2, 1, 1)
plt.title("Test Example")
plt.plot(testing_data[0].signal_bundle.signals[0])
ax.get_yaxis().set_visible(False)
plt.subplot(2, 1, 2)
plotResult_colorbars(labels, range(len(labels)))
plt.show()
###Output
_____no_output_____
###Markdown
Authentication + List sketches
###Code
from timesketch_api_client.client import TimesketchApi
api_client = TimesketchApi("https://demo.timesketch.org","demo", "demo")
sketches = api_client.list_sketches()
print("Sketch id\t-\tSketch Name")
print("--------------------------------")
for sketch in sketches:
print(str(sketch.id)+"\t|\t"+ sketch.name)
###Output
Sketch id - Sketch Name
--------------------------------
130 | test1Untitled sketch
3 | The Greendale investigation
###Markdown
Scraping
###Code
!python -m sosen scrape -h
%%bash
python -m sosen scrape --all \
--graph_out zenodo_9.ttl \
--threshold 0.9 \
--format turtle \
--data_dict zenodo_9_data_dict.json \
--zenodo_cache zenodo_9_cache.json
###Output
_____no_output_____
###Markdown
The above command will query Zenodo with a blank search, extract GitHub URLs from each result, and then use SoMEF to extract metadata from those GitHub URLs. The final graph is stored in Turtle (.ttl) format in zenodo_9.ttl. Note that the above command could take multiple days to run, due to GitHub rate limiting.
Notice `--data_dict` and `--zenodo_cache`. These are two files that SoSEn uses to save data while it runs the process, and they can be used to resume the scraping at any point. `--zenodo_cache` stores the results from Zenodo once the scraping of Zenodo is complete, and `--data_dict` stores the outputs of SoMEF. Note, however, that `--zenodo_cache` is written once, while `--data_dict` is written to periodically, sort of as a checkpoint. Additionally, before making a call to SoMEF to analyze a repository, SoSEn checks if the analysis is already present in `--data_dict`. This means that `--data_dict` is also an input.
Next, we will show the command that can be used to resume the scraping, if the previous long-running process fails for some reason. Notice that the command is virtually the same, except instead of the `--all` option, we pass in the `zenodo_9_cache.json` file with the `--zenodo_in` option. This skips the Zenodo scraping step and instead uses the data already scraped. Additionally, `zenodo_9_data_dict.json` will contain the metadata that was extracted through SoMEF, and the process will continue to add to it until all records from Zenodo have been examined.
###Code
%%bash
python -m sosen scrape \
--zenodo_in zenodo_9_cache.json \
--graph_out zenodo_9.ttl \
--threshold 0.9 \
--format turtle \
--data_dict zenodo_9_data_dict.json \
###Output
_____no_output_____
###Markdown
Searching the Knowledge Graph
Currently, there are three methods for searching the Knowledge Graph via exact keyword matching. There are manual keywords from GitHub, and additional keywords that are extracted from the title and description of software objects, queried using the methods `keyword`, `title`, and `description`, respectively. After the `--method` input, everything else is interpreted as part of the search query. The first 20 matches are printed, ordered first by the number of keywords matched.
###Code
%%bash
python -m sosen search --method description adversarial machine learning
%%bash
python -m sosen search --method keyword machine learning
%%bash
python -m sosen search --method title kgtk
###Output
SoSEn Command Line Interface
['kgtk']
FOUND KEYWORDS:
keyword: https://w3id.org/okn/o/i/Keyword/kgtk, idf: 9.70503661381229
MATCHES:
1. https://w3id.org/okn/o/i/Software/usc-isi-i2/kgtk
###Markdown
Describing a Match
Once we get a match, we can inspect it using `sosen describe`.
###Code
%%bash
python -m sosen describe https://w3id.org/okn/o/i/Software/usc-isi-i2/kgtk
###Output
SoSEn Command Line Interface
DESCRIBE <https://w3id.org/okn/o/i/Software/usc-isi-i2/kgtk>
@prefix sd: <https://w3id.org/okn/o/sd#> .
@prefix sosen: <http://example.org/sosen#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<https://w3id.org/okn/o/i/Software/usc-isi-i2/kgtk> a sd:Software ;
sosen:descriptionKeywordCount 60 ;
sosen:hasDescriptionKeyword <https://w3id.org/okn/o/i/Keyword/add>,
<https://w3id.org/okn/o/i/Keyword/additional>,
<https://w3id.org/okn/o/i/Keyword/adds>,
<https://w3id.org/okn/o/i/Keyword/bug>,
<https://w3id.org/okn/o/i/Keyword/clean>,
<https://w3id.org/okn/o/i/Keyword/columns>,
<https://w3id.org/okn/o/i/Keyword/command>,
<https://w3id.org/okn/o/i/Keyword/commands>,
<https://w3id.org/okn/o/i/Keyword/custom>,
<https://w3id.org/okn/o/i/Keyword/docker>,
<https://w3id.org/okn/o/i/Keyword/expand>,
<https://w3id.org/okn/o/i/Keyword/explode>,
<https://w3id.org/okn/o/i/Keyword/export>,
<https://w3id.org/okn/o/i/Keyword/filter>,
<https://w3id.org/okn/o/i/Keyword/fixes>,
<https://w3id.org/okn/o/i/Keyword/graph>,
<https://w3id.org/okn/o/i/Keyword/installation>,
<https://w3id.org/okn/o/i/Keyword/instructions>,
<https://w3id.org/okn/o/i/Keyword/kgtk>,
<https://w3id.org/okn/o/i/Keyword/knowledge>,
<https://w3id.org/okn/o/i/Keyword/li>,
<https://w3id.org/okn/o/i/Keyword/lift>,
<https://w3id.org/okn/o/i/Keyword/new>,
<https://w3id.org/okn/o/i/Keyword/ns>,
<https://w3id.org/okn/o/i/Keyword/options>,
<https://w3id.org/okn/o/i/Keyword/prefixes>,
<https://w3id.org/okn/o/i/Keyword/refines>,
<https://w3id.org/okn/o/i/Keyword/rename>,
<https://w3id.org/okn/o/i/Keyword/stats>,
<https://w3id.org/okn/o/i/Keyword/support>,
<https://w3id.org/okn/o/i/Keyword/toolkit>,
<https://w3id.org/okn/o/i/Keyword/triples>,
<https://w3id.org/okn/o/i/Keyword/ul>,
<https://w3id.org/okn/o/i/Keyword/updates>,
<https://w3id.org/okn/o/i/Keyword/validate>,
<https://w3id.org/okn/o/i/Keyword/version>,
<https://w3id.org/okn/o/i/Keyword/wd> ;
sosen:hasKeyword <https://w3id.org/okn/o/i/Keyword/efficient>,
<https://w3id.org/okn/o/i/Keyword/etl-framework>,
<https://w3id.org/okn/o/i/Keyword/graphs>,
<https://w3id.org/okn/o/i/Keyword/knowledge-graphs>,
<https://w3id.org/okn/o/i/Keyword/rdf>,
<https://w3id.org/okn/o/i/Keyword/triples> ;
sosen:hasTitleKeyword <https://w3id.org/okn/o/i/Keyword/additional>,
<https://w3id.org/okn/o/i/Keyword/bug>,
<https://w3id.org/okn/o/i/Keyword/commands>,
<https://w3id.org/okn/o/i/Keyword/fixes>,
<https://w3id.org/okn/o/i/Keyword/i2>,
<https://w3id.org/okn/o/i/Keyword/isi>,
<https://w3id.org/okn/o/i/Keyword/kgtk>,
<https://w3id.org/okn/o/i/Keyword/usc> ;
sosen:keywordCount 6 ;
sosen:titleKeywordCount 13 ;
sd:author <https://w3id.org/okn/o/i/Person/usc-isi-i2> ;
sd:description """<p>This version of KGTK fixes:</p>
<ul>
<li>Updates installation instructions to add Docker support</li>
<li>Updates stats</li>
<li>Refines filter command</li>
<li>adds expand and explode commands</li>
<li>Refines the clean and validate command with additional options</li>
<li>Bug fixes in export to WD triples (additional support for custom ns prefixes)</li>
<li>new commands: lift, rename columns</li>
<li>...</li>
</ul>""",
"Knowledge Graph Toolkit " ;
sd:doi "10.5281/zenodo.3828068" ;
sd:downloadUrl "https://github.com/usc-isi-i2/kgtk/releases"^^xsd:anyURI ;
sd:executionInstructions """To list all the available KGTK commands, run:
```
kgtk -h
```
To see the arguments of a particular commands, run:
```
kgtk -h
```
An example command that computes instances of the subclasses of two classes:
```
kgtk instances --transitive --class Q13442814,Q12345678
```
""" ;
sd:hasInstallationInstructions """0. Our installations will be in a conda environment. If you don't have a conda installed, follow [link](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) to install it.
1. Set up your own conda environment:
```
conda create -n kgtk-env python=3.7
conda activate kgtk-env
```
**Note:** Installing Graph-tool is problematic on python 3.8 and out of a virtual environment. Thus: **the advised installation path is by using a virtual environment.**
2. Install (the dev branch at this point): `pip install kgtk`
You can test if `kgtk` is installed properly now with: `kgtk -h`.
3. Install `graph-tool`: `conda install -c conda-forge graph-tool`. If you don't use conda or run into problems, see these [instructions](https://git.skewed.de/count0/graph-tool/-/wikis/installation-instructions).
""",
"""```
docker pull uscisii2/kgtk
```
To run KGTK in the command line just type:
```
docker run -it uscisii2/kgtk /bin/bash
```
If you want to run KGTK in a Jupyter notebook, then you will have to type:
```
docker run -it -p 8888:8888 uscisii2/kgtk /bin/bash -c "jupyter notebook --ip='*' --port=8888 --allow-root --no-browser"
```
Note: if you want to load data from your local machine, you will need to [mount a volume](https://docs.docker.com/storage/volumes/).
More information about versions and tags is available here: https://hub.docker.com/repository/docker/uscisii2/kgtk
See additional examples in [the documentation](https://kgtk.readthedocs.io/en/latest/install/).
""" ;
sd:hasSourceCode <https://w3id.org/okn/o/i/SoftwareSource/usc-isi-i2/kgtk> ;
sd:hasVersion <https://w3id.org/okn/o/i/SoftwareVersion/usc-isi-i2/kgtk/0.1.0>,
<https://w3id.org/okn/o/i/SoftwareVersion/usc-isi-i2/kgtk/0.1.1>,
<https://w3id.org/okn/o/i/SoftwareVersion/usc-isi-i2/kgtk/v0.2.0>,
<https://w3id.org/okn/o/i/SoftwareVersion/usc-isi-i2/kgtk/v0.2.1> ;
sd:identifier "10.5281/zenodo.3828068",
"10.5281/zenodo.3891993" ;
sd:issueTracker "https://github.com/usc-isi-i2/kgtk/issues"^^xsd:anyURI ;
sd:keyword "efficient",
"etl-framework",
"graphs",
"knowledge-graphs",
"rdf",
"triples" ;
sd:license "https://api.github.com/licenses/mit"^^xsd:anyURI ;
sd:name "usc-isi-i2/kgtk",
"usc-isi-i2/kgtk: KGTK 0.2.1: Additional commands and bug fixes" ;
sd:readme "https://github.com/usc-isi-i2/kgtk/blob/master/README.md"^^xsd:anyURI ;
sd:referencePublication """@article{ilievski2020kgtk,
title={KGTK: A Toolkit for Large Knowledge Graph Manipulation and Analysis},
author={Ilievski, Filip and Garijo, Daniel and Chalupsky, Hans and Divvala, Naren Teja and Yao, Yixiang and Rogers, Craig and Li, Ronpeng and Liu, Jun and Singh, Amandeep and Schwabe, Daniel and Szekely, Pedro},
journal={arXiv preprint arXiv:2006.00088},
year={2020},
url={https://arxiv.org/abs/2006.00088}
}""",
"""```
@article{ilievski2020kgtk,
title={KGTK: A Toolkit for Large Knowledge Graph Manipulation and Analysis},
author={Ilievski, Filip and Garijo, Daniel and Chalupsky, Hans and Divvala, Naren Teja and Yao, Yixiang and Rogers, Craig and Li, Ronpeng and Liu, Jun and Singh, Amandeep and Schwabe, Daniel and Szekely, Pedro},
journal={arXiv preprint arXiv:2006.00088},
year={2020},
url={https://arxiv.org/abs/2006.00088}
}
```
""" .
###Markdown
TEMP
###Code
full_df = pd.read_csv("./data/books_read.csv", index_col=0)
full_df
full_df["year_month"] = full_df["event_finish_date"].str.slice(0, 7)
full_df
full_df.groupby("year_month").count()["title"].reset_index()
ym_lst = []
for y in range(2001, 2022):
for m in range(1, 13):
if m < 10:
ym_lst.append(f"{y}-0{m}")
else:
ym_lst.append(f"{y}-{m}")
ym = pd.DataFrame(ym_lst)
ym.columns = ["year_month"]
ym
stats = pd.merge(ym, full_df.groupby("year_month").count()["title"].reset_index(), on="year_month", how="left").fillna(0)
stats["level"] = stats["year_month"].str.slice(2, 4)
stats.to_csv("./data/test.csv")
full_df["event_start_date"] = full_df["event_start_date"].astype("datetime64")
full_df["event_finish_date"] = full_df["event_finish_date"].astype("datetime64")
full_df["date_added"] = full_df["date_added"].astype("datetime64")
full_df.to_csv("./data/books_read_clean.csv")
df = full_df.loc[full_df["event_status"].isin(["Started Reading", "Finished Reading"]) & ~full_df.duplicated()].copy()
def create_date_columns(group):
    # incomplete helper; not used below
    if group["event_status"] == "Finished Reading":
        pass
gb = df.loc[~df["event_date"].isna()].groupby(by="title")["event_status"].count()
l = gb.loc[gb > 2].index.to_list()
df.loc[df["title"].isin(l) & ~df["event_date"].isna()].sort_values(by=["title", "event_date"]).groupby(by="")
df["event_id"] = df["event_id"].fillna(df["book_id"]).astype(int)
#df.loc[df.duplicated(subset=["event_id", "event_status"], keep=False)].sort_values(by=["title", "event_date"])
gb = df.groupby(by="event_id")["event_date"].nunique().reset_index()
df.loc[df.event_id.isin(gb.loc[gb.event_date > 2]["event_id"].to_list())]
#df.pivot(index="event_id", columns="event_status", values="event_date")
gb1 = df.groupby(by="book_id").count().reset_index()
weird1 = gb1.loc[gb1.title == 3]["book_id"].to_list()
weird2 = gb1.loc[gb1.title == 4]["book_id"].to_list()
weird3 = gb1.loc[gb1.title > 4]["book_id"].to_list()
df.loc[df.book_id.isin(weird2)][0:50]
small_df = df.sort_values(by=["book_id", "event_date"]).loc[~df.duplicated(subset=["book_id", "event_status"], keep="last")]
small_df.pivot(index="book_id", columns="event_status", values="event_date")
test = "/work/editions/1121748-the-amulet-of-samarkand"
import re
re.search(r"([0-9]+)", test).group(0)
df.groupby("language").agg({"book_id": "nunique"})
gb = df.groupby(["title"]).agg({"book_id": "nunique"})
repeated = gb.loc[gb["book_id"] > 1].index.to_list()
df.loc[(df["title"].isin(repeated[0:1])) & (df["status"] == "Shelved as")].sort_values(by=["book_id", "date"])
from bs4 import BeautifulSoup
response = session.get("https://www.goodreads.com/book/show/36111562")
soup = BeautifulSoup(response.text, "html.parser")
soup.find("div", attrs={"itemprop": "inLanguage"}).get_text().strip()
import requests
import re
from bs4 import BeautifulSoup
soup = BeautifulSoup(session.get("https://www.goodreads.com/review/list/6897050?shelf=read").text, "html.parser")
rows = soup.find_all("tr", id=re.compile("^review_"))
def extract_fields(row, iteration):
book = {}
# Title
book["title"] = row.select("td.field.title")[0].div.a["title"]
# Author
book["author"] = row.select("td.field.author")[0].div.get_text(strip=True).replace("*","")
# Pages
num_pages_text = row.select("td.field.num_pages")[0].get_text(strip=True)
book["num_pages"] = re.search(r"([0-9]+)", num_pages_text).group(0)
# Avg. Rating
book["avg_rating"] = row.select("td.field.avg_rating")[0].div.get_text(strip=True)
# Number Ratings
book["num_ratings"] = row.select("td.field.num_ratings")[0].div.get_text(strip=True).replace(",", "")
# My Rating
book["rating"] = int(row.select("td.field.rating")[0].div.div["data-rating"])
# Read Count
book["read_count"] = int(row.select("td.field.read_count")[0].div.get_text(strip=True))
# Started Date
start_date = row.select("td.field.date_started")[0].div.get_text(strip=True).split("[edit]")[iteration]
book["event_start_date"] = None if start_date == "not set" else start_date
# Finished Date
finish_date = row.select("td.field.date_read")[0].div.get_text(strip=True).split("[edit]")[iteration]
book["event_finish_date"] = None if finish_date == "not set" else finish_date
# Date Added
book["date_added"] = row.select("td.field.date_added")[0].div.get_text(strip=True)
# Edition
book["edition"] = row.select("td.field.format")[0].div.get_text(strip=True).replace("[edit]", "")
# Book ID
book["id"] = row.select("td.field.rating")[0].div.div["data-resource-id"]
# Work ID
work_id_link = row.select("td.field.format")[0].a["href"]
book["work_id"] = re.search(r"([0-9]+)", work_id_link).group(0) if work_id_link else None
return book
def parse_row(row):
row_books = []
read_count = int(row.select("td.field.read_count")[0].div.get_text(strip=True))
for iteration in range(0, read_count):
row_books.append(extract_fields(row, iteration))
return row_books
books = []
for row in rows[12:16]:
books += parse_row(row)
books
df = pd.DataFrame(books)
df["event_start_date"] = df["event_start_date"].astype("datetime64")
df
###Output
_____no_output_____
###Markdown
Tophat on mock data
Set up parameters
Here we use a low-density lognormal simulation box.
###Code
boxsize = 750
nbar_str = '3e-4'
proj_type = 'tophat'
rmin = 40
rmax = 150
nbins = 11
mumax = 1.0
seed = 10
#weight_type='pair_product'
weight_type=None
rbins = np.linspace(rmin, rmax, nbins+1)
rcont = np.linspace(rmin, rmax, 1000)
cat_tag = '_L{}_nbar{}'.format(boxsize, nbar_str)
cat_dir = '../byebyebias/catalogs/cats_lognormal{}'.format(cat_tag)
cosmo = 1 #doesn't matter bc passing cz, but required
nthreads = 24
nmubins = 1
verbose = False
###Output
_____no_output_____
###Markdown
Load in data and randoms
###Code
# data
datasky_fn = '{}/catsky_lognormal{}_seed{}.dat'.format(cat_dir, cat_tag, seed)
datasky = np.loadtxt(datasky_fn)
ra, dec, z = datasky.T
nd = datasky.shape[0]
#weights = np.full(nd, 0.5)
weights = None
# randoms
randsky_fn = '{}/randsky{}_10x.dat'.format(cat_dir, cat_tag)
randomsky = np.loadtxt(randsky_fn)
ra_rand, dec_rand, z_rand = randomsky.T
nr = randomsky.shape[0]
#weights_rand = np.full(nr, 0.5)
weights_rand = None
###Output
_____no_output_____
###Markdown
Perform xi(s, mu) continuous estimation
###Code
# projection
dd_res_corrfunc, dd_proj, _ = DDsmu_mocks(1, cosmo, nthreads, mumax, nmubins, rbins, ra, dec, z,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nbins, verbose=verbose,
weights1=weights, weight_type=weight_type)
dr_res_corrfunc, dr_proj, _ = DDsmu_mocks(0, cosmo, nthreads, mumax, nmubins, rbins,
ra, dec, z, RA2=ra_rand, DEC2=dec_rand, CZ2=z_rand,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nbins, verbose=verbose,
weights1=weights, weights2=weights_rand, weight_type=weight_type)
rr_res_corrfunc, rr_proj, qq_proj = DDsmu_mocks(1, cosmo, nthreads, mumax, nmubins, rbins, ra_rand, dec_rand, z_rand,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nbins, verbose=verbose,
weights1=weights_rand, weight_type=weight_type)
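# Combine the projected DD, DR, RR (and QQ) pair counts into basis amplitudes,
# then evaluate the continuous xi(r) on the fine r grid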
amps = compute_amps(nbins, nd, nd, nr, nr, dd_proj, dr_proj, dr_proj, rr_proj, qq_proj)
xi_proj = evaluate_xi(nbins, amps, len(rcont), rcont, len(rbins)-1, rbins, proj_type)
###Output
Computing amplitudes (Corrfunc/utils)
Evaluating xi (Corrfunc/utils)
###Markdown
Perform xi(s, mu) standard estimation
###Code
def extract_counts(res, weight_type=None):
counts = np.array([x[4] for x in res], dtype=float)
if weight_type:
weights = np.array([x[5] for x in res], dtype=float)
counts *= weights
return counts
# standard
proj_type = None
dd_res_corrfunc, _, _ = DDsmu_mocks(1, cosmo, nthreads, mumax, nmubins, rbins, ra, dec, z,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nbins, verbose=verbose,
weights1=weights, weight_type=weight_type)
dd = extract_counts(dd_res_corrfunc, weight_type)
dr_res_corrfunc, _, _ = DDsmu_mocks(0, cosmo, nthreads, mumax, nmubins, rbins,
ra, dec, z, RA2=ra_rand, DEC2=dec_rand, CZ2=z_rand,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nbins, verbose=verbose,
weights1=weights, weights2=weights_rand, weight_type=weight_type)
dr = extract_counts(dr_res_corrfunc, weight_type)
rr_res_corrfunc, _, _ = DDsmu_mocks(1, cosmo, nthreads, mumax, nmubins, rbins, ra_rand, dec_rand, z_rand,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nbins, verbose=verbose,
weights1=weights_rand, weight_type=weight_type)
rr = extract_counts(rr_res_corrfunc, weight_type)
fN = float(nr)/float(nd)
xi_ls = (dd * fN**2 - 2*dr * fN + rr)/rr
print("Standard L-S:")
print(xi_ls)
rbins_avg = 0.5*(rbins[1:]+rbins[:-1])
plt.plot(rcont, xi_proj, color='blue')
plt.plot(rbins_avg, xi_ls, marker='o', color='grey', ls='None')
###Output
_____no_output_____
###Markdown
BAO on mock data
###Code
proj_type = 'generalr'
projfn = 'bao.dat'
# The spline routine writes to file, so remember to delete later
kwargs = {'cosmo_base':nbodykit.cosmology.Planck15, 'redshift':0}
nprojbins, _ = bao.write_bases(rbins[0], rbins[-1], projfn, **kwargs)
###Output
alpha_model: 1.02
dalpha: 0.005099999999999882
alpha_model: 1.02
###Markdown
Check out basis functions (normalized):
###Code
base_colors = ['magenta', 'red', 'orange', 'green', 'blue']
base_names = ['a1', 'a2', 'a3', 'Bsq', 'C']
bases = np.loadtxt(projfn)
bases.shape
r = bases[:,0]
for i in range(len(bases[0])-1):
#norm = np.mean(bases[:,i])
base = bases[:,i+1]
plt.plot(r, base, color=base_colors[i], label='{}'.format(base_names[i]))
plt.legend()
_, dd_proj, _ = DDsmu_mocks(1, cosmo, nthreads, mumax, nmubins, rbins, ra, dec, z,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nprojbins, projfn=projfn,
verbose=verbose, weights1=weights, weight_type=weight_type)
_, dr_proj, _ = DDsmu_mocks(0, cosmo, nthreads, mumax, nmubins, rbins,
ra, dec, z, RA2=ra_rand, DEC2=dec_rand, CZ2=z_rand,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nprojbins, projfn=projfn, verbose=verbose,
weights1=weights, weights2=weights_rand, weight_type=weight_type)
_, rr_proj, qq_proj = DDsmu_mocks(1, cosmo, nthreads, mumax, nmubins, rbins, ra_rand, dec_rand, z_rand,
is_comoving_dist=True, proj_type=proj_type, nprojbins=nprojbins, projfn=projfn, verbose=verbose,
weights1=weights_rand, weight_type=weight_type)
amps = compute_amps(nprojbins, nd, nd, nr, nr, dd_proj, dr_proj, dr_proj, rr_proj, qq_proj)
print("amplitudes:",amps)
xi_proj = evaluate_xi(nprojbins, amps, len(rcont), rcont, len(rbins)-1, rbins, proj_type, projfn=projfn)
rbins_avg = 0.5*(rbins[1:]+rbins[:-1])
plt.plot(rcont, xi_proj, color='purple')
#plt.plot(rbins_avg, xi_ls, marker='o', color='grey', ls='None')
total = np.zeros(len(bases))
for i in range(0, bases.shape[1]-1):
ampbase = amps[i]*bases[:,i+1]
total += ampbase
plt.plot(rcont, ampbase, color=base_colors[i], label='{} = {:.4f}'.format(base_names[i], amps[i]))
plt.plot(r, total, color='purple', label='total', lw=3, ls='-.')
plt.xlabel(r'$r (h^{-1}Mpc)$')
plt.ylabel(r'$\xi(r)$')
plt.legend()
os.remove(projfn)
#!jupyter nbconvert --to script example.ipynb
###Output
_____no_output_____
###Markdown
Example DocumentThis is an example notebook to try out the ["Notebook as PDF"](https://github.com/betatim/notebook-as-pdf) extension. It contains a few plots from the excellent [matplotlib gallery](https://matplotlib.org/3.1.1/gallery/index.html). To try out the extension click "File -> Download as -> PDF via HTML". This will convert this notebook into a PDF. This extension has three new features compared to the official "save as PDF" extension:* it produces a PDF with the smallest number of page breaks,* the original notebook is attached to the PDF; and* this extension does not require LaTeX.The created PDF will have as few pages as possible, in many cases only one. This is useful if you are exporting your notebook to a PDF for sharing with others who will view it on a screen. To make it easier to reproduce the contents of the PDF at a later date, the original notebook is attached to the PDF. Not all PDF viewers know how to deal with attachments, so you need to use Acrobat Reader or pdf.js to be able to get the attachment from the PDF. Preview for OSX does not know how to display or give you access to PDF attachments.
###Code
import numpy as np
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
# Compute pie slices
N = 20
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
colors = plt.cm.viridis(radii / 10.)
ax = plt.subplot(111, projection='polar')
ax.bar(theta, radii, width=width, bottom=0.0, color=colors, alpha=0.5)
###Output
_____no_output_____
###Markdown
Below we show some more lines that go up and go down. These are noisy lines because we use a random number generator to create them. Fantastic isn't it?
###Code
x = np.linspace(0, 10)
# Fixing random state for reproducibility
np.random.seed(19680801)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x) + x + np.random.randn(50))
ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + np.random.randn(50));
###Output
_____no_output_____
###Markdown
ExampleThis notebook should build.
###Code
from lib import *
if example_function():
    print('Success!')
###Output
Success!
###Markdown
Which GPUs should be used
###Code
gpu_ids = '0'
###Output
_____no_output_____
###Markdown
Initialize and load the model
###Code
# Initialize original model
import sys
sys.argv = ['test.py',
'--checkpoints_dir', './samples/models/',
'--name', 'GanAuxPretrained',
'--model', 'gan_aux',
'--netG', 'resnet_residual',
'--netD', 'disc_noisy',
'--epoch', '200',
'--gpu_ids', gpu_ids,
'--peer_reg', 'bidir']
opt_ours = TestOptions().parse()
# hard-code some parameters for test
opt_ours.num_threads = 1 # test code only supports num_threads = 1
opt_ours.batch_size = 1 # test code only supports batch_size = 1
opt_ours.serial_batches = True # no shuffle
opt_ours.no_flip = True # no flip
opt_ours.display_id = -1 # no visdom display
opt_ours.num_style_samples = 1
opt_ours.knn = 5
opt_ours.eval = True
model_ours = create_model(opt_ours)
model_ours.setup(opt_ours)
# test with eval mode. This only affects layers like batchnorm and dropout.
if model_ours.eval:
model_ours.eval()
###Output
_____no_output_____
###Markdown
Prepare data for content and style
###Code
from PIL import Image
import torchvision.transforms as transforms
def get_transform(loadSize = 512, fineSize = 512, pad = None):
transform_list = []
transform_list.append(transforms.Resize(loadSize, Image.BICUBIC))
transform_list.append(transforms.CenterCrop(fineSize))
if pad is not None:
transform_list.append(transforms.Pad(pad, padding_mode='reflect'))
transform_list += [transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5),
(0.5, 0.5, 0.5))]
return transforms.Compose(transform_list)
sz = 512
fsz = sz
padsz = 8 * (sz // 256)
transform_fn = get_transform(loadSize = sz, fineSize = fsz, pad = padsz)
transform_fn_style = get_transform(loadSize = sz, fineSize = fsz)
def get_image(A_path, transf_fn):
A_img = Image.open(A_path).convert('RGB')
A = transf_fn(A_img)
return A
imgs = []
for i in range(1,9):
img = get_image('samples/data/content_imgs/img%i.jpg' % i, transform_fn)
imgs.append(img)
imgs = torch.stack(imgs)
print(img.shape)
plt.figure()
plt.imshow(renorm_0_1(imgs[0].permute(1, 2, 0)))
styles = []
for i in range(1,9):
style = get_image('samples/data/style_imgs/style%i.jpg' % i, transform_fn_style)
styles.append(style)
styles = torch.stack(styles)
print(style.shape)
plt.figure()
plt.imshow(renorm_0_1(styles[0].permute(1, 2, 0)))
num_runs = len(imgs)
num_styles = len(styles)
# Test our model
model = model_ours
outs_ours = []
for i in range(num_runs):
for j in range(num_styles):
print('Processing sample %i.%i ...' % (i, j))
#j = i
real_A = imgs[i:i+1]
style_B = styles[j:j+1]
with torch.no_grad():
fake_B, z_cont_real_A, z_style_real_A, z_cont_style_B, z_style_B = model.netG.module.stylize_image([imgs[i:i+1].cuda(), styles[j:j+1].cuda()])
if padsz is not None:
real_A = real_A[:, :, padsz:-padsz, padsz:-padsz]
fake_B = fake_B[:, :, padsz:-padsz, padsz:-padsz]
out_dict = {
'real_A': real_A.data.cpu().numpy()[0].transpose((1,2,0)), 'fake_B': fake_B.data.cpu().numpy()[0].transpose((1,2,0)),
'z_cont_real_A': z_cont_real_A.data.cpu().numpy(), 'z_cont_style_B': z_cont_style_B.data.cpu().numpy(),
'z_style_real_A': z_style_real_A.data.cpu().numpy(), 'z_style_B': z_style_B.data.cpu().numpy(),
'style_B': style_B.data.cpu().numpy().transpose(0, 2, 3, 1)
}
outs_ours.append(out_dict)
###Output
_____no_output_____
###Markdown
Visualize some examples in three columns: (source image, stylized image, target style)
###Code
import scipy as sp
for i in range(len(outs_ours)):
out_dict = outs_ours[i]
real_A, fake_B = out_dict['real_A'], out_dict['fake_B']
style_B = out_dict['style_B']
fig = plt.figure(figsize = (16, 8))
ax = plt.subplot(131)
ax.set_title('Real A')
plt.imshow(renorm_0_1(real_A))
plt.axis('off')
ax = plt.subplot(132)
ax.set_title('Fake B')
plt.imshow(renorm_0_1(fake_B))
plt.axis('off')
ax = plt.subplot(133)
ax.set_title('Style B')
plt.imshow(renorm_0_1(style_B[0]))
plt.axis('off')
###Output
_____no_output_____
###Markdown
Example of usageHere is one example of how to use interpretation techniques with 1D signals such as ECG.
###Code
%load_ext autoreload
%autoreload 2
# basic libraries to use
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import signal_screen
import signal_screen_tools
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv1D, MaxPool1D, Flatten, BatchNormalization, Input
from tensorflow.keras.callbacks import ModelCheckpoint
###Output
_____no_output_____
###Markdown
Data preprocessing* data used: preprocessed data from [kaggle.com](https://www.kaggle.com/shayanfazeli/heartbeat) with origin at the [MIT-BIH arrhythmia database](https://www.physionet.org/content/mitdb/1.0.0/)* data are already normalised 0-1 and framed to equal length* classifications are placed in the last column: * 0 - nonectopic - N * 1 - supraventricular ectopic beat - S * 2 - ventricular ectopic beat - V * 3 - fusion beat - F * 4 - unknown - Q* counts: * N: 72471 * S: 2223 * V: 5788 * F: 641 * Q: 6431
###Code
# load data
data_train = pd.read_csv("archive/mitbih_train.csv", sep=",", header=None).to_numpy()
data_test = pd.read_csv("archive/mitbih_test.csv", sep=",", header=None).to_numpy()
# get X and y
X_train, y_train = data_train[:, :data_train.shape[1]-2], data_train[:, -1]
X_test, y_test = data_test[:, :data_test.shape[1]-2], data_test[:, -1]
# number of categories
num_of_categories = np.unique(y_train).shape[0]
del data_train, data_test
#indexing examples to show visualisations
examples_to_visualise = [np.where(y_test == i)[0][0] for i in range(5)]
titles = [ "nonectopic", "supraventricular ectopic beat", "ventricular ectopic beat", "fusion beat", "unknown"]
# creation of tensors
X_train = np.expand_dims(tf.convert_to_tensor(X_train), axis=2)
X_test = np.expand_dims(tf.convert_to_tensor(X_test), axis=2)
# one-hot encoding for 5 categories
y_train = tf.one_hot(y_train, num_of_categories)
y_test = tf.one_hot(y_test, num_of_categories)
###Output
_____no_output_____
###Markdown
ModelA basic convolutional model with three 1D conv layers, one max-pooling layer and batch normalisation. At the end, dense layers classify the categories.
###Code
# basic model
model = Sequential([
Input(shape=[X_train.shape[1], 1]),
Conv1D(filters=16, kernel_size=3, activation="relu"),
BatchNormalization(),
MaxPool1D(),
Conv1D(filters=32, kernel_size=3, activation="relu"),
BatchNormalization(),
Conv1D(filters=64, kernel_size=3, activation="relu"),
BatchNormalization(),
Flatten(),
Dense(20, activation="relu"),
Dense(num_of_categories, activation="softmax")
]
)
# train process
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
checkPoint = ModelCheckpoint(filepath="model.h5", save_weights_only=False, monitor='val_accuracy',
mode='max', save_best_only=True)
model.fit(x=np.expand_dims(X_train, axis=2), y=y_train,
batch_size=128, epochs=10, validation_data=(np.expand_dims(X_test, axis=2), y_test),
callbacks=[checkPoint])
model = tf.keras.models.load_model("model.h5")
loss, acc = model.evaluate(np.expand_dims(X_test, axis=2), y_test)
###Output
685/685 [==============================] - 1s 2ms/step - loss: 0.0907 - accuracy: 0.9777
###Markdown
Occlusion sensitivityBasic principle:* a frame of zeros is gradually moved through the signal* if we delete an important part of the signal -> the output of the neural network will change* we can subtract this result from the reference and get the change in the output along the whole signal, which can be visualised* reference: https://arxiv.org/abs/1311.2901We picked one signal from every category and watched what happens to the output of the neural network. The gradient plot tool from "signal_screen_tools" can be used to show the change of the output together with the template signal.
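For intuition, here is a minimal, self-contained sketch of the same principle (a hypothetical helper, not the `signal_screen` implementation used in the next cell): slide a window of zeros along the input and record how much the target-class score drops.

```python
import numpy as np

def occlusion_sensitivity_sketch(model, signal, target_class, window=15):
    """Minimal occlusion-sensitivity sketch for one signal of shape (length, 1).

    Illustration only -- signal_screen.calculate_occlusion_sensitivity is the
    function actually used in this notebook.
    """
    length = signal.shape[0]
    reference = model.predict(signal[np.newaxis, ...], verbose=0)[0, target_class]
    drops = np.zeros(length)
    for start in range(length - window + 1):
        occluded = signal.copy()
        occluded[start:start + window, :] = 0.0            # delete part of the signal
        score = model.predict(occluded[np.newaxis, ...], verbose=0)[0, target_class]
        drops[start:start + window] += reference - score    # larger drop = more important
    return drops
```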
###Code
fig, axs = plt.subplots(nrows=5, ncols=1)
fig.suptitle("Occlusion sensitivity")
fig.tight_layout()
fig.set_size_inches(10, 10)
axs = axs.ravel()
for c, row, ax, title in zip(range(5), examples_to_visualise, axs, titles):
# pass model and input for the model - multiple inputs could be done by e.g. np.expand_dims(X_test[5:10, :], axis=(2))
sensitivity, _ = signal_screen.calculate_occlusion_sensitivity(model=model,
data=np.expand_dims(X_test[row, :], axis=(0, 2)),
c=c,
number_of_zeros=[15])
# create gradient plot
signal_screen_tools.plot_with_gradient(ax=ax, y=X_test[row, :].ravel(), gradient=sensitivity[0], title=title)
ax.set_xlabel("Samples[-]")
ax.set_ylabel("ECG [-]")
plt.show()
###Output
Occlusion sensitivity for 15 samples and class 0: 100%|██████████| 186/186 [00:04<00:00, 38.71it/s]
Occlusion sensitivity for 15 samples and class 1: 100%|██████████| 186/186 [00:04<00:00, 38.71it/s]
Occlusion sensitivity for 15 samples and class 2: 100%|██████████| 186/186 [00:05<00:00, 36.24it/s]
Occlusion sensitivity for 15 samples and class 3: 100%|██████████| 186/186 [00:05<00:00, 36.54it/s]
Occlusion sensitivity for 15 samples and class 4: 100%|██████████| 186/186 [00:04<00:00, 38.08it/s]
###Markdown
Saliency map* Saliency maps, sometimes referred to as the vanilla gradient method, use a simple idea to identify the regions of the input that are important* In this technique, the derivative of the output with respect to the actual input, taken through the whole model, is calculated. Thus we describe, for example, how changes in the values of individual input samples change the output.* math expression: $$ saliency = \frac{\partial y^c}{\partial input} $$* High values directly indicate that a given input value contributes to the given result, which we can then visualize.* Prior to the application of this method, in order to maintain at least a minimal linear dependence between input and output, the activation function of the output layer is replaced by a linear one. (It is recommended to save the model before using this library.)* reference: https://arxiv.org/abs/1312.6034
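For reference, a minimal `tf.GradientTape` sketch of the vanilla gradient (illustration only, not the `signal_screen` implementation; in particular it does not swap the output activation for a linear one as described above):

```python
import numpy as np
import tensorflow as tf

def saliency_map_sketch(model, signal, target_class):
    """Vanilla-gradient saliency: |d score_c / d input| for one (length, 1) signal."""
    x = tf.convert_to_tensor(signal[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)                                    # x is not a variable, so watch it
        score = model(x, training=False)[0, target_class]
    grads = tape.gradient(score, x)                      # same shape as the input
    return np.abs(grads.numpy()[0]).ravel()
```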
###Code
fig, axs = plt.subplots(nrows=5, ncols=1)
fig.suptitle("Saliency maps")
fig.tight_layout()
fig.set_size_inches(10, 10)
axs = axs.ravel()
for c, row, ax, title in zip(range(5), examples_to_visualise, axs, titles):
# pass model and input for the model - multiple inputs could be done by e.g. X_test[5:10, :] and average outputs
saliency_map = signal_screen.calculate_saliency_map(model=model,
data=np.expand_dims(X_test[row, :], axis=0),
c=c)
# create gradient plot
signal_screen_tools.plot_with_gradient(ax=ax, y=X_test[row, :].ravel(), gradient=saliency_map, title=title)
ax.set_xlabel("Samples[-]")
ax.set_ylabel("ECG [-]")
plt.show()
###Output
_____no_output_____
###Markdown
Grad-CAM* Grad-CAM, or gradient-weighted class activation mapping, is based on the principle that convolutional layers preserve spatial information. However, spatial information is lost in the fully connected layers that decide on the final classification. Therefore, activation maps can be used to track which positions in the input are relevant to our particular classification.* The gradient of the output with respect to the last convolutional layer can be used to determine which activation maps are essential for the classification.* We define the importance of a map by $\alpha_{k}^{c}$, where $c$ is the class, $A^k$ a particular activation map and $y^c$ the output for the picked class: $$\alpha_{k}^{c} = \frac{1}{Z} \sum_{i}\sum_{j}\frac{\partial y^{c}}{\partial A_{i,j}^{k}}$$* To obtain the localization map $L_{Grad-CAM}^c$ we then take a linear combination of these weights and activation maps and sum it up: $$L_{Grad-CAM}^{c}= ReLU(\sum_k\alpha_k^c A^k )$$* Originally, only positive values of the weighted sum of activation maps are kept, as these should directly contribute to the decision. This is ensured by the ReLU function.* However, this is not always the best choice. Sometimes negative gradients provide a better interpretation than positive gradients alone. You can turn off the ReLU with the parameter "use_relu".* reference: https://arxiv.org/abs/1610.02391
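A compact sketch of these equations for the 1D model above (illustration only; `signal_screen.calculate_grad_cam` is what the notebook actually uses, and its details may differ). The sketch picks the last `Conv1D` layer by type, which is an assumption about the architecture:

```python
import numpy as np
import tensorflow as tf

def grad_cam_sketch(model, signal, target_class):
    """Minimal 1D Grad-CAM sketch for one (length, 1) signal."""
    last_conv = [l for l in model.layers if isinstance(l, tf.keras.layers.Conv1D)][-1]
    grad_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])
    x = tf.convert_to_tensor(signal[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(x, training=False)
        score = preds[0, target_class]
    grads = tape.gradient(score, conv_maps)          # (1, steps, filters)
    alpha = tf.reduce_mean(grads, axis=1)            # importance of each activation map
    cam = tf.reduce_sum(alpha[:, tf.newaxis, :] * conv_maps, axis=-1)[0]
    cam = tf.nn.relu(cam)                            # keep positive contributions (see text)
    return np.interp(np.arange(signal.shape[0]),     # resample to the input length
                     np.linspace(0, signal.shape[0] - 1, cam.shape[0]),
                     cam.numpy())
```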
###Code
fig, axs = plt.subplots(nrows=5, ncols=1)
fig.suptitle("Grad-CAM")
fig.tight_layout()
fig.set_size_inches(10, 10)
axs = axs.ravel()
for c, row, ax, title in zip(range(5), examples_to_visualise, axs, titles):
grad_cam = signal_screen.calculate_grad_cam(model=model,
data=X_test[row:row+5, :], # in case of one input, expand dims is required with axis 0
c=c,
use_relu=False)
grad_cam = np.average(grad_cam, axis=0) # averaging outputs of grad-cam
# create gradient plot
signal_screen_tools.plot_with_gradient(ax=ax, y=X_test[row, :].ravel(), gradient=grad_cam, title=title)
ax.set_xlabel("Samples[-]")
ax.set_ylabel("ECG [-]")
plt.show()
fig.savefig("grad_cam.png")
###Output
_____no_output_____
###Markdown
PLSXML Examples PurposeThis notebook provides example usages for the PLSXML package. Class Import
###Code
from plsxml import PLSXML
from plsxml.data import data_path
###Output
_____no_output_____
###Markdown
Loading DataTo load data from an XML file or ZIP file containing XML files, pass the file path(s) to the class initializer or through the append method:
###Code
# XML file
path = data_path('galloping') # DATA_FOLDER/galloping.xml
xml = PLSXML(path)
# ZIP file
path = data_path('galloping_zip') # DATA_FOLDER/galloping.zip
xml = PLSXML(path)
# Alternately, use append to add files...
xml = PLSXML()
xml.append(path)
###Output
_____no_output_____
###Markdown
Multiple XML files may be appended to the same class container. Duplicate rows of data will automatically be dropped, with the first row loaded being retained. Listing KeysCalling the `table_summary` method will provide a list of parsed tables, keys, and example data. This may be useful when determining which tables and data you want to work with. An example output is below:
###Code
print(xml.table_summary())
###Output
galloping_ellipses_summary
rowtext None
structure TERM
set 1
phase 1
ahead_span_length 258.2
minimum_clearance_set 1
minimum_clearance_phase 2
minimum_clearance_galloping_ellipse_method Single mid span
minimum_clearance_distance 1.52
minimum_clearance_overlap 0.0
minimum_clearance_wind_from Left
minimum_clearance_mid_span_sag 12.15
minimum_clearance_insulator_swing_angle 0.0
minimum_clearance_span_swing_angle 63.1
minimum_clearance_major_axis_length 16.2
minimum_clearance_minor_axis_length 6.5
minimum_clearance_b_distance 3.0
###Markdown
Retrieving Parsed DataThe class is a subclass of a dictionary. The data itself is contained within `pandas` DataFrames within that class dictionary. Data can be accessed through the hierarchy:```xml[table_key][column_index][row_index]```Examples:
###Code
# Specific index value
xml['galloping_ellipses_summary']['minimum_clearance_galloping_ellipse_method'][0]
# Slice of pandas dataframe
xml['galloping_ellipses_summary'][:10]
###Output
_____no_output_____
###Markdown
MyGrADS This is a collection of functions implemented in Python that replicate their implementation in GrADS.Content:1. Centered Differences (cdiff)2. Horizontal Divergence (hdivg)3. Vertical component of the relative vorticity (hcurl)4. Horizontal Advection (hadv) Only requires NumPy.In this example, we use Xarray to read in the nc files, and Matplotlib and Cartopy for plotting. Usual Imports
###Code
import numpy as np
import xarray as xr
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Import MyGrADS
###Code
import sys
sys.path.append('/home/zmaw/u241292/scripts/python/mygrads')
import mygrads as mg
###Output
_____no_output_____
###Markdown
Read Some Data
###Code
# We are using some sample data downloaded from the NCEP Reanalysis 2
# Downloaded from: https://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis2.html
ds = xr.open_dataset('data/u.nc')
u = ds['uwnd'][0,0,:,:].values
lat = ds['lat'].values
lon = ds['lon'].values
ds = xr.open_dataset('data/v.nc')
v = ds['vwnd'][0,0,:,:].values
ds = xr.open_dataset('data/t.nc')
t = ds['air'][0,0,:,:].values
###Output
_____no_output_____
###Markdown
Calculations Horizontal Divergence$\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}$
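For intuition, a rough NumPy sketch of how such a divergence can be computed with centered differences on a regular lat/lon grid follows (illustration only; `mg.hdivg`'s actual edge handling and conventions may differ, and the cos(lat) factors account for the spherical geometry):

```python
import numpy as np

def hdivg_sketch(u, v, lat, lon, R=6.371e6):
    """Centered-difference horizontal divergence on a regular lat/lon grid.

    Rough sketch only (poles excluded): u, v are (nlat, nlon) arrays in m/s,
    lat and lon are 1D arrays in degrees.
    """
    latr, lonr = np.deg2rad(lat), np.deg2rad(lon)
    coslat = np.cos(latr)[:, np.newaxis]
    dudx = np.gradient(u, lonr, axis=1) / (R * coslat)            # (1/(R cos(lat))) du/dlon
    dvdy = np.gradient(v * coslat, latr, axis=0) / (R * coslat)   # (1/(R cos(lat))) d(v cos(lat))/dlat
    return dudx + dvdy
```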
###Code
div = mg.hdivg(u,v,lat,lon)
###Output
_____no_output_____
###Markdown
Relative Vorticity (vertical component of)$\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}$
###Code
vort = mg.hcurl(u,v,lat,lon)
###Output
_____no_output_____
###Markdown
Temperature Advection$u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}$
###Code
tadv = mg.hadv(u,v,t,lat,lon)
###Output
_____no_output_____
###Markdown
Plot
###Code
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(2,2,1,projection=ccrs.Mercator())
ax.set_extent([-120, -10, -60, 10], crs=ccrs.PlateCarree())
ax.coastlines(resolution='50m')
mesh = ax.pcolormesh(lon, lat,t-273.5,
vmin=-30,vmax=0,
transform=ccrs.PlateCarree(), cmap="Spectral_r")
cbar=plt.colorbar(mesh, shrink=0.75,label='[°C]')
q = ax.quiver(lon, lat, u, v, minlength=0.1, scale_units='xy',scale=0.0001,
transform=ccrs.PlateCarree(), color='k',width=0.003)
plt.title('Input Data\n wind and temperature at 500 hPa')
ax = fig.add_subplot(2,2,2,projection=ccrs.Mercator())
ax.set_extent([-120, -10, -60, 10], crs=ccrs.PlateCarree())
ax.coastlines(resolution='50m')
mesh = ax.pcolormesh(lon, lat, div*100000,
vmin=-1.5,vmax=1.5,
transform=ccrs.PlateCarree(), cmap="RdBu_r")
cbar=plt.colorbar(mesh, shrink=0.75,label='[$x10^{-5}$ s$^{-1}$]')
# q = ax.quiver(lon, lat, u, v, minlength=0.1, scale_units='xy',scale=0.0001,
# transform=ccrs.PlateCarree(), color='k',width=0.003)
plt.title('Horizontal Divergence')
ax = fig.add_subplot(2,2,3,projection=ccrs.Mercator())
ax.set_extent([-120, -10, -60, 10], crs=ccrs.PlateCarree())
ax.coastlines(resolution='50m')
mesh = ax.pcolormesh(lon, lat, vort*100000,
vmin=-5,vmax=5,
transform=ccrs.PlateCarree(), cmap="RdBu_r")
cbar=plt.colorbar(mesh, shrink=0.75,label='[$x10^{-5}$ s$^{-1}$]')
# q = ax.quiver(lon, lat, u, v, minlength=0.1, scale_units='xy',scale=0.0001,
# transform=ccrs.PlateCarree(), color='k',width=0.003)
plt.title('Relative Vorticity')
ax = fig.add_subplot(2,2,4,projection=ccrs.Mercator())
ax.set_extent([-120, -10, -60, 10], crs=ccrs.PlateCarree())
ax.coastlines(resolution='50m')
mesh = ax.pcolormesh(lon, lat, tadv*86400,  # 86400 s per day: convert K/s to K/day
vmin=-5,vmax=5,
transform=ccrs.PlateCarree(), cmap="RdBu_r")
cbar=plt.colorbar(mesh, shrink=0.75,label='[°C day$^{-1}$]')
# q = ax.quiver(lon, lat, u, v, minlength=0.1, scale_units='xy',scale=0.0001,
# transform=ccrs.PlateCarree(), color='k',width=0.003)
plt.title('Advection of Temperature')
plt.tight_layout()
fig.savefig('example.png', dpi=300)
###Output
_____no_output_____
###Markdown
This is a simple data science sample project
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("heart-disease.csv.csv")
df.head()
df.target.value_counts().plot(kind="bar")
###Output
_____no_output_____
###Markdown
Print to the Javascript console from Python
###Code
console = window.console
console.log('Hello from Python to Javascript')
###Output
_____no_output_____
###Markdown
Replace Javascript's `console.log` with Python's `print`
###Code
console.log = print
console.log('Hello from Python to Javascript to Python!')
###Output
_____no_output_____
###Markdown
Use Javascript to print to Python
###Code
%%javascript
console.log('Hello from Javascript to Python');
###Output
_____no_output_____
###Markdown
Rational algebra exampleBy William Davis MotivationWhen solving linear algebra problems, it can be useful to use a computer to automate the calculations. However, Python's implementation of matrix operations often reverts to floating point representation of data. These incur a small but noticeable error. As an example, say I have a $5\times 5$ matrix which I want to find the inverse of.
###Code
import numpy as np
def matrixMultiplier(C, N):
M = np.zeros((N, N))
M = M.astype(int)
exponentRange = range(N)
for i in exponentRange:
M[:,i] = [C[i]**x for x in exponentRange]
return M
intM = matrixMultiplier([-2,-1,0,1,2], 5)
print(intM)
###Output
[[ 1 1 1 1 1]
[-2 -1 0 1 2]
[ 4 1 0 1 4]
[-8 -1 0 1 8]
[16 1 0 1 16]]
###Markdown
If we use `numpy`'s linear algebra inverse function, the result is a matrix of floating point numbers with small errors.
###Code
intMinv = np.linalg.inv(intM)
print(intMinv)
###Output
[[-1.00228468e-18 8.33333333e-02 -4.16666667e-02 -8.33333333e-02
4.16666667e-02]
[ 0.00000000e+00 -6.66666667e-01 6.66666667e-01 1.66666667e-01
-1.66666667e-01]
[ 1.00000000e+00 -2.37904934e-16 -1.25000000e+00 0.00000000e+00
2.50000000e-01]
[-2.29093640e-18 6.66666667e-01 6.66666667e-01 -1.66666667e-01
-1.66666667e-01]
[ 1.14546820e-18 -8.33333333e-02 -4.16666667e-02 8.33333333e-02
4.16666667e-02]]
###Markdown
If I multiply the inverse result with the original matrix, the result is almost the identity matrix but not quite.
###Code
print(intMinv @ intM)
###Output
[[ 1.00000000e+00 2.08166817e-17 -1.00228468e-18 0.00000000e+00
0.00000000e+00]
[-1.33226763e-15 1.00000000e+00 0.00000000e+00 -1.11022302e-16
-4.44089210e-16]
[ 1.33226763e-15 4.44089210e-16 1.00000000e+00 0.00000000e+00
4.44089210e-16]
[-4.44089210e-16 -1.11022302e-16 -2.29093640e-18 1.00000000e+00
4.44089210e-16]
[ 0.00000000e+00 0.00000000e+00 1.14546820e-18 -6.93889390e-18
1.00000000e+00]]
###Markdown
How can we do operations like this, but keep the results in the rational numbers? In this work, I aimed to define matrix operations purely using rational numbers. Using the package: RationalAlgebraThe matrix is passed to the `RationalMatrix()` function, which instantiates it as a matrix of rational numbers.
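As an aside (purely illustrative, and not how RationalAlgebra is implemented internally), Python's standard-library `fractions.Fraction` already shows what exact rational arithmetic looks like:

```python
from fractions import Fraction
import numpy as np

# Object-dtype arrays of Fractions stay exact -- no floating point error.
A = np.array([[Fraction(1), Fraction(1, 2)],
              [Fraction(1, 3), Fraction(1, 4)]], dtype=object)
b = np.array([Fraction(1), Fraction(2)], dtype=object)

print(A.dot(b))                         # [Fraction(2, 1) Fraction(5, 6)]
print(Fraction(1, 3) + Fraction(1, 6))  # 1/2, exactly
```

RationalAlgebra packages this kind of exact arithmetic into matrix and vector types with the usual linear-algebra operations, as shown below.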
###Code
import RationalAlgebra.RationalAlgebra as ra
rationalM = ra.RationalMatrix(intM)
print(rationalM)
###Output
[[ 1, 1, 1, 1, 1],
[-2, -1, 0, 1, 2],
[ 4, 1, 0, 1, 4],
[-8, -1, 0, 1, 8],
[16, 1, 0, 1, 16]]
###Markdown
Then, functions such as `inv()` can be used to perform operations. The result is a matrix of rational numbers!
###Code
Minv = ra.inv(rationalM)
print(Minv)
###Output
[[ 0, 1/12, -1/24, -1/12, 1/24],
[ 0, -2/3, 2/3, 1/6, -1/6],
[ 1, 0, -5/4, 0, 1/4],
[ 0, 2/3, 2/3, -1/6, -1/6],
[ 0, -1/12, -1/24, 1/12, 1/24]]
###Markdown
We can verify that the product of the matrix with its inverse is exactly the identity matrix.
###Code
print(Minv @ rationalM)
###Output
[[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1]]
###Markdown
Other features: rational vectorsWe can also instantiate row and column vectors of rational numbers.
###Code
from math import factorial
def vectorMultiplicand(d, N):
C = np.zeros((N, 1))
C = C.astype(int)
C[d] = factorial(d)
return C
intC = vectorMultiplicand(3,5)
rationalC = ra.RationalVector(intC)
print(rationalC)
###Output
[[0],
[0],
[0],
[6],
[0]]
###Markdown
We can then perform multiplication between matrices and vectors (as well as other combinations).
###Code
print( ra.inv(rationalM) @ rationalC )
###Output
[[-1/2],
[ 1],
[ 0],
[ -1],
[ 1/2]]
###Markdown
Other features: LU decompositionThe rational inverse algorithm is implemented via a [LUP decomposition](https://en.wikipedia.org/wiki/LU_decomposition) with partial pivoting. The $L, U, P$ matrices can be obtained with the `lu()` function.
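Whatever the internal details, the factorization is what makes an exact inverse straightforward: with $PM = LU$ one has
$$M^{-1} = U^{-1} L^{-1} P,$$
so each column of the inverse can be found by solving $Ly = Pe_i$ by forward substitution and then $Uz = y$ by back substitution, and every intermediate value stays rational.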
###Code
L, U, P = ra.lu(rationalM)
print(L)
print(U)
###Output
[[ 1, 0, 0, 0, 0],
[ 1/16, 1, 0, 0, 0],
[ -1/8, -14/15, 1, 0, 0],
[ 1/4, 4/5, -6/7, 1, 0],
[ -1/2, -8/15, 4/7, 1/2, 1]]
[[ 16, 1, 0, 1, 16],
[ 0, 15/16, 1, 15/16, 0],
[ 0, 0, 14/15, 2, 4],
[ 0, 0, 0, 12/7, 24/7],
[ 0, 0, 0, 0, 12]]
###Markdown
We can verify that the decomposition worked by checking if $PM = LU$.
###Code
print( L @ U )
print( P @ rationalM )
###Output
[[16, 1, 0, 1, 16],
[ 1, 1, 1, 1, 1],
[-2, -1, 0, 1, 2],
[ 4, 1, 0, 1, 4],
[-8, -1, 0, 1, 8]]
[[16, 1, 0, 1, 16],
[ 1, 1, 1, 1, 1],
[-2, -1, 0, 1, 2],
[ 4, 1, 0, 1, 4],
[-8, -1, 0, 1, 8]]
###Markdown
TestingTesting is provided by the `test_basic.py` script.
###Code
!python tests/test_basic.py
###Output
................................
----------------------------------------------------------------------
Ran 32 tests in 0.033s
OK
###Markdown
Creating a DatasetImport the required modules
###Code
import ipfshttpclient
import pytorchipfs
import torch
import torchvision.transforms as transforms
###Output
_____no_output_____
###Markdown
Pick some image hashesN.B.: Many images are stored as webp, so you might need to install WEBP in order to read some images.
###Code
hashes = [
'bafkreic3aeripksj7a7pnvkiybq3i43hme6pxlmpx7jaokubpz2lfdrvti',
'bafybeic7qbuo2ail2y5urbm5btfp7dwcxigjs4kq6m36ecbozaurt4z3te',
'bafkreidcct7qpk3tadwtqmboncnmfouu674vusm4zhvuxcmf2n57wxeqfa'
]
###Output
_____no_output_____
###Markdown
Initialize the dataset
###Code
client = ipfshttpclient.connect()
# Standard dataset
dataset = pytorchipfs.datasets.IPFSImageTensorDataset(
client,
'data', # Where the files will be downloaded
None, # Don't make assumptions about the image shape
hashes
)
# Dataset with cropping (to be fed to the model)
cropped_dataset = pytorchipfs.datasets.IPFSImageTensorDataset(
client,
'data', # Where the files will be downloaded
None, # Don't make assumptions about the image shape
hashes,
transform=transforms.CenterCrop(32) # Crop the images
)
###Output
_____no_output_____
###Markdown
Visualize the results
###Code
import matplotlib.pyplot as plt
import numpy as np
for image in dataset:
# Convert to channel-last Numpy
image = image.cpu().numpy() / 255
image = np.transpose(image, (2, 1, 0))
plt.imshow(image)
plt.show()
###Output
_____no_output_____
###Markdown
Backup with IPFSDefine a new model
###Code
import torch.nn as nn
model = nn.Sequential(
nn.Conv2d(3, 16, 4, stride=2, padding=1),
nn.ReLU(),
nn.Flatten(),
nn.Linear(16*16*16, 10)
)
model.train()
model.cuda()
###Output
_____no_output_____
###Markdown
Add some simple training code
###Code
import torch.utils.data as data
batch_size = 1
dataloader = data.DataLoader(cropped_dataset, batch_size, shuffle=True)
def training_step(model, optimizer, loader):
    for images in loader:
images = images.cuda()
outputs = model(images)
print('Outputs: ', outputs)
# Toy loss
loss = (outputs ** 2).sum()
print('Loss: ', loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
###Output
_____no_output_____
###Markdown
Create a CheckpointBackup
###Code
backup = pytorchipfs.checkpoint.CheckpointBackup(client, 'checkpoints')
###Output
_____no_output_____
###Markdown
Train for three iterations and perform backups
###Code
for i in range(3):
backup.store_checkpoint(model.state_dict())
training_step(model, optimizer, dataloader)
print()
print(backup.checkpoint_hashes)
###Output
Outputs: tensor([[-31.6460, 41.7457, -79.9334, -74.0487, -76.6020, -19.8673, -54.0581,
-23.9575, 88.4943, 34.5444]], device='cuda:0',
grad_fn=<AddmmBackward>)
Loss: tensor(33400.1250, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ 288656.2188, -381726.5625, 730352.8125, 676970.7500, 700270.7500,
181921.8125, 493537.9688, 219633.0312, -808415.1250, -315333.9375]],
device='cuda:0', grad_fn=<AddmmBackward>)
Loss: tensor(2.7890e+12, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ -58.0951, 76.5706, -143.8133, -133.7103, -135.8831, -36.2974,
-96.7951, -44.5693, 160.2001, 61.9186]], device='cuda:0',
grad_fn=<AddmmBackward>)
Loss: tensor(108434.2188, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ 14710.8389, -17117.0371, 35235.6172, 33081.6602, 33883.7969,
9484.5410, 24018.6191, 9619.6797, -36528.3672, -15347.5977]],
device='cuda:0', grad_fn=<AddmmBackward>)
Loss: tensor(6.3227e+09, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ -60.6575, 79.7356, -153.0744, -141.9668, -146.7956, -38.2741,
-103.4734, -45.8301, 168.9312, 66.1218]], device='cuda:0',
grad_fn=<AddmmBackward>)
Loss: tensor(122354.2656, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ -60.6454, 79.7196, -153.0437, -141.9384, -146.7663, -38.2664,
-103.4527, -45.8210, 168.8974, 66.1086]], device='cuda:0',
grad_fn=<AddmmBackward>)
Loss: tensor(122305.3281, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ -60.6333, 79.7037, -153.0131, -141.9100, -146.7369, -38.2588,
-103.4320, -45.8118, 168.8636, 66.0954]], device='cuda:0',
grad_fn=<AddmmBackward>)
Loss: tensor(122256.4141, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ -60.6211, 79.6878, -152.9825, -141.8816, -146.7076, -38.2511,
-103.4113, -45.8027, 168.8298, 66.0822]], device='cuda:0',
grad_fn=<AddmmBackward>)
Loss: tensor(122207.5156, device='cuda:0', grad_fn=<SumBackward0>)
Outputs: tensor([[ -60.6090, 79.6718, -152.9519, -141.8532, -146.6782, -38.2435,
-103.3906, -45.7935, 168.7961, 66.0690]], device='cuda:0',
grad_fn=<AddmmBackward>)
Loss: tensor(122158.6328, device='cuda:0', grad_fn=<SumBackward0>)
['QmZDeK5T4wDc7EbodhuE2DfSHRxhXa4KiAyCGq3jRk4hUB', 'QmeD6AXkQ4mH6xB57jc9t7wbvM8vZPNyDXg37bE9GvYwJh', 'Qmavyi8TkQmB37N2oBVK9JwaZyBp9ra8AU4vZM6M9WmpyY']
###Markdown
Retrieve a checkpoint from IPFS
###Code
state_dict = backup.latest_checkpoint
model.load_state_dict(state_dict)
###Output
_____no_output_____
###Markdown
Now we produce an on-disk format of the genotype likelihoods.`local_pcangsd` functions work with an `xarray.Dataset` to avoid loading all genotype likelihoods into memory.The Dataset is stored as a zarr file.Run the following only once.
###Code
if not os.path.exists(store):
lp.beagle_to_zarr(input, store, chunksize=10000)
ds = lp.load_dataset(store) # open the Dataset
ds
###Output
_____no_output_____
###Markdown
You can see that the dataset is similar to the sgkit internal format. We now create window variables, using sgkit functions internally.
###Code
ds = lp.window(ds, type='position', size=50000)
ds
###Output
_____no_output_____
###Markdown
We compute PCAngsd on each window.
###Code
%%time
# pca_zarr_store = lp.pca_window(ds, zarr_store='data/mytilus_test_lp_results.zarr', k=5, num_workers=2)
pca_zarr_store = lp.pca_window(ds, zarr_store='data/mytilus_test_lp_results.zarr', k=5, scheduler="single-threaded")
ds_pca = xr.open_dataset('data/mytilus_test_lp_results.zarr', engine='zarr')
ds_pca
results = lp.to_lostruct(ds_pca)
print(f"Results on {results.shape[0]} windows")
###Output
Results on 62 windows
###Markdown
The output is formatted to be readable by `lostruct` functions.
###Code
pc_dists = lostruct.get_pc_dists(results)
mds = pcoa(pc_dists)
plt.figure()
plt.scatter(x=range(pc_dists.shape[0]), y=mds.samples["PC1"])
plt.title("MDS Coordinate 1 (y-axis) compared to Window (x-axis)")
plt.xlabel("Window")
plt.ylabel("MDS 1")
plt.figure()
plt.scatter(x=mds.samples["PC1"], y=mds.samples["PC2"])
plt.xlabel("MDS 1")
plt.ylabel("MDS 2")
###Output
_____no_output_____
###Markdown
Convolutions in the sphereWe follow Krachmalnicoff & Tomasi (A&A, 2019, 628, A129) to define convolutional layers on the sphere using the HEALPix pixelization. Let us define a HEALPix NSIDE and use the NESTED ordering. We define three layers: a convolutional layer with one input and one output filter, a downsampling layer using average pooling, and an upsampling layer that just repeats every pixel.
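For a feel of how such a layer can work, here is a rough PyTorch sketch of the neighbour-gathering idea from the paper: each pixel is stacked with its 8 HEALPix neighbours and a kernel of size 9 is applied. This is an illustration only; `sp.sphericalConv`'s actual implementation may differ (neighbour ordering, treatment of missing neighbours, etc.).

```python
import healpy as hp
import numpy as np
import torch
import torch.nn as nn

class HealpixConvSketch(nn.Module):
    """Rough sketch of a HEALPix convolution (Krachmalnicoff & Tomasi style)."""
    def __init__(self, nside, in_ch, out_ch, nest=True):
        super().__init__()
        npix = hp.nside2npix(nside)
        neigh = hp.get_all_neighbours(nside, np.arange(npix), nest=nest)   # (8, npix)
        idx = np.concatenate([np.arange(npix)[None, :], neigh], axis=0)     # (9, npix)
        idx[idx < 0] = 0   # crude fix for pixels with a missing neighbour
        self.register_buffer('idx', torch.from_numpy(idx.astype(np.int64)))
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=9, stride=9)

    def forward(self, x):                               # x: (batch, channels, npix)
        gathered = x[:, :, self.idx.T.reshape(-1)]      # (batch, channels, npix * 9)
        return self.conv(gathered)                      # (batch, out_ch, npix)
```

A layer built this way accepts maps of shape (batch, channels, npix), like the `im` tensor defined below.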
###Code
NSIDE = 16
nest = True
conv = sp.sphericalConv(NSIDE, 1, 1, nest=nest)
down = sp.sphericalDown(NSIDE)
up = sp.sphericalUp(NSIDE // 2)
###Output
_____no_output_____
###Markdown
Define a simple map on the sphere and apply the different layers.
###Code
npix = hp.nside2npix(NSIDE)
im = torch.zeros(1,1,npix, requires_grad=False)
im[0, 0, :] = torch.linspace(0.0, 1.0, npix)
###Output
_____no_output_____
###Markdown
Just the convolutional layer:
###Code
with torch.no_grad():
out = conv(im)
hp.mollview(out[0, 0, :].numpy(), nest=nest)
###Output
_____no_output_____
###Markdown
Convolution+downsampling:
###Code
with torch.no_grad():
out = conv(im)
out = down(out)
hp.mollview(out[0, 0, :].numpy(), nest=nest)
###Output
_____no_output_____
###Markdown
Convolution+downsampling+upsampling:
###Code
with torch.no_grad():
out = conv(im)
out = down(out)
out = up(out)
hp.mollview(out[0, 0, :].numpy(), nest=nest)
###Output
_____no_output_____
###Markdown
To demonstrate the interpolation effect, let's turn to some data.
###Code
x = np.random.uniform(0,100,15)
y = np.random.uniform(0,100,15)
z = np.random.uniform(0,100,15)
xy = np.c_[x,y]
###Output
_____no_output_____
###Markdown
Then let's interpolate it
###Code
kri = kriging.Kriging()
kri.fit(xy,z)
xls = np.linspace(0,100,100)
yls = np.linspace(0,100,100)
xgrid,ygrid = np.meshgrid(xls,yls)
zgridls = kri.predict(np.c_[xgrid.ravel(),ygrid.ravel()])
zgrid = zgridls.reshape(*xgrid.shape)
###Output
_____no_output_____
###Markdown
Let's show:
###Code
fig = plt.figure(figsize=(7,4))
plt.contourf(xgrid,ygrid,zgrid,cmap='jet')
fig.tight_layout()
fig.savefig('png/random.png')
###Output
_____no_output_____
###Markdown
Next, interpolate with some real data In addition, the interpolation process is further encapsulated. This is a set of temperature data for 2018 in China.
###Code
data = np.genfromtxt('data/temperature.csv',delimiter=',')[1:]
xgrid2,ygrid2,zgrid2 = kriging.interpolate(data[:,1:],data[:,0],point_counts=(500,500),extension=1.3)
fig,ax = plt.subplots(1,1,figsize=(7,4))
plt.contourf(xgrid2,ygrid2,zgrid2,cmap='jet')
###Output
_____no_output_____
###Markdown
You can also overlay the map. This requires loading the map data.
###Code
mapdata = kriging.load_mapdata()
fig,ax = plt.subplots(1,1,figsize=(7,4))
ax.contourf(xgrid2,ygrid2,zgrid2,cmap='jet')
kriging.plot_map(mapdata['China'],ax=ax)
mask = kriging.shape_shadow(xgrid2,ygrid2,mapdata['China'])
zgrid2_mask = np.ma.array(zgrid2,mask=mask)
fig,ax = plt.subplots(1,1,figsize=(8,4))
cb = ax.contourf(xgrid2,ygrid2,zgrid2_mask,cmap='jet')
kriging.plot_map(mapdata['China'],ax=ax)
plt.colorbar(cb,ax=ax)
fig.tight_layout()
fig.savefig('png/china_temperature.png')
###Output
_____no_output_____
###Markdown
Example application of the CILVA model to calcium imaging data from the larval zebrafish optic tectum OverviewThis notebook demonstrates some of the functionality of the calcium imaging latent variable analysis (CILVA) method. The provided file `data/zf1.ca2` contains 15 minutes of two-photon calcium imaging data from the larval zebrafish optic tectum. The file `data/zf1.stim` contains onset times of spot stimuli that were presented at $15^\circ$ intervals across the visual field to map the retinotectal projection. For more details on the experiments see the methods section of the accompanying paper. ModelThe CILVA model assumes that fluorescence levels arise from a convolution of a neural activity vector $\lambda_n$ with a GCaMP6s calcium kernel $k$, plus additive imaging noise\begin{align*}x_l(t) & \sim \text{Exp}(\gamma) \\\lambda_n(t) & = w_n^\top s(t) + b_n^\top x(t) \\f_n(t) & = \alpha_n (k \ast \lambda_n)(t) + \beta_n + \epsilon_n(t) \\\epsilon_n(t) & \sim \mathcal{N}(0, \sigma_n^2).\end{align*}Here $x_l(t)$ is the activity of the $l$th latent variable (i.e., hidden source of spontaneous activity) at time $t$, $w_n$ is the stimulus filter for neuron $n$, $s(t)$ is the encoded stimulus at time $t$, and $b_n$ is a vector describing how strongly neuron $n$ is coupled to each latent variable. The parameters $\alpha_n$ and $\beta_n$ set the scale and baseline for the fluorescence levels. The decoupled evoked $f_n^\text{evoked}$ and spontaneous $f_n^\text{spont}$ components are then defined as $f^\text{evoked}_n = \alpha_n k \ast w^\top_n s + \beta_n$ and $f^\text{spont}_n = \alpha_n k \ast b^\top_n x + \beta_n.$ Using the softwareThe provided bash script `example.sh` contains the code>```python cilva/run.py --data data/zf1 --L 3 --num_iters 40 --iters_per_altern 40 --max_threads 2 --out output/cilva_example --tau_r 2.62 --tau_d 5.31 --imrate 2.1646 --convert_stim ```In this example we fit the model with three latent factors via the `--L` argument. We iterate between finding the MAP estimate $\hat x = \text{argmax}_x p(f | x, \theta) p(x | \gamma)$ and updating the model parameters $\hat\theta = \text{argmax}_\theta p(f | \hat x, \theta)$; the `--num_iters` argument specifies how many times we do this alternation.- The `--iters_per_altern` argument sets the maximum number of L-BFGS-B steps within each iteration. - The `--max_threads` argument configures the number of threads available for multithreaded processing. - The `--out` argument specifies the prefix of the output folder containing the fitted model parameters.- The `--tau_r` and `--tau_d` arguments provide the rise and decay time constants. Often these are known *a priori*; otherwise, we provide a penalised non-negative regression approach to estimate these.- The `--imrate` argument is used to estimate the imaging noise variance. This argument must always be provided.- The `--convert_stim` switch informs the code that the representation of the stimulus must be converted from a 1d representation (i.e., where $s(t) = i$ if stimulus $i$ is active and $0$ otherwise) to a 2d representation (where $s(i, t) = 1$ if stimulus $k$ is active at time $t$ and $0$ otherwise). This switch can be omitted if the data is already in a 2d representation (i.e., if $\mathbf{s} \in \mathbb{R}^{K \times T}$).More details on input arguments can be found in the `run.py` preamble. 
To run this code in a Unix shell enter> ``` sh example.sh ```This code took 11 minutes to complete on a 64-bit MacBook Pro with a 3.1 GHz Intel Core i7 Processor and 8 GB DDR3 RAM running Python 3.6.4. The learned model parameters are returned in the folder> ``` output/cilva_example_L_3_num_iters_40_iters_per_altern_40_gamma_1.00_tau_r_2.62_tau_d_5.31_imrate_2.1646/```which can then be loaded and used for analysis as per the example below. ResultThe following video shows 143 neurons from the optic tectum in response to the presented stimuli. Stimulus and factor activity has been convolved with a GCaMP6s calcium kernel for improved visual comparison between stimuli, factors, and neural activity.
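Before loading the real data, a toy forward simulation of the generative model above may help fix intuition (illustration only; `cilva.core` has its own kernel normalisation and parameter conventions):

```python
import numpy as np

# Toy simulation of f_n(t) = alpha_n * (k * lambda_n)(t) + beta_n + noise
T, tau_r, tau_d = 500, 2.62, 5.31
t = np.arange(T)
k = np.exp(-t / tau_d) - np.exp(-t / tau_r)        # GCaMP-like rise/decay kernel
lam = np.random.exponential(0.05, size=T)          # stand-in for w_n' s(t) + b_n' x(t)
alpha, beta, sigma = 1.5, 0.2, 0.05
f = alpha * np.convolve(lam, k)[:T] + beta + np.random.normal(0, sigma, T)
```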
###Code
import cilva
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Load and plot example data
###Code
f, s = cilva.core.load_data('data/zf1', convert=True)
N, T = f.shape
for n in range(10):
plt.figure(figsize=(15, 0.75))
plt.plot(f[n], color='k')
plt.xlim([0, T])
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Load and compare model fits
###Code
alpha, beta, w, b, x, sigma, tau_r, tau_d, gamma, L = cilva.analysis.load_fit(
'output/cilva_example_L_3_num_iters_40_iters_per_altern_40_gamma_1.00_tau_r_2.62_tau_d_5.31_imrate_2.1646/',
'train')
kernel = cilva.core.calcium_kernel(tau_r, tau_d, T)
f_hat = cilva.analysis.reconstruction(alpha, beta, w, b, x, kernel, s)
corr_coefs = np.array([np.corrcoef(f[n], f_hat[n])[0, 1] for n in range(N)])
inds = np.argsort(corr_coefs)[::-1]
for n in range(10):
plt.figure(figsize=(15, 0.75))
plt.plot(f[inds[n]], color='k', linewidth=1)
plt.plot(f_hat[inds[n]], color='g', linewidth=2)
plt.axis('off')
plt.xlim([0, T])
plt.show()
plt.figure(figsize=(2, 2))
plt.hist(corr_coefs, color='firebrick')
plt.xlim([-0.15, 1])
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.xlabel('Correlation coefficient')
plt.ylabel('Count')
plt.show()
np.mean(corr_coefs)
###Output
_____no_output_____
###Markdown
Decouple evoked and (low dimensional) spontaneous components
###Code
f_evoked, f_spont = cilva.analysis.decouple_traces(alpha, beta, w, b, x, kernel, s)
for n in range(10):
plt.figure(figsize=(15, 0.75))
plt.plot(f[inds[n]], color='k', linewidth=1)
plt.plot(f_evoked[inds[n]], color='firebrick', linewidth=2)
plt.plot(f_spont[inds[n]], color='C0', linewidth=2)
plt.axis('off')
plt.xlim([0, T])
plt.show()
###Output
_____no_output_____
###Markdown
Model components
###Code
'''
Tuning curves
'''
kmax = np.max(kernel)
tuning_curves = (kmax * alpha[:, None] * w)[:, 2:] # First two stimuli not presented
fig, axes = plt.subplots(figsize=(4, 4), sharex=True, sharey=False, ncols=4, nrows=4)
for n in range(16):
plt.subplot(4, 4, n + 1)
plt.plot(tuning_curves[n, :], color='firebrick', linewidth=2)
plt.gca().set_xticklabels([])
plt.gca().set_yticklabels([])
plt.xticks([])
plt.yticks([])
fig.text(0.5, 0.04, 'Stimulus', ha='center')
fig.text(0.04, 0.5, 'Response', va='center', rotation='vertical')
plt.show()
'''
Factor loading matrix
'''
# Sort neurons to maximise visual modularity
b_order = []
ams = np.argmax(b, 1)
L = L.astype(int)
for l in range(L):
nrns = np.where(ams == l)[0]
b_order.append(nrns[np.argsort(b[nrns, l])[::-1]])
b_order = np.concatenate(b_order)
b = b[b_order, :]
plt.figure(figsize=(2, 2))
plt.imshow(b, aspect='auto')
plt.colorbar()
plt.xticks(range(L))
plt.gca().set_xticklabels(range(1, L + 1))
plt.xlabel('Factors')
plt.ylabel('Neurons')
plt.show()
'''
Decomposition of variance
'''
var_total = np.var(f_hat, 1)
var_evoked = np.var(f_evoked, 1)
var_spont = np.var(f_spont, 1)
var_cov = (var_total - var_spont - var_evoked)/2
var_f = np.var(f, 1) - sigma**2 # Correction for imaging noise variance
plt.figure(figsize=(15, 2))
plt.plot([], [], color='firebrick', linewidth=5)
plt.plot([], [], color='C0', linewidth=5)
plt.plot([], [], color='C1', linewidth=5)
plt.legend(['Evoked variance', 'Spontaneous variance', 'Covariance'], frameon=False, ncol=3, loc=(0.25, 0.9))
sns.barplot(np.arange(N), var_evoked/var_f, color='firebrick', linewidth=0)
sns.barplot(np.arange(N), var_spont/var_f, bottom = var_evoked/var_f, linewidth=0, color='C0')
sns.barplot(np.arange(N), np.abs(var_cov)/var_f, bottom = (var_evoked + var_spont)/var_f, linewidth=0, color='C1')
plt.xlabel('Neuron')
plt.ylabel('Proportion\nvariance')
plt.xticks(np.arange(0, N, 10))
plt.gca().set_xticklabels(np.arange(0, N, 10))
plt.xlim([-1, N])
plt.ylim([0, 1])
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Import modules
###Code
from pyvad import vad, trim, split
from librosa import load
import matplotlib.pyplot as plt
import numpy as np
import IPython.display
###Output
_____no_output_____
###Markdown
Speech data load
###Code
name = "test/voice/arctic_a0007.wav"
data, fs = load(name)
time = np.linspace(0, len(data)/fs, len(data)) # time axis
plt.plot(time, data)
plt.show()
###Output
_____no_output_____
###Markdown
Do VAD (int)
###Code
%time vact = vad(data, fs, fs_vad = 16000, hop_length = 30, vad_mode=3)
###Output
CPU times: user 99.1 ms, sys: 5.39 ms, total: 105 ms
Wall time: 93.7 ms
###Markdown
Plot result
###Code
fig, ax1 = plt.subplots()
ax1.plot(time, data, label='speech waveform')
ax1.set_xlabel("TIME [s]")
ax2=ax1.twinx()
ax2.plot(time, vact, color="r", label = 'vad')
plt.yticks([0, 1] ,('unvoice', 'voice'))
ax2.set_ylim([-0.01, 1.01])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
trim
###Code
%time edges = trim(data, fs, fs_vad = 16000, hop_length = 30, vad_mode=3)
###Output
CPU times: user 85.8 ms, sys: 3.52 ms, total: 89.3 ms
Wall time: 92.1 ms
###Markdown
Plot result
###Code
trimed = data[edges[0]:edges[1]]
time = np.linspace(0, len(trimed)/fs, len(trimed)) # time axis
fig, ax1 = plt.subplots()
ax1.plot(time, trimed, label='speech waveform')
ax1.set_xlabel("TIME [s]")
plt.show()
###Output
_____no_output_____
###Markdown
split
###Code
%time edges = split(data, fs, fs_vad = 8000, hop_length = 10, vad_mode=3)
###Output
CPU times: user 82.9 ms, sys: 4.07 ms, total: 87 ms
Wall time: 87.1 ms
###Markdown
Plot result
###Code
for i, edge in enumerate(edges):
seg = data[edge[0]:edge[1]]
time = np.linspace(0, len(seg)/fs, len(seg)) # time axis
fig, ax1 = plt.subplots()
ax1.plot(time, seg, label='speech waveform')
ax1.set_xlabel("TIME [s]")
plt.show()
###Output
_____no_output_____
###Markdown
Ensure there is a cell beginning ` Parameters:`Pass parameters in the URL, e.g. `?a=1&b="whatever"`.Pass `autorun=true` to automatically run all cells.E.g. `http://localhost:8888/notebooks/example.ipynb?a=1&b="whatever"&autorun=true`
###Code
# Parameters:
print("a=", a)
print("b=", b)
###Output
_____no_output_____
###Markdown
Load network dataset and extract ARW input data
###Code
path = './datasets/acl.pkl'
network = ig.Graph.Read_Pickle(path)
print (network.summary())
attr = 'single_attr' if network['attributed'] else None
input_data = utils.extract_arw_input_data(network, 'time', 0.00, 0.05, debug=False, attrs=attr)
###Output
IGRAPH DN-- 18665 115311 --
+ attr: attributed (g), attributes (g), single_attr (g), attrs (v), id (v), name (v), single_attr (v), time (v), venue_id (v)
###Markdown
Generate ARW graph with fitted parameters
###Code
params = dict(p_diff=0.08, p_same=0.06, jump=0.42, out=1.0)
arw_graph = arw.RandomWalkSingleAttribute(params['p_diff'], params['p_same'],
params['jump'], params['out'],
input_data['gpre'], attr_name=attr)
arw_graph.add_nodes(input_data['chunk_sizes'], input_data['mean_outdegs'],
chunk_attr_sampler=input_data['chunk_sampler'] if attr else None)
###Output
Total chunks: 44
3 7 11 15 19 23 27 31 35 39 43
IGRAPH D--- 18665 118804 --
+ attr: chunk_id (v), single_attr (v)
###Markdown
Compare graph statistics
###Code
utils.plot_deg_and_cc_and_deg_cc([arw_graph.g, network], ['ARW', 'Dataset'], get_atty=network['attributed'])
###Output
Attribute Assortativity:
ARW: 0.065
Dataset: 0.067
###Markdown
Table of Contents 1 Example of building, and using Tiny-Prolog-in-OCaml1.1 Building prolog1.2 Examples1.3 First example1.4 Second example1.5 Other examples1.5.1 The odd predicate1.5.2 Some family history1.6 Conclusion Example of building, and using [Tiny-Prolog-in-OCaml](https://github.com/Naereen/Tiny-Prolog-in-OCaml/) - For more details, please refer to the [GitHub project, Tiny-Prolog-in-OCaml](https://github.com/Naereen/Tiny-Prolog-in-OCaml/).
###Code
LANG=en
bash --version
###Output
GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
###Markdown
--- Building `prolog`
###Code
# cd ~/publis/Tiny-Prolog-in-OCaml.git/
ls prolog/
cd prolog/
###Output
Vous [01;33mquittez[01;37m le dossier [01;34m'/home/lilian/publis/Tiny-Prolog-in-OCaml.git'[01;37m.
Direction ==> [01;32mprolog/[01;37m
###Markdown
Let's build `prolog`, it's really easy:
###Code
/usr/bin/make clean
/usr/bin/make
###Output
rm -f *.cm[iox] *~ *.annot *.o
ocamlc -pp camlp4o -c lib.ml
[0;39;49mocamlc on [4m[01;30m-pp camlp4o -c lib.ml[0;39;49m
ocamlc lib.cmo -c resolution.ml
[0;39;49mocamlc on [4m[01;30mlib.cmo -c resolution.ml[0;39;49m
[01;35m[KFile[m[K [02;34m[K"resolution.ml"[m[K, [05;01;31m[Kline 104[m[K, [01;31m[Kcharacters 12-17:[m[K
[05;01;33m[KWarning 52: Code should not depend on the actual values of[m[K
this constructor's arguments. They are only for information
and may change in future versions. (See manual section 8.5)
ocamlc -o prolog lib.cmo resolution.cmo prolog.ml
[0;39;49mocamlc on [4m[01;30m-o prolog lib.cmo resolution.cmo prolog.ml[0;39;49m
###Markdown
The binary `prolog` that was just generated is an OCaml binary. It is not native, but we don't care. If you want a native binary, just do this:
###Code
/usr/bin/make prolog.opt
cd ..
ls prolog/prolog prolog/prolog.opt
file prolog/prolog prolog/prolog.opt
###Output
Vous [01;33mquittez[01;37m le dossier [01;34m'/home/lilian/publis'[01;37m.
Direction ==> [01;32mTiny-Prolog-in-OCaml.git[01;37m
[0m[38;5;208;1mprolog/prolog[0m* [38;5;208;1mprolog/prolog.opt[0m*
prolog/prolog: a /home/lilian/.opam/4.04.2/bin/ocamlrun script executable (binary data)
prolog/prolog.opt: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=37d475d077169be2f4b7145c2adc52cf77dfbf61, with debug_info, not stripped
###Markdown
ExamplesThere are half a dozen examples:
###Code
ls -larth examples/*.pl
###Output
-rw-r--r-- 1 lilian lilian 358 Aug 28 2017 [0m[38;5;208mexamples/family_solution.pl[0m
-rw-r--r-- 1 lilian lilian 546 Mar 19 16:40 [38;5;208mexamples/domino.pl[0m
-rw-rw-r-- 1 lilian lilian 109 Mar 19 16:51 [38;5;208mexamples/tomandjerry.pl[0m
-rw-rw-r-- 1 lilian lilian 387 Mar 21 16:17 [38;5;208mexamples/family.pl[0m
-rw-r--r-- 1 lilian lilian 36 Mar 23 10:11 [38;5;208mexamples/even.pl[0m
-rw-r--r-- 1 lilian lilian 36 Mar 23 10:11 [38;5;208mexamples/odd.pl[0m
-rw-r--r-- 1 lilian lilian 228 Mar 23 10:13 [38;5;208mexamples/bunny.pl[0m
-rw-r--r-- 1 lilian lilian 691 Mar 23 10:19 [38;5;208mexamples/natural_integer_arithmetics.pl[0m
-rw-rw-r-- 1 lilian lilian 349 Mar 23 10:37 [38;5;208mexamples/natural_integer_arithmetics_nocomment.pl[0m
###Markdown
For instance, a tiny one is the following:
###Code
cd examples
cat odd.pl
###Output
odd(s(o)).
odd(s(s(X))) <-- odd(X).
###Markdown
First example`even.pl` defines the even integers. The `prolog` binary accepts a query as its last argument:
###Code
../prolog/prolog even.pl "even(o)." # an empty valuation: it's true!
../prolog/prolog even.pl "even(s(o))." # no valuation: it's false!
../prolog/prolog even.pl "even(s(s(o)))." # an empty valuation: it's true!
###Output
?- even(s(s(o))).
{ }
###Markdown
And `prolog` can also find all the even integers! It will only display a few (15), but it could *find them all*.
###Code
../prolog/prolog even.pl "even(X)." # it will find 15 even integers
###Output
?- even(s(s(o))).
{ }
###Markdown
You can experiment in your terminal by simply running `../prolog/prolog pair.pl` and typing your queries. I recommend using [rlwrap](https://github.com/hanslub42/rlwrap) or [ledit](https://opam.ocaml.org/packages/ledit/) to make editing easier (but I can't show that in a notebook). Second example
###Code
../prolog/prolog natural_integer_arithmetics.pl "lowerEq(s(s(o)), s(s(s(o))))." # 2 <= 3 ? yes
../prolog/prolog natural_integer_arithmetics.pl "lowerEq(s(s(s(s(o)))), s(s(s(o))))." # 4 <= 3 ? no
../prolog/prolog natural_integer_arithmetics.pl "sum(o,s(o),s(o))." # 0+1 = 1 ? yes
../prolog/prolog natural_integer_arithmetics.pl "sum(s(o),s(o),s(s(o)))." # 1+1 = 2 ? yes
../prolog/prolog natural_integer_arithmetics.pl "sum(s(o),o,s(s(o)))." # 1+1 = 1 ? no
###Output
?- sum(s(o),o,s(s(o))).
###Markdown
--- Other examples The `odd` predicate
###Code
cat even.pl
[ -f odd.pl ] && rm -vf odd.pl
echo "odd(s(o))." > odd.pl
echo "odd(s(s(X))) <-- odd(X)." >> odd.pl
../prolog/prolog odd.pl "odd(o)." # false
../prolog/prolog odd.pl "odd(s(o))." # true
../prolog/prolog odd.pl "odd(s(s(o)))." # false
###Output
?- odd(o).
?- odd(s(o)).
{ }
?- odd(s(s(o))).
###Markdown
Some family historyNote: this example does *NOT* use anyone from my family. The names are purely imaginary.The only thing we need to define at first is a predicate `parent(X, Y)` that defines the fact that X is a father/mother/parent of Y. It is a direct down link in a family tree.
###Code
rm -vf aSmallFamily.pl
echo "parent(cyrill, renaud)." >> aSmallFamily.pl
echo "parent(cyrill, claire)." >> aSmallFamily.pl
echo "parent(renaud, clovis)." >> aSmallFamily.pl
echo "parent(valentin, olivier)." >> aSmallFamily.pl
echo "parent(claire, olivier)." >> aSmallFamily.pl
echo "parent(renaud, claudia)." >> aSmallFamily.pl
echo "parent(claire, gaelle)." >> aSmallFamily.pl
../prolog/prolog aSmallFamily.pl "parent(cyrill, renaud)." # true
../prolog/prolog aSmallFamily.pl "parent(claire, renaud)." # false
../prolog/prolog aSmallFamily.pl "parent(X, renaud)." # cyrill
../prolog/prolog aSmallFamily.pl "parent(X, gaelle)." # claire
../prolog/prolog aSmallFamily.pl "parent(X, olivier)." # claire, valentin
../prolog/prolog aSmallFamily.pl "parent(renaud, X)." # clovis, claudia
../prolog/prolog aSmallFamily.pl "parent(gaelle, X)." # {}
../prolog/prolog aSmallFamily.pl "parent(olivier, X)." # {}
###Output
?- parent(X, renaud).
{ X = cyrill }
?- parent(X, gaelle).
{ X = claire }
?- parent(X, olivier).
{ X = valentin }
{ X = claire }
?- parent(renaud, X).
{ X = clovis }
{ X = claudia }
?- parent(gaelle, X).
?- parent(olivier, X).
###Markdown
Brothers and sisters are defined by having a common parent, and cousins by having a common *grandparent*:
###Code
echo "brothersister(X,Y) <-- parent(Z,X), parent(Z,Y)." >> aSmallFamily.pl
echo "grandparent(X,Y) <-- parent(X,Z), parent(Z,Y)." >> aSmallFamily.pl
echo "cousin(X,Y) <-- grandparent(Z,X), grandparent(Z,Y)." >> aSmallFamily.pl
../prolog/prolog aSmallFamily.pl "brothersister(cyrill, claire)." # false
../prolog/prolog aSmallFamily.pl "brothersister(renaud, claire)." # true
../prolog/prolog aSmallFamily.pl "brothersister(claire, claire)." # true
../prolog/prolog aSmallFamily.pl "grandparent(X,olivier)." # cyrill
../prolog/prolog aSmallFamily.pl "grandparent(X,gaelle)." # cyrill
###Output
?- brothersister(cyrill, claire).
?- brothersister(renaud, claire).
{ }
?- brothersister(claire, claire).
{ }
?- grandparent(X,olivier).
{ X = cyrill }
?- grandparent(X,gaelle).
{ X = cyrill }
###Markdown
I will let you find a correct recursive definition of this predicate `ancester`.
###Code
#echo "ancester(X,Y) <-- ancester(X,Z), grandparent(Z,Y)." >> aSmallFamily.pl
echo "ancester(X,Y) <-- parent(X,Y)." >> aSmallFamily.pl
echo "ancester(X,Y) <-- grandparent(X,Y)." >> aSmallFamily.pl
#echo "ancester(X,X)." >> aSmallFamily.pl
###Output
_____no_output_____
###Markdown
We can check all the axioms and rules that we have added:
###Code
cat aSmallFamily.pl
###Output
parent(cyrill, renaud).
parent(cyrill, claire).
parent(renaud, clovis).
parent(valentin, olivier).
parent(claire, olivier).
parent(renaud, claudia).
parent(claire, gaelle).
brothersister(X,Y) <-- parent(Z,X), parent(Z,Y).
grandparent(X,Y) <-- parent(X,Z), parent(Z,Y).
cousin(X,Y) <-- grandparent(Z,X), grandparent(Z,Y).
ancester(X,Y) <-- parent(X,Y).
ancester(X,Y) <-- grandparent(X,Y).
###Markdown
Questions: - Olivier's ancestors are Valentin, Claire and Cyrill:
###Code
../prolog/prolog aSmallFamily.pl "parent(X,olivier)."
../prolog/prolog aSmallFamily.pl "grandparent(X,olivier)."
../prolog/prolog aSmallFamily.pl "ancester(X,olivier)."
###Output
?- ancester(X,olivier).
{ X = valentin }
{ X = claire }
{ X = cyrill }
###Markdown
- The common ancestor of Olivier and Renaud is Cyrill:
###Code
../prolog/prolog aSmallFamily.pl "ancester(olivier,X),ancester(renaud,X)."
../prolog/prolog aSmallFamily.pl "ancester(X,olivier),ancester(X,renaud)."
###Output
?- ancester(X,olivier),ancester(X,renaud).
{ X = cyrill }
###Markdown
- Claudia and Gaëlle are not sisters, but they are cousins:
###Code
../prolog/prolog aSmallFamily.pl "brothersister(gaelle,claudia)." # false
../prolog/prolog aSmallFamily.pl "cousin(gaelle,claudia)." # true
###Output
?- brothersister(gaelle,claudia).
?- cousin(gaelle,claudia).
{ }
###Markdown
- Claudia is Clovis's sister, and Olivier and Gaëlle are her cousins:
###Code
../prolog/prolog aSmallFamily.pl "brothersister(X,clovis)."
../prolog/prolog aSmallFamily.pl "cousin(X,clovis)."
###Output
?- brothersister(X,clovis).
{ X = clovis }
{ X = claudia }
?- cousin(X,clovis).
{ X = clovis }
{ X = claudia }
{ X = olivier }
{ X = gaelle }
###Markdown
Example Notebook
###Code
import seaborn as sns
# Load the example miles per gallon dataset
mpg = sns.load_dataset('mpg')
# Plot mpg vs. horsepower
sns.relplot(data=mpg, x='horsepower', y='mpg', hue='origin', size='weight',
sizes=(20, 200), alpha=0.5, palette="muted", height=4);
###Output
_____no_output_____
###Markdown
KNMI module demonstration
###Code
import knmi_stations as knmi
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize']=[15,10]
###Output
_____no_output_____
###Markdown
The module uses open data shapefiles that can be downloaded using the *dlshp* function.
###Code
knmi.dlshp()
###Output
_____no_output_____
###Markdown
The function *map* plots the locations of the KNMI weather stations on a map of the Netherlands. A *label* option can be supplied; for *label="name"*, the station locations are labeled by name.
###Code
knmi.map(label="name")
plt.show()
###Output
_____no_output_____
###Markdown
For *label="temp"*, each station is labeled by the temperature at that location.
###Code
knmi.map(label="temp",timestamp=True)
plt.show()
###Output
_____no_output_____
###Markdown
*contour* imputes the temperature at each location on the map using inverse distance weighting and provides a contour plot based on these values.
###Code
knmi.contour()
plt.show()
###Output
_____no_output_____
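###Markdown
For reference, the standard inverse distance weighting estimate (assumed here; the module's implementation may differ in details) of the temperature at a map location $x$ from the station values $z_i$ is \begin{equation} \hat{z}(x) = \frac{\sum_i d_i(x)^{-p}\, z_i}{\sum_i d_i(x)^{-p}} \end{equation} where $d_i(x)$ is the distance from $x$ to station $i$ and $p$ is the power parameter.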
###Markdown
The inverse distance weighting power parameter is set to 4.5 by default, but can be lowered to give less weight to closer stations or raised to give them more weight.
###Code
knmi.contour(p=4)
plt.show()
###Output
_____no_output_____
###Markdown
An example for clinical concept extraction with visualization We highly recommend our [sentence segment tool](https://github.com/noc-lab/simple_sentence_segment) for detecting sentence boundaries if the text contains arbitrary line breaks, such as the sample text below. To use this package, just run```pip install git+https://github.com/noc-lab/simple_sentence_segment.git```If the installation above does not work, installing from a release is better (already included in this repo):```https://github.com/noc-lab/simple_sentence_segment/releases/tag/v0.1.3```Alternatively, you can use the sentence segmentation tools in NLTK or spaCy. You can also use tokenization tools other than NLTK, but this example uses NLTK for illustrative purposes.
###Code
import warnings
warnings.filterwarnings("ignore")
from spacy import displacy
from IPython.core.display import display, HTML
# building the ELMo embeddings prediction graph and the clinical concept extraction graph
# might take a while, about 20~30 seconds
from clinical_concept_extraction import ClinicalConceptExtraction
from clinical_concept_extraction.utils import build_display_elements
clinical_concept_extraction = ClinicalConceptExtraction(models_path='/home/omar/Desktop/cce_assets')
# An example of a discharge summary that contains arbitrary line breaks. This report is fabricated.
sample_text = """
This is an 119 year old woman with a history of diabetes who has a CT-scan at 2020-20-20. Insulin is prescribed for the type-2 diabetes. Within the past year, the diabetic symptoms have progressively gotten worse.
"""
# clinical_concept_extraction.extract_concepts takes the sample text and a batch size (sentences per batch) as input and outputs the annotations
all_annotations_of_sample_text = clinical_concept_extraction.extract_concepts(sample_text, batch_size=3, as_one_batch=False)
all_annotations_of_sample_text
ent = build_display_elements(all_annotations_of_sample_text)
ent
ent_inp = {
'text': sample_text,
'ents': ent,
'title': ''
}
colors = {'PROBLEM': '#fe4a49', 'TEST': '#fed766', 'TREATMENT': '#2ab7ca'}
options = {'colors': colors}
html = displacy.render(ent_inp, style='ent', manual=True, options=options)
display(HTML(html))
###Output
_____no_output_____
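###Markdown
As mentioned above, NLTK's sentence tokenizer is an alternative for splitting the text into sentences. A minimal sketch (assuming NLTK is installed; it only illustrates the segmentation step, not the concept extraction):
###Code
import nltk
nltk.download('punkt', quiet=True)  # tokenizer model used by sent_tokenize
from nltk.tokenize import sent_tokenize

# collapse the arbitrary line breaks before splitting into sentences
sentences = sent_tokenize(sample_text.replace('\n', ' ').strip())
sentences
###Output
_____no_output_____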
###Markdown
Using equation with LaTeX notation in a markdown cellThe well known Pythagorean theorem $x^2 + y^2 = z^2$ was proved to be invalid for other exponents. Meaning the next equation has no integer solutions:\begin{equation} x^n + y^n = z^n \end{equation}
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
title='About as simple as it gets, folks')
ax.grid()
fig.savefig("test.png")
plt.show()
###Output
_____no_output_____
###Markdown
TransNet BasicsThis notebook shows some basic usages of TransNet. 1. Building the transition dataset The core of TransNet is the `TransDataset` class, which generates unified transition samples from the original annotations. To use it, please provide the correct paths to the original annotations of JAAD, PIE and TITAN as a dictionary. Please follow the default form below.
###Code
anns_paths = {'JAAD': {'anns': 'DATA/annotations/JAAD/JAAD_DATA.pkl',
'split': 'DATA/annotations/JAAD/splits'},
'PIE': {'anns': 'DATA/annotations/PIE/PIE_DATA.pkl'},
'TITAN': {'anns': 'DATA/annotations/TITAN/titan_0_4',
'split':'DATA/annotations/TITAN/splits' }
}
trans_data = TransDataset(data_paths=anns_paths, image_set="train", verbose=False)
###Output
_____no_output_____
###Markdown
Note : Only provide the paths for the datasets to be used. `TransDataset` works normally with arbitrary subsets of supported datasets. For example, if you would like to use TITAN alone, just specify only the path to the annotations of TITAN.
###Code
anns_paths_titan = {'TITAN': {'anns': 'DATA/annotations/TITAN/titan_0_4',
'split':'DATA/annotations/TITAN/splits' }
}
trans_data_titan = TransDataset(data_paths=anns_paths_titan, image_set="train", verbose=False)
###Output
_____no_output_____
###Markdown
Additionally, the `JAAD` dataset has three different settings of video splits, namely `default`, `all_videos` and `high_visibility`. Without specification we use `default`. To change it, please use the `subset` argument of `TransDataset`:
###Code
trans_data_jaad_all = TransDataset(data_paths=anns_paths, image_set="train", subset='all_videos',verbose=False)
###Output
_____no_output_____
###Markdown
Use `extract_trans_history()` to collect transition instances. You can use `mode` and `fps` to specify the desired transition type and sampling rate, respectively.
###Code
samples = trans_data.extract_trans_history(mode='STOP', fps=10, verbose=True)
###Output
Extract 609 STOP history samples from train dataset,
samples contain 539 unique pedestrians and 33647 frames.
###Markdown
`extract_trans_frame()` is for extracting the particular frames where the transitions happen.
###Code
samples_frame = trans_data.extract_trans_frame(mode='GO',verbose=True)
###Output
Extract 561 GO frame samples from train dataset,
samples contain 499 unique pedestrians.
###Markdown
2. Data loadingWe provide customized PyTorch dataloaders for data loading, namely `FrameDataset` for frame samples and `SequenceDataset` for history samples. Here we demonstrate how to use `SequenceDataset` (a `torch.utils.data.Dataset`) for reading sequential images and annotations. First provide the path to the image root directory:
###Code
image_dir = 'DATA/images'
###Output
_____no_output_____
###Markdown
Use `SequenceDataset` to convert the dictionary of history samples into `torch.utils.data.Dataset`
###Code
sequences = SequenceDataset(samples, image_dir=image_dir, preprocess=None)
###Output
_____no_output_____
###Markdown
Now each history sample is in the form of stacked tensors. More precisely, every instance contains * `image` : stacked tensors of size $L\times C\times H\times W$, where $L$ is the length of the history sequence, $C$, $H$, $W$ are the number of image channels, height and width respectively.* `bbox`: $L$ bounding boxes of the targeted pedestrian using two-point coordinates (top-left, bottom-right) `[x1, y1, x2, y2]`. One per frame.* `id`: a string representing the id of an individual sample, for example `TS_0266_train` indicates the 266th "STOP" transition sample from the TITAN training set. Now let's do some visualizations. First choose one instance among the history samples:
###Code
history_sample = sequences.__getitem__(604) # choose one history sample
###Output
_____no_output_____
###Markdown
check the dimensions and sample id:
###Code
print("The size of image tensors: ", history_sample['image'].size())
print("The number of bounding bboxs: ", len(history_sample['bbox']))
print("sample id: ", history_sample['id'])
###Output
The size of image tensors: torch.Size([113, 3, 1520, 2704])
The number of bounding bboxs: 113
sample id: TS_0266_train
###Markdown
You can use `BaseVisualizer` to visualize the history sample:
###Code
visualizer = BaseVisualizer(history_sample)
###Output
_____no_output_____
###Markdown
To show a specific frame, for example plot the 100th image in the history sample use `show_frame()`
###Code
visualizer.show_frame(k=100, title=None)
###Output
_____no_output_____
###Markdown
or you can use `show_history()` to view the entire sequence:
###Code
visualizer.show_history(wait_key=0)
###Output
_____no_output_____
###Markdown
To loop through all the examples you can simply use `torch.utils.data.DataLoader`
###Code
train_loader = torch.utils.data.DataLoader(sequences, batch_size=1, shuffle=True)  # wrap the SequenceDataset, not the raw samples dict
###Output
_____no_output_____
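###Markdown
A minimal sketch of consuming the loader (assuming the default collate behaviour with `batch_size=1`, where tensors gain a leading batch dimension and the `id` strings come back as a list):
###Code
# iterate over the loader; stop after the first batch just to inspect it
for batch in train_loader:
    print(batch['image'].shape, batch['id'])
    break
###Output
_____no_output_____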
###Markdown
**iSEEEK FeatureExtractor**
###Code
### Definition of iSEEEK FeatureExtractor
def iseeek_feature(model, model_vocab, top_ranking_gene_list = []):
    Xs = []
    for s in tqdm(top_ranking_gene_list):
        # BERT-style token sequence: [CLS] + top-ranked genes (truncated to 126) + [SEP]
        a = ['[CLS]'] + s.split()[0:126] + ['[SEP]']
        # map gene symbols to vocabulary ids and move everything to the GPU
        input_ids = torch.tensor([model_vocab[k] for k in a]).unsqueeze(0).cuda()
        token_type_ids = torch.zeros_like(input_ids).cuda()
        attention_mask = torch.ones_like(input_ids).cuda()
        # run the frozen iSEEEK model to obtain the cell embedding
        with torch.no_grad():
            feature = model(input_ids, token_type_ids, attention_mask)
        Xs.append(feature.cpu())
    # stack the per-cell embeddings into a feature data frame
    Xs = torch.cat(Xs)
    features = pd.DataFrame(Xs.numpy(), columns=['Feature{}'.format(i) for i in range(Xs.shape[1])])
    return features
###Output
_____no_output_____
###Markdown
**Model** **&** **Tokenizer** **Loading**
###Code
!gdown https://drive.google.com/uc?id=1qorygy9HgJSGMgkv0QKdtDfW-K9o3wCY ### Download the vocabulary of the gene tokenizer.
!gdown https://drive.google.com/uc?id=1WEc6v4mG1plPTPMaeLvl7hR1JGPHnUBn ### Download the pre-trained iSEEEK model.
model_vocab = pickle.load(open('iSEEEK_vocab.pkl',"rb")) ### Load the vocabulary of the gene tokenizer.
genes = model_vocab.keys()
model = torch.jit.load("iSEEEK.pt") ### Load the iSEEEK model.
model = model.cuda()
model.eval()
print("###End loading model###")
###Output
Downloading...
From: https://drive.google.com/uc?id=1qorygy9HgJSGMgkv0QKdtDfW-K9o3wCY
To: /content/iSEEEK_vocab.pkl
100% 242k/242k [00:00<00:00, 34.5MB/s]
Downloading...
From: https://drive.google.com/uc?id=1WEc6v4mG1plPTPMaeLvl7hR1JGPHnUBn
To: /content/iSEEEK.pt
100% 150M/150M [00:00<00:00, 158MB/s]
###End loading model###
###Markdown
**Data Preparing**
###Code
!gdown https://drive.google.com/uc?id=1sLEMyCDv05nBGqHqFX6QoJ54RiguUrww
!gdown https://drive.google.com/uc?id=1RoP9ygs2oETIRif9royAzaB1CGlftK5m
!gdown https://drive.google.com/uc?id=1aLMDhZ6qtGsEJpDbazXEFhvq_0trQyx5
top_ranking_genes = [i for i in open("gene_rank_HCA_immune_processed.txt")]
label = [i for i in open("labels_HCA_immune_processed.txt")]
batch = [i for i in open("batch_HCA_immune_processed.txt")]
###Output
Downloading...
From: https://drive.google.com/uc?id=1sLEMyCDv05nBGqHqFX6QoJ54RiguUrww
To: /content/batch_HCA_immune_processed.txt
3.39MB [00:00, 106MB/s]
Downloading...
From: https://drive.google.com/uc?id=1RoP9ygs2oETIRif9royAzaB1CGlftK5m
To: /content/gene_rank_HCA_immune_processed.txt
255MB [00:02, 106MB/s]
Downloading...
From: https://drive.google.com/uc?id=1aLMDhZ6qtGsEJpDbazXEFhvq_0trQyx5
To: /content/labels_HCA_immune_processed.txt
4.19MB [00:00, 110MB/s]
###Markdown
**iSEEEK Feature Extraction**
###Code
iseeek_Xs = iseeek_feature(model,model_vocab,top_ranking_genes)
print(iseeek_Xs)
###Output
100%|██████████| 282558/282558 [47:07<00:00, 99.94it/s]
###Markdown
**Single-cell Clustering*** Develop KNN-graph* Single-cell Clustering (Leiden/louvain)* Visualization
###Code
adata = sc.AnnData(iseeek_Xs)
adata.obs['celltype'] = label
adata.obs['celltype'] = adata.obs['celltype'].astype("category")
adata.obs['batch'] = batch
adata.obs['batch'] = adata.obs['batch'].astype("category")
sc.pp.neighbors(adata, use_rep="X")
sc.tl.umap(adata)
sc.tl.leiden(adata)
################## Cell Type #######################
sc.pl.umap(adata, color = ["celltype"], show = True)
##################### Batch #######################
sc.pl.umap(adata, color = ["batch"], show = True)
############ Single-cell Clustering #############
sc.pl.umap(adata, color = ["leiden"], show = True)
###Output
_____no_output_____
###Markdown
**Diffusion-Pseudotime Analysis**
###Code
adata = pegasusio.multimodal_data.MultimodalData(sc.AnnData(iseeek_Xs))
adata.obs['celltype'] = [i.strip() for i in label]
adata.obs['celltype'] = adata.obs['celltype'].astype("category")
adata.obsm["X_pca"] = np.asarray(iseeek_Xs)
pg.neighbors(adata, K=30)
pg.diffmap(adata)
pg.fle(adata)
###Output
_____no_output_____
###Markdown
**Diffusion-Pseudotime Visualization**
###Code
pg.scatter(adata, attrs=["celltype"],show=True,basis='fle')
###Output
_____no_output_____
###Markdown
This is a small tutorial on data cleaning, preprocessing and the idea of a pipeline for turning raw data into clean data. The primary purpose of this tutorial is to showcase the benefits of my package 'preprocessor' in the above-mentioned scenarios.
###Code
from preprocessor.misc import read_csv
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Our read_csv is a wrapper over the pandas function of the same name that is better at reading datetime columns.
###Code
data = read_csv("example.csv", verbose =True, encoding = 'latin')
data.head()
###Output
Trying to read datetime columns
###Markdown
Let's look at the values in each column.
###Code
data['mixed'].value_counts()
###Output
_____no_output_____
###Markdown
At first glance we can tell that the first column, 'mixed', has very small and very large numbers as well as text in some of its values. Now let's look at the second column.
###Code
data['cat'].value_counts()
###Output
_____no_output_____
###Markdown
The second column looks like a regular categorical column; however, we can see there are some values which have just one occurrence.
###Code
data['date'].value_counts()[:10]
###Output
_____no_output_____
###Markdown
The third column looks like a regular datetime column, but let's say we want to use it in a machine learning model; in that case we can't use dates in their raw form.
###Code
data['num'].value_counts()
###Output
_____no_output_____
###Markdown
Finally, a regular numerical column, with NaNs of course. Now let's use our preprocessor package to create a pipeline for this data.
###Code
verbose = True
# First lets try to deal with the mixed column(s)
from preprocessor.feature_extractor import extract_numericals_forall
# extract_numericals_forall goes through all categorical columns and tries to see if it is a mixed column
# if it is, it creates a new column with the column name + '_numerical' as suffix and puts all numerical
# values in the new column
df1 = extract_numericals_forall(data,verbose =verbose)
df1.head()
###Output
creating cat_numerical: 100%|████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 6.01it/s]
###Markdown
We see we were able to generate two new numerical columns; let's analyze them.
###Code
print(df1['mixed_numerical'].value_counts(), df1['cat_numerical'].value_counts().nlargest(10))
###Output
0.000000e+00 257
1.000000e+02 157
1.000000e+00 69
2.000000e+00 55
3.000000e+00 31
...
9.999841e+01 1
1.449463e+06 1
9.999750e+01 1
4.278300e+05 1
1.638170e+05 1
Name: mixed_numerical, Length: 1012, dtype: int64 5.000000e+51 588
2.000000e+51 351
5.000000e+52 348
7.000000e+01 340
5.000000e+71 336
7.000000e+41 180
4.000000e+91 150
2.000000e+52 137
7.000000e+21 132
8.000000e+53 129
Name: cat_numerical, dtype: int64
###Markdown
We see that mixed_numerical looks like a sensible column, while at first glance cat_numerical seems to be constructed of values in the cat column which just happen to be interpretable as numericals. Therefore we decide to extract numericals from just one column instead of all categorical columns in our pipeline.
###Code
from preprocessor.feature_extractor import extract_numericals
df1 = extract_numericals(data,col = 'mixed',verbose =verbose)
df1.head()
###Output
creating mixed_numerical
###Markdown
Now that we have extracted the numbers from 'mixed', we want to remove numericals from the actual 'mixed' column in order to make it purely categorical
###Code
from preprocessor.imputer import remove_numericals_from_categories
# Delete all numericals from 'mixed' column
df2 = remove_numericals_from_categories(df1,include=['mixed'],verbose =verbose)
df2['mixed'].value_counts()
###Output
Removing numericals from categorical columns provided in Include 1
###Markdown
At this point we can assume that 'mixed' is not a useful column anymore, so we can delete it. Since our df2 object is a regular pandas data structure, we can use all pandas functions without any problem.
###Code
df3 = df2.drop(['mixed'], axis=1)
df3.head()
###Output
_____no_output_____
###Markdown
Awesome, now that we have dealt with 'mixed', let's focus on the date column.
###Code
# Lets try to extract some features from the date column
from preprocessor.feature_extractor import extract_datetime_features_forall
from preprocessor.imputer import remove_datetimes
# extract_datetime_features_forall goes through all datetime columns and tries to extract 15 predefined features from them
df4 = extract_datetime_features_forall(df3,verbose =verbose)
# since we have the features extracted, no need for keeping datetime columns anymore
df4 = remove_datetimes(df4, verbose = verbose)
df4.head()
###Output
Extracting 15 datetime features from date: 100%|█████████████████████████████████████████| 1/1 [00:03<00:00, 3.53s/it]
###Markdown
Great, so we have extracted 15 features from one datetime column, and we could have extracted n*15 where n is the number of datetime columns in the data. Next, let's see if we have any infs or -infs in the data; let's say for now I want to make a new feature column whenever we find infs in a column.
###Code
# Lets try to extract some features from the date column
from preprocessor.feature_extractor import extract_is_inf_forall
# extract_is_inf_forall goes through all numerical columns and create a new column for infs if any
# if seperate_ninf is true then the new column has 3 unique values (1 for inf, 0 for no inf & -1 for ninf)
# otherwise the new column is a boolean which is true if any kind of inf was encountered
df5 = extract_is_inf_forall(df4,verbose =verbose, seperate_ninf = True)
df5.head()
###Output
Adding num_isinf column: 100%|█████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 32.79it/s]
###Markdown
Since no infs were found in any of the data, we might conclude that we will never encounter infs in our data and hence never include this step in our final pipeline. Now, if we remember correctly, some of our numerical columns had numbers of varying range, maybe outliers which might affect the rest of our statistics. So let's identify outliers and remove them if we find any.
###Code
from preprocessor.feature_extractor import extract_is_outlier_forall
# extract_is_outlier_forall goes through all numerical columns and against each column creates a new boolean column which has true if the value is marked outlier
# replace_with if None then leaves outliers intact in the actual column
df6 = extract_is_outlier_forall(df4,verbose =verbose, replace_with = np.nan)
df6.head()
###Output
Replacing outliers in num with nan: 100%|██████████████████████████████████████████████| 12/12 [00:00<00:00, 17.27it/s]
###Markdown
Looks like we could only find outliers for some datetime features; depending on whether we consider this valid in the context of our problem, we can ignore or keep this step. For this specific example, in the final pipeline, I'll move this step before extracting datetime features in order not to search for outliers in datetime feature columns. Now let's deal with NaNs in our numerical columns; we will first create an _isnull column against all numerical columns to preserve the NaN information.
###Code
from preprocessor.feature_extractor import extract_is_nan_forall
# extract_is_nan_forall goes through all numerical columns and against each column creates a new boolean column which has
# true if the value was nan, this perserves the information of nans for when we finally substitute a valid numerical
# value against all nulls
df7 = extract_is_nan_forall(df6,verbose =verbose)
df7.head()
###Output
Adding num_isnull column: 100%|███████████████████████████████████████████████████████| 17/17 [00:00<00:00, 102.41it/s]
###Markdown
Once we have all the required information preserved, let's replace NaNs with the median for each column; means are usually susceptible to outliers (one can't be too careful!).
###Code
from preprocessor.imputer import fillnans
# fillnans goes through all numerical columns and fills nans with either Mean, Median or any other provided value
df8 = fillnans(df7,verbose =verbose, by = 'median')
df8.head()
###Output
Filling nans in all columns
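###Markdown
As a quick illustration of why the median is the safer choice here, consider a toy series with a single outlier (hypothetical values, not from our data):
###Code
s = pd.Series([1, 2, 3, 1000, np.nan])
# the mean is dragged towards the outlier, the median is not
print("mean:", s.mean(), "median:", s.median())
###Output
_____no_output_____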
###Markdown
We have come this far; now all that remains is our categorical column. We will simply one-hot encode it; however, we will use a cutoff of 100 (an educated guess, which might differ for your data). The idea is to not create new columns for a value that only occurred an insignificant number of times in the data.
###Code
from preprocessor.feature_extractor import onehot_encode_all
# onehot_encode_all goes through all categorical columns and one hot encodes them
# allow_na if True means treat None or Null as a class
# onehot_encode_all drops the actual column after one hot encoding it
df9 = onehot_encode_all(df8,verbose =verbose, cutoff = 100, cutoff_class = 'other', allow_na = True)
df9.head()
###Output
Dropping cat: 100%|██████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.58it/s]
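###Markdown
Conceptually, the cutoff behaves like the following plain-pandas sketch (an illustration of the idea, not the package's actual implementation): rare categories are collapsed into a single 'other' class before one hot encoding.
###Code
counts = data['cat'].value_counts()
rare = counts[counts < 100].index
collapsed = data['cat'].where(~data['cat'].isin(rare), 'other')
pd.get_dummies(collapsed, dummy_na=True).head()
###Output
_____no_output_____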
###Markdown
Finally, it looks like our data is all numbers, with no NaNs and no nulls, and almost ready for any algorithm that operates on numbers. But wait! Some algorithms are really sensitive when features are on very different scales of magnitude, therefore we normalize.
###Code
from sklearn.preprocessing import RobustScaler, StandardScaler
from preprocessor.imputer import normalize
# normalize, normalizes each column of the dataframe using provided scaler
df_scaled_standard = normalize(df9, verbose = verbose) # StandardScaler is the default scaler
# You can read benefits of robust scaler at
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html
df_scaled_robust = normalize(df9, verbose = verbose, scaler = RobustScaler())
###Output
Normalizing numbers from all columns
Normalizing numbers from all columns
###Markdown
Let's visualize the changes in the scale of a feature after scaling.
###Code
from preprocessor.plotter import plot_line
%matplotlib inline
col = 'num'
plot_line(data,col,title = 'raw')
plot_line(df9,col,title = 'unscaled_processed')
plot_line(df_scaled_standard,col,title = 'standard scaled')
plot_line(df_scaled_robust,col,title = 'robust scaled')
###Output
C:\Users\Ahsan\.conda\envs\preprocessor\lib\site-packages\matplotlib\figure.py:98: MatplotlibDeprecationWarning:
Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
"Adding an axes using the same arguments as a previous axes "
###Markdown
Looking at the results, we can choose whichever feature scaling suits us and work with it. Final pipeline
###Code
# imports
from preprocessor.feature_extractor import extract_is_nan_forall, extract_is_outlier_forall,extract_datetime_features_forall,\
extract_is_inf_forall, onehot_encode_all, extract_numericals_forall, extract_numericals
from preprocessor.imputer import fillnans, remove_datetimes, fillinfs, normalize, remove_numericals_from_categories,\
remove_single_value_features
verbose = False
df10 = data
df10 = extract_numericals(df10,col = 'mixed',verbose =verbose)
df10 = remove_numericals_from_categories(df10,include=['mixed'],verbose =verbose)
df10 = df10.drop(['mixed'], axis=1)
df10 = extract_is_outlier_forall(df10,verbose =verbose, replace_with = np.nan)
df10 = extract_datetime_features_forall(df10,verbose =verbose)
df10 = remove_datetimes(df10, verbose = verbose)
df10 = extract_is_nan_forall(df10,verbose =verbose)
df10 = fillnans(df10,verbose =verbose, by = 'median')
df10 = onehot_encode_all(df10,verbose =verbose, cutoff = 100, cutoff_class = 'other', allow_na = True)
df10 = normalize(df10, verbose = verbose)
df10.head()
###Output
Removing numericals from mixed: 100%|████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.15it/s]
Replacing outliers in num with nan: 100%|████████████████████████████████████████████████| 2/2 [00:00<00:00, 18.69it/s]
Useless columns found 0 : 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 333.34it/s]
Extracting 15 datetime features from date: 100%|█████████████████████████████████████████| 1/1 [00:03<00:00, 3.50s/it]
Useless columns found 5: 100%|████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 555.53it/s]
Adding num_isnull column: 100%|███████████████████████████████████████████████████████| 14/14 [00:00<00:00, 184.20it/s]
Useless columns found 0 : 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 499.35it/s]
Filling nans in date_is_month_end with 0.0: 100%|█████████████████████████████████████| 16/16 [00:00<00:00, 197.52it/s]
Dropping cat: 100%|██████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3.15it/s]
Useless columns found 1: 100%|██████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 333.23it/s]
###Markdown
Start Guide for the devicely package Install devicelyInstalling devicely is as easy as executing `pip install devicely`.To run this notebook, get the data by cloning [this repository](https://github.com/hpi-dhc/devicely-documentation-sample-data) in the same directory as this notebook.
###Code
import os
import devicely
import pandas as pd
pd.options.mode.chained_assignment = None
base_path = 'devicely-documentation-sample-data'
###Output
_____no_output_____
###Markdown
Empatica E4The Empatica E4 wristband can be used to obtain inter-beat interval, electrodermal activity, heart rate, temperature and blood volume pulse data. The wristband uses [this directory structure](https://github.com/jostmorgenstern/devicely-documentation-sample-data/tree/main/Empatica) for its measurement data. The `tags.csv` file contains the timestamps of important events and is optional. The EmpaticaReader can only be created if the remaining csv files are present. Read the dataCreate an EmpaticaReader object:
###Code
empatica_reader = devicely.EmpaticaReader(os.path.join(base_path, 'Empatica'))
###Output
_____no_output_____
###Markdown
Access the sampling frequencies and starting times for all signals:
###Code
empatica_reader.start_times
empatica_reader.sample_freqs
###Output
_____no_output_____
###Markdown
Access the individual dataframes via the attributes ACC, BVP, EDA, HR, TEMP, IBI and tags:
###Code
empatica_reader.HR.head()
###Output
_____no_output_____
###Markdown
Access a joined dataframe of all signals:
###Code
empatica_reader.data.head()
###Output
_____no_output_____
###Markdown
The dataframe contains nan values because the individual signals have different sampling frequencies. Timeshift the data:Apply a timeshift:
###Code
empatica_reader.timeshift()
empatica_reader.start_times
###Output
_____no_output_____
###Markdown
By providing no parameter to `timeshift` the data is shifted by a random time interval between one month and two years to the past. You can also provide a `pandas.Timedelta` object to shift the data by that timedelta or a `pandas.Timestamp` object to shift your data such that this timestamp is the earliest entry. Write the data:
###Code
empatica_write_path = os.path.join(base_path, 'Empatica_write_dir')
empatica_reader.write(empatica_write_path)
os.listdir(empatica_write_path)
###Output
_____no_output_____
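###Markdown
A minimal sketch of the two explicit timeshift options described above (the offsets are arbitrary example values):
###Code
# shift all signals by a fixed timedelta
empatica_reader.timeshift(pd.Timedelta('-30 days'))
# or shift so that the earliest entry starts at a given timestamp
empatica_reader.timeshift(pd.Timestamp('2019-01-01'))
empatica_reader.start_times
###Output
_____no_output_____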
###Markdown
SpaceLabs Monitoring SystemSpaceLabs uses [a single file](https://github.com/jostmorgenstern/devicely-documentation-sample-data/blob/main/Spacelabs/spacelabs.abp) to output metadata as well as the actual signals. Read the dataCreate a `SpacelabsReader` object:
###Code
spacelabs_reader = devicely.SpacelabsReader(os.path.join(base_path, 'Spacelabs', 'spacelabs.abp'))
###Output
_____no_output_____
###Markdown
Access the metadata:
###Code
spacelabs_reader.subject
spacelabs_reader.metadata
###Output
_____no_output_____
###Markdown
Access the signal dataframe:
###Code
spacelabs_reader.data.head()
###Output
_____no_output_____
###Markdown
Timeshift the data:Apply a timeshift:
###Code
spacelabs_reader.timeshift()
spacelabs_reader.data.head()
###Output
_____no_output_____
###Markdown
By providing no parameter to `timeshift` the data is shifted by a random time interval between one month and two years to the past. You can also provide a `pandas.Timedelta` object to shift the data by that timedelta or a `pandas.Timestamp` object to shift your data such that this timestamp is the earliest entry. Bittium FarosThe Faros device outputs data in [EDF files](https://www.edfplus.info/specs/edf.html). These are specifically made for health sensor data and not human-readable. Read the data:
###Code
faros_reader = devicely.FarosReader(os.path.join(base_path, 'Faros', 'faros.EDF'))
###Output
_____no_output_____
###Markdown
Access metadata:
###Code
faros_reader.start_time
faros_reader.sample_freqs
faros_reader.units
###Output
_____no_output_____
###Markdown
You can access the individual signals via the `ECG`, `ACC`, `HRV` and `Marker` attributes:
###Code
faros_reader.ACC.head()
###Output
_____no_output_____
###Markdown
Access a joined dataframe of all signals:
###Code
faros_reader.data.head()
###Output
_____no_output_____
###Markdown
Timeshift the dataApply a timeshift:
###Code
faros_reader.timeshift()
faros_reader.data.head()
###Output
_____no_output_____
###Markdown
By providing no parameter to `timeshift` the data is shifted by a random time interval between one month and two years to the past. You can also provide a `pandas.Timedelta` object to shift the data by that timedelta or a `pandas.Timestamp` object to shift your data such that this timestamp is the earliest entry. Write the dataYou can write back the data in the original EDF format or to a directory of individual signal files. Writing to a directory is the preferred method. You can find out why this is the case in our module reference.
###Code
faros_write_path = os.path.join(base_path, 'Faros_write')
faros_reader.write(faros_write_path)
os.listdir(faros_write_path)
###Output
_____no_output_____
###Markdown
You can also create a FarosReader from a written directory:
###Code
new_faros_reader = devicely.FarosReader(faros_write_path)
new_faros_reader.data.head()
###Output
_____no_output_____
###Markdown
Biovotion EverionThe Everion device outputs data in [multiple csv files](https://github.com/jostmorgenstern/devicely-documentation-sample-data/tree/main/Everion). Each csv file has a `tag` column which specifies the type of measurement. You can see the different tags and what they mean by looking at `EverionReader.SIGNAL_TAGS`, `EverionReader.SENSOR_TAGS` and `EverionReader.FEATURE_TAGS`.
###Code
devicely.EverionReader.FEATURE_TAGS
###Output
_____no_output_____
###Markdown
Read the data
###Code
everion_reader = devicely.EverionReader(os.path.join(base_path, 'Everion'))
###Output
_____no_output_____
###Markdown
If you would like to keep only certain tags, you can specify this when initializing the reader.Access the individual dataframes via the aggregates, analytics_events, attributes_dailys, everion_events, features, sensors and signals attributes:
###Code
everion_reader.signals.head()
###Output
_____no_output_____
###Markdown
Access a joined dataframe of all signals:
###Code
everion_reader.data.head()
###Output
_____no_output_____
###Markdown
Timeshift the dataApply a timeshift:
###Code
everion_reader.timeshift()
everion_reader.data.head()
###Output
_____no_output_____
###Markdown
By providing no parameter to `timeshift` the data is shifted by a random time interval between one month and two years to the past. You can also provide a `pandas.Timedelta` object to shift the data by that timedelta or a `pandas.Timestamp` object to shift your data such that this timestamp is the earliest entry. Write the dataWrite the data to a directory while keeping the same format as the original. If you used only a subset of tags when initializing the reader, only these tags will be written.
###Code
everion_write_path = os.path.join(base_path, 'Everion_write')
everion_reader.write(everion_write_path)
os.listdir(everion_write_path)
###Output
_____no_output_____
###Markdown
ShimmerShimmer uses a [single CSV file](https://github.com/jostmorgenstern/devicely-documentation-sample-data/blob/main/Shimmer/shimmer.csv), indexed by time of measurement. Read the data
###Code
shimmer_reader = devicely.ShimmerPlusReader(os.path.join(base_path, 'Shimmer', 'shimmer.csv'))
shimmer_reader.data.head()
###Output
_____no_output_____
###Markdown
Timeshift the dataApply a timeshift:
###Code
shimmer_reader.timeshift()
shimmer_reader.data.head()
###Output
_____no_output_____
###Markdown
By providing no parameter to `timeshift` the data is shifted by a random time interval between one month and two years to the past. You can also provide a `pandas.Timedelta` object to shift the data by that timedelta or a `pandas.Timestamp` object to shift your data such that this timestamp is the earliest entry. Write the data
###Code
shimmer_reader.write(os.path.join(base_path, 'Shimmer', 'shimmer_write.csv'))
###Output
_____no_output_____
###Markdown
TagsYou can use the `TimeStampReader` to read data created by the Android app TimeStamp. Researchers use this app to mark important times during experiments. The format is simple, as can be seen in this [example file](https://github.com/jostmorgenstern/devicely-documentation-sample-data/blob/main/Tags/tags.csv). Read the data
###Code
timestamp_reader = devicely.TimeStampReader(os.path.join(base_path, 'Tags', 'tags.csv'))
timestamp_reader.data.head()
###Output
_____no_output_____
###Markdown
Timeshift the dataApply a timeshift:
###Code
timestamp_reader.timeshift()
timestamp_reader.data.head()
###Output
_____no_output_____
###Markdown
By providing no parameter to `timeshift` the data is shifted by a random time interval between one month and two years to the past. You can also provide a `pandas.Timedelta` object to shift the data by that timedelta or a `pandas.Timestamp` object to shift your data such that this timestamp is the earliest entry. Write the data
###Code
tag_write_path = os.path.join(base_path, 'Tags', 'tags_write.csv')
timestamp_reader.write(tag_write_path)
###Output
_____no_output_____
###Markdown
U-NetSimple U-Net implementation in pytorch.See [Ronneberger, et al.: U-Net: Convolutional Networks for Biomedical Image Segmentation (2015), arXiv: 1505.04597 \[cs.CV\]](https://arxiv.org/pdf/1505.04597.pdf)for more information.[MIT License](LICENSE.md) Example usageCreate a basic U-Net, as specified in the research paper. See the docs for customization options.
###Code
import torch
import torch.nn.functional as F
from src.unet import UNet
###Output
_____no_output_____
###Markdown
Select training device (CPU or GPU).
###Code
dev = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if not torch.cuda.is_available():
print("Consider using a GPU if possible to accelerate training")
###Output
_____no_output_____
###Markdown
Create a U-Net instance, taking a RGB image (3 channels) and outputting a 2 channel image, corresponding to two segmentation classes.
###Code
net = UNet(in_channels=3, out_channels=2).to(dev)
###Output
_____no_output_____
###Markdown
Generate a random `512x512` RGB image.Batches are specified as `(NxCxHxW)`, where:* `N` is the batch size* `C` is the amount of channels* `HxW` are the image dimensions
###Code
img = torch.rand((1, 3, 512, 512)).to(dev)
###Output
_____no_output_____
###Markdown
Feed the image into the U-Net, calculate a random Binary Cross Entropy loss and backpropagate.
###Code
out = net(img)
target = torch.empty_like(out).random_(2).to(dev)
loss = F.binary_cross_entropy_with_logits(out, target)
loss.backward()
net.zero_grad()
###Output
_____no_output_____
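###Markdown
In a real training loop (a sketch with an assumed Adam optimizer and the random target from above), the gradients are usually cleared before the backward pass and followed by an optimizer step:
###Code
import torch.optim as optim

optimizer = optim.Adam(net.parameters(), lr=1e-4)
for _ in range(2):  # a couple of dummy iterations on the random batch
    optimizer.zero_grad()
    loss = F.binary_cross_entropy_with_logits(net(img), target)
    loss.backward()
    optimizer.step()
###Output
_____no_output_____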
###Markdown
Original
###Code
hlp.plot1d(x_train[0])
###Output
_____no_output_____
###Markdown
Jittering
###Code
hlp.plot1d(x_train[0], aug.jitter(x_train)[0])
## Scaling
hlp.plot1d(x_train[0], aug.scaling(x_train)[0])
## Permutation
hlp.plot1d(x_train[0], aug.permutation(x_train)[0])
## Magnitude Warping
hlp.plot1d(x_train[0], aug.magnitude_warp(x_train)[0])
## Time Warping
hlp.plot1d(x_train[0], aug.time_warp(x_train)[0])
## Rotation
hlp.plot1d(x_train[0], aug.rotation(x_train)[0])
## Window Slicing
hlp.plot1d(x_train[0], aug.window_slice(x_train)[0])
## Window Warping
hlp.plot1d(x_train[0], aug.window_warp(x_train)[0])
## Suboptimal Warping Time Series Generator (SPAWNER)
hlp.plot1d(x_train[0], aug.spawner(x_train, y_train)[0])
## Weighted Dynamic Time Series Barycenter Averaging (wDBA)
hlp.plot1d(x_train[0], aug.wdba(x_train, y_train)[0])
## Random Guided Warping
hlp.plot1d(x_train[0], aug.random_guided_warp(x_train, y_train)[0])
## Discriminative Guided Warping
hlp.plot1d(x_train[0], aug.discriminative_guided_warp(x_train, y_train)[0])
###Output
100%|██████████| 30/30 [00:05<00:00, 5.48it/s]
###Markdown
https://pushover.net/ To get an API token, go to https://pushover.net/apps/build
###Code
import requests
###Output
_____no_output_____
###Markdown
Write config to a file.
###Code
pushover_config = {
"token": "api token"
,"user": "here use your user key"
,"device": "here use your device name"
}
with open('pushover.config','w') as f:
f.write(str(pushover_config))
###Output
_____no_output_____
###Markdown
Check if reading config works.
###Code
with open('pushover.config','r') as f:
pushover_config = eval(f.read())
pushover_config
def pushover(message, config=None):
# if you do not pass dictionary with pushover config
# it will try to read it from the file
if not config:
with open('pushover.config','r') as f:
config = eval(f.read())
url = 'https://api.pushover.net/1/messages.json'
payload = {
"token": config['token'],
"user": config['user'],
"message": message,
"device": config['device']
}
headers={'Content-Type': 'application/json', "User-Agent": "curl/7.47.0", "Accept": "*/*"}
res = requests.post(url, json=payload, headers=headers)
if res.status_code != 200:
print("pushover, we've got a problem {}".format(res.status_code))
pushover("test_message 2", config=None)
###Output
_____no_output_____
###Markdown
SetupLoad example data and prepare feature normalization.
###Code
from __future__ import annotations
import logging
from typing import Dict, Tuple
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.datasets import load_boston
from sklearn.metrics import r2_score
from pr3.pursuit import PracticalProjectionPursuitRegressor
sns.set_style('whitegrid')
%matplotlib inline
RANDOM_SEED = 2021
TRG_RATIO = 0.75
np.random.seed(RANDOM_SEED)
boston = load_boston()
print(boston.DESCR)
xcols = boston.feature_names
ycol = "MEDV"
df = pd.DataFrame(
data=boston.data,
columns=xcols,
)
df[ycol] = boston.target
trg_idxs = np.random.binomial(1, p=TRG_RATIO, size=df.shape[0]).astype(bool)
trg_df = df.iloc[trg_idxs, :].copy()
tst_df = df.iloc[~trg_idxs, :].copy()
from dataclasses import dataclass
@dataclass
class FeatureNormalizer:
logarithm: bool = False
winsorize: bool = False
zscore: bool = False
_logarithm_cols: Dict[int, float] = None
_winsorize_extremes: Dict[int, Tuple[float, float]] = None
_zscore_stats: Dict[int, Tuple[float, float]] = None
HEAVY_TAILED_SKEW: float = 2.0
CONTINUOUS_UNIQUE_COUNT: int = 5
LOG_SUMMAND_QUANTILE: float = 0.005
EXTREME_QUANTILE: float = 0.005
def fit(self, x: np.ndarray) -> FeatureNormalizer:
x = x.copy()
if self.logarithm:
x = self._logarithm_fit(x)._logarithm_transform(x)
if self.winsorize:
x = self._winsorize_fit(x)._winsorize_transform(x)
if self.zscore:
x = self._zscore_fit(x)._zscore_transform(x)
return self
def transform(self, x: np.ndarray) -> np.ndarray:
x = x.copy()
if self.logarithm:
x = self._logarithm_transform(x)
if self.winsorize:
x = self._winsorize_transform(x)
if self.zscore:
x = self._zscore_transform(x)
return x
def _logarithm_fit(self, x: np.ndarray) -> FeatureNormalizer:
skews = ((x - x.mean(axis=0)) ** 3.0).mean(axis=0) / x.var(axis=0) ** 1.5
self._logarithm_cols = {
col: np.quantile(x[x[:, col] > 0, col], self.LOG_SUMMAND_QUANTILE)
for col, skew in enumerate(skews)
if skew > self.HEAVY_TAILED_SKEW
and len(np.unique(x[:, col])) > self.CONTINUOUS_UNIQUE_COUNT
and all(x[:, col] >= 0)
}
return self
def _winsorize_fit(self, x: np.ndarray) -> FeatureNormalizer:
lows = np.quantile(x, q=self.EXTREME_QUANTILE, axis=0)
highs = np.quantile(x, q=1 - self.EXTREME_QUANTILE, axis=0)
self._winsorize_extremes = dict(zip(range(x.shape[1]), zip(lows, highs)))
return self
def _zscore_fit(self, x: np.ndarray) -> FeatureNormalizer:
mns = np.mean(x, axis=0)
sds = np.std(x, axis=0)
self._zscore_stats = dict(zip(range(x.shape[1]), zip(mns, sds)))
return self
def _logarithm_transform(self, x: np.ndarray) -> np.ndarray:
if self._logarithm_cols is None:
raise AttributeError("Log transform not yet fit on training data.")
for col, quantile in self._logarithm_cols.items():
x[:, col] = np.log(quantile + x[:, col])
return x
def _winsorize_transform(self, x: np.ndarray) -> np.ndarray:
if self._winsorize_extremes is None:
raise AttributeError("Winsorization transform not yet fit on training data.")
for col, extremes in self._winsorize_extremes.items():
x[:, col] = np.clip(x[:, col], extremes[0], extremes[1])
return x
def _zscore_transform(self, x: np.ndarray) -> np.ndarray:
if self._zscore_stats is None:
raise AttributeError("Z-score transform not yet fit on training data.")
for col, stats in self._zscore_stats.items():
x[:, col] = (x[:, col] - stats[0]) / stats[1]
return x
###Output
_____no_output_____
###Markdown
Model fitting We fit our projection pursuit regression below, where the key contributor to some loose form of "interpretability" is the sparsity constraint introduced by the least angle regression used for projection vector optimization. That is, by limiting the projection vector to have three nonzero coordinates (as specified by the argument `max_iter=3`), it becomes easier to understand the meaning of each one-dimensional projection, and therefore also to understand the contribution of each ridge function.
###Code
f = FeatureNormalizer(logarithm=True, winsorize=False, zscore=True)
trg_x = f.fit(trg_df[xcols].values).transform(trg_df[xcols].values)
trg_y = trg_df[ycol].values
ppr = PracticalProjectionPursuitRegressor(
n_stages=5,
learning_rate=1.0,
ridge_function_class="polynomial",
ridge_function_kwargs=dict(degree=3),
projection_optimizer_class="least_angle",
projection_optimizer_kwargs=dict(max_iter=3),
random_state=RANDOM_SEED,
).fit(trg_x, trg_y)
ppr.plot_losses()
tst_df['yhat'] = ppr.predict(f.transform(tst_df[xcols].values))
print(f"Test R2: {r2_score(tst_df[ycol], tst_df['yhat']):0.3f}")
###Output
_____no_output_____
###Markdown
Model visualization Below, we visualize the learned ridge functions (the nonlinear regression estimates in the one-dimensional projected space). Note that each stage fits against the residuals from previous stages, hence the learned functions do not appear to be good fits to projected data (except in the first stage). Furthermore, any apparent "gap" in the fit represents the component of variance explained by earlier stages of training.It can be very tempting to develop _post hoc_ "just so stories" upon viewing these plots; it may be safer to register any hypotheses about interpretation ahead of generating the plots below.
###Code
ppr.plot(
trg_x,
trg_y,
feature_names=xcols,
fig_height=2.5,
fig_width=5.0,
scatter_sample_ratio=0.5,
)
###Output
_____no_output_____
###Markdown
Decorated Decision Tree RegressorThis notebook contains an example of how to use the decorated decision tree regressor.The `DecoratedDecisionTreeRegressor` is a custom machine learning algorithm which extends sklearn's `DecisionTreeRegressor` by allowing any regression model to be fit on the leaves of a decision tree.First, we import the necessary packages needed for this example.
###Code
from DecoratedDecisionTree import DecoratedDecisionTreeRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
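###Markdown
Conceptually, "decorating" a tree amounts to grouping the training rows by the leaf they fall into and fitting one regressor per leaf. The sketch below illustrates the idea with `DecisionTreeRegressor.apply`; it is not the package's actual implementation.
###Code
def fit_leaf_models(fitted_tree, X, y):
    # leaf index assigned to every training row
    leaf_ids = fitted_tree.apply(X)
    # one LinearRegression per distinct leaf
    return {leaf: LinearRegression().fit(X[leaf_ids == leaf], y[leaf_ids == leaf])
            for leaf in np.unique(leaf_ids)}
###Output
_____no_output_____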
###Markdown
In this example, we will try to make the decorated tree fit $y = X^2 + \epsilon$ where $\epsilon \sim \text{N}(0, \sigma).$ We construct some artificial data:
###Code
sigma = 20000
data = pd.DataFrame({'X': np.arange(-500, 500)})
noise = np.random.normal(0, sigma, len(data))
data['y'] = data['X']**2
data['y_jitter'] = data['y'] + noise
data.head()
###Output
_____no_output_____
###Markdown
We now construct a `DecoratedDecisionTreeRegressor` object.
###Code
DecoratedDecisionTreeRegressor?
###Output
_____no_output_____
###Markdown
Notice the `DecoratedDecisionTreeRegressor` requires two parameters: a decision tree regressor and a regressor used to fit the leaves of the tree.For our base tree, we require each of our leaves to have at least 120 data points. Once the decision tree is built, we improve the predictions by fitting the data in the leaves using linear regression.
###Code
ddtr = DecoratedDecisionTreeRegressor(dtr = DecisionTreeRegressor(min_samples_leaf=120),
decorator = LinearRegression())
###Output
_____no_output_____
###Markdown
Now, use the regressor to fit the decorated decision tree model and make a prediction.
###Code
ddtr.fit(data[['X']], data['y_jitter'])
data['y_decorated_tree'] = ddtr.predict(data[['X']])
data.head()
###Output
_____no_output_____
###Markdown
Once the decorated decision tree is fit, you are able to access the base decision tree as follows
###Code
# The base decision tree regressor
ddtr.dtr
###Output
_____no_output_____
###Markdown
Let's take a look at what the base tree predicts `y` should be.
###Code
# Predict using the base tree
data['y_base_tree'] = ddtr.dtr.predict(data[['X']])
data.head()
###Output
_____no_output_____
###Markdown
Let's plot the predictions of the decorated tree compared to the base tree to see how well they did.
###Code
fig = plt.figure(figsize = (1.5*8, 1.5*6))
ax = plt.axes()
ax.set_title('Decorated Tree vs Base Tree Predictions')
ax.scatter(data['X'], data['y_jitter'], color='green', label='y_jitter', s=2)
ax.plot(data['X'], data['y_decorated_tree'], color='blue', label = 'y_decorated_tree')
ax.plot(data['X'], data['y_base_tree'], color='cyan', label = 'y_base_tree')
ax.plot(data['X'], data['y'], color='red', label='y')
ax.legend()
ax.grid()
###Output
_____no_output_____
###Markdown
Notice that a decorated tree does a better job at predicting `y` than the base tree.
###Code
print('RMSD Decorated Tree:', int(mean_squared_error(data['y_decorated_tree'], data['y'])**0.5))
print('RMSD Base Tree :', int(mean_squared_error(data['y_base_tree'], data['y'])**0.5))
###Output
RMSD Decorated Tree: 3553
RMSD Base Tree : 25279
###Markdown
True Function $f(x, y) = (a+c)x^2 + (b+d)y^2 + dx + cy + a + b + \epsilon$, with $-5 \le x \le 5$, $-5 \le y \le 5$; 100 random sets of $a, b, c, d$ are generated. The surrogate $h(x, y, a, b, c, d)$ is a degree-3 polynomial regression.
###Code
import numpy as np

def f(x, y, a, b, c, d):
    return (a+c)*x**2 + (b+d)*y**2 + d*x + c*y + a + b + np.random.randn()
###Output
_____no_output_____
###Markdown
Data Generation
###Code
from smt.sampling_methods import LHS  # Latin hypercube sampling (assumed to come from the smt package)

xlimits = np.array([[0.0, 1.0], [-1.0, 1.0], [-1, 1], [0, 1]])
sampling = LHS(xlimits=xlimits)
num = 100
input_values = sampling(num)
x = np.arange(-5, 5, 0.1)
y = np.arange(-5, 5, 0.1)
xx, yy = np.meshgrid(x, y)
z = np.zeros([100, 100, 100])
points = []
for k, inp in enumerate(input_values):
a, b, c, d = inp[0], inp[1], inp[2], inp[3]
for i in range(100):
for j in range(100):
z[k, i, j] = f(xx[i, j], yy[i, j], a, b, c, d)
points.append([xx[i, j], yy[i, j], a, b, c, d])
###Output
_____no_output_____
###Markdown
Learning W $X = (1, x, y, a, b, c, d, x^2, xy, xa, xb, \ldots)$ of size $(1000000 \times m)$, $Z = (z(-5, 5, a_1, b_1, c_1, d_1), \ldots)^\top$ of size $(1000000 \times 1)$, $W$ of size $(m \times 1)$, and $Z = XW$.
###Code
# create X
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression

points = np.array(points)
poly = PolynomialFeatures(degree=3)
X = poly.fit_transform(points)
Z = z.flatten()
model = Pipeline([('poly', PolynomialFeatures(degree=3)),
('linear', LinearRegression(fit_intercept=False))])
learned_model = model.fit(points, Z)
learned_model.predict(points[0][None])
def pred(model, a, b, c, d):
res = []
for x in np.arange(-5.0, 5.0, 0.1):
for y in np.arange(-5.0, 5.0, 0.1):
res.append(model.predict([[x, y, a, b, c, d]]))
return np.array(res).reshape([100, 100])
import joblib
joblib.dump(learned_model, "model.pkl")
###Output
_____no_output_____
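###Markdown
A minimal usage sketch for the saved surrogate: reload it and evaluate the grid for one (arbitrary, in-range) parameter set.
###Code
loaded_model = joblib.load("model.pkl")
z_hat = pred(loaded_model, 0.5, 0.0, 0.2, 0.8)  # a, b, c, d chosen inside xlimits
z_hat.shape
###Output
_____no_output_____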
###Markdown
This notebook contains an example for how to use the `taxbrain` python package
###Code
from taxbrain import TaxBrain, differences_plot, distribution_plot
reform_url = "https://raw.githubusercontent.com/PSLmodels/Tax-Calculator/master/taxcalc/reforms/Larson2019.json"
start_year = 2021
end_year = 2030
###Output
_____no_output_____
###Markdown
Static ReformAfter importing the `TaxBrain` class from the `taxbrain` package, we initiate an instance of the class by specifying the start and end year of the analysis, which microdata to use, and a policy reform. Additional arguments can be used to specify economic assumptions and individual behavioral elasticities.Once the class has been initiated, the `run()` method will handle executing each model
###Code
tb_static = TaxBrain(start_year, end_year, use_cps=True, reform=reform_url)
tb_static.run()
###Output
_____no_output_____
###Markdown
Once the calculators have been run, you can produce a number of tables, including a weighted total of a given variable each year under both current law and the user reform.
###Code
print("Combined Tax Liability Over the Budget Window")
tb_static.weighted_totals("combined")
###Output
Combined Tax Liability Over the Budget Window
###Markdown
If you are interested in a detailed look on the reform's effect, you can produce a differences table for a given year.
###Code
print("Differences Table")
tb_static.differences_table(start_year, "weighted_deciles", "combined")
###Output
Differences Table
###Markdown
TaxBrain comes with two (and counting) built-in plots as well
###Code
differences_plot(tb_static, 'combined', figsize=(10, 8));
distribution_plot(tb_static, 2021, figsize=(10, 8));
###Output
_____no_output_____
###Markdown
You can run a partial-equilibrium dynamic simulation by initiating the TaxBrain instance exactly as you would for the static reform, but with your behavioral assumptions specified
###Code
tb_dynamic = TaxBrain(start_year, end_year, use_cps=True, reform=reform_url,
behavior={"sub": 0.25})
tb_dynamic.run()
###Output
_____no_output_____
###Markdown
Once that finishes running, we can produce the same weighted total table as we did with the static run.
###Code
print("Partial Equilibrium - Combined Tax Liability")
tb_dynamic.weighted_totals("combined")
###Output
Partial Equilibrium - Combined Tax Liability
###Markdown
Or we can produce a distribution table to see details on the effects of the reform.
###Code
print("Distribution Table")
tb_dynamic.distribution_table(start_year, "weighted_deciles", "expanded_income", "reform")
###Output
Distribution Table
###Markdown
Set service coordinates
###Code
url = 'http://localhost:5000/'
###Output
_____no_output_____
###Markdown
Preprocess Data
###Code
import json, pickle, time
import pandas as pd
import requests
from sklearn import metrics, model_selection

df = pd.read_csv('data/train.csv') # As an example, I will use the Titanic dataset
df = df.set_index('PassengerId')
# Create dummy features
df['IsMale'] = pd.get_dummies(df['Sex'])['male']
# fill missing data
df['Age'] = df['Age'].fillna(df['Age'].median())
###Output
_____no_output_____
###Markdown
Create request body
###Code
target = 'Survived'
features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'IsMale']
df_train, df_test = model_selection.train_test_split(df)
data = json.dumps({'metric': 'accuracy', 'data': df_train.to_json(), 'features': features, 'target': target})
###Output
_____no_output_____
###Markdown
Train model
###Code
# Send a POST request to start_classification
r = requests.post(f'{url}/start_classification', data)
model_id = r.json()['model_id']
# wait while the model is being trained, polling get_model until it is ready
r = requests.get(f'{url}/get_model', params={'model_id': model_id})
while r.status_code != 200:
    time.sleep(5)
    r = requests.get(f'{url}/get_model', params={'model_id': model_id})
# load model from binary file
model = pickle.loads(r.content)
# You can check this model final score on test data
score = requests.get(f'{url}/get_score', params={'model_id': model_id})
print(score.text)
###Output
{"model_id":"da1e9a49-76f8-4d4f-8e63-25c019377f39","score":0.8323353293413174,"score_type":"accuracy"}
###Markdown
Use this model
###Code
from sklearn import metrics

metrics.accuracy_score(df_test[target], model.predict(df_test[features]))
###Output
_____no_output_____
###Markdown
This is an example of how to encode files into numbers and vice versa. Encode a file into a number and save it to a file
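The `encode_file` and `decode_number` helpers are imported elsewhere in this repository, and their implementations are not shown here. As a rough, hypothetical sketch of the idea (not the actual implementation), a file can be mapped to a single integer by reading its raw bytes as one big-endian number:

```python
# Illustrative sketch only -- the real encode_file/decode_number may differ.
def encode_bytes_as_int(path):
    with open(path, 'rb') as f:
        data = f.read()
    # prepend a 0x01 marker byte so leading zero bytes of the file are preserved
    return int.from_bytes(b'\x01' + data, byteorder='big')

def decode_int_to_bytes(number, path):
    data = number.to_bytes((number.bit_length() + 7) // 8, byteorder='big')
    with open(path, 'wb') as f:
        f.write(data[1:])  # drop the 0x01 marker byte added above
```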
###Code
filename = 'some_code.py'
a = encode_file(filename)
with open("numbers/number.txt",'w') as f:
f.write(str(a))
###Output
_____no_output_____
###Markdown
Decode number to file
###Code
with open("numbers/number.txt",'r') as f:
number = int(f.read())
number
filename_new = "numbers/program"
decode_number(number, filename_new)
! sha1sum $filename $filename_new # check that both files are the same
###Output
3f781483e4cc474872ba5dd6828d94e5bd8a8904 some_code.py
3f781483e4cc474872ba5dd6828d94e5bd8a8904 numbers/program
###Markdown
Give execution permission to the file and run. We do it like this to be completely general so that it works for any executable file, regardless of the language it is written in (as long as it is prepared to be executable; see https://en.wikipedia.org/wiki/Shebang_(Unix) for scripting languages).
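For scripting languages, "prepared to be executable" just means the file starts with a shebang line pointing at its interpreter. A minimal, hypothetical example (not part of this repository):

```python
#!/usr/bin/env python3
# After `chmod +x hello.py`, this file can be run directly as `./hello.py`;
# the shebang line above tells the shell which interpreter to use.
print("hello from an executable script")
```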
###Code
! chmod +x numbers/program # give execution permission
! ./numbers/program # run
###Output
xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxx xxxxxxx
xxxxx xxxxx
xxx xxx
xx xx
x x
x THIS IS A VIRUS x
x x
x SEND ME BITCOIN x
x x
xx xx
xxx xxx
xxxxx xxxxx
xxxxxxx xxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
###Markdown
Minimal failing sparse dot product example
###Code
import numpy as np
import sparse
import tensorly as tl
A = sparse.random((2048, 2048)) * 100
B = sparse.random((2048, 5))
sparse.__version__
A
B
%%time
C = A.dot(B)
(A.nnz * B.nnz) / (1024 * 1024 * 1000)
(A.nnz * B.nnz)
###Output
_____no_output_____
###Markdown
Main script The code is based on the method described in the following paper: [1] "Deep Optimization Prior for THz Model Parameter Estimation", T.M. Wong, H. Bauermeister, M. Kahl, P. Haring Bolivar, M. Moeller, A. Kolb, Winter Conference on Applications of Computer Vision (WACV) 2022. If you use this code in your scientific publication, please cite the mentioned paper. The code and the algorithm are for non-commercial use only. For other details, please visit the website https://github.com/tak-wong/Deep-Optimization-Prior
###Code
from MoAE import *
def get_dataset_filename(dataset_name):
dataset_filename = ''
if (dataset_name.lower() == 'metalpcb'):
dataset_filename = 'MetalPCB_91x446x446.mat'
if (dataset_name.startswith('MetalPCB_AWGN')):
dataset_filename = "MetalPCB_AWGN/{}_91x446x446.mat".format(dataset_name)
if (dataset_name.startswith('MetalPCB_ShotNoise')):
dataset_filename = "MetalPCB_ShotNoise/{}_91x446x446.mat".format(dataset_name)
if (dataset_name.startswith('SynthUSAF_AWGN')):
dataset_filename = "SynthUSAF_AWGN/{}_91x446x446.mat".format(dataset_name)
if (dataset_name.startswith('SynthUSAF_ShotNoise')):
dataset_filename = "SynthUSAF_ShotNoise/{}_91x446x446.mat".format(dataset_name)
if (dataset_name.startswith('SynthObj_AWGN')):
dataset_filename = "SynthObj_AWGN/{}_91x446x446.mat".format(dataset_name)
if (dataset_name.startswith('SynthObj_ShotNoise')):
dataset_filename = "SynthObj_ShotNoise/{}_91x446x446.mat".format(dataset_name)
return dataset_filename
###Output
_____no_output_____
###Markdown
Example 1: MetalPCB
###Code
if __name__ == '__main__':
seed = 0
lr = 0.01
epochs = 1200
dataset_name = 'metalpcb'
dataset_filename = get_dataset_filename(dataset_name)
dataset_path = './dataset'
dest_path = './result'
verbose = True
debug = True
hp = hyperparameter_unet_thz(use_seed = seed, learning_rate = lr, epochs = epochs)
optimizer = autoencoder_unet_thz(dataset_name, dataset_filename, dataset_path, dest_path, hp, verbose)
if (debug):
optimizer.RUNS = 1
optimizer.INTERVAL_PLOT_LOSS = 100
optimizer.INTERVAL_SAVE_LOSS = 100
optimizer.INTERVAL_PLOT_LR = 100
optimizer.INTERVAL_SAVE_LR = 100
optimizer.INTERVAL_PLOT_PARAMETERS = 100
optimizer.INTERVAL_SAVE_PARAMETERS = 100
optimizer.INTERVAL_PLOT_LOSSMAP = 100
optimizer.INTERVAL_SAVE_LOSSMAP = 100
optimizer.INTERVAL_PLOT_PIXEL = 100
optimizer.INTERVAL_SAVE_PIXEL = 100
optimizer.train()
seed = 0
lr = 0.01
epochs = 1200
dataset_name = 'MetalPCB_AWGN_n20db'
dataset_filename = get_dataset_filename(dataset_name)
dataset_path = './dataset'
dest_path = './result'
verbose = True
debug = False
hp = hyperparameter_nonet1st_thz(use_seed = seed, learning_rate = lr, epochs = epochs)
optimizer = autoencoder_nonet1st_thz(dataset_name, dataset_filename, dataset_path, dest_path, hp, verbose)
optimizer.train()
###Output
_____no_output_____
###Markdown
Example 2: SynthUSAF+ShotNoise
###Code
lr = 0.01
epochs = 1200
dataset_name = 'SynthUSAF_ShotNoise_p10db'
dataset_filename = get_dataset_filename(dataset_name)
dataset_path = './dataset'
dest_path = './result'
verbose = True
debug = False
hp = hyperparameter_nonet2nd_thz(use_seed = seed, learning_rate = lr, epochs = epochs)
optimizer = autoencoder_nonet2nd_thz(dataset_name, dataset_filename, dataset_path, dest_path, hp, verbose)
optimizer.train()
###Output
_____no_output_____
###Markdown
Example 3: SynthObj+AWGN
###Code
lr = 0.01
epochs = 1200
dataset_name = 'SynthObj_AWGN_p0db'
dataset_filename = get_dataset_filename(dataset_name)
dataset_path = './dataset'
dest_path = './result'
verbose = True
debug = False
hp = hyperparameter_ppae_thz(use_seed = seed, learning_rate = lr, epochs = epochs)
optimizer = autoencoder_ppae_thz(dataset_name, dataset_filename, dataset_path, dest_path, hp, verbose)
optimizer.train()
###Output
_____no_output_____
###Markdown
In this notebook, we consider a ZDT1 problem with Gaussian noise, and benchmark two "denoising" methods:* a naive averaging method,* the KNN-Avg algorithm.
###Code
import nmoo
###Output
_____no_output_____
###Markdown
The first step is to construct our problem pipelines. We start with a `ZDT1` instance, which we wrap in a `WrappedProblem`. In nmoo, `WrappedProblem` is the base class for modifying problems, in our case adding and removing noise. Additionally, `WrappedProblem` and classes deriving from it maintain a history of every call made to their `_evaluate` method (see the [pymoo documentation](https://pymoo.org/getting_started.html#By-Class)). Next, we add a Gaussian noise of type `N(0, 0.25)` and the averaging algorithm.
###Code
from pymoo.problems.multi import ZDT1
import numpy as np
zdt1 = ZDT1()
wrapped_zdt1 = nmoo.WrappedProblem(zdt1)
mean = np.array([0, 0])
covariance = np.array([[1., -.5], [-.5, 1]])
noisy_zdt1 = nmoo.noises.GaussianNoise(
wrapped_zdt1,
{"F": (mean, covariance)},
)
avg_zdt1 = nmoo.denoisers.ResampleAverage(noisy_zdt1, n_evaluations=10)
###Output
_____no_output_____
###Markdown
We construct a similar pipeline for the KNN-Avg algorithm. Note that parts of an already existing pipeline can be reused.
###Code
knnavg_zdt1 = nmoo.denoisers.KNNAvg(
noisy_zdt1,
distance_weight_type="squared",
max_distance=1.0,
n_neighbors=100,
)
###Output
_____no_output_____
###Markdown
Now, we set up an algorithm that will try to solve our `avg_zdt1` and `knnavg_zdt1` problems.
###Code
from pymoo.algorithms.moo.nsga2 import NSGA2
nsga2 = NSGA2()
###Output
_____no_output_____
###Markdown
Finally, we set up our benchmark. It will run NSGA2 against `avg_zdt1` and `knnavg_zdt1` three times each. Additionally, we specify a Pareto front population to measure the performance. If no Pareto front is specified (or known), performance indicators will use one automatically calculated based on the results of the benchmark. Since the `avg` problem evaluates the underlying noisy `ZDT1` problem 10 times, we apply a penalty of 10, meaning that every call to `avg.eval` will count as 10 calls.
###Code
from pymoo.factory import get_termination
pareto_front = zdt1.pareto_front(100)
benchmark = nmoo.benchmark.Benchmark(
output_dir_path="./out",
problems={
"knnavg": {
"problem": knnavg_zdt1,
"pareto_front": pareto_front,
},
"avg": {
"problem": avg_zdt1,
"pareto_front": pareto_front,
"evaluator": nmoo.evaluators.EvaluationPenaltyEvaluator("times", 10),
},
},
algorithms={
"nsga2": {
"algorithm": nsga2,
},
"nsga2_100": {
"algorithm": nsga2,
"termination": get_termination("n_gen", 100),
},
},
n_runs=3,
)
! rm out/*
benchmark.run(verbose=50)
###Output
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 1 tasks | elapsed: 1.0min
[Parallel(n_jobs=-1)]: Done 2 out of 12 | elapsed: 1.0min remaining: 5.2min
[Parallel(n_jobs=-1)]: Done 3 out of 12 | elapsed: 1.1min remaining: 3.2min
[Parallel(n_jobs=-1)]: Done 4 out of 12 | elapsed: 1.8min remaining: 3.5min
[Parallel(n_jobs=-1)]: Done 5 out of 12 | elapsed: 1.8min remaining: 2.5min
[Parallel(n_jobs=-1)]: Done 6 out of 12 | elapsed: 1.9min remaining: 1.9min
[Parallel(n_jobs=-1)]: Done 7 out of 12 | elapsed: 1.9min remaining: 1.3min
[Parallel(n_jobs=-1)]: Done 8 out of 12 | elapsed: 2.3min remaining: 1.2min
[Parallel(n_jobs=-1)]: Done 9 out of 12 | elapsed: 2.4min remaining: 47.5s
[Parallel(n_jobs=-1)]: Done 10 out of 12 | elapsed: 3.5min remaining: 41.4s
[Parallel(n_jobs=-1)]: Done 12 out of 12 | elapsed: 4.9min remaining: 0.0s
[Parallel(n_jobs=-1)]: Done 12 out of 12 | elapsed: 4.9min finished
[Parallel(n_jobs=2)]: Using backend LokyBackend with 2 concurrent workers.
[Parallel(n_jobs=2)]: Done 1 tasks | elapsed: 1.8s
[Parallel(n_jobs=2)]: Done 2 out of 4 | elapsed: 2.2s remaining: 2.2s
[Parallel(n_jobs=2)]: Done 4 out of 4 | elapsed: 2.7s remaining: 0.0s
[Parallel(n_jobs=2)]: Done 4 out of 4 | elapsed: 2.7s finished
[Parallel(n_jobs=2)]: Using backend LokyBackend with 2 concurrent workers.
[Parallel(n_jobs=2)]: Done 1 tasks | elapsed: 4.9s
[Parallel(n_jobs=2)]: Done 2 tasks | elapsed: 4.9s
[Parallel(n_jobs=2)]: Done 3 tasks | elapsed: 6.4s
[Parallel(n_jobs=2)]: Done 4 tasks | elapsed: 7.9s
[Parallel(n_jobs=2)]: Done 5 tasks | elapsed: 9.3s
[Parallel(n_jobs=2)]: Done 6 tasks | elapsed: 10.8s
[Parallel(n_jobs=2)]: Done 7 tasks | elapsed: 12.2s
[Parallel(n_jobs=2)]: Done 8 tasks | elapsed: 13.6s
[Parallel(n_jobs=2)]: Done 9 tasks | elapsed: 14.0s
[Parallel(n_jobs=2)]: Done 10 out of 12 | elapsed: 15.1s remaining: 3.0s
[Parallel(n_jobs=2)]: Done 12 out of 12 | elapsed: 16.5s remaining: 0.0s
[Parallel(n_jobs=2)]: Done 12 out of 12 | elapsed: 16.5s finished
###Markdown
Results of the benchmark are automatically saved:
###Code
! ls ./out
###Output
avg.nsga2.1.1-resample_avg.npz
avg.nsga2.1.2-gaussian_noise.npz
avg.nsga2.1.3-wrapped_problem.npz
avg.nsga2.1.csv
avg.nsga2.1.pi.csv
avg.nsga2.1.pp.npz
avg.nsga2.2.1-resample_avg.npz
avg.nsga2.2.2-gaussian_noise.npz
avg.nsga2.2.3-wrapped_problem.npz
avg.nsga2.2.csv
avg.nsga2.2.pi.csv
avg.nsga2.2.pp.npz
avg.nsga2.3.1-resample_avg.npz
avg.nsga2.3.2-gaussian_noise.npz
avg.nsga2.3.3-wrapped_problem.npz
avg.nsga2.3.csv
avg.nsga2.3.pi.csv
avg.nsga2.3.pp.npz
avg.nsga2.gpp.npz
avg.nsga2_100.1.1-resample_avg.npz
avg.nsga2_100.1.2-gaussian_noise.npz
avg.nsga2_100.1.3-wrapped_problem.npz
avg.nsga2_100.1.csv
avg.nsga2_100.1.pi.csv
avg.nsga2_100.1.pp.npz
avg.nsga2_100.2.1-resample_avg.npz
avg.nsga2_100.2.2-gaussian_noise.npz
avg.nsga2_100.2.3-wrapped_problem.npz
avg.nsga2_100.2.csv
avg.nsga2_100.2.pi.csv
avg.nsga2_100.2.pp.npz
avg.nsga2_100.3.1-resample_avg.npz
avg.nsga2_100.3.2-gaussian_noise.npz
avg.nsga2_100.3.3-wrapped_problem.npz
avg.nsga2_100.3.csv
avg.nsga2_100.3.pi.csv
avg.nsga2_100.3.pp.npz
avg.nsga2_100.gpp.npz
benchmark.csv
knnavg.nsga2.1.1-knn_avg.npz
knnavg.nsga2.1.2-gaussian_noise.npz
knnavg.nsga2.1.3-wrapped_problem.npz
knnavg.nsga2.1.csv
knnavg.nsga2.1.pi.csv
knnavg.nsga2.1.pp.npz
knnavg.nsga2.2.1-knn_avg.npz
knnavg.nsga2.2.2-gaussian_noise.npz
knnavg.nsga2.2.3-wrapped_problem.npz
knnavg.nsga2.2.csv
knnavg.nsga2.2.pi.csv
knnavg.nsga2.2.pp.npz
knnavg.nsga2.3.1-knn_avg.npz
knnavg.nsga2.3.2-gaussian_noise.npz
knnavg.nsga2.3.3-wrapped_problem.npz
knnavg.nsga2.3.csv
knnavg.nsga2.3.pi.csv
knnavg.nsga2.3.pp.npz
knnavg.nsga2.gpp.npz
knnavg.nsga2_100.1.1-knn_avg.npz
knnavg.nsga2_100.1.2-gaussian_noise.npz
knnavg.nsga2_100.1.3-wrapped_problem.npz
knnavg.nsga2_100.1.csv
knnavg.nsga2_100.1.pi.csv
knnavg.nsga2_100.1.pp.npz
knnavg.nsga2_100.2.1-knn_avg.npz
knnavg.nsga2_100.2.2-gaussian_noise.npz
knnavg.nsga2_100.2.3-wrapped_problem.npz
knnavg.nsga2_100.2.csv
knnavg.nsga2_100.2.pi.csv
knnavg.nsga2_100.2.pp.npz
knnavg.nsga2_100.3.1-knn_avg.npz
knnavg.nsga2_100.3.2-gaussian_noise.npz
knnavg.nsga2_100.3.3-wrapped_problem.npz
knnavg.nsga2_100.3.csv
knnavg.nsga2_100.3.pi.csv
knnavg.nsga2_100.3.pp.npz
knnavg.nsga2_100.gpp.npz
###Markdown
The benchmark results are saved in `benchmark.csv`. They can also be accessed via `benchmark._results`. The rest are problem call histories, named after the following scheme: `<problem>.<algorithm>.<run>.<level>-<wrapper name>.npz`. For example, `knnavg.nsga2_100.3.2-gaussian_noise.npz` is the `GaussianNoise` history (level 2) of the 3rd run of `NSGA2` (100 generations) on the `knnavg` pipeline. The Pareto populations of each run are stored in the `<problem>.<algorithm>.<run>.pp.npz` files, and the global Pareto population for a given problem-algorithm pair is stored in `<problem>.<algorithm>.gpp.npz`. Statistics about each run are stored in the `<problem>.<algorithm>.<run>.csv` files. Performance indicators are computed and stored in the `<problem>.<algorithm>.<run>.pi.csv` files. Let's now visualize the results. The final result of all runs can be found using the `Benchmark.final_results` method:
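As a quick illustration, the saved artifacts can be opened with standard tools. The file names below are taken from the listing above; the exact arrays stored inside each `.npz` archive are not documented in this notebook, so inspect `history.files` before relying on any particular key.

```python
import numpy as np
import pandas as pd

# overall benchmark results table
results = pd.read_csv("./out/benchmark.csv")

# one call-history archive (level 2 = GaussianNoise, 3rd run of nsga2_100 on knnavg)
history = np.load("./out/knnavg.nsga2_100.3.2-gaussian_noise.npz")
print(history.files)
```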
###Code
benchmark.final_results()
###Output
_____no_output_____
###Markdown
The following boxplot indicates that, with the same number of calls to `ZDT1`, KNN-Avg offers a better GD+ performance. However, if the number of generations is fixed or unconstrained, then the averaging method is better.
###Code
import seaborn as sns
sns.boxplot(
x="problem",
y="perf_igd",
hue="algorithm",
data=benchmark.final_results(),
)
###Output
_____no_output_____
###Markdown
The following boxplot depicts the runtimes.
###Code
sns.boxplot(
x="problem",
y="timedelta",
hue="algorithm",
data=benchmark.final_results(),
)
from nmoo.plotting import plot_performance_indicators
plot_performance_indicators(benchmark, row="algorithm")
###Output
_____no_output_____
###Markdown
Start by importing our Python module
###Code
from gocrypt import Encryption
###Output
_____no_output_____
###Markdown
We begin our test by defining a payload and a passphrase, which are converted and passed through to the generated C lib by our `gocrypt.py` module. We also instantiate our Encryption class.
###Code
hello = "Hello, World!"
passphrase = "password"
e = Encryption()
encrypted = e.encrypt_string(hello, passphrase)
###Output
_____no_output_____
###Markdown
If we examine the encrypted string, we can see it is obfuscated
###Code
encrypted
###Output
_____no_output_____
###Markdown
If we attempt to decrypt the string with the wrong passphrase, we get nothing back
###Code
e.decrypt_string(encrypted, "who_knows")
###Output
_____no_output_____
###Markdown
Using the correct passphrase, we obtain our original string
###Code
e.decrypt_string(encrypted, passphrase)
###Output
_____no_output_____
###Markdown
Example of NaiveFea The aim of the project is to run as fast as the reporter (but it has a low level of knowledge now). 1. Fundamental operation
###Code
import numpy as np
import meshio
import naivefea
from naivefea.constitutive import LinearElastic
from naivefea.analysis import LinearFea
# import a mesh
mesh=meshio.read('abaqus_mesh.inp')
# instantiate fea according to mesh
fea=LinearFea(mesh)
# show the mesh, default plots node index but no element index
fea.plot_mesh()
# set material
material=LinearElastic(E=10.0,nv=0.3)
fea.uniform_material(material)
# set boundary conditions
# left bound is fixed, and right bound apply fx=1.0
node_fix=[0,5,10,15,20]
f_given={14:(0.001,0)}
fea.set_deform_conditions('fix',Uxy=node_fix)
fea.set_force_conditions(f_given)
# print conditions
print('x_fix=',fea.x_given_displace)
print('y_fix=',fea.y_given_displace)
print('f_given=',fea.f_given)
fea.plot_restrict()
# submit for analyzing
fea.submit()
# show result, deformation and stress for example
fea.plot('stress','S12')
fea.plot('deform','Ux',deformed=False)
# all plot choice:
# deform: Ux, Uy
# force: Fx, Fy
# strain: e11, e22, e12
# stress: S11, S22, S12
###Output
_____no_output_____
###Markdown
2. Advanced operation 2.1 Material
###Code
# set material name
material.set_name('rubber')
print(f"Name of the defined material is '{material.name}'.")
# you can choose a material from the database
from naivefea.constitutive import database
material_new=database.choose('steel')
print(f"Name of the choosed material is '{material_new.name}'.")
###Output
Name of the choosed material is 'default steel'.
###Markdown
2.2 Pre-process
###Code
# generate mesh by pygmsh
import pygmsh
with pygmsh.geo.Geometry() as geom:
geom.add_polygon(
[
[0.0, 0.0],
[1.0, 0.0],
[1.0, 1.0],
[0.0, 1.0],
],
mesh_size=0.3,
)
mesh_new = geom.generate_mesh()
# use plot_mesh view mesh before instantiate Fea by it
naivefea.plot_mesh(mesh_new)
# set boundary conditions by geometry
# clear conditions set before
fea.clear_conditions('all')
# left bound is fixed, and right bound apply fx=1.0
node_fix=[]
f_given={}
for index,position in enumerate(fea.nodes):
x=position[0].tolist()
y=position[1].tolist()
if x<1e-6:
node_fix.append(index)
if 1.0-x<1e-6 and abs(y-0.5)<1e-2:
f_given.update({index:(1.0e-3,0.0)})
fea.set_deform_conditions('fix',Uxy=node_fix)
fea.set_force_conditions(f_given)
# you can also set displacement, such as following
fea.set_deform_conditions('displace',Uy={2:5e-5,22:-5e-5},Uxy={12:(-5e-5,0.0)})
# if you want to clear the deformation condition on node 12
fea.clear_node_conditions(12,'Uxy')
# print conditions
print('x_fix=',fea.x_given_displace)
print('y_fix=',fea.y_given_displace)
print('f_given=',fea.f_given)
fea.plot_restrict()
###Output
Conditions may have been changed! Please resubmit for new result.
Conditions may have been changed! Please resubmit for new result.
Conditions may have been changed! Please resubmit for new result.
x_fix= {0: 0.0, 5: 0.0, 10: 0.0, 15: 0.0, 20: 0.0}
y_fix= {0: 0.0, 5: 0.0, 10: 0.0, 15: 0.0, 20: 0.0, 2: 5e-05, 22: -5e-05}
f_given= {14: (0.001, 0.0)}
###Markdown
2.3 Post-process
###Code
fea.get_data('deform',14)
# plot is used for post-process
# so default plot of mesh is deformed
fea.plot('mesh')
# show more variable
fea.calculate('Mises')
fea.plot('stress','Mises')
# calculate user defined show data and show it.
s11=fea.current_dict['stress']['S11']
s22=fea.current_dict['stress']['S22']
s12=fea.current_dict['stress']['S12']
my_Mises=np.sqrt(0.5*(s11**2+s22**2+(s11-s22)**2+6*s12**2))
fea.current_dict['stress']['my_Mises']=my_Mises
fea.plot('stress','my_Mises')
###Output
_____no_output_____
###Markdown
2.4 Simulate two kinds of material
###Code
rve_mesh=meshio.read('enhanced.inp')
enhanced_rubber=LinearFea(rve_mesh)
# plot a larger figure
enhanced_rubber.set_figsize('medium')
enhanced_rubber.plot_mesh()
# define two materials
rubber=LinearElastic(10.0,0.3)
rubber.set_name('rubber')
enhance=LinearElastic(100.0,0.3)
enhance.set_name('enhance')
# define the element set of the enhanced material.
# Here, the set is imported. You can define it yourself, too.
enhanced_region=rve_mesh.cell_sets['enhance'][0]
# assign them to different region
enhanced_rubber.uniform_material(rubber)
enhanced_rubber.uniform_material(enhance,enhanced_region)
# plot material of element.
enhanced_rubber.plot_material()
# set a pure shear boundary condition
enhanced_rubber.clear_conditions()
node_left=rve_mesh.point_sets['left'].tolist()
node_right=rve_mesh.point_sets['right'].tolist()
dict_right=dict(zip(node_right,[1e-3 for node,_ in enumerate(node_right)]))
enhanced_rubber.set_deform_conditions('fix',Ux=node_left)
enhanced_rubber.set_deform_conditions('fix',Uy=[5,8])
enhanced_rubber.set_deform_conditions('displace',Ux=dict_right)
enhanced_rubber.plot_restrict()
enhanced_rubber.submit()
enhanced_rubber.plot('strain','e11')
###Output
_____no_output_____
###Markdown
Example usage of the Yin-Yang dataset
###Code
import torch
import numpy as np
import matplotlib.pyplot as plt
from dataset import YinYangDataset
from torch.utils.data import DataLoader
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup datasets (training, validation and test set)
###Code
dataset_train = YinYangDataset(size=5000, seed=42)
dataset_validation = YinYangDataset(size=1000, seed=41)
dataset_test = YinYangDataset(size=1000, seed=40)
###Output
_____no_output_____
###Markdown
Setup PyTorch dataloaders
###Code
batchsize_train = 20
batchsize_eval = len(dataset_test)
train_loader = DataLoader(dataset_train, batch_size=batchsize_train, shuffle=True)
val_loader = DataLoader(dataset_validation, batch_size=batchsize_eval, shuffle=True)
test_loader = DataLoader(dataset_test, batch_size=batchsize_eval, shuffle=False)
###Output
_____no_output_____
###Markdown
Plot data
###Code
fig, axes = plt.subplots(ncols=3, sharey=True, figsize=(15, 8))
titles = ['Training set', 'Validation set', 'Test set']
for i, loader in enumerate([train_loader, val_loader, test_loader]):
axes[i].set_title(titles[i])
axes[i].set_aspect('equal', adjustable='box')
xs = []
ys = []
cs = []
for batch, batch_labels in loader:
for j, item in enumerate(batch):
x1, y1, x2, y2 = item
c = batch_labels[j]
xs.append(x1)
ys.append(y1)
cs.append(c)
xs = np.array(xs)
ys = np.array(ys)
cs = np.array(cs)
axes[i].scatter(xs[cs == 0], ys[cs == 0], color='C0', edgecolor='k', alpha=0.7)
axes[i].scatter(xs[cs == 1], ys[cs == 1], color='C1', edgecolor='k', alpha=0.7)
axes[i].scatter(xs[cs == 2], ys[cs == 2], color='C2', edgecolor='k', alpha=0.7)
axes[i].set_xlabel('x1')
if i == 0:
axes[i].set_ylabel('y1')
###Output
_____no_output_____
###Markdown
Setup ANN
###Code
class Net(torch.nn.Module):
def __init__(self, network_layout):
super(Net, self).__init__()
self.n_inputs = network_layout['n_inputs']
self.n_layers = network_layout['n_layers']
self.layer_sizes = network_layout['layer_sizes']
self.layers = torch.nn.ModuleList()
layer = torch.nn.Linear(self.n_inputs, self.layer_sizes[0], bias=True)
self.layers.append(layer)
for i in range(self.n_layers-1):
layer = torch.nn.Linear(self.layer_sizes[i], self.layer_sizes[i+1], bias=True)
self.layers.append(layer)
return
def forward(self, x):
x_hidden = []
for i in range(self.n_layers):
x = self.layers[i](x)
if not i == (self.n_layers-1):
relu = torch.nn.ReLU()
x = relu(x)
x_hidden.append(x)
return x
torch.manual_seed(12345)
# ANN with one hidden layer (with 120 neurons)
network_layout = {
'n_inputs': 4,
'n_layers': 2,
'layer_sizes': [120, 3],
}
net = Net(network_layout)
# Linear classifier for reference
shallow_network_layout = {
'n_inputs': 4,
'n_layers': 1,
'layer_sizes': [3],
}
linear_classifier = Net(shallow_network_layout)
###Output
_____no_output_____
###Markdown
Train ANN
###Code
# used to determine validation accuracy after each epoch in training
def validation_step(net, criterion, loader):
with torch.no_grad():
num_correct = 0
num_shown = 0
for j, data in enumerate(loader):
inputs, labels = data
# need to convert to float32 because data is in float64
inputs = inputs.float()
outputs = net(inputs)
winner = outputs.argmax(1)
num_correct += len(outputs[winner == labels])
num_shown += len(labels)
accuracy = float(num_correct) / num_shown
return accuracy
# set training parameters
n_epochs = 300
learning_rate = 0.001
val_accuracies = []
train_accuracies = []
# setup loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
# train for n_epochs
for epoch in range(n_epochs):
val_acc = validation_step(net, criterion, val_loader)
if epoch % 25 == 0:
print('Validation accuracy after {0} epochs: {1}'.format(epoch, val_acc))
val_accuracies.append(val_acc)
num_correct = 0
num_shown = 0
for j, data in enumerate(train_loader):
inputs, labels = data
# need to convert to float32 because data is in float64
inputs = inputs.float()
# zero the parameter gradients
optimizer.zero_grad()
# forward pass
outputs = net(inputs)
winner = outputs.argmax(1)
num_correct += len(outputs[winner == labels])
num_shown += len(labels)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
accuracy = float(num_correct) / num_shown
train_accuracies.append(accuracy)
# after training evaluate on test set
test_acc = validation_step(net, criterion, test_loader)
print('#############################')
print('Final test accuracy:', test_acc)
print('#############################')
###Output
Validation accuracy after 0 epochs: 0.316
Validation accuracy after 25 epochs: 0.894
Validation accuracy after 50 epochs: 0.943
Validation accuracy after 75 epochs: 0.963
Validation accuracy after 100 epochs: 0.968
Validation accuracy after 125 epochs: 0.981
Validation accuracy after 150 epochs: 0.974
Validation accuracy after 175 epochs: 0.972
Validation accuracy after 200 epochs: 0.978
Validation accuracy after 225 epochs: 0.979
Validation accuracy after 250 epochs: 0.982
Validation accuracy after 275 epochs: 0.981
#############################
Final test accuracy: 0.986
#############################
###Markdown
Plot training results
###Code
plt.figure(figsize=(10,8))
plt.plot(train_accuracies, label='train acc')
plt.plot(val_accuracies, label='val acc')
plt.axhline(test_acc, ls='--', color='grey', label='test acc')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.ylim(0.3, 1.05)
plt.legend()
###Output
_____no_output_____
###Markdown
Train Linear classifier as reference
###Code
val_accuracies = []
train_accuracies = []
# setup loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(linear_classifier.parameters(), lr=learning_rate)
# train for n_epochs
for epoch in range(n_epochs):
val_acc = validation_step(linear_classifier, criterion, val_loader)
if epoch % 25 == 0:
print('Validation accuracy of linear classifier after {0} epochs: {1}'.format(epoch, val_acc))
val_accuracies.append(val_acc)
num_correct = 0
num_shown = 0
for j, data in enumerate(train_loader):
inputs, labels = data
# need to convert to float32 because data is in float64
inputs = inputs.float()
# zero the parameter gradients
optimizer.zero_grad()
# forward pass
outputs = linear_classifier(inputs)
winner = outputs.argmax(1)
num_correct += len(outputs[winner == labels])
num_shown += len(labels)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
accuracy = float(num_correct) / num_shown
train_accuracies.append(accuracy)
# after training evaluate on test set
test_acc = validation_step(linear_classifier, criterion, test_loader)
print('#############################')
print('Final test accuracy linear classifier:', test_acc)
print('#############################')
plt.figure(figsize=(10,8))
plt.plot(train_accuracies, label='train acc (lin classifier)')
plt.plot(val_accuracies, label='val acc (lin classifier)')
plt.axhline(test_acc, ls='--', color='grey', label='test acc (lin classifier)')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.ylim(0.3, 1.05)
plt.legend()
###Output
_____no_output_____
###Markdown
Grad-CAM with PyTorch
###Code
import os
import torch
from torch import nn
from torchvision import models, transforms
from gradcam import GradCAM
###Output
_____no_output_____
###Markdown
Model Loading
###Code
image_model_path = "./fire.model"
image_save_point = torch.load(image_model_path)
image_model = models.resnet34(pretrained=False, num_classes=2)
image_model.load_state_dict(image_save_point['state_dict'])
id_to_label = {
0: 'other',
1: 'fire'
}
###Output
_____no_output_____
###Markdown
Preprocess Image
###Code
from PIL import Image
from torchvision.transforms.functional import to_pil_image
VISUALIZE_SIZE = (224, 224) # size for visualize
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
image_transform = transforms.Compose([
transforms.Resize(VISUALIZE_SIZE),
transforms.ToTensor(),
normalize])
path = "./fire.jpg"
image = Image.open(path)
image.thumbnail(VISUALIZE_SIZE, Image.ANTIALIAS)
display(image)
# save image origin size
image_orig_size = image.size # (W, H)
img_tensor = image_transform(image)
img_tensor = img_tensor.unsqueeze(0)
###Output
_____no_output_____
###Markdown
Put Image in grad-cam
###Code
grad_cam = GradCAM(model=image_model, feature_layer=list(image_model.layer4.modules())[-1])
model_output = grad_cam.forward(img_tensor)
target = model_output.argmax(1).item()
###Output
_____no_output_____
###Markdown
force target value
###Code
# target = 0
###Output
_____no_output_____
###Markdown
Backprop in Grad-CAM
###Code
grad_cam.backward_on_target(model_output, target)
###Output
_____no_output_____
###Markdown
get weights and feature map
###Code
import numpy as np
# Get feature gradient
feature_grad = grad_cam.feature_grad.data.numpy()[0]
# Get weights from gradient
weights = np.mean(feature_grad, axis=(1, 2)) # Take averages for each gradient
# Get features outputs
feature_map = grad_cam.feature_map.data.numpy()[0]  # drop the batch dimension, as for the gradients above
grad_cam.clear_hook()
###Output
_____no_output_____
###Markdown
compute the cam
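For reference, the code below follows the standard Grad-CAM formulation: each channel weight is the spatial average of the gradient of the class score with respect to that feature map, and the class activation map is the ReLU of the weighted sum of feature maps,

$$\alpha_k^c = \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^c}{\partial A^k_{ij}}, \qquad L^c_{\text{Grad-CAM}} = \mathrm{ReLU}\Big(\sum_k \alpha_k^c A^k\Big),$$

where $A^k$ is the $k$-th feature map, $y^c$ is the score for the target class $c$, and $Z$ is the number of spatial positions.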
###Code
# Get cam
cam = np.sum((weights * feature_map.T), axis=2).T
cam = np.maximum(cam, 0) # apply ReLU to cam
###Output
_____no_output_____
###Markdown
Visualize
###Code
import cv2
cam = cv2.resize(cam, VISUALIZE_SIZE)
cam = (cam - np.min(cam)) / (np.max(cam) - np.min(cam)) # Normalize between 0-1
cam = np.uint8(cam * 255) # Scale between 0-255 to visualize
activation_heatmap = np.expand_dims(cam, axis=0).transpose(1,2,0)
org_img = np.asarray(image.resize(VISUALIZE_SIZE))
img_with_heatmap = np.multiply(np.float32(activation_heatmap), np.float32(org_img))
img_with_heatmap = img_with_heatmap / np.max(img_with_heatmap)
org_img = cv2.resize(org_img, image_orig_size)
import matplotlib.pyplot as plt
plt.figure(figsize=(20,10))
plt.subplot(1,2,1)
plt.imshow(org_img)
plt.subplot(1,2,2)
plt.imshow(cv2.resize(np.uint8(255 * img_with_heatmap), image_orig_size))
plt.show()
result = nn.Softmax(dim=0)(model_output[0]).data.tolist()
print({id_to_label[i]: round(score, 4) for i, score in enumerate(result)})
print("Predict Class:", id_to_label[target])
###Output
_____no_output_____
###Markdown
Load and preprocess the data
###Code
graph = load_dataset('data/cora.npz')
adj_matrix = graph['adj_matrix']
labels = graph['labels']
adj_matrix, labels = standardize(adj_matrix, labels)
n_nodes = adj_matrix.shape[0]
###Output
_____no_output_____
###Markdown
Set hyperparameters
###Code
n_flips = 1000
dim = 32
window_size = 5
###Output
_____no_output_____
###Markdown
Generate candidate edge flips
###Code
candidates = generate_candidates_removal(adj_matrix=adj_matrix)
###Output
_____no_output_____
###Markdown
Compute simple baselines
###Code
b_eig_flips = baseline_eigencentrality_top_flips(adj_matrix, candidates, n_flips)
b_deg_flips = baseline_degree_top_flips(adj_matrix, candidates, n_flips, True)
b_rnd_flips = baseline_random_top_flips(candidates, n_flips, 0)
###Output
_____no_output_____
###Markdown
Compute adversarial flips using eigenvalue perturbation
###Code
our_flips = perturbation_top_flips(adj_matrix, candidates, n_flips, dim, window_size)
###Output
_____no_output_____
###Markdown
Evaluate classification performance using the skipgram objective
###Code
for flips, name in zip([None, b_rnd_flips, b_deg_flips, b_eig_flips, our_flips],
['cln', 'rnd', 'deg', 'eig', 'our']):
if flips is not None:
adj_matrix_flipped = flip_candidates(adj_matrix, flips)
else:
adj_matrix_flipped = adj_matrix
embedding = deepwalk_skipgram(adj_matrix_flipped, dim, window_size=window_size)
f1_scores_mean, _ = evaluate_embedding_node_classification(embedding, labels)
print('{}, F1: {:.2f} {:.2f}'.format(name, f1_scores_mean[0], f1_scores_mean[1]))
###Output
cln, F1: 0.80 0.77
rnd, F1: 0.80 0.76
deg, F1: 0.77 0.73
eig, F1: 0.76 0.73
our, F1: 0.73 0.69
###Markdown
Evaluate classification performance using the SVD objective
###Code
for flips, name in zip([None, b_rnd_flips, b_deg_flips, b_eig_flips, our_flips],
['cln', 'rnd', 'deg', 'eig', 'our']):
if flips is not None:
adj_matrix_flipped = flip_candidates(adj_matrix, flips)
else:
adj_matrix_flipped = adj_matrix
embedding, _, _, _ = deepwalk_svd(adj_matrix_flipped, window_size, dim)
f1_scores_mean, _ = evaluate_embedding_node_classification(embedding, labels)
print('{}, F1: {:.2f} {:.2f}'.format(name, f1_scores_mean[0], f1_scores_mean[1]))
###Output
cln, F1: 0.82 0.80
rnd, F1: 0.81 0.79
deg, F1: 0.79 0.76
eig, F1: 0.80 0.78
our, F1: 0.76 0.74
###Markdown
Import modules
###Code
from pyvad import vad, trim
from librosa import load
import matplotlib.pyplot as plt
import numpy as np
import IPython.display
###Output
_____no_output_____
###Markdown
Speech data load
###Code
name = "test/voice/arctic_a0007.wav"
data, fs = load(name)
time = np.linspace(0, len(data)/fs, len(data)) # time axis
plt.plot(time, data)
plt.show()
###Output
_____no_output_____
###Markdown
Do VAD (int)
###Code
%time vact = vad(data, fs, fs_vad = 16000, hoplength = 30, vad_mode=3)
###Output
CPU times: user 93.9 ms, sys: 3.26 ms, total: 97.1 ms
Wall time: 96.2 ms
###Markdown
Plot result
###Code
fig, ax1 = plt.subplots()
ax1.plot(time, data, color = 'b', label='speech waveform')
ax1.set_xlabel("TIME [s]")
ax2=ax1.twinx()
ax2.plot(time, vact, color="r", label = 'vad')
plt.yticks([0, 1] ,('unvoice', 'voice'))
ax2.set_ylim([-0.01, 1.01])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
trim
###Code
%time trimed = trim(data, fs, fs_vad = 16000, hoplength = 30, vad_mode=3)
###Output
CPU times: user 101 ms, sys: 4.06 ms, total: 105 ms
Wall time: 106 ms
###Markdown
Plot result
###Code
time = np.linspace(0, len(trimed)/fs, len(trimed)) # time axis
fig, ax1 = plt.subplots()
ax1.plot(time, trimed, color = 'b', label='speech waveform')
ax1.set_xlabel("TIME [s]")
plt.show()
###Output
_____no_output_____
###Markdown
Example usage of the gridcell package
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Importing from files We will use recorded data stored in the file ``../data/FlekkenBen.mat``. We have to manually work through this file and present the data it contains in a way that the ``gridcell`` package can understand. To this end, functions from the ``transform`` module may come in handy, both for formatting the data and transforming it to axes we want to use.
###Code
# Select data source
datafile = '../../data/FlekkenBen/data.mat'
# Load raw data from file
from scipy import io
raw_data = io.loadmat(datafile, squeeze_me=True)
#print(raw_data)
# Create sessions dict from the data
from gridcell import transform
positions = [pos.T for pos in raw_data['allpos']]
spike_times = raw_data['allst']
data = transform.sessions(positions, spike_times)
# Transform axes
tight_range = ((-74.5, 74.5), (-74.5, 74.5))
data = transform.transform_sessions(data, global_=False,
range_=tight_range,
translate=True, rotate=True)
###Output
_____no_output_____
###Markdown
Setting up the CellCollection The representation of the data provided by ``data`` is just a temporary interface. The functionality of the package is provided mainly through a class ``Cell`` representing the cells, and a container class ``CellCollection`` representing several cells. The standardized dataset representation from ``transform.sessions`` can be used to initialize an instance of ``CellCollection``, creating ``Cell`` instances for each cell in the process.
###Code
# Define the binning of the experimental environment
bins = (50, 50)
range_ = ((-75.0, 75.0), (-75.0, 75.0))
# Set filter parameters (use the same length unit as range_,
# and the same time unit as in the raw data)
speed_window = 0.5
min_speed = 5.0
position_kw = dict(speed_window=speed_window, min_speed=min_speed)
bandwidth = 3.3
threshold = 0.2 # Only a default
cell_kw = dict(bandwidth=bandwidth, threshold=threshold)
# Instantiate CellCollection
from gridcell import CellCollection
cells = CellCollection.from_multiple_sessions(
data, bins, range_, position_kw=position_kw, cell_kw=cell_kw)
print("Number of cells: {}".format(len(cells)))
###Output
Number of cells: 176
###Markdown
Note that the ``CellCollection.from_multiple_sessions`` constructor takes a number of arguments affecting different aspects of the analysis. See the documentation for details. Plotting and iterating the parameters
###Code
# To improve on the matplotlib aesthetics, we import the seaborn
# library and choose some nice colormaps
import seaborn
seaborn.set(rc={'figure.facecolor': '.98', 'legend.frameon': True})
ratecmap = 'YlGnBu_r'
corrcmap = 'RdBu_r'
###Output
_____no_output_____
###Markdown
Now, let's take a look at what we just created. The ``CellCollection`` instance can be accessed (and modified) like a list.
###Code
# Select a cell to have a closer look at
cell = cells[109]
###Output
_____no_output_____
###Markdown
Let's begin by plotting the raw data -- the path of the rat, with the spike locations of this cell superimposed.
###Code
# Create a square patch representing the experimental environment
from matplotlib import patches
xmin, xmax = range_[0]
ymin, ymax = range_[1]
dx, dy = xmax - xmin, ymax - ymin
box = patches.Rectangle((xmin, ymin), dx, dy,
fill=False, label="Box")
# Plot the path and spikes
with seaborn.axes_style('ticks'):
path = cell.position.plot_path(label='Path')[0]
axes = path.axes
cell.plot_spikes(axes=axes, alpha=0.2, label='Spikes')
axes.add_patch(box)
axes.set(xlim=[xmin - 0.05 * dx, xmax + 0.55 * dx],
ylim=[ymin - 0.05 * dy, ymax + 0.05 * dy],
xticks=[xmin, xmax], yticks=[xmin, xmax])
axes.legend(loc=5)
seaborn.despine(offset=0, trim=True)
###Output
_____no_output_____
###Markdown
That looks promising. Let's plot the firing rate map. This map has been passed through a smoothing filter whose width is given by the ``bandwidth`` parameter in the ``CellCollection`` instantiation.
###Code
cell.plot_ratemap(cmap=ratecmap)
###Output
_____no_output_____
###Markdown
This definitely looks like a grid cell, with firing fields spread out in a nice pattern. However, the difference in firing field strength is substantial. Let's see how the autocorrelogram looks.
###Code
cell.plot_acorr(cmap=corrcmap)
###Output
_____no_output_____
###Markdown
Pretty nice. But how does the default threshold work with those weak peaks?
###Code
cell.plot_acorr(cmap=corrcmap, threshold=True)
###Output
_____no_output_____
###Markdown
Two of the peaks are too low for this threshold. Let's find out what the threshold for this cell should be, assuming as a rule that the threshold should be as close as possible to the default value (0.20), while allowing all six inner peaks to be identified and separated from each other and background noise, with at least four pixels per peak above the threshold.
###Code
cell.plot_acorr(cmap=corrcmap, threshold=0.12)
###Output
_____no_output_____
###Markdown
That's it! We had to go all the way down to 0.12 to get the required four pixels per peak. Let's update the ``'threshold'`` parameter of the cell to reflect this
###Code
cell.params['threshold'] = 0.12
###Output
_____no_output_____
###Markdown
We should check that the problem has been fixed:
###Code
cell.plot_acorr(cmap=corrcmap, threshold=True,
grid_peaks=True, grid_ellipse=True)
###Output
_____no_output_____
###Markdown
Notice how the detected peak centers, and the ellipse fitted through them, were added using the keywords ``grid_peaks`` and ``grid_ellipse``. These keywords are provided for convenience, and uses hardcoded defaults for the appearance of the peaks and ellipse. For more fine grained control, use the ``plot_grid_peaks`` and ``plot_grid_ellipse`` methods of the ``Cell`` instance instead.
###Code
cell.plot_acorr(cmap=corrcmap, threshold=False)
cell.plot_grid_peaks(marker='^', color='green', markersize=20)
cell.plot_grid_ellipse(smajaxis=False, minaxis=True, color='magenta',
linewidth=4, zorder=3)
# There are other cells requiring custom thresholds
cells[0].params['threshold'] = 0.17
cells[8].params['threshold'] = 0.31
cells[13].params['threshold'] = 0.21
cells[31].params['threshold'] = 0.11
cells[40].params['threshold'] = 0.08
cells[43].params['threshold'] = 0.09
cells[59].params['threshold'] = 0.18
cells[63].params['threshold'] = 0.27
cells[80].params['threshold'] = 0.18
cells[82].params['threshold'] = 0.16
cells[98].params['threshold'] = 0.19
cells[109].params['threshold'] = 0.12
cells[118].params['threshold'] = 0.40 # Or just 0.20
cells[128].params['threshold'] = 0.22
cells[129].params['threshold'] = 0.17
cells[133].params['threshold'] = 0.22
cells[150].params['threshold'] = 0.10
cells[153].params['threshold'] = 0.19
cells[159].params['threshold'] = 0.17
cells[160].params['threshold'] = 0.19
cells[161].params['threshold'] = 0.19
cells[162].params['threshold'] = 0.16
cells[168].params['threshold'] = 0.45 # Or 0.64
del cells[146] # Won't work using the default settings
###Output
_____no_output_____
###Markdown
Clustering and modulesThe next step is to try to cluster the cells into modules. There are several clustering algorithms available for this purpose. Here, we use the K-means algorithm, implemented using the ``k_means`` function from ``scikit-learn``. We anticipate 4 modules.
###Code
# Find modules among the cells
# The grid scale is weighted a little more than the other features
# when clustering
feat_kw = dict(weights={'logscale': 2.1})
k_means_kw = dict(n_clusters=4, n_runs=10, feat_kw=feat_kw)
# We expect 4 modules
from gridcell import Module
labels = cells.k_means(**k_means_kw)
modules, outliers = Module.from_labels(cells, labels)
modules.sort(key=lambda mod: mod.template().scale())
###Output
_____no_output_____
###Markdown
All clustering methods have a common return signature: ``modules, outliers``. The variable ``modules`` is a list containing a ``Module`` instance for each of the detected modules. ``Module`` is a subclass of ``CellCollection``, implementing some extra module-specific functionality for analyzing the phases of the cells in the module. The variable ``outliers`` is a CellCollection instance containing the cells that were not assigned to any module. When using the K-means algorithm, all cells are assigned to a module, so ``outliers`` is empty. Let's take a look at the clustering by plotting the scales, orientation angles and ellipse parameters of the cells in each module next to each other.
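As a quick sanity check, relying only on the list-like behaviour of ``CellCollection`` described earlier, we can confirm how the cells were distributed and that no outliers were produced:

```python
# K-means assigns every cell to a cluster, so no outliers are expected here
print("Cells per module:", [len(mod) for mod in modules])
print("Number of outliers:", len(outliers))
```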
###Code
for (i, mod) in enumerate(modules):
line = mod.plot_features(('scale',), label="Module {}".format(i + 1))[0]
axes = line.axes
axes.set_ylim(bottom=0.0)
axes.legend(loc=0)
for (i, mod) in enumerate(modules):
line = mod.plot_ellpars(label="Module {}".format(i + 1))[0]
axes = line.axes
axes.legend(loc=0)
###Output
_____no_output_____
###Markdown
First generate a few simple fisher matrices and compare on a triangle plot
###Code
import numpy as np

import pyranha

# Read in configuration file
pyr = pyranha.Pyranha("configurations/example.py")
# Compute instrument and cosmological spectra.
pyr.compute_instrument()
pyr.compute_cosmology()
# Compute single fisher matrix for this setup.
fisher = pyr.fisher()
# Now change some specific parameters and recompute the fisher matrix.
pyr.delensing = True
pyr.delensing_factor = 0.1
fisher_lens = pyr.fisher()
# Again change some parameters!
pyr.map_res = 0.02
pyr.compute_instrument()
fisher_fgnd = pyr.fisher()
# Overlay the three cases we have computed on a triangle plot to compare.
pyranha.plot_fisher_corner([fisher, fisher_fgnd, fisher_lens], [r'LiteBIRD', r'LiteBIRD + 2%, foreground', r'LiteBIRD + 90% delensing'], opath='plots/triangle.pdf')
###Output
_____no_output_____
###Markdown
Now we can iterate over one of the instrumental parameters, keeping the cosmological parameters fixed, which is relatively quick. In this case we iterate over the sky fraction observed.
###Code
pyr = pyranha.Pyranha("configurations/example.py")
xarr = np.linspace(0.2, 0.8, 20)
fishers = pyr.iterate_instrument_parameter_1d('fsky', xarr)
pyranha.plot_fisher_1d(xarr, [fishers], ['LiteBIRD'], xlabel=r'$f_{\rm sky}$')
###Output
_____no_output_____
###Markdown
Finally, we can iterate over two parameters together to create a contour plot of sigma_r.
###Code
pyr = pyranha.Pyranha("configurations/example.py")
xarr = np.arange(2, 40)
yarr = np.arange(200, 240)
fishers2d = pyr.iterate_instrument_parameter_2d('lmin', 'lmax', xarr, yarr)
pyranha.plot_fisher_2d(xarr, yarr, fishers2d, xlabel=r'$\ell_{\rm min}$', ylabel=r'$\ell_{\rm max}$', opath="plots/lmin_lmax.pdf")
pyr = pyranha.Pyranha("configurations/example.py")
xarr = np.arange(2, 40)
yarr = np.linspace(0.1, 1., 20)
fishers2d = pyr.iterate_instrument_parameter_2d('lmin', 'fsky', xarr, yarr)
pyranha.plot_fisher_2d(xarr, yarr, fishers2d, xlabel=r'$\ell_{\rm min}$', ylabel=r'$f_{\rm sky}$', opath="plots/lmin_fsky.pdf")
pyr = pyranha.Pyranha("configurations/example.py")
pyr.map_res = 0.02
xarr = np.linspace(0.1, 1., 20)
yarr = np.linspace(0.1, 1., 20)
fishers2d = pyr.iterate_instrument_parameter_2d('delensing_factor', 'fsky', xarr, yarr)
pyranha.plot_fisher_2d(xarr, yarr, fishers2d, xlabel=r'$f_{\rm delens}$', ylabel=r'$f_{\rm sky}$', opath="plots/delens_fsky.pdf")
pyr = pyranha.Pyranha("configurations/example.py")
xarr = np.linspace(0., 0.05, 20)
yarr = np.linspace(0.1, 1., 20)
fishers2d = pyr.iterate_instrument_parameter_2d('map_res', 'fsky', xarr, yarr)
pyranha.plot_fisher_2d(xarr, yarr, fishers2d, xlabel=r'$f_{\rm res}$', ylabel=r'$f_{\rm sky}$', opath="plots/mapres_fsky.pdf")
###Output
_____no_output_____
###Markdown
Let's load pretrained BERT token vectors and project them to 2D space using t-SNE. Load data
###Code
f = "bert-base-uncased.30522.768d.vec"
# f = 'bert-base-cased.28996.768d.vec'
# f = 'bert-base-multilingual-uncased.105879.768d.vec'
# f = 'bert-base-chinese.21128.768d.vec'
# f = 'bert-base-multilingual-cased.119547.768d.vec'
# f = 'bert-base-uncased.30522.768d.vec'
# f = 'bert-large-cased.28996.1024d.vec'
model = gensim.models.KeyedVectors.load_word2vec_format(f, binary=False)
###Output
_____no_output_____
###Markdown
Find most related tokens
###Code
model.most_similar("look")
###Output
_____no_output_____
###Markdown
Plot
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne(query, topn=10):
    results = model.wv.similar_by_word(query, topn=topn)
    words = [query] + [r[0] for r in results]
    # `words` already contains the query, so just stack one vector per word
    wordvectors = np.array([model[w] for w in words], np.float32)
    reduced = TSNE(n_components=2).fit_transform(wordvectors)
plt.figure(figsize=(20, 20), dpi=100)
max_x = np.amax(reduced, axis=0)[0]
max_y = np.amax(reduced, axis=0)[1]
plt.xlim((-max_x, max_x))
plt.ylim((-max_y, max_y))
plt.scatter(reduced[:, 0], reduced[:, 1], s=20, c=["r"] + ["b"]*(len(reduced)-1))
for i in range(len(words)):
target_word = words[i]
# print(target_word)
x = reduced[i, 0]
y = reduced[i, 1]
plt.annotate(target_word, (x, y))
plt.axis('off')
plt.show()
tsne("look", 30)
tsne("##go", 30)
###Output
/Users/ryan/pytorch1.0/lib/python3.6/site-packages/ipykernel_launcher.py:3: DeprecationWarning: Call to deprecated `wv` (Attribute will be removed in 4.0.0, use self instead).
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
###Code
!git clone https://github.com/Siahkamari/UCI-model-testing-pipline.git
%cd /content/UCI-model-testing-pipline
def get_model(name):
from sklearn.model_selection import GridSearchCV
xgb_params = {'max_depth': [1, 2, 3 , 4, 5, 6, 7, 8, 9, 10],
'learning_rate': [0.01, 0.05, .1, 0.5]}
## Regression
if name=="lasso":
from sklearn.linear_model import LassoCV
model = LassoCV(cv=5, random_state=0, max_iter=10000, n_jobs=-1)
elif name=="xgbr":
from xgboost import XGBRegressor
xgb = XGBRegressor(objective="reg:squarederror", seed = 0)
model = GridSearchCV(estimator=xgb,
param_grid=xgb_params,
n_jobs=-1)
## Classification
elif name=="xgbc":
from xgboost import XGBClassifier
xgb = XGBClassifier(seed = 0)
model = GridSearchCV(estimator=xgb,
param_grid=xgb_params,
n_jobs=-1)
elif name=="lgr":
from sklearn.linear_model import LogisticRegressionCV
model = LogisticRegressionCV(cv=5, random_state=0, max_iter=10000,
n_jobs=-1)
return model
from utils import test
data_names = [ # n x dim : xgboost seconds
'solar_flare', # 1066 x 23 : 13.7xgbs
'airfoil_self_noise', # 1503 x 5 : 15.3xgbs
'concrete_data', # 1030 x 8 : 17.9xgbs
'garment_productivity', # 905 x 37 : 20.6xgbs
'CCPP', # 9568 x 4 : 29.9xgbs
'geographical_original_of_music', # 1059 x 68 : 37.7xgbs
'communities', # 1994 x 122 : 42.6xgbs
'air_quality', # 7110 x 21 : 45.9xgbs
'wine_quality', # 4898 x 11 : 56.0xgbs
'bias_correction_ucl', # 6200 x 52 : 57.3xgbs
'sml2010', # 3000 x 24 : 86.6xgbs
'bike_sharing', # 6570 x 19 : 123.xgbs
'parkinson_updrs', # 4406 x 25 : 134.xgbs
'abalone', # 4177 x 10
]
model_names = ["lasso",
"xgbr"
]
model_list = [get_model(model_name) for model_name in model_names]
for data_name in data_names:
test(data_name, model_list, n_folds=5)
from utils import test
data_names = [ # n x dim xgboost seconds
'iris', # 149 x 4 4.5s
'wine', # 178 x 13 5.4s
'transfusion', # 748 x 4 5.6s
'ionosphere', # 351 x 34 8.7s
'wdbc', # 569 x 30 10.7s
'balance_scale', # 625 x 4 11.6s
'coil_2000', # 5822 x 85 240s
'adult', # 32561 x 64
]
model_names = ["lgr",
"xgbc"
]
model_list = [get_model(model_name) for model_name in model_names]
for data_name in data_names:
test(data_name, model_list, n_folds=5)
###Output
_____no_output_____
###Markdown
Loading toy dataset. The dataset should be time sorted.
###Code
import pandas as pd

# loading the dataset
df = pd.read_csv("example_dataset/trilux_dataset.csv")
# filtering
df = (
df
.loc[df["trabajos_id"]==1]
.filter(items=["fecha", "chla", "tby", "pc"])
.set_index("fecha")
.sort_index()
)
df
###Output
_____no_output_____
###Markdown
Detecting a single sample
###Code
detect_outliers?
# first 100 samples as past data
past_data = df.iloc[:100]
# next sample as current data
current_data = df.iloc[101].to_frame().T
# detect if is outlier with flexibility 3 (a relaxed approach)
detect_outliers(past_data, current_data, 3)
###Output
_____no_output_____
###Markdown
The call returns 1, which means that sample 101 isn't an outlier (based on the previous 100 samples). Detecting outliers on a dataset, with a rolling window
###Code
# select 10k samples for time efficiency
N_SAMPLES = 10000 # let's perform outlier detection on 10k samples
WINDOW_SIZE = 100 # analyze 100 samples in order to classify the next one
ELASTICITY = 3 # relaxed detector
mini_df = df.iloc[:N_SAMPLES]
results = rolling_outlier_detector(mini_df, WINDOW_SIZE, ELASTICITY)
###Output
100%|██████████| 10000/10000 [09:30<00:00, 17.54it/s]
###Markdown
Plotting results
###Code
import plotly.express as px

px.scatter(mini_df.reset_index(), x="fecha", y="chla", color=results)
###Output
_____no_output_____
###Markdown
The images are ordered according to the video sequence of a cardiac contraction
###Code
category = 'Norma'
patient = '02'
imgs, msks = data.get_sequence(patient, category)
plt.figure(figsize=(10,10))
plt.subplot(121)
plt.imshow(imgs[0])
plt.axis('off')
plt.title(r'$min = ' + str(np.min(imgs[0])) + '; max = ' + str(np.max(imgs[0])) + '$')
plt.subplot(122)
plt.imshow(msks[0])
plt.axis('off')
plt.title(r'$min = ' + str(np.min(msks[0])) + '; max = ' + str(np.max(msks[0])) + '$')
###Output
_____no_output_____
###Markdown
Algorithm parameters:- the number of levels in the Gaussian pyramid of the image;- the window size;- the number of tracked points;The output is a list of LV (left ventricle) region masks and the coordinates of the contour points.
###Code
lk = LucasKanade(gauss_layers=1, window=61, num_points = 17)
pred_msks, pred_points = lk.predict(imgs, area2cont(msks[0]))
index = 5
plt.figure(figsize=(15,10))
plt.subplot(131)
plt.imshow(pred_msks[index])
plt.subplot(132)
plt.imshow(area2cont(pred_msks[index]))
plt.scatter([p[1] for p in pred_points[index]], [p[0] for p in pred_points[index]], c='r', marker='x');
plt.figure(figsize=(25,6))
num_images = 20
for i, (img, msk, p_msk) in enumerate(zip(imgs, msks, pred_msks)):
if i == num_images: break
plt.subplot(2,num_images,i+1)
plt.imshow(img)
plt.contour(msk, 0, colors = 'g');
plt.contour(p_msk, 0, colors = 'r', linestyles='dashed');
plt.xlim(200, 350)
plt.ylim(50, 400)
plt.axis('off')
plt.gca().invert_yaxis()
for i, (img, msk, p_points) in enumerate(zip(imgs, msks, pred_points)):
if i == num_images: break
plt.subplot(2,num_images,num_images+i+1)
plt.imshow(img)
plt.contour(msk, 0, colors = 'g');
plt.scatter([p[1] for p in p_points], [p[0] for p in p_points], c='r', marker='x');
plt.xlim(200, 350)
plt.ylim(50, 400)
plt.axis('off')
plt.gca().invert_yaxis()
###Output
/home/vasily/.virtualenvs/cnn_course/lib/python3.6/site-packages/ipykernel_launcher.py:7: UserWarning: No contour levels were found within the data range.
import sys
/home/vasily/.virtualenvs/cnn_course/lib/python3.6/site-packages/ipykernel_launcher.py:8: UserWarning: No contour levels were found within the data range.
/home/vasily/.virtualenvs/cnn_course/lib/python3.6/site-packages/ipykernel_launcher.py:18: UserWarning: No contour levels were found within the data range.
###Markdown
Dependencies
###Code
from onn.OnlineNeuralNetwork import ONN
from onn.OnlineNeuralNetwork import ONN_THS
from sklearn.datasets import make_classification, make_circles
from sklearn.model_selection import train_test_split
import torch
from sklearn.metrics import accuracy_score, balanced_accuracy_score
from imblearn.datasets import make_imbalance
import numpy as np
###Output
_____no_output_____
###Markdown
Initialize the Network
###Code
onn_network = ONN(features_size=10, max_num_hidden_layers=5, qtd_neuron_per_hidden_layer=40, n_classes=10)
###Output
_____no_output_____
###Markdown
Creating Fake Classification Dataset
###Code
X, Y = make_classification(n_samples=50000, n_features=10, n_informative=4, n_redundant=0, n_classes=10,
n_clusters_per_class=1, class_sep=3)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42, shuffle=True)
###Output
_____no_output_____
###Markdown
Learning and Predicting at the Same Time
###Code
for i in range(len(X_train)):
onn_network.partial_fit(np.asarray([X_train[i, :]]), np.asarray([y_train[i]]))
if i % 1000 == 0:
predictions = onn_network.predict(X_test)
print("Online Accuracy: {}".format(balanced_accuracy_score(y_test, predictions)))
###Output
Online Accuracy: 0.12345474254226929
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.83978504 0.04005372 0.04005372 0.04005372 0.04005372]
Training Loss: 1.1249603
Online Accuracy: 0.9546213619299208
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.83990383 0.04002402 0.04002402 0.04002402 0.04002402]
Training Loss: 0.2449453
Online Accuracy: 0.9607803207573932
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.83983886 0.04004026 0.04004026 0.04004026 0.04004026]
Training Loss: 0.20816919
Online Accuracy: 0.9607183488393096
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8393975 0.04059466 0.04000259 0.04000259 0.04000259]
Training Loss: 0.18339467
Online Accuracy: 0.9655154179731016
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8353383 0.04421487 0.0403742 0.04003632 0.04003632]
Training Loss: 0.24723394
Online Accuracy: 0.9662435981100199
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8395832 0.04028165 0.04004847 0.04004331 0.04004331]
Training Loss: 0.16647828
Online Accuracy: 0.9619529567236043
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8399791 0.04000521 0.04000521 0.04000521 0.04000521]
Training Loss: 0.16890837
Online Accuracy: 0.9618319844925818
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8353959 0.04115101 0.04115101 0.04115101 0.04115101]
Training Loss: 0.14621308
Online Accuracy: 0.9707971883179001
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8376416 0.04203499 0.04023731 0.04004303 0.04004303]
Training Loss: 0.12924133
Online Accuracy: 0.9638287109074417
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.836956 0.04227248 0.04061536 0.04011491 0.04004124]
Training Loss: 0.1682845
Online Accuracy: 0.9686992001807786
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.82588196 0.04522423 0.04461598 0.04348376 0.04079402]
Training Loss: 0.16665892
Online Accuracy: 0.9614922833157268
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.82686603 0.04565494 0.04267894 0.0426919 0.04210816]
Training Loss: 0.23211034
Online Accuracy: 0.969421407533791
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8133861 0.04747684 0.04714322 0.04724298 0.04475088]
Training Loss: 0.20936987
Online Accuracy: 0.9644026201542217
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.8163941 0.0501083 0.04570407 0.04479402 0.0429996 ]
Training Loss: 0.18542005
Online Accuracy: 0.9713595800808452
WARNING: Set 'show_loss' to 'False' when not debugging. It will deteriorate the fitting performance.
Alpha:[0.82819635 0.04402507 0.04409818 0.04283734 0.04084307]
Training Loss: 0.13875505
Online Accuracy: 0.9683228111229731
###Markdown
An example notebook A [Jupyter notebook](http://jupyter.org/) mixes blocks of explanatory text, like the one you're reading now, with cells containing Python code (_inputs_) and the results of executing it (_outputs_). The code and its output, if any, are marked by `In [N]` and `Out [N]`, respectively, with `N` being the index of the cell. You can see an example in the computations below:
###Code
def f(x, y):
return x + 2*y
a = 4
b = 2
f(a, b)
###Output
_____no_output_____
###Markdown
By default, Jupyter displays the result of the last instruction as the output of a cell, like it did above; however, `print` statements can display further results.
###Code
print(a)
print(b)
print(f(b, a))
###Output
4
2
10
###Markdown
Jupyter also knows a few specific data types, such as Pandas data frames, and displays them in a more readable way:
###Code
import pandas as pd
pd.DataFrame({ 'foo': [1,2,3], 'bar': ['a','b','c'] })
###Output
_____no_output_____
###Markdown
The index of the cells shows the order of their execution. Jupyter doesn't constrain it; to avoid confusing people, though, it is best to write your notebooks so that the cells are executed in the sequential order in which they are displayed. All cells are executed in the global Python scope; this means that, as we execute the code, all variables, functions and classes defined in a cell are available to the ones that follow. Notebooks can also include plots, as in the following cell:
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
f = plt.figure(figsize=(10,2))
ax = f.add_subplot(1,1,1)
ax.plot([0, 0.25, 0.5, 0.75, 1.0], np.random.random(5))
###Output
_____no_output_____
###Markdown
As you might have noted, the cell above also printed a textual representation of the object returned from the plot, since it's the result of the last instruction in the cell. To prevent this, you can add a semicolon at the end, as in the next cell.
###Code
f = plt.figure(figsize=(10,2))
ax = f.add_subplot(1,1,1)
ax.plot([0, 0.25, 0.5, 0.75, 1.0], np.random.random(5));
###Output
_____no_output_____
###Markdown
Build thematic corpora * Build the final thematic corpora by specifying a spatial extent and the thematic vocabulary concepts file
###Code
### Automate for all voc_concepts files
# files_list = glob.glob("./voc_concept/agriculture.txt")
voc_concept_file = "./voc_concept/agriculture.txt"
spatial_extent = 'montpellier'
# mgdb,mgcol = 'inventaire_medo', 'agriculture' # parameters to be set initially
# for voc_concept in files_list:
advanced_scraper(spatial_extent,voc_concept_file,voc_concept_file)
###Output
Creating the folders for Montpellier
Creating the Documents_SRC folder
Reading keywords.txt ...
Starting the search for documents about the city of Montpellier
Generating queries
New query: "Montpellier" AND production?agricole
URL : https://languedoc.msa.fr/
Document saved.
Web document converted to plain text.
Text cleaned.
Document inserted into the inventory.
INFO:collecteDeDonnees_19-10-21:Document inserted into the inventory.
URL : https://www.umontpellier.fr/articles/tag/agriculture
INFO:collecteDeDonnees_19-10-21:URL : https://www.umontpellier.fr/articles/tag/agriculture
Document saved.
INFO:collecteDeDonnees_19-10-21:Document saved.
Web document converted to plain text.
INFO:collecteDeDonnees_19-10-21:Web document converted to plain text.
Text cleaned.
INFO:collecteDeDonnees_19-10-21:Text cleaned.
###Markdown
Test Differentiable Neural Computer. Create synthetic input data `X` of dimension *NxM* where the first *N/2* rows consist of ones and zeros and the last *N/2* rows are zeros. The order of the rows is flipped for the target `y` (the first *N/2* rows are zeros now). The *DNC* needs to keep this in memory and predict `y` correctly.
###Code
import logging
import numpy as np
import tensorflow as tf
from model import DNC
from trainer import trainer
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
Generate training data
###Code
rows, cols = 6, 4
ones = np.random.randint(0, cols, size=rows)
seq = np.zeros((rows, cols))
seq[np.arange(rows), ones] = 1
zer = np.zeros((rows, cols))
X = np.concatenate((seq, zer), axis=0).astype(np.float32)
y = np.concatenate((zer, seq), axis=0).astype(np.float32)
for i in range(rows):
assert (X[i, :] == y[rows+i,:]).all()
X_train = np.expand_dims(X, axis=0)
y_train = np.expand_dims(y, axis=0)
###Output
_____no_output_____
###Markdown
Initialize and train DNC model. Initialize:
###Code
dnc = DNC(
output_dim=cols,
memory_shape=(10,4), # shape of memory matrix
n_read=1 # nb of read heads
)
###Output
_____no_output_____
###Markdown
Train:
###Code
trainer(
model=dnc,
loss_fn=tf.keras.losses.mse,
X_train=X_train,
y_train=y_train,
epochs=2000,
batch_size=1,
verbose=False
)
###Output
_____no_output_____
###Markdown
Predict on `X`:
###Code
y_pred = dnc(X).numpy()
###Output
_____no_output_____
###Markdown
Check if the predictions are almost the same as the ground truth `y`:
###Code
np.testing.assert_almost_equal(y_pred, y, decimal=2)
np.set_printoptions(precision=3)
print('Prediction: ')
print(y_pred)
print('\nGround truth: ')
print(y)
###Output
Prediction:
[[-1.922e-03 1.810e-03 -1.225e-04 1.335e-03]
[-2.168e-03 2.258e-04 -1.364e-05 2.740e-03]
[ 4.904e-04 -6.639e-04 -1.084e-03 -1.633e-03]
[-2.993e-03 1.132e-03 -1.938e-04 4.551e-03]
[-7.027e-04 -2.482e-03 -7.492e-04 -1.286e-05]
[-9.379e-04 -6.188e-04 2.214e-03 -1.392e-03]
[-1.105e-03 -1.609e-03 8.935e-04 9.934e-01]
[-1.471e-03 9.989e-01 -2.230e-05 4.435e-03]
[-2.691e-03 3.490e-03 9.998e-01 2.988e-03]
[ 7.629e-05 9.961e-01 -2.395e-04 5.796e-04]
[-4.813e-03 1.002e+00 1.611e-04 1.424e-03]
[-5.659e-04 9.966e-01 3.111e-04 1.470e-03]]
Ground truth:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 1.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]]
###Markdown
Calculating pi using Monte Carlo methods Here are the equations used in this exercise: - square area = $(2 r)^2$ - circle area = $\pi r^2$ - circle / square = $\pi r^2 / (2 r)^2 = \pi / 4$ - $\pi$ = 4 * (circle/square) Here is an image which explains the exercise: ![Darts](https://coderefinery.github.io/jupyter/img/darts.svg) Import the random module:
###Code
import random
###Output
_____no_output_____
###Markdown
Initialize variables:
###Code
N = 1000
points = []
###Output
_____no_output_____
###Markdown
“Throw darts”:
###Code
hits = 0
for i in range(N):
x, y = random.random(), random.random()
if x**2 + y**2 < 1.0:
hits += 1
points.append((x, y, True))
else:
points.append((x, y, False))
###Output
_____no_output_____
###Markdown
Plot results:
###Code
%matplotlib inline
from matplotlib import pyplot
x, y, colors = zip(*points)
pyplot.scatter(x, y, c=colors)
###Output
_____no_output_____
###Markdown
Compute final estimate of pi:
###Code
fraction = hits / N
4 * fraction
###Output
_____no_output_____
###Markdown
Widgets add more interactivity to Notebooks, allowing one to visualize and control changes in data, parameters etc. Use interact as a function
###Code
from ipywidgets import interact
def f(x, y, s):
return (x, y, s)
interact(f, x=True, y=1.0, s="Hello");
###Output
_____no_output_____
###Markdown
Use interact as a decorator
###Code
@interact(x=True, y=1.0, s="Hello")
def g(x, y, s):
return (x, y, s)
@interact
def plot_points(n=(1,10)):
# we plot every n-th point
x, y, colors = zip(*points[::n])
pyplot.scatter(x, y, c=colors)
import numpy as np
from ipywidgets import interact
import matplotlib.pyplot as plt
%matplotlib inline
def gaussian(x, a, b, c):
return a * np.exp(-b * (x-c)**2)
def noisy_gaussian():
# gaussian array y in interval -5 <= x <= 5
nx = 100
x = np.linspace(-5.0, 5.0, nx)
y = gaussian(x, a=2.0, b=0.5, c=1.5)
noise = np.random.normal(0.0, 0.2, nx)
y += noise
return x, y
def fit(x, y, n):
pfit = np.polyfit(x, y, n)
yfit = np.polyval(pfit, x)
return yfit
def plot(x, y, yfit):
plt.plot(x, y, "r", label="Data")
plt.plot(x, yfit, "b", label="Fit")
plt.legend()
plt.ylim(-0.5, 2.5)
plt.show()
x, y = noisy_gaussian()
@interact
def slider(n=(3, 30)):
yfit = fit(x, y, n)
plot(x, y, yfit)
###Output
_____no_output_____
###Markdown
Generate the key set. In the paper they took a subset of the dataset and assigned random labels to it in order to combat query modification. However, that altered the validation accuracy too much. For simplicity, we will just invert the pixels of a subset of the training dataset and assign random labels to it.
###Code
def invert(x, y):
return tf.abs(x - 1.0), tf.convert_to_tensor(random.randint(0, 9), dtype=tf.int64)
key_set = dataset.take(128)
key_set = key_set.map(invert)
dataset = dataset.skip(128)
###Output
_____no_output_____
###Markdown
An easy way to achieve a high accuracy on the key set is to overfit our model on the key set, since it doesn't have to generalize.
###Code
key_set = key_set.concatenate(key_set).concatenate(key_set).concatenate(key_set).concatenate(key_set).concatenate(key_set)
union = dataset.concatenate(key_set)
dataset = dataset.shuffle(2048).batch(128).prefetch(AUTOTUNE)
union = union.shuffle(2048).batch(128).prefetch(AUTOTUNE)
val_set = val_set.batch(128)
###Output
_____no_output_____
###Markdown
t is the 'temperature' hyperparameter. The higher t is, the more the values of the weight matrix get squeezed; 2.0 was used in the paper.
###Code
t = 2.0
model = keras.Sequential([
EWConv2D(16, 3, t, padding="same", activation=keras.activations.relu),
EWConv2D(32, 3, t, padding="same", strides=2, activation=keras.activations.relu),
EWConv2D(64, 3, t, padding="same", strides=2, activation=keras.activations.relu),
keras.layers.Flatten(),
EWDense(10, activation=None, t=t)
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["sparse_categorical_accuracy"])
model.build(input_shape=(None, 28, 28, 1))
###Output
_____no_output_____
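###Markdown
To illustrate the squeezing effect of t, here is a minimal sketch of an exponential-weighting transform; this is an assumption about the general idea only, not necessarily the exact formula used inside `EWConv2D`/`EWDense`:
###Code
import tensorflow as tf

def exponential_weighting(w, t):
    # Sketch: scale each weight by exp(t*|w|), normalized by the largest factor,
    # so small-magnitude weights are pushed toward zero as t grows while the
    # largest weights keep (roughly) their original value.
    scale = tf.exp(t * tf.abs(w))
    return w * scale / tf.reduce_max(scale)

# e.g. exponential_weighting(tf.constant([0.01, 0.1, 1.0]), t=2.0)
###Output
_____no_output_____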
###Markdown
Train the model normally with exponential weighting disabled until it converges:
###Code
_ = model.fit(x=dataset, epochs=3, validation_data=val_set)
###Output
Epoch 1/3
468/468 [==============================] - 10s 21ms/step - loss: 0.4735 - sparse_categorical_accuracy: 0.8533 - val_loss: 0.1299 - val_sparse_categorical_accuracy: 0.9613
Epoch 2/3
468/468 [==============================] - 9s 20ms/step - loss: 0.1229 - sparse_categorical_accuracy: 0.9641 - val_loss: 0.1012 - val_sparse_categorical_accuracy: 0.9691
Epoch 3/3
468/468 [==============================] - 9s 19ms/step - loss: 0.0913 - sparse_categorical_accuracy: 0.9727 - val_loss: 0.0779 - val_sparse_categorical_accuracy: 0.9753
###Markdown
Enable exponential weighting and train the model on the union of the dataset and the key set in order to embed the watermark:
###Code
enable_ew(model)
_ = model.fit(x=union, epochs=2, validation_data=val_set)
###Output
Epoch 1/2
474/474 [==============================] - 9s 19ms/step - loss: 0.0753 - sparse_categorical_accuracy: 0.9776 - val_loss: 0.1095 - val_sparse_categorical_accuracy: 0.9646
Epoch 2/2
474/474 [==============================] - 9s 19ms/step - loss: 0.0672 - sparse_categorical_accuracy: 0.9795 - val_loss: 0.0645 - val_sparse_categorical_accuracy: 0.9794
###Markdown
Reset the optimizer. Disable exponential weighting and test the accuracy on the key set:
###Code
model.optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
disable_ew(model)
_, key_acc = model.evaluate(key_set.batch(128))
_, val_acc = model.evaluate(val_set)
print(f"Watermark accuracy is {round(key_acc * 100, 2)}%.")
print(f"Validation set accuracy is {round(val_acc * 100, 2)}%.")
###Output
6/6 [==============================] - 0s 20ms/step - loss: 0.0000e+00 - sparse_categorical_accuracy: 1.0000
79/79 [==============================] - 1s 7ms/step - loss: 0.0645 - sparse_categorical_accuracy: 0.9794
Watermark accuracy is 100.0%.
Validation set accuracy is 97.94%.
###Markdown
Overview: This example demonstrates how to scan query history from a data warehouse and save it in the data lineage app. The app automatically parses and extracts data lineage from the queries. The example consists of the following sequence of operations: * Start docker containers containing a demo. Refer to [docs](https://tokern.io/docs/data-lineage/installation) for detailed instructions on installing demo-wikimedia. * Scan and send queries from query history to the data lineage app. * Visualize the graph by visiting the Tokern UI. * Analyze the graph. Installation: This demo requires the wikimedia demo to be running. Start the demo using the following instructions: in a new directory run `wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/wikimedia-demo.yml` or run `curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/wikimedia-demo.yml -o docker-compose.yml`. Run docker-compose: `docker-compose up -d`. Verify the containers are running: `docker container ls | grep tokern`
###Code
# Required configuration for API and wikimedia database network address
docker_address = "http://127.0.0.1:8000"
wikimedia_db = {
"username": "etldev",
"password": "3tld3v",
"uri": "tokern-demo-wikimedia",
"port": "5432",
"database": "wikimedia"
}
import time
# Setup a connection to catalog using the SDK.
from data_lineage import Catalog, Scan
catalog = Catalog(docker_address)
# Register wikimedia datawarehouse with data-lineage app.
source = catalog.add_source(name="wikimedia", source_type="postgresql", **wikimedia_db)
# Scan the wikimedia data warehouse and register all schemata, tables and columns.
scan = Scan(docker_address)
job = scan.start(source)
# Wait for scan to complete
status = ""
while (status != "finished" and status != "failed"):
time.sleep(5)
status = scan.get(job["id"])["status"]
print("Status is {}".format(status))
import json
with open("test/queries.json", "r") as file:
queries = json.load(file)
from datetime import datetime
from data_lineage import Analyze
analyze = Analyze(docker_address)
for query in queries:
print(query)
analyze.analyze(**query, source=source, start_time=datetime.now(), end_time=datetime.now())
###Output
_____no_output_____
###Markdown
Load dataset
###Code
g = load_dataset('data/cora_ml.npz')
A, X, z = g['A'], g['X'], g['z']
###Output
_____no_output_____
###Markdown
Train a model and evaluate the link prediction performance
###Code
g2g = Graph2Gauss(A=A, X=X, L=64, verbose=True, p_val=0.10, p_test=0.05, p_nodes=0)
sess = g2g.train()
test_auc, test_ap = score_link_prediction(g2g.test_ground_truth, sess.run(g2g.neg_test_energy))
print('test_auc: {:.4f}, test_ap: {:.4f}'.format(test_auc, test_ap))
###Output
test_auc: 0.9753, test_ap: 0.9766
###Markdown
Train another model and evaluate the node classification performance
###Code
g2g = Graph2Gauss(A=A, X=X, L=64, verbose=True, p_val=0.0, p_test=0.00)
sess = g2g.train()
mu, sigma = sess.run([g2g.mu, g2g.sigma])
f1_micro, f1_macro = score_node_classification(mu, z, n_repeat=1, norm=True)
print('f1_micro: {:.4f}, f1_macro: {:.4f}'.format(f1_micro, f1_macro))
###Output
f1_micro: 0.8342, f1_macro: 0.8221
###Markdown
Train another model without the node attributes X
###Code
g2g = Graph2Gauss(A=A, X=A+sp.eye(A.shape[0]), L=64, verbose=True, p_val=0.0, p_test=0.00)
sess = g2g.train()
mu, sigma = sess.run([g2g.mu, g2g.sigma])
f1_micro, f1_macro = score_node_classification(mu, z, n_repeat=1, norm=True)
print('f1_micro: {:.4f}, f1_macro: {:.4f}'.format(f1_micro, f1_macro))
###Output
f1_micro: 0.7804, f1_macro: 0.7626
###Markdown
Creates a playlist with the most listened songs for each term - short term: 4 weeks - medium term: 6 months - long term: years
###Code
for term in ['short_term', 'medium_term', 'long_term']:
gen.create_playlist_for_top_songs(term)
###Output
_____no_output_____
###Markdown
Creates a playlist of recommended songs for a given term based on the spotify recommendation system
###Code
gen.create_recommendation_playlist_for_term("short_term")
###Output
_____no_output_____
###Markdown
Creates a playlist based on the time at which the songs were saved to the user's library. Creates one playlist for January to June and one for July to December of every year in which the user saved songs
###Code
gen.create_play_list_by_half_year()
###Output
_____no_output_____
###Markdown
Creates new playlists based on the similarities of the saved songs. The more playlists, the more similar the songs within each playlist will be
###Code
gen.cluster_songs(nClusters=4)
###Output
_____no_output_____
###Markdown
Creates a new playlist only with the top songs of the most listened artists in the given term
###Code
gen.create_playlist_for_top_artists("short_term")
###Output
_____no_output_____
###Markdown
About: This notebook demonstrates how to use `HMFlow`.
###Code
from astropy.io import fits
import astropy.units as u
from HMFlow.HMFlow import *
###Output
_____no_output_____
###Markdown
Data
###Code
# Numpy arrays
density = fits.open('example_data/density.fits')[0].data
vx = fits.open('example_data/vx.fits')[0].data
vy = fits.open('example_data/vy.fits')[0].data
vz = fits.open('example_data/vz.fits')[0].data
###Output
_____no_output_____
###Markdown
Load data. This creates an HMFlow3D object.
###Code
# mandatory parameter
pixscale = 5.*u.pc/512.
# optional parameters
unit_density = u.cm**-3. ## default is 1/cm^3; can be mass density such as g/cm^3
unit_velocity = u.km/u.s ## default is km/s
# Create an HMFlow3D object.
HMFlow = HMFlow3D(density, vx, vy, vz, pixscale, unit_density = unit_density, unit_velocity = unit_velocity)
###Output
_____no_output_____
###Markdown
Calculate the dendrogram
###Code
# mandatory parameters
min_value = 5e4 ## see astrodendro documentation
min_npix = 150
min_delta = 5e4
# optional parameter
periodic = True ## indicate whether the boxes are periodic; default is True
HMFlow.dendrogram(min_value = min_value, min_npix = min_npix, min_delta = min_delta, periodic = periodic)
###Output
Number of structures: 17
Number of leaves: 16
###Markdown
Calculate the flux and the mass flow; output in a csv file
###Code
# optional parameter
direc = 'output.csv' ## default is 'output.csv' in the local folder
HMFlow.calculate(direc = direc)
###Output
_____no_output_____
###Markdown
Dorado sensitivity calculator examples Imports
###Code
from astropy import units as u
from astropy.coordinates import GeocentricTrueEcliptic, get_sun, SkyCoord
from astropy.time import Time
from astropy.visualization import quantity_support
from matplotlib import pyplot as plt
import numpy as np
import synphot
import dorado.sensitivity
###Output
_____no_output_____
###Markdown
Plot filter efficiency. Note that this is converted from the effective area curve assuming a fiducial collecting area of 100 cm$^2$.
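In other words, the plotted efficiency is $A_\mathrm{eff}(\lambda) / (100\ \mathrm{cm}^2)$, so an efficiency of 0.5 at some wavelength corresponds to $50\ \mathrm{cm}^2$ of effective area.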
###Code
dorado.sensitivity.bandpasses.NUV_D.plot(ylog=True, title=r'$\mathrm{NUV}_\mathrm{D}$ sensitivity')
###Output
_____no_output_____
###Markdown
Example SNR calculation. This example is for a 10-minute observation of a flat-spectrum 21 AB mag source in "high" zodiacal light conditions (looking in the plane of the ecliptic, but anti-sunward), observing while on the night side of the Earth.
###Code
time = Time('2020-10-31 12:33:12')
sun = get_sun(time).transform_to(GeocentricTrueEcliptic(equinox=time))
coord = SkyCoord(sun.lon + 180*u.deg, 0*u.deg, frame=GeocentricTrueEcliptic(equinox=time))
source = synphot.SourceSpectrum(synphot.ConstFlux1D, amplitude=21 * u.ABmag)
dorado.sensitivity.get_snr(source, exptime=10*u.min, coord=coord, time=time, night=True)
###Output
_____no_output_____
###Markdown
Limiting magnitude calculation. Calculate the SNR=5 limiting magnitude as a function of exposure time for a flat-spectrum source at the position of NGC 4993.
###Code
ax = plt.axes()
ax.invert_yaxis()
ax.set_xlabel('Exposure time (s)')
ax.set_ylabel('Limiting magnitude (AB)')
exptimes = np.linspace(0, 1000) * u.s
coord = SkyCoord.from_name('NGC 4993')
time = Time('2017-08-17 17:54:00')
for night in [False, True]:
limmags = dorado.sensitivity.get_limmag(
synphot.SourceSpectrum(synphot.ConstFlux1D, amplitude=0 * u.ABmag), snr=5, exptime=exptimes, coord=coord, time=time, night=night)
ax.plot(exptimes, limmags, label='night' if night else 'day')
ax.legend()
###Output
/Users/lpsinger/Library/Caches/pypoetry/virtualenvs/dorado-sensitivity-RYVm8gWH-py3.8/lib/python3.8/site-packages/astropy/units/quantity.py:479: RuntimeWarning: divide by zero encountered in true_divide
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
/Users/lpsinger/Library/Caches/pypoetry/virtualenvs/dorado-sensitivity-RYVm8gWH-py3.8/lib/python3.8/site-packages/astropy/units/quantity.py:479: RuntimeWarning: divide by zero encountered in true_divide
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
###Markdown
Round trip check. Check that `get_limmag` is the inverse of `get_snr`.
###Code
for exptime, limmag in zip(exptimes, limmags):
print(dorado.sensitivity.get_snr(
synphot.SourceSpectrum(synphot.ConstFlux1D, amplitude=limmag),
exptime=exptime, coord=coord, time=time, night=night))
###Output
/Users/lpsinger/Library/Caches/pypoetry/virtualenvs/dorado-sensitivity-RYVm8gWH-py3.8/lib/python3.8/site-packages/astropy/units/quantity.py:479: RuntimeWarning: invalid value encountered in multiply
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
nan
4.999999999999993
4.9999999999999964
4.999999999999995
4.999999999999993
4.999999999999992
4.999999999999999
4.9999999999999964
4.999999999999992
4.999999999999999
4.999999999999994
4.999999999999995
4.9999999999999885
4.999999999999996
5.0
4.999999999999995
4.999999999999985
4.999999999999993
5.000000000000004
5.000000000000001
4.999999999999995
5.000000000000005
5.000000000000002
5.000000000000006
4.999999999999999
4.999999999999995
4.999999999999994
4.999999999999994
5.000000000000004
5.000000000000005
5.000000000000004
4.999999999999997
4.999999999999997
4.999999999999997
5.000000000000001
4.999999999999996
4.999999999999996
4.999999999999996
5.000000000000009
4.999999999999998
5.000000000000003
4.999999999999979
4.999999999999997
5.000000000000004
4.999999999999995
4.9999999999999964
5.000000000000001
4.999999999999995
5.000000000000003
5.0000000000000036
###Markdown
Let's load a dataset.
###Code
text_vocab, tips_vocab, train_iter, val_iter, test_iter = (
amazon_dataset_iters('./data/average_dataset/', device=None)
)
items_count = int(max([i.item.max().cpu().data.numpy() for i in train_iter] +
[i.item.max().cpu().data.numpy() for i in test_iter])[0])
users_count = int(max([i.user.max().cpu().data.numpy() for i in train_iter] +
[i.user.max().cpu().data.numpy() for i in test_iter])[0])
items_count, users_count
###Output
_____no_output_____
###Markdown
Creating the model.
###Code
model = Model(vocabulary_size=len(text_vocab.itos),
items_count=items_count+10,
users_count=users_count+10,
context_size=50,
hidden_size=50,
user_latent_factors_count=50,
item_latent_factors_count=50).cuda()
trainer = Trainer(model)
###Output
_____no_output_____
###Markdown
Start training.
###Code
history = trainer.train(train_iter, n_epochs=1)
###Output
Epochs: 0 / 1, Loss: inf: 100%|██████████| 32/32 [00:03<00:00, 8.84it/s]
###Markdown
Let's decode the outputs.
###Code
batch_sample = next(iter(train_iter))
batch_predict_sample = model.forward(batch_sample.user, batch_sample.item)
beam_size = 22
beam = Beam(beam_size, text_vocab.stoi, cuda=True)
for i in range(5):
beam.advance(torch.exp(batch_predict_sample[2][2, :, :]).data)
results = np.array([beam.get_hyp(i) for i in range(beam_size)])
n_best = 60
scores, ks = beam.sort_best()
hyps = list(zip(*[beam.get_hyp(k) for k in ks[:n_best]]))
print('\n'.join('\t'.join(text_vocab.itos[i] if i < len(text_vocab.itos) else '<!>'
for i in results[k])
for k in range(22)
))
###Output
$start $start $start $start $start
$start $start $start to <pad>
$start $start $start $start a
$start $start $start $start <unk>
$start $start $start $start the
$start $start $start $start $end
$start $start $start $start <pad>
$start $start $start $start great
$start $start $start $start of
$start $start $start $start this
$start $start $start best <pad>
$start $start $start $start ,
$start $start $start $start classic
$start $start $start $start it
$start $start $start $start an
$start $start $start $start not
$start $start $start $start christmas
$start $start $start $start movie
$start $start $start $start !
$start $start $start $start -
$start $start $start $start best
$start $start $start $start to
###Markdown
localtileserver. Learn more: https://localtileserver.banesullivan.com/
###Code
from localtileserver import examples, get_leaflet_tile_layer, TileClient
from ipyleaflet import Map
# First, create a tile server from local raster file
bahamas = TileClient('bahamas_rgb.tif')
# Create ipyleaflet tile layer from that server
bahamas_layer = get_leaflet_tile_layer(bahamas)
# Create ipyleaflet map, add layers, add controls, and display
m = Map(center=bahamas.center(), zoom=8)
m.add_layer(bahamas_layer)
m
# Create a tile server from an raster URL
oam = TileClient('https://oin-hotosm.s3.amazonaws.com/59c66c5223c8440011d7b1e4/0/7ad397c0-bba2-4f98-a08a-931ec3a6e943.tif')
# Create ipyleaflet tile layer from that server
oam_layer = get_leaflet_tile_layer(oam)
# Create ipyleaflet map, add layers, add controls, and display
m = Map(center=oam.center(), zoom=16)
m.add_layer(oam_layer)
m
###Output
_____no_output_____
###Markdown
bitMEX Scraper Example. This is an implementation of the bitMEX Historical Scraper (https://github.com/bmoscon/bitmex_historical_scraper). Columns: Explanation of the columns can be found at https://www.bitmex.com/api/explorer//Trade. - `size`: Amount of contracts traded. - `tickDirection`: "MinusTick": The trade happened at a lower price than the previous one. "PlusTick": This trade happened at a higher price than the previous one. "ZeroPlusTick": The previous trade was PLUSTICK and this one has a price equal to or lower than the previous one. "ZeroMinusTick": The previous trade was MINUSTICK and this one has a price equal to or higher than the previous one. - `homeNotional`: Total value of the trade in the home denomination (e.g. XBT in XBTUSD). - `foreignNotional`: Total value of the trade in the foreign denomination (e.g. USD in XBTUSD).
###Code
# Define a folder to store data to
file_dir = 'D:/data/BITMEX/'
# Load library
from bitmex_scraping import get_bitmex_data, get_bitmex_data_period
###Output
_____no_output_____
###Markdown
Scraping one day
###Code
get_bitmex_data('20141122', file_dir).info(memory_usage='deep')
get_bitmex_data('20141122', file_dir, leanMode=True).info(memory_usage='deep')
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2 entries, 0 to 1
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 timestamp 2 non-null datetime64[ns]
1 symbol 2 non-null category
2 side 2 non-null category
3 size 2 non-null uint32
4 price 2 non-null float64
dtypes: category(2), datetime64[ns](1), float64(1), uint32(1)
memory usage: 295.0 bytes
###Markdown
Large Sizes. If one (or more) of the sizes is too large for a `uint32`, we keep `int64`:
###Code
get_bitmex_data('20210915', file_dir, leanMode=True).info(memory_usage='deep')
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 319079 entries, 0 to 319078
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 timestamp 319079 non-null datetime64[ns]
1 symbol 319079 non-null category
2 side 319079 non-null category
3 size 319079 non-null int64
4 price 319079 non-null float64
dtypes: category(2), datetime64[ns](1), float64(1), int64(1)
memory usage: 7.9 MB
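###Markdown
The dtype rule described above could be sketched roughly as follows (an illustration of the rule, not the scraper's actual code):
###Code
import numpy as np
import pandas as pd

def shrink_size_column(df: pd.DataFrame) -> pd.DataFrame:
    # Downcast 'size' to uint32 only when no value would overflow it;
    # otherwise keep the int64 dtype.
    if df["size"].max() <= np.iinfo(np.uint32).max:
        df["size"] = df["size"].astype(np.uint32)
    return df
###Output
_____no_output_____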
###Markdown
Scraping several days
###Code
df = get_bitmex_data_period(
'20210903',
'20210911',
file_dir,
leanMode=True
)
df.info(memory_usage='deep', null_counts=True)
df['timestamp'].min(), df['timestamp'].max()
###Output
_____no_output_____
###Markdown
Trades
###Code
df.groupby('symbol')['timestamp'].count().sort_values(ascending=False).plot.bar(figsize=(16, 6));
###Output
_____no_output_____
###Markdown
DeepSentiment 0.1.0 This is a Python wrapper around Stanford's CoreNLP project V3.6.0, specifically for Sentiment Analysis. The wrapper's code has been taken from Stanford's deepsentiment project for Sentiment Annotation trees and parsing. The Python code here opens port 9000 on localhost.
###Code
from DeepSentiment import GetSentiment
import time
grab = GetSentiment.DeepSentiment()
grab.run_server()
time.sleep(5)
###Output
_____no_output_____
###Markdown
After the server is running, pass the text to the deepsentiment function; the output comes back in JSON format, which can be handled by the get functions. If you just want the sentiment, then get_sentiment should work. If you are looking for specific scores for the text, then you can make use of the get_sentiment_score function. Don't forget to stop the server if you want to free the port.
###Code
result = grab.deepsentiment("amazing place you have here")
print(grab.get_sentiment(result))
print(grab.get_sentiment_score(result))
grab.stop_server()
###Output
_____no_output_____
###Markdown
Parameters
###Code
params = {
"model": "mnist_model",
"dataset": "mnist",
"batch_size": 5,
"max_nr_batches": 2, # -1 for no early stopping
"attribution_methods": [
"gradientshap",
"deeplift",
"lime",
"saliency",
"smoothgrad",
"integrated_gradients",
"guidedbackprop",
"gray_image",
],
"ensemble_methods": [
"mean",
"variance",
"rbm",
"flipped_rbm",
"rbm_flip_detection",
],
"attribution_processing": "filtering",
"normalization": "min_max",
"scoring_methods": ["insert", "delete", "irof"],
"scores_batch_size": 40,
"package_size": 1,
"irof_segments": 60,
"irof_sigma": 4,
"batches_to_plot": [0],
}
attribution_params = {
"lime": {
"use_slic": True,
"n_slic_segments": 100,
},
"integrated_gradients": {
"baseline": "black",
},
"noise_normal": {},
"deeplift": {},
"gradientshap": {},
"saliency": {},
"occlusion": {},
"smoothgrad": {},
"guidedbackprop": {},
"gray_image": {},
}
rbm_params = {
"batch_size": 15,
"learning_rate": 0.001,
"n_iter": 300,
}
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def plot(raw_images, attributions, ensemble_attributions):
for idx in range(attributions.shape[1]):
# idx = 0 # first image of the batch
orig_image = raw_images[idx].detach().cpu().numpy()
orig_image = orig_image.transpose(1, 2, 0)
# For MNIST remove the color dimension
if orig_image.shape[2] == 1:
orig_image = orig_image.reshape(orig_image.shape[0:2])
images = [orig_image]
# one image for every attribution method
for j, title in enumerate(params["attribution_methods"]):
# Remove randoms step 1
if "noise" in title:
continue
attribution_img = attributions[j][idx].cpu().detach().numpy()
images.append(attribution_img)
# # one image for every ensemble method
for j in range(len(params["ensemble_methods"])):
ensemble_img = ensemble_attributions[j][idx].cpu().detach().numpy()
images.append(ensemble_img)
# Remove the randoms step 2
non_random = np.array(["noise" not in t for t in params["attribution_methods"]])
attr_methods = np.array(params["attribution_methods"])[non_random]
my_plot(
images,
["original"]
+ list(attr_methods)
+ params["ensemble_methods"]
+ ["flipped_rbm"]
)
def my_plot(images, titles):
# make a square
x = int(np.ceil(np.sqrt(len(images))))
fig, axs = plt.subplots(x, x, figsize=(15, 15))
# Remove the NaNs
for i in range(len(images)):
images[i][np.isnan(images[i])] = 0
# Ensure that all attributions get equal weight during plotting
mean_max_value = np.mean([np.max(img / np.sum(img)) for img in images[1:]])
# plot the images
for i, ax in enumerate(axs.flatten()):
if i < len(images):
if i == 0:
# Show the original image
ax.imshow(images[i])
else:
# Plot the attributions and ensure equal plotting
img = images[i] / np.sum(images[i]) / mean_max_value
ax.imshow(img, vmin=0, vmax=1 / 3, cmap="Greens")
ax.set_axis_off()
ax.set_title(titles[i], color="blue", fontdict={"fontsize": 20})
else:
ax.set_visible(False)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
running the code
###Code
# classification model
model = get_model(params["model"], device=device)
# dataset and which images to explain the classification for
dataset = datasets.get_dataset(params["dataset"])
dataloader = torch.utils.data.DataLoader(
dataset, batch_size=1, shuffle=False, num_workers=2
)
dataset_raw = datasets.get_dataset(params["dataset"], normalized=False)
dataloader_raw = torch.utils.data.DataLoader(
dataset_raw, batch_size=1, shuffle=False, num_workers=2
)
img, label = next(iter(dataloader))
img = img.to(device)
label = label.to(device)
raw_img = next(iter(dataloader_raw))[0]
###########################
# attributions #
###########################
attributions = generate_attributions(
img,
label,
model,
params,
attribution_params,
device,
)
zero = torch.Tensor([0]).to(device)
# Set negative values to zero
attributions = torch.max(attributions, zero)
# Make sure we have values in range [0,1]
attributions = normalize(params["normalization"], arr=attributions)
###########################
# ensembles #
###########################
ensemble_attributions = generate_ensembles(
attributions, params["ensemble_methods"], rbm_params, device
)
# make sure it sums to 1
ensemble_attributions = normalize(
params["normalization"], arr=ensemble_attributions
)
plot(raw_img,
attributions,
ensemble_attributions
)
###Output
_____no_output_____
###Markdown
The dataset. Download url: https://cernbox.cern.ch/index.php/s/9Z1XjrS9ofuyA33 The dataset contains 4 blocks of 100k events (all shuffled and balanced 50:50). The following uses the first block for training and the last block for testing. You can adjust this; e.g., to use the first 3 blocks for training, set `train_stop=3`.
###Code
x_train, x_test, y_train, y_test, feature_names = load_data(
"erum_data_classifier_comparison.npz",
train_start=0, train_stop=1, test_start=3, test_stop=4
)
###Output
_____no_output_____
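###Markdown
For example, to train on the first three blocks instead, as mentioned above:
###Code
# Same call as above, only with train_stop=3
x_train, x_test, y_train, y_test, feature_names = load_data(
    "erum_data_classifier_comparison.npz",
    train_start=0, train_stop=3, test_start=3, test_stop=4
)
###Output
_____no_output_____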
###Markdown
The dataset contains 3 arrays: * `x_feature` : particle features * `x_pdg` : pdg ids of each particle * `x_adjacency` : indices of mother particles. This array, if one-hot-encoded, gives the adjacency matrix
###Code
x_train.keys()
###Output
_____no_output_____
###Markdown
The particle lists are padded to a maximum number of 100 particles per event. Missing values are set to 0, except for the `x_adjacency` array, where they are set to -1.
###Code
[x.shape for x in x_train.values()]
###Output
_____no_output_____
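###Markdown
As an illustration of the one-hot encoding of mother indices mentioned above, here is a small sketch (not the model's internal implementation) that builds an adjacency matrix where entry (i, j) = 1 means particle j is the mother of particle i, skipping the padded entries (-1):
###Code
import numpy as np

def adjacency_from_mothers(mother_idx, num_nodes=100):
    # mother_idx: length-num_nodes array with the index of each particle's mother,
    # -1 for padded (missing) particles
    adj = np.zeros((num_nodes, num_nodes), dtype=np.float32)
    for i, m in enumerate(mother_idx):
        if m >= 0:
            adj[i, int(m)] = 1.0
    return adj

# e.g. adjacency_from_mothers(x_train["x_adjacency"][0])
###Output
_____no_output_____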
###Markdown
No scaling is applied for the features, since they are already in a range around 0. However, one could optimize this.
###Code
feature_names
fig, axs = plt.subplots(nrows=2, ncols=4, figsize=(20, 8))
for i, (name, ax) in enumerate(zip(feature_names, np.array(axs).ravel())):
ax.hist(x_train["x_feature"][:,:,i].ravel(), bins=100)
ax.set_title(name)
ax.set_yscale("log")
###Output
_____no_output_____
###Markdown
The pdg ids are mapped to numbers in a continuous range, since we want to feed them through an embedding layer. The `tokenize_dict` can be used to reverse the mapping.
###Code
reverse_tokenize_dict = {v : k for k, v in tokenize_dict.items()}
@np.vectorize
def revert_pdg_tokens(pdg_token):
return reverse_tokenize_dict[pdg_token]
revert_pdg_tokens(x_train["x_pdg"][0])
###Output
_____no_output_____
###Markdown
The reference model. The reference model consists of 2 blocks: one block is a per-particle transformation (including a few simple graph convolution layers), and then, after summing the latent space over all particles, a block of dense layers performs an event-level transformation. Have a look at `model.py`. The mother indices are converted to adjacency matrices on-the-fly.
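To make the structure concrete, below is a minimal sketch of that two-block pattern (per-particle layers, one simple adjacency-based graph convolution, a sum over particles, then event-level dense layers); the layer sizes are made up and this is not the actual `model.py` implementation, which also embeds the pdg ids:
###Code
import tensorflow as tf

def sketch_model(num_nodes, num_features):
    x_feature = tf.keras.Input(shape=(num_nodes, num_features), name="x_feature")
    adjacency = tf.keras.Input(shape=(num_nodes, num_nodes), name="adjacency")
    # Block 1: per-particle transformation with a simple graph convolution
    h = tf.keras.layers.Dense(32, activation="relu")(x_feature)
    h = tf.keras.layers.Lambda(lambda t: tf.matmul(t[0], t[1]))([adjacency, h])
    h = tf.keras.layers.Dense(32, activation="relu")(h)
    # Sum the per-particle latent vectors into a single event-level vector
    h = tf.keras.layers.Lambda(lambda t: tf.reduce_sum(t, axis=1))(h)
    # Block 2: event-level transformation
    h = tf.keras.layers.Dense(64, activation="relu")(h)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(h)
    return tf.keras.Model(inputs=[x_feature, adjacency], outputs=out)
###Output
_____no_output_____
###Markdown
The actual reference model is built and trained below: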
###Code
model = get_model(
num_nodes=x_train["x_adjacency"].shape[1],
num_features=x_train["x_feature"].shape[2],
num_pdg=len(pdgTokens),
)
tf.keras.utils.plot_model(model)
model.summary()
model.compile(optimizer="adam", loss="binary_crossentropy")
history = model.fit(
x_train,
y_train,
shuffle=True,
batch_size=128,
validation_split=0.25,
epochs=25,
)
for k in history.history:
plt.plot(history.history[k], label=k)
plt.legend()
scores = model.predict(x_test, batch_size=1024).ravel()
opts = dict(bins=100, range=(0, 1), alpha=0.5)
plt.hist(scores[y_test==0], **opts)
plt.hist(scores[y_test==1], **opts);
from sklearn.metrics import roc_curve, auc
fpr, tpr, thr = roc_curve(y_test, scores)
plt.plot(fpr, tpr)
plt.grid()
auc(fpr, tpr)
###Output
_____no_output_____
###Markdown
building dataset
###Code
from sklearn.datasets import make_swiss_roll
n_points = 1000
data_s_roll, color = make_swiss_roll(n_points)
data_s_roll = data_s_roll.T
data_s_roll.shape
fig_swiss_roll = plt.figure(figsize = (10,10))
fig_swiss_roll.suptitle("Swiss roll dataset")
ax = fig_swiss_roll.add_subplot(projection='3d')
ax.scatter(data_s_roll[0,:], data_s_roll[1,:], data_s_roll[2,:], c=color,
cmap=plt.cm.Spectral)
ax.view_init(4, -72);
###Output
_____no_output_____
###Markdown
CMDS
###Code
mds = MDS(n_dim=2)
X_low = mds.fit(data_s_roll,method='cmds')
plt.figure(figsize = (12,12))
plt.title('cmds')
plt.scatter(X_low[0,:],X_low[1,:],c = color)
plt.grid()
plt.savefig('cmds.jpg')
plt.show()
###Output
_____no_output_____
###Markdown
ISOMAP
###Code
model = IsoMap(n_dim=2,n_neighbors=50)
X_low = model.fit(data_s_roll)
plt.figure(figsize = (12,12))
plt.title('isomap')
plt.scatter(X_low[0,:],X_low[1,:],c = color)
plt.grid()
plt.savefig('isomap.jpg')
plt.show()
###Output
_____no_output_____
###Markdown
Local Linear Embedding
###Code
model = LocalLinearEmbedding(n_dim=2,n_neighbors=100)
X_low = model.fit(data_s_roll)
plt.figure(figsize = (12,12))
plt.title('local linear embedding')
plt.scatter(X_low[0,:],X_low[1,:],c = color)
plt.grid()
plt.savefig('lle.jpg')
plt.show()
###Output
_____no_output_____
###Markdown
Estimation DID 1. DID using a self-defined cross product
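With time fixed effects absorbed via `category_col = ['time']`, the model being fit is roughly $y_{it} = \beta_0 + \beta_1 x_{1,it} + \beta_2\,\mathrm{treatment}_i + \beta_3\,(\mathrm{post}_t \times \mathrm{treatment}_i) + \lambda_t + \varepsilon_{it}$, and the coefficient $\beta_3$ on the `post*treatment` cross product is the difference-in-differences estimate of the treatment effect.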
###Code
category_col = ['time'] # group variable, can be id or time
consist_col = ['x_1','treatment','post*treatment'] #independent variables
out_col = ['y'] # dependent variable
result0 = ols_high_d_category(data_df,
consist_col,
out_col,
category_col)
result0.summary()
###Output
demean time: 0.0098 s
time used to calculate degree of freedom of category variables: 0.0003 s
degree of freedom of category variables: 10
['x_1', 'treatment', 'post*treatment']
High Dimensional Fixed Effect Regression Results
====================================================================================
Dep. Variable: y R-squared(proj model): 0.0331
No. Observations: 1000 Adj. R-squared(proj model): 0.0213
DoF of residual: 987.0 R-squared(full model): 0.0595
Residual std err: 7.3724 Adj. R-squared(full model): 0.0471
Covariance Type: nonrobust F-statistic(proj model): 11.2609
Cluster Method: no_cluster Prob (F-statistic (proj model)): 2.874e-07
DoF of F-test (proj model): [3.0, 987.0]
F-statistic(full model): 4.8008
Prob (F-statistic (full model)): 3.697e-08
DoF of F-test (full model): [13, 987]
============================================================================================
coef nonrobust std err t P>|t| [0.025 0.975]
--------------------------------------------------------------------------------------------
const -2.90935 0.26908 -10.8123 0.0000 -3.4374 -2.3813
x_1 1.28514 0.23794 5.4010 0.0000 0.8182 1.7521
treatment 3.68267 1.70248 2.1631 0.0308 0.3418 7.0236
post*treatment -3.28275 1.79472 -1.8291 0.0677 -6.8047 0.2392
============================================================================================
###Markdown
Obtain the fixed effects
###Code
getfe(result0)
###Output
_____no_output_____
###Markdown
2. DID using treatment_input
###Code
category_col = ['id','time']
consist_col = ['x_1']
out_col = ['y']
result0 = ols_high_d_category(data_df,
consist_col,
out_col,
category_col,
treatment_input={'treatment_col':'treatment', 'exp_date': 2,'effect': 'group'})
result0.summary()
getfe(result0)
###Output
_____no_output_____
###Markdown
IV
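The four `|`-separated parts of the formula follow the pattern `dependent ~ exogenous | fixed effects | cluster (0 for none) | (endogenous ~ instruments)` (this reading is an assumption based on the call and the ivtest output below): here $y$ is regressed on $x_1$ and $x_2$ with `id` and `time` fixed effects, no clustering, and $x_3$, $x_4$ treated as endogenous with $x_5$, $x_6$ as instruments.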
###Code
#iv
formula = 'y~x_1+x_2|id+time|0|(x_3|x_4~x_5+x_6)'
result = ols_high_d_category(data_df, formula = formula)
result.summary()
ivtest(result)
###Output
Weak IV test with critical values based on 2SLS size
================================================
Cragg-Donald Statistics: 0.000577
number of instrumental variables: 2
number of endogenous variables: 2
=============================================================================
5% 10% 20% 30%
-----------------------------------------------------------------------------
2SLS Size of nominal 5% Wald test 7.0300 4.5800 3.9500 3.6300
-----------------------------------------------------------------------------
H0: Instruments are weak
Over identification test - nonrobust
==============================================
test statistics p values
----------------------------------------------
Sargan Statistics: 0 0
Basmann Statistics: 0 0
----------------------------------------------
Tests of endogeneity
=============================================
test statistics p values
---------------------------------------------
Durbin Statistics: 974.8824 0
---------------------------------------------
H0: variables are exogenous
###Markdown
Example of using the DOpt Fedorov Exchange Algorithm. Algorithm obtained from - **Algorithm AS 295:** A Fedorov Exchange Algorithm for D-Optimal Design - **Author(s):** Alan J. Miller and Nam-Ky Nguyen - **Source:** Journal of the Royal Statistical Society. Series C (Applied Statistics), Vol. 43, No. 4, pp. 669-677, 1994 - **Stable URL:** http://www.jstor.org/stable/2986264 Source code from - http://ftp.uni-bayreuth.de/math/statlib/apstat/ Notes - This is a two-design-variable, quadratic model example problem from Myers and Montgomery, Response Surface Methodology. Load the dopt shared library that provides the interface. Print the documentation and note that Input - $x$ is the 2D numpy array that contains the candidate points to select from - $n$ is the number of points in the final design - $in$ is the number of preselected points that MUST be in the final design (>= 0) - $rstart$ indicates if a random start should be performed; should be True in most cases. If False the user must supply the initial design in $picked$ - $picked$ is a 1D array that contains the preselected point ID's (remember FORTRAN uses 1-based arrays) on input. The first $in$ entries are read for ID's. On output it contains the ID's in x of the final selection Output - $lndet$ is the logarithm of the determinant of the best design - $ifault$ is possible fault codes >- -1 if no full rank starting design is found >- 0 if no error is detected >- 1* if DIM1 < NCAND >- 2* if K < N >- 4* if NRBAR < K(K - 1)/2 >- 8* if K KIN + NBLOCK >- 16* if the sum of block sizes is not equal to N >- 32* if any IN(I) BLKSIZ(I)
###Code
import numpy as np
import pandas as pd
import statsmodels.api as sm
import math as m
import dopt
print( dopt.dopt.__doc__ )
###Output
lndet,ifault = dopt(x,n,in,rstart,picked)
Wrapper for ``dopt``.
Parameters
----------
x : input rank-2 array('d') with bounds (dim1,kin)
n : input int
in : input int
rstart : input int
picked : input rank-1 array('i') with bounds (n)
Returns
-------
lndet : float
ifault : int
###Markdown
Load the sample data set from the Excel spreadsheet and clean it up by removing all duplicate points - 2 design variables, full quadratic model
###Code
# Sample data set from Excel spreadsheet
filename = 'MyersExample.xlsx'
xls = pd.ExcelFile(filename)
df1 = pd.read_excel(xls, 'Sheet2')
# Remove all duplicate rows from the data set - Note that the dataset now only have 8 unique values
df1 = df1.drop_duplicates(subset=['x1','x2'], keep='first')
print(df1)
# Pull out the 3 and 4th columns as the x1 and x2 variables that we will use to create the model matrix from
y = df1.iloc[:, 5].values
x1 = df1.iloc[:, 3].values
x2 = df1.iloc[:, 4].values
# Scale the variables - Seems to work if we scale or not - is typically always a good idea to scale
x1 = (x1 + x1 - x1.min() - x1.max()) / (x1.max()-x1.min())
x2 = (x2 + x2 - x2.min() - x2.max()) / (x2.max()-x2.min())
# Setup the design matrix
x = np.zeros((len(x1), 6), float)
x[:,0] = 1.
x[:,1] = x1
x[:,2] = x2
x[:,3] = x1*x1
x[:,4] = x2*x2
x[:,5] = x1*x2
print(' ')
print (x)
###Output
Observation z1 z2 x1 x2 y
0 1 200.00 15.00 -1.000 -1.000 43
1 2 250.00 15.00 1.000 -1.000 78
2 3 200.00 25.00 -1.000 1.000 69
3 4 250.00 25.00 1.000 1.000 73
4 5 189.65 20.00 -1.414 0.000 48
5 6 260.35 20.00 1.414 0.000 78
6 7 225.00 12.93 0.000 -1.414 65
7 8 225.00 27.07 0.000 1.414 74
8 9 225.00 20.00 0.000 0.000 76
[[ 1. -0.70721358 -0.70721358 0.50015105 0.50015105 0.50015105]
[ 1. 0.70721358 -0.70721358 0.50015105 0.50015105 -0.50015105]
[ 1. -0.70721358 0.70721358 0.50015105 0.50015105 -0.50015105]
[ 1. 0.70721358 0.70721358 0.50015105 0.50015105 0.50015105]
[ 1. -1. 0. 1. 0. -0. ]
[ 1. 1. 0. 1. 0. 0. ]
[ 1. 0. -1. 0. 1. -0. ]
[ 1. 0. 1. 0. 1. 0. ]
[ 1. 0. 0. 0. 0. 0. ]]
###Markdown
Call the interface and print the output and the picked array - We raise an exception when iFault is not 0; this is just good practice - We repeat the DOptimal process 10 times and pick the best design. We do this in an attempt to avoid local minima
###Code
# Number of points to pick - we can pick a max of 9 and a minimum of 6
n = 8
# Array of point ID's that will be picked
picked = np.zeros( n, np.int32 )
# Number of picked points (points to force into the design)
npicked = 0
# Store the best design and the corresponding determinant values
bestDes = np.copy( picked )
bestDet = 0
rstart = True # Look at documentation for 295 - should not really need to change
# Repeat the process 10 times and store the best design
for i in range(0, 10) :
# Make the DOptimal call
lnDet, iFault = dopt.dopt( x, n, npicked, rstart, picked)
# Raise an exception if iFault is not equal to 0
if iFault != 0:
raise ValueError( "Non-zero return code form dopt algorith. iFault = ", iFault )
# Store the best design
if m.fabs(lnDet) > bestDet:
bestDet =lnDet
bestDes = np.copy( picked )
# Print the best design out
print( "Maximum Determinant Found:", m.exp(bestDet) )
print( "\nBest Design Found (indices):\n", np.sort(bestDes) )
print( "\nBest Design Found (variables):\n", x[np.sort(bestDes)-1,1:4] )
###Output
Maximum Determinant Found: 48.07337205914986
Best Design Found (indices):
[1 2 3 4 5 6 7 9]
Best Design Found (variables):
[[-0.70721358 -0.70721358 0.50015105]
[ 0.70721358 -0.70721358 0.50015105]
[-0.70721358 0.70721358 0.50015105]
[ 0.70721358 0.70721358 0.50015105]
[-1. 0. 1. ]
[ 1. 0. 1. ]
[ 0. -1. 0. ]
[ 0. 0. 0. ]]
###Markdown
Now solve the least squares problem
###Code
# Extract the DOptimum x and y values
y_opt = y[np.sort(bestDes)-1]
x_opt = x[np.sort(bestDes)-1]
# Setup the Statsmodels model and perform the fit
modOLS = sm.OLS( y_opt, x_opt )
resOLS = modOLS.fit()
# Print the summary of the ordinary least squares fit
print( resOLS.summary() )
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.995
Model: OLS Adj. R-squared: 0.982
Method: Least Squares F-statistic: 78.49
Date: Tue, 03 Sep 2019 Prob (F-statistic): 0.0126
Time: 14:10:00 Log-Likelihood: -10.575
No. Observations: 8 AIC: 33.15
Df Residuals: 2 BIC: 33.63
Df Model: 5
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 76.0008 1.815 41.871 0.001 68.191 83.811
x1 14.3932 0.907 15.861 0.004 10.489 18.298
x2 6.7706 1.171 5.780 0.029 1.730 11.811
x3 -13.6538 2.160 -6.321 0.024 -22.949 -4.359
x4 -5.5362 2.401 -2.306 0.148 -15.865 4.793
x5 -15.4953 1.815 -8.539 0.013 -23.303 -7.688
==============================================================================
Omnibus: 0.029 Durbin-Watson: 1.256
Prob(Omnibus): 0.986 Jarque-Bera (JB): 0.261
Skew: 0.000 Prob(JB): 0.878
Kurtosis: 2.115 Cond. No. 6.09
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
[![Build Status](https://travis-ci.org/devrandom/python-blockstack.svg?branch=master)](https://travis-ci.org/devrandom/python-blockstack) Blockstack API (https://blockstack.io/) Examples
###Code
from blockstack.client import BlockstackClient
# Substitute your API token
token = 'eyJraWQiOm51bGwsImFsZyI6IkhTMjU2In0.eyJpc3MiOiJibG9ja3N0YWNrIiwiYXVkIjoiYmxvY2tzdGFjayIsImV4cCI6MTc0ODI4MzUzMSwianRpIjoiZUNJTzZRdHhiclI1UFdOdmV3YXZjdyIsImlhdCI6MTQzMjkyMzUzMSwibmJmIjoxNDMyOTIzNDExLCJzdWIiOiJtaXJvbiIsImFwaSI6InRydWUifQ.o_IuoWQbD7x49MXyN-OqeApg1OK8MftFJy1JJpiOAtI'
# Substitute https://XXX.blockstack.io/api
endpoint = 'http://localhost:8080/api'
client = BlockstackClient(base_uri=endpoint, token=token)
alice = client.wallets.get('Blue')
bob = client.wallets.get('Pink')
oracle_a = client.oracles.get('Blue')
oracle_b = client.oracles.get('Pink')
print([k for k in alice.__dict__.keys()])
print(alice.currentAddress)
print(alice.currentHeight)
from codecs import encode
alice_txs = alice.transactions
bob_txs = bob.transactions
print(len([t.id for t in alice_txs.list()]))
partial = alice_txs.propose(atomic=True, asset='TRY', address=bob.assetAddress, amount=10000)
complete = bob_txs.create(atomic=True, asset='USD', address=alice.assetAddress, amount=100,
metadata=encode(b'foobar', 'hex').decode('utf8'), # Note: best practice is to use a hash
transaction=partial['transaction'])
signed1 = oracle_a.transactions.sign(complete.id, complete.transaction)
committed = oracle_b.transactions.broadcast(complete.id, signed1.transaction) # sign and broadcast
tx = alice_txs.get(committed.id)
print(tx.id)
print(tx.changes)
print([(a.name, a.amount) for a in alice.assets.list()])
###Output
[('CZK', 60000000000), ('RUB', 60000000000), ('UNKNOWN', 0), ('USD', 60000000000), ('Bitcoin', 0), ('CNH', 60000000000), ('GOOG', 60000000000), ('PLN', 60000000000), ('TRY', 60000000000), ('EUR', 60000000000), ('AAPL', 60000000000), ('HUF', 60000000000)]
###Markdown
Normal small model
###Code
inp = tensorflow.keras.layers.Input(shape=(32, 32, 3))
x = tensorflow.keras.layers.Conv2D(filters=64, kernel_size=3, padding='same', strides=2)(inp)
x = tensorflow.keras.layers.Activation('relu')(x)
x = tensorflow.keras.layers.Conv2D(filters=128, kernel_size=3, padding='same', strides=2)(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = tensorflow.keras.layers.Conv2D(filters=128, kernel_size=3, strides=2)(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = tensorflow.keras.layers.Conv2D(filters=256, kernel_size=3)(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = tensorflow.keras.layers.Flatten()(x)
x = tensorflow.keras.layers.Dense(128, activation='relu')(x)
out = tensorflow.keras.layers.Dense(10, activation='softmax')(x)
model = tensorflow.keras.models.Model(inputs=inp, outputs=out)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(train_part_X, train_part_Y, batch_size=64, epochs=50, validation_data=(testX, testY), verbose=1)
trainX.shape, trainY.shape, testX.shape, testY.shape
###Output
_____no_output_____
###Markdown
teacher model
###Code
image_input = tensorflow.keras.layers.Input(shape=(32, 32, 3))
pre_trained_vgg = tensorflow.keras.applications.vgg19.VGG19(weights='imagenet', input_shape=(32, 32, 3), include_top=False)
pre_trained_vgg_model = tensorflow.keras.models.Model(inputs=pre_trained_vgg.input, outputs=pre_trained_vgg.get_layer('block5_pool').output)
pre_trained_image_feautures = pre_trained_vgg_model(image_input)
custom_vgg = tensorflow.keras.models.Model(inputs=image_input, outputs=pre_trained_image_feautures)
print (custom_vgg.summary())
new_full_y = custom_vgg.predict(train_full_X)
new_full_y.shape
new_test_y = custom_vgg.predict(testX)
new_test_y.shape
new_part_y = custom_vgg.predict(train_part_X)
new_part_y.shape
###Output
_____no_output_____
###Markdown
transfer learning results
###Code
inp = tensorflow.keras.layers.Input(shape=(1, 1, 512))
x = tensorflow.keras.layers.Flatten()(inp)
out = tensorflow.keras.layers.Dense(10, activation='softmax')(x)
transfer = tensorflow.keras.models.Model(inputs=inp, outputs=out)
transfer.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
transfer.summary()
transfer.fit(new_part_y, train_part_Y, batch_size=64, epochs=100, validation_data=(new_test_y, testY))
###Output
Train on 500 samples, validate on 10000 samples
Epoch 1/100
500/500 [==============================] - 0s 278us/sample - loss: 0.4644 - accuracy: 0.9320 - val_loss: 1.7893 - val_accuracy: 0.4245
Epoch 2/100
500/500 [==============================] - 0s 239us/sample - loss: 0.4620 - accuracy: 0.9360 - val_loss: 1.7888 - val_accuracy: 0.4236
Epoch 3/100
500/500 [==============================] - 0s 225us/sample - loss: 0.4604 - accuracy: 0.9440 - val_loss: 1.7903 - val_accuracy: 0.4243
Epoch 4/100
500/500 [==============================] - 0s 242us/sample - loss: 0.4577 - accuracy: 0.9440 - val_loss: 1.7902 - val_accuracy: 0.4251
Epoch 5/100
500/500 [==============================] - 0s 231us/sample - loss: 0.4557 - accuracy: 0.9360 - val_loss: 1.7912 - val_accuracy: 0.4254
Epoch 6/100
500/500 [==============================] - 0s 219us/sample - loss: 0.4535 - accuracy: 0.9440 - val_loss: 1.7952 - val_accuracy: 0.4239
Epoch 7/100
500/500 [==============================] - 0s 248us/sample - loss: 0.4514 - accuracy: 0.9440 - val_loss: 1.7949 - val_accuracy: 0.4246
Epoch 8/100
500/500 [==============================] - 0s 206us/sample - loss: 0.4492 - accuracy: 0.9440 - val_loss: 1.7974 - val_accuracy: 0.4245
Epoch 9/100
500/500 [==============================] - 0s 207us/sample - loss: 0.4470 - accuracy: 0.9480 - val_loss: 1.7981 - val_accuracy: 0.4231
Epoch 10/100
500/500 [==============================] - 0s 185us/sample - loss: 0.4453 - accuracy: 0.9460 - val_loss: 1.7988 - val_accuracy: 0.4246
Epoch 11/100
500/500 [==============================] - 0s 195us/sample - loss: 0.4434 - accuracy: 0.9500 - val_loss: 1.8004 - val_accuracy: 0.4237
Epoch 12/100
500/500 [==============================] - 0s 202us/sample - loss: 0.4406 - accuracy: 0.9480 - val_loss: 1.8013 - val_accuracy: 0.4235
Epoch 13/100
500/500 [==============================] - 0s 194us/sample - loss: 0.4390 - accuracy: 0.9500 - val_loss: 1.8022 - val_accuracy: 0.4242
Epoch 14/100
500/500 [==============================] - 0s 198us/sample - loss: 0.4371 - accuracy: 0.9480 - val_loss: 1.8037 - val_accuracy: 0.4252
Epoch 15/100
500/500 [==============================] - 0s 239us/sample - loss: 0.4357 - accuracy: 0.9480 - val_loss: 1.8089 - val_accuracy: 0.4236
Epoch 16/100
500/500 [==============================] - 0s 231us/sample - loss: 0.4338 - accuracy: 0.9460 - val_loss: 1.8073 - val_accuracy: 0.4241
Epoch 17/100
500/500 [==============================] - 0s 223us/sample - loss: 0.4320 - accuracy: 0.9480 - val_loss: 1.8072 - val_accuracy: 0.4239
Epoch 18/100
500/500 [==============================] - 0s 216us/sample - loss: 0.4296 - accuracy: 0.9480 - val_loss: 1.8115 - val_accuracy: 0.4236
Epoch 19/100
500/500 [==============================] - 0s 222us/sample - loss: 0.4276 - accuracy: 0.9480 - val_loss: 1.8121 - val_accuracy: 0.4230
Epoch 20/100
500/500 [==============================] - 0s 225us/sample - loss: 0.4258 - accuracy: 0.9500 - val_loss: 1.8122 - val_accuracy: 0.4229
Epoch 21/100
500/500 [==============================] - 0s 218us/sample - loss: 0.4239 - accuracy: 0.9500 - val_loss: 1.8131 - val_accuracy: 0.4239
Epoch 22/100
500/500 [==============================] - 0s 238us/sample - loss: 0.4226 - accuracy: 0.9520 - val_loss: 1.8147 - val_accuracy: 0.4232
Epoch 23/100
500/500 [==============================] - 0s 283us/sample - loss: 0.4192 - accuracy: 0.9500 - val_loss: 1.8164 - val_accuracy: 0.4233
Epoch 24/100
500/500 [==============================] - 0s 205us/sample - loss: 0.4175 - accuracy: 0.9520 - val_loss: 1.8214 - val_accuracy: 0.4229
Epoch 25/100
500/500 [==============================] - 0s 202us/sample - loss: 0.4156 - accuracy: 0.9520 - val_loss: 1.8235 - val_accuracy: 0.4227
Epoch 26/100
500/500 [==============================] - 0s 185us/sample - loss: 0.4145 - accuracy: 0.9480 - val_loss: 1.8216 - val_accuracy: 0.4239
Epoch 27/100
500/500 [==============================] - 0s 195us/sample - loss: 0.4126 - accuracy: 0.9480 - val_loss: 1.8205 - val_accuracy: 0.4236
Epoch 28/100
500/500 [==============================] - 0s 194us/sample - loss: 0.4108 - accuracy: 0.9500 - val_loss: 1.8245 - val_accuracy: 0.4239
Epoch 29/100
500/500 [==============================] - 0s 192us/sample - loss: 0.4089 - accuracy: 0.9480 - val_loss: 1.8283 - val_accuracy: 0.4227
Epoch 30/100
500/500 [==============================] - 0s 201us/sample - loss: 0.4067 - accuracy: 0.9500 - val_loss: 1.8296 - val_accuracy: 0.4229
Epoch 31/100
500/500 [==============================] - 0s 198us/sample - loss: 0.4043 - accuracy: 0.9520 - val_loss: 1.8276 - val_accuracy: 0.4229
Epoch 32/100
500/500 [==============================] - 0s 184us/sample - loss: 0.4035 - accuracy: 0.9520 - val_loss: 1.8279 - val_accuracy: 0.4225
Epoch 33/100
500/500 [==============================] - 0s 202us/sample - loss: 0.4012 - accuracy: 0.9520 - val_loss: 1.8308 - val_accuracy: 0.4220
Epoch 34/100
500/500 [==============================] - 0s 230us/sample - loss: 0.3990 - accuracy: 0.9520 - val_loss: 1.8335 - val_accuracy: 0.4219
Epoch 35/100
500/500 [==============================] - 0s 207us/sample - loss: 0.3972 - accuracy: 0.9520 - val_loss: 1.8352 - val_accuracy: 0.4224
Epoch 36/100
500/500 [==============================] - 0s 192us/sample - loss: 0.3961 - accuracy: 0.9520 - val_loss: 1.8357 - val_accuracy: 0.4217
Epoch 37/100
500/500 [==============================] - 0s 191us/sample - loss: 0.3941 - accuracy: 0.9540 - val_loss: 1.8360 - val_accuracy: 0.4228
Epoch 38/100
500/500 [==============================] - 0s 188us/sample - loss: 0.3922 - accuracy: 0.9540 - val_loss: 1.8384 - val_accuracy: 0.4231
Epoch 39/100
500/500 [==============================] - 0s 189us/sample - loss: 0.3912 - accuracy: 0.9520 - val_loss: 1.8429 - val_accuracy: 0.4219
Epoch 40/100
500/500 [==============================] - 0s 205us/sample - loss: 0.3889 - accuracy: 0.9520 - val_loss: 1.8410 - val_accuracy: 0.4225
Epoch 41/100
500/500 [==============================] - 0s 200us/sample - loss: 0.3875 - accuracy: 0.9540 - val_loss: 1.8423 - val_accuracy: 0.4223
Epoch 42/100
500/500 [==============================] - 0s 206us/sample - loss: 0.3857 - accuracy: 0.9540 - val_loss: 1.8442 - val_accuracy: 0.4223
Epoch 43/100
500/500 [==============================] - 0s 213us/sample - loss: 0.3835 - accuracy: 0.9580 - val_loss: 1.8452 - val_accuracy: 0.4220
Epoch 44/100
500/500 [==============================] - 0s 218us/sample - loss: 0.3819 - accuracy: 0.9560 - val_loss: 1.8480 - val_accuracy: 0.4226
Epoch 45/100
500/500 [==============================] - 0s 205us/sample - loss: 0.3803 - accuracy: 0.9540 - val_loss: 1.8492 - val_accuracy: 0.4232
Epoch 46/100
500/500 [==============================] - 0s 201us/sample - loss: 0.3786 - accuracy: 0.9560 - val_loss: 1.8480 - val_accuracy: 0.4227
Epoch 47/100
500/500 [==============================] - 0s 200us/sample - loss: 0.3773 - accuracy: 0.9580 - val_loss: 1.8522 - val_accuracy: 0.4223
Epoch 48/100
500/500 [==============================] - 0s 210us/sample - loss: 0.3756 - accuracy: 0.9580 - val_loss: 1.8537 - val_accuracy: 0.4199
Epoch 49/100
500/500 [==============================] - 0s 201us/sample - loss: 0.3735 - accuracy: 0.9600 - val_loss: 1.8557 - val_accuracy: 0.4218
Epoch 50/100
500/500 [==============================] - 0s 192us/sample - loss: 0.3729 - accuracy: 0.9560 - val_loss: 1.8548 - val_accuracy: 0.4218
Epoch 51/100
500/500 [==============================] - 0s 219us/sample - loss: 0.3710 - accuracy: 0.9580 - val_loss: 1.8556 - val_accuracy: 0.4218
Epoch 52/100
500/500 [==============================] - 0s 217us/sample - loss: 0.3692 - accuracy: 0.9580 - val_loss: 1.8588 - val_accuracy: 0.4218
Epoch 53/100
500/500 [==============================] - 0s 211us/sample - loss: 0.3669 - accuracy: 0.9560 - val_loss: 1.8619 - val_accuracy: 0.4218
Epoch 54/100
500/500 [==============================] - 0s 196us/sample - loss: 0.3660 - accuracy: 0.9580 - val_loss: 1.8607 - val_accuracy: 0.4209
Epoch 55/100
500/500 [==============================] - 0s 203us/sample - loss: 0.3639 - accuracy: 0.9580 - val_loss: 1.8618 - val_accuracy: 0.4223
Epoch 56/100
###Markdown
student
###Code
inp = tensorflow.keras.layers.Input(shape=(32, 32, 3))
x = tensorflow.keras.layers.Flatten()(inp)
#x = tensorflow.keras.layers.Dense(1024, activation='relu')(x)
#x = tensorflow.keras.layers.Dense(784, activation='relu')(x)
x = tensorflow.keras.layers.Dense(128, activation='relu')(x)
x = tensorflow.keras.layers.Dense(512, activation='relu')(x)
features = tensorflow.keras.layers.Reshape((1, 1, 512))(x)
student = tensorflow.keras.models.Model(inputs = inp, outputs = features)
student.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
student.summary()
student.fit(train_full_X, new_full_y, batch_size=64, epochs=50, validation_data=(testX, new_test_y))
student_part_y = student.predict(train_part_X)
student_test_y = student.predict(testX)
student_part_y.shape, student_test_y.shape
###Output
/Users/k15/anaconda/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:2325: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
warnings.warn('`Model.state_updates` will be removed in a future version. '
###Markdown
transfer learning with student
###Code
inp = tensorflow.keras.layers.Input(shape=(1, 1, 512))
x = tensorflow.keras.layers.Flatten()(inp)
out = tensorflow.keras.layers.Dense(10, activation='softmax')(x)
transfer = tensorflow.keras.models.Model(inputs=inp, outputs=out)
transfer.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
transfer.summary()
transfer.fit(student_part_y, train_part_Y, batch_size=64, epochs=100, validation_data=(student_test_y, testY))
###Output
Train on 500 samples, validate on 10000 samples
Epoch 1/100
500/500 [==============================] - 0s 282us/sample - loss: 1.4489 - accuracy: 0.5220 - val_loss: 1.8483 - val_accuracy: 0.3410
Epoch 2/100
500/500 [==============================] - 0s 262us/sample - loss: 1.4472 - accuracy: 0.5160 - val_loss: 1.8508 - val_accuracy: 0.3405
Epoch 3/100
500/500 [==============================] - 0s 240us/sample - loss: 1.4492 - accuracy: 0.5160 - val_loss: 1.8513 - val_accuracy: 0.3378
Epoch 4/100
500/500 [==============================] - 0s 235us/sample - loss: 1.4477 - accuracy: 0.5180 - val_loss: 1.8520 - val_accuracy: 0.3405
Epoch 5/100
500/500 [==============================] - 0s 240us/sample - loss: 1.4478 - accuracy: 0.5140 - val_loss: 1.8512 - val_accuracy: 0.3423
Epoch 6/100
500/500 [==============================] - 0s 256us/sample - loss: 1.4484 - accuracy: 0.5120 - val_loss: 1.8511 - val_accuracy: 0.3420
Epoch 7/100
500/500 [==============================] - 0s 243us/sample - loss: 1.4467 - accuracy: 0.5200 - val_loss: 1.8500 - val_accuracy: 0.3423
Epoch 8/100
500/500 [==============================] - 0s 215us/sample - loss: 1.4460 - accuracy: 0.5320 - val_loss: 1.8499 - val_accuracy: 0.3396
Epoch 9/100
500/500 [==============================] - 0s 219us/sample - loss: 1.4446 - accuracy: 0.5100 - val_loss: 1.8512 - val_accuracy: 0.3395
Epoch 10/100
500/500 [==============================] - 0s 269us/sample - loss: 1.4442 - accuracy: 0.5080 - val_loss: 1.8513 - val_accuracy: 0.3412
Epoch 11/100
500/500 [==============================] - 0s 269us/sample - loss: 1.4445 - accuracy: 0.5120 - val_loss: 1.8527 - val_accuracy: 0.3426
Epoch 12/100
500/500 [==============================] - 0s 210us/sample - loss: 1.4439 - accuracy: 0.5160 - val_loss: 1.8507 - val_accuracy: 0.3418
Epoch 13/100
500/500 [==============================] - 0s 214us/sample - loss: 1.4446 - accuracy: 0.5120 - val_loss: 1.8521 - val_accuracy: 0.3419
Epoch 14/100
500/500 [==============================] - 0s 210us/sample - loss: 1.4455 - accuracy: 0.5220 - val_loss: 1.8512 - val_accuracy: 0.3381
Epoch 15/100
500/500 [==============================] - 0s 237us/sample - loss: 1.4444 - accuracy: 0.5120 - val_loss: 1.8518 - val_accuracy: 0.3411
Epoch 16/100
500/500 [==============================] - 0s 248us/sample - loss: 1.4431 - accuracy: 0.5120 - val_loss: 1.8526 - val_accuracy: 0.3411
Epoch 17/100
500/500 [==============================] - 0s 223us/sample - loss: 1.4417 - accuracy: 0.5080 - val_loss: 1.8539 - val_accuracy: 0.3418
Epoch 18/100
500/500 [==============================] - 0s 232us/sample - loss: 1.4423 - accuracy: 0.5140 - val_loss: 1.8530 - val_accuracy: 0.3402
Epoch 19/100
500/500 [==============================] - 0s 229us/sample - loss: 1.4439 - accuracy: 0.5140 - val_loss: 1.8530 - val_accuracy: 0.3403
Epoch 20/100
500/500 [==============================] - 0s 217us/sample - loss: 1.4414 - accuracy: 0.5080 - val_loss: 1.8525 - val_accuracy: 0.3397
Epoch 21/100
500/500 [==============================] - 0s 205us/sample - loss: 1.4413 - accuracy: 0.5160 - val_loss: 1.8519 - val_accuracy: 0.3405
Epoch 22/100
500/500 [==============================] - 0s 207us/sample - loss: 1.4428 - accuracy: 0.5220 - val_loss: 1.8516 - val_accuracy: 0.3386
Epoch 23/100
500/500 [==============================] - 0s 209us/sample - loss: 1.4409 - accuracy: 0.5200 - val_loss: 1.8531 - val_accuracy: 0.3412
Epoch 24/100
500/500 [==============================] - 0s 210us/sample - loss: 1.4393 - accuracy: 0.5060 - val_loss: 1.8552 - val_accuracy: 0.3428
Epoch 25/100
500/500 [==============================] - 0s 229us/sample - loss: 1.4396 - accuracy: 0.5060 - val_loss: 1.8540 - val_accuracy: 0.3413
Epoch 26/100
500/500 [==============================] - 0s 204us/sample - loss: 1.4402 - accuracy: 0.5220 - val_loss: 1.8533 - val_accuracy: 0.3414
Epoch 27/100
500/500 [==============================] - 0s 213us/sample - loss: 1.4437 - accuracy: 0.5120 - val_loss: 1.8543 - val_accuracy: 0.3394
Epoch 28/100
500/500 [==============================] - 0s 211us/sample - loss: 1.4401 - accuracy: 0.5120 - val_loss: 1.8520 - val_accuracy: 0.3388
Epoch 29/100
500/500 [==============================] - 0s 239us/sample - loss: 1.4395 - accuracy: 0.5180 - val_loss: 1.8539 - val_accuracy: 0.3402
Epoch 30/100
500/500 [==============================] - 0s 209us/sample - loss: 1.4399 - accuracy: 0.5160 - val_loss: 1.8522 - val_accuracy: 0.3399
Epoch 31/100
500/500 [==============================] - 0s 208us/sample - loss: 1.4372 - accuracy: 0.5220 - val_loss: 1.8543 - val_accuracy: 0.3422
Epoch 32/100
500/500 [==============================] - 0s 218us/sample - loss: 1.4396 - accuracy: 0.5160 - val_loss: 1.8523 - val_accuracy: 0.3394
Epoch 33/100
500/500 [==============================] - 0s 209us/sample - loss: 1.4364 - accuracy: 0.5080 - val_loss: 1.8568 - val_accuracy: 0.3403
Epoch 34/100
500/500 [==============================] - 0s 223us/sample - loss: 1.4358 - accuracy: 0.5160 - val_loss: 1.8545 - val_accuracy: 0.3414
Epoch 35/100
500/500 [==============================] - 0s 213us/sample - loss: 1.4362 - accuracy: 0.5120 - val_loss: 1.8528 - val_accuracy: 0.3407
Epoch 36/100
500/500 [==============================] - 0s 217us/sample - loss: 1.4358 - accuracy: 0.5240 - val_loss: 1.8538 - val_accuracy: 0.3413
Epoch 37/100
500/500 [==============================] - 0s 221us/sample - loss: 1.4364 - accuracy: 0.5160 - val_loss: 1.8579 - val_accuracy: 0.3415
Epoch 38/100
500/500 [==============================] - 0s 226us/sample - loss: 1.4363 - accuracy: 0.5140 - val_loss: 1.8525 - val_accuracy: 0.3419
Epoch 39/100
500/500 [==============================] - 0s 213us/sample - loss: 1.4381 - accuracy: 0.5120 - val_loss: 1.8577 - val_accuracy: 0.3428
Epoch 40/100
500/500 [==============================] - 0s 232us/sample - loss: 1.4345 - accuracy: 0.5200 - val_loss: 1.8548 - val_accuracy: 0.3403
Epoch 41/100
500/500 [==============================] - 0s 249us/sample - loss: 1.4333 - accuracy: 0.5140 - val_loss: 1.8528 - val_accuracy: 0.3395
Epoch 42/100
500/500 [==============================] - 0s 245us/sample - loss: 1.4338 - accuracy: 0.5260 - val_loss: 1.8528 - val_accuracy: 0.3403
Epoch 43/100
500/500 [==============================] - 0s 223us/sample - loss: 1.4345 - accuracy: 0.5140 - val_loss: 1.8575 - val_accuracy: 0.3421
Epoch 44/100
500/500 [==============================] - 0s 222us/sample - loss: 1.4333 - accuracy: 0.5140 - val_loss: 1.8569 - val_accuracy: 0.3423
Epoch 45/100
500/500 [==============================] - 0s 210us/sample - loss: 1.4321 - accuracy: 0.5160 - val_loss: 1.8561 - val_accuracy: 0.3390
Epoch 46/100
500/500 [==============================] - 0s 216us/sample - loss: 1.4322 - accuracy: 0.5120 - val_loss: 1.8538 - val_accuracy: 0.3398
Epoch 47/100
500/500 [==============================] - 0s 229us/sample - loss: 1.4320 - accuracy: 0.5200 - val_loss: 1.8555 - val_accuracy: 0.3410
Epoch 48/100
500/500 [==============================] - 0s 218us/sample - loss: 1.4326 - accuracy: 0.5080 - val_loss: 1.8546 - val_accuracy: 0.3419
Epoch 49/100
500/500 [==============================] - 0s 216us/sample - loss: 1.4318 - accuracy: 0.5220 - val_loss: 1.8572 - val_accuracy: 0.3416
Epoch 50/100
500/500 [==============================] - 0s 208us/sample - loss: 1.4305 - accuracy: 0.5200 - val_loss: 1.8566 - val_accuracy: 0.3413
Epoch 51/100
500/500 [==============================] - 0s 202us/sample - loss: 1.4324 - accuracy: 0.5220 - val_loss: 1.8555 - val_accuracy: 0.3383
Epoch 52/100
500/500 [==============================] - 0s 206us/sample - loss: 1.4300 - accuracy: 0.5240 - val_loss: 1.8535 - val_accuracy: 0.3409
Epoch 53/100
500/500 [==============================] - 0s 209us/sample - loss: 1.4319 - accuracy: 0.5120 - val_loss: 1.8571 - val_accuracy: 0.3391
Epoch 54/100
500/500 [==============================] - 0s 212us/sample - loss: 1.4294 - accuracy: 0.5080 - val_loss: 1.8594 - val_accuracy: 0.3411
Epoch 55/100
500/500 [==============================] - 0s 218us/sample - loss: 1.4300 - accuracy: 0.5160 - val_loss: 1.8557 - val_accuracy: 0.3399
Epoch 56/100
###Markdown
Example usage of the Yin-Yang dataset
###Code
import torch
import numpy as np
import matplotlib.pyplot as plt
from dataset import ClassificationTask
from torch.utils.data import DataLoader
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup datasets (training, validation and test set)
###Code
dataset_train = ClassificationTask(size=5000, seed=42)
dataset_validation = ClassificationTask(size=1000, seed=41)
dataset_test = ClassificationTask(size=1000, seed=40)
###Output
_____no_output_____
###Markdown
Setup PyTorch dataloaders
###Code
batchsize_train = 20
batchsize_eval = len(dataset_test)
train_loader = DataLoader(dataset_train, batch_size=batchsize_train, shuffle=True)
val_loader = DataLoader(dataset_validation, batch_size=batchsize_eval, shuffle=True)
test_loader = DataLoader(dataset_test, batch_size=batchsize_eval, shuffle=False)
###Output
_____no_output_____
###Markdown
Plot data
###Code
fig, axes = plt.subplots(ncols=3, sharey=True, figsize=(15, 8))
titles = ['Training set', 'Validation set', 'Test set']
for i, loader in enumerate([train_loader, val_loader, test_loader]):
axes[i].set_title(titles[i])
axes[i].set_aspect('equal', adjustable='box')
xs = []
ys = []
cs = []
for batch, batch_labels in loader:
for j, item in enumerate(batch):
x1, y1, x2, y2 = item
c = int(np.where(batch_labels[j] == 1)[0])
xs.append(x1)
ys.append(y1)
cs.append(c)
xs = np.array(xs)
ys = np.array(ys)
cs = np.array(cs)
axes[i].scatter(xs[cs == 0], ys[cs == 0], color='C0', edgecolor='k', alpha=0.7)
axes[i].scatter(xs[cs == 1], ys[cs == 1], color='C1', edgecolor='k', alpha=0.7)
axes[i].scatter(xs[cs == 2], ys[cs == 2], color='C2', edgecolor='k', alpha=0.7)
axes[i].set_xlabel('x1')
if i == 0:
axes[i].set_ylabel('y1')
###Output
_____no_output_____
###Markdown
Setup ANN
###Code
class Net(torch.nn.Module):
def __init__(self, network_layout):
super(Net, self).__init__()
self.n_inputs = network_layout['n_inputs']
self.n_layers = network_layout['n_layers']
self.layer_sizes = network_layout['layer_sizes']
self.layers = torch.nn.ModuleList()
layer = torch.nn.Linear(self.n_inputs, self.layer_sizes[0], bias=True)
self.layers.append(layer)
for i in range(self.n_layers-1):
layer = torch.nn.Linear(self.layer_sizes[i], self.layer_sizes[i+1], bias=True)
self.layers.append(layer)
return
def forward(self, x):
x_hidden = []
for i in range(self.n_layers):
x = self.layers[i](x)
if not i == (self.n_layers-1):
relu = torch.nn.ReLU()
x = relu(x)
x_hidden.append(x)
return x
torch.manual_seed(12345)
# ANN with one hidden layer (with 120 neurons)
network_layout = {
'n_inputs': 4,
'n_layers': 2,
'layer_sizes': [120, 3],
}
net = Net(network_layout)
# Linear classifier for reference
shallow_network_layout = {
'n_inputs': 4,
'n_layers': 1,
'layer_sizes': [3],
}
linear_classifier = Net(shallow_network_layout)
###Output
_____no_output_____
###Markdown
Train ANN
###Code
# used to determine validation accuracy after each epoch in training
def validation_step(net, criterion, loader):
with torch.no_grad():
num_correct = 0
num_shown = 0
for j, data in enumerate(loader):
inputs, labels = data
# need to convert to float32 because data is in float64
inputs = inputs.float()
outputs = net(inputs)
winner = outputs.argmax(1)
num_correct += len(outputs[winner == labels.argmax(1)])
num_shown += len(labels)
accuracy = float(num_correct) / num_shown
return accuracy
# set training parameters
n_epochs = 500
learning_rate = 0.1
val_accuracies = []
train_accuracies = []
# setup loss and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)
# train for n_epochs
for epoch in range(n_epochs):
val_acc = validation_step(net, criterion, val_loader)
if epoch % 25 == 0:
print('Validation accuracy after {0} epochs: {1}'.format(epoch, val_acc))
val_accuracies.append(val_acc)
num_correct = 0
num_shown = 0
for j, data in enumerate(train_loader):
inputs, labels = data
# need to convert to float32 because data is in float64
inputs = inputs.float()
labels = labels.float()
# zero the parameter gradients
optimizer.zero_grad()
# forward pass
outputs = net(inputs)
winner = outputs.argmax(1)
num_correct += len(outputs[outputs.argmax(1) == labels.argmax(1)])
num_shown += len(labels)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
accuracy = float(num_correct) / num_shown
train_accuracies.append(accuracy)
# after training evaluate on test set
test_acc = validation_step(net, criterion, test_loader)
print('#############################')
print('Final test accuracy:', test_acc)
print('#############################')
###Output
Validation accuracy after 0 epochs: 0.316
Validation accuracy after 25 epochs: 0.834
Validation accuracy after 50 epochs: 0.883
Validation accuracy after 75 epochs: 0.946
Validation accuracy after 100 epochs: 0.952
Validation accuracy after 125 epochs: 0.942
Validation accuracy after 150 epochs: 0.958
Validation accuracy after 175 epochs: 0.927
Validation accuracy after 200 epochs: 0.959
Validation accuracy after 225 epochs: 0.951
Validation accuracy after 250 epochs: 0.948
Validation accuracy after 275 epochs: 0.952
Validation accuracy after 300 epochs: 0.963
Validation accuracy after 325 epochs: 0.978
Validation accuracy after 350 epochs: 0.967
Validation accuracy after 375 epochs: 0.948
Validation accuracy after 400 epochs: 0.96
Validation accuracy after 425 epochs: 0.952
Validation accuracy after 450 epochs: 0.963
Validation accuracy after 475 epochs: 0.953
#############################
Final test accuracy: 0.978
#############################
###Markdown
Plot training results
###Code
plt.figure(figsize=(10,8))
plt.plot(train_accuracies, label='train acc')
plt.plot(val_accuracies, label='val acc')
plt.axhline(test_acc, ls='--', color='grey', label='test acc')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.ylim(0.3, 1.05)
plt.legend()
###Output
_____no_output_____
###Markdown
Train Linear classifier as reference
###Code
val_accuracies = []
train_accuracies = []
# setup loss and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(linear_classifier.parameters(), lr=learning_rate)
# train for n_epochs
for epoch in range(n_epochs):
val_acc = validation_step(linear_classifier, criterion, val_loader)
if epoch % 25 == 0:
print('Validation accuracy of linear classifier after {0} epochs: {1}'.format(epoch, val_acc))
val_accuracies.append(val_acc)
num_correct = 0
num_shown = 0
for j, data in enumerate(train_loader):
inputs, labels = data
# need to convert to float32 because data is in float64
inputs = inputs.float()
labels = labels.float()
# zero the parameter gradients
optimizer.zero_grad()
# forward pass
outputs = linear_classifier(inputs)
num_correct += len(outputs[outputs.argmax(1) == labels.argmax(1)])
num_shown += len(labels)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
accuracy = float(num_correct) / num_shown
train_accuracies.append(accuracy)
# after training evaluate on test set
test_acc = validation_step(linear_classifier, criterion, test_loader)
print('#############################')
print('Final test accuracy linear classifier:', test_acc)
print('#############################')
plt.figure(figsize=(10,8))
plt.plot(train_accuracies, label='train acc (lin classifier)')
plt.plot(val_accuracies, label='val acc (lin classifier)')
plt.axhline(test_acc, ls='--', color='grey', label='test acc (lin classifier)')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.ylim(0.3, 1.05)
plt.legend()
###Output
_____no_output_____
###Markdown
Options of the query

Academic Units
###Code
bcapi.academic_unit_options()
###Output
_____no_output_____
###Markdown
Categories
###Code
bcapi.category_options()
###Output
_____no_output_____
###Markdown
Campuses
###Code
bcapi.campus_options()
###Output
_____no_output_____
###Markdown
Semesters
###Code
bcapi.semester_options()
###Output
_____no_output_____
###Markdown
Queries

BCAPI has two methods that perform a request: the first returns the response as the original HTML and the second as JSON.
###Code
bcapi.search_html(semester="2019-2", name="diseño")
###Output
_____no_output_____
###Markdown
The JSON method returns the response together with a boolean that is True if the returned courses are NOT all of the courses that match the search parameters.
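For instance, a caller might use that flag to detect a truncated result set. The snippet below is a minimal sketch only, assuming the `bcapi` client created above and the `search_json` signature shown in this notebook:

```python
import json

res, incomplete = bcapi.search_json(semester="2019-2", name="diseño")
courses = json.loads(res)
if incomplete:
    # the result set is truncated: narrow the search parameters to retrieve all matches
    print(f"Warning: only {len(courses)} of the matching courses were returned")
else:
    print(f"All {len(courses)} matching courses were returned")
```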
###Code
res, incomplete = bcapi.search_json(semester="2019-2", name="diseño")
res
incomplete
import json
res = json.loads(res)
res[0]
###Output
_____no_output_____
###Markdown
1. Get Entity Information
###Code
# Get label of Belgium (Q31)
print(db.get_label("Q31"))
# Get labels in all languages of Belgium (Q31)
print(db.get_labels("Q31"))
# Get label in a specific language
print(db.get_labels("Q31", "ja"))
# Get aliases in all languages of Belgium (Q31)
print(db.get_aliases("Q31"))
# Get aliases in a specific language of Belgium (Q31)
print(db.get_aliases("Q31", "ja"))
# Get descriptions in all languages of Belgium (Q31)
print(db.get_descriptions("Q31"))
# Get descriptions in a specific language of Belgium (Q31)
print(db.get_descriptions("Q31", "ja"))
# Get sitelinks of Belgium (Q31)
print(db.get_sitelinks("Q31"))
# Get Wikipedia title of Belgium (Q31)
print(db.get_wikipedia_title("ja", "Q31"))
# Get Wikipedia link of Belgium (Q31)
print(db.get_wikipedia_link("ja", "Q31"))
# Get claims of Belgium (Q31)
print(db.get_claims("Q31"))
# Get all information of Belgium (Q31)
print(db.get_item("Q31"))
# Get redirect of Belgium (Q31)
redirects = db.get_redirect_of("Q31")
print(redirects)
# Get the redirect target of the first redirecting item
print(db.get_redirect(redirects[0]))
# Get instance of Belgium (Q31)
instance_ofs = db.get_instance_of("Q31")
for i, wd_id in enumerate(instance_ofs):
print(f"{i}: {wd_id} - {db.get_label(wd_id)}")
# Get subclass of Belgium (Q31)
print(db.get_subclass_of("Q31"))
# Get all types of Belgium (Q31)
types = db.get_all_types("Q31")
for i, wd_id in enumerate(types):
print(f"{i}: {wd_id} - {db.get_label(wd_id)}")
# Get properties between two Wikidata items
properties = db.get_properties_from_head_qid_tail_qid("Q1490", "Q17")
for i, wd_id in enumerate(properties):
print(f"{i+1}: {wd_id} - {db.get_label(wd_id)}")
###Output
1: P131 - located in the administrative territorial entity
2: P17 - country
3: P1376 - capital of
###Markdown
2. Get Provenance nodes
###Code
# Print provenance list
def print_provenance_list(iter_obj, top=3):
for i, provenance in enumerate(iter_obj):
if i > top:
break
subject = provenance["subject"]
predicate = provenance["predicate"]
value = provenance["value"]
reference_node = provenance["reference"]
print(
f"{i+1}: <{subject}[{db.get_label(subject)}] - {predicate}[{db.get_label(predicate)}] - {value}>]]"
)
print(f" Reference Node:")
for ref_type, ref_objs in reference_node.items():
for ref_prop, ref_v in ref_objs.items():
print(f" {ref_prop}[{db.get_label(ref_prop)}]: {ref_v}")
print()
# Get provenance of Belgium (Q31)
print_provenance_list(db.iter_provenances("Q31"))
# Get provenance of Belgium (Q31), and Tokyo (Q1490)
print_provenance_list(db.iter_provenances(["Q31", "Q1490"]))
# Get provenance of all items
print_provenance_list(db.iter_provenances())
###Output
1: <Q31[Belgium] - P2581[BabelNet ID] - 00009714n>]]
Reference Node:
P248[stated in]: ['Q4837690']
2: <Q31[Belgium] - P227[GND ID] - 4005406-8>]]
Reference Node:
P143[imported from Wikimedia project]: ['Q48183']
3: <Q31[Belgium] - P982[MusicBrainz area ID] - 5b8a5ee5-0bb3-34cf-9a75-c27c44e341fc>]]
Reference Node:
P248[stated in]: ['Q14005']
4: <Q31[Belgium] - P2981[UIC alphabetical country code] - B>]]
Reference Node:
P854[reference URL]: ['http://otif.org/fileadmin/user_upload/otif_verlinkte_files/06_tech_zulass/05_Reglementation_en_vigueur/Neu_ab_01_01_2015/UTP_MARKING_2015_e_in_force.pdf']
1: <Q31[Belgium] - P2581[BabelNet ID] - 00009714n>]]
Reference Node:
P248[stated in]: ['Q4837690']
2: <Q31[Belgium] - P227[GND ID] - 4005406-8>]]
Reference Node:
P143[imported from Wikimedia project]: ['Q48183']
3: <Q31[Belgium] - P982[MusicBrainz area ID] - 5b8a5ee5-0bb3-34cf-9a75-c27c44e341fc>]]
Reference Node:
P248[stated in]: ['Q14005']
4: <Q31[Belgium] - P2981[UIC alphabetical country code] - B>]]
Reference Node:
P854[reference URL]: ['http://otif.org/fileadmin/user_upload/otif_verlinkte_files/06_tech_zulass/05_Reglementation_en_vigueur/Neu_ab_01_01_2015/UTP_MARKING_2015_e_in_force.pdf']
1: <Q1[universe] - P373[Commons category] - Universe>]]
Reference Node:
P143[imported from Wikimedia project]: ['Q328']
2: <Q1[universe] - P18[image] - Hubble ultra deep field.jpg>]]
Reference Node:
P143[imported from Wikimedia project]: ['Q48183']
P4656[Wikimedia import URL]: ['https://de.wikipedia.org/w/index.php?title=Universum&oldid=211589784']
P813[retrieved]: ['2021-05-22']
3: <Q1[universe] - P18[image] - CMB Timeline300 no WMAP.jpg>]]
Reference Node:
P143[imported from Wikimedia project]: ['Q328']
P4656[Wikimedia import URL]: ['https://en.wikipedia.org/w/index.php?title=Universe&oldid=1023252612']
P813[retrieved]: ['2021-05-22']
4: <Q1[universe] - P18[image] - NASA-HS201427a-HubbleUltraDeepField2014-20140603.jpg>]]
Reference Node:
P143[imported from Wikimedia project]: ['Q328']
P4656[Wikimedia import URL]: ['https://en.wikipedia.org/w/index.php?title=Universe&oldid=1023252612']
P813[retrieved]: ['2021-05-22']
###Markdown
Wikidata provenances stats
###Code
from collections import Counter
from tqdm.notebook import tqdm
c_entities = 0
c_facts = 0
c_refs = 0
ref_types = Counter()
ref_props = Counter()
ref_props_c = 0
ref_types_c = 0
def update_desc():
return f"Facts:{c_facts:,}|Refs:{c_refs:,}"
step = 10000
for wd_id, claims in tqdm(db.iter_item_provenances(), total=db.size()):
c_entities += 1
for claim_type, claim_objs in claims.items():
for claim_prop, claim_values in claim_objs.items():
for claim_value in claim_values:
c_facts += 1
refs = claim_value.get("references")
if not refs:
continue
for reference_node in refs:
c_refs += 1
for ref_type, ref_objs in reference_node.items():
ref_types_c += 1
ref_types[ref_type] += 1
for ref_prop in ref_objs.keys():
ref_props_c += 1
ref_props[ref_prop] += 1
print("Reference node stats")
print(f"Items: {c_entities:,} entities")
print(f"Facts: {c_facts:,} facts, {c_facts/c_entities:.2f} facts/entity")
print(f"References: {c_refs:,} references, {c_refs/c_facts:.2f} references/fact")
print("\nReference stats:")
print(f"Types/reference: {ref_types_c / c_refs:.2f}")
print(f"Properties/reference: {ref_props_c / c_refs:.2f}")
def print_top(counter_obj, total, top=100, message="", get_label=False):
print(f"Top {top} {message}: ")
top_k = sorted(counter_obj.items(), key=lambda x: x[1], reverse=True)[:top]
for i, (obj, obj_c) in enumerate(top_k):
if get_label:
obj = f"{obj}\t{db.get_label(obj)}"
print(f"{i+1}\t{obj_c:,}\t{obj_c/total*100:.2f}%\t{obj}")
print_top(ref_types, total=c_refs, message="types")
print_top(ref_props, total=c_refs, message="properties", get_label=True)
###Output
_____no_output_____
###Markdown
3. Entities boolean search

Find the subset of entities (head entities) that have statements about given tail entities and properties (triples of head entity, property, tail entity).
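Conceptually, this boolean search can be thought of as set operations over an inverted index that maps (property, tail entity) statements to the ids of the head entities holding them. The sketch below is illustrative only; `inverted_index` and `boolean_filter` are hypothetical names, not part of the actual implementation behind `get_haswbstatements`:

```python
def boolean_filter(inverted_index, params):
    """params: [logic, property or None, tail_qid] triples, applied left to right."""
    result = None
    for logic, prop, qid in params:
        # head entities that have a statement with this (property, value) pair;
        # prop=None stands for "any property pointing at qid"
        ids = set(inverted_index.get((prop, qid), ()))
        if result is None:
            result = ids
        elif logic == "AND":
            result &= ids  # intersection
        else:  # "OR"
            result |= ids  # union
    return result
```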
###Code
import time
import config as cf
def find_wikidata_items_haswbstatements(params, print_top=3, get_qid=True):
start = time.time()
wd_ids = db.get_haswbstatements(params, get_qid=get_qid)
end = time.time() - start
print("Query:")
for logic, prop, qid in params:
if prop is None:
prop_label = ""
else:
prop_label = f" - {prop}[{db.get_label(prop)}]"
qid_label = db.get_label(qid)
print(f"{logic}{prop_label}- {qid}[{qid_label}]")
print(f"Answers: Found {len(wd_ids):,} items in {end:.5f}s")
for i, wd_id in enumerate(wd_ids[:print_top]):
print(f"{i+1}. {wd_id} - {db.get_label(wd_id)}")
print(f"{4}. ...")
print()
print("1.1. Get all female (Q6581072)")
find_wikidata_items_haswbstatements(
[
[cf.ATTR_OPTS.AND, None, "Q6581072"]
]
)
print("1.1. Get all female (Q6581072)")
find_wikidata_items_haswbstatements(
[
[cf.ATTR_OPTS.AND, None, "Q6581072"]
],
get_qid=False
)
print("1.2. Get all male (Q6581072)")
find_wikidata_items_haswbstatements(
[
[cf.ATTR_OPTS.AND, None, "Q6581097"]
]
)
print("1.2. Get all male (Q6581072)")
find_wikidata_items_haswbstatements(
[
[cf.ATTR_OPTS.AND, None, "Q6581097"]
],
get_qid=False
)
print("2. Get all entities has relation with Graduate University for Advanced Studies (Q2983844)")
find_wikidata_items_haswbstatements(
[
# ??? - Graduate University for Advanced Studies
[cf.ATTR_OPTS.AND, None, "Q2983844"]
]
)
print("3. Get all entities who are human, male, educated at Todai, and work at SOKENDAI")
find_wikidata_items_haswbstatements(
[
# instance of - human
[cf.ATTR_OPTS.AND, "P31", "Q5"],
# gender - male
[cf.ATTR_OPTS.AND, "P21", "Q6581097"],
# educated at - Todai
[cf.ATTR_OPTS.AND, "P69", "Q7842"],
# employer - Graduate University for Advanced Studies
[cf.ATTR_OPTS.AND, "P108", "Q2983844"],
]
)
print("4. Get all entities that have relation with human, male, Todai, and SOKENDAI")
find_wikidata_items_haswbstatements(
[
# instance of - human
[cf.ATTR_OPTS.AND, None, "Q5"],
# gender - male
[cf.ATTR_OPTS.AND, None, "Q6581097"],
# educated at - Todai
[cf.ATTR_OPTS.AND, None, "Q7842"],
# employer - Graduate University for Advanced Studies
[cf.ATTR_OPTS.AND, None, "Q2983844"],
]
)
print("5. Get all entities that have relation with scholarly article or DNA, X-ray diffraction, and Francis Crick and Nature")
find_wikidata_items_haswbstatements(
[
# ? - scholarly article
[cf.ATTR_OPTS.AND, None, "Q13442814"],
# ? - DNA
[cf.ATTR_OPTS.OR, None, "Q7430"],
# ? - X-ray diffraction
[cf.ATTR_OPTS.OR, None, "Q12101244"],
        # ? - molecular geometry
[cf.ATTR_OPTS.OR, None, "Q911331"],
# Francis Crick
[cf.ATTR_OPTS.AND, None, "Q123280"],
# ? - Nature
[cf.ATTR_OPTS.AND, None, "Q180445"],
]
)
###Output
1.1. Get all female (Q6581072)
Query:
AND- Q6581072[female]
Answers: Found 1,874,319 items in 1.90434s
1. Q814 - Coco Austin
2. Q873 - Meryl Streep
3. Q839 - Georgina Cassar
4. ...
1.1. Get all female (Q6581072)
Query:
AND- Q6581072[female]
Answers: Found 1,874,319 items in 0.00701s
1. 247 - Coco Austin
2. 269 - Meryl Streep
3. 301 - Georgina Cassar
4. ...
1.2. Get all male (Q6581097)
Query:
AND- Q6581097[male]
Answers: Found 5,868,897 items in 5.04684s
1. Q80 - Tim Berners-Lee
2. Q76 - Barack Obama
3. Q42 - Douglas Adams
4. ...
1.2. Get all male (Q6581097)
Query:
AND- Q6581097[male]
Answers: Found 5,868,897 items in 0.02339s
1. 24 - Tim Berners-Lee
2. 31 - Barack Obama
3. 45 - Douglas Adams
4. ...
2. Get all entities that have a relation with Graduate University for Advanced Studies (Q2983844)
Query:
AND- Q2983844[Graduate University for Advanced Studies]
Answers: Found 209 items in 0.00059s
1. Q758600 - Tatsuya Horita
2. Q311174 - Fumihito, Prince Akishino
3. Q532387 - Takashi Gojobori
4. ...
3. Get all entities who are human, male, educated at Todai, and work at SOKENDAI
Query:
AND - P31[instance of]- Q5[human]
AND - P21[sex or gender]- Q6581097[male]
AND - P69[educated at]- Q7842[University of Tokyo]
AND - P108[employer]- Q2983844[Graduate University for Advanced Studies]
Answers: Found 28 items in 0.01831s
1. Q1620298 - Hirotaka Sugawara
2. Q1737903 - Keiichi Kodaira
3. Q8056214 - Ōsumi Yoshinori
4. ...
4. Get all entities that have a relation with human, male, Todai, and SOKENDAI
Query:
AND- Q5[human]
AND- Q6581097[male]
AND- Q7842[University of Tokyo]
AND- Q2983844[Graduate University for Advanced Studies]
Answers: Found 34 items in 0.01514s
1. Q758600 - Tatsuya Horita
2. Q1620298 - Hirotaka Sugawara
3. Q1737903 - Keiichi Kodaira
4. ...
5. Get all entities that have a relation with scholarly article or DNA, X-ray diffraction, and Francis Crick and Nature
Query:
AND- Q13442814[scholarly article]
OR- Q7430[DNA]
OR- Q12101244[X-ray diffraction]
OR- Q911331[molecular geometry]
AND- Q123280[Francis Crick]
AND- Q180445[Nature]
Answers: Found 46 items in 0.01754s
1. Q1895685 - Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid
2. Q40010403 - Letter: Molecular structure of NAD.
3. Q49026943 - X-ray diffraction study of the interaction of phospholipids with cytochrome c in the aqueous phase.
4. ...
###Markdown
Try changing the value in the text box
###Code
w
###Output
_____no_output_____
###Markdown
JS dynamically updates Python
###Code
w.value
###Output
_____no_output_____
###Markdown
Python dynamically updates JS
###Code
w.value="some variable value specified in Python"
###Output
_____no_output_____
###Markdown
Adversarial examples for signatures - example

This notebook shows a case of adversarial examples for handwritten signatures (https://github.com/luizgh/adversarial_signatures). The following steps are considered:
1. Load data
2. Extract features and train a WD classifier
3. Perform a type-I attack (change a genuine signature so that it is rejected)
4. Perform a type-II attack (change a skilled forgery so that it is accepted)

For more details, refer to the paper:
[1] Hafemann, Luiz G., Robert Sabourin, and Luiz S. Oliveira. "Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification" [preprint](https://arxiv.org/abs/1901.03398)
###Code
# Load the required libraries:
import torch
import numpy as np
import matplotlib.pyplot as plt
# Model and WD training:
from sigver.featurelearning.models import SigNet
from wd import create_trainset_for_user, train_wdclassifier_user
# Functions to generate attacks
from model_utils import TorchRBFSVM, ToTwoOutputs
from attacks.fgm import fgm
from attack_utils import carlini_attack, rmse
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
%matplotlib inline
###Output
_____no_output_____
###Markdown
1) Loading the data

We will attack signatures from a fictitious user "Joao". We will also use features from other users in the MCYT dataset as negative examples (random forgeries).

Manually download these files:

Store these on adversarial_examples/data:
https://drive.google.com/open?id=1MPNJVVQXZwz38dmeqIeyditIUxTpFO2d
https://drive.google.com/open?id=1r-5lnJtChaAa8R4ocE1ZyorTdTiAqlQA

Store this on adversarial_examples/models:
https://drive.google.com/open?id=1l8NFdxSvQSLb2QTv71E6bKcTgvShKPpx
###Code
# Check that the dataset and model have been downloaded (see links above)
from urllib import request
from pathlib import Path
if not Path('data/dataset_joao.npz').exists() or not Path('data/mcyt_train_signet_features.npz').exists():
raise RuntimeError('Please download the dataset from the links above')
if not Path('models/signet.pth').exists():
raise RuntimeError('Please download the model from the link above')
print('All downloaded')
# Load MCYT features
mcyt_data = np.load('data/mcyt_train_signet_features.npz')
mcyt_features = mcyt_data['signet_features']
mcyt_y = mcyt_data['y']
mcyt_yforg = mcyt_data['yforg']
# Load dataset for the user under attack
joao_data = np.load('data/dataset_joao.npz')
joao_x, joao_y, joao_yforg = joao_data['x'], joao_data['y'], joao_data['yforg']
# Visualize some signatures
f, ax = plt.subplots(2, 5, figsize=(12, 4))
for i in range(5):
ax[0][i].imshow(joao_x[15+i], cmap='Greys')
ax[0][i].axis('off')
ax[0][i].set_title('genuine' if joao_yforg[15+i] == 0 else 'forgery' )
for i in range(5):
ax[1][i].imshow(joao_x[i], cmap='Greys')
ax[1][i].axis('off')
ax[1][i].set_title('genuine' if joao_yforg[i] == 0 else 'forgery' )
###Output
_____no_output_____
###Markdown
2) Extract features and train a WD classifier

We will use the SigNet model as a feature extractor, and an SVM with the RBF kernel as the classifier. We will consider a "Perfect Knowledge" scenario, where the attacker has full access to the system under attack.
###Code
# Load the trained model
state_dict, _, _ = torch.load('models/signet.pth')
model = SigNet()
model.load_state_dict(state_dict)
model = model.to(device).eval()
# Extract features (to train the WD classifier)
def extract_features(model, images):
# Note: input pixels must be between [0, 1] for this model
input = torch.tensor(images).float().div(255).view(-1, 1, 150, 220).to(device)
with torch.no_grad():
return model(input).cpu().numpy()
joao_features = extract_features(model, joao_x)
# Let's split the data into train (last 5 samples), and test. For this user, the first 15 samples
# are forgeries, and the remaining are genuine signatures
joao_train_idx = slice(20, None)
joao_test_gen_idx = slice(15, 20)
joao_test_forg_idx = slice(0, 15)
# Ensure that joao has a "y" different from all users in MCYT
assert len(set(joao_y).intersection(set(mcyt_y))) == 0
# Ensure we chose the indexes correctly: first 15 should be forgery, others should be genuine
assert np.all(joao_yforg[joao_test_gen_idx] == 0)
assert np.all(joao_yforg[joao_test_forg_idx] == 1)
joao_id = 0
# Append this new user to the MCYT data:
xfeatures_train = np.concatenate((mcyt_features, joao_features[joao_train_idx]))
y_train = np.concatenate((mcyt_y, joao_y[joao_train_idx]))
yforg_train = np.concatenate((mcyt_yforg, joao_yforg[joao_train_idx]))
# Create the training set for the user
trainingSet = create_trainset_for_user(xfeatures_train, y_train, yforg_train, user=joao_id)
# Train the classifier
clf = train_wdclassifier_user('rbf', 1, 2**-11, trainingSet)
decision_threshold = 0.368 # From the MCYT dataset
# Check the predictions for unseen signatures from the user:
print('Predictions on genuine signatures (True = genuine)')
print(clf.decision_function(joao_features[joao_test_gen_idx]) > decision_threshold)
print()
print('Predictions on skilled forgeries (True = genuine)')
print(clf.decision_function(joao_features[joao_test_forg_idx]) > decision_threshold)
# Let's take the first genuine signature (id 15) and the first skilled forgery (id 0) to
# attack - both are correctly classified by the model
gen_idx = 15
forg_idx = 0
genuine_to_attack = joao_x[gen_idx:gen_idx+1]
forgery_to_attack = joao_x[forg_idx:forg_idx+1]
# Plot the chosen signatures
f, ax = plt.subplots(1,2, figsize=(10,4))
ax[0].imshow(genuine_to_attack.squeeze(), cmap='Greys')
ax[0].axis('off')
ax[0].set_title('genuine')
ax[1].imshow(forgery_to_attack.squeeze(), cmap='Greys')
ax[1].set_title('forgery')
ax[1].axis('off')
###Output
_____no_output_____
###Markdown
3) Type-I attack: making a genuine signature be rejected

We will take the genuine signature above and run attacks that attempt to change it so that it is recognized as a forgery.
###Code
# First, let's concatenate the CNN model with the SVM model, so that the whole process
# is implemented in PyTorch, and we can use autograd to compute the gradients:
cnn_svm = torch.nn.Sequential(model, TorchRBFSVM(clf, device)).eval()
def to_torch(np_array):
return torch.tensor(np_array).unsqueeze(0).float().div(255).to(device)
# Let's double check that the concatenated model has the same output as before:
clf_decision = clf.decision_function(joao_features[gen_idx:gen_idx+1])
cnn_svm_decision = cnn_svm(to_torch(genuine_to_attack)).item()
print('classifier score: {:.4f}; concatenated model score: {:.4f}'.format(clf_decision.item(), cnn_svm_decision))
# For some attacks, we need the output to be two values (prediction for class 0 and for class 1),
# We implement this in the function ToTwoOutputs, that also takes the decision threshold into consideration:
cnn_svm_two_outputs = torch.nn.Sequential(cnn_svm, ToTwoOutputs(decision_threshold)).eval()
cnn_svm_two_outputs(to_torch(genuine_to_attack))
###Output
_____no_output_____
###Markdown
This is a normalized score (considering the decision threshold). If the value for class 1 (second position) is greater than 0, it means the score is greater than the decision threshold, so the signature is recognized as genuine.
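To make the convention concrete, here is a minimal sketch of what such a wrapper could look like. This is an assumption for illustration only; the actual `ToTwoOutputs` in `model_utils` may be implemented differently:

```python
import torch

class TwoOutputsSketch(torch.nn.Module):
    """Hypothetical wrapper: map a scalar decision score to two outputs centred on the threshold."""
    def __init__(self, threshold):
        super().__init__()
        self.threshold = threshold

    def forward(self, score):
        s = score.view(-1, 1) - self.threshold
        # column 0 = forgery, column 1 = genuine; column 1 > 0 exactly when score > threshold
        return torch.cat([-s, s], dim=1)
```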
###Code
# Now, let's run the attacks
gen_fgm_atk = fgm(cnn_svm_two_outputs, genuine_to_attack, 1000, 0, device,
image_constraints=(0, 255))
gen_carlini_atk = carlini_attack(cnn_svm_two_outputs, genuine_to_attack, 0, device)
rmse_gen_fgm = rmse(gen_fgm_atk - genuine_to_attack)
rmse_gen_carlini = rmse(gen_carlini_atk - genuine_to_attack)
score_genuine = cnn_svm_two_outputs(to_torch(genuine_to_attack))[0,1].item()
score_genuine_fgm = cnn_svm_two_outputs(to_torch(gen_fgm_atk))[0,1].item()
score_genuine_carlini = cnn_svm_two_outputs(to_torch(gen_carlini_atk))[0,1].item()
print('Original image score (normalized): {:.4f}'.format(score_genuine))
print('FGM attack score (normalized): {:.4f}. RMSE (distortion): {:.4f}'.format(score_genuine_fgm, rmse_gen_fgm))
print('Carlini attack score (normalized): {:.4f}. RMSE (distortion): {:.4f}'.format(score_genuine_carlini, rmse_gen_carlini))
###Output
Original image score (normalized): 0.3351
FGM attack score (normalized): -1.7107. RMSE (distortion): 5.1233
Carlini attack score (normalized): -0.5007. RMSE (distortion): 1.3703
###Markdown
We can see that both attacks were successful in making the signature be classified as a forgery (score < 0). Let's visualize them:
###Code
f, ax = plt.subplots(1, 3, figsize=(12,4))
ax[0].imshow(genuine_to_attack.squeeze(), cmap='Greys')
ax[0].axis('off')
ax[0].set_title('Original')
ax[1].imshow(gen_fgm_atk.squeeze(), cmap='Greys')
ax[1].axis('off')
ax[1].set_title('FGM attack')
ax[2].imshow(gen_carlini_atk.squeeze(), cmap='Greys')
ax[2].axis('off')
ax[2].set_title('Carlini attack')
###Output
_____no_output_____
###Markdown
The two images are adversarial, but we can barely see any difference compared to the original.

4) Type-II attack: making a forgery be accepted

We will take the skilled forgery above and run attacks that attempt to change it so that it is recognized as genuine.
###Code
forg_fgm_atk = fgm(cnn_svm_two_outputs, forgery_to_attack, 1000, 1, device,
image_constraints=(0, 255))
forg_carlini_atk = carlini_attack(cnn_svm_two_outputs, forgery_to_attack, 1, device)
rmse_forg_fgm = rmse(forg_fgm_atk - forgery_to_attack)
rmse_forg_carlini = rmse(forg_carlini_atk - forgery_to_attack)
score_forgery = cnn_svm_two_outputs(to_torch(forgery_to_attack))[0,1].item()
score_forgery_fgm = cnn_svm_two_outputs(to_torch(forg_fgm_atk))[0,1].item()
score_forgery_carlini = cnn_svm_two_outputs(to_torch(forg_carlini_atk))[0,1].item()
print('Original image score (normalized): {:.4f}'.format(score_forgery))
print('FGM attack score (normalized): {:.4f}. RMSE (distortion): {:.4f}'.format(score_forgery_fgm, rmse_forg_fgm))
print('Carlini attack score (normalized): {:.4f}. RMSE (distortion): {:.4f}'.format(score_forgery_carlini, rmse_forg_carlini))
###Output
Original image score (normalized): -0.1657
FGM attack score (normalized): 0.3901. RMSE (distortion): 3.3628
Carlini attack score (normalized): 0.5000. RMSE (distortion): 2.4376
###Markdown
Again, the two attacks were successful (both attacked images are classified as genuine).
###Code
f, ax = plt.subplots(1, 3, figsize=(12,4))
ax[0].imshow(forgery_to_attack.squeeze(), cmap='Greys')
ax[0].axis('off')
ax[0].set_title('Original')
ax[1].imshow(forg_fgm_atk.squeeze(), cmap='Greys')
ax[1].axis('off')
ax[1].set_title('FGM attack')
ax[2].imshow(forg_carlini_atk.squeeze(), cmap='Greys')
ax[2].axis('off')
ax[2].set_title('Carlini attack')
###Output
_____no_output_____
###Markdown
Fixed cycle intervals
###Code
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = CyclicCosineDecayLR(optimizer,
init_decay_epochs=100,
min_decay_lr=0.01,
restart_interval = 30,
restart_lr=0.06)
visualize_learning_rate(scheduler, epochs=300)
###Output
_____no_output_____
###Markdown
Geometrically increasing cycle intervals
###Code
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = CyclicCosineDecayLR(optimizer,
init_decay_epochs=100,
min_decay_lr=0.01,
restart_interval=30,
restart_interval_multiplier=1.5,
restart_lr=0.06)
visualize_learning_rate(scheduler, epochs=300)
###Output
_____no_output_____
###Markdown
If `restart_lr` is omitted, the learning rate is set to `lr` on each restart.
###Code
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = CyclicCosineDecayLR(optimizer,
init_decay_epochs=100,
min_decay_lr=0.01,
restart_interval=30,
restart_interval_multiplier=1.5)
visualize_learning_rate(scheduler, epochs=300)
###Output
_____no_output_____
###Markdown
With warmup
###Code
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = CyclicCosineDecayLR(optimizer,
init_decay_epochs=100,
min_decay_lr=0.01,
restart_interval = 30,
restart_lr=0.06,
warmup_epochs=40,
warmup_start_lr=0.03)
visualize_learning_rate(scheduler, epochs=300)
###Output
_____no_output_____
###Markdown
No warmup, no cycles

Just a normal cosine annealing.
###Code
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = CyclicCosineDecayLR(optimizer,
init_decay_epochs=100,
min_decay_lr=0.01)
visualize_learning_rate(scheduler, epochs=300)
###Output
_____no_output_____
###Markdown
Multiple parameter groups
###Code
model2 = TestNet()
optimizer_mul = optim.SGD([
{'params': model.parameters(), 'lr': 0.08},
{'params': model2.parameters(), 'lr': 0.1}
])
scheduler = CyclicCosineDecayLR(optimizer_mul,
init_decay_epochs=100,
min_decay_lr=[0.01, 0.02],
restart_interval = 30,
restart_lr=[0.05, 0.06],
warmup_epochs=40,
warmup_start_lr=[0.03, 0.04])
visualize_learning_rate(scheduler, epochs=300)
###Output
_____no_output_____
###Markdown
ASRpy usage Example

---

This notebook will provide a simple example of how to apply the Artifact Subspace Reconstruction method to an MNE-Python raw object. You should be able to run this notebook directly from your browser by clicking on the `Open in Colab` link above.

---

First you need to install [ASRpy](https://github.com/DiGyt/asrpy) in your Python environment. If you're not working from a Jupyter Notebook, paste the below line (without the `!`) into your command line.
###Code
!pip install git+https://github.com/DiGyt/asrpy.git -q
###Output
|████████████████████████████████| 7.4 MB 5.1 MB/s
Building wheel for asrpy (setup.py) ... done
###Markdown
Now, import all required libraries.
###Code
# import libraries
import mne
from mne.datasets import ssvep
from asrpy import ASR
###Output
_____no_output_____
###Markdown
Load a raw EEG recording and do some basic preprocessing (resampling, filtering).
###Code
# Load raw data
data_path = ssvep.data_path()
raw_fname = data_path + '/sub-02/ses-01/eeg/sub-02_ses-01_task-ssvep_eeg.vhdr'
raw = mne.io.read_raw_brainvision(raw_fname, preload=True, verbose=False)
# Set montage
montage = mne.channels.make_standard_montage('easycap-M1')
raw.set_montage(montage, verbose=False)
# downsample for faster computation
raw.resample(256)
# apply a highpass filter from 1 Hz upwards
raw.filter(1., None, fir_design='firwin') # replace baselining with high-pass
# Construct epochs
event_id = {'12hz': 255, '15hz': 155}
events, _ = mne.events_from_annotations(raw, verbose=False)
# epoching time frame
tmin, tmax = -0.1, 1.5
# create an uncleaned average (for comparison purposes)
noisy_avg = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=None, baseline=None, preload=True,
verbose=False).average()
###Output
Using default location ~/mne_data for ssvep...
Creating ~/mne_data
###Markdown
Use ASRpy with MNE raw objects.

ASRpy is implemented to work directly on MNE Raw data instances. As you can see below, you should be able to apply it to an MNE Raw object without any problems. If you want to fit your ASR on plain numpy arrays instead, please use `asrpy.asr_calibrate` and `asrpy.asr_process`.
###Code
# Apply the ASR
asr = ASR(sfreq=raw.info["sfreq"], cutoff=15)
asr.fit(raw)
raw = asr.transform(raw)
# Create an average using the cleaned data
clean_avg = mne.Epochs(raw, events, event_id, -0.1, 1.5, proj=False,
picks=None, baseline=None, preload=True,
verbose=False).average()
###Output
_____no_output_____
###Markdown
Done. Now we can plot the noisy vs. the clean data in order to compare them.
###Code
# set y axis limits
ylim = dict(eeg=[-10, 20])
# Plot the average epoch before ASR
noisy_avg.plot(spatial_colors=True, ylim=ylim,
titles="before ASR")
# Plot the average epoch after ASR
clean_avg.plot(spatial_colors=True, ylim=ylim,
titles="after ASR");
###Output
_____no_output_____
###Markdown
Use ASRpy with numpy arrays.

If you are working with numpy arrays of EEG data (instead of MNE objects), you can use the `asr_calibrate` and `asr_process` functions to clean your data.
###Code
from asrpy import asr_calibrate, asr_process, clean_windows
# create a numpy array of EEG data from the MNE raw object
eeg_array = raw.get_data()
# extract the sampling frequency from the MNE raw object
sfreq = raw.info["sfreq"]
# (optional) make sure your asr is only fitted to clean parts of the data
pre_cleaned, _ = clean_windows(eeg_array, sfreq, max_bad_chans=0.1)
# fit the asr
M, T = asr_calibrate(pre_cleaned, sfreq, cutoff=15)
# apply it
clean_array = asr_process(eeg_array, sfreq, M, T)
###Output
_____no_output_____
###Markdown
From Flax's annotated MNIST
###Code
import jax
import jax.numpy as jnp
from flax import linen as nn
from flax.training import train_state
import numpy as np
import optax
import tensorflow_datasets as tfds
import tensorflow as tf
tf.config.experimental.set_visible_devices([], "GPU")
class CNN(nn.Module):
@nn.compact
def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
x = nn.Conv(features=32, kernel_size=(3, 3))(x)
x = nn.relu(x)
x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
x = nn.Conv(features=64, kernel_size=(3, 3))(x)
x = nn.relu(x)
x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
x = x.reshape((x.shape[0], -1))
x = nn.Dense(features=256)(x)
x = nn.relu(x)
x = nn.Dense(features=10)(x)
x = nn.log_softmax(x)
return x
def cross_entropy_loss(logits: jnp.ndarray, labels: jnp.ndarray) -> jnp.ndarray:
onehot = jax.nn.one_hot(labels, num_classes=10)
return -jnp.mean(jnp.sum(onehot * logits, axis=-1))
def compute_metrics(logits: jnp.ndarray, labels: jnp.ndarray) -> jnp.ndarray:
loss = cross_entropy_loss(logits, labels)
accuracy = jnp.mean(jnp.argmax(logits, -1) == labels)
return {'loss': loss, 'accuracy': accuracy}
def get_datasets():
ds_builder = tfds.builder('mnist')
ds_builder.download_and_prepare()
train_ds = tfds.as_numpy(ds_builder.as_dataset(split='train', batch_size=-1))
test_ds = tfds.as_numpy(ds_builder.as_dataset(split='test', batch_size=-1))
train_ds['image'] = jnp.float32(train_ds['image']) / 255.0
test_ds['image'] = jnp.float32(test_ds['image']) / 255.0
return train_ds, test_ds
def create_train_state(rng, learning_rate, momentum):
cnn = CNN()
params = cnn.init(rng, jnp.ones([1, 28, 28, 1]))['params']
tx = optax.sgd(learning_rate, momentum)
return train_state.TrainState.create(apply_fn=cnn.apply, params=params, tx=tx)
@jax.jit
def train_step(state, batch):
def loss_fn(params):
logits = CNN().apply({'params': params}, batch['image'])
loss = cross_entropy_loss(logits, batch['label'])
return loss, logits
grad_fn = jax.value_and_grad(loss_fn, has_aux=True)
(_, logits), grads = grad_fn(state.params)
state = state.apply_gradients(grads=grads)
metrics = compute_metrics(logits, batch['label'])
return state, metrics
@jax.jit
def eval_step(params, batch):
logits = CNN().apply({'params': params}, batch['image'])
return compute_metrics(logits, labels=batch['label'])
def train_epoch(state, train_ds, batch_size, epoch, rng):
train_ds_size = len(train_ds['image'])
steps_per_epoch = train_ds_size // batch_size
perms = jax.random.permutation(rng, train_ds_size)
perms = perms[:steps_per_epoch*batch_size]
perms = perms.reshape((steps_per_epoch, batch_size))
batch_metrics = []
for perm in perms:
batch = {k: v[perm, ...] for k, v in train_ds.items()}
state, metrics = train_step(state, batch)
batch_metrics.append(metrics)
batch_metrics_np = jax.device_get(batch_metrics)
epoch_metrics_np = {
k: np.mean([metrics[k] for metrics in batch_metrics_np])
for k in batch_metrics_np[0]
}
print(f"train epoch: {epoch} loss: {epoch_metrics_np['loss']:.4f} accuracy: {epoch_metrics_np['accuracy']:.4f}")
return state
def eval_model(params, test_ds):
metrics = eval_step(params, test_ds)
metrics = jax.device_get(metrics)
summary = jax.tree_map(lambda x: x.item(), metrics)
return summary['loss'], summary['accuracy']
train_ds, test_ds = get_datasets()
rng = jax.random.PRNGKey(0)
rng, init_rng = jax.random.split(rng)
learning_rate = 0.1
momentum = 0.9
state = create_train_state(init_rng, learning_rate, momentum)
num_epochs = 10
batch_size = 32
for epoch in range(num_epochs):
rng, input_rng = jax.random.split(rng)
state = train_epoch(state, train_ds, batch_size, epoch, input_rng)
test_loss, test_accuracy = eval_model(state.params, test_ds)
print(f">>> loss:{test_loss:.4f} accuracy: {test_accuracy:.4f}")
###Output
train epoch: 0 loss: 0.1334 accuracy: 0.9592
>>> loss:0.0614 accuracy: 0.9796
train epoch: 1 loss: 0.0481 accuracy: 0.9853
>>> loss:0.0540 accuracy: 0.9842
train epoch: 2 loss: 0.0336 accuracy: 0.9898
>>> loss:0.0311 accuracy: 0.9900
train epoch: 3 loss: 0.0246 accuracy: 0.9921
>>> loss:0.0360 accuracy: 0.9912
train epoch: 4 loss: 0.0212 accuracy: 0.9932
>>> loss:0.0340 accuracy: 0.9905
train epoch: 5 loss: 0.0174 accuracy: 0.9948
>>> loss:0.0286 accuracy: 0.9915
train epoch: 6 loss: 0.0114 accuracy: 0.9965
>>> loss:0.0413 accuracy: 0.9888
train epoch: 7 loss: 0.0100 accuracy: 0.9971
>>> loss:0.0462 accuracy: 0.9890
train epoch: 8 loss: 0.0096 accuracy: 0.9971
>>> loss:0.0381 accuracy: 0.9897
train epoch: 9 loss: 0.0083 accuracy: 0.9974
>>> loss:0.0366 accuracy: 0.9917
###Markdown
There is a progress bar for each joint component that you're testing. This helps with really large AJIVE analyses.
###Code
js = JIVEJackstraw()
js.fit(datablock, cns, alpha=.01, bonferroni=True)
js.results[0]['significant']
###Output
_____no_output_____
###Markdown
Dataset

**anime.csv**
* anime_id - unique id identifying an anime.
* name - full name of anime.
* genre - comma separated list of genres for this anime.
* type - movie, TV, OVA, etc.
* episodes - how many episodes in this show (1 if movie).
* rating - average rating out of 10 for this anime.
* members - number of community members that are in this anime's "group".

**rating.csv**
* user_id - non identifiable randomly generated user id.
* anime_id - the anime that this user has rated.
* rating - rating out of 10 this user has assigned (-1 if the user watched it but didn't assign a rating).

Source courtesy: [Kaggle](https://www.kaggle.com/CooperUnion/anime-recommendations-database)
###Code
# user-anime details
anime = pd.read_csv('anime.csv')
# individual user rating details per item
ratings = pd.read_csv('rating.csv')
anime.head()
ratings.head()
# merging anime.csv and rating.csv on anime_id column
fulldata=pd.merge(anime, ratings, on='anime_id', suffixes= ['', '_user'])
fulldata = fulldata.rename(columns={'name': 'anime_title', 'rating_user': 'user_rating'})
fulldata.head()
fulldata.shape
fulldata.isnull().sum()
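# NOTE (assumption, not applied in this notebook): per the dataset description, user_rating == -1
# means "watched but not rated"; one might exclude those rows before computing recommendations, e.g.:
# fulldata = fulldata[fulldata['user_rating'] != -1]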
'''
- df: dataframe is fulldata
- item_col: item characteristic by which we need to recommend (genre in this case)
- user_col: column name that contains user IDs
- user: user ID for whom we need to recommend items
- user_rating_col: column name containing individual user ratings
- avg_rating_col: column name containing average rating per item
'''
sr.rec(df = fulldata,
item_col = 'genre',
user_col = 'user_id',
user = 100,
user_rating_col = 'user_rating',
avg_rating_col = 'rating')
# it is important to drop NaN values in case of using parameter `sep`
fulldata.dropna(inplace = True)
'''
.
.
.
- sep: separator of parameter `item_col`
only in case the recommendation has to be on primary genre
'''
sr.rec(df = fulldata,
item_col = 'genre',
user_col = 'user_id',
user = 100,
user_rating_col = 'user_rating',
avg_rating_col = 'rating',
sep = ',')
###Output
_____no_output_____
###Markdown
Load data
###Code
from sklearn.datasets import fetch_mldata
from sklearn.preprocessing import scale
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score
mnist = fetch_mldata('MNIST original', data_home='./tmp')
# only binary classification supported
mask = (mnist['target'] == 3) + (mnist['target'] == 5)
X_all = scale(mnist['data'][mask].astype(float))
y_all = (mnist['target'][mask]==3)*1
# make it more sparse
X_all = X_all * (np.random.uniform(0, 1, X_all.shape) > 0.8)
print('Dataset shape: {}'.format(X_all.shape))
print('Non-zeros rate: {}'.format(np.mean(X_all != 0)))
print('Classes balance: {} / {}'.format(np.mean(y_all==0), np.mean(y_all==1)))
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, random_state=42, test_size=0.3)
###Output
Dataset shape: (13454, 784)
Non-zeros rate: 0.163297919138
Classes balance: 0.469228482236 / 0.530771517764
###Markdown
Baselines
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
for model in [
LogisticRegression(),
RandomForestClassifier(n_jobs=-1, n_estimators=200)
]:
model.fit(X_tr, y_tr)
predictions = model.predict(X_te)
acc = accuracy_score(y_te, predictions)
print('model: {}'.format(model.__str__()))
print('accuracy: {}'.format(acc))
print()
###Output
model: LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
accuracy: 0.902155065643
()
model: RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=-1,
oob_score=False, random_state=None, verbose=0,
warm_start=False)
accuracy: 0.892494426554
()
###Markdown
Dense example
###Code
from tffm import TFFMClassifier
for order in [2, 3]:
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=50,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='dense'
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
# this will close tf.Session and free resources
model.destroy()
###Output
100%|██████████| 50/50 [00:10<00:00, 5.32epoch/s]
###Markdown
Sparse example
###Code
import scipy.sparse as sp
# only CSR format supported
X_tr_sparse = sp.csr_matrix(X_tr)
X_te_sparse = sp.csr_matrix(X_te)
order = 3
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=50,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse'
)
model.fit(X_tr_sparse, y_tr, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
model.destroy()
###Output
100%|██████████| 50/50 [00:23<00:00, 2.31epoch/s]
###Markdown
Regression example
###Code
from tffm import TFFMRegressor
from sklearn.metrics import mean_squared_error
model = TFFMRegressor(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=50,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse'
)
# translate Y from {0,1} to {-10, 10}
model.fit(X_tr_sparse, y_tr*20-10, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions > 0)))
print('MSE: {}'.format(mean_squared_error(y_te*20-10, predictions)))
model.destroy()
###Output
100%|██████████| 50/50 [00:25<00:00, 1.73epoch/s]
###Markdown
n_features/time complexity
###Code
n_features = X_all.shape[1]
used_features = range(100, 1000, 100)
n_repeats = 5
elapsed_mean = []
elapsed_std = []
model_title = ''
for cur_n_feats in tqdm(used_features):
time_observation = []
for _ in range(n_repeats):
active_features = np.random.choice(range(n_features), size=cur_n_feats)
model = TFFMClassifier(
order=5,
rank=50,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=1,
batch_size=-1,
init_std=0.01,
input_type='dense'
)
model_title = model.__str__()
# manually initialize model without calling .fit()
model.core.set_num_features(cur_n_feats)
model.core.build_graph()
model.initialize_session()
start_time = time.time()
predictions = model.decision_function(X_all[:, active_features])
end_time = time.time()
model.destroy()
time_observation.append(end_time - start_time)
elapsed_mean.append(np.mean(time_observation))
elapsed_std.append(np.std(time_observation))
%pylab inline
errorbar(used_features, elapsed_mean, yerr=elapsed_std)
xlim(0, 1000)
title(model_title)
xlabel('n_features')
ylabel('test time')
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Logging example
###Code
order = 3
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=10,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse',
log_dir='./tmp/logs',
verbose=1
)
model.fit(X_tr_sparse, y_tr, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
###Output
Initialize logs, use:
tensorboard --logdir=/Users/mikhail/std/tffm/tmp/logs
###Markdown
Save/load example
###Code
model.save_state('./tmp/state.tf')
model.load_state('./tmp/state.tf')
###Output
_____no_output_____
###Markdown
Different optimizers
###Code
for optim, title in [(tf.train.AdamOptimizer(learning_rate=0.001), 'Adam'),
(tf.train.FtrlOptimizer(0.05, l1_regularization_strength=0.001), 'FTRL')]:
acc = []
model = TFFMClassifier(
order=3,
rank=10,
optimizer=optim,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse',
)
n_epochs = 5
anchor_epochs = range(0, 100+1, n_epochs)
for _ in anchor_epochs:
# score result every 5 epochs
model.fit(X_tr_sparse, y_tr, n_epochs=n_epochs)
predictions = model.predict(X_te_sparse)
acc.append(accuracy_score(y_te, predictions))
plot(anchor_epochs, acc, label=title)
model.destroy()
xlabel('n_epochs')
ylabel('accuracy')
legend()
grid()
###Output
_____no_output_____
###Markdown
Example Usage

This is a basic example using the torchvision COCO dataset from coco.py; it assumes that you've already downloaded the COCO images and annotations JSON. You'll notice that the scale augmentations are quite extreme.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import cv2
import numpy as np
from copy_paste import CopyPaste
from coco import CocoDetectionCP
from visualize import display_instances
import albumentations as A
import random
from matplotlib import pyplot as plt
transform = A.Compose([
A.RandomScale(scale_limit=(-0.9, 1), p=1), #LargeScaleJitter from scale of 0.1 to 2
A.PadIfNeeded(256, 256, border_mode=0), #pads with image in the center, not the top left like the paper
A.RandomCrop(256, 256),
CopyPaste(blend=True, sigma=1, pct_objects_paste=0.8, p=1.) #pct_objects_paste is a guess
], bbox_params=A.BboxParams(format="coco", min_visibility=0.05)
)
data = CocoDetectionCP(
'../../datasets/coco/train2014/',
'../../datasets/coco/annotations/instances_train2014.json',
transform
)
f, ax = plt.subplots(1, 2, figsize=(16, 16))
index = random.randint(0, len(data))
img_data = data[index]
image = img_data['image']
masks = img_data['masks']
bboxes = img_data['bboxes']
empty = np.array([])
display_instances(image, empty, empty, empty, empty, show_mask=False, show_bbox=False, ax=ax[0])
if len(bboxes) > 0:
boxes = np.stack([b[:4] for b in bboxes], axis=0)
box_classes = np.array([b[-2] for b in bboxes])
mask_indices = np.array([b[-1] for b in bboxes])
show_masks = np.stack(masks, axis=-1)[..., mask_indices]
class_names = {k: data.coco.cats[k]['name'] for k in data.coco.cats.keys()}
display_instances(image, boxes, show_masks, box_classes, class_names, show_bbox=True, ax=ax[1])
else:
display_instances(image, empty, empty, empty, empty, show_mask=False, show_bbox=False, ax=ax[1])
###Output
_____no_output_____
###Markdown
Generating the dataset

Each sample is a "tape" formed by two rows: the top row $y[0]$ contains random numbers sampled from the interval $[0, 1)$, while the second, $y[1]$, is formed by a string of zeros, except for one position, which has a one. The model is trained as a regressor to produce on the output the value from the first row of the column marked as 1. For example, given:

$$ X = \begin{bmatrix} 0.13 & 0.01 & 0.11 & 0.32 & 0.24 & 0.01 \\ 0 & 0 & 0 & 0 & 1 & 0 \\\end{bmatrix},$$

the model should produce $\mathcal{M}_\theta(X) = 0.24$. We train two models, one without attention and the other with attention, as shown below.
###Code
N_SAMPLES = 1000
TIMESTEPS = 64
# (b, t, d)
X = np.random.rand(N_SAMPLES, TIMESTEPS, 1)
F = np.zeros(shape=(N_SAMPLES, TIMESTEPS, 1))
X = np.concatenate((X, F), axis=2)
Y = list()
correct_timesteps = np.random.randint(low=0, high=TIMESTEPS, size=(N_SAMPLES))
for sample, timestep in enumerate(correct_timesteps):
X[sample][timestep][1] = 1
Y.append(X[sample][timestep][0])
Y = np.asarray(Y).reshape(-1, 1)
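# Added sanity check (not in the original notebook): the target really is the value
# at the marked timestep, exactly as described in the markdown above.
i = 0
assert Y[i, 0] == X[i, correct_timesteps[i], 0]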
###Output
_____no_output_____
###Markdown
Defining a simple and an attention model
###Code
def build_vanilla_model():
model_in = Input(shape=(TIMESTEPS, 2), name='sequence-in')
vectors = LSTM(units=4, name='lstm')(model_in)
output = Dense(units=1, activation='linear')(vectors)
# (b, t, d)
model = models.Model(inputs=[model_in], outputs=[output])
model.summary()
model.compile(optimizers.Adam(1e-2), 'mse', metrics=['mse'])
return model
def build_attention_model():
model_in = Input(shape=(None, 2), name='sequence-in')
masked = Masking(name='mask')(model_in)
vectors = LSTM(units=4, name='lstm', return_sequences=True)(masked)
descr = AttentionLayer(name='attention')(vectors)
output = Dense(units=1, activation='linear')(descr)
# (b, t, d)
model = models.Model(inputs=[model_in], outputs=[output])
model.summary()
model.compile(optimizers.Adam(5e-2), 'mse', metrics=['mse'])
return model
###Output
_____no_output_____
###Markdown
Training them
###Code
vanilla_model = build_vanilla_model()
vanilla_model.fit(X, Y, batch_size=32, epochs=10)
attention_model = build_attention_model()
attention_model.predict(X).shape
attention_model.fit(X, Y, batch_size=32, epochs=10)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequence-in (InputLayer) (None, None, 2) 0
_________________________________________________________________
mask (Masking) (None, None, 2) 0
_________________________________________________________________
lstm (LSTM) (None, None, 4) 112
_________________________________________________________________
attention (AttentionLayer) (None, 4) 24
_________________________________________________________________
dense_2 (Dense) (None, 1) 5
=================================================================
Total params: 141
Trainable params: 141
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
1000/1000 [==============================] - 4s 4ms/step - loss: 0.1063 - mean_squared_error: 0.1063
Epoch 2/10
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0822 - mean_squared_error: 0.0822
Epoch 3/10
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0094 - mean_squared_error: 0.0094
Epoch 4/10
1000/1000 [==============================] - 3s 3ms/step - loss: 0.0020 - mean_squared_error: 0.0020
Epoch 5/10
1000/1000 [==============================] - 3s 3ms/step - loss: 8.2353e-04 - mean_squared_error: 8.2353e-04
Epoch 6/10
1000/1000 [==============================] - 3s 3ms/step - loss: 2.9403e-04 - mean_squared_error: 2.9403e-04
Epoch 7/10
1000/1000 [==============================] - 3s 3ms/step - loss: 1.6836e-04 - mean_squared_error: 1.6836e-04
Epoch 8/10
1000/1000 [==============================] - 3s 3ms/step - loss: 1.3372e-04 - mean_squared_error: 1.3372e-04
Epoch 9/10
1000/1000 [==============================] - 3s 3ms/step - loss: 1.4867e-04 - mean_squared_error: 1.4867e-04
Epoch 10/10
1000/1000 [==============================] - 3s 3ms/step - loss: 1.6271e-04 - mean_squared_error: 1.6271e-04
###Markdown
Comparing loss functions
###Code
plt.plot(vanilla_model.history.history['loss'], 'o-', alpha=0.8, label='Without attention')
plt.plot(attention_model.history.history['loss'], 'o-', alpha=0.8, label='With attention')
plt.grid(alpha=0.4)
plt.legend()
###Output
_____no_output_____
###Markdown
Visualizing attention weights
###Code
def copy_attention_model(model):
model_in = Input(shape=(None, 2), name='sequence-in')
masked = Masking(name='mask')(model_in)
vectors = LSTM(units=4, name='lstm', return_sequences=True, weights=model.layers[2].get_weights())(masked)
attention = AttentionLayer(name='attention', return_attention=True, weights=model.layers[3].get_weights())(vectors)
new_model = models.Model(inputs=[model_in], outputs=[attention])
new_model.summary()
return new_model
def plot_attention_graph(selected_samples, attention_coefs, selected_values):
for sample_id, attention, y in zip(selected_samples, attention_coefs, selected_values):
plt.figure(figsize=(20, 3))
plt.title(f'Sample #{sample_id}. Correct value @ index {y}')
plt.xlabel('Timesteps')
plt.ylabel('Attention coefficient')
plt.grid(alpha=0.4)
plt.plot(attention)
max_attention = np.max(attention)
U = np.linspace(0, max_attention, 20)
y = np.ones_like(U) * y
plt.plot(y, U, 'r', alpha=0.8, label='Correct timestep')
plt.legend()
coef_model = copy_attention_model(attention_model)
N_PICKS = 5
selected_samples = np.random.randint(0, high=N_SAMPLES, size=N_PICKS)
selected_values = correct_timesteps[selected_samples]
attention_coefs = coef_model.predict(X[selected_samples])
plot_attention_graph(selected_samples, attention_coefs, selected_values)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequence-in (InputLayer) (None, None, 2) 0
_________________________________________________________________
mask (Masking) (None, None, 2) 0
_________________________________________________________________
lstm (LSTM) (None, None, 4) 112
_________________________________________________________________
attention (AttentionLayer) (None, None) 24
=================================================================
Total params: 136
Trainable params: 136
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training a model with attention layer and masking

Notice that the model is trained for fewer epochs to make the "bump" in the attention coefficients between the masked and non-masked regions visible. It's actually a bit hard to stop the training in a good spot just by changing the number of epochs: if the loss is too low, the bump is not visible; if it is too high, the attention simply makes no sense, since the model is still learning what to attend to.
###Code
AMOUNT_BLANKS = 16
# prepending each sample is AMOUNT_BLANKS blank timesteps
O = np.zeros(shape=(1000, AMOUNT_BLANKS, 2))
O = np.concatenate((O, X), axis=1)
# training the model
masked_model = build_attention_model()
masked_model.fit(O, Y, batch_size=16, epochs=2)
# building the model to get the attention weights
masked_model_coefs = copy_attention_model(masked_model)
# picking some samples for visualization
selected_samples = np.random.randint(0, high=N_SAMPLES, size=N_PICKS)
selected_values = correct_timesteps[selected_samples] + AMOUNT_BLANKS
attention_coefs = masked_model_coefs.predict(O[selected_samples])
# plotting their attention weights
plot_attention_graph(selected_samples, attention_coefs, selected_values)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequence-in (InputLayer) (None, None, 2) 0
_________________________________________________________________
mask (Masking) (None, None, 2) 0
_________________________________________________________________
lstm (LSTM) (None, None, 4) 112
_________________________________________________________________
attention (AttentionLayer) (None, 4) 24
_________________________________________________________________
dense_6 (Dense) (None, 1) 5
=================================================================
Total params: 141
Trainable params: 141
Non-trainable params: 0
_________________________________________________________________
Epoch 1/2
1000/1000 [==============================] - 6s 6ms/step - loss: 0.0933 - mean_squared_error: 0.0933
Epoch 2/2
1000/1000 [==============================] - 5s 5ms/step - loss: 0.0263 - mean_squared_error: 0.0263
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
sequence-in (InputLayer) (None, None, 2) 0
_________________________________________________________________
mask (Masking) (None, None, 2) 0
_________________________________________________________________
lstm (LSTM) (None, None, 4) 112
_________________________________________________________________
attention (AttentionLayer) (None, None) 24
=================================================================
Total params: 136
Trainable params: 136
Non-trainable params: 0
_________________________________________________________________
###Markdown
Data Exploration

Let's take a peek into the data and explore its variables. The dataset is a supervised learning dataset with over 12000 instances and 26 attributes; this means there is an input variable X and an output variable y.
###Code
#load the data to understand the attributes and data types
df.head()
#let's look at the data types
df.dtypes
###Output
_____no_output_____
###Markdown
The data has a few numerical datatypes and the rest are string objects; however, all of the columns can be treated as categorical, with a mix of binary and ordinal variables.
###Code
#change temperature into a category as it's an ordinal datatype
df['temperature']=df['temperature'].astype('category')
###Output
_____no_output_____
###Markdown
Cleaning The Data
###Code
#check for empty values
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 12684 entries, 0 to 12683
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 destination 12684 non-null object
1 passanger 12684 non-null object
2 weather 12684 non-null object
3 temperature 12684 non-null category
4 time 12684 non-null object
5 coupon 12684 non-null object
6 expiration 12684 non-null object
7 gender 12684 non-null object
8 age 12684 non-null object
9 maritalStatus 12684 non-null object
10 has_children 12684 non-null int64
11 education 12684 non-null object
12 occupation 12684 non-null object
13 income 12684 non-null object
14 car 108 non-null object
15 Bar 12577 non-null object
16 CoffeeHouse 12467 non-null object
17 CarryAway 12533 non-null object
18 RestaurantLessThan20 12554 non-null object
19 Restaurant20To50 12495 non-null object
20 toCoupon_GEQ5min 12684 non-null int64
21 toCoupon_GEQ15min 12684 non-null int64
22 toCoupon_GEQ25min 12684 non-null int64
23 direction_same 12684 non-null int64
24 direction_opp 12684 non-null int64
25 Y 12684 non-null int64
dtypes: category(1), int64(7), object(18)
memory usage: 2.4+ MB
###Markdown
There are some missing values in several columns. The 'car' variable has only 108 non-null values, so more than 99% of its values are NaN; that is far too little information to be useful, so it's best to drop the column entirely to avoid inaccuracies in the modeling.
###Code
df["car"].value_counts()
df.drop('car', inplace=True, axis=1)
###Output
_____no_output_____
###Markdown
Empty values in categorical data can be removed or replaced with the most frequent value in each column. Let's iterate over the columns that contain empty or NaN values and, for each one, fill the missing entries with that column's most frequent value.
###Code
for x in df.columns[df.isna().any()]:
df = df.fillna({x: df[x].value_counts().idxmax()})
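# Equivalent alternative (added sketch, not part of the original workflow): fill every
# such column with its mode in a single call.
# df = df.fillna({c: df[c].mode()[0] for c in df.columns[df.isna().any()]})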
#change object datatypes to categorical datatypes
df_obj = df.select_dtypes(include=['object']).copy()
for col in df_obj.columns:
df[col]=df[col].astype('category')
df.dtypes
#let's do some statistical analysis
df.describe(include='all')
df.select_dtypes('int64').nunique()
###Output
_____no_output_____
###Markdown
From the description above we can tell that 'toCoupon_GEQ5min' has only one unique value, which won't contribute anything to the encoding of the categorical variables. Therefore, it's better to drop that column.
###Code
df.drop(columns=['toCoupon_GEQ5min'], inplace=True)
###Output
_____no_output_____
###Markdown
Let's plot the distribution charts of all the categorical datatypes.
###Code
fig, axes = plt.subplots(9, 2, figsize=(20,50))
axes = axes.flatten()
for ax, col in zip(axes, df.select_dtypes('category').columns):
sns.countplot(y=col, data=df, ax=ax,
palette="ch:.25", order=df[col].value_counts().index);
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
We are going to create feature vectors for our modeling by using the LabelEncoder and OneHotEncoder.
###Code
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
enc = OneHotEncoder(dtype='int64')
df_cat = df.select_dtypes(include=['category']).copy()
df_int = df.select_dtypes(include=['int64']).copy()
df_enc = pd.DataFrame()
for col in df_cat.columns:
enc_results = enc.fit_transform(df_cat[[col]])
enc_cat = [col + '_' + str(x) for x in enc.categories_[0]]
df0 = pd.DataFrame(enc_results.toarray(), columns=enc_cat)
df_enc = pd.concat([df_enc,df0], axis=1)
df_final = pd.concat([df_enc, df_int], axis=1)
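# Note (added): pandas can produce the same one-hot column layout in a single call, e.g.
# df_enc_alt = pd.get_dummies(df_cat, prefix_sep='_')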
df_final
import numpy as np
import pandas as pd
from pandas.io.parsers import read_csv
from BOAmodel import *
from collections import defaultdict
""" parameters """
# The following parameters are recommended to change depending on the size and complexity of the data
N = 2000 # number of rules to be used in SA_patternbased and also the output of generate_rules
Niteration = 500 # number of iterations in each chain
Nchain = 2 # number of chains in the simulated annealing search algorithm
supp = 5 # 5% is a generally good number. The higher this supp, the 'larger' a pattern is
maxlen = 3 # maximum length of a pattern
# \rho = alpha/(alpha+beta). Make sure \rho is close to one when choosing alpha and beta.
alpha_1 = 500 # alpha_+
beta_1 = 1 # beta_+
alpha_2 = 500 # alpha_-
beta_2 = 1 # beta_-
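# Added check of the note above: rho_+ = alpha_1 / (alpha_1 + beta_1) = 500 / 501 ~= 0.998, close to one as recommended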
""" input file """
# # notice that in the example, X is already binary coded.
# # Data has to be binary coded and the column name should have the form: attributename_attributevalue
# filepathX = 'tictactoe_X.txt' # input file X
# filepathY = 'tictactoe_Y.txt' # input file Y
# df = read_csv(filepathX,header=0,sep=" ")
# Y = np.loadtxt(open(filepathY,"rb"),delimiter=" ")
df = df_final.iloc[:,:-1].reset_index(drop=True)
Y = df_final.iloc[:,-1].reset_index(drop=True)
lenY = len(Y)
train_index = sample(range(lenY),int(0.70*lenY))
test_index = [i for i in range(lenY) if i not in train_index]
model = BOA(df.iloc[train_index].reset_index(drop=True), Y.iloc[train_index].reset_index(drop=True))
model.generate_rules(supp, maxlen,N)
model.set_parameters(alpha_1, beta_1, alpha_2, beta_2, None, None)
rules = model.SA_patternbased(Niteration, Nchain, print_message=True)
# test
Yhat = predict(rules, df.iloc[test_index].reset_index(drop=True))
TP,FP,TN,FN = getConfusion(Yhat, Y[test_index].reset_index(drop=True))
tpr = float(TP)/(TP+FN)
fpr = float(FP)/(FP+TN)
print('TP = {}, FP = {}, TN = {}, FN = {} \n accuracy = {}, tpr = {}, fpr = {}'.\
format(TP,FP,TN,FN, float(TP+TN)/(TP+TN+FP+FN),tpr,fpr))
###Output
Took 57.536s to generate 32162 rules
Screening rules using information gain
###Markdown
Sample Data
###Code
df = pd.read_csv("datasets/FIFA 2018 Statistics.csv")
df.head()
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
numerical_features = [column for column in df.columns if df[column].dtype in [np.int64]]
X = df[numerical_features]
y = df['Man of the Match'].astype('category').cat.codes # Convert `Yes` to 1 and `No` to 0 label
###Output
_____no_output_____
###Markdown
Classification Model Construction
###Code
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
random_seed = 76
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=random_seed)
classifier = RandomForestClassifier(n_estimators=50, random_state=random_seed)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Permutation Importance
###Code
import eli5
from eli5.sklearn import PermutationImportance
permutation = PermutationImportance(classifier, random_state=random_seed).fit(X_test, y_test)
eli5.show_weights(permutation, feature_names=X_test.columns.tolist())
###Output
_____no_output_____
###Markdown
Partial Dependence Plots
###Code
from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots
feature_name = 'Goal Scored'
pdp_goal_scores = pdp.pdp_isolate(model=classifier, dataset=X_test, model_features=numerical_features, feature=feature_name)
pdp.pdp_plot(pdp_goal_scores, feature_name)
plt.show()
feature_name = 'Blocked'
pdp_goal_scores = pdp.pdp_isolate(model=classifier, dataset=X_test, model_features=numerical_features, feature=feature_name)
pdp.pdp_plot(pdp_goal_scores, feature_name)
plt.show()
feature_names = ['Goal Scored', 'Blocked']
interact = pdp.pdp_interact(model=classifier, dataset=X_test, model_features=numerical_features, features=feature_names)
pdp.pdp_interact_plot(pdp_interact_out=interact, feature_names=feature_names, plot_type='contour')
plt.show()
###Output
_____no_output_____
###Markdown
SHAP Values
###Code
sample = X_test.iloc[0] # select first row of the test set
sample_array = np.expand_dims(sample.values, axis=0)
confidence = classifier.predict_proba(sample_array)
predicted_class = classifier.classes_[confidence.argmax()]
predicted_prob = confidence.max()
# class 1 is equal to "Win Man of the Match"
print(f"predicted class: {predicted_class} with confidence level: {predicted_prob}")
import shap
explainer = shap.TreeExplainer(classifier)
shap_values = explainer.shap_values(sample)
shap.initjs()
shap.force_plot(explainer.expected_value[1], shap_values[1], sample)
###Output
_____no_output_____
###Markdown
Summary Plots
###Code
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values[1], X_test)
###Output
_____no_output_____
###Markdown
Run the model

Most important arguments:

- *dbName*: Name of the database to store to. Default: *chargingmodel/output/[scenario]_[strategy].db*
- *scenario*: Set the EVSE scenario. Defines the availability and power of charging options at each location.
- *strategy*: Uncontrolled, Opt_county, Opt_National (See Paper)
###Code
uncontrolled = "chargingmodel/output/MyUncontrolled.db"
chargingmodel.run(dbName=uncontrolled, strategy='Uncontrolled', n_worker=3)
# Takes ~ 3min
# Here, setting n_worker != 1 means running agent batches in parallel. This can lead to errors due to less frequent residual load updates.
# Default batchsize = 25, Default n_worker = 1
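# Hypothetical call (added for illustration only, not run here) that also sets the EVSE `scenario`
# argument described above; the exact scenario names depend on the package's input configuration:
# chargingmodel.run(dbName="chargingmodel/output/MyScenario.db", scenario="MyScenario", strategy="Uncontrolled")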
optimal = "chargingmodel/output/MyOptimized.db"
chargingmodel.run(dbName=optimal, strategy='Opt_National', verbose=False)
###Output
_____no_output_____
###Markdown
Create aggregated loads for each region or all regions combined. The result is stored in the database.
###Code
# All regions together
chargingmodel.postprocessing.processTotalLoad(uncontrolled)
chargingmodel.postprocessing.processTotalLoad(optimal)
# All regions individually
chargingmodel.postprocessing.processRegionalLoad(uncontrolled)
chargingmodel.postprocessing.processRegionalLoad(optimal)
###Output
_____no_output_____
###Markdown
Get output load as time series
###Code
df_uncontrolled = chargingmodel.postprocessing.getProcessed(uncontrolled, region="Total")
df_optimal = chargingmodel.postprocessing.getProcessed(optimal, region="Total")
###Output
_____no_output_____
###Markdown
Example: Plot the result for one week
###Code
from datetime import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd
# Get the original load (not part of the package)
df_noEVLoad = None
regions = ["Region_1", "Region_2", "Region_3"]
for region in regions:
fn = "chargingmodel/input/residual-load/" + region + ".csv"
if df_noEVLoad is None:
df_noEVLoad = pd.read_csv(fn, sep=";", parse_dates=["TimeStamp"], index_col="TimeStamp")
df_noEVLoad.rename(columns={"ResidualLoad_MW": region}, inplace=True)
else:
df_tmp = pd.read_csv(fn, sep=";", parse_dates=["TimeStamp"], index_col="TimeStamp")
df_noEVLoad[region] = df_tmp["ResidualLoad_MW"]
df_noEVLoad['Total_MW'] = df_noEVLoad.loc[:, regions].sum(axis=1)
week = slice(dt(2030, 9, 16), dt(2030, 9, 22, 23, 45))
x = df_noEVLoad.loc[week, 'Total_MW'].index
noEV = df_noEVLoad.loc[week, 'Total_MW'].values
load_uncontrolled = noEV + df_uncontrolled.loc[week, 'PowerMW']
load_optimal = noEV + df_optimal.loc[week, 'PowerMW']
f, ax = plt.subplots(figsize=(20, 5))
ax.plot(x, noEV, label="No EV")
ax.plot(x, load_uncontrolled, label="Uncontrolled")
ax.plot(x, load_optimal, label="Optimal")
_ = ax.set_ylabel("Load [MW]")
_ = ax.legend()
###Output
_____no_output_____
###Markdown
Example Document

This is an example notebook to try out the ["Notebook as PDF"](https://github.com/betatim/notebook-as-pdf) extension. It contains a few plots from the excellent [matplotlib gallery](https://matplotlib.org/3.1.1/gallery/index.html).

To try out the extension click "File -> Download as -> PDF via HTML". This will convert this notebook into a PDF. This extension has three new features compared to the official "save as PDF" extension:

* it produces a PDF with the smallest number of page breaks,
* the original notebook is attached to the PDF; and
* this extension does not require LaTeX.

The created PDF will have as few pages as possible, in many cases only one. This is useful if you are exporting your notebook to a PDF for sharing with others who will view them on a screen.

To make it easier to reproduce the contents of the PDF at a later date the original notebook is attached to the PDF. Not all PDF viewers know how to deal with attachments. This means you need to use Acrobat Reader or pdf.js to be able to get the attachment from the PDF. Preview for OSX does not know how to display/give you access to PDF attachments.
###Code
import numpy as np
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
# Compute pie slices
N = 20
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
colors = plt.cm.viridis(radii / 10.)
ax = plt.subplot(111, projection='polar')
ax.bar(theta, radii, width=width, bottom=0.0, color=colors, alpha=0.5)
###Output
_____no_output_____
###Markdown
Below we show some more lines that go up and go down. These are noisy lines because we use a random number generator to create them. Fantastic isn't it?
###Code
x = np.linspace(0, 10)
# Fixing random state for reproducibility
np.random.seed(19680801)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x) + x + np.random.randn(50))
ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + np.random.randn(50));
###Output
_____no_output_____
###Markdown
Using algotrade library
###Code
# importing algotrade library
import algotrade
###Output
_____no_output_____
###Markdown
Creating and testing custom strategy
###Code
# Checking available strategies
algotrade.general.getStrategies()
###Output
_____no_output_____
###Markdown
Using strategy
###Code
from algotrade.strategies import MovingAverageAnd200SMA# import strategy
from ta.trend import ema_indicator # chose indicator from ta library
# read built in doc for __init__ to see available arguments
strategy = MovingAverageAnd200SMA(periods_short=25, periods_long=32, name='ema', indicator=ema_indicator)
test = algotrade.testing.TestStrategy(ticker="AMD", strategy=strategy, start_date='2012-01-01') # Test strategy
print(test) # to print stats
###Output
ticker = AMD
start_date = 2012-01-01
strategy = <class 'algotrade.strategies.MovingAverageAnd200SMA'>
stats = {'profit_sum': 539.0464283962972, 'profit_mean': 26.952321419814858, 'profit_median': -4.89390621536944, 'profit_win': 0.4, 'num_trades': 20}
###Markdown
Ploting strategy
###Code
test.plotBuySell(days=500, display_strategy=True) # plot strategy on chart
###Output
_____no_output_____
###Markdown
Getting most recent buy date
###Code
test.buy_dates[-1]
###Output
_____no_output_____
###Markdown
`scinum` example
###Code
from scinum import Number, Correlation, NOMINAL, UP, DOWN, ABS, REL
###Output
_____no_output_____
###Markdown
The examples below demonstrate

- [Numbers and formatting](Numbers-and-formatting)
- [Defining uncertainties](Defining-uncertainties)
- [Multiple uncertainties](Multiple-uncertainties)
- [Configuration of correlations](Configuration-of-correlations)
- [Automatic uncertainty propagation](Automatic-uncertainty-propagation)

Numbers and formatting
###Code
n = Number(1.234, 0.2)
n
###Output
_____no_output_____
###Markdown
The uncertainty definition is absolute. See the examples with [multiple uncertainties](Multiple-uncertainties) for relative uncertainty definitions.

The representation of numbers (`repr`) in jupyter notebooks uses latex-style formatting. Internally, [`Number.str()`](https://scinum.readthedocs.io/en/latest/scinum.Number.str) is called, which - among others - accepts a `format` argument, defaulting to `"%s"` (configurable globally or per instance via [`Number.default_format`](https://scinum.readthedocs.io/en/latest/scinum.Number.default_format)). Let's change the format for this notebook:
###Code
Number.default_format = "%.2f"
n
# or
n.str("%.3f")
###Output
_____no_output_____
###Markdown
Defining uncertainties

Above, `n` is defined with a single, symmetric uncertainty. Here are some basic examples of how to access and play with it:
###Code
# nominal value
print(n.nominal)
print(type(n.nominal))
# get the uncertainty
print(n.get_uncertainty())
print(n.get_uncertainty(direction=UP))
print(n.get_uncertainty(direction=DOWN))
# get the nominal value, shifted by the uncertainty
print(n.get()) # nominal value
print(n.get(UP)) # up variation
print(n.get(DOWN)) # down variation
# some more advanved use-cases:
# 1. get the multiplicative factor that would scale the nominal value to the UP/DOWN varied ones
print("absolute factors:")
print(n.get(UP, factor=True))
print(n.get(DOWN, factor=True))
# 2. get the factor to obtain the uncertainty only (i.e., the relative uncertainty)
# (this is, of course, more useful in case of multiple uncertainties, see below)
print("\nrelative factors:")
print(n.get(UP, factor=True, diff=True))
print(n.get(DOWN, factor=True, diff=True))
###Output
absolute factors:
1.1620745542949757
0.8379254457050244
relative factors:
0.1620745542949757
0.1620745542949757
###Markdown
There are also a few shorthands for the above methods:
###Code
# __call__ is forwarded to get()
print(n())
print(n(UP))
# u() is forwarded to get_uncertainty()
print(n.u())
print(n.u(direction=UP))
###Output
1.234
1.434
(0.2, 0.2)
0.2
###Markdown
Multiple uncertainties

Let's create a number that has two uncertainties: `"stat"` and `"syst"`. The `"stat"` uncertainty is asymmetric, and the `"syst"` uncertainty is relative.
###Code
n = Number(8848, {
"stat": (30, 20), # absolute +30-20 uncertainty
"syst": (REL, 0.5), # relative +-50% uncertainty
})
n
###Output
_____no_output_____
###Markdown
Similar to above, we can access the uncertainties and shifted values with [`get()`](https://scinum.readthedocs.io/en/latest/scinum.Number.get) (or `__call__`) and [`get_uncertainty()`](https://scinum.readthedocs.io/en/latest/scinum.Number.get_uncertainty) (or [`u()`](https://scinum.readthedocs.io/en/latest/scinum.Number.u)). But this time, we can distinguish between the combined (in quadrature) value and the particular uncertainty sources:
###Code
# nominal value as before
print(n.nominal)
# get all uncertainties (stored absolute internally)
print(n.uncertainties)
# get particular uncertainties
print(n.u("syst"))
print(n.u("stat"))
print(n.u("stat", direction=UP))
# get the nominal value, shifted by particular uncertainties
print(n(UP, "stat"))
print(n(DOWN, "syst"))
# compute the shifted value for both uncertainties, added in quadrature without correlation (default but configurable)
print(n(UP))
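# Added check (plain Python, not part of scinum): the combined UP shift is the individual
# shifts added in quadrature, 8848 + sqrt(30**2 + (0.5 * 8848)**2) ~= 13272.1, matching n(UP) above.
print(8848 + (30**2 + (0.5 * 8848)**2) ** 0.5)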
###Output
8878.0
4424.0
13272.101716733014
###Markdown
As before, we can also access certain aspects of the uncertainties:
###Code
print("factors for particular uncertainties:")
print(n.get(UP, "stat", factor=True))
print(n.get(DOWN, "syst", factor=True))
print("\nfactors for the combined uncertainty:")
print(n.get(UP, factor=True))
print(n.get(DOWN, factor=True))
###Output
factors for particular uncertainties:
1.0033905967450272
0.5
factors for the combined uncertainty:
1.500011496014129
0.49999489062775576
###Markdown
We can also apply some nice formatting:
###Code
print(n.str())
print(n.str("%.2f"))
print(n.str("%.2f", unit="m"))
print(n.str("%.2f", unit="m", force_asymmetric=True))
print(n.str("%.2f", unit="m", scientific=True))
print(n.str("%.2f", unit="m", si=True))
print(n.str("%.2f", unit="m", style="root"))
###Output
8848.00 +30.00-20.00 (stat) +- 4424.00 (syst)
8848.00 +30.00-20.00 (stat) +- 4424.00 (syst)
8848.00 +30.00-20.00 (stat) +- 4424.00 (syst) m
8848.00 +30.00-20.00 (stat) +4424.00-4424.00 (syst) m
8.85 +0.03-0.02 (stat) +- 4.42 (syst) x 1E3 m
8.85 +0.03-0.02 (stat) +- 4.42 (syst) km
8848.00 ^{+30.00}_{-20.00} #left(stat#right) #pm 4424.00 #left(syst#right) m
###Markdown
Configuration of correlations

Let's assume that we have a second measurement for the quantity `n` we defined above,
###Code
n
###Output
_____no_output_____
###Markdown
and we measured it with the same sources of uncertainty,
###Code
n2 = Number(8920, {
"stat": (35, 15), # absolute +35-15 uncertainty
"syst": (REL, 0.3), # relative +-30% uncertainty
})
n2
###Output
_____no_output_____
###Markdown
Now, we want to compute the average measurement, including correct error propagation under consideration of sensible correlations. For more info on automatic uncertainty propagation, see the [subsequent section](Automatic-uncertainty-propagation). In this example, we want to fully correlate the *systematic* uncertainty, whereas we can treat *statistical* effects as uncorrelated. However, just writing `(n + n2) / 2` will consider equally named uncertainty sources to be 100% correlated, i.e., both `syst` and `stat` uncertainties will be simply averaged. This is the default behavior in scinum as it is not possible (nor wise) to *guesstimate* the meaning of an uncertainty from its name.

While this approach is certainly correct for `syst`, we don't achieve the correct treatment for `stat`:
###Code
(n + n2) / 2
###Output
_____no_output_____
###Markdown
Instead, we need to define the correlation specifically for `stat`. This can be achieved in multiple ways, but the most pythonic way is to use a [`Correlation`](https://scinum.readthedocs.io/en/latest/correlation) object.
###Code
(n @ Correlation(stat=0) + n2) / 2
###Output
_____no_output_____
###Markdown
**Note** that the statistical uncertainty decreased as desired, whereas the systematic one remained the same.

`Correlation` objects have a default value that can be set as the first positional, yet optional parameter, and itself defaults to one.

Internally, the operation `n @ Correlation(stat=0)` (or `n * Correlation(stat=0)` in Python 2) is evaluated prior to the addition of `n2` and generates a so-called [`DeferredResult`](https://scinum.readthedocs.io/en/latest/deferredresult). This object carries the information of `n` and the correlation over to the next operation, at which point the uncertainty propagation is eventually resolved. As usual, in situations where the operator precedence might seem unclear, it is recommended to use parentheses to structure the expression.

Automatic uncertainty propagation

Let's continue working with the number `n` from above. Uncertainty propagation works in a pythonic way:
###Code
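# A minimal sketch (added) of the DeferredResult mechanism described in the note above:
# the correlation operator first produces an intermediate object, and the propagation is
# resolved only when the next arithmetic operation is applied.
deferred = n @ Correlation(stat=0)   # DeferredResult carrying n and the correlation
(deferred + n2) / 2                  # same result as in the previous section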
n + 200
n / 2
n**0.5
###Output
_____no_output_____
###Markdown
In cases such as the last one, formatting makes a lot of sense ...
###Code
(n**0.5).str("%.2f")
###Output
_____no_output_____
###Markdown
More complex operations such as `exp`, `log`, `sin`, etc, are provided on the `ops` object, which mimics Python's `math` module. The benefit of the `ops` object is that all its operations are aware of Gaussian error propagation rules.
###Code
from scinum import ops
# change the default format for convenience
Number.default_format = "%.3f"
# compute the log of n
ops.log(n)
###Output
_____no_output_____
###Markdown
The propagation is actually performed simultaneously per uncertainty source.
###Code
m = Number(5000, {"syst": 1000})
n + m
n / m
###Output
_____no_output_____
###Markdown
As described [above](Configuration-of-correlations), equally named uncertainty sources are assumed to be fully correlated. You can configure the correlation in operations through `Correlation` objects, or by using explicit methods on the number object.
###Code
# n.add(m, rho=0.5, inplace=False)
# same as
n @ Correlation(0.5) + m
###Output
_____no_output_____
###Markdown
When you set `inplace` to `True` (the default), `n` is updated inplace.
###Code
n.add(m, rho=0.5)
n
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Local
import Neuron
import models as models
import train as train
import batch_utils
import data_transforms
import generate_training_data
###Output
Using Theano backend.
###Markdown
Data
###Code
training_data = generate_training_data.y_shape(n_nodes=20,
data_size=1000,
first_length=10,
branching_node=6)
###Output
_____no_output_____
###Markdown
Global parameters
###Code
n_nodes = 20
input_dim = 100
n_epochs = 5
batch_size = 32
n_batch_per_epoch = np.floor(training_data['morphology']['n20'].shape[0]/batch_size).astype(int)
d_iters = 20
lr_discriminator = 0.001
lr_generator = 0.001
train_loss = 'binary_crossentropy'
#train_loss = 'wasserstein_loss'
rule = 'none'
d_weight_constraint = [-.03, .03]
g_weight_constraint = [-33.3, 33.3]
m_weight_constraint = [-33.3, 33.3]
###Output
_____no_output_____
###Markdown
Run
###Code
geom_model, morph_model, disc_model, gan_model = \
train.train_model(training_data=training_data,
n_nodes=n_nodes,
input_dim=input_dim,
n_epochs=n_epochs,
batch_size=batch_size,
n_batch_per_epoch=n_batch_per_epoch,
d_iters=d_iters,
lr_discriminator=lr_discriminator,
lr_generator=lr_generator,
d_weight_constraint=d_weight_constraint,
g_weight_constraint=g_weight_constraint,
m_weight_constraint=m_weight_constraint,
rule=rule,
train_loss=train_loss,
verbose=True)
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 19, 3) 0
____________________________________________________________________________________________________
input_2 (InputLayer) (None, 19, 20) 0
____________________________________________________________________________________________________
merge_1 (Merge) (None, 19, 23) 0 input_1[0][0]
input_2[0][0]
____________________________________________________________________________________________________
lambda_1 (Lambda) (None, 20, 103) 0 merge_1[0][0]
____________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2060) 0 lambda_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 1, 200) 412200 reshape_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 1, 50) 10050 dense_1[0][0]
____________________________________________________________________________________________________
dense_3 (Dense) (None, 1, 10) 510 dense_2[0][0]
____________________________________________________________________________________________________
dense_4 (Dense) (None, 1, 1) 11 dense_3[0][0]
====================================================================================================
Total params: 422,771
Trainable params: 422,771
Non-trainable params: 0
____________________________________________________________________________________________________
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
noise_input (InputLayer) (None, 1, 100) 0
____________________________________________________________________________________________________
dense_5 (Dense) (None, 1, 100) 10100 noise_input[0][0]
____________________________________________________________________________________________________
dense_6 (Dense) (None, 1, 100) 10100 dense_5[0][0]
____________________________________________________________________________________________________
dense_7 (Dense) (None, 1, 50) 5050 dense_6[0][0]
____________________________________________________________________________________________________
dense_8 (Dense) (None, 1, 57) 2907 dense_7[0][0]
____________________________________________________________________________________________________
reshape_2 (Reshape) (None, 19, 3) 0 dense_8[0][0]
====================================================================================================
Total params: 28,157
Trainable params: 28,157
Non-trainable params: 0
____________________________________________________________________________________________________
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
noise_input (InputLayer) (None, 1, 100) 0
____________________________________________________________________________________________________
dense_9 (Dense) (None, 1, 100) 10100 noise_input[0][0]
____________________________________________________________________________________________________
dense_10 (Dense) (None, 1, 100) 10100 dense_9[0][0]
____________________________________________________________________________________________________
dense_11 (Dense) (None, 1, 380) 38380 dense_10[0][0]
____________________________________________________________________________________________________
reshape_3 (Reshape) (None, 19, 20) 0 dense_11[0][0]
____________________________________________________________________________________________________
lambda_2 (Lambda) (None, 19, 20) 0 reshape_3[0][0]
====================================================================================================
Total params: 58,580
Trainable params: 58,580
Non-trainable params: 0
____________________________________________________________________________________________________
====================
Epoch #0
After 20 iterations
###Markdown
Inverse Transform Sampling

Peter Wills, 6/8/2018

We'll use [inverse transform sampling](https://en.wikipedia.org/wiki/Inverse_transform_sampling) to sample from an arbitrary probability density; we won't require that this density is normalized. First, make sure we can numerically integrate, so that we can build a CDF from the provided PDF (as well as normalize the PDF):
###Code
import numpy as np
from matplotlib import pyplot as plt
def pdf(x):
"""A unit normal density, NOT normalized"""
return np.exp(-x**2/2)
x = np.linspace(-5,5,100)
plt.plot(x,pdf(x));
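# A minimal sketch (added) of the inverse-transform idea described above, independent of itsample:
# normalize the PDF by quadrature, build the CDF, then solve CDF(x) = u for uniform u with a root finder.
from scipy.integrate import quad
from scipy.optimize import brentq
def sample_sketch(pdf, n, lower=-10, upper=10):
    norm = quad(pdf, lower, upper)[0]
    cdf = lambda t: quad(pdf, lower, t)[0] / norm
    return np.array([brentq(lambda t, u=u: cdf(t) - u, lower, upper)
                     for u in np.random.rand(n)])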
###Output
_____no_output_____
###Markdown
So we've got ourselves a PDF, albeit without a normalization factor. Now let's use `sample` to draw samples from this distribution.
###Code
import sys; sys.path.append('/Users/peterwills/google-drive/python/my_packages/itsample/')
from itsample import sample
%%timeit
samples = sample(pdf,1)
%%time
samples = sample(pdf,5000)
from itsample import normalize
pdf_norm = normalize(pdf, vectorize=True)
plt.hist(samples,bins=100,density=True);
x = np.linspace(-3,3)
plt.plot(x,pdf_norm(x))
###Output
_____no_output_____
###Markdown
Let's compare this to the built-in numpy sampler:
###Code
%%time
samples = plt.np.random.normal(size=[5000])
plt.hist(samples,bins=100,density=True);
plt.plot(x,pdf_norm(x))
###Output
_____no_output_____
###Markdown
Our sampler is much slower than numpy's built-in one, but gives comparable results. Suppose we wanted a normal against a constant background:
###Code
def pdf(x):
"""A unit normal density, NOT normalized"""
return 1 + np.exp(-x**2/2)
lower_bd = -3
upper_bd = 5
guess = 1
samples = sample(pdf,5000,lower_bd=lower_bd,upper_bd=upper_bd, guess=guess)
pdf_norm = normalize(pdf,lower_bd=lower_bd,upper_bd=upper_bd,vectorize=True)
x = np.linspace(lower_bd,upper_bd,100)
plt.hist(samples,bins=100,density=True);
plt.plot(x,pdf_norm(x))
###Output
_____no_output_____
###Markdown
An exception will be raised if the PDF cannot be normalized:
###Code
def pdf(x):
"""A unit normal density, NOT normalized"""
return 1 + np.exp(-x**2/2)
sample(pdf,1)
###Output
_____no_output_____
###Markdown
Chebyshev Approximation of CDF

I'm working on coding up an inverse transform sampler that uses Chebyshev approximations of the CDF to speed things up. This follows the work of [Olver & Townsend, 2013](https://arxiv.org/pdf/1307.1223.pdf).

We'll see that this approach is not as fast as we would hope. The key here is that the function `chebeval` is highly vectorized, and so is much faster than a numerically integrated CDF **when evaluated at many inputs simultaneously.** However, the root-finding functions in scipy do one evaluation at each iteration, so they do not take advantage of this vectorized structure. When doing single evaluations of the functions, `chebeval` is about the same speed as `quad`.

Let's compare the speed of the quadrature approach above to the approach of Olver & Townsend.
###Code
%%timeit
samples = sample(pdf,1,lower_bd=-10, upper_bd=10)
%%timeit
samples = sample(pdf,1,lower_bd=-10, upper_bd=10,chebyshev=True)
###Output
1.85 ms ± 84.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
If we do a single sample, then speeds are approximately equal for the two methods. What if we do 5,000 samples?
###Code
%%time
samples = sample(pdf,5000,lower_bd=-10, upper_bd=10)
%%time
samples = sample(pdf,5000,lower_bd=-10, upper_bd=10,chebyshev=True)
###Output
CPU times: user 3.94 s, sys: 28.4 ms, total: 3.97 s
Wall time: 3.97 s
###Markdown
The Chebyshev approach is faster, but not by the orders of magnitude we are hoping for.

Let's take a look at how fast calls to each CDF are:
###Code
from itsample import chebcdf, get_cdf
cdf = get_cdf(pdf,lower_bd,upper_bd)
cdf_cheb = chebcdf(pdf,lower_bd,upper_bd)
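# Added illustration of the vectorization point discussed above, using plain numpy
# (independent of itsample): a fitted Chebyshev series is evaluated at many points in one call.
from numpy.polynomial.chebyshev import chebfit, chebval
xs = np.linspace(lower_bd, upper_bd, 1000)
coeffs = chebfit(xs, pdf(xs), 30)   # degree-30 fit of the (unnormalized) PDF
ys = chebval(xs, coeffs)            # a single vectorized call evaluates all 1000 points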
###Output
/Users/peterwills/google-drive/python/my_packages/itsample/itsample.py:130: RankWarning: The fit may be poorly conditioned
coeffs = chebfit(x,y,n-1)
###Markdown
Calls with a single value take about the same amount of time:
###Code
%%timeit
cdf([0])
%%timeit
cdf_cheb([0])
###Output
85.5 µs ± 8.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
But vectorized calls are MUCH faster for the Chebyshev CDF.
###Code
x = np.linspace(lower_bd, upper_bd, 100)
%%timeit
cdf(x)
%%timeit
cdf_cheb(x)
###Output
107 µs ± 13.9 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
Series calculations
###Code
geom = ase.io.read("geom/clars_goblet.xyz")
t = 2.7
singlet_mfh_model = tbmfh.MeanFieldHubbardModel(geom, [t], charge=0, multiplicity=1)
triplet_mfh_model = tbmfh.MeanFieldHubbardModel(geom, [t], charge=0, multiplicity=3)
# (for the clar's goblet, the correct triplet state is found also with singlet initial guess)
# Reproducing Extended Data Fig. 1 from
# Mishra 2020 "Topological frustration induces unconventional magnetism in a nanographene"
u_t_ratios = np.arange(0.5, 1.6, 0.1)
singlet_energies = []
triplet_energies = []
for ut_ratio in u_t_ratios:
u = ut_ratio * t
singlet_mfh_model.run_mfh(u)
singlet_energies.append(singlet_mfh_model.energy)
triplet_mfh_model.run_mfh(u)
triplet_energies.append(triplet_mfh_model.energy)
singlet_energies = np.array(singlet_energies)
triplet_energies = np.array(triplet_energies)
st_gap = triplet_energies - singlet_energies
plt.plot(u_t_ratios, st_gap*1000, 'o-')
plt.ylabel("triplet - singlet [meV]")
plt.xlabel("U/t")
plt.show()
###Output
_____no_output_____
###Markdown
Natural orbitals
###Code
geom = ase.io.read("geom/clars_goblet.xyz")
# "open shell" case, normal MFH
mfh_model = tbmfh.MeanFieldHubbardModel(geom, [2.7, 0.1, 0.4], charge=0, multiplicity=1)
mfh_model.run_mfh(u = 3.0, print_iter=False, plot=False)
mfh_model.calculate_natural_orbitals()
# "closed shell" case (just tight-binding)
tb_model = tbmfh.MeanFieldHubbardModel(geom, [2.7, 0.1, 0.4], charge=0, multiplicity=1)
tb_model.run_mfh(u = 0.0, print_iter=False, plot=False)
num_orb = 2
h = 8.0
for i_rel in np.arange(num_orb, -num_orb, -1):
i_mo = int(np.around(0.5 * (mfh_model.num_spin_el[0] + mfh_model.num_spin_el[1]))) + i_rel - 1
fig, axs = plt.subplots(nrows=1, ncols=7, figsize=(7 * mfh_model.figure_size[0], mfh_model.figure_size[1]))
mfh_model.plot_no_eigenvector(i_mo, ax=axs[0])
mfh_model.plot_orb_squared_map(axs[1], mfh_model.no_evecs[i_mo], h=h)
mfh_model.plot_mo_eigenvector(i_mo, spin=0, ax=axs[2])
mfh_model.plot_mo_eigenvector(i_mo, spin=1, ax=axs[3])
mfh_model.plot_sts_map(axs[4], mfh_model.evals[0, i_mo], h=h)
tb_model.plot_mo_eigenvector(i_mo, spin=0, ax=axs[5])
tb_model.plot_sts_map(axs[6], tb_model.evals[0, i_mo], h=h)
plt.subplots_adjust(wspace=0.0, hspace=0)
plt.show()
###Output
_____no_output_____
###Markdown
Getting Started with notebook_xterm

Adam Johnson, IBM

Install the package using pip:
###Code
!pip install notebook_xterm
###Output
_____no_output_____
###Markdown
Load the IPython extension. You'll need to reload the extension each time the notebook kernel starts. Alternatively, you can add notebook_xterm to the [configuration file](http://ipython.readthedocs.io/en/stable/config/extensions/index.html#using-extensions) to load it automatically.
###Code
%load_ext notebook_xterm
###Output
_____no_output_____
###Markdown
To display a terminal, type the [magic function](http://ipython.readthedocs.io/en/stable/interactive/magics.html) `%xterm` in a blank cell:
###Code
%xterm
###Output
_____no_output_____
###Markdown
Quick Start for Robot Framework on Jupyter

Congratulations for trying out Robot Framework on the interactive Jupyter platform! If you did not open this in a Jupyter application, please [open this notebook at the Binder cloud environment](https://mybinder.org/v2/gh/robots-from-jupyter/robotkernel/master?urlpath=lab) for the interactive Jupyter experience. You may complete each chapter of this guided tutorial simply by pressing `SHIFT + ENTER` again and again to advance one cell execution at a time until the end of the notebook.

Robot notebook structure

Robot Framework notebooks may contain any number of markdown cells and code cells. Each code cell must start with a valid [Robot Framework test data section header](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#test-data-sections).
###Code
*** Settings ***
Library String
###Output
_____no_output_____
###Markdown
That said, it is ok for a cell to contain multiple headers, and the same header may occur more than once in the same notebook.
###Code
*** Variables ***
${MESSAGE} Hello World
*** Test Cases ***
Message is Hello World
Should be equal ${MESSAGE} Hello World
###Output
_____no_output_____
###Markdown
After executing a cell containing either `*** Test Cases ***` or `*** Tasks ***`, a complete Robot Framework test or task suite is built and executed, and its log and report are attached to the notebook as links **Log | Report**. Clicking **Log** or **Report** will open the attachment in a new browser tab or window, where it can be browsed further or downloaded. An executed notebook can then be saved and shared as a single standalone `.ipynb` file with the embedded execution logs and reports. Read-only viewing of `.ipynb` files is widely supported. Prototyping keywords To ease prototyping custom keywords, executing a cell with one or more keywords will result in argument fields and an execution button being rendered below the cell. Pressing the button will create a complete Robot Framework task suite for executing the keyword, execute it, and attach the logs. Executing the same cell twice will hide the button, which is preferred when saving the notebook for sharing.
###Code
*** Keywords ***
Return the given argument string
[Arguments] ${message}=Hello World!
[Return] ${message}
###Output
_____no_output_____
###Markdown
If the cell with the keyword is not executed after a change in its robot code, the button will continue to execute the old version of the keyword. If the keyword returns a value, the value will be displayed between the cell and the **Log | Report** links. Prototyping libraries To ease prototyping Python keywords, a code cell can start with the `%%python module ModuleName` magic words to describe a new keyword library as a Python module.
###Code
%%python module GraphLibrary
import base64
import io
import matplotlib.pyplot as plt
import robot
import urllib
class GraphLibrary:
def log_as_graph(self, *args):
"""Log list of values as a graph"""
buffer = io.BytesIO()
# Plot
plt.plot(list(map(float, *args)))
plt.savefig(buffer, format='png')
plt.clf()
# Log
uri = 'data:image/png;base64,' + \
urllib.parse.quote(base64.b64encode(buffer.getvalue()))
html = '<img src="' + uri +'"/>'
robot.api.logger.info(html, html=True)
###Output
_____no_output_____
###Markdown
Once the cell with the Python module has been executed, the module is injected into the environment and becomes available to be imported as a Robot Framework keyword library, and its keywords can be used in tests or tasks as usual.
###Code
*** Settings ***
Library GraphLibrary
*** Test Cases ***
Show a graph
${series}= Create list
... 5 5 5 5 5 5 4 10 2 5 5 5 5 5 5
Log as graph ${series}
###Output
_____no_output_____
###Markdown
The simple way
###Code
from curvy import builder, plot
import datetime
start_date = datetime.datetime.now()
forward_prices = [3, 4, 6, 5, 7, 8, 6, 4, 5, 6]
x, y, dr, pr, y_smfc = builder.build_smfc_curve(forward_prices, start_date)
fig, ax = plot.mpl_create_curve_plot(x)
plot.mpl_plot_curves(x, y, fig, ax, (x, y_smfc, 'green', '-'))
###Output
_____no_output_____
###Markdown
Building our x-axis index variables
###Code
from curvy import axis, plot, builder
import datetime
# Define the starting date we want to construct the forward curve from
start_date = datetime.datetime.now()
forward_prices = [3, 4, 6, 5, 7, 8, 6, 4, 5, 6]
# First we need the dates representing our x-axis
dr = axis.date_ranges(start_date, 8)
x = axis.flatten_ranges(dr)
# We get the unsmooth forward price for each step
pr = axis.price_ranges(dr, forward_prices)
y = axis.flatten_ranges(pr)
###Output
_____no_output_____
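###Markdown
As a quick sanity check, a minimal sketch (assuming `x` and `y` behave like ordinary sequences, as their use in the plots suggests) confirms that the flattened axes line up:
###Code
# the flattened date axis and the flattened price axis should have the same length
print(len(x), len(y))
# peek at the first few dates on the x-axis
print(x[:3])
###Output
_____no_output_____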
###Markdown
Building the curve parameters
###Code
taus = axis.start_end_absolute_index(dr, overlap=1)
knots = axis.knot_index(taus)
H = builder.calc_big_H(taus)
A = builder.calc_big_A(knots, taus)
B = builder.calc_B(forward_prices, taus)
X = builder.solve_lineq(H, A, B)
y_smfc = builder.curve_values(dr, X, builder.smfc, flatten=True)
fig, ax = plot.mpl_create_curve_plot(x)
plot.mpl_plot_curves(x, y, fig, ax, (x, y_smfc, 'green', '-'))
###Output
_____no_output_____
###Markdown
Showing only the segments
###Code
y_smfc = builder.curve_values(dr, X, builder.smfc)
fig, ax = plot.mpl_create_curve_plot(x)
plot.mpl_plot_curve_sections(x, y, fig, ax, (dr, y_smfc), (dr, pr), hide_price=True)
###Output
_____no_output_____
###Markdown
Or customize your own plots
###Code
from scipy.interpolate import interp1d
import numpy as np
start_date = datetime.datetime.now()
forward_prices = [3, 4, 6, 5, 7, 8, 6, 4, 5, 6]
fig, ax = plot.mpl_create_curve_plot(x)
x, y, dr, pr, y_smfc = builder.build_smfc_curve(forward_prices, start_date)
pr_mv = axis.midpoint_values(pr, include_last=True)
dr_mai = axis.midpoint_absolute_index(dr, include_last=True)
f_simple = interp1d(dr_mai, pr_mv)
f_cubic = interp1d(dr_mai, pr_mv, kind='cubic')
# We need to convert the indices from dates to numbers
x_i = np.arange(0, len(x))
plot.mpl_plot_curves(
x, y, fig, ax,
(x, y_smfc, 'red', ':'),
(x, f_simple(x_i), 'orange', '-.'),
(x, f_cubic(x_i), 'green', '--'),
)
###Output
_____no_output_____
###Markdown
Usage example for lmdiag (source: https://github.com/dynobo/lmdiag). Imports & Generate Linear Regression Model for Demo
###Code
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from linearmodels.iv import IV2SLS
import lmdiag
%matplotlib inline
np.random.seed(20)
predictor = np.random.normal(size=30, loc=20, scale=3)
response = 5 + 5 * predictor + np.random.normal(size=30)
X = sm.add_constant(predictor)
###Output
_____no_output_____
###Markdown
Print all the Plots as Matrix (You might want to set size beforehand, otherwise it's really tiny)
###Code
statsmodels_lm = sm.OLS(response, X).fit()
plt.style.use('seaborn')
plt.figure(figsize=(10,7))
lmdiag.plot(statsmodels_lm);
###Output
_____no_output_____
###Markdown
Same with `linearmodels`
###Code
linearmodels_lm = IV2SLS(response,X, None, None).fit(cov_type='unadjusted')
plt.figure(figsize=(10,7))
lmdiag.plot(linearmodels_lm);
###Output
_____no_output_____
###Markdown
Plot the charts individually
###Code
lmdiag.resid_fit(statsmodels_lm);
lmdiag.q_q(statsmodels_lm);
lmdiag.scale_loc(statsmodels_lm);
lmdiag.resid_lev(statsmodels_lm);
###Output
_____no_output_____
###Markdown
Print useful descriptions for interpretation. **For all available charts:**
###Code
lmdiag.info()
###Output
Name: Residuals vs. Fitted
Method: lmdiag.resid_fit(lm)
x-Axis: Fitted Values (The dependent variable of your model; What you
threw in statsmodels OLS as 1st parameter)
y-Axis: Residuals (The "error" of the model; Distance to the fitted
regression line)
Description: It's purpose is to identify non-linear patterns in the residuals.
If you see a horizontal red line and the points spread around it
without a recognizable pattern, chances are good, that there is
no non-linear relationship in the data. If you can see clear
pattern or a curve, a linear model might not be the best
choice.The red labels show the indices of three observations with
the highest absolute residuals.
Name: Normal Q-Q
Method: lmdiag.q_q(lm)
x-Axis: Theoretical Quantiles (Quantiles from the Normal Distribution)
y-Axis: Standardized residuals (Quantiles of the values of hte dependent
variable in sorted order)
Description: It's purpose is to check, if the residuals are following a normal
distribution. It's good, if the points are aligned on the dashed
line. If only a few points are off, take a look at the other
plots. If lot's of points do not follow the line, your
distribution might be off normal, e.g. regarding skew, tails or
modality.
Name: Scale-Location
Method: lm.scale_loc(lm)
x-Axis: Fitted Values (The dependent variable of your model; What you
threw in statsmodels OLS as 1st parameter)
y-Axis: Squareroot of the absolute value of the Standardized Residuals.
Description: It's purpose is to check "homoscedasticity" the assumption of
equal variance. The plot shows, if the residuals are spread
equally accross the range of predictors (Fitted values). The red
line should be horizonzal and the scatter points equally spread
in a random matter. The red labels are the indices of the
observations with the highest absolute residuals.
Name: Residuals vs. Leverage
Method: lmdiag.resid_lev(lm)
x-Axis: Leverage (The "influence" of an observation. A measure of how far
away the dependend variables value of an observation is from
those of other observations.)
y-Axis: Residuals (The "error" of the model; Distance to the fitted
regression line)
dashed-Lines: Cook' Distance, 0.5 (inner) and 1 (outer).
Description: It's purpose is to identify observations with high influence on
calculating the regression. Those oberservation might but not
have to be outliers, they are just extreme cases concerning the
regression. The pattern of the scatter points is not relevant
here: interesting are observations in the top right and bottom
right of the plot. If we have cases outside the Cook's Distance
(dashed lines), removing those would have an high impact on our
regression line. The red labels are the indices of the most
influencal observations.
###Markdown
**Or for individual chart:**
###Code
lmdiag.info('resid_fit')
# Some with other charts:
# lmdiag.info('q_q')
# lmdiag.info('scale_loc')
# lmdiag.info('resid_lev')
###Output
Name: Residuals vs. Fitted
Method: lmdiag.resid_fit(lm)
x-Axis: Fitted Values (The dependent variable of your model; What you
threw in statsmodels OLS as 1st parameter)
y-Axis: Residuals (The "error" of the model; Distance to the fitted
regression line)
Description: It's purpose is to identify non-linear patterns in the residuals.
If you see a horizontal red line and the points spread around it
without a recognizable pattern, chances are good, that there is
no non-linear relationship in the data. If you can see clear
pattern or a curve, a linear model might not be the best
choice.The red labels show the indices of three observations with
the highest absolute residuals.
###Markdown
Cell Indents can help organize your work. 1. Imports
###Code
### import pandas as pd ###
### import numpy as np ###
###Output
_____no_output_____
###Markdown
2. Format Data a. Get rid of empty values
###Code
### some code ###
###Output
_____no_output_____
###Markdown
b. Append the extra time series to the dataset.
###Code
### some more code ###
###Output
_____no_output_____
###Markdown
3. EDA a. Plots
###Code
### code cells ###
###Output
_____no_output_____
###Markdown
b. Sample Statistics
###Code
### code cells ###
###Output
_____no_output_____
###Markdown
4. Hypothesis Testing
###Code
### code cells ###
###Output
_____no_output_____
###Markdown
Additional tests
###Code
### code cells ###
###Output
_____no_output_____
###Markdown
Treex **Main features**:
* Modules contain their parameters
* Easy transfer learning
* Simple initialization
* No metaclass magic
* No apply method
* No need for special versions of `vmap`, `jit`, and friends.
To prove the previous points we will start by creating a very contrived but complete module which will use everything from parameters, states, and random state:
###Code
from typing import Tuple
import jax  # jax.random is used inside the module's initializers and __call__
import jax.numpy as jnp
import numpy as np
import treex as tx
class NoisyStatefulLinear(tx.Module):
# tree parts are defined by treex annotations
w: tx.Parameter
b: tx.Parameter
count: tx.State
rng: tx.Rng
# other annotations are possible but ignored by type
name: str
def __init__(self, din, dout, name="noisy_stateful_linear"):
self.name = name
# Initializers only expect RNG key
self.w = tx.Initializer(lambda k: jax.random.uniform(k, shape=(din, dout)))
self.b = tx.Initializer(lambda k: jax.random.uniform(k, shape=(dout,)))
# random state is JUST state, we can keep it locally
self.rng = tx.Initializer(lambda k: k)
    # if the value is known there is no need for an Initializer
self.count = jnp.array(1)
def __call__(self, x: np.ndarray) -> np.ndarray:
assert isinstance(self.count, jnp.ndarray)
assert isinstance(self.rng, jnp.ndarray)
# state can easily be updated
self.count = self.count + 1
# random state is no different :)
key, self.rng = jax.random.split(self.rng, 2)
# your typical linear operation
y = jnp.dot(x, self.w) + self.b
# add noise for fun
state_noise = 1.0 / self.count
random_noise = 0.8 * jax.random.normal(key, shape=y.shape)
return y + state_noise + random_noise
def __repr__(self) -> str:
return f"NoisyStatefulLinear(w={self.w}, b={self.b}, count={self.count}, rng={self.rng})"
linear = NoisyStatefulLinear(1, 1)
linear
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
Initialization As advertised, initialization is easy: the only thing you need to do is call `init` on your module with a random key:
###Code
import jax
linear = linear.init(key=jax.random.PRNGKey(42))
linear
###Output
_____no_output_____
###Markdown
Modules are Pytrees It's fundamentally important that modules are also pytrees; we can check that they are by using `tree_map` with an arbitrary function:
###Code
# its a pytree alright
doubled = jax.tree_map(lambda x: 2 * x, linear)
doubled
###Output
_____no_output_____
###Markdown
Modules can be sliced An important feature of this Module system is that it can be sliced based on the type of its parameters; the `slice` method does exactly that:
###Code
params = linear.slice(tx.Parameter)
states = linear.slice(tx.State)
print(f"{params=}")
print(f"{states=}")
###Output
params=NoisyStatefulLinear(w=[[0.91457367]], b=[0.42094743], count=Nothing, rng=Nothing)
states=NoisyStatefulLinear(w=Nothing, b=Nothing, count=1, rng=[1371681402 3011037117])
###Markdown
Notice the following:
* Both `params` and `states` are `NoisyStatefulLinear` objects, their type doesn't change after being sliced.
* The fields that are filtered out by `slice` get a special value of type `tx.Nothing`.
Why is this important? As we will see later, it is useful to keep parameters and state separate as they will crucially flow through different parts of `value_and_grad`. Modules can be merged This is just the inverse operation to `slice`: `merge` behaves like dict's `update` but returns a new module, leaving the original modules intact:
###Code
linear = params.merge(states)
linear
###Output
_____no_output_____
###Markdown
Modules compose As you'd expect, you can have modules inside other modules; as before, the key is to annotate the class fields. Here we will create an `MLP` class that uses two `NoisyStatefulLinear` modules:
###Code
class MLP(tx.Module):
linear1: NoisyStatefulLinear
linear2: NoisyStatefulLinear
def __init__(self, din, dmid, dout):
self.linear1 = NoisyStatefulLinear(din, dmid, name="linear1")
self.linear2 = NoisyStatefulLinear(dmid, dout, name="linear2")
def __call__(self, x: np.ndarray) -> np.ndarray:
x = jax.nn.relu(self.linear1(x))
x = self.linear2(x)
return x
def __repr__(self) -> str:
return f"MLP(linear1={self.linear1}, linear2={self.linear2})"
model = MLP(din=1, dmid=2, dout=1).init(key=42)
model
###Output
_____no_output_____
###Markdown
Full Example Using the previous `model` we will show how to train it using the proposed Module system. First let's get some data:
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
def get_data(dataset_size: int) -> Tuple[np.ndarray, np.ndarray]:
x = np.random.normal(size=(dataset_size, 1))
y = 5 * x - 2 + 0.4 * np.random.normal(size=(dataset_size, 1))
return x, y
def get_batch(
data: Tuple[np.ndarray, np.ndarray], batch_size: int
) -> Tuple[np.ndarray, np.ndarray]:
idx = np.random.choice(len(data[0]), batch_size)
return jax.tree_map(lambda x: x[idx], data)
data = get_data(1000)
plt.scatter(data[0], data[1])
plt.show()
###Output
_____no_output_____
###Markdown
Now we will be reusing the previous MLP model, and we will create an optax optimizer that will be used to train the model:
###Code
import optax
optimizer = optax.adam(1e-2)
params = model.slice(tx.Parameter)
states = model.slice(tx.State)
opt_state = optimizer.init(params)
###Output
_____no_output_____
###Markdown
Notice that we are already splitting the model into `params` and `states`, since we need to pass only the `params` to the optimizer. Next we will create the loss function; it will take the model parts and the data parts and return the loss plus the new states:
###Code
from functools import partial
@partial(jax.value_and_grad, has_aux=True)
def loss_fn(params: MLP, states: MLP, x, y):
# merge params and states to get a full model
model: MLP = params.merge(states)
# apply model
pred_y = model(x)
# MSE loss
loss = jnp.mean((y - pred_y) ** 2)
# new states
states = model.slice(tx.State)
return loss, states
###Output
_____no_output_____
###Markdown
Notice that the first thing we are doing is merging the `params` and `states` into the complete model, since we need everything in place to perform the forward pass. Also, we return the updated states from the model; this is needed because the JAX functional API requires us to be explicit about state management. **Note**: inside `loss_fn` (which is wrapped by `value_and_grad`) the module can behave like a regular mutable Python object; however, every time it is treated as a pytree a new reference will be created, as happens in `jit`, `grad`, `vmap`, etc. It's important to take this into account when using functions like `vmap` inside a module, as certain bookkeeping will be needed to manage state correctly. Next we will implement the `update` function; it will look indistinguishable from your standard Haiku update, which also separates weights into `params` and `states`:
###Code
@jax.jit
def update(params: MLP, states: MLP, opt_state, x, y):
(loss, states), grads = loss_fn(params, states, x, y)
updates, opt_state = optimizer.update(grads, opt_state, params)
# use regular optax
params = optax.apply_updates(params, updates)
return params, states, opt_state, loss
###Output
_____no_output_____
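###Markdown
As a quick illustration of the note above, a minimal sketch: passing a module through any pytree transformation rebuilds it, so a new reference is returned rather than the original object.
###Code
# identity tree_map: the leaves are unchanged, but the module object itself is reconstructed
copied = jax.tree_map(lambda x: x, model)
print(copied is model)  # False: a new reference is created
###Output
_____no_output_____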
###Markdown
Finally we create a simple training loop that performs a few thousand updates and merges `params` and `states` back into a single `model` at the end:
###Code
steps = 10_000
for step in range(steps):
x, y = get_batch(data, batch_size=32)
params, states, opt_state, loss = update(params, states, opt_state, x, y)
if step % 1000 == 0:
print(f"[{step}] loss = {loss}")
# get the final model
model = params.merge(states)
###Output
[0] loss = 36.88694763183594
###Markdown
Now let's generate some test data and see how our model performed:
###Code
import matplotlib.pyplot as plt
X_test = np.linspace(data[0].min(), data[0].max(), 100)[:, None]
y_pred = model(X_test)
plt.scatter(data[0], data[1], label="data", color="k")
plt.plot(X_test, y_pred, label="prediction")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Example Usage for GeoCluster Package
###Code
## Basic stuff
%load_ext autoreload
%autoreload
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
display(HTML("""<style>div.output_area{max-height:10000px;overflow:scroll;}</style>"""))
## Python Version
import sys
print("Python: {0}".format(sys.version))
## Install
from timeUtils import clock, elapsed
from ioUtils import saveJoblib
from geocluster import geoClusters
from geoUtils import convertMetersToLat, convertLatToMeters, convertMetersToLong, convertLongToMeters
from geoclusterUtils import genCenters, genCluster, genClusters, genTripsBetweenClusters
import datetime as dt
start = dt.datetime.now()
print("Notebook Last Run Initiated: "+str(start))
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Example GPS Data
###Code
genMax = 75
distMax = 500
raw = genClusters(20, 250, latRange=[29.8, 30.2], lngRange=[49.8, 50.2], dist="gauss", maxrad=genMax)
def plotMeters(ax1, longMeters, latMeters):
ax2 = ax1.twinx()
ax2.plot(longMeters, latMeters, color='b', lw=0)
ax3 = ax1.twiny()
ax3.plot(longMeters, latMeters, color='b', lw=0)
def plotRawData(rawdata, color='cyan'):
import seaborn as sns
from matplotlib import pyplot as plt
fig, ax1 = plt.subplots()
lat = rawdata[:,0]
long = rawdata[:,1]
ax1.scatter(long, lat, s=15, linewidth=0, color='cyan', alpha=1) #c=cluster_member_colors, alpha=1)
return ax1
def clusterData(rawdata, distMax):
%load_ext autoreload
%autoreload
gc = geoClusters(key="test", points=rawdata, distMax=distMax, debug=True)
gc.findClusters(seedMin=2, debug=True)
if True:
print("Found {0} clusters using {1} cells and {2} counts".format(gc.getNClusters(), gc.getNCells(), gc.getNCounts()))
return gc
def plotClusters(ax1, gc, color='red'):
from seaborn import color_palette
from matplotlib.patches import Circle, Wedge, Polygon
from matplotlib.collections import PatchCollection
clusters = gc.getClusters()
coms = gc.getClusterCoMs()
color_palette = color_palette('deep', 2)
patches = []
print("Plotting {0} clusters".format(len(clusters)))
for cl, cluster in clusters.items():
radius = cluster.getRadius()
com = cluster.getCoM()
quant = cluster.getQuantiles()
radius = quant[-1]
ax1.scatter(com[1], com[0], s=10, marker='x', linewidth=2, c='black', alpha=1)
latDist = convertMetersToLat(radius)
circle = Circle(xy=(com[1], com[0]), radius=latDist)
patches.append(circle)
p = PatchCollection(patches, facecolor='red', alpha=0.25)
from numpy import array, linspace
#p.set_array(linspace(0,1,len(pcols)))
ax1.add_collection(p)
#latOff = lat - min(lat)
#latMeters = convertLatToMeters(latOff)
#lngOff = long - min(long)
#lngMeters = convertLongToMeters(lngOff, lat)
#plotMeters(ax1, latMeters, lngMeters)
gc = clusterData(raw, distMax=distMax)
ax1 = plotRawData(raw)
ax1 = plotClusters(ax1, gc)
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
Current Time is Thu Nov 15, 2018 21:14:27 for Finding Geohash (BitLen=8) Values from 5000 Points
Current Time is Thu Nov 15, 2018 21:14:27 for Done with Finding Geohash (BitLen=8) Values from 5000 Points
Process [Done with Finding Geohash (BitLen=8) Values from 5000 Points] took 0 seconds.
Current Time is Thu Nov 15, 2018 21:14:27 for Finding Geohash (BitLen=8) Frequency Values from Geohash DataFrame
Current Time is Thu Nov 15, 2018 21:14:27 for Done with Finding Geohash (BitLen=8) Frequency Values from Geohash DataFrame
Process [Done with Finding Geohash (BitLen=8) Frequency Values from Geohash DataFrame] took 0 seconds.
Current Time is Thu Nov 15, 2018 21:14:27 for Finding Clusters with at least 2 counts
--> Creating cluster cl0 with seed tj76jb3e and 14 counts
--> Creating cluster cl1 with seed tj7dfh73 and 10 counts
--> Creating cluster cl2 with seed tj77m2sf and 10 counts
--> Creating cluster cl3 with seed tj79fd1j and 9 counts
--> Creating cluster cl4 with seed tj7emcbj and 9 counts
--> Creating cluster cl5 with seed tj77v85z and 9 counts
--> Creating cluster cl6 with seed tj7df4ms and 8 counts
--> Creating cluster cl7 with seed tj7ehc1p and 8 counts
--> Creating cluster cl8 with seed tj7dfxz0 and 8 counts
--> Creating cluster cl9 with seed tj76w9qq and 8 counts
--> Creating cluster cl10 with seed tj76rd14 and 8 counts
--> Creating cluster cl11 with seed tj7e2psd and 8 counts
--> Creating cluster cl12 with seed tj7ed7vt and 7 counts
--> Creating cluster cl13 with seed tj79dp0j and 7 counts
--> Creating cluster cl14 with seed tj7etwgg and 7 counts
--> Creating cluster cl15 with seed tj7e821m and 7 counts
--> Creating cluster cl16 with seed tj7d27dm and 7 counts
--> Creating cluster cl17 with seed tj79tj44 and 7 counts
--> Creating cluster cl18 with seed tj7d0frx and 6 counts
--> Creating cluster cl19 with seed tj77m3kh and 5 counts
Current Time is Thu Nov 15, 2018 21:14:28 for Done with Finding Clusters with at least 2 counts
Process [Done with Finding Clusters with at least 2 counts] took 0 seconds.
Found 20 clusters using 2333 cells and 5000 counts
Plotting 20 clusters
###Markdown
Generate Random Data From Clusters
###Code
%load_ext autoreload
%autoreload
from geoclusterUtils import genCenters, genCluster, genClusters, genTripsBetweenClusters
data = genTripsBetweenClusters(1000, gc, returnLoc=True, returnDF=True)
saveJoblib(data, "../network/trips.p")
x = genTripsBetweenClusters(100, gc, returnDF=True)
###Output
Selected 100 randomized trips
Found Start/End for the 100 randomized trips
Converting (100, 2, 2) trips to a DataFrame
###Markdown
load packages and modules
###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2
import pandas as pd
import MangroveConservation.get_twitter_data1 as getTwitterdata
import MangroveConservation.clean_text1 as clean
import MangroveConservation.sentiment_analysis1 as sentiment
import MangroveConservation.visualization as visual
%matplotlib inline
help(getTwitterdata.get_data)
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
collect twitter data
###Code
###collect twitter data and save them into CSV
api_key = 'i2uWM8Fvt36ipy3pEXk5Cy7ue'
secret_key = 'FKZBP7QjykINzuAJPVaEsO5l106xd939lmNmXoWQhl0Arqhpzz'
#DEV_ENVIRONMENT_LABEL = 'mangroveConservation'
#API_SCOPE = 'fullarchive' # 'fullarchive' for full archive, '30day' for last 31 days
search_query = '-RT mangrove forest'
to_date = '2019-06-19' # format YYYY-MM-DD HH:MM (hour and minutes optional)
from_date = '2019-12-31' # format YYYY-MM-DD HH:MM (hour and minutes optional)
filename = 'twitter_premium_api_demo1.jsonl' # Where the Tweets should be saved
csvfile = 'mangrove1.csv'
#getTwitterdata.get_data(search_query,api_key,secret_key,to_date,frome_date,filename)
FILENAME = '/home/gongmimi/CMSE802/MangroveConservation/twitter_premium_api_demo1.jsonl'
csvfile = '/home/gongmimi/CMSE802/MangroveConservation/mangrove1.csv'
tweets = getTwitterdata.load_jsonl(FILENAME)
#getTwitterdata.create_csv(tweets,csvfile)
tweets = clean.import_tweet("/home/gongmimi/CMSE802/MangroveConservation/mangrove1.csv")
#tweets = clean.ImportTweet("/Users/DELL/Dropbox/MangroveConservation/mangrove1.csv")
###Output
_____no_output_____
###Markdown
Exploratory analysis: collect the most frequent words/phrases and plot word cloud maps
###Code
tweets
sentiment.PlotTopWords(tweets["user_description"],22,1,5)
sentiment.PlotTopWords(tweets["tweet"],20,2,4)
sentiment.PlotWordCloud(tweets['user_description'])
sentiment.PlotWordCloud(tweets['tweet'])
###Output
_____no_output_____
###Markdown
sentiment analysis
###Code
tweets["sentiment"]=sentiment.Sentiment(tweets["tweet"])
tweets.head()
tweets.to_csv("mangrove1_cleaned.csv", index=False, header=True)
sentiment.PlotSentiment(tweets["sentiment"])
negative = tweets[tweets['sentiment']=='Negative']['tweet']
sentiment.PlotTopWords(negative,22,3,5)
positive = tweets[tweets['sentiment']=='Positive']['tweet']
sentiment.PlotTopWords(positive,22,2,4)
neutral = tweets[tweets['sentiment']=='Neutral']['tweet']
sentiment.PlotTopWords(neutral,22,3,5)
neg_user = tweets[tweets['sentiment']=='Negative']['user_description']
sentiment.PlotTopWords(neg_user,22,1,5)
pos_user = tweets[tweets['sentiment']=='Positive']['user_description']
sentiment.PlotTopWords(pos_user,22,1,5)
neu_user = tweets[tweets['sentiment']=='Neutral']['user_description']
sentiment.PlotTopWords(neu_user,22,1,5)
###Output
_____no_output_____
###Markdown
Data visualization
###Code
# clip tweet locations to country polygons, compute centroids, and visualize them on the world map
overlay,world = clip_polygon(tweets)
overlay,points = centroid_polygon(overlay)
visualization_polygons(overlay,world)
###Output
C:\Users\DELL\Anaconda3\lib\site-packages\ipykernel_launcher.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
###Markdown
We start by configuring the output from `cabinetry`. It uses the `logging` module to send messages at different verbosity levels. This customization is optional, and you can also use the `logging` module directly for further customization. The `set_logging` function just sets up a verbose default.
###Code
cabinetry.set_logging()
###Output
_____no_output_____
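###Markdown
If you prefer to configure verbosity yourself, a minimal sketch using only the standard `logging` module could look like the following; the logger name `"cabinetry"` matches the `cabinetry.*` loggers visible in the log output of later cells.
###Code
import logging
# keep third-party output at WARNING, but show INFO messages from cabinetry itself
logging.basicConfig(level=logging.WARNING)
logging.getLogger("cabinetry").setLevel(logging.INFO)
###Output
_____no_output_____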
###Markdown
The configuration file The configuration file is the central place to configure `cabinetry`. Let's have a look at the example configuration file used in this notebook.
###Code
cabinetry_config = cabinetry.configuration.load("config_ntuples.yml")
cabinetry.configuration.print_overview(cabinetry_config)
###Output
INFO - cabinetry.configuration - opening config file config_ntuples.yml
INFO - cabinetry.configuration - the config contains:
INFO - cabinetry.configuration - 3 Sample(s)
INFO - cabinetry.configuration - 1 Regions(s)
INFO - cabinetry.configuration - 1 NormFactor(s)
INFO - cabinetry.configuration - 3 Systematic(s)
###Markdown
The configuration file is split into four different blocks of settings. There are general settings:
###Code
cabinetry_config["General"]
###Output
_____no_output_____
###Markdown
The list of phase space regions (channels), in this case we are considering just a single one:
###Code
cabinetry_config["Regions"]
###Output
_____no_output_____
###Markdown
A list of samples, including data:
###Code
cabinetry_config["Samples"]
###Output
_____no_output_____
###Markdown
A list of normalization factors:
###Code
cabinetry_config["NormFactors"]
###Output
_____no_output_____
###Markdown
And finally a list of systematic uncertainties. In this case there are three systematic uncertainties:
###Code
cabinetry_config["Systematics"]
###Output
_____no_output_____
###Markdown
Regions, samples, normalization factors and systematics can all be identified by their names. Creating template histograms from ntuples We use the `templates` module to create all histograms needed to build the workspace defined in the configuration file.
###Code
cabinetry.templates.build(cabinetry_config, method="uproot")
###Output
DEBUG - cabinetry.route - in region Signal_region
DEBUG - cabinetry.route - reading sample Data
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Data.npz
DEBUG - cabinetry.route - reading sample Signal
DEBUG - cabinetry.route - variation Nominal
WARNING - cabinetry.histo - Signal_region_Signal has empty bins: [0]
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Signal.npz
DEBUG - cabinetry.route - reading sample Background
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background.npz
DEBUG - cabinetry.route - variation Modeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_Modeling_Up.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Up.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Down
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Down.npz
###Markdown
The histograms are saved to the folder specified under `HistogramFolder` in the `General` settings in the configuration file. In this case, this folder is `histograms/`:
###Code
!ls histograms/
###Output
Signal_region_Background.npz
Signal_region_Background_Modeling_Up.npz
Signal_region_Background_WeightBasedModeling_Down.npz
Signal_region_Background_WeightBasedModeling_Up.npz
Signal_region_Data.npz
Signal_region_Signal.npz
###Markdown
It can be useful to apply additional post-processing after building template histograms. Such processing can for example replace ill-defined statistical uncertainties in empty bins by zero. It is also performed via the `templates` module:
###Code
cabinetry.templates.postprocess(cabinetry_config)
###Output
DEBUG - cabinetry.route - in region Signal_region
DEBUG - cabinetry.route - reading sample Data
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Data_modified.npz
DEBUG - cabinetry.route - reading sample Signal
DEBUG - cabinetry.route - variation Nominal
WARNING - cabinetry.histo - Signal_region_Signal has empty bins: [0]
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Signal_modified.npz
DEBUG - cabinetry.route - reading sample Background
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_modified.npz
DEBUG - cabinetry.route - variation Modeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_Modeling_Up_modified.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Up_modified.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Down
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Down_modified.npz
###Markdown
New histograms have now appeared in the `histograms/` folder. These "modified" histograms include the changes applied by the postprocessor.
###Code
!ls histograms/
###Output
Signal_region_Background.npz
Signal_region_Background_Modeling_Up.npz
Signal_region_Background_Modeling_Up_modified.npz
Signal_region_Background_WeightBasedModeling_Down.npz
Signal_region_Background_WeightBasedModeling_Down_modified.npz
Signal_region_Background_WeightBasedModeling_Up.npz
Signal_region_Background_WeightBasedModeling_Up_modified.npz
Signal_region_Background_modified.npz
Signal_region_Data.npz
Signal_region_Data_modified.npz
Signal_region_Signal.npz
Signal_region_Signal_modified.npz
###Markdown
Optional: reading existing template histograms Besides providing ntuples that first need to be turned into histograms, it is also possible to provide existing histograms to `cabinetry`. The configuration options for this are slightly different, since less information is needed to read an existing histogram. The following loads a `cabinetry` configuration using histogram inputs, collects all provided histograms (storing them in the format used internally by `cabinetry` for further processing) and applies post-processing. The resulting histograms are equivalent to those created when reading the provided ntuples.
###Code
cabinetry_config_histograms = cabinetry.configuration.load("config_histograms.yml")
cabinetry.templates.collect(cabinetry_config_histograms, method="uproot")
cabinetry.templates.postprocess(cabinetry_config)
###Output
INFO - cabinetry.configuration - opening config file config_histograms.yml
DEBUG - cabinetry.route - in region Signal_region
DEBUG - cabinetry.route - reading sample Data
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Data.npz
DEBUG - cabinetry.route - reading sample Signal
DEBUG - cabinetry.route - variation Nominal
WARNING - cabinetry.histo - Signal_region_Signal has empty bins: [0]
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Signal.npz
DEBUG - cabinetry.route - reading sample Background
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background.npz
DEBUG - cabinetry.route - variation Modeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_Modeling_Up.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Up.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Down
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Down.npz
DEBUG - cabinetry.route - in region Signal_region
DEBUG - cabinetry.route - reading sample Data
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Data_modified.npz
DEBUG - cabinetry.route - reading sample Signal
DEBUG - cabinetry.route - variation Nominal
WARNING - cabinetry.histo - Signal_region_Signal has empty bins: [0]
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Signal_modified.npz
DEBUG - cabinetry.route - reading sample Background
DEBUG - cabinetry.route - variation Nominal
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_modified.npz
DEBUG - cabinetry.route - variation Modeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_Modeling_Up_modified.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Up
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Up_modified.npz
DEBUG - cabinetry.route - variation WeightBasedModeling Down
DEBUG - cabinetry.histo - saving histogram to histograms/Signal_region_Background_WeightBasedModeling_Down_modified.npz
###Markdown
Workspace building Next, we build a `pyhf` workspace and serialize it to a file. The `workspace` module takes care of this task.
###Code
workspace_path = "workspaces/example_workspace.json"
ws = cabinetry.workspace.build(cabinetry_config)
cabinetry.workspace.save(ws, workspace_path)
###Output
INFO - cabinetry.workspace - building workspace
DEBUG - cabinetry.workspace - adding NormFactor Signal_norm to sample Signal in region Signal_region
DEBUG - cabinetry.workspace - adding OverallSys Luminosity to sample Signal in region Signal_region
DEBUG - cabinetry.workspace - adding OverallSys Luminosity to sample Background in region Signal_region
DEBUG - cabinetry.workspace - adding OverallSys and HistoSys Modeling to sample Background in region Signal_region
DEBUG - cabinetry.workspace - normalization impact of systematic Modeling on sample Background in region Signal_region is 0.800
DEBUG - cabinetry.workspace - adding OverallSys and HistoSys WeightBasedModeling to sample Background in region Signal_region
INFO - pyhf.workspace - Validating spec against schema: workspace.json
DEBUG - cabinetry.workspace - saving workspace to workspaces/example_workspace.json
###Markdown
Fitting With the workspace built, we can perform a maximum likelihood fit. The fit model (probability density function) and data (including auxiliary data for auxiliary measurements, see the HistFactory documentation https://cds.cern.ch/record/1456844) are obtained from the workspace object. The results for the fitted parameters are reported. The `cabinetry.model_utils.model_and_data` function has an `asimov` keyword argument, which we can set to `True` to instead study the expected performance with an Asimov dataset.
###Code
ws = cabinetry.workspace.load(workspace_path)
model, data = cabinetry.model_utils.model_and_data(ws)
fit_results = cabinetry.fit.fit(model, data)
###Output
INFO - pyhf.workspace - Validating spec against schema: workspace.json
INFO - pyhf.pdf - Validating spec against schema: model.json
INFO - pyhf.pdf - adding modifier staterror_Signal_region (4 new nuisance parameters)
INFO - pyhf.pdf - adding modifier Signal_norm (1 new nuisance parameters)
INFO - pyhf.pdf - adding modifier Luminosity (1 new nuisance parameters)
INFO - pyhf.pdf - adding modifier Modeling (1 new nuisance parameters)
INFO - pyhf.pdf - adding modifier WeightBasedModeling (1 new nuisance parameters)
INFO - cabinetry.fit - performing maximum likelihood fit
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 17.19 │ Nfcn = 327 │
│ EDM = 1.12e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 17.194205 at best-fit point
INFO - cabinetry.fit - fit results (with symmetric uncertainties):
INFO - cabinetry.fit - staterror_Signal_region[0] = 1.0010 +/- 0.0411
INFO - cabinetry.fit - staterror_Signal_region[1] = 0.9891 +/- 0.0379
INFO - cabinetry.fit - staterror_Signal_region[2] = 1.0197 +/- 0.0365
INFO - cabinetry.fit - staterror_Signal_region[3] = 0.9830 +/- 0.0425
INFO - cabinetry.fit - Signal_norm = 1.6895 +/- 0.9388
INFO - cabinetry.fit - Luminosity = -0.0880 +/- 0.9913
INFO - cabinetry.fit - Modeling = -0.3246 +/- 0.5555
INFO - cabinetry.fit - WeightBasedModeling = -0.5858 +/- 0.6272
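###Markdown
As mentioned above, the same function can instead produce an Asimov dataset to study the expected performance; a minimal sketch (the variable names are illustrative):
###Code
# build the model together with an Asimov dataset instead of the observed data
model_asimov, asimov_data = cabinetry.model_utils.model_and_data(ws, asimov=True)
# the Asimov dataset can be fit in exactly the same way as the observed data above
fit_results_asimov = cabinetry.fit.fit(model_asimov, asimov_data)
###Output
_____no_output_____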
###Markdown
We can also visualize the fit results. Below are the pulls:
###Code
cabinetry.visualize.pulls(fit_results, exclude=["Signal_norm"])
###Output
DEBUG - cabinetry.visualize.utils - saving figure as figures/pulls.pdf
###Markdown
We excluded the `"Signal_norm"` parameter, which does not have an associated constraint term in our fit model. The result for it was reported above in the fit output: ```INFO - cabinetry.fit - Signal_norm = 1.6895 +/- 0.9388``` We can also look at the correlation between parameters:
###Code
cabinetry.visualize.correlation_matrix(fit_results)
###Output
DEBUG - cabinetry.visualize.utils - saving figure as figures/correlation_matrix.pdf
###Markdown
These visualizations were also saved as `.pdf` figures in the `figures/` folder. Visualizing templates What did we fit? The `visualize` module also contains functionality to plot data/MC distributions: `visualize.data_mc`. We first need to create a model prediction, which is achieved with `model_utils.prediction`. By default this creates the pre-fit model, but the optional `fit_results` argument allows creating the model corresponding to a given best-fit configuration. The `config` keyword argument of `visualize.data_mc` is optional, but required for correct horizontal axis labels, since the observable and bin edges are not part of the `pyhf` workspace. Since this argument is optional, you can use `cabinetry.visualize.data_mc` with any workspace: it does not matter whether it was created with `cabinetry` or otherwise, since you do not need a configuration file. `visualize.data_mc` returns a list of dictionaries; we can extract a figure from there to further customize it.
###Code
model_pred = cabinetry.model_utils.prediction(model)
figures = cabinetry.visualize.data_mc(model_pred, data, config=cabinetry_config)
###Output
DEBUG - cabinetry.model_utils - total stdev is [[69, 58.3, 38.2, 45.3]]
DEBUG - cabinetry.model_utils - total stdev per channel is [137]
DEBUG - cabinetry.visualize.utils - saving figure as figures/Signal_region_prefit.pdf
###Markdown
This figure is also saved in the `figures/` folder, like all figures in general. To demonstrate figure customization, let's use $\LaTeX$ for the horizontal axis label. We can save the modified figure as well by using `.savefig()`.
###Code
ratio_panel = figures[0]["figure"].get_axes()[1]
ratio_panel.set_xlabel("jet $p_T$")
figures[0]["figure"] # show figure again
###Output
_____no_output_____
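###Markdown
To keep the customized version, the figure object can be written out with `.savefig()`; a short sketch (the file name is illustrative):
###Code
# save the customized pre-fit figure alongside the automatically produced ones
figures[0]["figure"].savefig("figures/Signal_region_prefit_custom.pdf")
###Output
_____no_output_____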
###Markdown
Yield tables can also be created from a model prediction, and compared to data. Optional keyword arguments control whether yields per bin are shown (`per_bin=True`, default) and whether bins summed per region are shown (`per_channel=True`, disabled by default).
###Code
cabinetry.tabulate.yields(model_pred, data)
###Output
INFO - cabinetry.tabulate - yields per bin for pre-fit model prediction:
╒════════════╤═════════════════╤════════════════╤════════════════╤═══════════════╕
│ sample │ Signal_region │ │ │ │
│ │ bin 1 │ bin 2 │ bin 3 │ bin 4 │
╞════════════╪═════════════════╪════════════════╪════════════════╪═══════════════╡
│ Background │ 112.74 │ 128.62 │ 88.11 │ 55.25 │
├────────────┼─────────────────┼────────────────┼────────────────┼───────────────┤
│ Signal │ 0.00 │ 1.59 │ 23.62 │ 24.55 │
├────────────┼─────────────────┼────────────────┼────────────────┼───────────────┤
│ total │ 112.74 ± 69.04 │ 130.21 ± 58.34 │ 111.72 ± 38.22 │ 79.79 ± 45.30 │
├────────────┼─────────────────┼────────────────┼────────────────┼───────────────┤
│ data │ 112.00 │ 112.00 │ 124.00 │ 66.00 │
╘════════════╧═════════════════╧════════════════╧════════════════╧═══════════════╛
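###Markdown
Based on the keyword arguments described above, a per-region summary instead of the per-bin breakdown could be requested like this (a sketch):
###Code
# skip the per-bin yields and show the totals summed per region instead
cabinetry.tabulate.yields(model_pred, data, per_bin=False, per_channel=True)
###Output
_____no_output_____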
###Markdown
We can also take a look at the post-fit model.
###Code
model_pred_postfit = cabinetry.model_utils.prediction(model, fit_results=fit_results)
_ = cabinetry.visualize.data_mc(model_pred_postfit, data, config=cabinetry_config)
###Output
DEBUG - cabinetry.model_utils - total stdev is [[11.9, 7.28, 7.41, 7.69]]
DEBUG - cabinetry.model_utils - total stdev per channel is [20.3]
DEBUG - cabinetry.visualize.utils - saving figure as figures/Signal_region_postfit.pdf
###Markdown
Beyond simple maximum likelihood fitting `cabinetry` provides a range of useful utilities for statistical inference besides simple maximum likelihood fitting. To start, let's look at ranking nuisance parameters by their impact on the parameter of interest.
###Code
ranking_results = cabinetry.fit.ranking(model, data)
###Output
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 17.19 │ Nfcn = 327 │
│ EDM = 1.12e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 17.194205 at best-fit point
INFO - cabinetry.fit - calculating impact of staterror_Signal_region[0] on Signal_norm
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.25 │ Nfcn = 268 │
│ EDM = 7.11e-05 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 18.247196 at best-fit point
DEBUG - cabinetry.fit - POI is 1.578854, difference to nominal is -0.110679
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.26 │ Nfcn = 253 │
│ EDM = 0.000144 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 18.255226 at best-fit point
DEBUG - cabinetry.fit - POI is 1.798746, difference to nominal is 0.109213
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.19 │ Nfcn = 268 │
│ EDM = 7.23e-05 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 18.190610 at best-fit point
DEBUG - cabinetry.fit - POI is 1.581793, difference to nominal is -0.107740
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.2 │ Nfcn = 268 │
│ EDM = 2.58e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 18.197942 at best-fit point
DEBUG - cabinetry.fit - POI is 1.795579, difference to nominal is 0.106045
INFO - cabinetry.fit - calculating impact of staterror_Signal_region[1] on Signal_norm
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.26 │ Nfcn = 286 │
│ EDM = 3.13e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 18.262114 at best-fit point
DEBUG - cabinetry.fit - POI is 1.833033, difference to nominal is 0.143500
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.27 │ Nfcn = 269 │
│ EDM = 3.68e-05 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 18.267357 at best-fit point
DEBUG - cabinetry.fit - POI is 1.536221, difference to nominal is -0.153312
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.19 │ Nfcn = 286 │
│ EDM = 3.01e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
###Markdown
The previous cell ran a lot of maximum likelihood fits to calculate all the input needed to rank nuisance parameters. We will visualize them next.
###Code
cabinetry.visualize.ranking(ranking_results)
###Output
DEBUG - cabinetry.visualize.utils - saving figure as figures/ranking.pdf
###Markdown
The results are contained in the `ranking_results` object. It is a simple named tuple; we can have a look at its content.
###Code
ranking_results
###Output
_____no_output_____
###Markdown
We can also perform likelihood scans for parameters. The example below performs a scan for the `WeightBasedModeling` nuisance parameter.
###Code
scan_results = cabinetry.fit.scan(model, data, "WeightBasedModeling")
###Output
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 17.19 │ Nfcn = 327 │
│ EDM = 1.12e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 17.194205 at best-fit point
INFO - cabinetry.fit - performing likelihood scan for WeightBasedModeling in range (-1.840, 0.669) with 11 steps
DEBUG - cabinetry.fit - performing fit with WeightBasedModeling = -1.840
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 20.98 │ Nfcn = 228 │
│ EDM = 4.73e-05 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 20.981846 at best-fit point
DEBUG - cabinetry.fit - performing fit with WeightBasedModeling = -1.589
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 19.63 │ Nfcn = 226 │
│ EDM = 2.04e-05 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 19.634611 at best-fit point
DEBUG - cabinetry.fit - performing fit with WeightBasedModeling = -1.338
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 18.58 │ Nfcn = 227 │
│ EDM = 1.65e-05 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 18.576541 at best-fit point
DEBUG - cabinetry.fit - performing fit with WeightBasedModeling = -1.088
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 17.81 │ Nfcn = 219 │
│ EDM = 8.63e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 17.812542 at best-fit point
DEBUG - cabinetry.fit - performing fit with WeightBasedModeling = -0.837
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 17.35 │ Nfcn = 217 │
│ EDM = 3.05e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 17.351002 at best-fit point
DEBUG - cabinetry.fit - performing fit with WeightBasedModeling = -0.586
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 17.19 │ Nfcn = 201 │
│ EDM = 7.47e-05 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
DEBUG - cabinetry.fit - -2 log(L) = 17.194278 at best-fit point
DEBUG - cabinetry.fit - performing fit with WeightBasedModeling = -0.335
INFO - cabinetry.fit - MINUIT status:
┌─────────────────────────────────────────────────────────────────────────┐
│ Migrad │
├──────────────────────────────────┬──────────────────────────────────────┤
│ FCN = 17.35 │ Nfcn = 201 │
│ EDM = 6.45e-06 (Goal: 0.0002) │ │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Valid Minimum │ No Parameters at limit │
├──────────────────────────────────┼──────────────────────────────────────┤
│ Below EDM threshold (goal x 10) │ Below call limit │
├───────────────┬──────────────────┼───────────┬─────────────┬────────────┤
│ Covariance │ Hesse ok │ Accurate │ Pos. def. │ Not forced │
└───────────────┴──────────────────┴───────────┴─────────────┴────────────┘
###Markdown
The resulting figure looks like this:
###Code
cabinetry.visualize.scan(scan_results)
###Output
DEBUG - cabinetry.visualize.utils - saving figure as figures/scan_WeightBasedModeling.pdf
###Markdown
With `cabinetry.fit.limit`, we can evaluate observed and expected 95% confidence level upper parameter limits. The implementation uses Brent bracketing to efficiently find the `CLs=0.05` crossing points.
###Code
limit_results = cabinetry.fit.limit(model, data)
###Output
INFO - cabinetry.fit - calculating upper limit for Signal_norm
DEBUG - cabinetry.fit - setting lower parameter bound for POI to 0
INFO - cabinetry.fit - determining observed upper limit
DEBUG - cabinetry.fit - Signal_norm = 0.1000, observed CLs = 0.9176
DEBUG - cabinetry.fit - Signal_norm = 10.0000, observed CLs = 0.0000
DEBUG - cabinetry.fit - Signal_norm = 9.4606, observed CLs = 0.0000
DEBUG - cabinetry.fit - Signal_norm = 4.7803, observed CLs = 0.0001
DEBUG - cabinetry.fit - Signal_norm = 2.4401, observed CLs = 0.2174
DEBUG - cabinetry.fit - Signal_norm = 4.2427, observed CLs = 0.0012
DEBUG - cabinetry.fit - Signal_norm = 3.3414, observed CLs = 0.0302
DEBUG - cabinetry.fit - Signal_norm = 2.8908, observed CLs = 0.0935
DEBUG - cabinetry.fit - Signal_norm = 3.2002, observed CLs = 0.0444
DEBUG - cabinetry.fit - Signal_norm = 3.1514, observed CLs = 0.0504
DEBUG - cabinetry.fit - Signal_norm = 3.1564, observed CLs = 0.0498
INFO - cabinetry.fit - successfully converged after 11 steps
INFO - cabinetry.fit - observed upper limit: 3.1564
INFO - cabinetry.fit - determining expected -2 sigma upper limit
DEBUG - cabinetry.fit - Signal_norm = 0.1000, expected -2 sigma CLs = 0.7609 (cached)
DEBUG - cabinetry.fit - Signal_norm = 2.4401, expected -2 sigma CLs = 0.0002 (cached)
DEBUG - cabinetry.fit - Signal_norm = 2.2869, expected -2 sigma CLs = 0.0004
DEBUG - cabinetry.fit - Signal_norm = 1.1935, expected -2 sigma CLs = 0.0279
DEBUG - cabinetry.fit - Signal_norm = 0.6467, expected -2 sigma CLs = 0.1521
DEBUG - cabinetry.fit - Signal_norm = 1.0961, expected -2 sigma CLs = 0.0380
DEBUG - cabinetry.fit - Signal_norm = 0.9927, expected -2 sigma CLs = 0.0525
DEBUG - cabinetry.fit - Signal_norm = 1.0107, expected -2 sigma CLs = 0.0497
DEBUG - cabinetry.fit - Signal_norm = 1.0057, expected -2 sigma CLs = 0.0504
INFO - cabinetry.fit - successfully converged after 9 steps
INFO - cabinetry.fit - expected -2 sigma upper limit: 1.0107
INFO - cabinetry.fit - determining expected -1 sigma upper limit
DEBUG - cabinetry.fit - Signal_norm = 1.1935, expected -1 sigma CLs = 0.0826 (cached)
DEBUG - cabinetry.fit - Signal_norm = 2.2869, expected -1 sigma CLs = 0.0033 (cached)
DEBUG - cabinetry.fit - Signal_norm = 1.6432, expected -1 sigma CLs = 0.0263
DEBUG - cabinetry.fit - Signal_norm = 1.4538, expected -1 sigma CLs = 0.0436
DEBUG - cabinetry.fit - Signal_norm = 1.3950, expected -1 sigma CLs = 0.0506
DEBUG - cabinetry.fit - Signal_norm = 1.4000, expected -1 sigma CLs = 0.0500
INFO - cabinetry.fit - successfully converged after 6 steps
INFO - cabinetry.fit - expected -1 sigma upper limit: 1.4000
INFO - cabinetry.fit - determining expected upper limit
DEBUG - cabinetry.fit - Signal_norm = 1.6432, expected CLs = 0.1014 (cached)
DEBUG - cabinetry.fit - Signal_norm = 2.2869, expected CLs = 0.0228 (cached)
DEBUG - cabinetry.fit - Signal_norm = 2.0639, expected CLs = 0.0405
DEBUG - cabinetry.fit - Signal_norm = 1.9636, expected CLs = 0.0514
DEBUG - cabinetry.fit - Signal_norm = 1.9768, expected CLs = 0.0499
DEBUG - cabinetry.fit - Signal_norm = 1.9718, expected CLs = 0.0505
INFO - cabinetry.fit - successfully converged after 6 steps
INFO - cabinetry.fit - expected upper limit: 1.9768
INFO - cabinetry.fit - determining expected +1 sigma upper limit
DEBUG - cabinetry.fit - Signal_norm = 2.4401, expected +1 sigma CLs = 0.0893 (cached)
DEBUG - cabinetry.fit - Signal_norm = 2.8908, expected +1 sigma CLs = 0.0319 (cached)
DEBUG - cabinetry.fit - Signal_norm = 2.7488, expected +1 sigma CLs = 0.0454
DEBUG - cabinetry.fit - Signal_norm = 2.7051, expected +1 sigma CLs = 0.0503
DEBUG - cabinetry.fit - Signal_norm = 2.7101, expected +1 sigma CLs = 0.0497
INFO - cabinetry.fit - successfully converged after 5 steps
INFO - cabinetry.fit - expected +1 sigma upper limit: 2.7101
INFO - cabinetry.fit - determining expected +2 sigma upper limit
DEBUG - cabinetry.fit - Signal_norm = 3.3414, expected +2 sigma CLs = 0.0769 (cached)
DEBUG - cabinetry.fit - Signal_norm = 4.2427, expected +2 sigma CLs = 0.0063 (cached)
DEBUG - cabinetry.fit - Signal_norm = 3.6844, expected +2 sigma CLs = 0.0338
DEBUG - cabinetry.fit - Signal_norm = 3.5554, expected +2 sigma CLs = 0.0469
DEBUG - cabinetry.fit - Signal_norm = 3.5279, expected +2 sigma CLs = 0.0501
DEBUG - cabinetry.fit - Signal_norm = 3.5329, expected +2 sigma CLs = 0.0495
INFO - cabinetry.fit - successfully converged after 6 steps
INFO - cabinetry.fit - expected +2 sigma upper limit: 3.5279
INFO - cabinetry.fit - total of 43 steps to calculate all limits
INFO - cabinetry.fit - summary of upper limits:
INFO - cabinetry.fit - observed : 3.1564
INFO - cabinetry.fit - expected -2 sigma: 1.0107
INFO - cabinetry.fit - expected -1 sigma: 1.4000
INFO - cabinetry.fit - expected : 1.9768
INFO - cabinetry.fit - expected +1 sigma: 2.7101
INFO - cabinetry.fit - expected +2 sigma: 3.5279
###Markdown
Again, the results are visualized:
###Code
cabinetry.visualize.limit(limit_results)
###Output
DEBUG - cabinetry.visualize.utils - saving figure as figures/limit.pdf
###Markdown
The observed limits are above the expected limits. We can calculate the discovery significance with `cabinetry.fit.significance`:
###Code
significance_results = cabinetry.fit.significance(model, data)
###Output
INFO - cabinetry.fit - calculating discovery significance
INFO - cabinetry.fit - observed p-value: 3.584%
INFO - cabinetry.fit - observed significance: 1.801
INFO - cabinetry.fit - expected p-value: 14.775%
INFO - cabinetry.fit - expected significance: 1.046
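As a quick cross-check (an aside not present in the original notebook, assuming `scipy` is available): the reported significances are just the one-sided Gaussian quantiles of the reported p-values.

```python
from scipy.stats import norm

# Z = Phi^{-1}(1 - p): convert a one-sided p-value into a significance.
print(norm.isf(0.03584))   # ~1.80, matching the observed significance
print(norm.isf(0.14775))   # ~1.05, matching the expected significance
```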
###Markdown
Create a session. The backbone can be either "esp", "mob" or "res", where the default is "esp".
###Code
# %% Create a session
session = affspec.pipeline.Process(backbone="esp")
###Output
_____no_output_____
###Markdown
Process a numpy image. It will detect the facial expression for the largest face in the image.
###Code
# %% Process a numpy image
img = cv2.imread("images/beibin.jpg")
# Here, img is a numpy array, and we can plot it
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
# Run the session to get the result
rst = session.run_one_img(img)
print(rst)
# You can access each value using the dictionary
print(rst["expression"])
print(rst["valence"])
print(rst["arousal"])
print(rst["action units"])
# a more human-readable action units format
au_description = affspec.config.au_array_2_description(rst["action units"])
print(au_description)
###Output
joy
0.5674614906311035
0.126996248960495
[False, True, False, False, True, False, True, False, False, True, True, False]
Outer Brow Raiser , Cheek Raiser, Lip Corner Puller, Lips Part, Jaw Drop,
###Markdown
Process a image by passing an image path
###Code
rst = session.run_one_img(imgname="images/beibin.jpg")
print(rst)
###Output
{'expression': 'joy', 'expression confidence': 0.9627324938774109, 'valence': 0.5743317604064941, 'arousal': 0.12549209594726562, 'action units': [False, True, False, False, True, False, True, False, False, True, True, False], 'imgname': 'images/beibin.jpg'}
###Markdown
Result is None if no face is in the image
###Code
# %% If there is no face in the image.
# The result will be None
rst = session.run_one_img(imgname="images/no_face.jpg")
print(rst)
###Output
unable to get face 'NoneType' object is not iterable
{'imgname': None, 'expression': None, 'expression confidence': None, 'valence': None, 'arousal': None, 'action units': None}
###Markdown
You can skip face detection with "detect_face_before_process". Then the code will not detect the face for you, and it will pass the whole image into the CNN. In this case, no matter whether there are any faces in the image, the CNN will return some information for you.
###Code
rst = session.run_one_img(imgname="images/no_face.jpg", detect_face_before_process=False)
print(rst)
rst = session.run_one_img(imgname="images/beibin.jpg", detect_face_before_process=False)
print(rst)
###Output
{'expression': 'joy', 'expression confidence': 0.9317041039466858, 'valence': 0.47715964913368225, 'arousal': 0.07181838154792786, 'action units': [False, True, False, False, True, True, True, True, False, False, False, True], 'imgname': 'images/beibin.jpg'}
###Markdown
Process several images at one time. The batch size can be any positive number. If you have a large GPU, you can use a large "batch size" so that the code can run faster. Note that the function will give warnings if it could not read an image or find a face.
###Code
imgs = glob.glob("images/*.jpg")
print(imgs)
rsts = session.run_imgs(imgs, batch_size=2)
print("-" * 50, "\n")
for _ in rsts:
print(_)
###Output
--------------------------------------------------
{'expression': 'joy', 'expression confidence': 0.9627886414527893, 'valence': 0.5674615502357483, 'arousal': 0.12699618935585022, 'action units': [False, True, False, False, True, False, True, False, False, True, True, False], 'imgname': 'images\\beibin.jpg'}
{'expression': 'joy', 'expression confidence': 0.5349305868148804, 'valence': 0.018322765827178955, 'arousal': 0.3213616907596588, 'action units': [False, False, False, False, False, False, True, False, False, True, False, False], 'imgname': 'images\\deepali.jpg'}
{'expression': 'fear', 'expression confidence': 0.40272632241249084, 'valence': 0.0030211210250854492, 'arousal': 0.7199287414550781, 'action units': [False, False, False, False, False, False, False, False, False, True, False, True], 'imgname': 'images\\hard.jpg'}
{'imgname': 'images\\no_face.jpg', 'expression': None, 'expression confidence': None, 'valence': None, 'arousal': None, 'action units': None}
{'expression': 'joy', 'expression confidence': 0.7988709211349487, 'valence': 0.28153368830680847, 'arousal': 0.278705894947052, 'action units': [True, False, False, False, True, False, True, False, False, True, True, False], 'imgname': 'images\\sachin.jpg'}
###Markdown
You can also pass the "detect_face_before_process" parameter to the "run_imgs" function.
###Code
imgs = glob.glob("images/*.jpg")
print(imgs)
rsts = session.run_imgs(imgs, batch_size=2, detect_face_before_process=False)
print("-" * 50, "\n")
for _ in rsts:
print(_)
###Output
_____no_output_____
###Markdown
Mixing (all of this is coded in `src.utils.mixing`, and you can use the `run_mixing.sh` script to achieve similar results)
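As a brief orientation for the code below (this reading of the implementation is mine, not text from the original): each iteration of `flow_mixing` first applies an orthogonal rotation built from the SVD of a random matrix and then an additive coupling step,

$$ Y_1 = X_1 + \mathrm{MLP}(X_2), \qquad Y_2 = X_2, $$

which is invertible via $X_1 = Y_1 - \mathrm{MLP}(Y_2)$ as long as the same random MLP weights are reused; the per-column min-max rescaling at the end of each iteration only changes scale and offset.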
###Code
import skimage
from src.utils.measures import tucker_measure
from sklearn.preprocessing import minmax_scale
def sigmoid(x):
return 1. / (1. + np.exp(-x))
def relu(x):
return np.maximum(x, 0)
def mlp(rs, Z, dim, fun):
if fun == "sigmoid":
f = sigmoid
elif fun == "relu":
f = relu
elif fun == "tanh":
f = np.tanh
else:
raise ValueError("Unknown activation function {}".format(fun))
A = rs.normal(size=(dim, dim))
b = rs.normal(dim)
K1 = f(np.matmul(Z, A) + b)
B = rs.normal(size=(dim, dim))
b = rs.normal(dim)
K2 = f(np.matmul(K1, B) + b)
return K2
def flow_mixing(rs, Z, dim, times):
for t in range(times):
A = rs.normal(size=(dim, dim))
hdim = dim // 2
H = rs.normal(size=(hdim, hdim))
u, s, vh = np.linalg.svd(A, full_matrices=True)
Y = np.dot(Z, np.matmul(u, vh))
i = t % 2
j = 0 if i == 1 else 1
X1 = Y[:, i::2]
X2 = Y[:, j::2]
Y1 = mlp(rs, X2, dim // 2, 'tanh') + X1
Y2 = X2
R = np.zeros(Y.shape)
R[:, i::2] = Y1
R[:, j::2] = Y2
Z = minmax_scale(R)
return Z
def nonlinear_mixing(array_of_pictures):
tmp = array_of_pictures
images = np.c_[[skimage.io.imread(_, as_gray=True).flatten() for _ in tmp]].T
shape = skimage.io.imread(tmp[0], as_gray=True).shape
batch_size, dim = images.shape
rs = np.random.RandomState(seed=42)
Z = images
tm = 1
while tm > .8:
Z = flow_mixing(rs, Z, dim, 10)
tm = tucker_measure(Z, images)
print("Tucker for mix: " + str(tm))
return Z * 255, shape, images
mixed, shape, orig = nonlinear_mixing(["data/ica/0/0-data_orig_img_nonlinear.png", "data/ica/0/1-data_orig_img_nonlinear.png" ])
plt.figure(figsize=(6, 6))
plt.axes().set_aspect('equal')
plt.scatter(orig[:, 0], orig[:, 1])
plt.show()
plt.figure(figsize=(6, 6))
plt.axes().set_aspect('equal')
plt.scatter(mixed[:, 0], mixed[:, 1])
plt.show()
plt.imshow(mixed.T[0].reshape(321,481), cmap=plt.get_cmap('gray'))
plt.axis('off')
plt.title("mixed image")
plt.show()
plt.imshow(mixed.T[1].reshape(321,481), cmap=plt.get_cmap('gray'))
plt.axis('off')
plt.title("mixed image")
plt.show()
###Output
_____no_output_____
###Markdown
Retrieving data
###Code
import glob
import numpy as np
import torch
from skimage import io
from torch.utils.data import Dataset
class Dataset(Dataset):
"""
The dataset containing a flattened mix of pictures.
"""
def __init__(self, mixed, orig):
self.mix_dim = mixed.shape[1]
self.orig_dim = orig.shape[1]
self.mix = mixed
self.orig = orig
def __getitem__(self, index):
mix = self.mix[index]
orig = self.orig[index]
return mix, orig
def __len__(self):
return len(self.mix)
import os
import time
import numpy as np
import torch
from torch import optim
from torchvision.utils import save_image
from src.utils.helpers import to_img
from src.utils.measures import tucker_measure, spearman_metric_ilp
class IndependenceAETrainer(Trainer):  # the `Trainer` base class is assumed to come from the project source (not shown in this notebook)
def __init__(self, model, loss_class, dataloaders, cuda):
self.model = model
self.loss_class = loss_class
self.train_dataloader, self.test_dataloader = dataloaders
self.cuda = cuda
if self.cuda:
self.model.cuda()
@staticmethod
def save_images_from_epoch(args, train_imgs, test_imgs, epoch):
if (epoch + 1) % args["save_every"] == 0:
if not args["save_raw"]:
# specific for this dataset (selects only the first image)
save_train = to_img(train_imgs[0:321 * 481].T.reshape(2, 1, 321, 481).cpu().data, args["normalize_img"])
save_test = to_img(test_imgs[0:321 * 481].T.reshape(2, 1, 321, 481).cpu().data, args["normalize_img"])
save_image(save_train,
os.path.join(os.path.join(args["folder"], 'images'), 'train_image_{}.png'.format(epoch)))
save_image(save_test,
os.path.join(os.path.join(args["folder"], 'images'), 'test_image_{}.png'.format(epoch)))
else:
save_train = train_imgs.cpu().data
save_test = test_imgs.cpu().data
path_train = os.path.join(os.path.join(args["folder"], 'images'), 'train_{}.npy'.format(epoch))
path_test = os.path.join(os.path.join(args["folder"], 'images'), 'test_{}.npy'.format(epoch))
np.save(path_train, save_train)
np.save(path_test, save_test)
def report(self, epoch, epoch_time, loss_dict):
report_string = '====> Epoch: {} [Time: {:.2f}s] '.format(epoch, epoch_time)
for key, value in loss_dict.items():
report_string += '[{}:{:.4f}]. '.format(key, value)
print(report_string)
def train(self, args):
num_epochs = args["num_epochs"]
lr = args["lr"]
optimizer = optim.Adam(self.model.parameters(), lr)
print("Beginning training...")
for epoch in range(num_epochs):
self.model.train()
train_loss, train_sprm, train_tucker = 0, 0, 0
start = time.time()
images_to_save = []
for batch_idx, (data, orig) in enumerate(self.train_dataloader):
if args["cuda"]:
data = data.cuda()
orig = orig.cuda()
optimizer.zero_grad()
loss, recon, encoded = self.calculate_loss(data)
images_to_save.extend(encoded)
loss.backward()
train_loss += loss.data
train_tucker += self.calculate_tucker(encoded, orig)
train_sprm += self.calculate_spearman(encoded, orig)
optimizer.step()
end = time.time()
recon = torch.stack(images_to_save, dim=0)
self.model.eval()
test_loss, test_sprm, test_tucker = 0, 0, 0
images_to_save = []
for batch_idx, (data, t_orig) in enumerate(self.test_dataloader):
if args["cuda"]:
data = data.cuda()
loss, t_recon, t_encoded = self.calculate_loss(data)
images_to_save.extend(t_encoded)
if epoch == num_epochs - 1 and args["save_raw"]:
save_test = t_encoded.cpu().data
path_test = os.path.join(os.path.join(args["folder"], 'images'),
'icatest_{}_{}.npy'.format(batch_idx, epoch))
np.save(path_test, save_test)
test_loss += loss.data
test_tucker += self.calculate_tucker(t_encoded, t_orig)
test_sprm += self.calculate_spearman(t_encoded, t_orig)
t_recon = torch.stack(images_to_save, dim=0)
epoch_time = end - start
loss_dict = self.get_loss_dict(
train_loss,
test_loss,
train_tucker,
test_tucker,
train_sprm,
test_sprm
)
self.report(epoch, epoch_time, loss_dict)
self.save_images_from_epoch(args, recon, t_recon, epoch)
state = self.get_state_dict(epoch, optimizer, loss_dict)
torch.save(state, os.path.join(args["folder"], "model.th".format(epoch)))
print("Training complete")
def calculate_tucker(self, x, y):
return tucker_measure(x.detach().numpy(), y.detach().numpy())
def calculate_spearman(self, x, y):
return spearman_metric_ilp(x.detach().numpy(), y.detach().numpy())
def get_state_dict(self, epoch, optimizer, loss_dict):
state = {
'epoch': epoch,
'state_dict': self.model.state_dict(),
'optimizer': optimizer.state_dict(),
}
state.update(loss_dict)
return state
def get_loss_dict(self, train_loss, test_loss, train_tucker, test_tucker, train_sprm, test_sprm):
div_train, div_test = len(self.train_dataloader), len(self.test_dataloader)
return {
'rec_loss': train_loss / div_train,
'rec_loss_test': test_loss / div_test,
'train_tucker': train_tucker / div_train,
'test_tucker': test_tucker / div_test,
'train_sprm': 1 - train_sprm / div_train,
'test_sprm': 1 - test_sprm / div_test,
}
def calculate_loss(self, img):
recon, encoded = self.model(img.float())
loss = self.loss_class.loss(recon, img.float(), z=encoded)
return loss, recon, encoded
from torch.utils.data import DataLoader
from src.data_handling.datasets import FlattenedPicturesDataset
from src.decoders.decoder_provider import DecoderProvider
from src.encoders.encoder_provider import EncoderProvider
from src.models.loss_functions import *
from src.models.models import *
class TrainerBuilder:
@staticmethod
def get_trainer(args):
loss = TrainerBuilder.__get_loss(args)
dataloaders = TrainerBuilder.__get_dataloaders(args)
trainer = TrainerBuilder.__get_trainer(args, loss, dataloaders)
return trainer
@staticmethod
def __get_trainer(args, loss, dataloaders):
encoder = EncoderProvider.get_encoder(args["latent_dim"])
decoder = DecoderProvider.get_decoder(args["latent_dim"], args["normalize_img"])
model = Autoencoder(encoder, decoder)
return IndependenceAETrainer(model, loss, dataloaders, args["cuda"])
@staticmethod
def __get_loss(args):
reg_loss = ReconstructionLoss.get_rec_loss(args["rec_loss"])
ind_loss = WeightedICALossFunction(args["power"], args["number_of_gausses"], cuda=args["cuda"])
return JoinedLoss(ind_loss, reg_loss, args["beta"])
@staticmethod
def __get_dataloaders(args):
tmp = Dataset(args["mixes"], args["orig"])
train_dataloader = DataLoader(tmp, batch_size=args["batch_size"], shuffle=True)
test_dataloader = DataLoader(tmp, batch_size=args["batch_size"])
return train_dataloader, test_dataloader
trainer = TrainerBuilder.get_trainer({
"latent_dim":2,
"normalize_img": True,
"cuda": False,
"rec_loss": "mse",
"power": 0,
"number_of_gausses": 10,
"beta":100,
"mixes": mixed,
"orig": orig,
"batch_size": 256
})
trainer.train({
"num_epochs":6,
"lr": 1e-3,
"cuda": False,
"save_every": 1,
"save_raw": False,
"folder":"./results",
"normalize_img": True
})
###Output
Beginning training...
====> Epoch: 0 [Time: 23.45s] [rec_loss:0.4878]. [rec_loss_test:5.6918]. [train_tucker:0.9608]. [test_tucker:0.9215]. [train_sprm:0.8847]. [test_sprm:0.8606].
====> Epoch: 1 [Time: 34.86s] [rec_loss:0.2581]. [rec_loss_test:4.8446]. [train_tucker:0.9574]. [test_tucker:0.9321]. [train_sprm:0.9189]. [test_sprm:0.8821].
====> Epoch: 2 [Time: 28.02s] [rec_loss:0.2481]. [rec_loss_test:5.8189]. [train_tucker:0.9585]. [test_tucker:0.9301]. [train_sprm:0.9311]. [test_sprm:0.8864].
====> Epoch: 3 [Time: 25.86s] [rec_loss:0.2039]. [rec_loss_test:3.8982]. [train_tucker:0.9576]. [test_tucker:0.9193]. [train_sprm:0.9305]. [test_sprm:0.8780].
====> Epoch: 4 [Time: 26.51s] [rec_loss:0.2083]. [rec_loss_test:5.2379]. [train_tucker:0.9568]. [test_tucker:0.9152]. [train_sprm:0.9284]. [test_sprm:0.8825].
====> Epoch: 5 [Time: 26.13s] [rec_loss:0.1982]. [rec_loss_test:4.3848]. [train_tucker:0.9542]. [test_tucker:0.9071]. [train_sprm:0.9237]. [test_sprm:0.8751].
Training complete
###Markdown
Results
###Code
img_1 = mpimg.imread("./results/images/test_image_5.png")
plt.imshow(img_1, cmap=plt.get_cmap('gray'))
plt.axis('off')
plt.title("base image")
###Output
_____no_output_____
###Markdown
I know the large separation is 1.14 cpd. Let's try it out
###Code
x, y, z = echelle.echelle(frequency, amplitude, 5)
z.min()
###Output
_____no_output_____
###Markdown
Nice! But what if we didn't know? Thankfully, there's an interactive echelle module we can use to hone in on the correct value. Judging from the periodogram, the large separation is probably between 0.5 and 2.
###Code
from notebook import notebookapp
servers = list(notebookapp.list_running_servers())
print(servers)
###Output
[{'base_url': '/', 'hostname': 'localhost', 'notebook_dir': '/Users/daniel', 'password': True, 'pid': 56164, 'port': 8888, 'secure': False, 'sock': '', 'token': '', 'url': 'http://localhost:8888/'}]
###Markdown
If you have a large amount of data, it is usually preferable to zoom in on the relevant regions to avoid the expensive re-plotting:
###Code
echelle.interact_echelle(frequency, amplitude, 0.5, 2, step=0.01, fmin=10, fmax=20)
###Output
_____no_output_____
###Markdown
There are a few features in interact_echelle that may be useful for you. One of them is an argument to return any frequencies that were clicked on. To do this, we must specify `return_coords=True`. We can see this in action below:
###Code
clicked_frequencies = echelle.interact_echelle(frequency, amplitude, 0.5, 2, step=0.01, return_coords=True)
###Output
_____no_output_____
###Markdown
You can't see it, but I clicked on a few of the frequencies along the strongest ridge. They're stored as a list and can be accessed like so
###Code
clicked_frequencies
###Output
_____no_output_____
###Markdown
Note that these are the x, y coordinates of the echelle diagram. The first column is the frequency modulo dnu, the second is the frequency. If you want to use your own plotting routines, or want to do something fancy with the echelle values (like plotting a collapsed echelle diagram!), you can just call `echelle.echelle`. Note that `echelle.echelle` is very barebones, and will not perform any smoothing of your data. If you want to do that, you must smooth your amplitudes before passing them in!
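A minimal sketch of that smoothing step (this assumes `scipy` is available and is not part of the original tutorial; the kernel width `sigma` is an arbitrary choice to tune for your frequency resolution):

```python
from scipy.ndimage import gaussian_filter1d

# Smooth the amplitude spectrum before building the echelle diagram.
smoothed_amplitude = gaussian_filter1d(amplitude, sigma=5)
x, y, z = echelle.echelle(frequency, smoothed_amplitude, 1.14)
```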
###Code
x, y, z = echelle.echelle(frequency, amplitude, 1.14)
plt.plot(x, np.sum(z, axis=0))
plt.xlabel('Frequency mod 1.14')
plt.ylabel('Summed amplitudes')
###Output
_____no_output_____
###Markdown
FABSO Quickstart---Quickstart of the Fitness-Distance-Ratio Archive-Based Swarm Optimization (FABSO) algorithm
###Code
# Import modules from fabso
from fabso.topology import Space
from fabso.optimizer import FABSO
from fabso.utils import viz
# Libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Fitness function--- De Jong's function 1: the sum of the square of each value, where each value lies within the bounds $[-5.12, 5.12]$. Defined as ![equation](./images/equation.png)
###Code
# Fitness function
# returning a negative value as we want to minimize the
# objective function using a maximizing algorithm.
def fitness(x):
x = [i**2 for i in x]
return -sum(x)
# Initial Parameters
bounds = (-5.12, 5.12)
n_particles = 20
n_dimensions = 5
# Algorithm Parameters
objective = fitness
params = {'w': 0.9, 'c1': 1, 'c2': 0, 'c3': 2}
generations = 50
archive_size = 5
restart_freq = 20
# Generate random particles
particles = np.random.randn(n_particles, n_dimensions)
# Initialize search space
space = Space(bounds, objective, n_particles, n_dimensions)
space.generate_particles(particles)
# Initialize algorithms with parameters
fabso = FABSO(space, bounds, params, objective, generations,
n_particles, n_dimensions, archive_size, restart_freq)
# Optimize objective function
result = fabso.optimize()
print('Optimal value achieved: ', result[-1])
# Generate plot
fig = viz(result)
fig.show()
###Output
_____no_output_____
###Markdown
Active Inference for Markov Decision Processes
> This notebook provides an example of the `inferactively` toolbox
Environments
The `inferactively` toolbox includes environments that follow the OpenAI `gym` API. Here, we will use a grid-world environment. We will assume that the agent is observing and acting in multiple grid environments simultaneously. This will enable us to demonstrate how *factors* are implemented in `inferactively`. We assume $N$ grid worlds, each of some arbitrary shape $w \times h$. At each time step $t$, the environment generates observations about the agent's position in each grid world. Formally, it generates a vector $[o_0 , ... , o_N]$. Agents can take one of 5 actions in each grid world - `{UP, RIGHT, DOWN, LEFT, STAY}` - but here we will sample random actions.
###Code
from inferactively.envs import NDGridWorldEnv
env = NDGridWorldEnv(shape=[6, 6], n_dims=2)
obs = env.reset()
for _ in range(5):
controls = env.sample_action()
obs = env.step(controls)
print(f"obs {obs} controls {controls}")
###Output
obs [ 9 23] controls [1 1]
obs [10 17] controls [1 0]
obs [16 11] controls [2 0]
obs [17 5] controls [1 0]
obs [17 5] controls [1 0]
###Markdown
Generative model
Now that we have an environment, the next step is to construct an agent's generative model (we will cover learning a model later). `inferactively` contains several useful classes for constructing models. We consider the following generative model:
$$ p(\mathbf{o}_{1:T}, \mathbf{s}_{1:T}) = \prod_{t=1}^T p(\mathbf{o}_t|\mathbf{s}_t) p(\mathbf{s}_t|\mathbf{s}_{t-1}, \mathbf{u}_{t-1}) $$
Here, $o$ are observations, $s$ are "hidden states" - beliefs about the causes of sensory data - and $u$ are control states.
Likelihood distribution (`A`)
We will begin by considering the likelihood distribution $p(\mathbf{o}_t|\mathbf{s}_t)$, which will be denoted `A` in the code. To make inference tractable, we factorise the beliefs about hidden states, with one factor for each of the $N$ grid worlds. In other words, the agent believes that its position in grid $a$ is independent of its position in grid $b$ (which it is). This likelihood distribution is over $M$ modalities (2). Moreover, there are $N$ hidden state factors (10). In practice, a separate likelihood distribution is maintained for each modality $m$, i.e. $p(o_t^m|s_t)$, each of which has the dimensions `(o_m, s_0, ..., s_N)`. For simplicity, we will assume that each hidden state factor maps to a corresponding observation modality, and that this map is an identity mapping. In short, agents have perfect knowledge about where they are in the world. Later, we will introduce uncertainty into the inference procedure.
> _Note: the number of hidden states will be `w x h` for each factor_
This can be implemented as follows:
###Code
import numpy as np
from inferactively.distributions import Categorical
n_observations = env.n_observations
n_states = env.n_states
print(f"Number of states per factor {n_states} Number of observations per modality {n_observations}")
n_modalities = len(n_observations)
n_factors = len(n_states)
print(f"Number of state factors {n_factors} Number of modalities {n_modalities}")
A = np.empty(n_modalities, dtype="object")
for m in range(n_modalities):
dist = np.zeros((n_observations[m], *n_states))
dist[:, :, :] = np.eye(n_states[0])
A[m] = dist
A = Categorical(values=A)
print(A[0][0, :, :], A[0].shape)
from inferactively.core import update_posterior_states
obs = (12, 5)
qs = update_posterior_states(A, obs, return_numpy=False)
print(qs[1].values.transpose())
###Output
[[0.02777778 0.02777778 0.02777778 0.02777778 0.02777778 0.02777778
0.02777778 0.02777778 0.02777778 0.02777778 0.02777778 0.02777778
0.02777778 0.02777778 0.02777778 0.02777778 0.02777778 0.02777778
0.02777778 0.02777778 0.02777778 0.02777778 0.02777778 0.02777778
0.02777778 0.02777778 0.02777778 0.02777778 0.02777778 0.02777778
0.02777778 0.02777778 0.02777778 0.02777778 0.02777778 0.02777778]]
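As a brief aside (not part of the original text): for a single time step, the `update_posterior_states` call above amounts to inverting the generative model with Bayes' rule,

$$ p(\mathbf{s}_t \mid \mathbf{o}_t) \propto p(\mathbf{o}_t \mid \mathbf{s}_t)\, p(\mathbf{s}_t), $$

with the posterior `qs` represented as one categorical distribution per hidden state factor (so `qs[1]` above is the marginal belief over the second factor).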
###Markdown
Create test filesystem
1. Create an empty file (250MB, or choose a different size).
```sh
$ dd if=/dev/zero of=data_fs.img bs=1M count=250
```
2. Create an ext4 filesystem inside this file.
```sh
$ mkfs.ext4 data_fs.img
```
3. Attach the image file to a block device. The command prints the device it was attached to (e.g. /dev/loop0).
```sh
$ losetup -fP --show data_fs.img
```
4. Mount the new filesystem:
```sh
$ mkdir data_fs
$ mount /dev/loop0 data_fs
```
5. When you are finished, reverse all actions using:
```sh
$ umount data_fs
$ rmdir data_fs
$ losetup -d /dev/loop0
```
###Code
fs = 'data_fs.img'
###Output
_____no_output_____
###Markdown
Generate metadata snapshot
Create or copy test files to the new filesystem, e.g. a photo.
```sh
$ cp /home/user/test_photo.jpg data_fs/some_image.jpg
```
Force the filesystem to sync.
```sh
$ sync -f data_fs
```
And create the snapshot file `snapshot.out`.
###Code
generate_snapshot(fs, 'snapshot.out', 100)
###Output
_____no_output_____
###Markdown
Remove and recover file
Now remove the file:
```sh
$ rm data_fs/some_image.jpg
```
Force the filesystem to sync.
```sh
$ sync -f data_fs
```
And try to recover it from the saved 'snapshot.out'.
###Code
recover_file(fs, 'snapshot.out', '/some_image.jpg', 'recovered.jpg')
###Output
_____no_output_____
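Before the shell-based check below, a small Python sketch (not part of the original tooling; the original photo path is taken from the copy step above) can compare the checksums directly:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, read in chunks."""
    digest = hashlib.md5()
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

print(md5sum('recovered.jpg') == md5sum('/home/user/test_photo.jpg'))
```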
###Markdown
Verify
Check whether the files' md5 sums match:
```sh
$ md5sum recovered.jpg /home/user/test_photo.jpg
5ae7c956d3ebc1bce3951c5a72714cf7  recovered.jpg
5ae7c956d3ebc1bce3951c5a72714cf7  /home/user/test_photo.jpg
```
View all deleted files that are present in the snapshot:
###Code
list_deleted(fs, 'snapshot.out')
###Output
_____no_output_____
###Markdown
First, imports.
###Code
import numpy as np
import matplotlib.pyplot as plt
import cakeopt
###Output
_____no_output_____
###Markdown
Define a target function that takes named parameters. To make things interesting we can include some categorical values.
###Code
categorical_encoding = dict(A=-2, B=0, C=4)
categories = tuple(sorted(categorical_encoding.keys()))
print(categories)
def target_function(**param_dict):
param_list = []
for (p_name, p_val) in sorted(param_dict.items()):
try:
param_list.append(float(p_val))
except ValueError as err:
param_list.append(categorical_encoding[p_val])
param_arr = np.array(param_list)
result = np.sum(param_arr ** 2)
return result
###Output
('A', 'B', 'C')
###Markdown
Describe the parameter space.
###Code
param_descriptor = {
'x1': ('continuous', (-5, 5)),
'x2': ('continuous', (-5, 5)),
'x3': ('integer', (-5, 5)),
'x4': ('integer', (-5, 5)),
'x5': ('categorical', categories),
'x6': ('categorical', categories),
}
###Output
_____no_output_____
###Markdown
Call the optimiser. We let it know that we have no noise in our measurements, as the test function is deterministic. Depending on your system and the number of iterations, this can take a few minutes.
###Code
%%time
MAX_ITER = 30
opt_res = cakeopt.cakeopt_search(target_function, param_descriptor,
max_iter=MAX_ITER, noise=False, random_state=0)
###Output
CPU times: user 6min 7s, sys: 14.3 s, total: 6min 21s
Wall time: 1min 35s
###Markdown
Finally, plot the results. The optimiser returns the parameter values one-hot encoded, so we have to reconstruct x5 and x6.
###Code
iter_no = np.arange(MAX_ITER) + 1
func_values = opt_res.f
param_values = opt_res.x
x5_values = param_values[:, 4] * (-2) + param_values[:, 5] * 0 + param_values[:, 6] * 4
x6_values = param_values[:, 7] * (-2) + param_values[:, 8] * 0 + param_values[:, 9] * 4
fig, ax = plt.subplots(2, 1, sharex=True, figsize=(12, 9))
ax[0].plot(iter_no, func_values)
ax[0].set_ylabel('Function value')
ax[1].plot(iter_no, iter_no * np.nan) # In order to skip a colour
ax[1].plot(iter_no, param_values[:, 0], alpha=.7, label='x1')
ax[1].plot(iter_no, param_values[:, 1], alpha=.7, label='x2')
ax[1].plot(iter_no, param_values[:, 2], alpha=.7, label='x3')
ax[1].plot(iter_no, param_values[:, 3], alpha=.7, label='x4')
ax[1].plot(iter_no, x5_values, alpha=.7, label='x5')
ax[1].plot(iter_no, x6_values, alpha=.7, label='x6')
ax[1].legend(loc=2)
ax[1].set_xlabel('Iteration number')
ax[1].set_ylabel('Parameter value')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
or
###Code
# import necessary libraries
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib, random
import torch, torchvision
import torchvision.transforms as T
import numpy as np
import numpy.ma as ma
import cv2
from vision.faststyletransfer_eval import FasterStyleTransfer
import collections
# get the pretrained model from torchvision.models
# Note: pretrained=True will get the pretrained weights for the model.
# model.eval() to use the model for inference
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()
# default COCO objects
# Separating out the object names will be useful in object-specific filtering, but not instance segmentation
COCO_INSTANCE_CATEGORY_NAMES = [
'__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]
def random_colour_masks(image):
"""
random_colour_masks
parameters:
- image - predicted masks
method:
- the masks of each predicted object is given random colour for visualization
"""
colours = [[0, 255, 0],[0, 0, 255],[255, 0, 0],[0, 255, 255],[255, 255, 0],[255, 0, 255],[80, 70, 180],[250, 80, 190],[245, 145, 50],[70, 150, 250],[50, 190, 190]]
r = np.zeros_like(image).astype(np.uint8)
g = np.zeros_like(image).astype(np.uint8)
b = np.zeros_like(image).astype(np.uint8)
r[image == 1], g[image == 1], b[image == 1] = colours[random.randrange(0,10)]
coloured_mask = np.stack([r, g, b], axis=2)
return coloured_mask
def get_prediction(img_path, threshold, objects):
"""
get_prediction
parameters:
- img_path - path of the input image
method:
- Image is obtained from the image path
- the image is converted to image tensor using PyTorch's Transforms
- image is passed through the model to get the predictions
- masks, classes and bounding boxes are obtained from the model and soft masks are made binary(0 or 1) on masks
i.e. the segment of a cat is made 1 and the rest of the image is made 0
"""
img = Image.open(img_path)
transform = T.Compose([T.ToTensor()])
img = transform(img)
pred = model([img])
pred_score = list(pred[0]['scores'].detach().numpy())
pred_t = [pred_score.index(x) for x in pred_score if x>threshold][-1]
masks = (pred[0]['masks']>0.5).squeeze().detach().cpu().numpy()
pred_class = [objects[i] for i in list(pred[0]['labels'].numpy())]
pred_boxes = [[(i[0], i[1]), (i[2], i[3])] for i in list(pred[0]['boxes'].detach().numpy())]
masks = masks[:pred_t+1]
pred_boxes = pred_boxes[:pred_t+1]
pred_class = pred_class[:pred_t+1]
return masks, pred_boxes, pred_class
def instance_segmentation(img_path, threshold=0.5, rect_th=3, text_size=3, text_th=3, objects=COCO_INSTANCE_CATEGORY_NAMES):
"""
instance_segmentation
parameters:
- img_path - path to input image
method:
- prediction is obtained by get_prediction
- each mask is given random color
- each mask is added to the image in the ratio 1:0.5 with opencv
- final output is displayed
"""
masks, boxes, pred_cls = get_prediction(img_path, threshold, objects)
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
for i in range(len(masks)):
rgb_mask = random_colour_masks(masks[i])
img = cv2.addWeighted(img, 1, rgb_mask, 0.5, 0)
#cv2.rectangle(img, boxes[i][0], boxes[i][1],color=(0, 255, 0), thickness=rect_th) # no bounding boxes required
cv2.putText(img,pred_cls[i], boxes[i][0], cv2.FONT_HERSHEY_SIMPLEX, text_size, (0,255,0),thickness=text_th)
plt.figure(figsize=(20,30))
plt.imshow(img)
plt.xticks([])
plt.yticks([])
plt.show()
return img
def mask_segments(img_path='./payload/IMG-20200401-WA0002.jpg'):
img_original = Image.open(img_path)
img_original_rbg = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
transform = T.Compose([T.ToTensor()])
img = transform(img_original)
img_rgb = transform(img_original_rbg)
pred = model([img])
print("Finished image segmentation")
masks = (pred[0]['masks']>0.5).squeeze().detach().cpu().numpy()
print("Returned segments: ", len(masks))
return img_original_rbg, img_rgb, masks
def PartialStyleTransfer(segment = 0, img_path='./payload/IMG-20200401-WA0002.jpg', style_path="./fast_neural_style_transfer/models/mosaic_style__200_iter__vgg19_weights.pth"):
print("Started partial style transfer")
# mode can be 'styled' or 'color'
# return indices on number of segments
img_original_rbg, img_rgb, masks = mask_segments(img_path)
if len(masks) > 0:
mask = masks[segment]
# print mask of image with the original image pixels
img_array = np.array(img_original_rbg[:,:,:])
img_array_floating = np.array(img_rgb[:,:,:])
# if False, set as 0 (black)
masked_img = []
for h in range(img_original_rbg.shape[0]):
sub_masked_img = []
for i in range(img_original_rbg.shape[1]):
tmp=[]
for j in range(img_original_rbg.shape[2]):
if mask[h][i] == False:
tmp.append(float(0))
else:
tmp.append(img_array_floating[j][h][i])
sub_masked_img.append(tmp)
masked_img.append(sub_masked_img)
masked_img_array = np.array(masked_img)
plt.imshow(masked_img_array[:,:,:]) # Export this mask image for style transfer
plt.show()
matplotlib.image.imsave(str(img_path[:-4]+str("_MASK")+".png"), masked_img_array)
FasterStyleTransfer(style_path, str(img_path[:-4]+str("_MASK")+".png"), str(img_path[:-4]+str("_FST")+".png"))
style_img = Image.open(str(img_path[:-4]+str("_FST")+".png"))
plt.imshow(style_img)
plt.show()
return style_img, img_array_floating, img_array
def PixelRemoved(img_path='./payload/IMG-20200401-WA0002.jpg'):
transform = T.Compose([T.ToTensor()])
img_original_rbg = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
img_rgb = transform(img_original_rbg)
img_array_floating = np.array(img_rgb[:,:,:])
style_img_original = Image.open(str(img_path[:-4]+str("_FST")+".png"))
WIDTH, HEIGHT = cv2.cvtColor(cv2.imread(str(img_path[:-4]+str("_MASK")+".png")), cv2.COLOR_BGR2RGB).shape[1], cv2.cvtColor(cv2.imread(str(img_path[:-4]+str("_MASK")+".png")), cv2.COLOR_BGR2RGB).shape[0]
style_img_rbg = cv2.resize(cv2.cvtColor(cv2.imread(str(img_path[:-4]+str("_FST")+".png")), cv2.COLOR_BGR2RGB), (WIDTH,HEIGHT), interpolation=cv2.INTER_CUBIC) # FST reshaped the dimension, this lines reshapes back to consistent dimensions
styled_img = transform(style_img_original)
styled_img_rgb = transform(style_img_rbg)
# remove most frequent pixel
pix_remove = list(dict(collections.Counter(np.hstack(np.hstack(styled_img_rgb))).most_common()).keys())[0]
# img_array = np.array(img_original_rbg[:,:,:])
styled_img_rgb_floating = np.array(styled_img_rgb[:,:,:])
masked_img = []
# Where a pixel is detected as background, the corresponding pixel from the original image is inserted
for h in range(style_img_rbg.shape[0]):
sub_masked_img = []
for i in range(style_img_rbg.shape[1]):
tmp=[]
for j in range(style_img_rbg.shape[2]):
if (float(styled_img_rgb[j][h][i]) > float(pix_remove)-0.1) and (float(styled_img_rgb[j][h][i]) < float(pix_remove)+0.1):
tmp.append(img_array_floating[j][h][i])
else:
tmp.append(styled_img_rgb_floating[j][h][i])
sub_masked_img.append(tmp)
masked_img.append(sub_masked_img)
masked_img_array = np.array(masked_img)
plt.imshow(masked_img_array[:,:,:])
matplotlib.image.imsave(str(img_path[:-4]+str("_MASK+FST")+".png"), masked_img_array)
return masked_img_array
style_img, img_array_floating, img_array = PartialStyleTransfer(segment = 1, img_path='./payload/test.jpg', style_path="./vision/fast_neural_style_transfer/models/mosaic.pth")
masked_img_array = PixelRemoved(img_path='./payload/test.jpg')
###Output
Started partial style transfer
Finished image segmentation
Returned segments: 4
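As an aside (not part of the original code): the per-pixel Python loops inside `PartialStyleTransfer` can be replaced by a single vectorized NumPy masking step. A minimal sketch, assuming `mask` is the boolean (H, W) segment mask and `img_rgb` the channel-first tensor produced by `ToTensor` inside that function:

```python
import numpy as np

# Keep foreground pixels, zero out the background, and convert
# from channel-first (C, H, W) to channel-last (H, W, C).
img_chw = np.asarray(img_rgb)  # floats in [0, 1]
masked_img_array = np.where(mask[..., None], img_chw.transpose(1, 2, 0), 0.0)
```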
###Markdown
Standard imports. In a script, Python module or notebook you tend to include "standard" package imports first. These might be from the standard library or be well known data science libraries.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Local package imports. These are usually found after the standard imports such as `pandas`, `matplotlib` and `numpy`. One option is to import the whole package and - optionally - provide it an alias (as you would do for `numpy` and `pandas`).
###Code
import ts_emergency as tse
###Output
_____no_output_____
###Markdown
Alternatively you can use the **from** statement to import a package. This is a stylistic choice, but one benefit of using `from` is that you don't need to retype the full path each time you use a function. That may or may not be important for your application. Another more subtle benefit I've found is that when first designing a package I tend to rename modules and sometimes change the package structure. Importing using the `from` statement means I only need to update the import section of my code - as opposed to all individual calls to a function.
###Code
from ts_emergency.datasets import load_ed_ts
from ts_emergency.plotting import (plot_single_ed, plot_eds)
###Output
_____no_output_____
###Markdown
Package information.
###Code
print(tse.__version__)
print(tse.__author__)
###Output
0.1.0
<insert your name>
###Markdown
Using docstrings for built-in help. When you are using an IDE such as Spyder, VS Code or Jupyter you can use IntelliSense to dynamically view the docstrings of modules, classes and functions included in a package. But you can also use the classic `help` built-in.
###Code
help(tse.datasets)
help(load_ed_ts)
###Output
Help on function load_ed_ts in module ts_emergency.datasets:
load_ed_ts(data_format='wide', as_pandas=True)
Load the built-in ED dataset
Params:
------
data_format: str
'Wide' or 'long' format. Wide format provides hospital columns.
Long format provides a categorical hospital column and single attends
column.
as_pandas: bool, optional (default = True)
Return as `pandas.Dataframe`. If False then `numpy.ndarray`
Returns:
-------
pandas.Dataframe or if `as_pandas=False` then returns `numpy.ndarray`
###Markdown
Using the imported package functions. Once you have imported the package functions it is just standard Python to call them and pass parameters. For example, using the functions in the `ts_emergency.datasets` namespace.
###Code
# directly imported function
df = load_ed_ts(data_format='wide', as_pandas=True)
df.head(2)
# full package path to function
df = tse.datasets.load_ed_ts()
df.head(2)
###Output
_____no_output_____
###Markdown
Using functions in the `ts_emergency.plotting` namespace
###Code
fig, ax = plot_single_ed(df, 'hosp_1')
fig = plot_eds(df)
###Output
_____no_output_____
###Markdown
Caffe2 TSNE example. This example shows you how to properly load a custom Caffe2 extension, usually in the form of a dynamic library, into Caffe2 Python and then use it. Caffe2 uses a registration pattern, and as a result, one simply needs to use the dyndep module in caffe2.python to load the extension library. What happens under the hood is that the corresponding operators get registered into the Caffe2 operator registry, and then one can create such related operators using the predefined name and calling convention. We will use the TSNE example to show this. If you haven't checked out the C++ part, read the source code, build it, and then invoke this ipython notebook.
###Code
# First, we will import the necessary dependencies.
%matplotlib inline
import os
from matplotlib import pyplot
import numpy as np
import struct
from caffe2.python import core, dyndep, workspace
from caffe2.proto import caffe2_pb2
# This is what you will need to import your custom library.
# It will load the .so file into Python, and register the
# corresponding operators to the Caffe2 operator registry.
dyndep.InitOpsLibrary('libcaffe2_tsne.so')
# Now, since we know that our custom implementation is for
# the TSNE operator, we will do a sanity check to make sure
# it is there.
'TSNE' in workspace.RegisteredOperators()
# We will create a quick helper function to load the MNIST dataset.
# If you don't have it, you can download it here:
# http://yann.lecun.com/exdb/mnist/
# Make sure you gunzip it after downloading. Some browsers may do
# that automatically for you.
def read_mnist(dataset = "training", path = "."):
"""
Python function for importing the MNIST data set. It returns an iterator
of 2-tuples with the first element being the label and the second element
being a numpy.uint8 2D array of pixel data for the given image.
"""
if dataset is "training":
fname_img = os.path.join(path, 'train-images-idx3-ubyte')
fname_lbl = os.path.join(path, 'train-labels-idx1-ubyte')
elif dataset is "testing":
fname_img = os.path.join(path, 't10k-images-idx3-ubyte')
fname_lbl = os.path.join(path, 't10k-labels-idx1-ubyte')
else:
raise ValueError, "dataset must be 'testing' or 'training'"
# Load everything in some numpy arrays
with open(fname_lbl, 'rb') as flbl:
magic, num = struct.unpack(">II", flbl.read(8))
lbl = np.fromfile(flbl, dtype=np.int8)
with open(fname_img, 'rb') as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
img = np.fromfile(fimg, dtype=np.uint8).reshape(len(lbl), rows, cols)
return lbl, img
# We will read in the MNIST dataset, and then take 5000
# examples for the sake of speed.
lbl, img = read_mnist()
img = img.reshape(60000, 28*28).astype(np.double)[:5000]
lbl = lbl[:5000]
# Now, to create an operator for Caffe2 that does TSNE, one simply
# provides the operator name, in this case "TSNE", the input name,
# the output name, and the necessary arguments.
# In the case of TSNE, we will specify that the output dims is 2,
# and we will run the iteration a maximum of 1000 times.
op = core.CreateOperator("TSNE", "img", "Y", dims=2, max_iter=1000)
# The above essentially creates a protocol buffer object that defines
# the operator. We can serialize it into a human readable format.
print(str(op))
# So, to run it, the easiest thing to do is:
# (1) Load the input to the workspace,
# (2) Run the operator,
# (3) Fetch the output from the workspace.
#
# Of course, for more complex runs, you can add this operator
# to either a net or a plan - see the official Caffe2 docs for
# detailed instructions.
workspace.FeedBlob("img", img)
workspace.RunOperatorOnce(op)
Y = workspace.FetchBlob("Y")
# In this case, let's visualize what the TSNE embedding looks like.
my_colors = pyplot.get_cmap("jet")(np.linspace(0.1, 1, 10))
for i in range(10):
pyplot.plot(Y[lbl==i, 0], Y[lbl==i, 1], '.', color=my_colors[i], label=str(i))
pyplot.legend()
pyplot.title("MNIST TSNE embedding")
###Output
_____no_output_____
###Markdown
Read file from Google Cloud Storage bucket: Write to parquet
###Code
reader = gcs_reader(auth_file='<SERVICE_ACCOUNT.json>', bucket='<BUCKET_NAME>',
datatype='avro', prefix='<FILE_PREFIX>')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_parquet(outfile='<FOLDER/FILENAME.parquet>')
###Output
_____no_output_____
###Markdown
Write to csv
###Code
reader = gcs_reader(auth_file='<SERVICE_ACCOUNT.json>', bucket='<BUCKET_NAME>',
datatype='avro', prefix='<FILE_PREFIX>')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_csv(outfile='<FOLDER/FILENAME.csv>')
###Output
_____no_output_____
###Markdown
Write to json
###Code
reader = gcs_reader(auth_file='<SERVICE_ACCOUNT.json>', bucket='<BUCKET_NAME>',
datatype='avro', prefix='<FILE_PREFIX>')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_json(outfile='<FOLDER/FILENAME.json>')
###Output
_____no_output_____
###Markdown
Read from S3 bucket: Write to parquet
###Code
reader = s3_reader(access_key='<AWS ACCESS KEY>', secret_key='<AWS SECRET KEY>', session_token='<AWS SESSION TOKEN>(if any)',
bucket='<S3 BUCKET>', prefix='<FILE PREFIX>', datatype='avro')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_parquet(outfile='<FOLDER/FILENAME.parquet>')
###Output
_____no_output_____
###Markdown
Write to csv
###Code
reader = s3_reader(access_key='<AWS ACCESS KEY>', secret_key='<AWS SECRET KEY>', session_token='<AWS SESSION TOKEN>(if any)',
bucket='<S3 BUCKET>', prefix='<FILE PREFIX>', datatype='avro')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_csv(outfile='<FOLDER/FILENAME.csv>')
###Output
_____no_output_____
###Markdown
Write to json
###Code
reader = s3_reader(access_key='<AWS ACCESS KEY>', secret_key='<AWS SECRET KEY>', session_token='<AWS SESSION TOKEN>(if any)',
bucket='<S3 BUCKET>', prefix='<FILE PREFIX>', datatype='avro')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_json(outfile='<FOLDER/FILENAME.json>')
###Output
_____no_output_____
###Markdown
Read from local filesystem: Write to parquet
###Code
reader = fs_reader(folder='<FOLDER NAME>', prefix='<FILE PREFIX>', datatype='avro')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_parquet(outfile='<FOLDER/FILENAME.parquet>')
###Output
_____no_output_____
###Markdown
Write to csv
###Code
reader = fs_reader(folder='<FOLDER NAME>', prefix='<FILE PREFIX>', datatype='avro')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_csv(outfile='<FOLDER/FILENAME.csv>')
###Output
_____no_output_____
###Markdown
Write to json
###Code
reader = fs_reader(folder='<FOLDER NAME>', prefix='<FILE PREFIX>', datatype='avro')
avro_object = AvroConvert(data=reader.get_data())
avro_object.to_json(outfile='<FOLDER/FILENAME.json>')
###Output
_____no_output_____
###Markdown
Download Data Set
###Code
import gdown
google_path = 'https://drive.google.com/uc?id='
file_id = '1VsgbtqBhVnCDpXgp9ydW53eBkfDRq7Tf'
output_name = 'drumdataset.egg'
gdown.download(google_path+file_id,output_name,quiet=False)
###Output
Downloading...
From: https://drive.google.com/uc?id=1VsgbtqBhVnCDpXgp9ydW53eBkfDRq7Tf
To: C:\Users\ADmin\Desktop\drumdataset\drumdataset.egg
5.06GB [07:51, 10.7MB/s]
###Markdown
Data Augmentation: Generate Augmentation Folder
###Code
data_augmentation = Data_augmentation(labels)
data_augmentation.make_aug_folders(True)
class_n = len(labels)
aug_n = 3
for i in range(class_n):
data_augmentation.export_data(aug_n,i)
###Output
_____no_output_____
###Markdown
Dataset Overview
###Code
data_overview = Dataset_overview(labels,data_version)
data_overview.overview()
###Output
_____no_output_____
###Markdown
Confirm all of Dataset Image Size
###Code
data_overview.confirm_size()
###Output
All images size is (256,384,3)
###Markdown
Training and Validation Dataset
###Code
data_t_v = Data_train_valid(labels,dataset_t_v_path,data_type,data_time_path,data_mel_path)
###Output
_____no_output_____
###Markdown
Generate Training and Validation Folders
###Code
data_t_v.make_folders(status=True)
###Output
Complete folders generation
###Markdown
Generate Training and Validation Data
###Code
data_t_v.gen_t_v(data_info)
###Output
Label- CH : train- 1202, valid- 118
Label- B+CH : train- 1202, valid- 118
Label- OH : train- 1202, valid- 118
Label- S+CH : train- 1202, valid- 118
Label- S+OH : train- 1200, valid- 120
Label- B+OH : train- 1202, valid- 118
Label- B : train- 1200, valid- 120
Label- S : train- 1200, valid- 120
Label- R : train- 1202, valid- 118
Label- B+R : train- 1202, valid- 118
Label- S+R : train- 1200, valid- 120
Label- B+C : train- 1200, valid- 120
Label- rest : train- 1200, valid- 120
Label- MT : train- 1200, valid- 120
Label- S+B+CH : train- 1200, valid- 120
Label- FT : train- 1200, valid- 120
Label- S+FT : train- 1200, valid- 120
Label- S+B : train- 1200, valid- 120
Label- S+C : train- 1200, valid- 120
Label- S+B+R : train- 1200, valid- 120
Label- B+FT : train- 1200, valid- 120
Label- S+B+OH : train- 1200, valid- 120
Label- MT+FT : train- 1200, valid- 120
###Markdown
Train and Validate - VGG19, ResNet34, EfficientNet-B0, B1, B2
###Code
train_dataset = DrumDataset(labels,train = True, dtype = 'mel')
valid_dataset = DrumDataset(labels,train = False, dtype = 'mel')
train_loader = DataLoader(train_dataset, batch_size = 8, shuffle = True, pin_memory = True)
valid_loader = DataLoader(valid_dataset, batch_size = 8, shuffle = True, pin_memory = True)
del train_dataset
del valid_dataset
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = EfficientNet.from_name('efficientnet-b0',in_channels=3, num_classes = 23) # VGG19(in_channel=4) // ResNet34(ResidualBlock, resnet_para[2], in_channel=4) // EfficientNet.from_name('efficientnet-b"n"',in_channels=4, num_classes = 23)
net.train()
net.to(device)
criterion = nn.CrossEntropyLoss().cuda()
optimizer = optim.Adam(net.parameters(),lr=efficientnet_para[0])
decayRate = efficientnet_para[1]
lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=decayRate)
iteration = 0
iterations = []
acc_trains = []
loss_trains = []
acc_valids = []
loss_valids = []
correct_valid = 0
total_valid = 0
correct_train = 0
total_train = 0
for epoch in range(40): # loop over the dataset multiple times
running_loss_train = 0.0
for i, data in enumerate(train_loader, 0):
# get the inputs
net.train()
input_train, label_train = data
del data
input_train, label_train = input_train.to(device).float(), label_train.to(device).long()
# zero the parameter gradients
optimizer.zero_grad()
pred_train = net(input_train)
#del input_train
loss_train = criterion(pred_train, label_train)
loss_train.backward()
optimizer.step()
_, p_train = torch.max(pred_train, 1)
correct_train += torch.sum(p_train == label_train)
total_train += len(label_train)
iteration += 1
# print statistics
running_loss_train += loss_train.item()
if i % 400 == 399: # evaluate every 400 mini-batches (batch size 8)
acc_train = 100*correct_train/total_train
net.eval()
with torch.no_grad():
running_loss_valid = 0.0
for j, data in enumerate(valid_loader):
input_valid, label_valid = data
del data
input_valid, label_valid = input_valid.to(device).float(), label_valid.to(device).long()
pred_valid = net(input_valid)
del input_valid
loss_valid = criterion(pred_valid, label_valid)
running_loss_valid += loss_valid.item()
_, p_valid = torch.max(pred_valid, 1)
correct_valid += torch.sum(p_valid == label_valid)
total_valid += len(label_valid)
acc_valid = 100*correct_valid/total_valid
del correct_valid, total_valid
acc_valids.append(acc_valid)
loss_valids.append(running_loss_valid/j)
acc_trains.append(acc_train)
loss_trains.append(running_loss_train/400)
iterations.append(iteration)
print('[%d, %5d] loss_t: %.3f, accuracy_t: %.3f, loss_v: %.3f, accuracy_v: %.3f - iteration(400) : %d' %
(epoch + 1, i + 1, running_loss_train / 400, acc_train, running_loss_valid/j, acc_valid, iteration // 400))
correct_train = 0
total_train = 0
correct_valid = 0
total_valid = 0
running_loss_train = 0.0
running_loss_valid = 0.0
if epoch >= 2:
save_path = "E:/efficientnet_b0_result_epoch_"+str(epoch)+"_"+ str(iteration // 400) +".pth"
else:
save_path = "E:/efficientnet_b0_result.pth"
torch.save(net.state_dict(), save_path)
lr_scheduler.step()
print('Finished Training')
plt.subplot(2,1,1)
plt.plot(iterations,loss_trains, color='green', linewidth=2)
plt.plot(iterations,loss_valids, color='blue', linewidth=2)
plt.xlabel('Iteration')
plt.ylabel('Loss')
#plt.xlim(0,4600*3)
plt.legend(['Train', 'Valid'])
plt.subplot(2,1,2)
plt.plot(iterations,acc_trains, color='green', linewidth=2)
plt.plot(iterations,acc_valids, color='blue', linewidth=2)
plt.xlabel('Iteration')
plt.ylabel('Accuracy')
#plt.xlim(0,4600*3)
plt.legend(['Train', 'Valid'])
efficientnet_b0_loss_acc = {'iteration':iterations, 'loss_train':loss_trains, 'loss_valid':loss_valids, 'acc_train':acc_trains, 'acc_valid':acc_valids}
with open('E:/loss,accuracy/efficientnet_b0_loss_acc.pickle','wb') as fw:
pickle.dump(efficientnet_b0_loss_acc, fw)
# with open('E:/loss,accuracy/efficientnet_b0_loss_acc.pickle','rb') as fr:
# data = pickle.load(fr)
###Output
C:\Users\ADmin\anaconda3\envs\new_torch\lib\site-packages\torch\storage.py:34: FutureWarning: pickle support for Storage will be removed in 1.5. Use `torch.save` instead
warnings.warn("pickle support for Storage will be removed in 1.5. Use `torch.save` instead", FutureWarning)
###Markdown
Validation - VGG19, ResNet34, EfficientNet-B0, B1, B2
###Code
valid_dataset = DrumDataset(labels,train = False,dtype='mel')
valid_loader = DataLoader(valid_dataset, batch_size = 8, shuffle = True, num_workers = 0)
del valid_dataset
total_valid = 0
correct_valid = 0
correct_valid_3 = 0
pred = []
corr = []
save_path="E:parameter/efficientnet_b0_result_mel_aug.pth"
net = EfficientNet.from_name('efficientnet-b0',in_channels=3, num_classes = 23)
net.load_state_dict(torch.load(save_path))
net.eval()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
with torch.no_grad():
for input_valid,label_valid in valid_loader:
input_valid = input_valid.cuda()
label_valid = label_valid.cuda()
input_valid = input_valid.to("cuda").float()
label_valid = label_valid.to("cuda").long()
pred_valid = net(input_valid)
_, predicted = torch.max(pred_valid, 1)
_, predicted_3 = torch.topk(pred_valid,3)
correct_valid += torch.sum(predicted == label_valid)
total_valid += len(label_valid)
if len(label_valid) < 8:
for n,i in enumerate(label_valid): # remaining partial batch
correct_valid_3 += torch.sum(predicted_3[n] == i)
else:
for i in range(8): # full batch size
correct_valid_3 += torch.sum(predicted_3[i] == label_valid[i])
corr.append(label_valid.cpu().tolist())
pred.append(predicted.cpu().tolist())
temp = 100*correct_valid.cpu().tolist()/total_valid
temp_3 = 100*correct_valid_3.cpu().tolist()/total_valid
print('Validset Top-1 Accuracy: ' + str(temp))
print('Validset Top-3 Accuracy: ' + str(temp_3))
c_p_array = np.zeros([23,23])
cor = [[int(y) for y in x] for x in corr]
for i,j in zip(cor,pred):
c_p_array[i,j] += 1
df = pd.DataFrame(c_p_array, labels, labels)
plt.figure(figsize=(15,15))
sns.heatmap(data = df, annot=True, fmt = '.0f', linewidths=.5, cmap='RdYlGn_r')
###Output
_____no_output_____
###Markdown
Test - VGG19, ResNet34, EfficientNet-B0, B1, B2
###Code
test_dataset = DrumDataset_test(labels,dataset_test_path,dtype='mel')
test_loader = DataLoader(test_dataset, batch_size = 8, shuffle = True, num_workers = 0)
del test_dataset
total_test = 0
correct_test = 0
correct_test_3 = 0
pred = []
corr = []
save_path="E:parameter/efficientnet_b0_result_mel_aug.pth"
net = EfficientNet.from_name('efficientnet-b0',in_channels=3, num_classes = 23)
net.load_state_dict(torch.load(save_path))
net.eval()
net.to(device)
with torch.no_grad():
for input_test,label_test in test_loader:
input_test = input_test.cuda()
label_test = label_test.cuda()
input_test = input_test.to("cuda").float()
label_test = label_test.to("cuda").long()
pred_test = net(input_test)
_, predicted = torch.max(pred_test, 1)
_, predicted_3 = torch.topk(pred_test,3)
correct_test += torch.sum(predicted == label_test)
total_test += len(label_test)
if len(label_test) < 8:
for n,i in enumerate(label_test): # remaining partial batch
correct_test_3 += torch.sum(predicted_3[n] == i)
else:
for i in range(8): # full batch size
correct_test_3 += torch.sum(predicted_3[i] == label_test[i])
corr.append(label_test.cpu().tolist())
pred.append(predicted.cpu().tolist())
temp = 100*correct_test.cpu().tolist()/total_test
temp_3 = 100*correct_test_3.cpu().tolist()/total_test
print('Testset Top-1 Accuracy: ' + str(temp))
print('Testset Top-3 Accuracy: ' + str(temp_3))
c_p_array = np.zeros([23,23])
cor = [[int(y) for y in x] for x in corr]
for i,j in zip(cor,pred):
c_p_array[i,j] += 1
df = pd.DataFrame(c_p_array, labels, labels)
plt.figure(figsize=(15,15))
sns.heatmap(data = df, annot=True, fmt = '.0f', linewidths=.5, cmap='RdYlGn_r')
###Output
_____no_output_____
###Markdown
Welcome to an example Binder This notebook uses a Python environment with a few libraries, including `dask`, all of which were specified using a `conda` [environment.yml](../edit/environment.yml) file. To demo the environment, we'll show a simplified example of using `dask` to analyze time series data, adapted from Matthew Rocklin's excellent repo of [dask examples](https://github.com/blaze/dask-examples) — check out that repo for the full version (and many other examples). Setup plotting
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Turn on a global progress bar
###Code
from dask.diagnostics import ProgressBar
progress_bar = ProgressBar()
progress_bar.register()
###Output
_____no_output_____
###Markdown
Generate fake data
###Code
import dask.dataframe as dd
df = dd.demo.make_timeseries(start='2000', end='2015', dtypes={'A': float, 'B': int},
freq='5s', partition_freq='3M', seed=1234)
###Output
_____no_output_____
###Markdown
Compute and plot a cumulative sum
###Code
df.A.cumsum().resample('1w').mean().compute().plot();
###Output
[########################################] | 100% Completed | 16.5s
###Markdown
DSE Python implementation of discrete skeleton evolution, a skeleton pruning algorithm: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.79.8377&rep=rep1&type=pdf [Example](https://github.com/originlake/DSE/blob/master/example.ipynb)This algorithm filters out branches by evaluating their reconstruction weights. The original paper measures the weights as the ratio of a branch's reconstruction pixel loss to the whole reconstruction; here the weights are simply the reconstruction pixel loss.
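To make the pruning criterion concrete, here is a minimal, illustrative sketch of the "reconstruction pixel loss" idea (this is not the package's implementation; the function names and the disc-based reconstruction are assumptions for illustration): the shape is reconstructed as a union of discs centred on skeleton pixels with radius equal to the distance-transform value, and a branch's weight is the number of reconstructed pixels lost when that branch is removed.

```python
# Illustrative only: branch weight = reconstructed pixels lost when the branch is removed.
import numpy as np

def reconstruct(points, radii, shape):
    """Union of discs centred on skeleton pixels; radius = distance-transform value."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for (y, x), r in zip(points, radii):
        mask |= (yy - y) ** 2 + (xx - x) ** 2 <= r ** 2
    return mask

def branch_weight(points, radii, branch_indices, shape):
    """Pixel loss when the skeleton pixels listed in `branch_indices` are dropped."""
    full = reconstruct(points, radii, shape)
    keep = [i for i in range(len(points)) if i not in set(branch_indices)]
    reduced = reconstruct([points[i] for i in keep], [radii[i] for i in keep], shape)
    return int(full.sum()) - int(reduced.sum())
```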
###Code
from dsepruning import skel_pruning_DSE
import numpy as np
from skimage.io import imread
from skimage.morphology import medial_axis, skeletonize
from scipy.ndimage import distance_transform_edt
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = [10, 10]
mask = imread('img/skel.png')
mask = mask > 0
plt.imshow(mask)
plt.show()
skel = skeletonize(mask)
print("Show skeleton by skeletonize:")
plt.imshow(skel);plt.show()
dist = distance_transform_edt(mask, return_indices=False, return_distances=True)
print("Show distance map:")
plt.imshow(dist);plt.show()
new_skel = skel_pruning_DSE(skel, dist, 100)
print("Show pruned skeleton")
plt.imshow(new_skel);plt.show()
skel, dist = medial_axis(mask, return_distance=True)
print("Show skeleton by skeletonize:")
plt.imshow(skel);plt.show()
print("Show distance map:")
plt.imshow(dist);plt.show()
new_skel = skel_pruning_DSE(skel, dist, 100)
print("Show pruned skeleton")
plt.imshow(new_skel);plt.show()
###Output
Show skeleton by skeletonize:
###Markdown
Using H5Web in the notebook Display a simple HDF5 file
###Code
import numpy as np
import h5py
with h5py.File("simple.h5", "w") as h5file:
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
Xg, Yg = np.meshgrid(X, Y)
h5file['threeD'] = [np.sin(2*np.pi*f*np.sqrt(Xg**2 + Yg**2)) for f in np.arange(0.1, 1.1, 0.1)]
h5file['twoD'] = np.sin(np.sqrt(Xg**2 + Yg**2))
h5file['oneD'] = X
h5file['scalar'] = 42
from jupyterlab_h5web import H5Web
H5Web('simple.h5')
###Output
_____no_output_____
###Markdown
Display a NeXus file
###Code
import numpy as np
import h5py
with h5py.File("nexus.nx", "w") as h5file:
root_group = h5file
root_group.attrs["NX_class"] = "NXroot"
root_group.attrs["default"] = "entry"
entry = root_group.create_group("entry")
entry.attrs["NX_class"] = "NXentry"
entry.attrs["default"] = "process/spectrum"
process = entry.create_group("process")
process.attrs["NX_class"] = "NXprocess"
process.attrs["default"] = "spectrum"
spectrum = process.create_group("spectrum")
spectrum.attrs["NX_class"] = "NXdata"
spectrum.attrs["signal"] = "data"
spectrum.attrs["auxiliary_signals"] = ["aux1", "aux2"]
data = np.array([np.linspace(-x, x, 10) for x in range(1, 6)])
spectrum["data"] = data ** 2
spectrum["aux1"] = -(data ** 2)
spectrum["aux2"] = -data
spectrum["data"].attrs["interpretation"] = "spectrum"
image = process.create_group("image")
image.attrs["NX_class"] = "NXdata"
image.attrs["signal"] = "data"
x = np.linspace(-5, 5, 50)
x0 = np.linspace(10, 100, 10)
image["data"] = [a*x**2 for a in x0]
image["X"] = np.linspace(-2, 2, 50, endpoint=False)
image["X"].attrs["units"] = u"µm"
image["Y"] = np.linspace(0, 0.1, 10, endpoint=False)
image["Y"].attrs["units"] = "s"
image.attrs["axes"] = ["X"]
image.attrs["axes"] = ["Y", "X"]
from jupyterlab_h5web import H5Web
H5Web('nexus.nx')
###Output
_____no_output_____
###Markdown
Imports
###Code
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Note on Function Definition for using tf.optimizer The function that is to be optimized using `tf.optimizer` must not take any arguments and should only use global variables. It is possible to wrap a function that takes arguments with a non-parameterised one to use a global variable. The underlying function being wrapped must still use TensorFlow datatypes (specifically `tf.Variable`) and have TensorFlow operations performed on them. A TensorFlow `Variable` is the recommended way to represent shared, persistent state your program manipulates. A `tf.Variable` represents a tensor whose value can be changed by running ops on it. Thus functions defined using, e.g., `numpy` datatypes `ndarray` or `float64` and operations on them won't work. numpy Function Definition
###Code
# Sphere function, the basic test
def sphere(x) -> float:
return float(np.sum(x**2))
###Output
_____no_output_____
###Markdown
tensorflow Function Definition
###Code
def tf_sphere(x: tf.Variable) -> tf.Variable:
return tf.math.reduce_sum(x**2)
###Output
_____no_output_____
###Markdown
tensorflow Function Wrapper
###Code
def wrap_tf_sphere() -> tf.Variable:
return tf_sphere(var)
###Output
_____no_output_____
###Markdown
Global Variable Declaration
###Code
x = [float(i) for i in range(2000)]
###Output
_____no_output_____
###Markdown
tensorflow Variable
###Code
var = tf.Variable(x)
var.numpy()
###Output
_____no_output_____
###Markdown
Global Variable Function Definition
###Code
def sphere_global():
return tf.math.reduce_sum(var**2)
###Output
_____no_output_____
###Markdown
Check Function Outputs tensorflow function
###Code
tf_sphere(var).numpy()
###Output
_____no_output_____
###Markdown
global function
###Code
sphere_global().numpy()
###Output
_____no_output_____
###Markdown
numpy function
###Code
sphere(np.array(x))
###Output
_____no_output_____
###Markdown
Define Optimizer SGD
###Code
opt_sgd = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
###Output
_____no_output_____
###Markdown
run SGD on wrapped tf function
###Code
for step in range(1000):
step_count = opt_sgd.minimize(wrap_tf_sphere, [var]).numpy()
print(sphere_global().numpy())
print(var.numpy())
###Output
0.0
[ 0.0000000e+00 -5.4671611e-25 -1.0934322e-24 ... -1.0917790e-21
-1.0923251e-21 -1.0928558e-21]
###Markdown
Reset global variable
###Code
y = [float(i) for i in range(2000)]
var = tf.Variable(y)
###Output
_____no_output_____
###Markdown
Run SGD on global function
###Code
for step in range(1000):
step_count = opt_sgd.minimize(sphere_global, [var]).numpy()
print(sphere_global().numpy())
print(var.numpy())
###Output
0.0
[ 0.0000000e+00 -5.4671611e-25 -1.0934322e-24 ... -1.0917790e-21
-1.0923251e-21 -1.0928558e-21]
###Markdown
Run ADAM on wrapped tf function
###Code
z = [float(i) for i in range(2000)]
var = tf.Variable(z)
opt_adam = tf.keras.optimizers.Adam(learning_rate=1, epsilon=0.1)
for step in range(10000):
step_count = opt_adam.minimize(wrap_tf_sphere, [var]).numpy()
print(sphere_global().numpy())
print(var.numpy())
###Output
0.0
[ 0.0000000e+00 -5.8140770e-38 5.8415879e-38 ... -1.5561156e-37
-1.5545717e-37 -1.5528495e-37]
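As a variation on the wrapping pattern discussed above, the variable can also be captured in a closure instead of a module-level global. The following is only a sketch of that alternative (not used elsewhere in this notebook); it relies on the same zero-argument-callable requirement of `minimize`.

```python
# Sketch: build a zero-argument loss via a closure instead of a global variable.
import tensorflow as tf

def make_loss(variable):
    def loss():                      # the optimizer expects a callable taking no arguments
        return tf.math.reduce_sum(variable ** 2)
    return loss

v = tf.Variable([3.0, -4.0])
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
for _ in range(100):
    opt.minimize(make_loss(v), [v])  # same call pattern as above, different way of binding the variable
print(v.numpy())                     # the entries shrink towards zero
```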
###Markdown
1. If you just want to predict a few chemicals.
###Code
## step 1. translate smiles to graphs
smiles = ['CC1CCCCC1OC(C)=O', 'CCC(O)CO', 'CCCCCCCC\C=C/CCCCCCCC(O)=O',
'COc1cc(ccc1N)[N+]([O-])=O', 'CCCC[Sn](=O)CCCC','CCCC[Sn](=O)CCCC']
graphs = [smile_to_graph(smile, None, None) for smile in smiles]
bg = dgl.batch(graphs)
## step 2: load model
myGAT = MyGAT(args)
myGAT.load_state_dict(torch.load(args.model_file, map_location=torch.device('cpu')))
output, readout, att = fit(args, myGAT, bg)
torch.argmax(output, dim=1)
###Output
_____no_output_____
###Markdown
2. If you want to predict a large number of chemicals, I suggest putting them into a CSV file.
###Code
# step 1. read chemicals file
import pandas as pd
data_file = pd.read_csv(args.example_file)
data_file.head()
def collate_molgraphs(data):
"""Batching a list of datapoints for dataloader.
Parameters
----------
data : list of 3-tuples or 4-tuples.
Each tuple is for a single datapoint, consisting of
a SMILES, a DGLGraph, all-task labels and optionally a binary
mask indicating the existence of labels.
Returns
-------
smiles : list
List of smiles
bg : DGLGraph
The batched DGLGraph.
labels : Tensor of dtype float32 and shape (B, T)
Batched datapoint labels. B is len(data) and
T is the number of total tasks.
masks : Tensor of dtype float32 and shape (B, T)
Batched datapoint binary mask, indicating the
existence of labels.
"""
if len(data[0]) == 3:
smiles, graphs, labels = map(list, zip(*data))
else:
smiles, graphs, labels, masks = map(list, zip(*data))
bg = dgl.batch(graphs)
bg.set_n_initializer(dgl.init.zero_initializer)
bg.set_e_initializer(dgl.init.zero_initializer)
labels = torch.stack(labels, dim=0)
if len(data[0]) == 3:
masks = torch.ones(labels.shape)
else:
masks = torch.stack(masks, dim=0)
return smiles, bg, labels, masks
def get_dataloader(df, batch_size, collate_fn, shuffle=True, get_dataset=False):
dataset = MoleculeCSVDataset(df = df,
smiles_to_graph=smile_to_graph,
node_featurizer = None,
edge_featurizer = None,
smiles_column='Smiles',
cache_file_path="./degradation_example.bin")
if not get_dataset:
return DataLoader(dataset = dataset, batch_size = batch_size, shuffle = shuffle, collate_fn = collate_fn)
else:
return dataset, DataLoader(dataset = dataset, batch_size = batch_size, shuffle = shuffle, collate_fn = collate_fn)
## step 2. make datset
batch_size = 10
dataset = MoleculeCSVDataset(df = data_file,
smiles_to_graph=smile_to_graph,
node_featurizer = None,
edge_featurizer = None,
smiles_column='smiles',
cache_file_path="./degradation_dataset.bin")
data_loader = DataLoader(dataset = dataset, batch_size = batch_size, collate_fn = collate_molgraphs)
## setp 3. load model
myGAT = MyGAT(args)
myGAT.load_state_dict(torch.load(args.model_file, map_location=torch.device('cpu')))
## step 4. fit
myGAT.eval()
result = []
for idx, data in enumerate(data_loader):
smiles, graphs, _,_ = data
logits, readout, att = fit(args, myGAT, graphs)
result.extend(torch.argmax(logits.detach(), dim=1).tolist())
#result
## step 5. save result
result = pd.DataFrame(result)
result.to_csv(args.result_file, index=None, header=None)
###Output
_____no_output_____
###Markdown
3. If you want to visualize attention scores on the chemical structure
###Code
from functools import partial
from IPython.display import SVG, display
import matplotlib
import matplotlib.cm as cm
from rdkit.Chem import rdDepictor
from rdkit.Chem.Draw import rdMolDraw2D
def svg_draw(mol, g, node_attention, bond_attention):
"""
mol_to_svg:
args:
mol: mol object of rdkit
grapg:
node_attention: 节点attention
bond_attention: 节点attention
return: svg
"""
# 绘制edge_attention
min_value = torch.min(bond_attention)
max_value = torch.max(bond_attention)
bond_attention = (bond_attention - min_value) // (max_value - min_value) # normalization
# Conver the weights to atom colors
#norm = matplotlib.colors.Normalize(vmin=0, vmax=1.0)
norm = matplotlib.colors.Normalize(vmin=min_value, vmax=max_value)
cmap = cm.get_cmap('Accent')
plt_colors = cm.ScalarMappable(norm=norm, cmap=cmap)
bond_colors = {i: plt_colors.to_rgba(bond_attention[i*2].data.item()) for i in range((g.number_of_edges()-g.number_of_nodes())//2)}
rdDepictor.Compute2DCoords(mol)
drawer = rdMolDraw2D.MolDraw2DSVG(500,250)
drawer.SetFontSize(1)
op = drawer.drawOptions()
mol = rdMolDraw2D.PrepareMolForDrawing(mol)
#print(len(bond_colors), len(list(range(g.number_of_edges() // 2))))
drawer.DrawMolecule(mol,highlightAtoms=None,highlightBonds=list(range(len(bond_colors))), highlightBondColors=bond_colors)
drawer.FinishDrawing()
svg = drawer.GetDrawingText()
svg = svg.replace('svg:','')
return svg
def draw(mol_idxs, dataset, model, col=None):
"""Visualize the learned atom weights in readout && bond attention.
Parameters
----------
mol_id : int
Index for the molecule to visualize in the dataset.
dataset
As the model has multiple rounds of readout, an additional
index is used to specify the round for the weights.
"""
# Get the weights from the model.
smiles = []
graphs = []
for idx in mol_idxs:
smile, g, _, _ = dataset[idx]
smiles.append(smile)
graphs.append(g)
bg = dgl.batch(graphs)
logit, readout, bond_attentions = fit(args, model, bg)
bond_attention_split = []
if col is not None:
bond_attentions = torch.squeeze(bond_attentions)[:, col]
for i in range(len(bg.batch_num_edges())):
if i == 0:
bond_attention = bond_attentions[0:bg.batch_num_edges()[0].item()]
else:
bond_attention = bond_attentions[
torch.sum(bg.batch_num_edges()[:i]).item():
torch.sum(bg.batch_num_edges()[:i+1]).item()]
bond_attention_split.append(bond_attention)
else:
for i in range(len(bg.batch_num_edges())):
if i == 0:
bond_attention, _= torch.max(bond_attentions[0:bg.batch_num_edges()[0].item()], dim=1)
else:
bond_attention, _= torch.max(bond_attentions[
torch.sum(bg.batch_num_edges()[:i]).item() :
torch.sum(bg.batch_num_edges()[:i+1]).item()
], dim=1)
bond_attention = torch.tensor([1 if i > 0.5 else 0 for i in bond_attention.detach().cpu()])
bond_attention_split.append(bond_attention)
mols = [Chem.MolFromSmiles(s) for s in smiles]
svgs = [svg_draw(mols[i], graphs[i], None, bond_attention_split[i].squeeze()) for i in range(len(graphs))]
for i in range(len(graphs)):
display(SVG(svgs[i]))
dataset = MoleculeCSVDataset(df = data_file,
smiles_to_graph=smile_to_graph,
node_featurizer = None,
edge_featurizer = None,
smiles_column='smiles',
cache_file_path="./degradation_dataset.bin")
# step 1. you should specify index on dataste that you want to draw.
draw_list = list(range(0,10))
# step 2. load model
myGAT = MyGAT(args)
myGAT.load_state_dict(torch.load(args.model_file, map_location=torch.device('cpu')))
# step 3. draw
draw(draw_list, dataset, myGAT, col=None)
###Output
Processing dgl graphs from scratch...
###Markdown
Credit Card Number OCR Project 💳 Welcome to a demonstration of how this repository works! This demo notebook will show how you can use this library to extract credit card numbers from images.First we can simply import our core class called `CreditCardOCR` from the library ...
###Code
from core.credit_card_ocr import CreditCardOCR
###Output
_____no_output_____
###Markdown
Next we can create an instance of the class by providing a path to an image to be processed, as well as a path to an image reference for credit card digits.
###Code
card1 = CreditCardOCR(
path_to_image='credit_cards/creditcard2.jpeg',
path_to_reference_image='reference/digit_reference.png'
)
###Output
_____no_output_____
###Markdown
Our class allows us to easily look at our image and how it changes throughout the process of implementing our OCR logic. We can observe the ...- Original Image- Greyscale Image- Tophat Image- Sobel Image- Closed Sobel ImageThis is a typical image processing pipeline for determining bounding boxes on an image. Original Image
###Code
card1.show_original_image()
###Output
_____no_output_____
###Markdown
Greyscale Image
###Code
card1.show_grayscale_image()
###Output
_____no_output_____
###Markdown
Tophat Image
###Code
card1.show_tophat_image()
###Output
_____no_output_____
###Markdown
Sobel Gradient Image
###Code
card1.show_sobel_image()
###Output
_____no_output_____
###Markdown
Closed Sobel Gradient Image
###Code
card1.show_closed_sobel_image()
###Output
_____no_output_____
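For readers curious what these stages typically look like in code, below is a rough OpenCV sketch of a generic tophat, Sobel, and closing pipeline. It only illustrates the general technique; the function name and parameter choices are assumptions, and it is not the internal implementation of `CreditCardOCR`.

```python
# Generic tophat -> Sobel -> closing sketch (illustration only, not the class internals).
import cv2
import numpy as np

def digit_group_mask(image_path):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)   # emphasize light digits on the darker card
    grad = np.absolute(cv2.Sobel(tophat, cv2.CV_32F, 1, 0, ksize=-1))
    grad = (255 * (grad - grad.min()) / (grad.max() - grad.min() + 1e-9)).astype("uint8")
    closed = cv2.morphologyEx(grad, cv2.MORPH_CLOSE, kernel)    # merge nearby strokes into digit groups
    return cv2.threshold(closed, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
```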
###Markdown
Extract the Card NumberFinally, we can run the full pipeline of processing the image and obtaining the 16-digit credit card number by calling the `.process_credit_card()` method. This method returns the card number and shows a final image of the credit card with bounding boxes and the recognized digits for each box.
###Code
card1.process_credit_card()
###Output
Credit Card Number: 5412 7500 0000 0002
###Markdown
Connect to the FTP server.
###Code
dwt = darwinex_ticks.DarwinexTicksConnection(dwx_ftp_user='your username',
dwx_ftp_pass='your pass',
dwx_ftp_hostname='tickdata.darwinex.com',
dwx_ftp_port=21)
dwt.available_assets
dwt.list_of_files('STOXX50E').head(10)
data = dwt.ticks_from_darwinex('EURUSD', cond='2017-10-01 22', side='ASK')
data.head()
data = dwt.ticks_from_darwinex('EURUSD', cond='2018-08-02 13')
data = dwt.ticks_from_darwinex(['NZDUSD','NZDJPY'],
start='2018-10-01 09', end='2018-10-01 11')
data = dwt.ticks_from_darwinex(['STOXX50E', 'SPXm', 'GDAXIm'],
cond='2018-10-01', side='Ask', verbose=True)
data
data.loc['GDAXIm'].tail()
data = dwt.ticks_from_darwinex('EURUSD', cond='2018-11-01 12', separated=True)
data
data['BID'].tail()
###Output
_____no_output_____
###Markdown
Resize border
###Code
print('border:{}'.format(wsi.get_border()))
borders=wsi.get_border()
x_new=wsi.resize_border(borders[0][0],factor=32)
x_new
###Output
border:[(3981.0, 118531.0), (11202.0, 92659.0)]
###Markdown
Detect components
###Code
img,borders=wsi.detect_components()
plt.imshow(img[-1])
###Output
_____no_output_____
###Markdown
Generate region
###Code
wsi.dimensions
border=borders[14]
(x1,x2),(y1,y2)=border
border
wsi2=openslide.OpenSlide(WSI_PATH)
region=wsi2.read_region((4032,48576),3,(50000,50000))
region_new=cv2.resize(np.array(region.convert('RGB')),(250,250))
plt.imshow(region_new)
region=wsi.generate_region(mag=3,x=(x1,x2),y=(y1,y2),x_size=50000,y_size=50000)
plt.imshow(region[0])
###Output
_____no_output_____
###Markdown
Patching
###Code
patch=patching.Patching(wsi,mag_level=3,step=1024,size=(1024,1024))
#patch.save('images',mask_flag=True)
###Output
_____no_output_____
###Markdown
filter patches
###Code
patch.filter_patches(210)
patch.save('images',mask_flag=True)
masks=glob.glob('images/masks/*')
for m in masks:
mask=cv2.imread(m)
mask=mask2rgb(mask[:,:,0])
plt.imshow(mask)
plt.show()
###Output
Num removed: 73
Remaining:23
###Markdown
Get labels
###Code
patch.generate_labels(0.5)
patch.plotlabeldist()
###Output
_____no_output_____
###Markdown
Stitching
###Code
stitch=Stitching('images/images',name='2865 B2 LN.ndpi',mag_level=3)
canvas=stitch.stitch(size=(2500,2500))
plt.imshow(canvas)
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
calculate_weights(mask_path='images/masks',num_cls=4)
calculate_std_mean('images/images')
###Output
[0. 0. 0.]
total number pixels: 24117248
mean: [0.74730681 0.59215717 0.75368948], std: [0.12720346 0.21965498 0.15878633]
###Markdown
Missing
###Code
image=np.array(wsi.get_thumbnail((1000,1000)))
plt.imshow(image)
plt.show()
hist1 = cv2.calcHist([image],[0],None,[256],[0,256])
hist2 = cv2.calcHist([image],[1],None,[256],[0,256])
hist3 = cv2.calcHist([image],[2],None,[256],[0,256])
plt.subplot(222), plt.plot(hist1), plt.plot(hist2),plt.plot(hist3)
###Output
_____no_output_____
###Markdown
Submit your work!To submit your work, [get your slack id](https://moshfeu.medium.com/how-to-find-my-member-id-in-slack-workspace-d4bba942e38c) and assign your slack id to a `slack_id` variable in the cell below. Example: `slack_id = "UTS63FC02"`
###Code
### BEGIN SOLUTION
slack_id = "UTS63FC02"
### END SOLUTION
# slack_id =
from submit import submit
assert slack_id is not None
submit(slack_id=slack_id, learning_unit=0)
###Output
_____no_output_____
###Markdown
VisuaLIME exampleThis brief introduction shows you how to generate a visual explanation for the classification of an image by a deep learning computer vision model. Load an image-classification modelFor this example, we're using a relatively small pre-trained model provided as part of the TensorFlow deep learning library. If you haven't installed TensorFlow in your current Python environment, uncomment and run the following line:
###Code
#!pip install tensorflow-cpu
###Output
_____no_output_____
###Markdown
Then, we can load the model and the corresponding preprocessing function:
###Code
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
model = MobileNetV2()
###Output
_____no_output_____
###Markdown
Since LIME is a black box explanation method, it does not "know" anything about how to call the model to produce predictions. Instead, we need to provide a function that simply takes an array of images and returns the corresponding outputs:
###Code
def predict_fn(image):
return model.predict(preprocess_input(image))
###Output
_____no_output_____
###Markdown
Load an imageWe'll load an image hosted on the internet as part of the [XAI Demonstrator project](https://github.com/xai-demonstrator/xai-demonstrator). Alternatively, you can load an image from your hard drive or from a different URL.
###Code
from urllib.request import urlopen
import numpy as np
from PIL import Image
full_image = Image.open(urlopen("https://storage.googleapis.com/xai-demo-assets/visual-inspection/images/table.jpg"))
full_image
###Output
_____no_output_____
###Markdown
We'll just select a single object:
###Code
img = full_image.crop((766, 90, 990, 314))
img
###Output
_____no_output_____
###Markdown
VisuaLIME takes in images as Numpy arrays:
###Code
image = np.array(img)
image.shape
###Output
_____no_output_____
###Markdown
Note that the image is 224 by 224 pixels, which is exactly the size our `model` expects. In general, it is advisable to compute explanations on the same scale as the model's input. So if your image is larger or smaller than the size expected by the model, rescale it before passing it to the VisuaLIME algorithm (see the sketch below). Let's see whether we can predict what's in the image:
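For completeness, such a resize step could look like the following. This is a hypothetical helper using Pillow, not part of VisuaLIME, and it is not needed here because the crop is already 224 by 224.

```python
# Hypothetical resize helper (Pillow); only needed if the crop were not already 224x224.
import numpy as np
from PIL import Image

def to_model_input(pil_image, size=(224, 224)):
    return np.array(pil_image.resize(size))

# image = to_model_input(img)   # would replace the plain np.array(img) call above
```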
###Code
prediction = predict_fn(image[None,:,:,:])
prediction.shape
###Output
_____no_output_____
###Markdown
We see that the output contains one classification result of length 1000. Each of the 1000 entries corresponds to the likelihood that the image belongs to that particular class.Let's see what the model sees in the picture:
###Code
np.argmax(prediction, axis=1)
###Output
_____no_output_____
###Markdown
So it's class 759. We can decode this either by looking up the ImageNet categories or using the provided decoder function:
###Code
from tensorflow.keras.applications.mobilenet import decode_predictions
decode_predictions(prediction, top=1)
###Output
_____no_output_____
###Markdown
Great, so the model correctly identifies the camera in the image. But what exactly does it look at? Compute an explanationTo find out, import the two main functions of `visualime`:
###Code
from visualime.explain import explain_classification, render_explanation
###Output
_____no_output_____
###Markdown
First, we'll compute the explanation:
###Code
segment_mask, segment_weights = explain_classification(image=image, predict_fn=predict_fn, num_of_samples=128)
###Output
_____no_output_____
###Markdown
Then, we can generate the visual output:
###Code
render_explanation(image, segment_mask, segment_weights, positive="green", negative="red", coverage=0.05)
###Output
_____no_output_____
###Markdown
Standard errors for calibrated parameters: ExampleConsider a simple model with two structural parameters $(\theta_1,\theta_2)$ and three reduced-form moments $(\mu_1,\mu_2,\mu_3)$. The theoretical mapping between parameters and moments is given by$$\begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix} = \begin{pmatrix} \theta_1 \\ \theta_1+\theta_2 \\ 2\theta_2 \end{pmatrix} = h(\theta_1,\theta_2).$$We observe the noisy estimates $(\hat{\mu}_1,\hat{\mu}_2,\hat{\mu}_3) = (1.1,0.8,-0.1)$ of the true moments. The standard errors of the three empirical moments are $(\hat{\sigma}_1,\hat{\sigma}_2,\hat{\sigma}_3)=(0.1,0.2,0.05)$.We will estimate the parameters $(\theta_1,\theta_2)$ by minimum distance, matching the model-implied moments $h(\theta_1,\theta_2)$ to the empirical moments:$$\hat{\theta} = \text{argmin}_{\theta}\; (\hat{\mu}-h(\theta))'\hat{W}(\hat{\mu}-h(\theta)).$$To compute standard errors for the estimated parameters, test hypotheses, and compute the efficient weight matrix $\hat{W}$, we use the formulas in [Cocci & Plagborg-Møller (2021)](https://scholar.princeton.edu/mikkelpm/calibration), which do not require knowledge of the correlation structure of the empirical moments. Define the modelWe first import relevant packages and define the model and data.
###Code
import numpy as np
from stderr_calibration import MinDist # Minimum distance routines
# Define moment function h(.)
G = np.array([[1,0],[1,1],[0,2]])
h = lambda theta: theta @ G.T
# Define empirical moments and their s.e.
mu = np.array([1.1,0.8,-0.1])
sigma = np.array([0.1,0.2,0.05])
# Define MinDist object used in later analysis
obj = MinDist(h,mu,moment_se=sigma)
###Output
_____no_output_____
###Markdown
(Note: In our simple example, we have a formula for the Jacobian of $h(\cdot)$ with respect to the parameters. This could be supplied to the `MinDist` call using the optional argument `moment_fct_deriv`. The default behavior is to compute Jacobians numerically.) Initial parameter estimates and standard errorsWe first estimate the model using an *ad hoc* diagonal weight matrix $\hat{W}=\text{diag}(\hat{\sigma}_1^{-2},\hat{\sigma}_2^{-2},\hat{\sigma}_3^{-2})$. The numerical optimization for computing the estimates $(\hat{\theta}_1,\hat{\theta}_2)$ is started off at $(0,0)$.
###Code
res = obj.fit(opt_init=np.zeros(2), eff=False) # eff=False: estimation based on ad hoc diagonal weight matrix
print('Parameter estimates')
print(res['estim'])
print('Standard errors')
print(res['estim_se'])
print('\n')
for i in range(2):
print(f'Worst-case moment var-cov matrix for estimating theta_{i+1}')
print(res['worstcase_varcov'][i])
###Output
_____no_output_____
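As a quick sanity check on these numbers (a sketch that is independent of the package's internals), the linear moment function $h(\theta)=G\theta$ admits the closed-form diagonal-weight estimate $\hat{\theta} = (G'\hat{W}G)^{-1}G'\hat{W}\hat{\mu}$ with $\hat{W}=\text{diag}(\hat{\sigma}^{-2})$:

```python
# Closed-form weighted minimum-distance estimate for the linear case h(theta) = G @ theta.
# Uses G, mu and sigma as defined above; plain linear algebra, no package internals.
import numpy as np

W = np.diag(sigma ** -2)                               # ad hoc diagonal weight matrix
theta_closed_form = np.linalg.solve(G.T @ W @ G, G.T @ W @ mu)
print(theta_closed_form)                               # should roughly agree with res['estim']
```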
###Markdown
(Note 1: In this simple linear example, there exists a closed-form formula for the minimum distance estimator. This formula can be supplied to the `fit()` function using the optional argument `estim_fct`.)(Note 2: In some cases the minimum distance parameter estimate may have already been computed elsewhere. It can then be passed to the `fit()` function via the optional argument `param_estim`. The function will compute the corresponding standard errors without re-estimating the model.) Test of parameter restrictionsLet us test whether the parameters $\theta_1$ and $\theta_2$ equal zero.
###Code
test_res = obj.test(res) # Tests are based on the "res" estimation results
print('\nt-statistics for testing individual parameters')
print(test_res['tstat'])
print('p-value of joint test')
print(test_res['joint_pval'])
###Output
_____no_output_____
###Markdown
Using a 5% significance level, we cannot reject that $\theta_2$ is zero individually based on its t-statistic. However, we can reject the joint hypothesis that both parameters equal zero.Suppose we wanted to test the joint null hypothesis that $(\theta_1,\theta_2)=(1,0)$. To do this, we first reformulate it as the hypothesis that the transformed vector $r(\theta_1,\theta_2)=(\theta_1-1,\theta_2)$ has all elements equal to zero. We can then test the hypothesis as follows.
###Code
r = lambda theta: theta-np.array([1,0]) # Restriction function
res_restr = obj.fit(transf=r, opt_init=res['estim'], eff=False) # Estimate the transformation r(theta)
test_res2 = obj.test(res_restr) # Test using the restriction function
print('\np-value of joint test')
print(test_res2['joint_pval'])
###Output
_____no_output_____
###Markdown
Over-identification testSince we have more moments (3) than parameters (2), we can test the over-identifying restriction. One common way of doing this in applied work is to estimate the model using only two of the moments and then checking whether the third, non-targeted moment at the estimated parameters is approximately consistent with the data.
###Code
weight_mat = np.diag(np.array([1/sigma[0]**2, 1/sigma[1]**2, 0])) # Weight matrix that puts no weight on third moment
res_justid = obj.fit(opt_init=np.zeros(2), eff=False, weight_mat=weight_mat)
print('Just-identified parameter estimates')
print(res_justid['estim'])
print('Model-implied moments')
print(res_justid['moment_fit'])
print('\n')
res_overid = obj.overid(res_justid) # Over-identification test based on just-identified estimates
print('\nError in matching non-targeted moment')
print(res_overid['moment_error'][2]) # The non-targeted moment is the third one
print('Standard error')
print(res_overid['moment_error_se'][2])
print('t-statistic')
print(res_overid['tstat'][2])
###Output
_____no_output_____
###Markdown
Since the t-statistic is below 1.96, we can't reject the validity of the model at the 5% level. Efficient estimationThe above estimation results relied on an *ad hoc* diagonal weight matrix. We can compute the weight matrix that minimizes the worst-case standard errors, and then report the corresponding estimates and standard errors.
###Code
res_eff = obj.fit(opt_init=np.zeros(2), eff=True) # Note: Efficient estimation (eff=True) is the default
print('Efficient parameter estimates')
print(res_eff['estim'])
print('Efficient standard errors')
print(res_eff['estim_se'])
print('\n')
for i in range(2):
print(f'Efficient moment loadings for estimating theta_{i+1}')
print(res_eff['moment_loadings'][:,i])
###Output
_____no_output_____
###Markdown
We see that $\theta_1$ is estimated off the 1st moment only, while $\theta_2$ is estimated off the 3rd moment only (up to small numerical error).(Note: The efficient estimates are not based on a single choice of weight matrix, since the efficient weight matrix depends on the specific parameter of interest. In the background, the analysis is actually run separately for each parameter. For this reason, it is not advised to use the `test()` or `overid()` commands with efficient estimation results. These commands are better used with estimation results that correspond to a single choice of weight matrix.) Inference about transformed parametersSuppose we want a confidence interval for the transformed parameter $\theta_1^2+\theta_2$. In a more realistic setting, this parameter might be some model-implied counterfactual of interest. We can do inference on transformed parameters using the `transf` argument to the `fit` function, as already used above.
###Code
res_transf = obj.fit(transf=lambda theta: theta[0]**2+theta[1], opt_init=np.zeros(2)) # Efficient estimation (the default)
print('Estimated transformation')
print(res_transf['estim'])
print('Standard errors')
print(res_transf['estim_se'])
###Output
_____no_output_____
###Markdown
(Note: If the gradient of the transformed parameter is available, we can supply it to the `fit()` function using the optional `transf_deriv` argument.) More information about the variance-covariance matrixSuppose we happen to also know that the first two empirical moments $\hat{\mu}_1$ and $\hat{\mu}_2$ are (asymptotically) independent. We can use this information to sharpen our inference about the parameters. First we define the known and unknown parts of the var-cov matrix of the empirical moments.
###Code
V = np.array([[sigma[0]**2,0,np.nan],[0,sigma[1]**2,np.nan],[np.nan,np.nan,sigma[2]**2]]) # NaN values are unknown
print('Var-cov matrix of moments')
print(V)
###Output
_____no_output_____
###Markdown
Then we define a `MinDist` object using this var-cov matrix and apply the estimation/testing routines.
###Code
obj_moreinfo = MinDist(h,mu,moment_varcov=V)
res_moreinfo = obj_moreinfo.fit(opt_init=np.zeros(2), eff=False)
print('Initial estimates')
print(res_moreinfo['estim'])
print('Standard errors')
print(res_moreinfo['estim_se'])
res_eff_moreinfo = obj_moreinfo.fit(opt_init=np.zeros(2), eff=True)
print('\nEfficient estimates')
print(res_eff_moreinfo['estim'])
print('Standard errors')
print(res_eff_moreinfo['estim_se'])
###Output
_____no_output_____
###Markdown
Full-information analysisSuppose finally that we know the entire var-cov matrix of the empirical moments. For example:
###Code
V_fullinfo = sigma.reshape(-1,1) * np.array([[1,0,0.5],[0,1,-0.7],[0.5,-0.7,1]]) * sigma
print('Var-cov matrix of moments')
print(V_fullinfo)
###Output
_____no_output_____
###Markdown
In this full-information setting, the econometric analysis is standard ([Newey & McFadden, 1994](https://doi.org/10.1016/S1573-4412%2805%2980005-4)). The estimation and testing routines work as before.
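For reference, the standard asymptotic variance in this known-$V$ case (a textbook result stated here without derivation, with $G=\partial h(\theta)/\partial\theta'$) is
$$\text{Avar}(\hat{\theta}) = (G'\hat{W}G)^{-1}G'\hat{W}V\hat{W}G(G'\hat{W}G)^{-1},$$
which reduces to $(G'V^{-1}G)^{-1}$ under the efficient choice $\hat{W}=V^{-1}$.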
###Code
obj_fullinfo = MinDist(h,mu,moment_varcov=V_fullinfo)
res_fullinfo = obj_fullinfo.fit(opt_init=np.zeros(2), weight_mat=np.diag(sigma**(-2)), eff=False) # Diagonal weight matrix
print('Initial estimates')
print(res_fullinfo['estim'])
print('Standard errors')
print(res_fullinfo['estim_se'])
res_eff_fullinfo = obj_fullinfo.fit(opt_init=np.zeros(2), eff=True) # Efficient weight matrix
print('\nEfficient estimates')
print(res_eff_fullinfo['estim'])
print('Standard errors')
print(res_eff_fullinfo['estim_se'])
test_res_fullinfo = obj_fullinfo.test(res_eff_fullinfo)
print('\np-value for joint test that both parameters are zero')
print(test_res_fullinfo['joint_pval'])
overid_res_fullinfo = obj_fullinfo.overid(res_eff_fullinfo)
print('p-value for over-identification test')
print(overid_res_fullinfo['joint_pval'])
###Output
_____no_output_____
###Markdown
InitializationInitialize either a component object or a sky object from model strings.
###Code
nside = 128
mbb = pysm.preset_models('d1', nside)
sky = pysm.Sky(nside, preset_strings=['d1'])
frequencies = np.array([15., 150., 400.])
bandpasses = [(np.linspace(f-1, f+1, 50), np.ones(50)) for f in [15., 150., 400.]]
fwhms = np.array([120., 30., 10.])
###Output
_____no_output_____
###Markdown
FunctionalityFunctionality of Model objects consists of three main methods: - `Model.get_emission(frequencies)`- `Model.apply_bandpass(bandpasses)`- `Model.apply_smoothing(data, fwhms)`Since `Sky` is subclassed from `Model`, these are all present when defining a sky model composed of a group of components through e.g. `pysm.Sky(nside, preset_strings=['d1', 's1', 'a1'])`.
###Code
mbb_out = mbb.get_emission(frequencies)
mbb_out_bpass = mbb.apply_bandpass(bandpasses)
mbb_out_smoothed = mbb.apply_smoothing(mbb_out, fwhms)
mbb_out_bpass_smoothed = mbb.apply_smoothing(mbb_out, fwhms)
hp.mollview(mbb_out[0, 0], norm='log', min=0.1, max=50)
hp.mollview(mbb_out_bpass[0, 0], norm='log', min=0.1, max=50)
###Output
_____no_output_____
###Markdown
Import Libraries & Prep
###Code
import requests
import sagemaker
import boto3
import s3fs
import json
import io
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sagemaker.estimator import Estimator
from sagemaker.predictor import Predictor
from sagemaker.serializers import NumpySerializer
from sagemaker.deserializers import NumpyDeserializer
from sagemaker.local import LocalSession
from matplotlib import pyplot as plt
import matplotlib as mpl
import seaborn as sns
%matplotlib inline
sns.set()
seed = 42
rand = np.random.RandomState(seed)
local_mode = False # activate to use local mode
with open("config.json") as f:
configs = json.load(f)
default_bucket = configs["default_bucket"] #put your bucket name here
role = configs["role_arn"] # put your sagemaker role arn here
boto_session = boto3.Session()
if local_mode:
sagemaker_session = LocalSession(boto_session = boto_session)
sagemaker_session._default_bucket = default_bucket
else:
sagemaker_session = sagemaker.Session(
boto_session = boto_session,
default_bucket = default_bucket
)
ecr_image = configs["image_arn"] #put the image uri from ECR here
prefix = "modeling/sagemaker"
data_name = f"gauss3"
test_name = "gam-demo"
def get_s3fs():
return s3fs.S3FileSystem(key = boto_session.get_credentials().access_key,
secret = boto_session.get_credentials().secret_key,
token = boto_session.get_credentials().token)
def plot_and_clear():
plt.show()
plt.clf()
plt.cla()
plt.close()
###Output
_____no_output_____
###Markdown
Read, Visualize, & Prep Data
###Code
url = "https://www.itl.nist.gov/div898/strd/nls/data/LINKS/DATA/Gauss3.dat"
r = requests.get(url)
for i,t in enumerate(r.text.splitlines()):
print(f"{i:03d}\t{t}")
y, x = np.loadtxt(io.StringIO(r.text[r.text.index("Data: y x"):]), skiprows=1, unpack=True)
x = x.reshape(-1, 1)
fig, ax = plt.subplots(figsize = (11,9))
ax.plot(x,y)
ax.set_title("Gauss3 Data", size = 20)
plot_and_clear()
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state = rand)
# remove entries in X_test outside the extremes of X_train
ind = np.all(
(X_test >= X_train.min(axis = 0, keepdims = True)) &
(X_test <= X_train.max(axis = 0, keepdims = True)),
axis = 1)
X_test = X_test[ind]
y_test = y_test[ind]
file_fn = f"{default_bucket}/{prefix}/{data_name}/train/data.csv"
file_path = f"s3://{file_fn}"
s3 = get_s3fs()
with s3.open(file_fn, 'wb') as f:
np.savetxt(f, np.c_[X_train, y_train], delimiter = ',')
hyperparameters = {
"train-file": "data.csv",
"df": "20"
}
data_channels = {
"train": file_path
}
estimator = Estimator(
role = role,
sagemaker_session = sagemaker_session,
instance_count = 1,
instance_type = "local" if local_mode else "ml.m5.large",
image_uri = ecr_image,
base_job_name = f'{data_name}-{test_name}',
hyperparameters = hyperparameters,
output_path = f"s3://{default_bucket}/{prefix}/{data_name}/model"
)
estimator.fit(data_channels, wait = True, logs = "None")
job_name = estimator.latest_training_job.name
print(job_name)
np_serialize = NumpySerializer()
np_deserialize = NumpyDeserializer()
predictor = estimator.deploy(
initial_instance_count = 1,
instance_type = "local" if local_mode else "ml.t2.medium",
serializer = np_serialize,
deserializer = np_deserialize
)
y_hat_train = predictor.predict(X_train)
y_hat_test = predictor.predict(X_test)
fig, ax = plt.subplots(figsize = (11,9))
ax.plot(x, y, color = "tab:blue", label = "True")
ax.scatter(X_train, y_hat_train,
color = "tab:green", s = 15,
marker = "x", label = "Train")
ax.scatter(X_test, y_hat_test,
color = "tab:red", s = 15,
marker = "o", label = "Test")
leg1 = ax.legend(loc = "upper right")
labels = [
"{:11s}: {:.4f}".format(r"Train $R^2$",
r2_score(y_train, y_hat_train)),
"{:11s}: {:.4f}".format(r"Test $R^2$",
r2_score(y_test, y_hat_test))
]
handles = [
mpl.patches.Rectangle((0, 0), 1, 1,
lw = 0, alpha = 0)
]
handles *= len(labels)
leg2 = ax.legend(
handles, labels, loc='lower left', fontsize = 12,
fancybox=True, framealpha=1.0,
handlelength=0, handletextpad=0, ncol=1,
)
ax.add_artist(leg1)
ax.set_title("Gauss3 Data", size = 20)
plot_and_clear()
predictor.delete_endpoint()
predictor.delete_model()
###Output
_____no_output_____
###Markdown
Make very simple test data
###Code
df = pd.DataFrame()
df['X'] = range(0,10)
df['y'] = np.power(range(0,10),2)
df.to_csv(join(data_path, 'test_data.csv'), index=False)
###Output
_____no_output_____
###Markdown
Load test data
###Code
df = pd.read_csv(join(data_path, 'test_data.csv'))
plt.scatter(df.X, df.y, s=df.y*10, alpha=0.5)
sns.despine()
plt.show()
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Fastscape simple simulator of landscape evolution Model descriptionThe simulator provided in this repository simulates the long-term evolution of topographic surface elevation (hereafter noted $h$) on a 2D regular grid. The local rate of elevation change, $\partial h/\partial t$, is determined by the balance between uplift (uniform in space and time) $U$ and erosion $E$.$$\frac{\partial h}{\partial t} = U - E$$Total erosion $E$ is the combined effect of the erosion of (bedrock) river channels, noted $E_r$, and erosion-transport on hillslopes, noted $E_d$$$E = E_r + E_d$$Erosion of river channels is given by the stream power law:$$E_r = K_r A^m (\nabla h)^n$$where $A$ is the drainage area and $K_r$, $m$ and $n$ are parameters. In this simulator, $K_r$ is considered as a free parameter while $m=0.4$ and $n=1$.Erosion on hillslopes is given by a linear diffusion law:$$E_d = K_d \nabla^2 h$$ Initial and boundary conditionsThis simulator is configured so that each model run starts with a nearly flat topography with small random perturbations. Elevation at the boundaries of the grid remains fixed during the whole simulation. ExampleThe simulator can be accessed from within Python:
###Code
from fastscape import run_fastscape
###Output
_____no_output_____
###Markdown
Below we run the model by setting $K_r = 10^{-5}$, $K_d = 10^{-3}$ m$^2$/yr and $U = 10^{-4}$ m/yr. By default, the model is run on a 401 x 601 (y, x) grid with a fixed resolution of 200 m. The total simulation duration is 10 million years.
###Code
out_elevation = run_fastscape(1e-5, 1e-3, 1e-4)
###Output
_____no_output_____
###Markdown
The output of this model run is shown below.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(figsize=(12, 8))
ax.imshow(out_elevation);
###Output
_____no_output_____
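Relating the output back to the equations above, here is a tiny numerical illustration of the stream-power term only (the drainage areas and slopes are made-up values; $K_r$, $m$ and $n$ are the ones used in this run):

```python
# Pointwise evaluation of the stream-power incision term E_r = Kr * A**m * |grad h|**n
# for a few hypothetical points (illustration only, not part of the simulator).
import numpy as np

Kr, m, n = 1e-5, 0.4, 1.0
A = np.array([1e4, 1e6, 1e8])         # hypothetical drainage areas (m^2)
slope = np.array([0.05, 0.02, 0.01])  # hypothetical local slopes |grad h|
E_r = Kr * A ** m * slope ** n        # channel erosion rate (m/yr)
print(E_r)
```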
###Markdown
Imports
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Local
import Neuron
import models as models
import train as train
import batch_utils
import data_transforms
import generate_training_data
###Output
Using Theano backend.
###Markdown
Data
###Code
training_data = generate_training_data.y_shape(n_nodes=20,
data_size=1000,
first_length=10,
branching_node=6)
###Output
_____no_output_____
###Markdown
Global parameters
###Code
n_nodes = 20
input_dim = 100
n_epochs = 5
batch_size = 32
n_batch_per_epoch = np.floor(training_data['morphology']['n20'].shape[0]/batch_size).astype(int)
d_iters = 20
lr_discriminator = 0.001
lr_generator = 0.001
train_loss = 'binary_crossentropy'
#train_loss = 'wasserstein_loss'
rule = 'none'
d_weight_constraint = [-.03, .03]
g_weight_constraint = [-33.3, 33.3]
m_weight_constraint = [-33.3, 33.3]
###Output
_____no_output_____
###Markdown
Run
###Code
geom_model, morph_model, disc_model, gan_model = \
train.train_model(training_data=training_data,
n_nodes=n_nodes,
input_dim=input_dim,
n_epochs=n_epochs,
batch_size=batch_size,
n_batch_per_epoch=n_batch_per_epoch,
d_iters=d_iters,
lr_discriminator=lr_discriminator,
lr_generator=lr_generator,
d_weight_constraint=d_weight_constraint,
g_weight_constraint=g_weight_constraint,
m_weight_constraint=m_weight_constraint,
rule=rule,
train_loss=train_loss,
verbose=True)
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 19, 3) 0
____________________________________________________________________________________________________
input_2 (InputLayer) (None, 19, 20) 0
____________________________________________________________________________________________________
merge_1 (Merge) (None, 19, 23) 0 input_1[0][0]
input_2[0][0]
____________________________________________________________________________________________________
lambda_1 (Lambda) (None, 20, 103) 0 merge_1[0][0]
____________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2060) 0 lambda_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 1, 200) 412200 reshape_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 1, 50) 10050 dense_1[0][0]
____________________________________________________________________________________________________
dense_3 (Dense) (None, 1, 10) 510 dense_2[0][0]
____________________________________________________________________________________________________
dense_4 (Dense) (None, 1, 1) 11 dense_3[0][0]
====================================================================================================
Total params: 422,771
Trainable params: 422,771
Non-trainable params: 0
____________________________________________________________________________________________________
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
noise_input (InputLayer) (None, 1, 100) 0
____________________________________________________________________________________________________
dense_5 (Dense) (None, 1, 100) 10100 noise_input[0][0]
____________________________________________________________________________________________________
dense_6 (Dense) (None, 1, 100) 10100 dense_5[0][0]
____________________________________________________________________________________________________
dense_7 (Dense) (None, 1, 50) 5050 dense_6[0][0]
____________________________________________________________________________________________________
dense_8 (Dense) (None, 1, 57) 2907 dense_7[0][0]
____________________________________________________________________________________________________
reshape_2 (Reshape) (None, 19, 3) 0 dense_8[0][0]
====================================================================================================
Total params: 28,157
Trainable params: 28,157
Non-trainable params: 0
____________________________________________________________________________________________________
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
noise_input (InputLayer) (None, 1, 100) 0
____________________________________________________________________________________________________
dense_9 (Dense) (None, 1, 100) 10100 noise_input[0][0]
____________________________________________________________________________________________________
dense_10 (Dense) (None, 1, 100) 10100 dense_9[0][0]
____________________________________________________________________________________________________
dense_11 (Dense) (None, 1, 380) 38380 dense_10[0][0]
____________________________________________________________________________________________________
reshape_3 (Reshape) (None, 19, 20) 0 dense_11[0][0]
____________________________________________________________________________________________________
lambda_2 (Lambda) (None, 19, 20) 0 reshape_3[0][0]
====================================================================================================
Total params: 58,580
Trainable params: 58,580
Non-trainable params: 0
____________________________________________________________________________________________________
====================
Epoch #0
After 20 iterations
Discriminator Loss = 0.00807452294976
###Markdown
Prioritized gene list (top 10) for *Polycystic kidney dysplasia*, `HP:0000113`
###Code
pred_df = psea.predict_pkl(['HP:0000113'])
pred_df.head(10)
###Output
_____no_output_____
###Markdown
Prioritized gene list (top 10) for HPO terms `HP:0003100`, `HP:0006504`, `HP:0002107`, `HP:0001679`, `HP:0012019`, with additional data showing how each specified HPO term contributes to the final score.
###Code
pred_df2 = psea.predict_pkl_verbose(['HP:0003100', 'HP:0006504', 'HP:0002107', 'HP:0001679', 'HP:0012019'])
pred_df2.head(10)
###Output
_____no_output_____
###Markdown
Check Some Version Stuff
###Code
from conda_forge_tick.update_upstream_versions import update_upstream_versions
import copy
gxc = copy.deepcopy(gx)
for node in list(gxc.nodes):
if node not in ["tzdata", "openssl", "jpeg", "cddlib"]:
gxc.remove_node(node)
update_upstream_versions(gxc, debug=True)
###Output
_____no_output_____
###Markdown
Look at a Migration
###Code
from conda_forge_tick.auto_tick import migration_factory, add_rebuild_broken_migrator
mgs = []
add_rebuild_broken_migrator(mgs, gx)
mg = mgs[0]
import os
from conda_forge_tick.contexts import MigratorContext, MigratorSessionContext
mctx = MigratorSessionContext(
circle_build_url=os.getenv("CIRCLE_BUILD_URL", ""),
graph=gx,
smithy_version="",
pinning_version="",
github_username="",
github_password="",
github_token="",
dry_run=False,
)
mmctx = MigratorContext(session=mctx, migrator=mg)
mg.bind_to_ctx(mmctx)
mmctx.effective_graph.nodes
mg_name = "libffi33"
mgs = []
migration_factory(mgs, gx, only_keep=[mg_name])
for i in range(len(mgs)):
if mgs[i].name == mg_name:
break
mg = mgs[i]
import copy
attrs = copy.deepcopy(mg.graph.nodes["python"]["payload"].data)
attrs["branch"] = "3.6"
mg.filter(attrs)
import os
from conda_forge_tick.contexts import MigratorContext, MigratorSessionContext
mctx = MigratorSessionContext(
circle_build_url=os.getenv("CIRCLE_BUILD_URL", ""),
graph=gx,
smithy_version="",
pinning_version="",
github_username="",
github_password="",
github_token="",
dry_run=False,
)
mmctx = MigratorContext(session=mctx, migrator=mg)
mg.bind_to_ctx(mmctx)
mmctx.effective_graph.nodes
###Output
_____no_output_____
###Markdown
Check the Status Report
###Code
from conda_forge_tick.status_report import graph_migrator_status
out2, build_sequence, gv = graph_migrator_status(mg, mg.graph)
gv.view()
build_sequence
len(build_sequence)
gx.nodes["proj"]["payload"].data
###Output
_____no_output_____
###Markdown
AltumAge prediction example
###Code
#load necessary packages
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn import linear_model, preprocessing
#load list of selected CpGsites
AltumAge_cpgs = np.array(pd.read_pickle('example_dependencies/multi_platform_cpgs.pkl'))
#load processed example data from GEO30870 data set
#ensure the methylation data has been normalized with BMIQCalibration from Horvath 2013
data = pd.read_pickle('example_dependencies/example_data.pkl')
#load standard scaler
scaler = pd.read_pickle('example_dependencies/scaler.pkl')
#load AltumAge model
AltumAge = tf.keras.models.load_model('example_dependencies/AltumAge.h5')
#regardless of the Illumina platform, select *in order* the 20318 CpG sites from the list
real_age = data.age
methylation_data = data[AltumAge_cpgs]
#scale data
methylation_data_scaled = scaler.transform(methylation_data)
#predict with AltumAge
pred_age_AltumAge = AltumAge.predict(methylation_data_scaled).flatten()
#get AltumAge evaluation metrics
mae = np.median(np.abs(real_age - pred_age_AltumAge))
mse = np.mean((real_age - pred_age_AltumAge)**2)
r = np.corrcoef(real_age, pred_age_AltumAge)[0,1]
print('The Median Absolute Error is: ' + str(mae))
print('The Mean Squared Error is: ' + str(mse))
print("Pearson's Correlation Coefficient is: " + str(r))
###Output
The Median Absolute Error is: 9.992607116699219
The Mean Squared Error is: 98.52502890899538
Pearson's Correlation Coefficient is: 0.9890003295203428
###Markdown
Comparison with Horvath's 2013 model
###Code
#Horvath's age transformation function
def anti_transform_age(exps):
    # maps Horvath-transformed model outputs back to chronological age
    adult_age = 20
    ages = []
    for exp in exps:
        if exp < 0:
            age = (1 + adult_age)*(np.exp(exp)) - 1
        else:
            age = (1 + adult_age)*exp + adult_age
        ages.append(age)
    return np.array(ages)
#loading model parameters for Horvath's 2013 Model
coef_data = pd.read_csv('example_dependencies/coefficients.csv')
intercept = coef_data[0:1].CoefficientTraining[0]
horvath_cpgs = np.array(coef_data.drop(0).CpGmarker)
coefs = np.array(coef_data.drop(0).CoefficientTraining)
horvath_model = linear_model.LinearRegression()
horvath_model.coef_ = coefs
horvath_model.intercept_ = intercept
# horvath_cpgs
#predict with Horvath's 2013 model
pred_ages_Horvath = anti_transform_age(horvath_model.predict(data[horvath_cpgs]))
#get Horvath's 2013 model evaluation metrics
mae = np.median(np.abs(real_age - pred_ages_Horvath))
mse = np.mean((real_age - pred_ages_Horvath)**2)
r = np.corrcoef(real_age, pred_ages_Horvath)[0,1]
print('The Median Absolute Error is: ' + str(mae))
print('The Mean Squared Error is: ' + str(mse))
print("Pearson's Correlation Coefficient is: " + str(r))
###Output
The Median Absolute Error is: 14.140236344179392
The Mean Squared Error is: 171.92677509854278
Pearson's Correlation Coefficient is: 0.9726306089777844
###Markdown
1) Initial Population
   a) Heuristic ✔
   b) Randomized ✔
2) Selection
   a) Roulette Wheel Selection ✔
   b) Rank Selection ✔
   c) Steady State Selection ✔
   d) Tournament Selection ✔
   e) Elitism Selection ✔
   f) Boltzmann Selection ✔
3) Reproduction
   a) One-point crossover ✔
   b) k-point crossover ✔
   c) Uniform crossover ✔
4) Mutation
   a) Bit string mutation ✔
   b) Flip Bit ❌
   c) Boundary ❌
   d) Non-Uniform ❌
   e) Uniform ❌
   f) Gaussian ❌
   g) Shrink ❌

A minimal sketch of two of these operators is shown right after this list.
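To make two of the listed operators concrete, here is a minimal, self-contained sketch of roulette-wheel selection and one-point crossover over bit-string feature masks. This is only an illustration of the general idea, not the `GeneticFeatureSelection` implementation; all names below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_wheel_select(population, fitness, n_parents):
    # sample parents with probability proportional to (shifted) fitness;
    # fitness may be negative (e.g. neg-MAE), so shift it to be positive first
    f = np.asarray(fitness, dtype=float)
    f = f - f.min() + 1e-9
    probs = f / f.sum()
    chosen = rng.choice(len(population), size=n_parents, replace=True, p=probs)
    return [population[i] for i in chosen]

def one_point_crossover(parent_a, parent_b):
    # swap the tails of two bit-string chromosomes at a random cut point
    cut = rng.integers(1, len(parent_a))
    return np.concatenate([parent_a[:cut], parent_b[cut:]])

# toy usage: 6 candidate feature masks of length 8
population = [rng.integers(0, 2, size=8) for _ in range(6)]
fitness = rng.normal(size=6)
parents = roulette_wheel_select(population, fitness, n_parents=2)
child = one_point_crossover(parents[0], parents[1])
```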
###Code
from GeneticFeatureSelection import GeneticFeatureSelection
gfs = GeneticFeatureSelection()
gfs.fit(X_train, y_train)
gfs.sequential_selection(
pop_size=12,
estimator=xgbr,
scoring='neg_mean_absolute_error',
cv=5,
select_method='R',
offspring_size=20,
c_pt=1,
epsilon=.1,
tolerance=3,
verbose=1
)
###Output
---------------------------------------------Gen 1---------------------------------------------
Mean Fitness: -7647.08, Trial Best: -4759.959554703335
Mutation Occured! x 1
Selection Pressure: 0
Time Spent: 3.78
---------------------------------------------Gen 2---------------------------------------------
Mean Fitness: -6878.92, Trial Best: -3206.681960696227
Mutation Occured! x 3
Selection Pressure: 0
Time Spent: 6.72
---------------------------------------------Gen 3---------------------------------------------
Mean Fitness: -6319.14, Trial Best: -3196.371467232995
Mutation Occured! x 6
Selection Pressure: 1
Time Spent: 8.72
---------------------------------------------Gen 4---------------------------------------------
Mean Fitness: -6352.94, Trial Best: -3206.681960696227
Mutation Occured! x 2
Selection Pressure: 2
Time Spent: 8.31
---------------------------------------------Gen 5---------------------------------------------
Mean Fitness: -6982.09, Trial Best: -3281.4081262422887
Mutation Occured! x 4
Selection Pressure: 3
Time Spent: 6.94
The trial best of this generation shows no improvement.
Total Time Spent: 34.46
###Markdown
Example codes for analyses with USVCAM
*See also Chapter 4 in the user guide.*
You can download the example data from [here](https://1drv.ms/u/s!AlpK-j-ONYp37SV3Nf3b7ooyW8eb?e=txUYkZ) (1.7 GB).
Importing the library
###Code
import usvcam.analysis
###Output
_____no_output_____
###Markdown
USV segmentation, step-1: converting dat file to wav file
###Code
data_dirs = ['./test_data/single_mouse',
'./test_data/two_mice',]
for data_dir in data_dirs:
usvcam.analysis.dat2wav(data_dir, 3)
###Output
_____no_output_____
###Markdown
USV segmentation, step-2: running USVSEG+
**Before proceeding**, process the data directories with USVSEG+. See usvseg_plus/README.md for details.
Camera calibration
###Code
data_dir = './test_data/single_mouse'
usvcam.analysis.calib_with_voc(data_dir, outpath='./test_data/micpos.h5')
###Output
_____no_output_____
###Markdown
(Optional) Creating a video to visualize USV localization
###Code
data_dir = './test_data/single_mouse'
calibfile = './test_data/micpos.h5'
usvcam.analysis.create_localization_video(data_dir, calibfile, color_eq=True)
###Output
_____no_output_____
###Markdown
Estimating parameters for USV assignment*This process takes hours. If you are using the test data and want to skip the process, download the result from [here](https://1drv.ms/u/s!AlpK-j-ONYp37SS_s967ZveXYM2D?e=h5GUqC).*
###Code
data_dir = './test_data/single_mouse'
calibfile = './test_data/micpos.h5'
assignfile = './test_data/assign_param.h5'
usvcam.analysis.estimate_assign_param([data_dir], [calibfile], assignfile, show_figs=True)
###Output
_____no_output_____
###Markdown
USV assignment
###Code
data_dir = './test_data/two_mice'
calibfile = './test_data/micpos.h5'
assignfile = './test_data/assign_param.h5'
n_mice = 2
usvcam.analysis.assign_vocalizations(data_dir, calibfile, assignfile, n_mice)
###Output
_____no_output_____
###Markdown
(Optional) Creating a video to visualize USV assignment
###Code
data_dir = './test_data/two_mice'
calibfile = './test_data/micpos.h5'
assignfile = './test_data/assign_param.h5'
n_mice = 2
usvcam.analysis.create_assignment_video(data_dir, n_mice, color_eq=True)
###Output
_____no_output_____
###Markdown
Offline notebook example
You should see three new buttons:
![Offline notebook buttons](./offline-notebook-buttons.png)
1. Make some changes to this notebook (or run it to update the output).
2. Do not save the notebook. You can even disconnect from the Jupyter server or your network.
3. Click the first button (`Download`). This should prompt you to download the notebook.
4. Click the second button (`cloud download`). This should save the current notebook into your browser's [local-storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage).
5. Start a new instance of Jupyter, and open the original version of this notebook.
6. Click the third button (`cloud upload`). This should restore the copy of the notebook from your browser's local-storage.
###Code
from datetime import datetime
print(datetime.now())
import os
for (k, v) in sorted(os.environ.items()):
print(f'{k}\t{v}')
###Output
_____no_output_____
###Markdown
Using the provided Layers and Network
In this short tutorial we show how the proposed layers and network can be used.
Using and Configuring Layers
Before we can use the layer, we need to define the types (channels and their orders) and q-space sampling schemas of the input and output feature maps. For the input, these are the same as those of the output feature map of the previous layer. In the first layer, the q-space sampling schema is the one used in the dataset and the type is [1] (1 scalar channel) when (raw) dMRI scans are used as input. For the purpose of this example we will just hard-code these values:
###Code
# only 1 scalar channel
type_in = [1]
# one scan with b=0 and the cubic sampling scheme (all 6 directions of the cube)
q_sampling_schema_in = [[0., 0., 0.],
[1., 0., 0.], [-1., 0., 0.], [0., 1., 0.],
[0., -1., 0.], [0., 0., 1.], [0., 0., -1.]]
# 2 scalar channels and 1 vector channel
type_out = [2, 1]
# we'll use the same sampling schema for the output, but we could instead also use a different one
q_sampling_schema_out = q_sampling_schema_in
###Output
_____no_output_____
###Markdown
pq-diff+p Layer
Let's first define a layer using the pq-diff+p kernel, which is based on the pq-diff and the p-space kernel.
###Code
from equideepdmri.layers.EquivariantPQLayer import EquivariantPQLayer
layer = EquivariantPQLayer(type_in, type_out,
kernel_definition="sum(pq_diff;p_space)",
p_kernel_size=5,
q_sampling_schema_in=q_sampling_schema_in,
q_sampling_schema_out=q_sampling_schema_out,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50})
print('Layer:', layer)
print('Input: ', layer.type_in, 'with Q:', layer.Q_in)
print('Output: ', layer.type_out, 'with Q:', layer.Q_out)
print('Kernel:', layer.kernel)
###Output
Layer: <EquivariantPQLayer (1,)->(2, 1)>
Input: <SphericalTensorType (1,)> with Q: 7
Output: <SphericalTensorType (2, 1)> with Q: 7
Kernel: SumKernel(
(kernels): ModuleList(
(0): <Kernel_PQ (φ_cos(|p|) * φ_gauss(|q_out|) * φ_gauss(|q_in|)) * Y(p-q) of type (1,) -> (2, 1) with basis size (2, 1) * 200>
(1): <Kernel_PQ φ_cos(|p|) * Y(p) of type (1,) -> (2, 1) with basis size (2, 1) * 50>
)
)
###Markdown
We defined the kernel size to be 5 in p-space. We also chose the cosine radial basis function with a 3-layer FC (with 50 units in each layer) applied to it for p-space. The default radial basis function would be the Gaussian without an FC applied to it, which is what is used for q-space as we did not specify anything there.
pq-diff+q Layer
The definition of a layer using the pq-diff+q kernel is similar:
###Code
layer = EquivariantPQLayer(type_in, type_out,
kernel_definition="sum(pq_diff;q_space)",
p_kernel_size=5,
q_sampling_schema_in=q_sampling_schema_in,
q_sampling_schema_out=q_sampling_schema_out,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50})
print('Layer:', layer)
print('Input: ', layer.type_in, 'with Q:', layer.Q_in)
print('Output: ', layer.type_out, 'with Q:', layer.Q_out)
print('Kernel:', layer.kernel)
###Output
Layer: <EquivariantPQLayer (1,)->(2, 1)>
Input: <SphericalTensorType (1,)> with Q: 7
Output: <SphericalTensorType (2, 1)> with Q: 7
Kernel: SumKernel(
(kernels): ModuleList(
(0): <Kernel_PQ (φ_cos(|p|) * φ_gauss(|q_out|) * φ_gauss(|q_in|)) * Y(p-q) of type (1,) -> (2, 1) with basis size (2, 1) * 200>
(1): <Kernel_PQ (φ_gauss(|q_out|) * φ_gauss(|q_in|)) * Y(q) of type (1,) -> (2, 1) with basis size (2, 1) * 4>
)
)
###Markdown
TP-vec Layer
To define a layer using the TP-vec kernel we do the following:
###Code
layer = EquivariantPQLayer(type_in, type_out,
kernel_definition="pq_TP",
p_kernel_size=5,
q_sampling_schema_in=q_sampling_schema_in,
q_sampling_schema_out=q_sampling_schema_out,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50},
sub_kernel_selection_rule={0: [(0, 0)],
1: [(0, 1), (1, 0), (1, 1)],
2: [(2, 2)]})
print('Layer:', layer)
print('Input: ', layer.type_in, 'with Q:', layer.Q_in)
print('Output: ', layer.type_out, 'with Q:', layer.Q_out)
print('Kernel:', layer.kernel)
###Output
Layer: <EquivariantPQLayer (1,)->(2, 1)>
Input: <SphericalTensorType (1,)> with Q: 7
Output: <SphericalTensorType (2, 1)> with Q: 7
Kernel: <Kernel_PQ (φ_cos(|p|) * φ_gauss(|q_out|) * φ_gauss(|q_in|)) * (Y(q) x Y(p)) of type (1,) -> (2, 1) with basis size (2, 3) * 200>
###Markdown
Here we can see that the tuples $(l_\mathrm{filter}, l_p, l_q)$ are defined in the `sub_kernel_selection_rule` parameter as a dict where the keys are the $l_\mathrm{filter}$ values and the values are lists of pairs $(l_p, l_q)$.
TP$\pm$1 Layer
To define a layer using the TP$\pm$1 kernel we only adapt the `sub_kernel_selection_rule`:
###Code
layer = EquivariantPQLayer(type_in, type_out,
kernel_definition="pq_TP",
p_kernel_size=5,
q_sampling_schema_in=q_sampling_schema_in,
q_sampling_schema_out=q_sampling_schema_out,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50},
sub_kernel_selection_rule={"l_diff_to_out_max": 1})
print('Layer:', layer)
print('Input: ', layer.type_in, 'with Q:', layer.Q_in)
print('Output: ', layer.type_out, 'with Q:', layer.Q_out)
print('Kernel:', layer.kernel)
###Output
Layer: <EquivariantPQLayer (1,)->(2, 1)>
Input: <SphericalTensorType (1,)> with Q: 7
Output: <SphericalTensorType (2, 1)> with Q: 7
Kernel: <Kernel_PQ (φ_cos(|p|) * φ_gauss(|q_out|) * φ_gauss(|q_in|)) * (Y(q) x Y(p)) of type (1,) -> (2, 1) with basis size (4, 6) * 200>
###Markdown
We could also remove the `sub_kernel_selection_rule` parameter as this value is the default.
Stacking Layers, q-Reduction, and Nonlinearities
Now let's define multiple layers, add nonlinearities and a q-reduction, and then p-space-only layers. This is an architecture similar to the one used in the paper. We first start with the pq-layers. There is a utility function, called `build_pq_layer`, that builds an `EquivariantPQLayer` together with a nonlinearity:
###Code
from equideepdmri.layers.layer_builders import build_pq_layer
type_in = [1]
type_out = [2, 1]
pq_layer_1 = build_pq_layer(type_in, type_out,
p_kernel_size=5,
kernel="pq_TP",
q_sampling_schema_in=q_sampling_schema_in,
q_sampling_schema_out=q_sampling_schema_out,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50},
sub_kernel_selection_rule={"l_diff_to_out_max": 1},
non_linearity_config={"tensor_non_lin":"gated", "scalar_non_lin":"swish"})
print(pq_layer_1)
print('Input: ', pq_layer_1[0].type_in, 'with Q:', pq_layer_1[0].Q_in)
print('Output before nonlinearity: ', pq_layer_1[0].type_out, 'with Q:', pq_layer_1[0].Q_out)
print('Kernel:', pq_layer_1[0].kernel)
###Output
Sequential(
(conv): <EquivariantPQLayer (1,)->(3, 1)>
(non_linearity): GatedBlockNonLin()
)
Input: <SphericalTensorType (1,)> with Q: 7
Output before nonlinearity: <SphericalTensorType (3, 1)> with Q: 7
Kernel: <Kernel_PQ (φ_cos(|p|) * φ_gauss(|q_out|) * φ_gauss(|q_in|)) * (Y(q) x Y(p)) of type (1,) -> (3, 1) with basis size (6, 6) * 200>
###Markdown
Note that the `non_linearity_config` used here is the default, so it could also be omitted. The output before the nonlinearity has additional scalar channels (more than we defined), because these channels are needed for the gates in the non-linearity (one additional scalar channel for each non-scalar channel). For example, the requested `type_out = [2, 1]` has one non-scalar (vector) channel, so one gate scalar is added and the layer outputs type `(3, 1)` before the nonlinearity, as seen above. Let's define the other pq-layers:
###Code
type_in = type_out # output of previous layer is input to this one
type_out = [3, 2, 1]
pq_layer_2_type_out = type_out
pq_layer_2 = build_pq_layer(type_in, type_out,
p_kernel_size=5,
kernel="pq_TP",
q_sampling_schema_in=q_sampling_schema_in,
q_sampling_schema_out=q_sampling_schema_out,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50},
sub_kernel_selection_rule={"l_diff_to_out_max": 1},
non_linearity_config={"tensor_non_lin":"gated", "scalar_non_lin":"swish"})
print(pq_layer_2)
print('Input: ', pq_layer_2[0].type_in, 'with Q:', pq_layer_2[0].Q_in)
print('Output before nonlinearity: ', pq_layer_2[0].type_out, 'with Q:', pq_layer_2[0].Q_out)
print('Kernel:', pq_layer_2[0].kernel)
###Output
Sequential(
(conv): <EquivariantPQLayer (2, 1)->(6, 2, 1)>
(non_linearity): GatedBlockNonLin()
)
Input: <SphericalTensorType (2, 1)> with Q: 7
Output before nonlinearity: <SphericalTensorType (6, 2, 1)> with Q: 7
Kernel: <Kernel_PQ (φ_cos(|p|) * φ_gauss(|q_out|) * φ_gauss(|q_in|)) * (Y(q) x Y(p)) of type (2, 1) -> (6, 2, 1) with basis size (28, 78, 45, 9) * 200>
###Markdown
As we now have non-scalar input and output channels, the kernel basis gets much larger and does not only have scalar and vector channels (as before) but also 45 l=2 and 9 l=3 channels (as can be seen in the basis size (28, 78, 45, 9)). Now we define the q-reduction. We'll use the `QLengthWeightedAvgPool` as used in the `late` approach. It can either be used by importing `QLengthWeightedAvgPool` from `layers.QLengthWeightedPool`, or we can again use a layer builder as follows:
###Code
from equideepdmri.layers.layer_builders import build_q_reduction_layer
type_in = type_out
q_reduction, type_out = build_q_reduction_layer(type_in, q_sampling_schema_in, reduction='length_weighted_average')
print(q_reduction)
print(q_reduction.type_in_out)
###Output
QLengthWeightedAvgPool(
(radial_basis): FiniteElement_RadialBasis(
(model): FC()
)
)
<SphericalTensorType (3, 2, 1)>
###Markdown
Note that besides `length_weighted_average` we could also use the unweighted `mean` or specify `conv` (as used in `gradual` q-reduction). Now (as q-space is reduced) let's define p-space layers. Note that no kernel needs to be specified as it is always `p_space`.
###Code
from equideepdmri.layers.layer_builders import build_p_layer
type_out = [1, 1]
p_layer_1 = build_p_layer(type_in, type_out,
kernel_size=5,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50},
non_linearity_config={"tensor_non_lin":"gated", "scalar_non_lin":"swish"})
print(p_layer_1)
print('Input: ', p_layer_1[0].type_in, 'has Q:', p_layer_1[0].has_Q_in)
print('Output before nonlinearity: ', p_layer_1[0].type_out, 'has Q:', p_layer_1[0].has_Q_out)
print('Kernel:', p_layer_1[0].kernel, '\n')
type_in = type_out
type_out = [1] # only 1 scalar channel as output
# don't use nonlinearity as this is the last layer
p_layer_2 = build_p_layer(type_in, type_out,
kernel_size=5,
p_radial_basis_type="cosine",
p_radial_basis_params={"num_layers": 3, "num_units": 50},
use_non_linearity=False)
print(p_layer_2) # no non-linearity => only EquivariantPLayer
print('Input: ', p_layer_2.type_in, 'has Q:', p_layer_2.has_Q_in)
print('Output before nonlinearity: ', p_layer_2.type_out, 'has Q:', p_layer_2.has_Q_out)
print('Kernel:', p_layer_2.kernel)
###Output
Sequential(
(conv): <EquivariantPLayer (3, 2, 1)->(2, 1)>
(non_linearity): GatedBlockNonLin()
)
Input: <SphericalTensorType (3, 2, 1)> has Q: False
Output before nonlinearity: <SphericalTensorType (2, 1)> has Q: False
Kernel: <Kernel_PQ φ_cos(|p|) * Y(p) of type (3, 2, 1) -> (2, 1) with basis size (8, 10, 5, 1) * 50>
<EquivariantPLayer (1, 1)->(1,)>
Input: <SphericalTensorType (1, 1)> has Q: False
Output before nonlinearity: <SphericalTensorType (1,)> has Q: False
Kernel: <Kernel_PQ φ_cos(|p|) * Y(p) of type (1, 1) -> (1,) with basis size (1, 1) * 50>
###Markdown
Applying the Layers
The layers can now be applied to some input feature map; here we'll use a random feature map:
###Code
import torch
x = torch.randn(1, 1, 7, 10, 10, 10) # (batch_size x dim_in x Q_in x P_z x P_y x P_x)
print("Input: ", x.size()) # Channel dim: 1*1 = 1
x = pq_layer_1(x)
print("After pq-layer 1: ", x.size()) # Channel dim: 2*1 + 1*3 = 5
x = pq_layer_2(x)
print("After pq-layer 2: ", x.size()) # Channel dim: 3*1 + 2*3 + 1*5 = 14
x = q_reduction(x)
print("After q-reduction: ", x.size()) # Channel dim unchanged (14), q-dim removed
x = p_layer_1(x)
print("After p-layer 1: ", x.size()) # Channel dim: 1*1 + 1*3 = 4
x = p_layer_2(x)
print("After p-layer 2: ", x.size()) # Channel dim: 1*1 = 1
###Output
Input: torch.Size([1, 1, 7, 10, 10, 10])
After pq-layer 1: torch.Size([1, 5, 7, 10, 10, 10])
After pq-layer 2: torch.Size([1, 14, 7, 10, 10, 10])
After q-reduction: torch.Size([1, 14, 10, 10, 10])
After p-layer 1: torch.Size([1, 4, 10, 10, 10])
After p-layer 2: torch.Size([1, 1, 10, 10, 10])
###Markdown
Using the provided Voxel-Wise Segmentation Network
As shown before, the provided equivariant layers can be stacked to build equivariant networks. For voxel-wise prediction (e.g. voxel-wise segmentation) we included a network. This network uses the architecture described in the paper, where first pq-layers are applied, then a q-reduction, and then p-layers. This is the same structure as we defined previously in this example with the layer builders. In the following sections we show how the segmentation network might be used and trained.
Preparation of the Dataset
For the purpose of this example we will use a randomly generated dataset. This means that real learning might not be possible, but it still shows how the segmentation network could be used.
###Code
from example.utils import RandomDMriSegmentationDataset
dataset = RandomDMriSegmentationDataset(N=10, Q=8, num_b0=2, p_size=(10, 10, 10))
###Output
_____no_output_____
###Markdown
Note that the `RandomDMriSegmentationDataset` contains samples with the same p-size. This is just for simplicity of this example; in practice the `VoxelWiseSegmentationNetwork` can handle samples of different sizes (as it is fully-convolutional). In our training we, for example, cropped all scans to the bounding boxes of their brain masks to save memory and speed up the training.
Defining the Network
Now we define the network. The hyperparameters are the same as the ones used in our best model shown in the paper.
###Code
from equideepdmri.network.VoxelWiseSegmentationNetwork import VoxelWiseSegmentationNetwork
model = VoxelWiseSegmentationNetwork(
q_sampling_schema_in=dataset.q_sampling_schema,
pq_channels=[
[7, 4]
],
p_channels=[
[20, 5],
[10, 3],
[5, 2],
[1]
],
pq_kernel={
'kernel':'pq_TP',
'p_radial_basis_type':'cosine'
},
p_kernel={
'p_radial_basis_type':'cosine'
},
kernel_sizes=5,
non_linearity={
'tensor_non_lin':'gated',
'scalar_non_lin':'swish'
},
q_reduction={
'reduction':'length_weighted_average'
}
)
print(model)
###Output
VoxelWiseSegmentationNetwork(
(pq_layers): ModuleList(
(0): Sequential(
(conv): <EquivariantPQLayer (1,)->(11, 4)>
(non_linearity): GatedBlockNonLin()
)
)
(q_reduction_layer): QLengthWeightedAvgPool(
(radial_basis): FiniteElement_RadialBasis(
(model): FC()
)
)
(p_layers): ModuleList(
(0): Sequential(
(conv): <EquivariantPLayer (7, 4)->(25, 5)>
(non_linearity): GatedBlockNonLin()
)
(1): Sequential(
(conv): <EquivariantPLayer (20, 5)->(13, 3)>
(non_linearity): GatedBlockNonLin()
)
(2): Sequential(
(conv): <EquivariantPLayer (10, 3)->(7, 2)>
(non_linearity): GatedBlockNonLin()
)
(3): <EquivariantPLayer (5, 2)->(1,)>
)
)
###Markdown
Training the Network
Now we train the network using our random dataset. (This example may never converge as the data is random.) The following code is a simplified version of the training code we used for our paper: validation (and computing metrics), logging and saving predicted samples, saving checkpoints, and early stopping were removed for simplicity.
###Code
from torch import nn
from torch.utils.data.dataloader import DataLoader
from example.utils import compute_binary_label_weights
epochs = 3
dataloader = DataLoader(dataset=dataset, batch_size=1, shuffle=True)
pos_weight = compute_binary_label_weights(dataloader)
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
optimizer = torch.optim.Adam(model.parameters(), lr=5.0e-03)
for epoch in range(epochs):
for batch in iter(dataloader):
sample_ids, x, target, brain_mask = batch['sample_id'], batch['input'], batch['target'], batch['brain_mask']
assert brain_mask.size(0) == 1 and len(sample_ids) == 1 and target.size(0) == 1 and x.size(0) == 1, \
'Currently only batch-size 1 is supported'
sample_ids = sample_ids[0]
brain_mask = brain_mask.squeeze(0).bool() # (Z x Y x X)
target = target.squeeze(0)[brain_mask] # (num_non_masked_voxels)
# note: x is not squeezed as model expected batch dim, it is squeezed after model is applied
optimizer.zero_grad()
predicted_scores = model(x).squeeze(0) # (Z x Y x X)
predicted_scores = predicted_scores[brain_mask] # (num_non_masked_voxels)
loss = criterion(predicted_scores, target)
print('Loss:', float(loss))
loss.backward()
optimizer.step()
###Output
Loss: 14.580911636352539
Loss: 13.333956718444824
Loss: 10.981715202331543
Loss: 9.749805450439453
Loss: 7.926243305206299
Loss: 6.447773456573486
Loss: 5.228459358215332
Loss: 4.740058422088623
Loss: 4.1125640869140625
Loss: 3.674058675765991
Loss: 3.2416303157806396
Loss: 2.5309395790100098
Loss: 2.121861219406128
Loss: 1.8830928802490234
Loss: 1.4352169036865234
Loss: 1.174275279045105
Loss: 0.8579174876213074
Loss: 0.7635501027107239
Loss: 0.64266437292099
Loss: 0.5872637033462524
Loss: 0.5563927888870239
Loss: 0.5204508900642395
Loss: 0.5092914700508118
Loss: 0.5066750645637512
Loss: 0.5145171880722046
Loss: 0.47252941131591797
Loss: 0.48094403743743896
Loss: 0.4816356301307678
Loss: 0.4658873379230499
Loss: 0.45935654640197754
###Markdown
Introduction
Multi-instance (MI) machine learning approaches can be used to solve the issues of representing each molecule by multiple conformations (instances) and of automatically selecting the most relevant ones. In the multi-instance approach, an example (i.e., a molecule) is represented by a bag of instances (i.e., a set of conformations), and a label (a molecule property value) is available only for a bag (a molecule), but not for individual instances (conformations). Here, we report an application of the Multi-Instance Learning approach to predictive modeling of the enantioselectivity of chiral catalysts. Catalysts were represented by ensembles of conformers encoded by the pmapper physicochemical descriptors capturing the stereo configuration of the molecule. Each catalyzed chemical reaction was transformed to a Condensed Graph of Reaction, for which ISIDA fragment descriptors were generated. This approach does not require any alignment of conformers and can potentially be used for a diverse set of catalysts bearing different scaffolds.
Descriptors
Each reaction was transformed to a Condensed Graph of Reaction (CGR) with the CGRtools package. A CGR is a single graph which encodes an ensemble of reactants and products. It results from the superposition of the atoms of products and reactants having the same numbers. It contains both conventional chemical bonds (single, double, triple, aromatic, etc.) and so-called “dynamic” bonds describing chemical transformations, i.e. breaking or forming a bond or changing bond order. The resulting CGRs were encoded by ISIDA (In Silico Design and Data Analysis) fragment descriptors, counting the occurrence of particular subgraphs (structural fragments) of different topologies and sizes. In this study, atom-centered subgraphs containing a given atom with the atoms and bonds of its n coordination spheres (n = 1-4) are used.
For each catalyst, up to 50 conformations (**nconfs**) within a 10 kcal/mol energy window (**energy**) have been generated using the distance geometry algorithm implemented in RDKit. Conformations with RMSD values below 0.5Å with respect to already selected conformers were removed in order to reduce redundancy. Then, the selected conformers were encoded by a vector of pmapper descriptors. Each conformer is represented by an ensemble of physicochemical features assigned to atoms, functional groups, or rings: H-donor, H-acceptor, hydrophobic, or positively or negatively charged. Rings are characterized by either hydrophobic or aromatic features. All possible combinations of feature quadruplets are enumerated. Each quadruplet is encoded by a canonical signature, which contains information about the comprising features, the distances between them, and the stereoconfiguration. To enable fuzzy matching of quadruplets and identify similar ones, the distances between features are binned with a step of 1Å. Each unique quadruplet is considered a descriptor, and its count is the descriptor value. Vectors of 2D fragment reaction descriptors and 3D physicochemical quadruplets were then concatenated to form a combined reaction/catalyst descriptor vector.
The descriptor calculation function reads an RDF file with reactions where the CATALYST_SMILES field contains the catalyst SMILES. The SELECTIVITY field stores the experimental value of the selectivity (ΔΔG) of the reaction. The ID field contains a unique reaction index.
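As an illustration only, the conformer-generation step described above (distance geometry, RMSD pruning at 0.5 Å, 10 kcal/mol energy window) could look roughly like the following RDKit sketch; the molecule is a placeholder and this is not the miqssr implementation:

```python
# Illustrative sketch of the conformer-generation step described above (not the miqssr code).
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O'))  # placeholder molecule, not a real catalyst
cids = AllChem.EmbedMultipleConfs(mol, numConfs=50, pruneRmsThresh=0.5, randomSeed=42)  # distance geometry + RMSD pruning
energies = [e for converged, e in AllChem.MMFFOptimizeMoleculeConfs(mol)]  # MMFF94 energies (kcal/mol)
e_min = min(energies)
kept = [cid for cid, e in zip(cids, energies) if e - e_min <= 10]  # keep conformers within the 10 kcal/mol window
```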
###Code
import os
from miqssr.utils import calc_descriptors
input_fname = os.path.join('data', 'input_data.rdf')
nconfs = 50 # max number of conformers to generate
energy = 10 # energy window
ncpu = 20 # number of cpus
path = './descriptors' # where to store the calculated descriptors
out_fname = calc_descriptors(input_fname=input_fname, nconfs=nconfs, energy=energy, ncpu=ncpu, path=path)
###Output
_____no_output_____
###Markdown
Model training
The descriptors file contains the columns *react_id* (reaction index), *mol_title* (reaction name), and *act* (selectivity of the reaction). One should implement a function to create an n × m × k list of bags (n - number of reactions, m - bag size (number of conformers generated), k - number of descriptors).
###Code
import numpy as np
import pandas as pd
def load_data(fname):
#data = pd.read_csv(fname, index_col='react_id').sort_index()
data = pd.read_csv(fname, index_col='mol_id').sort_index()
bags, labels, idx = [], [], []
for i in data.index.unique():
bag = data.loc[i:i].drop(['mol_title', 'act'], axis=1).values
label = float(data.loc[i:i]['act'].unique()[0])
bags.append(bag)
labels.append(label)
idx.append(i)
return np.array(bags), np.array(labels), idx
dsc_fname = os.path.join('descriptors', 'PhFprPmapper_concat-data_50.csv') # descriptors file
bags, labels, idx = load_data(dsc_fname)
print(f'There are {len(bags)} reactions encoded with {bags[0].shape[1]} descriptors')
###Output
_____no_output_____
###Markdown
The training set was 384 reactions (24 catalysts × 16 substrate combinations = 384 reactions), and the external test set was composed of 691 reactions.
###Code
def train_test_split_default(bags, labels, idx):
test_idx = open('test_reactions.txt').read().split(',')  # reaction IDs held out for the external test set
x_train, x_test = [], []
y_train, y_test = [], []
idx_train, idx_test = [], []
for bag, label, i in zip(bags, labels, idx):
if i in test_idx:
x_test.append(bag)
y_test.append(label)
idx_test.append(i)
else:
x_train.append(bag)
y_train.append(label)
idx_train.append(i)
x_train, x_test, y_train, y_test = np.array(x_train), np.array(x_test), np.array(y_train), np.array(y_test)
return x_train, x_test, y_train, y_test, idx_train, idx_test
x_train, x_test, y_train, y_test, idx_train, idx_test = train_test_split_default(bags, labels, idx)
###Output
_____no_output_____
###Markdown
The number of generated pmapper descriptors may be quite large, which can hinder model training. A representative set of descriptors can be selected by removing redundant descriptors with rare occurrences. Namely, descriptors with non-zero values in less than N % of the training conformations are removed.
###Code
from sklearn.preprocessing import MinMaxScaler
def remove_dsc(bags, tresh_down=0.1, tresh_up=1):
bags_concat = np.concatenate(bags)
tresh_down = tresh_down * len(bags_concat)
tresh_up = tresh_up * len(bags_concat)
out = []
for dsc in range(bags_concat.shape[-1]):
p = sum(np.where(bags_concat[:, dsc] == 0, 0, 1))
if p < tresh_down or p > tresh_up:
out.append(dsc)
bags = [np.delete(bag, out, axis=1) for bag in bags]
return out, np.array(bags)
def scale_data(x_train, x_test):
scaler = MinMaxScaler()
scaler.fit(np.vstack(x_train))
x_train_scaled = x_train.copy()
x_test_scaled = x_test.copy()
for i, bag in enumerate(x_train):
x_train_scaled[i] = scaler.transform(bag)
for i, bag in enumerate(x_test):
x_test_scaled[i] = scaler.transform(bag)
return np.array(x_train_scaled), np.array(x_test_scaled)
out_dsc, x_train_selected = remove_dsc(x_train, tresh_down=0.1, tresh_up=1)
x_test_selected = np.array([np.delete(x, out_dsc, axis=1) for x in x_test])
x_train_scaled, x_test_scaled = scale_data(x_train_selected, x_test_selected)
###Output
_____no_output_____
###Markdown
Models were developed with a multi-instance neural network with an attention mechanism, which highlights the few reactive conformations responsible for the observed selectivity and ignores the irrelevant conformations that introduce noise into the modeling process. Namely, the attention mechanism assigns each conformation a weight from 0 to 1, determining its importance for predicting catalyst selectivity. The sum of all attention weights equals 1. During learning, each instance (conformation descriptor vector) runs through three fully-connected layers with 256, 128, and 64 hidden neurons (**ndim** parameter). The learned instance representations are then input to the attention network with 64 hidden neurons (**det_ndim**) and a number of output neurons equal to the number of input instances. The output neurons are followed by a Softmax unit, calculating attention weights for each instance. The learned instance representations are averaged using the attention weights, resulting in the embedding vector, which is used to predict selectivity. One should implement a protocol for optimizing the hyperparameters of the neural network model. Here we assign the optimal hyperparameters found with the *hyperopt* package.
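To make the attention pooling described above concrete, here is a minimal, self-contained PyTorch sketch of attention-weighted averaging over conformer embeddings. It illustrates the general mechanism only and is not the `AttentionNetRegressor` implementation:

```python
# Minimal sketch of attention pooling over instance (conformer) embeddings (illustrative, not miqssr code).
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, emb_dim=64, att_dim=64):
        super().__init__()
        # small network that scores each instance embedding
        self.attention = nn.Sequential(nn.Linear(emb_dim, att_dim), nn.Tanh(), nn.Linear(att_dim, 1))

    def forward(self, h):                                # h: (n_instances, emb_dim)
        w = torch.softmax(self.attention(h), dim=0)      # attention weights, sum to 1 over instances
        return (w * h).sum(dim=0), w                     # bag embedding and per-conformer weights

pool = AttentionPooling()
bag_embedding, weights = pool(torch.randn(50, 64))       # e.g. 50 conformer embeddings of size 64
```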
###Code
from miqssr.estimators.attention_nets import AttentionNetRegressor
ndim = (x_train_selected[0].shape[1], 256, 128, 64) # number of hidden layers and neurons in the main network
det_ndim = (64,) # number of hidden layers and neurons in the attention network
n_epoch = 100 # maximum number of learning epochs
lr = 0.001 # learning rate
weight_decay = 0.1 # l2 regularization
att_weight_dropout = 0.9 # attention weights regularization
batch_size = 64 # batch size
init_cuda = True # True if GPU is available
net = AttentionNetRegressor(ndim=ndim, det_ndim=det_ndim, init_cuda=init_cuda)
net.fit(x_train_selected, y_train,
n_epoch=n_epoch,
lr=lr,
weight_decay=weight_decay,
dropout=att_weight_dropout,
batch_size=batch_size)
from sklearn.metrics import r2_score, mean_absolute_error
y_pred = net.predict(x_test_selected)
print(f'Determination coefficient (test set): {r2_score(y_test, y_pred):.2f}')
print(f'ΔΔG Mean absolute error (test set): {mean_absolute_error(y_test, y_pred):.2f} kcal/mol')
###Output
Determination coefficient (test set): 0.81
ΔΔG Mean absolute error (test set): 0.23 kcal/mol
###Markdown
Example without one-hot
###Code
data = pd.read_csv('./data/titanic.csv')
data.head()
# from sklearn.datasets import load_breast_cancer
# X,y = load_breast_cancer(True)
# data = data.drop(['Name','Ticket','Cabin','Embarked','PassengerId'],axis=1)
data = data.drop(['Name'], axis=1)
data = data.dropna()
X_df = data.drop(['Survived'],axis=1)
y = data['Survived']
for d in X_df.columns[X_df.dtypes=='O']:
le = LabelEncoder()
X_df[d] = le.fit_transform(X_df[d])
print(d," ..... ",le.classes_)
X = X_df.values
y = y.values
X_train, X_test, y_train, y_test = train_test_split(X,y)
clf_titanic = RandomForestClassifier(n_estimators=10).fit(X_train, y_train)
idx_test = 0
ga_titanic = GAdvExample(feature_names=list(X_df.columns),
sol_per_pop=30, num_parents_mating=10, cat_vars_ohe=None,
num_generations=100, n_runs=10, black_list=[],
verbose=False, beta=.95)
x_all, x_changes, x_sucess = ga_titanic.attack(clf_titanic, x=X_test[idx_test,:],x_train=X_train)
ga_titanic.results
plot_graph(x_changes, 0)
x_changes
###Output
_____no_output_____
###Markdown
IRIS
###Code
import numpy as np
from sklearn import datasets
iris_X, iris_y = datasets.load_iris(return_X_y=True)
np.unique(iris_y)
names = datasets.load_iris().feature_names
np.random.seed(0)
indices = np.random.permutation(len(iris_X))
X_train = iris_X[indices[:-10]]
y_train = iris_y[indices[:-10]]
X_test = iris_X[indices[-10:]]
y_test = iris_y[indices[-10:]]
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
pd.crosstab(knn.predict(X_train),y_train)
inds = np.where(knn.predict(X_train) !=y_train)
inds, y_train[inds]
idx_test = 37
ga_iris = GAdvExample(feature_names=names, target=None,
sol_per_pop=30, num_parents_mating=10, cat_vars_ohe=None,
num_generations=100, n_runs=10, black_list=[0,2],
verbose=False, beta=.95)
x_all, x_changes, x_sucess = ga_iris.attack(knn, x=X_train[idx_test,:],x_train=X_train)
ga_iris.results
plot_graph(x_changes, 0, False)
###Output
No edges exist!
###Markdown
Writing xarray -> COGs

Hi all,

I'm looking for guidance / best practices on writing an xarray object to (a collection of) COGs. Let's start with a common case of a DataArray that's indexed by `(time, band, y, x)`. Let's also assume that it's a chunked DataArray, with a chunksize of 1 for `time` and `band`, and it might be chunked along `y` and `x` as well.

My high-level questions:

1. Does rioxarray's `.rio.to_raster(path, driver="COG")` have the right defaults? Anything special we should do to make sure we write "good" COGs for a single chunk?
2. Is there an established convention for organizing a directory of COG files that represent a 4-d datacube?

I'm particularly interested in item 2. My proposed naming convention is

```
<prefix>/time=<time>/band=<band>-y=<y>-x=<x>.tif
```

This works well for xarray: we have coordinate information available when writing the chunk, so we can safely generate a unique name for a chunk using the `(time, band, y, x)` coordinates of, say, the top-left value in the chunk.

Here's a small example:
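First, though, a hypothetical sketch of how such a chunk path could be generated from a chunk's coordinates; this helper is only illustrative and not part of xcog:

```python
# Hypothetical helper illustrating the proposed naming convention (not part of xcog).
def chunk_path(prefix, time, band, y, x):
    # (time, band, y, x) are the coordinates of the chunk's top-left value
    return f"{prefix}/time={time}/band={band}-y={y}-x={x}.tif"

chunk_path("/tmp/cogs", "2021-01-01T17:07:19.024000", "B02", 4800000.0, 399960.0)
```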
###Code
!pip install -q -U --no-deps git+https://github.com/TomAugspurger/xcog
###Output
_____no_output_____
###Markdown
Data generation
We'll mock up some data that has the right structure for pystac / rioxarray to do their thing.
###Code
import xarray as xr
import numpy as np
import dask.array as da
import stackstac
import rioxarray
import pystac
import pandas as pd
values = da.random.uniform(size=(2, 3, 10980, 10980), chunks=(1, 1, 5490, 5490))
x = np.arange(399960, 509751, step=10.)
y = np.arange(4800000, 4690210 - 1, step=-10.)
band = np.array(["B02", "B03", "B04"])
time = pd.to_datetime(["2021-01-01T17:07:19.024000000", "2021-01-04T17:17:19.024000000"])
data = xr.DataArray(
values,
dims=("time", "band", "y", "x"),
coords={
"time": xr.DataArray(time, name="time", dims="time"),
"band": xr.DataArray(band, name="band", dims="band"),
"y": xr.DataArray(y, name="y", dims="y"),
"x": xr.DataArray(x, name="x", dims="x"),
"common_name": xr.DataArray(['blue', 'green', 'red'], dims="band", name="common_name"),
"center_wavelength":xr.DataArray([0.49 , 0.56 , 0.665], dims="band", name="center_wavelength"),
"full_width_half_max": xr.DataArray([0.098, 0.045, 0.038], dims="band", name="full_width_half_max"),
},
attrs={
"crs": "epsg:32615",
},
)
data
###Output
_____no_output_____
###Markdown
Data writing
We're using `xcog` here, a simple little library with some utilities for writing out chunks. We'll write to local disk, but we should be able to use any fsspec-compatible file-system.
###Code
from pathlib import Path
import xcog
dst = Path("/tmp/cogs/")
dst.mkdir(parents=True, exist_ok=True)
template = xcog.make_template(data)
r = data.map_blocks(
xcog.write_block,
kwargs=dict(
prefix=str(dst),
storage_options=dict(auto_mkdir=True),
),
template=template
)
r
%time result = r.compute()
###Output
ERROR 4: `/vsimem/0333842e-13d1-4d4c-bae8-91078f97cfea/0333842e-13d1-4d4c-bae8-91078f97cfea.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/3277a413-6d34-4fa4-bd6b-ffe4f84e24a7/3277a413-6d34-4fa4-bd6b-ffe4f84e24a7.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/25cbb39e-c42d-4371-8ec3-87635548454e/25cbb39e-c42d-4371-8ec3-87635548454e.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/58437de5-77dd-4328-b7c3-a40fcc9f5473/58437de5-77dd-4328-b7c3-a40fcc9f5473.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/97ca0b27-49a3-4408-a59f-cf5d84cb6514/97ca0b27-49a3-4408-a59f-cf5d84cb6514.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/6746481a-aa5e-4a7b-ba60-9188f8c0cb3d/6746481a-aa5e-4a7b-ba60-9188f8c0cb3d.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/a5fc1e2f-05da-40e1-b426-d6ca0cdfd1a2/a5fc1e2f-05da-40e1-b426-d6ca0cdfd1a2.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/5b58cf03-9dbf-481d-b906-394c0c6a35a0/5b58cf03-9dbf-481d-b906-394c0c6a35a0.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/1ef50cf2-16e8-4b8a-a996-79702c169547/1ef50cf2-16e8-4b8a-a996-79702c169547.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/939a1672-6ccf-45e0-9fad-f533e8e49cd2/939a1672-6ccf-45e0-9fad-f533e8e49cd2.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/735c91ac-2ce6-470d-9989-4872b1c407b3/735c91ac-2ce6-470d-9989-4872b1c407b3.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/321482dd-8af7-48a1-ab3d-26ba6a2b3a26/321482dd-8af7-48a1-ab3d-26ba6a2b3a26.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/fafb6251-03f2-4515-88e4-5c51e6ea2924/fafb6251-03f2-4515-88e4-5c51e6ea2924.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/d944aa38-3947-4503-ae1a-5e384132078f/d944aa38-3947-4503-ae1a-5e384132078f.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/544de44e-5574-48f6-afcb-18b2bd8d5e75/544de44e-5574-48f6-afcb-18b2bd8d5e75.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/ef7dc61a-97e9-4258-9f77-77061824677a/ef7dc61a-97e9-4258-9f77-77061824677a.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/f2b2489a-5382-4284-93ae-f84bffeb327b/f2b2489a-5382-4284-93ae-f84bffeb327b.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/f131c431-965e-4b46-8ce7-62797c293e3c/f131c431-965e-4b46-8ce7-62797c293e3c.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/c14c0381-be15-4955-9ad7-94294fe883e6/c14c0381-be15-4955-9ad7-94294fe883e6.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/472be3d2-fc92-450c-ad9f-5b87b07b5c17/472be3d2-fc92-450c-ad9f-5b87b07b5c17.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/17fd26c8-a442-4be3-ae04-ef8a8b443f13/17fd26c8-a442-4be3-ae04-ef8a8b443f13.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/417c54a8-39e1-4faa-9c9a-b4ae598ec1fa/417c54a8-39e1-4faa-9c9a-b4ae598ec1fa.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/603a2f00-8dd8-45aa-8f5f-31d3640b7821/603a2f00-8dd8-45aa-8f5f-31d3640b7821.tif' not recognized as a supported file format.
ERROR 4: `/vsimem/381bd740-f95f-4f20-88aa-bbe7a660e5e7/381bd740-f95f-4f20-88aa-bbe7a660e5e7.tif' not recognized as a supported file format.
###Markdown
Here are the paths of the COGs we wrote out:
###Code
!tree /tmp/cogs/
###Output
/tmp/cogs/
├── time=2021-01-01T17:07:19.024000
│   ├── band=B02-y=4745100.0-x=399960.0.tif
│   ├── band=B02-y=4745100.0-x=454860.0.tif
│   ├── band=B02-y=4800000.0-x=399960.0.tif
│   ├── band=B02-y=4800000.0-x=454860.0.tif
│   ├── band=B03-y=4745100.0-x=399960.0.tif
│   ├── band=B03-y=4745100.0-x=454860.0.tif
│   ├── band=B03-y=4800000.0-x=399960.0.tif
│   ├── band=B03-y=4800000.0-x=454860.0.tif
│   ├── band=B04-y=4745100.0-x=399960.0.tif
│   ├── band=B04-y=4745100.0-x=454860.0.tif
│   ├── band=B04-y=4800000.0-x=399960.0.tif
│   └── band=B04-y=4800000.0-x=454860.0.tif
└── time=2021-01-04T17:17:19.024000
    ├── band=B02-y=4745100.0-x=399960.0.tif
    ├── band=B02-y=4745100.0-x=454860.0.tif
    ├── band=B02-y=4800000.0-x=399960.0.tif
    ├── band=B02-y=4800000.0-x=454860.0.tif
    ├── band=B03-y=4745100.0-x=399960.0.tif
    ├── band=B03-y=4745100.0-x=454860.0.tif
    ├── band=B03-y=4800000.0-x=399960.0.tif
    ├── band=B03-y=4800000.0-x=454860.0.tif
    ├── band=B04-y=4745100.0-x=399960.0.tif
    ├── band=B04-y=4745100.0-x=454860.0.tif
    ├── band=B04-y=4800000.0-x=399960.0.tif
    └── band=B04-y=4800000.0-x=454860.0.tif
2 directories, 24 files
###Markdown
Read back STAC + COGs
Our `result` DataArray is a bunch of STAC items (one per original chunk).
###Code
result[0, 0, 0, 0].item()
###Output
_____no_output_____
###Markdown
We can group those together (all the assets with the same ID are merged into a single item)
###Code
new_items = xcog.collate(result)
new_items[:5]
###Output
_____no_output_____
###Markdown
And those can be fed back to stackstac, so kind of a round-trip from DataArray -> {STAC + COG} -> DataArray
###Code
stackstac.stack([x.to_dict() for x in new_items], chunksize=5490).groupby("time").apply(stackstac.mosaic)
###Output
_____no_output_____
###Markdown
Example use of the package
###Code
import etf_db as etf
import seaborn as sns
import pandas as pd
import pickle
###Output
_____no_output_____
###Markdown
Download the data and store it locally
###Code
data = etf.download_clean_public_data()
with open('data.pickle', 'wb') as handle:
pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('data.pickle', 'rb') as handle:
data = pickle.load(handle)
data.head()
###Output
_____no_output_____
###Markdown
Explore it in the way you wish
###Code
sns.distplot(data['price'])
###Output
_____no_output_____
###Markdown
Make a function `interval_count` that is called on the intervals in windows of size 5. Note that the `window` decorator only handles a single chromosome so you always need to group your data by chromosome:
###Code
@window(size=5)
def interval_count(df):
return len(df.index)
df = data.groupby('chrom').apply(interval_count)
df
###Output
_____no_output_____
###Markdown
You can get rid of the extra index like this:
###Code
df.reset_index(drop=True, level=-1)
###Output
_____no_output_____
###Markdown
You can further convert the index to columns like this:
###Code
df.reset_index(drop=True, level=-1).reset_index()
###Output
_____no_output_____
###Markdown
You can group by more than just the chromosome if you like:
###Code
data.groupby(['chrom', 'species']).apply(interval_count).reset_index(drop=True, level=-1).reset_index()
###Output
_____no_output_____
###Markdown
You can use the `even` keyword to put approximately the same number of intervals in each window (to the extent that this is possible):
###Code
@window(size=10)
def interval_sum(df):
return (df.end-df.start).sum()
data.groupby('chrom').apply(interval_sum).reset_index(drop=True, level=-1).reset_index()
###Output
_____no_output_____
###Markdown
You can return any number of values from your function. Just do so as a Series or a dictionary:
###Code
@window(size=10)
def multiple_stats(df):
# return a Series
return df[['analysis','run']].sum()
data.groupby(['chrom']).apply(multiple_stats).reset_index(drop=True, level=-1).reset_index()
@window(size=10)
def multiple_stats(df):
# return dictionary
return dict(tot_length=(df.end-df.start).sum(), interval_count=len(df), mean_length=(df.end-df.start).mean())
data.groupby(['chrom']).apply(multiple_stats).reset_index(drop=True, level=-1).reset_index()
@window(size=100000000, empty=True, fill='hg19')
def count1(df):
return len(df.index)
data.groupby('chrom').apply(count1).reset_index(drop=True, level=-1).reset_index()
###Output
_____no_output_____
###Markdown
Use the `logbase` argument to make windows increase logarithmically with the specified base, starting from `size`. This is useful if the density of intervals decreases with distance (e.g. relative to some annotation).
###Code
@window(size=2, logbase=2)
def count2(df):
return len(df.index)
data.groupby('chrom').apply(count2).reset_index(drop=True, level=-1).reset_index()
###Output
_____no_output_____
###Markdown
If you get fed up with adding `.reset_index(drop=True, level=-1).reset_index()`, you can make your own reset_index function to pipe it through:
###Code
def reset_group_index(df):
return df.reset_index(drop=True, level=-1).reset_index()
@window(size=10)
def count(df):
return len(df.index)
data.groupby(['chrom']).apply(count).pipe(reset_group_index)
###Output
_____no_output_____
###Markdown
tables, (x)arrays, and rastersHi all,I've been playing around with some ideas for working with geospatial raster data. I'd be curious for any feedback you have.The core question: *what's the best data model for raster data in Python?* Unsurprisingly, I think the answer is "it depends". Let's use work through a concrete task and evaluate the various options. Suppose we wanted to compute NDVI for all the scenes captured by Landsat 8 over a couple of hours.We'll use the Planetary Computer's STAC API to find the scenes, and geopandas to plot the bounding boxes of each scene on a map.
###Code
import warnings
warnings.simplefilter("ignore", FutureWarning)
import pystac_client
import geopandas
import planetary_computer
import pystac
import pandas as pd
catalog = pystac_client.Client.open(
"https://planetarycomputer.microsoft.com/api/stac/v1"
)
items = catalog.search(
collections=["landsat-8-c2-l2"],
datetime="2021-07-01T08:00:00Z/2021-07-01T10:00:00Z"
).get_all_items()
items = [planetary_computer.sign(item) for item in items]
items = pystac.ItemCollection(items, clone_items=False)
df = geopandas.GeoDataFrame.from_features(items.to_dict(), crs="epsg:4326")
# https://github.com/geopandas/geopandas/issues/1208
df["id"] = [x.id for x in items]
m = df[["geometry", "id", "datetime"]].explore()
m
###Output
_____no_output_____
###Markdown
This type of data *can* be represented as an xarray DataArray. But it's not the most efficient way to store the data:
###Code
import stackstac
ds = stackstac.stack(
[x.to_dict() for x in items],
assets=["SR_B2", "SR_B3", "SR_B4", "SR_B5"],
epsg=32631,
chunksize=(7691, 7531)
)
ds
###Output
_____no_output_____
###Markdown
To build this `(time, band, y, x)` DataArray, we end up with many missing values. If you think about the data*cube* literally, with some "volume" of observed pixels, we have a lot of empty space. In this case, the DataArray takes 426 TiB to store. Even if we collapse the time dimension, which probably makes sense for this dataset, we still have empty space in the "corners".
###Code
ds2 = stackstac.mosaic(ds)
ds2
###Output
_____no_output_____
###Markdown
This helps a lot, getting us down to 3.6 TiB (the curse of dimensionality works in reverse too!) But it's still not as efficient as possible because of that empty space in the corners for this dataset. To actually load all these rasters into, say, a list would take much less memory.
###Code
import dask
import math
assets = ["SR_B2", "SR_B3", "SR_B4", "SR_B5"]
dask.utils.format_bytes(sum([
8 * math.prod(item.assets[asset].extra_fields["proj:shape"])
for item in items
for asset in assets
]))
###Output
_____no_output_____
###Markdown
So *for this dataset* (I cannot emphasize that enough; this example was deliberately designed to look bad for a data cube) it doesn't make sense to model the data as a DataArray.

| data model | memory (TiB) |
| ---------- | ------ |
| xarray `(time, band, y, x)` | 426 |
| xarray `(band, y, x)` | 3.6 |
| list | 0.2 |

I haven't really considered an `xarray.Dataset` here. I suspect that the memory usage could get down to approximately what would be required by a list of rasters. That said, something like the following seems to cause some issues.

```python
import xarray as xr

arrays = {item.id: stackstac.stack(item.to_dict(), assets=assets, chunksize=-1) for item in items}
ds3 = xr.Dataset(arrays)
```

This causes warnings from Dask about slicing an array producing many chunks. I haven't looked into why (slightly overlapping / offset x and y coordinates?). I have a feeling that this would be a bit "untidy", but I haven't worked with Datasets much.

In the Python data science space, we're fortunate to have both xarray and pandas (and geopandas and dask.dataframe). So we have choices! pandas provides an [extension array interface](https://pandas.pydata.org/docs/development/extending.html#extension-types) to store non-NumPy arrays inside a pandas DataFrame. What would it look like to store STAC items (and more interestingly, rasters stored as DataArrays) inside a pandas DataFrame? Here's a prototype: Let's load those STAC items into an "ItemArray".
###Code
import rasterpandas
sa = rasterpandas.ItemArray(items)
sa
###Output
_____no_output_____
###Markdown
That `ItemArray` can be put inside a pandas Series:
###Code
series = pd.Series(sa, name="stac_items")
series
###Output
_____no_output_____
###Markdown
Pandas lets you register accessors. For example, we could have a `stac` accessor that knows how to do stuff with STAC metadata, for example adding a column for each asset in the collection.
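As background on the accessor mechanism, here is a minimal sketch of pandas' registration API; the accessor below is made up for illustration and is not the actual rasterpandas `stac` accessor:

```python
# Minimal sketch of registering a custom Series accessor (illustrative, not the rasterpandas code).
import pandas as pd

@pd.api.extensions.register_series_accessor("stac_demo")
class StacDemoAccessor:
    def __init__(self, series):
        self._series = series

    def ids(self):
        # e.g. pull the `id` off each STAC item stored in the series
        return pd.Series([item.id for item in self._series], index=self._series.index)
```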
###Code
rdf = series[:10].stac.with_rasters(assets=["SR_B2", "SR_B3", "SR_B4", "SR_B5"])
rdf
###Output
_____no_output_____
###Markdown
Now things are getting more interesting! The repr is a bit messy, but this new DataFrame has a column for each of the blue, green, red, and nir bands. Each of those is a column of rasters. And each raster is just an xarray.DataArray!
###Code
rdf.iloc[1, 1]
###Output
_____no_output_____
###Markdown
And we can have fun with operations. For example, computing NDVI on two columns:
###Code
ndvi = rdf.raster.ndvi("SR_B4", "SR_B5")
type(ndvi)
###Output
_____no_output_____
###Markdown
That returned a pandas Series. Each element is again a raster:
###Code
ndvi.iloc[0]
###Output
_____no_output_____
###Markdown
Data Processing
Set time and regional domain
###Code
#"""
region = 'CONUS'
ilon_start = 45
ilon_end = 120
ilat_start = 110
ilat_end = 150
#"""
isl = 1
pcp_thrs = 0
YYYY_list = [2012];
###Output
_____no_output_____
###Markdown
read data
###Code
data_dir = './data/'
island_fname = 'island_1deg.nc'
ncin_island = Dataset(data_dir+island_fname,'r')
island_in = ncin_island.variables['island'][ilat_start:ilat_end,ilon_start:ilon_end]
nYYYY = np.shape(YYYY_list)[0]
for iYYYY in range(nYYYY):
YYYY = YYYY_list[iYYYY];
WWLLN_F_fname = 'WWLLN_'+str(YYYY)+'_F_cg_1deg3hr_US.nc'
ERA5_cape_fname = 'ERA5_'+str(YYYY)+'_cape_cg_1deg3hr_US.nc'
TRMM_pcp_fname = 'TRMM_'+str(YYYY)+'_pcp_cg_1deg3hr_US.nc'
ncin_F = Dataset(data_dir+WWLLN_F_fname,'r')
ncin_cape = Dataset(data_dir+ERA5_cape_fname,'r')
ncin_pcp = Dataset(data_dir+TRMM_pcp_fname,'r')
if (iYYYY==0):
F_in = ncin_F.variables['F'][:,:,:]
cape_in = ncin_cape.variables['cape'][:,:,:]
pcp_in = ncin_pcp.variables['pcp'][:,:,:]
else:
F_in = np.append(F_in,ncin_F.variables['F'][:,:,:],axis=0)
cape_in = np.append(cape_in,ncin_cape.variables['cape'][:,:,:],axis=0)
pcp_in = np.append(pcp_in,ncin_pcp.variables['pcp'][:,:,:],axis=0)
F_in = F_in * (1/((111.19492664455873)**2)) * (365.25*8) # turn unit into [km-2 yr-1]
isLightning_in = np.where(F_in>0,1,0)
sqrtcape_in = cape_in ** 0.5;
island_in3d = np.broadcast_to(island_in, F_in.shape)
mask_island = np.where(island_in3d==1, 1, np.nan);
print(mask_island.shape)
F_lnd = F_in*mask_island
isLightning_lnd = isLightning_in*mask_island
cape_lnd = cape_in*mask_island
sqrtcape_lnd = sqrtcape_in*mask_island
pcp_lnd = pcp_in*mask_island
dataset = pd.DataFrame(data=np.column_stack((F_lnd.ravel(),isLightning_lnd.ravel(),cape_lnd.ravel(),pcp_lnd.ravel())), columns=['F','IL','CAPE','pcp']).dropna()
###Output
_____no_output_____
###Markdown
check data
###Code
dataset.info(verbose=True)
###Output
_____no_output_____
###Markdown
formatting input (training/test) data
###Code
from sklearn.model_selection import train_test_split
feature_names = ['CAPE','pcp']
output_name = ['IL']
X = dataset[feature_names]
y = dataset[output_name]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=None)
print(X_train.info())
print(y_train.info())
###Output
_____no_output_____
###Markdown
ML R14
###Code
import scipy as sp
from sklearn.metrics import accuracy_score, precision_score, f1_score, confusion_matrix
from sklearn.preprocessing import normalize
class R14:
    @staticmethod
    def fit(CAPE, pcp, y):
        # threshold the CAPE*pcp proxy; the fminbound-based threshold search is kept commented out
        #thrs = sp.optimize.fminbound(lambda x: -f1_score(y, ((CAPE*pcp > x) * 1.0).astype(int)), 0, 4000)
        thrs = 0.1
        fval = f1_score(y, ((CAPE*pcp >= thrs) * 1.0).astype(int))
        return thrs, fval

    @staticmethod
    def predict(CAPE, pcp, thrs):
        y_predict = ((CAPE*pcp >= thrs) * 1.0).astype(int)
        y_predict_proba = CAPE*pcp
        return y_predict, y_predict_proba/np.max(y_predict_proba)
[r14_thrs,fval] = R14.fit(X_train['CAPE'],X_train['pcp'],y_train)
print(r14_thrs, fval)
y_predict_r14, y_predict_prob_r14 = R14.predict(X_test['CAPE'],X_test['pcp'],r14_thrs)
###Output
_____no_output_____
###Markdown
random forest
###Code
from sklearn.ensemble import RandomForestClassifier
#rfclf = RandomForestClassifier(n_estimators=10, max_depth=4, min_samples_split=1000, random_state=0)
rfclf = RandomForestClassifier(n_estimators=3, max_depth=1, min_samples_split=1000, random_state=0)
rfclf.fit(X_train[feature_names], y_train[output_name])
y_predict_rfclf = rfclf.predict(X_test[feature_names])
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
from sklearn import metrics
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
from sklearn.metrics import plot_roc_curve
print(precision_score(y_test['IL'], y_predict_rfclf))
print(precision_score(y_test['IL'], y_predict_r14))
print(recall_score(y_test['IL'], y_predict_rfclf))
print(recall_score(y_test['IL'], y_predict_r14))
print(f1_score(y_test['IL'], y_predict_rfclf))
print(f1_score(y_test['IL'], y_predict_r14))
auc_rfclf = metrics.roc_auc_score(y_test, rfclf.predict_proba(X_test)[:,1])
auc_r14 = metrics.roc_auc_score(y_test, y_predict_prob_r14)
print(auc_rfclf, auc_r14)
xthrs = np.linspace(0,4000,20)
fpr = []
tpr = []
for i in range(np.size(xthrs)):
yp, fv = R14.predict(X_test['CAPE'],X_test['pcp'],xthrs[i])
tn, fp, fn, tp = confusion_matrix(y_test['IL'], yp).ravel()
fpr.append( (fp/(fp+tn)) )
tpr.append( (tp/(tp+fn)) )
print(tpr)
plot_roc_curve(rfclf, X_test, y_test, label='RFC (AUC = %0.2f)'%(auc_rfclf) )
plt.plot(fpr, tpr, 'r-',label='R14 (AUC = %0.2f)'%(auc_r14))
plt.legend(fontsize=16)
# avoid interactive calls for running as a script
#plt.show()
plt.savefig('roc.pdf')
pd.DataFrame(
confusion_matrix(y_test['IL'], y_predict_rfclf),
columns=['Predicted No Lightning', 'Predicted Lightning'],
index=['True No Lightning', 'True Lightning']
)
pd.DataFrame(
confusion_matrix(y_test['IL'], y_predict_r14),
columns=['Predicted No Lightning', 'Predicted Lightning'],
index=['True No Lightning', 'True Lightning']
)
###Output
_____no_output_____
###Markdown
Imports
###Code
from barycentricLagrangeInterpolation import InterpolatingFunction
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
If the barycentricInterpolation module cannot be found ("ModuleNotFoundError: No module named 'barycentricInterpolation'"), the installation of this module failed. You probably just forgot to run `(sudo -H) python3 setup.py install` in the base directory of this project. For further information, see the readme on GitHub. Preparation: If you need further information on the mathematical background and the literature behind the following steps, see the readme on GitHub. Please don't be alarmed by warnings about division by zero or multiplication with NaN; all of these cases are handled properly. Before we can interpolate a function, we have to do some preparation. First we define the number of nodes (nNodes).
###Code
nNodes = 14
###Output
_____no_output_____
###Markdown
Now we create an object of the InterpolatingFunction class.
###Code
f = InterpolatingFunction()
###Output
initializing interpolating function class
###Markdown
Here we calculate the positions of the nodes, namely Chebyshev points of the second kind (also known as Gauss-Lobatto grid points). This choice is essential to obtain as smooth an interpolant as possible and to avoid effects like Runge's phenomenon. All nodes lie on the interval [-1,1].
###Code
nodes = f.calculateNodes(nNodes,plot=True)
###Output
_____no_output_____
###Markdown
Next we calculate the barycentric weight of every node. These weights later enable us to interpolate and differentiate functions in a way that is extremely simple, highly performant and numerically very stable.
###Code
weights = f.calculateWeights(nodes,plot=True)
###Output
_____no_output_____
###Markdown
Now we are able to generate the basis functions of the interpolation. For every node there is exactly one basis function. These basis functions are independent of the values of the function to be interpolated, which is probably the most important advantage of barycentric interpolation.
###Code
l= f.calculateBasisFunctions(nodes,weights,1000,plot=True)
###Output
_____no_output_____
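###Markdown
To make the mechanics concrete, here is a minimal, self-contained NumPy sketch of the second barycentric formula on Chebyshev points of the second kind. This is an independent illustration under the standard formulas, not the package's implementation.
###Code
import numpy as np
def cheb_nodes(n):
    # Chebyshev points of the second kind on [-1, 1]
    return np.cos(np.pi * np.arange(n) / (n - 1))
def cheb_weights(n):
    # closed-form barycentric weights for these nodes: (-1)^j, halved at the endpoints
    w = (-1.0) ** np.arange(n)
    w[0] *= 0.5
    w[-1] *= 0.5
    return w
def bary_interp(x_eval, nodes, weights, values):
    # second barycentric formula: p(x) = sum_j w_j f_j/(x-x_j) / sum_j w_j/(x-x_j)
    diff = x_eval[:, None] - nodes[None, :]
    hit = np.isclose(diff, 0.0)
    diff[hit] = 1.0                       # avoid division by zero at the nodes
    tmp = weights / diff
    p = (tmp @ values) / tmp.sum(axis=1)
    rows, cols = np.nonzero(hit)
    p[rows] = values[cols]                # exact node hits return the node value
    return p
x = cheb_nodes(14)
print(bary_interp(np.linspace(-1, 1, 5), x, cheb_weights(14), x ** 2))  # ~ [1, 0.25, 0, 0.25, 1]
###Output
_____no_output_____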
###Markdown
Interpolate Testfunction Now we are ready to interpolate a test function. First we have to calculate the values of the test function at the sampling points. Here f(x)=x^2 is chosen (a cube-root variant is left commented out).
###Code
values=np.power(nodes,2)
#values=np.cbrt(nodes)
plt.plot(nodes,values)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Now we generate the interpolating function.
###Code
p=f.interpolateFunction(values,l,plot=True,nodes=nodes)
###Output
_____no_output_____
###Markdown
Differentiate Testfunction Before we can differentiate the test function, we have to create a differentiation matrix. Here the matrix for the first derivative is calculated, but you can switch to a higher derivative simply by adapting the first parameter accordingly.
###Code
D = f.differentiationMatrix(1,nodes,weights)
###Output
_____no_output_____
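###Markdown
For reference, the standard barycentric differentiation matrix (again a hedged, independent sketch rather than the package's code) has off-diagonal entries $D_{ij} = (w_j/w_i)/(x_i - x_j)$ and diagonal entries chosen as the negative row sums:
###Code
import numpy as np
def diff_matrix(nodes, weights):
    n = len(nodes)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (weights[j] / weights[i]) / (nodes[i] - nodes[j])
        D[i, i] = -D[i].sum()   # "negative sum trick" for numerical stability
    return D
# der_at_nodes = diff_matrix(nodes, weights) @ values   # approximates f'(x_j) at the nodes
###Output
_____no_output_____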
###Markdown
Now we can calculate the values of the derivative at the sampling points.
###Code
der_nodes = f.derivative(values,D)
###Output
_____no_output_____
###Markdown
Finally, we generate an interpolating function for the derivative.
###Code
der=f.interpolateFunction(der_nodes,l,plot=True,nodes=nodes)
###Output
_____no_output_____
###Markdown
Short example on how to parse a whole file: In my main blog post I walked through the steps of how I managed to extract tabular data from a PDF. I wrapped the whole thing in a few functions to make extracting from an entire file possible. First we import the relevant function:
###Code
from PDFFixup.fixer import get_tables
###Output
_____no_output_____
###Markdown
Next we run it over the whole file:
###Code
file_path = "data/DH_Ministerial_gifts_hospitality_travel_and_external_meetings_Jan_to_Mar_2015.pdf"
extracted_table = get_tables(file_path)
len(extracted_table)
###Output
_____no_output_____
###Markdown
The returned object is a list of pages, each page containing the tabular data:
###Code
extracted_table[2]
###Output
_____no_output_____
###Markdown
To get things into a format that can be dumped into CSV, we need to do a bit more work. The lists returned for each row can have different lengths; this reflects the differing column layouts in the original tables. To get around this we simply pad each row to the same length. The code below does this, concatenates the pages and saves the whole thing as a csv file:
###Code
def table_to_csv(extracted_table):
max_length = 0
#concatenate the pages
concatenated_table = [row for page in extracted_table for row in page]
#find the maximum length
for row in concatenated_table:
if len(row) > max_length:
max_length = len(row)
# convert to string
out = ""
for row in concatenated_table:
# pad the row
if len(row) < max_length:
row += [""] * (max_length - len(row))
out += ",".join(row) + "\n"
return out
csved = table_to_csv(extracted_table)
# Note: you might want to change the encoding, depending on what format your document is
open("data/example_out.csv", "wb").write(csved.encode("utf-8"))
###Output
_____no_output_____
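###Markdown
A hedged alternative sketch: if any extracted cell can itself contain a comma or a newline, writing through Python's csv module keeps the output well-formed. The function below (hypothetical name, same padding idea as above) is not part of the original post:
###Code
import csv
def table_to_csv_file(extracted_table, path):
    # same padding logic as table_to_csv, but csv.writer handles quoting of any
    # cells that themselves contain commas or newlines
    rows = [row for page in extracted_table for row in page]
    width = max(len(row) for row in rows)
    with open(path, "w", newline="", encoding="utf-8") as fp:
        writer = csv.writer(fp)
        for row in rows:
            writer.writerow(list(row) + [""] * (width - len(row)))
###Output
_____no_output_____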
###Markdown
Or make p_drophead a parameter of the model
###Code
import torch.nn as nn
from transformers import XLMRobertaModel
class XLMRobertaClf(nn.Module):
    def __init__(self, p_drophead=0.1):
        super().__init__()
        self.any_backbone_name = XLMRobertaModel.from_pretrained("xlm-roberta-base")
        set_drophead(self.any_backbone_name, p_drophead)
        self.clf = nn.Linear(self.any_backbone_name.pooler.dense.out_features, 1)
    def forward(self, ids):
        # pooled representation -> single logit; .pooler_output assumes a reasonably
        # recent transformers version (older releases return a (sequence, pooled) tuple)
        x = self.any_backbone_name(ids).pooler_output
        x = self.clf(x)
        return x
model = XLMRobertaClf(p_drophead=0.2)
###Output
_____no_output_____
###Markdown
Example for each class
###Code
import numpy as np
from dipy.sims.voxel import multi_tensor, multi_tensor_odf
from dipy.core.sphere import disperse_charges, HemiSphere
from dipy.core.gradients import gradient_table
import torch
from DELIMIT.SphericalHarmonicTransformation import Signal2SH, SH2Signal
from DELIMIT.SphericalConvolution import LocalSphericalConvolution, SphericalConvolution
from DELIMIT.loss import MSESignal
###Output
_____no_output_____
###Markdown
Parameters that need to be set
###Code
num_gradients = 30
sh_order = 4
###Output
_____no_output_____
###Markdown
Signal Generation
###Code
theta = np.pi * np.random.rand(num_gradients)
phi = 2 * np.pi * np.random.rand(num_gradients)
hsph_initial = HemiSphere(theta=theta, phi=phi)
hsph_updated, potential = disperse_charges(hsph_initial, 5000)
gradients = hsph_updated.vertices
gtab = gradient_table(np.concatenate((np.zeros(1), np.ones(num_gradients)*1000)),
                      np.concatenate((np.zeros((1, 3)), gradients)))
mevals = np.array([[0.0015, 0.0003, 0.0003],
[0.0015, 0.0003, 0.0003]])
angles = [(0, 0), (60, 0)]
fractions = [50, 50]
signal, sticks = multi_tensor(gtab, mevals, S0=1, angles=angles,
fractions=fractions, snr=None)
###Output
_____no_output_____
###Markdown
Signal Domain to Spherical Harmonic Domain transformation
###Code
s2sh = Signal2SH(gradients=gradients, sh_order=sh_order, lb_lambda=0.006)
input_tensor = torch.from_numpy(signal[1:]).reshape(1, num_gradients, 1, 1, 1).float()
input_tensor_sh = s2sh(input_tensor)
print(input_tensor_sh.shape)
###Output
torch.Size([1, 15, 1, 1, 1])
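###Markdown
The 15 channels correspond to the spherical harmonic coefficients up to `sh_order = 4`. Assuming the real, even-degree basis that is standard for diffusion MRI signals, the count is $(L+1)(L/2+1)$, which can be checked directly:
###Code
# sanity check of the coefficient count (assumption: only even-degree real SH bands are kept)
L = sh_order
print(sum(2 * l + 1 for l in range(0, L + 1, 2)), (L + 1) * (L // 2 + 1))
###Output
_____no_output_____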
###Markdown
Local Spherical Convolution
###Code
lsc = LocalSphericalConvolution(shells_in=1, shells_out=3,
sh_order_in=sh_order, sh_order_out=sh_order, lb_lambda=0.006,
sampled_gradients=gradients, kernel_sizes=[5, 5],
angular_distance=(np.pi / 10))
lsc_tensor_sh = lsc(input_tensor_sh)
num_coefficients = int((sh_order + 1) * (sh_order / 2 + 1)) # just for visualization
print(lsc_tensor_sh.reshape(1, -1, num_coefficients, 1, 1, 1).shape)
###Output
torch.Size([1, 3, 15, 1, 1, 1])
###Markdown
Spherical Convolution
###Code
sc = SphericalConvolution(shells_in=3, shells_out=1, sh_order=sh_order)
sc_tensor_sh = sc(lsc_tensor_sh)
print(sc_tensor_sh.shape)
###Output
torch.Size([1, 15, 1, 1, 1])
###Markdown
Loss calculation
###Code
loss = MSESignal(sh_order=sh_order, gradients=gradients)
print(loss(sc_tensor_sh, input_tensor_sh, torch.from_numpy(np.ones(1)).reshape(1, 1, 1, 1)))
###Output
tensor(0.4749, grad_fn=<MeanBackward1>)
###Markdown
Spherical Harmonic Domain to Signal domain transformation
###Code
sh2s = SH2Signal(sh_order=sh_order, gradients=gradients)
output_signal = sh2s(sc_tensor_sh)
print(output_signal.shape)
###Output
torch.Size([1, 30, 1, 1, 1])
###Markdown
APE-Gen Example
###Code
%mkdir -p example
%cd example
%pwd
###Output
/Users/jayveeabella/kavrakilab/APE-Gen/example
###Markdown
1. Input Preparation Obtaining a receptor structure from PDB
###Code
# Run this cell if ...
# call get_pMHC_pdb script
###Output
_____no_output_____
###Markdown
Obtaining a receptor structure through sequence
###Code
from subprocess import call
call(["wget 'https://www.uniprot.org/uniprot/P01892.fasta'"], shell=True)
%pwd
%ls
import get_pMHC_pdb
get_pMHC_pdb.main(["3I6L"])
%ls
import model_receptor
model_receptor.main(["P01892.fasta", "3I6L.pdb"])
###Output
Removing signal and transmembrane portions of alpha chain seq
Length of original alpha chain seq: 365
Length of processed alpha chain seq: 270
Length of alpha chain in template: 274
Preparing receptor template
Preparing target sequence
Aligning target sequence with receptor template
Aligning took 22.750502109527588 seconds.
Creating model
0 atoms in HETATM/BLK residues constrained
to protein atoms within 2.30 angstroms
and protein CA atoms within 10.00 angstroms
0 atoms in residues without defined topology
constrained to be rigid bodies
>> Model assessment by DOPE potential
DOPE score : -40488.113281
>> Model assessment by DOPE potential
DOPE score : -40846.265625
>> Summary of successfully produced models:
Filename molpdf DOPE score GA341 score
----------------------------------------------------------------------
target_sequence.B99990001.pdb 2269.25073 -40488.11328 1.00000
target_sequence.B99990002.pdb 2202.93115 -40846.26562 1.00000
Top model: target_sequence.B99990002.pdb (DOPE score -40846.2656250000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000)
Homology modelling took 121.1713240146637 seconds.
###Markdown
2. Generate Models
###Code
import APE_Gen
APE_Gen.main(["LLWTLVVLL", "HLA-A*02:01", "-o"])
###Output
Preparing peptide and MHC
Peptide sequence: LLWTLVVLL
Receptor class: HLA-A*02:01
Aligning peptide anchors to MHC pockets
Sampling peptide backbone
Found RCD folder. Skipping this step
Loading sampled conformations
Num full confs: 39
Saving filtered peptide confs
Found peptide_confs.pdb, Please move to recompute.
Num filtered confs: 16
Average filtered energy: -21.0134997437
Saving complete peptide-HLA complexes
Found full_system_confs/ folder. Please move to recompute.
energy of selected binding mode: -23.2795639 38
Scoring/Minimizing with OpenMM ...
Found full_system_confs/openmm-minimized folder. Please move to recompute.
###Markdown
3. Postprocessing / Visualization
###Code
import nglview
import glob
import mdtraj as md
from matplotlib.colors import to_hex
import matplotlib as mpl
widget = nglview.NGLWidget()
widget.clear_representations()
structure = glob.glob("0/*.pdb")[0]
s = md.load(structure)
#widget.add_structure(s[0])
widget.add_trajectory(s)
#comp.add_licorice()
widget
###Output
_____no_output_____
###Markdown
Kriging example: Since the global database we used cannot be shared, we demonstrate how the codes can be used with freely available data from Assumpcao et al. (2013) for South America. For simplicity's sake we did not use two different categories here, but focused on the continental area instead, by simply discarding all points where the Moho depth is less than 30 km.
###Code
import numpy as np
import matplotlib.pyplot as plt
import clean_kriging
import sklearn.cluster as cluster
from func_dump import get_pairwise_geo_distance
import logging
logging.basicConfig(level=logging.DEBUG)
def test_cluster_size(point_data,max_size,do_plot=False,chosen_range=None,
perc_levels=20):
"""Test effect of number of clusters on cluster radius and size
"""
cluster_sizes = range(5,max_size,1)
radius_1 = np.zeros((len(cluster_sizes),3))
cluster_N = np.zeros((len(cluster_sizes),3))
percentages = np.zeros((len(cluster_sizes),perc_levels+1))
X = point_data
Xsel = X
pd = get_pairwise_geo_distance(Xsel[:,0],Xsel[:,1])
for k,n_clusters in enumerate(cluster_sizes):
model = cluster.AgglomerativeClustering(linkage='complete',
affinity='precomputed',
n_clusters=n_clusters)
model.fit(pd)
radius = np.zeros((n_clusters))
cluster_members = np.zeros((n_clusters))
for i,c in enumerate(np.unique(model.labels_)):
ix = np.where(model.labels_==c)[0]
radius[i] = 0.5*pd[np.ix_(ix,ix)].max()
cluster_members[i] = np.sum(model.labels_==c)
r1i,r1a,r1s = (radius.min(),radius.max(),radius.std())
radius_1[k,0] = r1i
radius_1[k,1] = r1a
radius_1[k,2] = np.median(radius)
percentages[k,:] = np.percentile(radius,np.linspace(0,100,perc_levels+1))
radius_1 = radius_1*110.0
percentages = percentages*110.0
if do_plot:
plt.plot(cluster_sizes,radius_1)
for i in range(perc_levels):
if i<perc_levels/2:
alpha = (i+1)*2.0/perc_levels
else:
alpha = (perc_levels-i)*2.0/perc_levels
plt.fill_between(cluster_sizes,percentages[:,i],percentages[:,i+1],
alpha=alpha,facecolor='green',edgecolor='none')
if not chosen_range is None:
return cluster_sizes[np.argmin(np.abs(radius_1[:,2]-chosen_range))]
def cluster_map(krigor):
"""Visualize distribution spatial distribution of a cluster
"""
fig = plt.figure(figsize=(7,11))
Xsel = krigor.X
model = krigor.cluster_results[0]
n_clusters = model.n_clusters
cmap = plt.cm.get_cmap("jet",n_clusters)
clu = model.cluster_centers_
pointsize = np.sqrt(np.bincount(model.labels_))
for i in range(len(Xsel)):
j = model.labels_[i]
if (Xsel[i,0]*clu[j,0])<0 and np.abs(np.abs(clu[j,0])-180.0) < 10.0:
continue
plt.plot((Xsel[i,0],clu[j,0]),(Xsel[i,1],clu[j,1]),
color=cmap(model.labels_[i]),alpha=0.5)
    print(clu.shape, n_clusters, pointsize.shape)
plt.scatter(clu[:,0],clu[:,1],7.5*pointsize,np.linspace(0,n_clusters,n_clusters),'s',
alpha=1.0,cmap=cmap,edgecolor='r',linewidth=1.5)
plt.scatter(Xsel[:,0],Xsel[:,1],2,model.labels_,cmap=cmap,alpha=1.0,edgecolor='k')
plt.axis('equal')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
#plt.xlim([-90,-20])
###Output
_____no_output_____
###Markdown
Data input: We load the file shipped together with this example. See inside the file for references to the data sources.
###Code
point_data = np.loadtxt("Seismic_Moho_Assumpcao.txt",delimiter=",")
point_data[:,2] = -0.001*point_data[:,2]
point_data = point_data[point_data[:,2]>30.0,:]
lon = np.arange(np.round(point_data[:,0].min()),np.round(point_data[:,0].max()+1),1)
lat = np.arange(np.round(point_data[:,1].min()),np.round(point_data[:,1].max()+1),1)
lonGrid,latGrid = np.meshgrid(lon,lat)
test_cluster_size(point_data,30,True)
###Output
_____no_output_____
###Markdown
Prior specificationWe want to use inverse gamma priors for nugget, sill and range. The inverse gamma distribution is defined in terms of the parameters $\alpha$ and $\beta$, which we derive here from a specified mean and variance. $$\mu = \mathrm{Mean} = \frac{\beta}{\alpha-1} \quad \text{and}\quad \sigma^2= \mathrm{var} = \frac{\beta^2}{(\alpha-1)^2(\alpha-2)}$$Thus,$$\alpha = 2 + \frac{\mu^2}{\sigma^2} \quad \text{and}\quad\beta = \frac{\mu^3}{\sigma^2} + \mu$$The variable `moments` contains mean and variance for all nugget, sill and range. The last dimension of `moments` would be used, if there are different categories (i.e. ocean vs. continent), but in this example this is not required.
###Code
moments = np.zeros((3,2,1))
moments[:,:,0] = np.array(((1.0,3.0**2),(40.0,40.0**2),(10.0,10.0**2)))
beta = moments[:,0,:]**3/moments[:,1,:]+moments[:,0,:]
alpha = 2 + moments[:,0,:]**2 / moments[:,1,:]
###Output
_____no_output_____
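###Markdown
As a quick sanity check (not part of the original workflow), the hyperparameters should reproduce the specified prior means and variances:
###Code
# inverse gamma: mean = beta/(alpha-1), var = beta^2/((alpha-1)^2 (alpha-2))
print((beta / (alpha - 1)).ravel())                        # expected: 1, 40, 10 (nugget, sill, range)
print((beta**2 / ((alpha - 1)**2 * (alpha - 2))).ravel())  # expected: 9, 1600, 100
###Output
_____no_output_____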
###Markdown
Clustering: All important routines are contained in objects of the class `MLEKrigor`. Such an object is created by passing it longitude, latitude, value and category. In this example, all category values are simply one. Any clustering algorithm from the scikit-learn package can be used. Any options contained in the list of dictionaries `clusterOptions` will be passed to the constructor. After clustering, the covariance parameters for all clusters are determined (`krigor._fit_all_clusters`).
###Code
cat = np.ones((point_data.shape[0]),dtype=int)
krigor = clean_kriging.MLEKrigor(point_data[:,0],point_data[:,1],point_data[:,2],cat)
clusterOptions=[{'linkage':'complete','affinity':'precomputed','n_clusters':16}]
krigor._cluster_points(cluster.AgglomerativeClustering,options=clusterOptions,use_pd=True)
krigor._detect_dupes()
krigor._fit_all_clusters(minNugget=0.5,minSill=1.0,
hyperpars=np.dstack((alpha,beta)),prior="inv_gamma",maxRange=None)
krigDict = {"threshold":1,"lambda_w":1.0,"minSill":1.0,
"minNugget":0.5,
"maxAbsError":4.0,"maxRelError":2.0,"badPoints":None,
"hyperPars":np.dstack((alpha,beta)),"prior":"inv_gamma",
"blocks":10}
cluster_map(krigor)
###Output
(16L, 2L) 16 (16L,)
###Markdown
In this map, the individual points are connected with lines to their respective cluster center. Outlier detection: This is the most time-consuming step. The routine `jacknife` performs the leave-one-out cross validation to detect possible outliers. Two criteria are used to determine whether a point is an outlier: 1. the **absolute** prediction error is 4 km or more, and 2. the prediction error is at least twice the estimated error (a small sketch of this rule follows after the log output below). This is controlled by the values `maxAbsErr` and `maxRelErr` passed to `jacknife`. The third parameter ($\lambda_w$) controls how the covariance parameters are interpolated. There are two rounds of outlier detection (see the main text for an explanation).
###Code
sigma1,new_chosen = krigor.jacknife(4.0,2.0,100.0)
krigor.chosen_points = new_chosen.copy()
krigor._fit_all_clusters(minNugget=0.5,minSill=1.0,
hyperpars=krigDict["hyperPars"],prior="inv_gamma",maxRange=None)
sigma2,new_new_chosen = krigor.jacknife(4.0,2.0,100.0)
krigor.chosen_points = new_new_chosen.copy()
krigor._fit_all_clusters(minNugget=0.5,minSill=1.0,
hyperpars=krigDict["hyperPars"],prior="inv_gamma",maxRange=None)
###Output
clean_kriging.py:119: RuntimeWarning: divide by zero encountered in log
return np.sum(-(hyperpars[:,0]+1)*np.log(vals) - hyperpars[:,1]/vals)
clean_kriging.py:119: RuntimeWarning: divide by zero encountered in divide
return np.sum(-(hyperpars[:,0]+1)*np.log(vals) - hyperpars[:,1]/vals)
clean_kriging.py:119: RuntimeWarning: invalid value encountered in subtract
return np.sum(-(hyperpars[:,0]+1)*np.log(vals) - hyperpars[:,1]/vals)
INFO:root:Jacknife category 0 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/146
DEBUG:root:Jacknife_kriging_all_chosen: 1/146
DEBUG:root:Jacknife_kriging_all_chosen: 2/146
DEBUG:root:Jacknife_kriging_all_chosen: 3/146
DEBUG:root:Jacknife_kriging_all_chosen: 4/146
DEBUG:root:Jacknife_kriging_all_chosen: 5/146
DEBUG:root:Jacknife_kriging_all_chosen: 6/146
DEBUG:root:Jacknife_kriging_all_chosen: 7/146
DEBUG:root:Jacknife_kriging_all_chosen: 8/146
DEBUG:root:Jacknife_kriging_all_chosen: 9/146
DEBUG:root:Jacknife_kriging_all_chosen: 10/146
DEBUG:root:Jacknife_kriging_all_chosen: 11/146
DEBUG:root:Jacknife_kriging_all_chosen: 12/146
DEBUG:root:Jacknife_kriging_all_chosen: 13/146
DEBUG:root:Jacknife_kriging_all_chosen: 14/146
DEBUG:root:Jacknife_kriging_all_chosen: 15/146
DEBUG:root:Jacknife_kriging_all_chosen: 16/146
DEBUG:root:Jacknife_kriging_all_chosen: 17/146
DEBUG:root:Jacknife_kriging_all_chosen: 18/146
DEBUG:root:Jacknife_kriging_all_chosen: 19/146
DEBUG:root:Jacknife_kriging_all_chosen: 20/146
DEBUG:root:Jacknife_kriging_all_chosen: 21/146
DEBUG:root:Jacknife_kriging_all_chosen: 22/146
DEBUG:root:Jacknife_kriging_all_chosen: 23/146
DEBUG:root:Jacknife_kriging_all_chosen: 24/146
DEBUG:root:Jacknife_kriging_all_chosen: 25/146
DEBUG:root:Jacknife_kriging_all_chosen: 26/146
DEBUG:root:Jacknife_kriging_all_chosen: 27/146
DEBUG:root:Jacknife_kriging_all_chosen: 28/146
DEBUG:root:Jacknife_kriging_all_chosen: 29/146
DEBUG:root:Jacknife_kriging_all_chosen: 30/146
DEBUG:root:Jacknife_kriging_all_chosen: 31/146
DEBUG:root:Jacknife_kriging_all_chosen: 32/146
DEBUG:root:Jacknife_kriging_all_chosen: 33/146
DEBUG:root:Jacknife_kriging_all_chosen: 34/146
DEBUG:root:Jacknife_kriging_all_chosen: 35/146
DEBUG:root:Jacknife_kriging_all_chosen: 36/146
DEBUG:root:Jacknife_kriging_all_chosen: 37/146
DEBUG:root:Jacknife_kriging_all_chosen: 38/146
DEBUG:root:Jacknife_kriging_all_chosen: 39/146
DEBUG:root:Jacknife_kriging_all_chosen: 40/146
DEBUG:root:Jacknife_kriging_all_chosen: 41/146
DEBUG:root:Jacknife_kriging_all_chosen: 42/146
DEBUG:root:Jacknife_kriging_all_chosen: 43/146
DEBUG:root:Jacknife_kriging_all_chosen: 44/146
DEBUG:root:Jacknife_kriging_all_chosen: 45/146
DEBUG:root:Jacknife_kriging_all_chosen: 46/146
DEBUG:root:Jacknife_kriging_all_chosen: 47/146
DEBUG:root:Jacknife_kriging_all_chosen: 48/146
DEBUG:root:Jacknife_kriging_all_chosen: 49/146
DEBUG:root:Jacknife_kriging_all_chosen: 50/146
DEBUG:root:Jacknife_kriging_all_chosen: 51/146
DEBUG:root:Jacknife_kriging_all_chosen: 52/146
DEBUG:root:Jacknife_kriging_all_chosen: 53/146
DEBUG:root:Jacknife_kriging_all_chosen: 54/146
DEBUG:root:Jacknife_kriging_all_chosen: 55/146
DEBUG:root:Jacknife_kriging_all_chosen: 56/146
DEBUG:root:Jacknife_kriging_all_chosen: 57/146
DEBUG:root:Jacknife_kriging_all_chosen: 58/146
DEBUG:root:Jacknife_kriging_all_chosen: 59/146
DEBUG:root:Jacknife_kriging_all_chosen: 60/146
DEBUG:root:Jacknife_kriging_all_chosen: 61/146
DEBUG:root:Jacknife_kriging_all_chosen: 62/146
DEBUG:root:Jacknife_kriging_all_chosen: 63/146
DEBUG:root:Jacknife_kriging_all_chosen: 64/146
DEBUG:root:Jacknife_kriging_all_chosen: 65/146
DEBUG:root:Jacknife_kriging_all_chosen: 66/146
DEBUG:root:Jacknife_kriging_all_chosen: 67/146
DEBUG:root:Jacknife_kriging_all_chosen: 68/146
DEBUG:root:Jacknife_kriging_all_chosen: 69/146
DEBUG:root:Jacknife_kriging_all_chosen: 70/146
DEBUG:root:Jacknife_kriging_all_chosen: 71/146
DEBUG:root:Jacknife_kriging_all_chosen: 72/146
DEBUG:root:Jacknife_kriging_all_chosen: 73/146
DEBUG:root:Jacknife_kriging_all_chosen: 74/146
DEBUG:root:Jacknife_kriging_all_chosen: 75/146
DEBUG:root:Jacknife_kriging_all_chosen: 76/146
DEBUG:root:Jacknife_kriging_all_chosen: 77/146
DEBUG:root:Jacknife_kriging_all_chosen: 78/146
DEBUG:root:Jacknife_kriging_all_chosen: 79/146
DEBUG:root:Jacknife_kriging_all_chosen: 80/146
DEBUG:root:Jacknife_kriging_all_chosen: 81/146
DEBUG:root:Jacknife_kriging_all_chosen: 82/146
DEBUG:root:Jacknife_kriging_all_chosen: 83/146
DEBUG:root:Jacknife_kriging_all_chosen: 84/146
DEBUG:root:Jacknife_kriging_all_chosen: 85/146
DEBUG:root:Jacknife_kriging_all_chosen: 86/146
DEBUG:root:Jacknife_kriging_all_chosen: 87/146
DEBUG:root:Jacknife_kriging_all_chosen: 88/146
DEBUG:root:Jacknife_kriging_all_chosen: 89/146
DEBUG:root:Jacknife_kriging_all_chosen: 90/146
DEBUG:root:Jacknife_kriging_all_chosen: 91/146
DEBUG:root:Jacknife_kriging_all_chosen: 92/146
DEBUG:root:Jacknife_kriging_all_chosen: 93/146
DEBUG:root:Jacknife_kriging_all_chosen: 94/146
DEBUG:root:Jacknife_kriging_all_chosen: 95/146
DEBUG:root:Jacknife_kriging_all_chosen: 96/146
DEBUG:root:Jacknife_kriging_all_chosen: 97/146
DEBUG:root:Jacknife_kriging_all_chosen: 98/146
DEBUG:root:Jacknife_kriging_all_chosen: 99/146
DEBUG:root:Jacknife_kriging_all_chosen: 100/146
DEBUG:root:Jacknife_kriging_all_chosen: 101/146
DEBUG:root:Jacknife_kriging_all_chosen: 102/146
DEBUG:root:Jacknife_kriging_all_chosen: 103/146
DEBUG:root:Jacknife_kriging_all_chosen: 104/146
DEBUG:root:Jacknife_kriging_all_chosen: 105/146
DEBUG:root:Jacknife_kriging_all_chosen: 106/146
DEBUG:root:Jacknife_kriging_all_chosen: 107/146
DEBUG:root:Jacknife_kriging_all_chosen: 108/146
DEBUG:root:Jacknife_kriging_all_chosen: 109/146
DEBUG:root:Jacknife_kriging_all_chosen: 110/146
DEBUG:root:Jacknife_kriging_all_chosen: 111/146
DEBUG:root:Jacknife_kriging_all_chosen: 112/146
DEBUG:root:Jacknife_kriging_all_chosen: 113/146
DEBUG:root:Jacknife_kriging_all_chosen: 114/146
DEBUG:root:Jacknife_kriging_all_chosen: 115/146
DEBUG:root:Jacknife_kriging_all_chosen: 116/146
DEBUG:root:Jacknife_kriging_all_chosen: 117/146
DEBUG:root:Jacknife_kriging_all_chosen: 118/146
DEBUG:root:Jacknife_kriging_all_chosen: 119/146
DEBUG:root:Jacknife_kriging_all_chosen: 120/146
DEBUG:root:Jacknife_kriging_all_chosen: 121/146
DEBUG:root:Jacknife_kriging_all_chosen: 122/146
DEBUG:root:Jacknife_kriging_all_chosen: 123/146
DEBUG:root:Jacknife_kriging_all_chosen: 124/146
DEBUG:root:Jacknife_kriging_all_chosen: 125/146
DEBUG:root:Jacknife_kriging_all_chosen: 126/146
DEBUG:root:Jacknife_kriging_all_chosen: 127/146
DEBUG:root:Jacknife_kriging_all_chosen: 128/146
DEBUG:root:Jacknife_kriging_all_chosen: 129/146
DEBUG:root:Jacknife_kriging_all_chosen: 130/146
DEBUG:root:Jacknife_kriging_all_chosen: 131/146
DEBUG:root:Jacknife_kriging_all_chosen: 132/146
DEBUG:root:Jacknife_kriging_all_chosen: 133/146
DEBUG:root:Jacknife_kriging_all_chosen: 134/146
DEBUG:root:Jacknife_kriging_all_chosen: 135/146
DEBUG:root:Jacknife_kriging_all_chosen: 136/146
DEBUG:root:Jacknife_kriging_all_chosen: 137/146
DEBUG:root:Jacknife_kriging_all_chosen: 138/146
DEBUG:root:Jacknife_kriging_all_chosen: 139/146
DEBUG:root:Jacknife_kriging_all_chosen: 140/146
DEBUG:root:Jacknife_kriging_all_chosen: 141/146
DEBUG:root:Jacknife_kriging_all_chosen: 142/146
DEBUG:root:Jacknife_kriging_all_chosen: 143/146
DEBUG:root:Jacknife_kriging_all_chosen: 144/146
DEBUG:root:Jacknife_kriging_all_chosen: 145/146
INFO:root:Jacknife category 1 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/8
DEBUG:root:Jacknife_kriging_all_chosen: 1/8
DEBUG:root:Jacknife_kriging_all_chosen: 2/8
DEBUG:root:Jacknife_kriging_all_chosen: 3/8
DEBUG:root:Jacknife_kriging_all_chosen: 4/8
DEBUG:root:Jacknife_kriging_all_chosen: 5/8
DEBUG:root:Jacknife_kriging_all_chosen: 6/8
DEBUG:root:Jacknife_kriging_all_chosen: 7/8
INFO:root:Jacknife category 2 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/18
DEBUG:root:Jacknife_kriging_all_chosen: 1/18
DEBUG:root:Jacknife_kriging_all_chosen: 2/18
DEBUG:root:Jacknife_kriging_all_chosen: 3/18
DEBUG:root:Jacknife_kriging_all_chosen: 4/18
DEBUG:root:Jacknife_kriging_all_chosen: 5/18
DEBUG:root:Jacknife_kriging_all_chosen: 6/18
DEBUG:root:Jacknife_kriging_all_chosen: 7/18
DEBUG:root:Jacknife_kriging_all_chosen: 8/18
DEBUG:root:Jacknife_kriging_all_chosen: 9/18
DEBUG:root:Jacknife_kriging_all_chosen: 10/18
DEBUG:root:Jacknife_kriging_all_chosen: 11/18
DEBUG:root:Jacknife_kriging_all_chosen: 12/18
DEBUG:root:Jacknife_kriging_all_chosen: 13/18
DEBUG:root:Jacknife_kriging_all_chosen: 14/18
DEBUG:root:Jacknife_kriging_all_chosen: 15/18
DEBUG:root:Jacknife_kriging_all_chosen: 16/18
DEBUG:root:Jacknife_kriging_all_chosen: 17/18
INFO:root:Jacknife category 3 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/125
DEBUG:root:Jacknife_kriging_all_chosen: 1/125
DEBUG:root:Jacknife_kriging_all_chosen: 2/125
DEBUG:root:Jacknife_kriging_all_chosen: 3/125
DEBUG:root:Jacknife_kriging_all_chosen: 4/125
DEBUG:root:Jacknife_kriging_all_chosen: 5/125
DEBUG:root:Jacknife_kriging_all_chosen: 6/125
DEBUG:root:Jacknife_kriging_all_chosen: 7/125
DEBUG:root:Jacknife_kriging_all_chosen: 8/125
DEBUG:root:Jacknife_kriging_all_chosen: 9/125
DEBUG:root:Jacknife_kriging_all_chosen: 10/125
DEBUG:root:Jacknife_kriging_all_chosen: 11/125
DEBUG:root:Jacknife_kriging_all_chosen: 12/125
DEBUG:root:Jacknife_kriging_all_chosen: 13/125
DEBUG:root:Jacknife_kriging_all_chosen: 14/125
DEBUG:root:Jacknife_kriging_all_chosen: 15/125
DEBUG:root:Jacknife_kriging_all_chosen: 16/125
DEBUG:root:Jacknife_kriging_all_chosen: 17/125
DEBUG:root:Jacknife_kriging_all_chosen: 18/125
DEBUG:root:Jacknife_kriging_all_chosen: 19/125
DEBUG:root:Jacknife_kriging_all_chosen: 20/125
DEBUG:root:Jacknife_kriging_all_chosen: 21/125
DEBUG:root:Jacknife_kriging_all_chosen: 22/125
DEBUG:root:Jacknife_kriging_all_chosen: 23/125
DEBUG:root:Jacknife_kriging_all_chosen: 24/125
DEBUG:root:Jacknife_kriging_all_chosen: 25/125
DEBUG:root:Jacknife_kriging_all_chosen: 26/125
DEBUG:root:Jacknife_kriging_all_chosen: 27/125
DEBUG:root:Jacknife_kriging_all_chosen: 28/125
DEBUG:root:Jacknife_kriging_all_chosen: 29/125
DEBUG:root:Jacknife_kriging_all_chosen: 30/125
DEBUG:root:Jacknife_kriging_all_chosen: 31/125
DEBUG:root:Jacknife_kriging_all_chosen: 32/125
DEBUG:root:Jacknife_kriging_all_chosen: 33/125
DEBUG:root:Jacknife_kriging_all_chosen: 34/125
DEBUG:root:Jacknife_kriging_all_chosen: 35/125
DEBUG:root:Jacknife_kriging_all_chosen: 36/125
DEBUG:root:Jacknife_kriging_all_chosen: 37/125
DEBUG:root:Jacknife_kriging_all_chosen: 38/125
DEBUG:root:Jacknife_kriging_all_chosen: 39/125
DEBUG:root:Jacknife_kriging_all_chosen: 40/125
DEBUG:root:Jacknife_kriging_all_chosen: 41/125
DEBUG:root:Jacknife_kriging_all_chosen: 42/125
DEBUG:root:Jacknife_kriging_all_chosen: 43/125
DEBUG:root:Jacknife_kriging_all_chosen: 44/125
DEBUG:root:Jacknife_kriging_all_chosen: 45/125
DEBUG:root:Jacknife_kriging_all_chosen: 46/125
DEBUG:root:Jacknife_kriging_all_chosen: 47/125
DEBUG:root:Jacknife_kriging_all_chosen: 48/125
DEBUG:root:Jacknife_kriging_all_chosen: 49/125
DEBUG:root:Jacknife_kriging_all_chosen: 50/125
DEBUG:root:Jacknife_kriging_all_chosen: 51/125
DEBUG:root:Jacknife_kriging_all_chosen: 52/125
DEBUG:root:Jacknife_kriging_all_chosen: 53/125
DEBUG:root:Jacknife_kriging_all_chosen: 54/125
DEBUG:root:Jacknife_kriging_all_chosen: 55/125
DEBUG:root:Jacknife_kriging_all_chosen: 56/125
DEBUG:root:Jacknife_kriging_all_chosen: 57/125
DEBUG:root:Jacknife_kriging_all_chosen: 58/125
DEBUG:root:Jacknife_kriging_all_chosen: 59/125
DEBUG:root:Jacknife_kriging_all_chosen: 60/125
DEBUG:root:Jacknife_kriging_all_chosen: 61/125
DEBUG:root:Jacknife_kriging_all_chosen: 62/125
DEBUG:root:Jacknife_kriging_all_chosen: 63/125
DEBUG:root:Jacknife_kriging_all_chosen: 64/125
DEBUG:root:Jacknife_kriging_all_chosen: 65/125
DEBUG:root:Jacknife_kriging_all_chosen: 66/125
DEBUG:root:Jacknife_kriging_all_chosen: 67/125
DEBUG:root:Jacknife_kriging_all_chosen: 68/125
DEBUG:root:Jacknife_kriging_all_chosen: 69/125
DEBUG:root:Jacknife_kriging_all_chosen: 70/125
DEBUG:root:Jacknife_kriging_all_chosen: 71/125
DEBUG:root:Jacknife_kriging_all_chosen: 72/125
DEBUG:root:Jacknife_kriging_all_chosen: 73/125
DEBUG:root:Jacknife_kriging_all_chosen: 74/125
DEBUG:root:Jacknife_kriging_all_chosen: 75/125
DEBUG:root:Jacknife_kriging_all_chosen: 76/125
DEBUG:root:Jacknife_kriging_all_chosen: 77/125
DEBUG:root:Jacknife_kriging_all_chosen: 78/125
DEBUG:root:Jacknife_kriging_all_chosen: 79/125
DEBUG:root:Jacknife_kriging_all_chosen: 80/125
DEBUG:root:Jacknife_kriging_all_chosen: 81/125
DEBUG:root:Jacknife_kriging_all_chosen: 82/125
DEBUG:root:Jacknife_kriging_all_chosen: 83/125
DEBUG:root:Jacknife_kriging_all_chosen: 84/125
DEBUG:root:Jacknife_kriging_all_chosen: 85/125
DEBUG:root:Jacknife_kriging_all_chosen: 86/125
DEBUG:root:Jacknife_kriging_all_chosen: 87/125
DEBUG:root:Jacknife_kriging_all_chosen: 88/125
DEBUG:root:Jacknife_kriging_all_chosen: 89/125
DEBUG:root:Jacknife_kriging_all_chosen: 90/125
DEBUG:root:Jacknife_kriging_all_chosen: 91/125
DEBUG:root:Jacknife_kriging_all_chosen: 92/125
DEBUG:root:Jacknife_kriging_all_chosen: 93/125
DEBUG:root:Jacknife_kriging_all_chosen: 94/125
DEBUG:root:Jacknife_kriging_all_chosen: 95/125
DEBUG:root:Jacknife_kriging_all_chosen: 96/125
DEBUG:root:Jacknife_kriging_all_chosen: 97/125
DEBUG:root:Jacknife_kriging_all_chosen: 98/125
DEBUG:root:Jacknife_kriging_all_chosen: 99/125
DEBUG:root:Jacknife_kriging_all_chosen: 100/125
DEBUG:root:Jacknife_kriging_all_chosen: 101/125
DEBUG:root:Jacknife_kriging_all_chosen: 102/125
DEBUG:root:Jacknife_kriging_all_chosen: 103/125
DEBUG:root:Jacknife_kriging_all_chosen: 104/125
DEBUG:root:Jacknife_kriging_all_chosen: 105/125
DEBUG:root:Jacknife_kriging_all_chosen: 106/125
DEBUG:root:Jacknife_kriging_all_chosen: 107/125
DEBUG:root:Jacknife_kriging_all_chosen: 108/125
DEBUG:root:Jacknife_kriging_all_chosen: 109/125
DEBUG:root:Jacknife_kriging_all_chosen: 110/125
DEBUG:root:Jacknife_kriging_all_chosen: 111/125
DEBUG:root:Jacknife_kriging_all_chosen: 112/125
DEBUG:root:Jacknife_kriging_all_chosen: 113/125
DEBUG:root:Jacknife_kriging_all_chosen: 114/125
DEBUG:root:Jacknife_kriging_all_chosen: 115/125
DEBUG:root:Jacknife_kriging_all_chosen: 116/125
DEBUG:root:Jacknife_kriging_all_chosen: 117/125
DEBUG:root:Jacknife_kriging_all_chosen: 118/125
DEBUG:root:Jacknife_kriging_all_chosen: 119/125
DEBUG:root:Jacknife_kriging_all_chosen: 120/125
DEBUG:root:Jacknife_kriging_all_chosen: 121/125
DEBUG:root:Jacknife_kriging_all_chosen: 122/125
DEBUG:root:Jacknife_kriging_all_chosen: 123/125
DEBUG:root:Jacknife_kriging_all_chosen: 124/125
INFO:root:Jacknife category 4 label 1
DEBUG:root:Jacknife_kriging_all_chosen: 0/40
DEBUG:root:Jacknife_kriging_all_chosen: 1/40
DEBUG:root:Jacknife_kriging_all_chosen: 2/40
DEBUG:root:Jacknife_kriging_all_chosen: 3/40
DEBUG:root:Jacknife_kriging_all_chosen: 4/40
DEBUG:root:Jacknife_kriging_all_chosen: 5/40
DEBUG:root:Jacknife_kriging_all_chosen: 6/40
DEBUG:root:Jacknife_kriging_all_chosen: 7/40
DEBUG:root:Jacknife_kriging_all_chosen: 8/40
DEBUG:root:Jacknife_kriging_all_chosen: 9/40
DEBUG:root:Jacknife_kriging_all_chosen: 10/40
DEBUG:root:Jacknife_kriging_all_chosen: 11/40
DEBUG:root:Jacknife_kriging_all_chosen: 12/40
DEBUG:root:Jacknife_kriging_all_chosen: 13/40
DEBUG:root:Jacknife_kriging_all_chosen: 14/40
DEBUG:root:Jacknife_kriging_all_chosen: 15/40
DEBUG:root:Jacknife_kriging_all_chosen: 16/40
DEBUG:root:Jacknife_kriging_all_chosen: 17/40
DEBUG:root:Jacknife_kriging_all_chosen: 18/40
DEBUG:root:Jacknife_kriging_all_chosen: 19/40
DEBUG:root:Jacknife_kriging_all_chosen: 20/40
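###Markdown
As a small illustration of the rule described above (assuming both criteria must hold simultaneously; this is a sketch, not the package's internal code), the per-point decision could be written as:
###Code
import numpy as np
def is_outlier(pred, obs, sigma, max_abs_err=4.0, max_rel_err=2.0):
    # flag a point only if the leave-one-out error is large in absolute terms
    # AND large relative to the estimated kriging standard error
    err = np.abs(pred - obs)
    return (err >= max_abs_err) & (err >= max_rel_err * sigma)
###Output
_____no_output_____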
###Markdown
Interpolation: To run the actual interpolation, the `predict` method of the `MLEKrigor` object is used. It takes longitude, latitude and category as main inputs. In addition, $\lambda_w$ needs to be specified; this mainly affects the obtained uncertainties. If desired, the full covariance matrix can also be calculated, but due to memory constraints only the variance (main diagonal) is computed by default. Note that `predict` does not respect the shape of the input points, so the outputs need to be reshaped. Furthermore, the **variance** of the error is returned (to be compatible with the full covariance case), not the standard deviation!
###Code
cat_grid = np.ones(lonGrid.shape,dtype=int)
pred,krigvar,predPars = krigor.predict(lonGrid.flatten(),latGrid.flatten(),cat_grid.flatten(),
lambda_w=1000.0,get_covar=False)
pred = pred.reshape(lonGrid.shape)
krigvar = krigvar.reshape(lonGrid.shape)
plt.figure()
plt.contourf(lonGrid,latGrid,pred)
cbar = plt.colorbar()
cbar.set_label('Moho depth [km]')
plt.axis('equal')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.figure()
plt.contourf(lonGrid,latGrid,np.sqrt(krigvar))
cbar = plt.colorbar()
cbar.set_label('Moho uncertainty [km]')
plt.axis('equal')
###Output
_____no_output_____
###Markdown
Using equations with LaTeX notation in a markdown cell: The well-known Pythagorean theorem $x^2 + y^2 = z^2$ has no analogue with positive integer solutions for higher exponents (Fermat's Last Theorem), meaning the next equation has no positive integer solutions for $n > 2$:\begin{equation} x^n + y^n = z^n \end{equation}\begin{equation}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(t)\, e^{-i \omega t}\, \mathrm{d}t\end{equation}\begin{equation}\left[ -\frac{\hbar^2}{2m} \nabla^2 + V(x) \right] \psi (x) = E \psi (x)\end{equation}
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
title='About as simple as it gets, folks')
ax.grid()
fig.savefig("test.png")
plt.show()
###Output
_____no_output_____
###Markdown
Laplacian Score-regularized Concrete Autoencoders Demo. Let's import some tools:
###Code
from pathlib import Path
from torch.utils import data
from scipy.stats import uniform
from sklearn.datasets import make_moons
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from omegaconf import OmegaConf
import numpy as np
import torch
from torch.utils import data
###Output
_____no_output_____
###Markdown
Do not forget to import lscae
###Code
import lscae
###Output
_____no_output_____
###Markdown
The default config can be found in src/config.yaml, but you can also pass the arguments explicitly, as shown here:
###Code
cfg = OmegaConf.create({
"input_dim": None, # Dimension of input dataset (total #features)
"k_selected": 2, # Number of selected features
"decoder_lr": 1e-3, # Decoder learning rate
"selector_lr": 1e-1, # Concrete layer learning rate
"min_lr": 1e-5, # Minimal layer learning rate
"weight_decay": 0, # l2 weight penalty
"batch_size": 64, # Minibatch size
"hidden_dim": 128, # Hidden layers size
"model": 'lscae', # lscae | cae | ls
"scale_k": 2, # Number of neighbors for computation of local scales for the kernel
"laplacian_k": 50, # Number of neighbors of each pooint, used for computation of the Laplacian
"start_temp": 10, # Initial temperature
"min_temp": 1e-2, # Final temperature
"rec_lambda": .5, # Balance between reconstruction and LS terms
"num_epochs": 300, # Number of training epochs
"verbose": True # Whether to print to console during training
})
###Output
_____no_output_____
###Markdown
Read a dataset / build a demo two-moons dataset
###Code
from scipy.stats import uniform
from sklearn.datasets import make_moons
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import numpy as np
def create_twomoon_dataset(n=1200, d=10, noise=0.1):
    """
    Creates two moon clusters in 2D and appends d uniformly distributed nuisance features
    n: number of samples (int)
    d: number of nuisance dimensions (int)
    noise: noise level of the two moons (double)
    """
relevant, y = make_moons(n_samples=n, shuffle=True, noise=noise, random_state=None)
nuisance = uniform.rvs(size=[n, d])
data = np.concatenate([relevant, nuisance], axis=1)
scaler = StandardScaler()
data = scaler.fit_transform(data)
plt.scatter(data[:, 0], data[:, 1])
plt.show()
return data
X = create_twomoon_dataset()
# You can load your own dataset as below in numpy format
# path = Path(args.data_dir, args.filename)
# X = np.load(path)
# print('Data shape: ', X.shape)
dataset = data.TensorDataset(torch.Tensor(X))
loader = torch.utils.data.DataLoader(dataset, batch_size=cfg.batch_size, shuffle=True, drop_last=True)
cfg.input_dim = X.shape[1]
lscae_model = lscae.Lscae(kwargs=cfg)
selected_features = lscae_model.select_features(loader)
###Output
Epoch 1\300, loss: -0.000, ls loss: -0.00069, recon loss: 0.961
Selection probs:
[0.08895258 0.09500632 0.08862478 0.08452889 0.09643977 0.08996537
0.09198611 0.08099049 0.09697609 0.09512109 0.09342799 0.08812779]
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 6\300, loss: 0.000, ls loss: -0.00085, recon loss: 0.834
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 11\300, loss: -0.000, ls loss: -0.00107, recon loss: 0.829
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 16\300, loss: -0.000, ls loss: -0.00141, recon loss: 0.830
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 21\300, loss: -0.000, ls loss: -0.00193, recon loss: 0.828
Selection probs:
[0.5231066 0.49032244 0.05844297 0.06868261 0.12676029 0.05727994
0.06390956 0.08747183 0.11194766 0.15766361 0.10915606 0.14120847]
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 26\300, loss: 0.000, ls loss: -0.00280, recon loss: 0.822
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 31\300, loss: -0.000, ls loss: -0.00674, recon loss: 0.822
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 36\300, loss: -0.000, ls loss: -0.02289, recon loss: 0.811
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 41\300, loss: -0.000, ls loss: -0.05022, recon loss: 0.814
Selection probs:
[9.9994719e-01 9.9994564e-01 2.8626307e-06 3.5599148e-06 1.8559253e-05
2.0758216e-06 2.7501924e-06 5.8499304e-06 1.8195131e-05 2.1477214e-05
9.5566738e-06 2.2326019e-05]
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 46\300, loss: -0.000, ls loss: -0.09505, recon loss: 0.817
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 51\300, loss: -0.000, ls loss: -0.14985, recon loss: 0.821
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 56\300, loss: -0.000, ls loss: -0.21413, recon loss: 0.824
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 61\300, loss: -0.000, ls loss: -0.29676, recon loss: 0.826
Selection probs:
[1.0000000e+00 1.0000000e+00 1.2572693e-09 1.5553744e-09 1.0513823e-08
6.1067196e-10 9.6102237e-10 1.8712045e-09 1.0610825e-08 8.9010719e-09
4.6905440e-09 1.0042603e-08]
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 66\300, loss: 0.000, ls loss: -0.36053, recon loss: 0.830
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 71\300, loss: 0.000, ls loss: -0.43887, recon loss: 0.830
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 76\300, loss: -0.000, ls loss: -0.52814, recon loss: 0.833
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 81\300, loss: -0.000, ls loss: -0.59740, recon loss: 0.832
Selection probs:
[1.0000000e+00 1.0000000e+00 1.6753903e-12 2.0609616e-12 1.4637262e-11
1.0140303e-12 1.7439189e-12 3.1174399e-12 1.5125221e-11 1.6348867e-11
6.5527093e-12 1.8924222e-11]
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 86\300, loss: -0.000, ls loss: -0.65538, recon loss: 0.837
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 91\300, loss: 0.000, ls loss: -0.71276, recon loss: 0.833
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
Epoch 96\300, loss: 0.000, ls loss: -0.74690, recon loss: 0.834
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0010000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 101\300, loss: 0.000, ls loss: -0.78312, recon loss: 0.827
Selection probs:
[1.00000000e+00 1.00000000e+00 7.51085883e-15 9.36008168e-15
7.02029513e-14 5.07903910e-15 9.29057161e-15 1.58797570e-14
7.33698516e-14 9.13632234e-14 3.12543231e-14 1.06493014e-13]
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 106\300, loss: 0.000, ls loss: -0.82260, recon loss: 0.826
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 111\300, loss: 0.000, ls loss: -0.84325, recon loss: 0.828
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 116\300, loss: -0.000, ls loss: -0.86038, recon loss: 0.828
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 121\300, loss: 0.000, ls loss: -0.86533, recon loss: 0.831
Selection probs:
[1.0000000e+00 1.0000000e+00 1.7292420e-16 2.2049444e-16 2.0405953e-15
1.1309326e-16 2.2098788e-16 3.9805636e-16 2.1436710e-15 2.7553751e-15
8.3586598e-16 3.2625791e-15]
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 126\300, loss: 0.000, ls loss: -0.88300, recon loss: 0.829
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 131\300, loss: -0.000, ls loss: -0.89127, recon loss: 0.831
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 136\300, loss: -0.000, ls loss: -0.90565, recon loss: 0.828
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 141\300, loss: 0.000, ls loss: -0.89796, recon loss: 0.831
Selection probs:
[1.0000000e+00 1.0000000e+00 1.7344925e-18 2.2772789e-18 2.7555615e-17
1.0802542e-18 2.2921299e-18 4.4247574e-18 2.9133079e-17 3.8705343e-17
1.0144920e-17 4.6766519e-17]
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
LS-CAE learning rate = 0.0001000
Epoch 146\300, loss: -0.000, ls loss: -0.91300, recon loss: 0.829
LS-CAE learning rate = 0.0001000
###Markdown
Load data
###Code
x = np.load('./data/func.npz')['fc']
df = pd.read_csv('./data/covariates.csv', index_col=0)
print(x.shape, df.shape)
df.head()
###Output
(297, 4950) (297, 4)
###Markdown
Target variable
###Code
y = df.group.replace({'ASD': 1, 'TD': 0}).to_numpy().astype(int)
# and remove it from covariates
df.drop('group', axis=1, inplace=True)
y.shape
###Output
_____no_output_____
###Markdown
Matching and stratification
###Code
df_match, x, y = prepare_data(x, y, df, site_col='site', cat=['sex'],
n_strata=10, caliper=0.2, dec=None) # No deconfounding used
print(df_match.shape, x.shape, y.shape)
df_match.head()
###Output
(290, 4) (290, 4950) (290,)
###Markdown
Predictive model
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
# Classifier - no parameter tuning here
clf = Pipeline([
('ss', StandardScaler()),
('clf', LogisticRegression(C=1, penalty='l2', random_state=0))]
)
###Output
_____no_output_____
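###Markdown
If hyper-parameter tuning were desired, a minimal sketch (hypothetical grid values, not part of the original analysis) could wrap the same pipeline in a grid search:
###Code
from sklearn.model_selection import GridSearchCV
# 'clf__C' targets the LogisticRegression step named 'clf' in the pipeline above
param_grid = {'clf__C': [0.01, 0.1, 1, 10]}
tuned_clf = GridSearchCV(clf, param_grid, scoring='roc_auc', cv=3)
# tuned_clf.fit(x, y); clf = tuned_clf.best_estimator_
###Output
_____no_output_____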
###Markdown
Evaluation
###Code
# Params chosen for fast execution
df_strata, coef, score = evaluate_diversity(clf, x, y, df_match, n_train_strata=7,
n_splits=2, n_jobs=6, verbose=True)
###Output
Within-distribution elapsed time: 11.94s
Out-of-distribution elapsed time: 12.48s
###Markdown
Gather results
###Code
# Within-distribution predition performance (ROC-AUC)
div_wd = df_strata.train.intra_ps
auc_wd = score['default']['cv']['auc']
wd = pd.DataFrame(np.c_[div_wd, auc_wd], columns=['Div', 'ROC-AUC'])
# Out-of-distribution prediction performance (ROC-AUC)
div_ood = np.concatenate(df_strata.test.inter_ps_strata)
auc_ood = score['default']['test'].loc['strata', 'auc']
ood = pd.DataFrame(np.c_[div_ood, auc_ood], columns=['Div', 'ROC-AUC'])
scores = pd.concat([wd, ood], keys=['WD', 'OOD'], names=['perf'])
scores.reset_index(0, inplace=True)
scores.head()
###Output
_____no_output_____
###Markdown
Plot results
###Code
import seaborn as sns
sns.lmplot(x='Div', y='ROC-AUC', data=scores, col='perf', sharex=False,
sharey=False, truncate=False)
###Output
_____no_output_____
###Markdown
Create sqlalchemy model code (without alembic support)
###Code
from modelgen import create_model
create_model('example', filepath='modelgen/templates/example.yaml')
###Output
16-Apr-21 11:49:05 - Reading file modelgen/templates/example.yaml and converting it to python dict
16-Apr-21 11:49:05 - Reading file modelgen/templates/example.yaml and converting it to python dict
16-Apr-21 11:49:05 - Creating schema from YAML
16-Apr-21 11:49:05 - Getting structure from table example_user_table
16-Apr-21 11:49:05 - Getting structure from table example_meta_table
###Markdown
Create sqlalchemy model code with alembic support
###Code
from modelgen import create_model
create_model('userinfo', alembic=True)
###Output
10-Apr-21 02:45:24 - Reading file /Users/shrinivasdeshmukh/Desktop/personal_projects/sqlalchemy-modelgen/templates/userinfo.yaml and converting it to python dict
10-Apr-21 02:45:24 - Creating schema from YAML
10-Apr-21 02:45:24 - Getting structure from table userinfo
10-Apr-21 02:45:24 - Getting structure from table orders
###Markdown
1. `alembic init ./scripts` (RUN THIS IN THE CLI/TERMINAL) 2. Edit the file scripts/env.py: on `line 7` add `from metadata import metadata` AND on `line 67` add `compare_type=True` (a commented sketch of these edits is included in the code cell below) 3. Edit the file `alembic.ini` and add your sqlalchemy connection url on line 42. If you are using the docker-compose.yaml from this repository, `line 42 of alembic.ini` would be: `sqlalchemy.url = mysql+mysqlconnector://root:example@localhost:3306/testdb` RUN THE FOLLOWING COMMANDS FROM YOUR TERMINAL 4. `alembic revision --autogenerate -m 'Initial Migration'` 5. `alembic upgrade head` Alter the schema: To change the table schema, edit the YAML file and change the column datatypes to your desired types. Once that is done, run the following code:
###Code
from modelgen import create_model
create_model('userinfo', alembic=True)
# FROM YOUR CLI, RUN:
#. alembic revision --autogenerate -m 'YOUR MESSAGE'
#. alembic upgrade head
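# For reference, the two edits to scripts/env.py from step 2 look roughly like this
# (line numbers and surrounding code depend on your alembic version; sketch only):
#
#   from metadata import metadata          # near the top of env.py
#   target_metadata = metadata
#   ...
#   context.configure(connection=connection,
#                     target_metadata=target_metadata,
#                     compare_type=True)    # detect column type changes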
import os
os.path.isabs('/modelgen')
a = 'po/sb/ghj/mmmm'
os.path.join(*(a.split('/')[:-1]))
import modelgen
p = modelgen.__file__
os.path.join('/',*(p.split('/')[:-1]))
from yaml import safe_load
from modelgen.validator.schema import table_key_schema, columns_key_schema, columns_value_schema
from cerberus import Validator
with open('modelgen/templates/example.yaml', 'r') as f:
data = safe_load(f)
tab_schema = safe_load(table_key_schema)
colk_schema = safe_load(columns_key_schema)
colv_schema = safe_load(columns_value_schema)
v = Validator()
v.validate(data, tab_schema)
print("v1", v.errors)
v.validate(data['tables']['example_user_table'], colk_schema)
print("v2", v.errors)
cnt = 0
err_dict = dict()
for table_name, table_data in data['tables'].items():
    # validate the column-key schema once per table and collect any errors
    err_list = list()
    v.validate(table_data, colk_schema)
    if bool(v.errors):
        err_list.append(v.errors)
    if bool(err_list):
        err_dict.update({table_name: err_list})
print("col", err_dict)
err_dict = dict()
for table_name, table_data in data['tables'].items():
err_list = list()
for i in table_data['columns']:
v.validate(i, colv_schema)
if bool(v.errors):
err_list.append(v.errors)
cnt += 1
if bool(err_list):
err_dict.update({table_name: err_list})
print("val", err_dict)
if err_dict:
raise ValidationError('', err_dict)
err_dict
print(v.document)
print(v.errors)
class ValidationError(ValueError):
def __init__(self, message, errors):
err_str = str()
if isinstance(errors, dict):
for table_name, error in errors.items():
err_str += f'\ntable = {table_name}, error = {error}'
message = f'{message} {err_str}'
super().__init__(message)
from queue import Queue
q = Queue()
q.put({"abc"})
###Output
_____no_output_____
###Markdown
Load the required libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import obspy
from obspy.signal.konnoohmachismoothing import konno_ohmachi_smoothing
import pykooh
%matplotlib inline
plt.rcParams['figure.dpi'] = 150
###Output
_____no_output_____
###Markdown
Comparison between `obspy` and `pykooh`: Load an example time series.
###Code
trace = obspy.read('tests/data/example_ts.mseed').traces[0]
trace.plot();
###Output
_____no_output_____
###Markdown
Compute the Fourier amplitudes and apply the smoothing operators.
###Code
fourier_amps = np.abs(np.fft.rfft(trace.data))
freqs = np.fft.rfftfreq(len(trace), d=trace.stats['delta'])
b = 188.5
ko_amps = konno_ohmachi_smoothing(fourier_amps, freqs, b, normalize=True)
pyko_amps = pykooh.smooth(freqs, freqs, fourier_amps, b)
###Output
_____no_output_____
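###Markdown
For reference, the window applied at each centre frequency $f_c$ is the Konno-Ohmachi kernel, commonly written as $W(f) = \left[\sin\big(b\log_{10}(f/f_c)\big) \big/ \big(b\log_{10}(f/f_c)\big)\right]^4$. A minimal sketch of this window (independent of both libraries) is:
###Code
import numpy as np
def ko_window(freqs, fc, b=188.5):
    # Konno-Ohmachi smoothing window; W(fc) = 1 in the limit, W(0) set to 0
    freqs = np.asarray(freqs, dtype=float)
    w = np.zeros_like(freqs)
    ok = (freqs > 0) & ~np.isclose(freqs, fc)
    x = b * np.log10(freqs[ok] / fc)
    w[ok] = (np.sin(x) / x) ** 4
    w[np.isclose(freqs, fc)] = 1.0
    return w
# normalized smoothing at fc is then a weighted average:
# np.sum(ko_window(freqs, fc) * fourier_amps) / np.sum(ko_window(freqs, fc))
###Output
_____no_output_____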
###Markdown
Plot the smoothing from `obspy` and `pykooh`.
###Code
fig, ax = plt.subplots()
ax.plot(freqs, fourier_amps, label='Original', linewidth=0.5)
ax.plot(freqs, ko_amps, label='Smoothed (Obspy)')
ax.plot(freqs, pyko_amps, label='Smoothed (pyko)', linestyle='--')
ax.set(
xlabel='Frequency (Hz)', xscale='log', xlim=(0.1, 50),
ylabel='Fourier Ampl. (cm/s)', yscale='log'
)
ax.legend()
fig;
###Output
_____no_output_____
###Markdown
Save data to be used for test cases.
###Code
np.savez(
'tests/data/test_data.npz',
freqs=freqs, fourier_amps=fourier_amps,
ko_amps=pyko_amps, b=b
)
###Output
_____no_output_____
###Markdown
Calculation time
###Code
%time _ = konno_ohmachi_smoothing(fourier_amps, freqs, b, normalize=True)
%time _ = pykooh.smooth(freqs, freqs, fourier_amps, b, use_cython=True)
###Output
CPU times: user 9.91 ms, sys: 0 ns, total: 9.91 ms
Wall time: 9.89 ms
###Markdown
Call once to compile the `numba` functions.
###Code
pykooh.smooth(freqs, freqs, fourier_amps, b, use_cython=False)
%time _ = pykooh.smooth(freqs, freqs, fourier_amps, b, use_cython=False)
###Output
CPU times: user 3.96 s, sys: 72 µs, total: 3.96 s
Wall time: 3.97 s
###Markdown
`Cython` and `numba` implementations provide very similar speed ups. Effective amplitude calculation
###Code
def read_at2(fname):
with open(fname) as fp:
for _ in range(3):
next(fp)
time_step = float(next(fp).split()[3])
accels = np.array([p for l in fp for p in l.split()]).astype(float)
return time_step, accels
time_step, accels_h1 = read_at2('./tests/data/RSN4863_CHUETSU_65036EW.AT2')
accels_h2 = read_at2('./tests/data/RSN4863_CHUETSU_65036NS.AT2')[1]
accels = np.c_[accels_h1, accels_h2]
fourier_amps = np.fft.rfft(accels, axis=0)
freqs = np.fft.rfftfreq(accels.shape[0], d=time_step)
freqs_ea, eff_ampl = pykooh.effective_ampl(freqs, fourier_amps[:, 0], fourier_amps[:, 1], missing='nan')
###Output
_____no_output_____
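###Markdown
For orientation, the quantity being smoothed is (under the usual definition of the effective amplitude spectrum, which we assume `pykooh` follows) the root-mean-square of the two horizontal Fourier amplitude spectra:
###Code
# raw (unsmoothed) effective amplitude, assuming EAS(f) = sqrt((|H1|^2 + |H2|^2)/2);
# pykooh additionally smooths it and returns it on its own frequency array freqs_ea
raw_eas = np.sqrt(0.5 * (np.abs(fourier_amps[:, 0]) ** 2 + np.abs(fourier_amps[:, 1]) ** 2))
###Output
_____no_output_____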
###Markdown
Mask out the missing values.
###Code
mask = ~np.isnan(eff_ampl)
###Output
_____no_output_____
###Markdown
Create a little comparison plot.
###Code
to_cmps = 981.
fig, ax = plt.subplots()
ax.plot(freqs, np.abs(fourier_amps) * to_cmps, linewidth=0.5)
ax.plot(freqs_ea[mask], eff_ampl[mask] * to_cmps, label='EAS')
ax.set(
xlabel='Frequency (Hz)', xscale='log',
ylabel='Fourier Ampl. (cm/s)', yscale='log'
)
ax.legend(
ax.get_lines(),
['FAS, EW', 'FAS, NS', 'EAS'],
)
fig;
###Output
_____no_output_____
###Markdown
Crop Image Interactively: With **img_crop**, we are able to crop an image **interactively**. **img_crop** simply takes the image as a numpy array.
###Code
from PIL import Image
from IPython.display import display
import numpy
# img_crop is provided by the package this notebook demonstrates
path = 'Data/rabbit.jpeg'
img = Image.open(path).convert("RGB")
img_array = numpy.asarray(img)
display(img)
newIm = img_crop(img_array)
img = Image.fromarray(newIm, 'RGB')
display(img)
###Output
_____no_output_____
###Markdown
Checking the original periodogram -
###Code
bm.plot()
###Output
_____no_output_____
###Markdown
Let's prewhiten freqs between 15 and 24
###Code
bm.run(steps=100, fmin=15, fmax=24)
bm.plot()
###Output
_____no_output_____
###Markdown
ExamplesYou can produce simple colored and labelled graphs.
###Code
[egal drawing cell: SVG diagram of a lightgreen "Sender" node and a lightblue "Receiver" node connected through an orange "Channel" box by arrows, with the labels p(y) and p(x|y); raw SVG markup omitted.]
###Output
_____no_output_____
###Markdown
Latex You can even use LaTeX to label nodes in the graph (with several caveats, including the inability to render the converted LaTeX in static views).
###Code
[egal drawing cell: SVG diagram of a small network, with four green input nodes connected to a box labelled $$s(x) = \frac{1}{1+ e^{-Wx}}$$ that in turn connects to three red output nodes; raw SVG markup omitted.]
###Output
_____no_output_____
###Markdown
`interp-acf` demo Generate time-series fluxes with two oscillation periods and missing data:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Make flux time-series with random noise, and
# two periodic oscillations, one 70% the amplitude
# of the other:
np.random.seed(42)
n_points = 1000
primary_period = 2.5*np.pi
secondary_period = 1.3*np.pi
all_times = np.linspace(0, 6*np.pi, n_points)
all_fluxes = 10 + (0.1*np.random.randn(len(all_times)) +
np.sin(2*np.pi/primary_period * all_times) +
0.7*np.cos(2*np.pi/secondary_period * (all_times - 2.5)))
# Remove some fluxes, times from those data:
n_points_missing = 200 # This number is approximate
missing_indices = np.unique(np.random.randint(0, n_points,
size=n_points_missing))
mask = list(set(np.arange(len(all_times))).difference(set(missing_indices)))
times_incomplete = all_times[mask]
fluxes_incomplete = all_fluxes[mask]
# Plot these fluxes before and after data are removed:
fig, ax = plt.subplots(1, 2, figsize=(14, 5))
ax[0].plot(all_times, all_fluxes, '.')
ax[0].set(title='All fluxes (N={0})'.format(len(all_fluxes)))
ax[1].plot(times_incomplete, fluxes_incomplete, '.')
ax[1].set(title='With fluxes missing (N={0})'.format(len(fluxes_incomplete)))
plt.show()
###Output
_____no_output_____
###Markdown
Now we'll use two `interpacf` methods on these simulated fluxes:
* `interpacf.interpolated_acf` will interpolate over the missing fluxes and compute the autocorrelation function. Don't forget to subtract the mean from the flux!
* `interpacf.dominant_period` returns the lag with the highest peak in the smoothed autocorrelation function. The default smoothing kernel matches that of [McQuillan, Aigrain & Mazeh (2013)](http://adsabs.harvard.edu/abs/2013MNRAS.432.1203M).
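As a rough conceptual sketch (not the actual `interpacf` implementation), the interpolate-then-autocorrelate idea looks something like this in plain `numpy`:
###Code
# Conceptual sketch only: fill the gaps by linear interpolation onto the
# original uniform time grid, then compute a plain autocorrelation function.
filled = np.interp(all_times, times_incomplete,
                   fluxes_incomplete - np.mean(fluxes_incomplete))
acf_naive = np.correlate(filled, filled, mode='full')[n_points - 1:]
lags_naive = all_times - all_times[0]
###Output
_____no_output_____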
###Code
from interpacf import interpolated_acf, dominant_period
# Need zero-mean fluxes:
fluxes_incomplete -= np.mean(fluxes_incomplete)
# Compute autocorrelation function
lag, acf = interpolated_acf(times_incomplete, fluxes_incomplete)
# Find dominant period in autocorrelation function
detected_period = dominant_period(lag, acf, plot=True)
print("Actual dominant period: {0:.3f}\nDetected dominant period: "
"{1:.3f}\nDifference: {2:.3f}%"
.format(primary_period, detected_period,
(primary_period - detected_period)/primary_period))
###Output
Actual dominant period: 7.854
Detected dominant period: 7.962
Difference: -0.014%
###Markdown
Comparing with McQuillan, Aigrain & Mazeh (2013)...for my favorite star, HAT-P-11. McQuillan et al. find a rotation period of 29.472 d. What do we find? This example makes use of the `kplr` package to download Kepler data. You'll need to install it to run this example, which you can do with ```pip install kplr```. First, download and normalize each quarter of the HAT-P-11 Kepler light curve:
###Code
import numpy as np
import kplr
client = kplr.API()
# Find the target KOI.
koi = client.koi(3.01)
# Get a list of light curve datasets.
lcs = koi.get_light_curves(short_cadence=False)
# Loop over the datasets and read in the data.
time, flux, ferr, quality = [], [], [], []
for lc in lcs[1:]:
with lc.open() as f:
# The lightcurve data are in the first FITS HDU.
hdu_data = f[1].data
time.append(hdu_data["time"])
flux.append(hdu_data["sap_flux"])
ferr.append(hdu_data["sap_flux_err"])
quality.append(hdu_data["sap_quality"])
time = np.array(time)
# Median normalize each quarter of observations
flux = np.array([f/np.nanmedian(f) - 1 for f in flux])
###Output
_____no_output_____
###Markdown
Now measure the peak in the autocorrelation function for each quarter's light curve:
###Code
%matplotlib inline
periods = []
for i, t, f in zip(range(len(time)), time, flux):
lag, acf = interpolated_acf(t[~np.isnan(f)], f[~np.isnan(f)])
period = dominant_period(lag, acf)
periods.append(period)
print("HAT-P-11 period in Q{0}: {1} d".format(i, period))
###Output
HAT-P-11 period in Q0: 27.87245554981928 d
HAT-P-11 period in Q1: 30.242006851767655 d
HAT-P-11 period in Q2: 29.97501441694476 d
HAT-P-11 period in Q3: 29.91453296065447 d
HAT-P-11 period in Q4: 28.424119883165986 d
HAT-P-11 period in Q5: 58.62456519744592 d
HAT-P-11 period in Q6: 29.260692983400077 d
HAT-P-11 period in Q7: 30.590147634444293 d
HAT-P-11 period in Q8: 29.220380285405554 d
HAT-P-11 period in Q9: 30.507091305764334 d
HAT-P-11 period in Q10: 51.67836866840298 d
HAT-P-11 period in Q11: 28.546062678855378 d
HAT-P-11 period in Q12: 29.281246874539647 d
HAT-P-11 period in Q13: 27.606573058765207 d
###Markdown
Compare with McQuillan+ 2013:
###Code
print("Median period (interpacf): {0};\n"
      "Period McQuillan+ 2013: 29.472"
      .format(np.median(periods)))
###Output
_____no_output_____
###Markdown
Simple Addition
###Code
a = Symbol('a', val=1, err=0.01)
b = Symbol('b', val=2, err=0.01)
r1 = a + b
r1.simp()
r1.display(name="r1")
###Output
_____no_output_____
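###Markdown
For uncorrelated uncertainties, the standard first-order propagation rules (presumably what `simp()` evaluates symbolically) for the addition, multiplication and exponentiation shown in this notebook are $$\sigma_{a+b} = \sqrt{\sigma_a^2 + \sigma_b^2}, \qquad \frac{\sigma_{ab}}{|ab|} = \sqrt{\left(\frac{\sigma_a}{a}\right)^2 + \left(\frac{\sigma_b}{b}\right)^2}, \qquad \frac{\sigma_{a^n}}{|a^n|} = |n|\,\frac{\sigma_a}{|a|}.$$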
###Markdown
Multiplication
###Code
r1 = a * b
r1.simp()
r1.display(name="r1")
###Output
_____no_output_____
###Markdown
Exponentiation
###Code
r1 = a ** (1/2)
r1.simp()
r1.display(name="r1")
###Output
_____no_output_____
###Markdown
Standard error of the mean
###Code
mean, err = Symbol.std_err_of_mean(1.3, 1.5, 1.7)
mean, err
###Output
_____no_output_____
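###Markdown
For these three values the arithmetic is simple: the mean is $1.5$, the sample standard deviation is $s = 0.2$, and the standard error of the mean is $s/\sqrt{N} = 0.2/\sqrt{3} \approx 0.115$ (assuming the library uses the $N-1$ denominator for $s$).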
###Markdown
Miscellaneous
###Code
r1 = a / b + a
r1.simp()
r1.display(name="r1")
###Output
_____no_output_____
###Markdown
A more complex example
###Code
U1 = Symbol('U_1', val=0.888, err=0.007)
U2 = Symbol('U_2', val=0.203, err=0.002)
V = Symbol('V', val=5.637, err=0.001)
C1 = (U1 - (U1**2 - 4*U1*U2)**0.5) / (V**2)
C2 = (U1 + (U1**2 - 4*U1*U2)**0.5) / (V**2)
C1.simp()
C2.simp()
C1.display(name="C_1")
C2.display(name="C_2")
###Output
_____no_output_____
###Markdown
Load graphml and xlsx data
###Code
import pandas as pd
import sys
from flowmater.graph_util import draw_graph
from flowmater.ExperimentManager import ExperimentManager
#init class
em=ExperimentManager()
#load databases
em.load_experiments("example_database/db1")
em.load_experiments("example_database/db2")
#process
em.classify_experiments()
#show
df=pd.DataFrame.from_dict(em.database).T
df
#show flowcharts
draw_graph(em.graph_list[0])
draw_graph(em.graph_list[2])
#calculate fingerprints of the flowcharts
from flowmater.FlowchartFP import FlowchartFP
FFP=FlowchartFP(em.graph_list)
graph_fp_dict={num:FFP(g) for num,g in enumerate(em.graph_list)}
#merge dataframe
fp_df=pd.DataFrame.from_dict(graph_fp_dict).T
fp_df.columns=["FP_"+i for i in FFP.v_to_i.keys()]
merge_df=pd.merge(df,fp_df,left_on="graphID",right_index=True)
merge_df
#simplify database (drop columns of duplicates)
simple_cols=[]
for col in merge_df.columns:
if len(list(merge_df[col].drop_duplicates()))>1:
simple_cols.append(col)
merge_df[simple_cols]
###Output
_____no_output_____
###Markdown
Line detection with PCLines This notebook shows, step by step, how to use the `pclines` package for line detection.
###Code
import numpy as np
from skimage.io import imread
from skimage.filters import sobel
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
from pclines import PCLines
from pclines import utils
%matplotlib inline
###Output
_____no_output_____
###Markdown
Prepare data Here we extract edges with a *Sobel filter*, which may not be suitable for your application; it is used here just for demonstration purposes. You need to develop your own, application-specific way of producing observations. The input to PCLines is simply an `Nx2` matrix of coordinates enclosed in a known bounding box. Any point outside the defined box is ignored.
###Code
image = imread("doc/test.png", as_gray=True)
_,ax = plt.subplots(1, figsize=(5,5))
ax.imshow(image, cmap="gray")
ax.set(title="Input image", xticks=[], yticks=[])
plt.tight_layout()
edges = sobel(image)
r,c = np.nonzero(edges > 0.5) # Locations of edges
x = np.array([c,r],"i").T # Matrix with edges [(x1,y1), ... ]
weights = edges[r,c]
weights.shape
_,ax = plt.subplots(1, figsize=(5,5))
ax.imshow(edges, cmap="Greys")
ax.set(title="Edge map - observations", xticks=[], yticks=[])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Accumulate the observations An instance of `PCLines` must be created, and the observations (2D point coordinates) are inserted using the `insert` method. This can be called multiple times to fill the accumulator space. The peaks are then located using `find_peaks`; each peak corresponds to a line in the original space.
###Code
h,w = image.shape[:2]
bbox=(0,0,w,h)
d = 1024
# Create new accumulator
P = PCLines(bbox, d)
# Insert observations
P.insert(x, weights)
# Find local maxima
p, w = P.find_peaks(min_dist=10, prominence=1.3, t=0.1)
f,ax = plt.subplots(1, figsize=(10,5))
ax.plot(p[:,1], p[:,0], "r+")
ax.imshow(np.sqrt(P.A), cmap="Greys")
ax.set(title="Accumulator space",xticks=[],yticks=[])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Obtain line parameters from the accumulator Local maxima from `find_peaks` can be transformed using `inverse` to line parameters in $(a,b,c)$ form, i.e. the line $ax + by + c = 0$.
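Clipping such a homogeneous line to the image bounding box (which `utils.line_segments_from_homogeneous` does below) amounts to intersecting $ax + by + c = 0$ with the four box borders; a rough sketch of that idea, independent of the `pclines` utilities, is:
###Code
# Conceptual sketch only (not the pclines implementation): intersect the
# line a*x + b*y + c = 0 with the borders of a (0, 0, w, h) box and keep
# the intersection points that fall inside the box.
def clip_line_to_bbox(a, b, c, w, h, eps=1e-9):
    points = []
    if abs(b) > eps:  # crossings of the vertical borders x = 0 and x = w
        for x0 in (0.0, float(w)):
            y0 = -(a * x0 + c) / b
            if 0.0 <= y0 <= h:
                points.append((x0, y0))
    if abs(a) > eps:  # crossings of the horizontal borders y = 0 and y = h
        for y0 in (0.0, float(h)):
            x0 = -(b * y0 + c) / a
            if 0.0 <= x0 <= w:
                points.append((x0, y0))
    return points[:2] if len(points) >= 2 else None
###Output
_____no_output_____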
###Code
h = P.inverse(p)
X,Y = utils.line_segments_from_homogeneous(h, bbox)
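# (Added sketch) Each row of `h` is assumed to hold homogeneous line parameters (a, b, c)
# for the line a*x + b*y + c = 0. For non-vertical lines (b != 0) this can be rewritten in
# slope-intercept form y = -(a/b)*x - (c/b), which is sometimes easier to inspect:
for a, b, c in np.asarray(h):
    if abs(b) > 1e-9:
        print(f"y = {-a/b:+.3f} * x {-c/b:+.3f}")
    else:
        print(f"x = {-c/a:+.3f} (vertical line)")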
f,ax = plt.subplots(figsize=(5,5))
ax.imshow(image, cmap="gray")
for x,y in zip(X,Y):
if x is None or y is None:
continue
l = Line2D(x,y, color="r")
ax.add_artist(l)
ax.set(title="Image with detected lines", xticks=[], yticks=[])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
User Guide[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/imartinezl/cpab/HEAD) Introduction The CPAB library allows us to create transformations $\phi(x,t)$ based on the integration of a continuous piecewise affine velocity field $v(x)$. Let us bring some clarity to this sentence with a few definitions:- The transformation $\phi(x,t)$ is created by integrating a velocity field. For that, we need to solve a differential equation of the form: $$\frac{\partial\phi(x,t)}{\partial t} = v(\phi(x))$$The transformation $\phi(x,t)$ depends on two variables: $x$ (spatial dimension) and $t$ (integration time).- The velocity field $v(x)$ can be a function of any form and shape, but in this library we focus on a specific type of function: continuous piecewise affine functions.- Continuous function: there are no discontinuities in the function domain.- Piecewise function: a function that is defined by parts.- Affine: a geometric transformation that consists of a linear transformation plus a translation.Thus, a continuous piecewise affine function is just a set of lines joined together (a minimal numerical sketch of such a field is appended to the next code cell). In summary, this library integrates (efficiently) these functions to create diffeomorphic transformations $\phi(x,t)$ that are very useful for many tasks in machine learning. Loading libraries First, we need to import the necessary Python libraries: the ``cpab`` library to compute the transformations, ``matplotlib`` for data visualization, ``numpy`` for array manipulation and ``pytorch`` for autodifferentiation and gradient descent optimization.
###Code
import numpy as np
import torch
import matplotlib.pyplot as plt
import cpab
plt.rcParams["figure.figsize"] = (10, 7)
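# (Added sketch, not part of the original notebook) A continuous piecewise affine (CPA)
# velocity field is just a set of line segments joined continuously at the cell boundaries.
# A tiny hand-made 1D example with two cells on [0, 1], plus a naive forward-Euler
# integration of d(phi)/dt = v(phi) -- for illustration only; cpab integrates such fields
# efficiently and in closed form per cell.
def cpa_velocity_example(x):
    # cell 1: v(x) = 2x on [0, 0.5]; cell 2: v(x) = -2x + 2 on (0.5, 1]; they meet at v(0.5) = 1
    return np.where(x <= 0.5, 2.0 * x, -2.0 * x + 2.0)

def euler_integrate(x, t=1.0, steps=200):
    phi = np.array(x, dtype=float)
    dt = t / steps
    for _ in range(steps):
        phi = phi + dt * cpa_velocity_example(phi)
    return phi

euler_integrate(np.linspace(0, 1, 5))  # maps grid points forward under the toy field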
###Output
_____no_output_____
###Markdown
Transformation parameters In order to create a transformation $\phi(x,t)$, several options need to be specified. CPAB transformations are built by integrating a continuous piecewise affine velocity field $v(x)$. Such a velocity field is defined on a regular grid, or tessellation. In this example, we will set the number of intervals to 5 (``tess_size=5``).The ``backend`` option lets us choose between the ``numpy`` backend and the ``pytorch`` backend, the latter being the preferred option for optimization tasks. These computations can also be executed on a CPU or GPU ``device`` (for the ``pytorch`` backend). Setting the ``zero_boundary`` condition to ``True`` constrains the velocity $v(x)$ at the tessellation boundary to 0, so that $v(0)=0$ and $v(1)=0$. The ``basis`` option lets us choose between {``svd``, ``sparse``, ``rref``, ``qr``}, and it represents the method used to obtain the null-space representation for continuous piecewise affine functions with ``tess_size`` intervals. In this case, we have used the QR decomposition to build the basis.
###Code
tess_size = 5
backend = "numpy" # ["pytorch", "numpy"]
device = "cpu" # ["cpu", "gpu"]
zero_boundary = True # [True, False]
basis = "qr" # ["svd", "sparse", "rref", "qr"]
T = cpab.Cpab(tess_size, backend, device, zero_boundary, basis)
###Output
_____no_output_____
###Markdown
Transformation example Then, we need to create the one-dimensional grid that is going to be transformed. For that, we use the ``uniform_meshgrid`` method, and we set the number of equally spaced points in the grid to 100. The velocity field $v(x)$ in CPAB transformations is parameterized by a vector $\theta$. In this example, taking into account the zero-velocity constraints at the boundary, only 4 dimensions or degrees of freedom are left to play with, and that is indeed the dimensionality of $\theta$, a vector of 4 values.Finally, we can pass the ``grid`` and the ``theta`` parameters to the ``transform_grid`` method and compute the transformed grid ``grid_t`` $\phi(x)$.
###Code
outsize = 100
grid = T.uniform_meshgrid(outsize)
batch_size = 1
theta = T.identity(batch_size, epsilon=2)
grid_t = T.transform_grid(grid, theta)
###Output
_____no_output_____
###Markdown
We can use the methods ``visualize_velocity`` and ``visualize_deformgrid`` to plot the velocity field $v(x)$ and the transformed grid $\phi(x,t)$ respectively.
###Code
T.visualize_velocity(theta);
T.visualize_deformgrid(theta);
###Output
_____no_output_____
###Markdown
The dotted black line represents the identity transformation $\phi(x,t) = x$. Integration details By default, the velocity field is integrated up to $t=1$. The following figure shows how the transformed grid changes along the integration time $t$.
###Code
grid = T.uniform_meshgrid(outsize)
theta = T.identity(batch_size, epsilon=2)
fig, ax = plt.subplots()
ax_zoom = fig.add_axes([0.2,0.58,0.2,0.25])
ax.axline((0,0),(1,1), color="blue", ls="dashed")
ax_zoom.axline((0,0),(1,1), color="blue", ls="dashed")
N = 11
for i in range(N):
time = i / (N-1)
grid_t = T.transform_grid(grid, theta, time=time)
ax.plot(grid, grid_t.T, label=round(time, 2), color="black", alpha=time)
ax_zoom.plot(grid, grid_t.T, label=round(time, 2), color="black", alpha=time)
ax.grid()
ax.set_xlabel("Original Time")
ax.set_ylabel("Transformed Time")
sm = plt.cm.ScalarMappable(cmap="gray_r")
cbar = plt.colorbar(sm, ax=ax)
cbar.ax.get_yaxis().labelpad = 15
cbar.ax.set_ylabel('Integration time', rotation=270)
ax_zoom.grid()
ax_zoom.set_xlim(.25, .35)
ax_zoom.set_ylim(.25, .35)
ax_zoom.set_xticklabels([])
ax_zoom.set_yticklabels([])
ax_zoom.xaxis.set_ticks_position('none')
ax_zoom.yaxis.set_ticks_position('none')
from matplotlib.patches import Rectangle
import matplotlib.lines as lines
r = Rectangle((.25,.25), 0.1, 0.1, edgecolor="red", facecolor="none", lw=1)
ax.add_patch(r)
line = lines.Line2D([0.085,0.25], [0.62, 0.35], color="red", lw=1)
ax.add_line(line)
line = lines.Line2D([0.435,0.35], [0.62, 0.35], color="red", lw=1)
ax.add_line(line);
###Output
_____no_output_____
###Markdown
Scaling and squaringThe CPAB library allows us to use the scaling and squaring method to approximate the velocity field integration. This method uses the following property of diffeomorphic transformations to accelerate the computation of the integral:$$\phi(x,t+s) = \phi(x,t) \circ \phi(x,s)$$Thus, computing the transformation $\phi$ at time $t+s$ is equivalent to composing the transformations at time $t$ and $s$. In the scaling and squaring method, we impose $t=s$, so that we need to compute only one transformation and self-compose it: $$\phi(x,2t) = \phi(x,t) \circ \phi(x,t)$$Repeating this squaring procedure $N$ times, we can efficiently approximate the integration using only $N$ compositions:$$\phi(x,2^N t) = \underbrace{\phi(x,t) \; \circ \; \cdots \; \circ \; \phi(x,t)}_{2^N \text{ terms}}$$(A small numerical check of the composition property on a toy flow is included at the top of the next code cell.)
###Code
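# (Added sketch) Quick numerical check of the composition property phi(x, t+s) = phi(x, t) o phi(x, s)
# on a toy flow whose integral is known in closed form: for v(x) = a*x we have phi(x, t) = x * exp(a*t),
# so composing phi(., t) with itself gives phi(., 2t). Illustration only, independent of cpab.
a_toy = 0.7
phi_toy = lambda x, t: x * np.exp(a_toy * t)
x0 = np.linspace(0.1, 1.0, 5)
assert np.allclose(phi_toy(phi_toy(x0, 0.3), 0.3), phi_toy(x0, 0.6))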
grid = T.uniform_meshgrid(outsize)
theta = T.identity(batch_size, epsilon=2)
fig, ax = plt.subplots()
ax_zoom = fig.add_axes([0.2,0.58,0.2,0.25])
ax.axline((0,0),(1,1), color="blue", ls="dashed")
ax_zoom.axline((0,0),(1,1), color="blue", ls="dashed")
N = 11
for i in range(N):
alpha = i / (N-1)
grid_t = T.transform_grid_ss(grid, theta / 2**N, N=i+1)
ax.plot(grid, grid_t.T, label=round(alpha, 2), color="black", alpha=alpha)
ax_zoom.plot(grid, grid_t.T, label=round(alpha, 2), color="black", alpha=alpha)
ax.grid()
ax.set_xlabel("Original Time")
ax.set_ylabel("Transformed Time")
sm = plt.cm.ScalarMappable(cmap="gray_r")
cbar = plt.colorbar(sm, ax=ax)
cbar.ax.get_yaxis().labelpad = 15
cbar.ax.set_ylabel('Scaling-Squaring iteration', rotation=270)
ax_zoom.grid()
ax_zoom.set_xlim(.25, .35)
ax_zoom.set_ylim(.25, .35)
ax_zoom.set_xticklabels([])
ax_zoom.set_yticklabels([])
ax_zoom.xaxis.set_ticks_position('none')
ax_zoom.yaxis.set_ticks_position('none')
from matplotlib.patches import Rectangle
import matplotlib.lines as lines
r = Rectangle((.25,.25), 0.1, 0.1, edgecolor="red", facecolor="none", lw=1)
ax.add_patch(r)
line = lines.Line2D([0.085,0.25], [0.62, 0.35], color="red", lw=1)
ax.add_line(line)
line = lines.Line2D([0.435,0.35], [0.62, 0.35], color="red", lw=1)
ax.add_line(line);
###Output
_____no_output_____
###Markdown
Data transformationThe time series data must have a shape (batch, length, channels). In this example, we have created a sinusoidal dataset of one batch, 50 points in length, and 2 channels. Then, to transform time series data, we can use the ``transform_data`` method and pass as arguments:- data: n-dimensional array of shape (batch, length, channels)- theta: transformation parameters- outsize: length of the transformed data, with final shape (batch, outsize, channels)
###Code
batch_size = 1
length = 50
channels = 2
outsize = 100
# Generation
m = np.ones((batch_size, channels))
x = np.linspace(m*0, m*2*np.pi, length, axis=1)
data = np.sin(x)
theta = T.identity(batch_size, epsilon=1)
data_t = T.transform_data(data, theta, outsize)
###Output
_____no_output_____
###Markdown
And we can visualize this data transformation with the ``visualize_deformdata`` method. The red curves represent the original data and the blue ones are the transformed data after applying the transformation.
###Code
T.visualize_deformdata(data, theta);
###Output
_____no_output_____
###Markdown
Auxiliary Functions
###Code
from baselines.ViT.ViT_LRP import vit_base_patch16_224 as vit_LRP
from baselines.ViT.ViT_explanation_generator import LRP
# Imports used below (assumed to live in an earlier cell of the original notebook):
import cv2
import numpy as np
import torch
import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms
# CLS2IDX (the ImageNet index -> class-name mapping shipped with the repository) is assumed to be available.
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
# create heatmap from mask on image
def show_cam_on_image(img, mask):
heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET)
heatmap = np.float32(heatmap) / 255
cam = heatmap + np.float32(img)
cam = cam / np.max(cam)
return cam
# initialize ViT pretrained
model = vit_LRP(pretrained=True).cuda()
model.eval()
attribution_generator = LRP(model)
def generate_visualization(original_image, class_index=None):
transformer_attribution = attribution_generator.generate_LRP(original_image.unsqueeze(0).cuda(), method="transformer_attribution", index=class_index).detach()
transformer_attribution = transformer_attribution.reshape(1, 1, 14, 14)
transformer_attribution = torch.nn.functional.interpolate(transformer_attribution, scale_factor=16, mode='bilinear')
transformer_attribution = transformer_attribution.reshape(224, 224).cuda().data.cpu().numpy()
transformer_attribution = (transformer_attribution - transformer_attribution.min()) / (transformer_attribution.max() - transformer_attribution.min())
image_transformer_attribution = original_image.permute(1, 2, 0).data.cpu().numpy()
image_transformer_attribution = (image_transformer_attribution - image_transformer_attribution.min()) / (image_transformer_attribution.max() - image_transformer_attribution.min())
vis = show_cam_on_image(image_transformer_attribution, transformer_attribution)
vis = np.uint8(255 * vis)
vis = cv2.cvtColor(np.array(vis), cv2.COLOR_RGB2BGR)
return vis
def print_top_classes(predictions, **kwargs):
# Print Top-5 predictions
prob = torch.softmax(predictions, dim=1)
class_indices = predictions.data.topk(5, dim=1)[1][0].tolist()
max_str_len = 0
class_names = []
for cls_idx in class_indices:
class_names.append(CLS2IDX[cls_idx])
if len(CLS2IDX[cls_idx]) > max_str_len:
max_str_len = len(CLS2IDX[cls_idx])
print('Top 5 classes:')
for cls_idx in class_indices:
output_string = '\t{} : {}'.format(cls_idx, CLS2IDX[cls_idx])
output_string += ' ' * (max_str_len - len(CLS2IDX[cls_idx])) + '\t\t'
output_string += 'value = {:.3f}\t prob = {:.1f}%'.format(predictions[0, cls_idx], 100 * prob[0, cls_idx])
print(output_string)
###Output
_____no_output_____
###Markdown
Examples Cat-Dog
###Code
image = Image.open('samples/catdog.png')
dog_cat_image = transform(image)
fig, axs = plt.subplots(1, 3)
axs[0].imshow(image);
axs[0].axis('off');
output = model(dog_cat_image.unsqueeze(0).cuda())
print_top_classes(output)
# cat - the predicted class
cat = generate_visualization(dog_cat_image)
# dog
# generate visualization for class 243: 'bull mastiff'
dog = generate_visualization(dog_cat_image, class_index=243)
axs[1].imshow(cat);
axs[1].axis('off');
axs[2].imshow(dog);
axs[2].axis('off');
###Output
Top 5 classes:
282 : tiger cat value = 10.559 prob = 68.6%
281 : tabby, tabby cat value = 9.059 prob = 15.3%
285 : Egyptian cat value = 8.414 prob = 8.0%
243 : bull mastiff value = 7.425 prob = 3.0%
811 : space heater value = 5.152 prob = 0.3%
###Markdown
Tusker-Zebra
###Code
image = Image.open('samples/el2.png')
tusker_zebra_image = transform(image)
fig, axs = plt.subplots(1, 3)
axs[0].imshow(image);
axs[0].axis('off');
output = model(tusker_zebra_image.unsqueeze(0).cuda())
print_top_classes(output)
# tusker - the predicted class
tusker = generate_visualization(tusker_zebra_image)
# zebra
# generate visualization for class 340: 'zebra'
zebra = generate_visualization(tusker_zebra_image, class_index=340)
axs[1].imshow(tusker);
axs[1].axis('off');
axs[2].imshow(zebra);
axs[2].axis('off');
image = Image.open('samples/dogbird.png')
dog_bird_image = transform(image)
fig, axs = plt.subplots(1, 3)
axs[0].imshow(image);
axs[0].axis('off');
output = model(dog_bird_image.unsqueeze(0).cuda())
print_top_classes(output)
# basset - the predicted class
basset = generate_visualization(dog_bird_image, class_index=161)
# generate visualization for class 87: 'African grey, African gray, Psittacus erithacus (grey parrot)'
parrot = generate_visualization(dog_bird_image, class_index=87)
axs[1].imshow(basset);
axs[1].axis('off');
axs[2].imshow(parrot);
axs[2].axis('off');
###Output
Top 5 classes:
161 : basset, basset hound value = 10.514 prob = 78.8%
163 : bloodhound, sleuthhound value = 8.604 prob = 11.7%
166 : Walker hound, Walker foxhound value = 7.446 prob = 3.7%
162 : beagle value = 5.561 prob = 0.6%
168 : redbone value = 5.249 prob = 0.4%
###Markdown
###Code
!git clone https://github.com/Siahkamari/Faster-Convex-Lipschitz-Regression.git
%cd /content/Faster-Convex-Lipschitz-Regression
%load_ext autoreload
%autoreload 2
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
from utils import test
task = 'regression'
reg_data_names = [ # n x dim : xgboost seconds
# 'solar_flare', # 1066 x 23 : 13.7xgbs
# 'airfoil_self_noise', # 1503 x 5 : 15.3xgbs
# 'concrete_data', # 1030 x 8 : 17.9xgbs
'garment_productivity', # 905 x 37 : 20.6xgbs
# 'parkinson_multiple_sound_recording_reg', # 702 x 52 : 25.2xgbs
# 'CCPP', # 9568 x 4 : 29.9xgbs
# 'geographical_original_of_music', # 1059 x 68 : 37.7xgbs
# 'communities', # 1994 x 122 : 42.6xgbs
# 'air_quality', # 7110 x 21 : 45.9xgbs
# 'wine_quality', # 4898 x 11 : 56.0xgbs
# 'bias_correction_ucl', # 6200 x 52 : 57.3xgbs
# 'sml2010', # 3000 x 24 : 86.6xgbs
# 'bike_sharing', # 6570 x 19 : 123.xgbs
# 'parkinson_updrs', # 4406 x 25 : 134.xgbs
]
# !pip install rarfile
# from rarfile import RarFile
for data_name in reg_data_names:
test(data_name, task, n_folds=2)
from utils import test
cl_data_names = [ # n x dim xgboost seconds
# 'iris', # 149 x 4 4.5s
'wine', # 178 x 13 5.4s
# 'transfusion', # 748 x 4 5.6s
# 'ionosphere', # 351 x 34 8.7s
# 'wdbc', # 569 x 30 10.7s
# 'balance_scale', # 625 x 4 11.6s
# 'parkinson_multiple_sound_recording_cl', # 944 x 75 87s
# 'coil_2000', # 5822 x 85 240s
# 'abalone', # 4177 x 10
]
task = 'classification'
# !pip install rarfile
for data_name in cl_data_names:
test(data_name, 'classification', n_folds=2)
###Output
_____no_output_____
###Markdown
Here are some examples to show how bizy works.
###Code
import match
import compare
# Compare a focal and an alter list of firm names
fname_focal = '_edgar_biznames.csv'
fname_alter = '_sdc_biznames.csv'
match.match_lists(fname_focal, fname_alter)
# view result in '~matched.000'
# Are two firm names referring to the same firm?
focal = 'Facebook, Inc.'
alter = 'Facebook'
result = compare.compare_biznames(focal, alter)
result
###Output
_____no_output_____
###Markdown
###Code
%matplotlib inline
# !pip install git+https://github.com/jpdeleon/video2nlp.git
###Output
_____no_output_____
###Markdown
Make a wordcloud of closed caption (cc) of [this Trump's speech video](https://www.youtube.com/watch?v=sBYdIPZDYsU).
###Code
# !./video2nlp.py -id sBYdIPZDYsU
from video2nlp import Base
bc = Base(youtube_video_id="sBYdIPZDYsU", verbose=True)
# measure sentiment
senti = bc.get_sentiment()
print("Sentiment: ", senti)
# visualize wordcloud
fig = bc.plot_wordcloud()
###Output
Raw word count: 8488
stopwords removed: 5107
Sentiment: Sentiment(polarity=0.21995583825509454, subjectivity=0.5443397329642683)
###Markdown
Adaptive-scheduler example[Read the documentation](https://adaptive-scheduler.readthedocs.io/en/latest/what-is-this) to see what this is all about. Step 1: define the simulationOften one wants to sweep a continuous 1D or 2D space for multiple parameters. [Adaptive](http://adaptive.readthedocs.io) is the ideal program to do this. We define a simulation by creating several `adaptive.Learners`. We **need** to define the following variables:* `learners` a list of learners* `fnames` a list of file names, one for each learner
###Code
from functools import partial
import adaptive
def h(x, width=0.01, offset=0):
import numpy as np
import random
for _ in range(10): # Burn some CPU time just because
np.linalg.eig(np.random.rand(1000, 1000))
a = width
return x + a ** 2 / (a ** 2 + (x - offset) ** 2)
offsets = [i / 10 - 0.5 for i in range(5)]
combos = adaptive.utils.named_product(offset=offsets, width=[0.01, 0.05])
learners = []
fnames = []
for combo in combos:
f = partial(h, **combo)
learner = adaptive.Learner1D(f, bounds=(-1, 1))
fnames.append(f"data/{combo}")
learners.append(learner)
###Output
_____no_output_____
###Markdown
Step 2: run the `learners`After defining the `learners` and `fnames` in a file (above) we can start to run these learners.We split up all learners into separate jobs; all you need to do is specify how many cores per job you want. Simple example
###Code
import adaptive_scheduler
def goal(learner):
return learner.npoints > 200
scheduler = adaptive_scheduler.scheduler.DefaultScheduler(
cores=10, executor_type="ipyparallel",
) # PBS or SLURM
run_manager = adaptive_scheduler.server_support.RunManager(
scheduler, learners, fnames, goal=goal, log_interval=30, save_interval=30,
)
run_manager.start()
# See the current queue with
import pandas as pd
queue = scheduler.queue()
df = pd.DataFrame(queue).transpose()
df.head()
# Read the logfiles and put it in a `pandas.DataFrame`.
# This only returns something when there are log-files to parse!
# So after `run_manager.log_interval` has passed.
df = run_manager.parse_log_files()
df.head()
# See the database
df = run_manager.get_database() # or see `run_manager.database_manager.as_dict()`
df.head()
# After the calculation started and some data has been saved, we can display the learners
import adaptive
adaptive.notebook_extension()
run_manager.load_learners()
learner = adaptive.BalancingLearner(learners, cdims=combos)
learner.plot()
###Output
_____no_output_____
###Markdown
Simple sequential exampleSometimes you cannot formulate your problem with Adaptive; instead, you just want to run a function over a sequence of parameters.Surprisingly, this approach with a `SequenceLearner` [is slightly faster than `ipyparallel.Client.map`](https://github.com/python-adaptive/adaptive/pull/193#issuecomment-491062073).
###Code
import numpy as np
from adaptive import SequenceLearner
from adaptive_scheduler.utils import split, combo_to_fname
def g(xyz):
x, y, z = xyz
for _ in range(5): # Burn some CPU time just because
np.linalg.eig(np.random.rand(1000, 1000))
return x ** 2 + y ** 2 + z ** 2
xs = np.linspace(0, 10, 11)
ys = np.linspace(-1, 1, 11)
zs = np.linspace(-3, 3, 11)
xyzs = [(x, y, z) for x in xs for y in ys for z in zs]
# We have only one learner so one fname
learners = [SequenceLearner(g, sequence=xyzs)]
fnames = ["data/xyzs"]
import adaptive_scheduler
def goal(learner):
return learner.done()
scheduler = adaptive_scheduler.scheduler.DefaultScheduler(
cores=10, executor_type="ipyparallel",
) # PBS or SLURM
run_manager2 = adaptive_scheduler.server_support.RunManager(
scheduler, learners, fnames, goal=goal, log_interval=30, save_interval=30,
)
run_manager2.start()
run_manager2.load_learners()
learner = learners[0]
try:
result = learner.result()
print(result)
except:
print("`learner.result()` is only available when all values are calculated.")
partial_data = learner.data
print(partial_data)
###Output
_____no_output_____
###Markdown
Extended exampleThis example shows how to split up a list into 100 `SequenceLearner`s and run them in 100 jobs.
###Code
import numpy as np
from adaptive import SequenceLearner
from adaptive_scheduler.utils import split, combo2fname
from adaptive.utils import named_product
def g(combo):
x, y, z = combo["x"], combo["y"], combo["z"]
for _ in range(5): # Burn some CPU time just because
np.linalg.eig(np.random.rand(1000, 1000))
return x ** 2 + y ** 2 + z ** 2
combos = named_product(x=np.linspace(0, 10), y=np.linspace(-1, 1), z=np.linspace(-3, 3))
print(f"Length of combos: {len(combos)}.")
# We could run this as 1 job with N nodes, but we can also split it up in multiple jobs.
# This is desirable when you don't want to run a single job with 300 nodes, for example.
# Note that
# `adaptive_scheduler.utils.split_sequence_in_sequence_learners(g, combos, 100, "data")`
# does the same!
njobs = 100
split_combos = list(split(combos, njobs))
print(
f"Length of split_combos: {len(split_combos)} and length of split_combos[0]: {len(split_combos[0])}."
)
learners = [SequenceLearner(g, combos_part) for combos_part in split_combos]
fnames = [combo2fname(combos_part[0], folder="data") for combos_part in split_combos]
###Output
_____no_output_____
###Markdown
We now start the `RunManager` with a lot of arguments to showcase some of the options you can use to customize your run.
###Code
from functools import partial
import adaptive_scheduler
from adaptive_scheduler.scheduler import DefaultScheduler, PBS, SLURM
def goal(learner):
return learner.done() # the standard goal for a SequenceLearner
extra_scheduler = (
["--exclusive", "--time=24:00:00"] if DefaultScheduler is SLURM else []
)
scheduler = adaptive_scheduler.scheduler.DefaultScheduler(
cores=10,
executor_type="ipyparallel",
extra_scheduler=extra_scheduler,
extra_env_vars=["PYTHONPATH='my_dir:$PYTHONPATH'"],
python_executable="~/miniconda3/bin/python",
log_folder="logs",
) # PBS or SLURM
run_manager3 = adaptive_scheduler.server_support.RunManager(
scheduler,
learners,
fnames,
goal=goal,
log_interval=10,
save_interval=30,
runner_kwargs=dict(retries=5, raise_if_retries_exceeded=False),
kill_on_error="srun: error:", # cancel a job if this is inside a log
job_name="example-sequence", # this is used to generate unique job names
db_fname="example-sequence.json", # the database keeps track of job_id <-> (learner, is_done)
start_job_manager_kwargs=dict(
max_fails_per_job=10, # the RunManager is cancelled after njobs * 10 fails
max_simultaneous_jobs=300, # limit the amount of simultaneous jobs
),
)
run_manager3.start()
df = run_manager3.parse_log_files()
df.head()
run_manager3.load_learners() # load the data into the learners
result = sum(
[l.result() for l in learners], []
) # combine all learner's result into 1 list
###Output
_____no_output_____
###Markdown
Setup environment
###Code
import os
import numpy as np
import pandas as pd
import json
from skimage.io import imread
from psf import compute, plotPSF
###Output
_____no_output_____
###Markdown
Setup plotting
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_context('paper', font_scale=2.0)
sns.set_style('ticks')
from IPython.html.widgets import interactive
from IPython.html.widgets import IntSliderWidget
from IPython.display import display
###Output
/Users/sofroniewn/anaconda/lib/python2.7/site-packages/IPython/html.py:14: ShimWarning: The `IPython.html` package has been deprecated. You should import from `notebook` instead. `IPython.html.widgets` has moved to `ipywidgets`.
"`IPython.html.widgets` has moved to `ipywidgets`.", ShimWarning)
###Markdown
Define parameters
###Code
FOVumLat = 61.0
FOVpxLat = 512.0 # 512
pxPerUmLat = FOVpxLat/FOVumLat
pxPerUmAx = 2.0 # 2.0
wavelength = 970.0
NA = 0.6
windowUm = [12, 2, 2]
options = {'FOVumLat':FOVumLat, 'FOVpxLat':FOVpxLat, 'pxPerUmLat':FOVpxLat/FOVumLat, 'pxPerUmAx':pxPerUmAx, 'wavelength':970.0, 'NA':0.6, 'windowUm':windowUm}
options['thresh'] = .05
options
###Output
_____no_output_____
###Markdown
Get PSF
###Code
im = imread('./data/images.tif', plugin='tifffile')
data, beads, maxima, centers, smoothed = compute(im, options)
PSF = pd.concat([x[0] for x in data])
PSF['Max'] = maxima
PSF = PSF.reset_index().drop(['index'],axis=1)
latProfile = [x[1] for x in data]
axProfile = [x[2] for x in data]
PSF
print len(PSF)
print PSF.mean()
print PSF.std()
###Output
14
FWHMlat 0.951830
FWHMax 4.772319
Max 286.214286
dtype: float64
FWHMlat 0.061514
FWHMax 0.425010
Max 212.956904
dtype: float64
###Markdown
Plot max projection
###Code
plt.figure(figsize=(5,5));
plt.imshow(smoothed);
plt.plot(centers[:, 2], centers[:, 1], 'r.', ms=10);
plt.xlim([0, smoothed.shape[0]])
plt.ylim([smoothed.shape[1], 0])
plt.axis('off');
###Output
_____no_output_____
###Markdown
Plot average bead planes
###Code
beadInd = 1
average = beads[beadInd]
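# (Added sketch) `plotAvg` is not defined in this excerpt; below is a minimal assumed helper
# that shows one z-plane of the averaged bead stack so the interactive slider can call it.
# This is an illustration, not necessarily the original notebook's implementation.
def plotAvg(i):
    plt.figure(figsize=(4, 4))
    plt.imshow(average[i], cmap='gray')
    plt.axis('off')
    plt.show()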
plane = IntSliderWidget(min=0, max=average.shape[0]-1, step=1, value=average.shape[0]/2)
interactive(plotAvg, i=plane)
###Output
_____no_output_____
###Markdown
Plot 2D slices
###Code
plt.imshow(average.mean(axis=0));
plt.axis('off');
plt.imshow(average.mean(axis=1), aspect = pxPerUmLat/pxPerUmAx);
plt.axis('off');
plt.imshow(average.mean(axis=2), aspect = pxPerUmLat/pxPerUmAx);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Plotting
###Code
plotPSF(latProfile[beadInd][0],latProfile[beadInd][1],latProfile[beadInd][2],latProfile[beadInd][3],pxPerUmLat,PSF.Max.iloc[beadInd])
plotPSF(axProfile[beadInd][0],axProfile[beadInd][1],axProfile[beadInd][2],axProfile[beadInd][3],pxPerUmAx,PSF.Max.iloc[beadInd])
###Output
_____no_output_____
###Markdown
ModelAnimation Example NotebookThis sample notebook demos the use of ModelAnimation on a simple TF model.The first few cells have nothing to do with ModelAnimation, other than setting up the model.
###Code
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
from sklearn.model_selection import train_test_split
import numpy as np
import random
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Creating a simple 'home-made' data set.
###Code
template = np.array([[2.0,2.0,2.0,0.1,0.1,0.1,0.1,0.1,0.1],
[0.1,0.1,2.0,0.1,0.1,2.0,0.1,0.1,2.0],
[0.1,0.1,0.1,0.1,0.1,0.1,2.0,2.0,2.0],
[2.0,0.1,0.1,2.0,0.1,0.1,2.0,0.1,0.1],
[2.0,0.1,0.1,0.1,2.0,0.1,0.1,0.1,2.0],
[0.1,0.1,2.0,0.1,2.0,0.1,2.0,0.1,0.1]])
X = []
y = []
for _ in range(2000):
for i in range(len(template)):
r = np.random.rand(9)
X.append(template[i] * r)
y.append(i)
X = np.array(X)
y = np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
plt.figure(figsize=(15,5))
for i in range(16*4):
plt.subplot(4,16,i+1)
plt.xticks([])
plt.yticks([])
plt.imshow(X_train[i].reshape(3,3), cmap='Greys')
plt.xlabel(y_train[i])
plt.show()
###Output
_____no_output_____
###Markdown
ModelAnimation The following cell includes the ModelAnimation code, and sets up a custom TensorFlow Keras callback. There are three callbacks:- on_train_begin - This stores the model weights in a list called `model_weights` at the start of the training. Broadly speaking, this stores the randomised starting position.- on_epoch_end - This appends to `model_weights` after each epoch.- on_train_end - When the training is complete, this callback triggers the rendering of the frames and optionally the animation. When calling `create_animation` there are several named parameters you can pass in. These are listed in the README file on GitHub.
###Code
from ModelAnimation import ModelAnimation
class CustomCallback(tf.keras.callbacks.Callback):
def on_train_begin(self, logs=None):
model_weights.append(model.get_weights())
def on_epoch_end(self, epoch, logs=None):
model_weights.append(model.get_weights())
def on_train_end(self, logs=None):
animation = ModelAnimation()
animation.create_animation(model_weights, model.input.shape.as_list(),
margin=150,
node_size=50,
node_gap=20,
conn_max_width=10,
background_rgba=(220,220,220,255),
gif=True,
frame_numbers=True)
###Output
_____no_output_____
###Markdown
At the start of this cell we create `model_weights`. We do this here so that if we re-run the training, the list will start again.
###Code
# Clear some values (in case we run multiple times)
tf.keras.backend.clear_session()
model_weights = []
# Create a TF sequential model with Keras
model = tf.keras.models.Sequential([
tf.keras.layers.Input(9),
tf.keras.layers.Dense(12, activation='relu'),
tf.keras.layers.Dense(6, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
When we call `model.fit` we pass in the `callbacks` object, and the animation will run automatically at the end of the training.
###Code
e = model.fit(X_train, y_train,
epochs=10,
callbacks=[CustomCallback()])
###Output
_____no_output_____
###Markdown
Training
###Code
# Imports used below (assumed; the original notebook presumably defined these in earlier cells):
from transformers import GPT2TokenizerFast, get_scheduler
from torch.optim import AdamW
# GPT2PromptTuningLM (the soft-prompt wrapper around GPT-2) is assumed to be defined in an earlier cell or a local module.
class Config:
# Same default parameters as run_clm_no_trainer.py in transformers
# https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm_no_trainer.py
num_train_epochs = 3
weight_decay = 0.01
learning_rate = 0.01
lr_scheduler_type = "linear"
num_warmup_steps = 0
max_train_steps = num_train_epochs
# Prompt-tuning
# number of prompt tokens
n_prompt_tokens = 20
# If True, soft prompt will be initialized from vocab
# Otherwise, you can set `random_range` to initialize by randomization.
init_from_vocab = True
# random_range = 0.5
args = Config()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# Initialize GPT2LM with soft prompt
model = GPT2PromptTuningLM.from_pretrained(
"gpt2",
n_tokens=args.n_prompt_tokens,
initialize_from_vocab=args.init_from_vocab
)
model.soft_prompt.weight
# Prepare dataset
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
print(inputs)
# Only update the soft prompt's weights for prompt-tuning, i.e., all other weights in the LM are set to `requires_grad=False`.
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if n == "soft_prompt.weight"],
"weight_decay": args.weight_decay,
}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
lr_scheduler = get_scheduler(
name=args.lr_scheduler_type,
optimizer=optimizer,
num_warmup_steps=args.num_warmup_steps,
num_training_steps=args.max_train_steps,
)
model.train()
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
print(f"loss: {loss}")
loss.backward()
optimizer.step()
model.soft_prompt.weight
# Confirmed the weights were changed!
# save the prompt model
save_dir_path = "."
model.save_soft_prompt(save_dir_path)
# Once it's done, `soft_prompt.model` is in the dir
###Output
_____no_output_____
###Markdown
InferenceIn the inference phase, you need to feed the input ids to the model through `model.forward()`, so the `model.generate()` method cannot be used. After you get `next_token_logits` as below, you will need additional code for your decoding method (a minimal greedy-decoding sketch is appended at the end of the cell).
###Code
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# Load the model
model = GPT2PromptTuningLM.from_pretrained(
"gpt2",
soft_prompt_path="./soft_prompt.model"
)
model.eval()
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='pt')
input_ids
outputs = model.forward(input_ids=input_ids)
next_token_logits = outputs[0][0, -1, :]
...
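# (Added sketch) One possible way to finish the decoding step: a short greedy loop built on
# model.forward(). Illustration only; not part of the original notebook.
import torch
generated = input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model.forward(input_ids=generated)[0]
        next_id = logits[0, -1, :].argmax().view(1, 1)
        generated = torch.cat([generated, next_id], dim=1)
print(tokenizer.decode(generated[0]))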
###Output
_____no_output_____
###Markdown
Sample notebookAuthor: (arl)
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.keras as K
from skimage import io
import cellx
print(cellx.example_function())
###Output
hello world
###Markdown
Gradient Checkpointing Model-Agnostic Meta-LearningWe demonstrate how to use memory efficient MAML on CIFAR10.This notebook performs one forward and backward for MAML with a large number of iterations* Data: Random tensors (batch_size, 3, 224, 224) * Model: ResNet18* Optimizer: SGD with 0.01 learning rate* Batch size: 16* MAML steps: 100 (works with >500 on 11GB GPU)* GPU: whatever colab has to spare, probably K80
###Code
%env CUDA_VISIBLE_DEVICES=0
# colab dependencies
!pip install torch==1.3.1 torchvision==0.4.2 torch_maml
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import torch, torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
import torch_maml
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# For reproducibility
import random
random.seed(42)
np.random.seed(42)
torch.manual_seed(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmarks = False
###Output
env: CUDA_VISIBLE_DEVICES=0
Requirement already satisfied: torch==1.3.1 in /usr/local/lib/python3.6/dist-packages (1.3.1)
Requirement already satisfied: torchvision==0.4.2 in /usr/local/lib/python3.6/dist-packages (0.4.2)
Collecting torch_maml
Downloading https://files.pythonhosted.org/packages/be/4c/a37a23fe88d41a47589e7653b398762a71d98d7dff8b2111759cc1a173e0/torch_maml-1.0.tar.gz
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.3.1) (1.17.4)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision==0.4.2) (1.12.0)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.4.2) (4.3.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.1.1->torchvision==0.4.2) (0.46)
Building wheels for collected packages: torch-maml
Building wheel for torch-maml (setup.py) ... [?25l[?25hdone
Created wheel for torch-maml: filename=torch_maml-1.0-cp36-none-any.whl size=9396 sha256=4e6f09e990198a915667d462af6b5a50c0088c153a75144003320df25f071cba
Stored in directory: /root/.cache/pip/wheels/79/67/b2/923f59310ddb7a8de189573c3322a1af7754659ee472081bcc
Successfully built torch-maml
Installing collected packages: torch-maml
Successfully installed torch-maml-1.0
###Markdown
Define compute_loss function and create model
###Code
# Interface:
# def compute_loss(model, data, **kwargs):
# <YOUR CODE HERE> # ideally this should be stateless (does not change global variables)
# return loss
# Our example
def compute_loss(model, data, device='cuda'):
inputs, targets = data
preds = model(inputs.to(device=device))
loss = F.cross_entropy(preds, targets.to(device=device))
return loss
# Model is a torch.nn.Module
model = models.resnet18(num_classes=10).to(device)
# Optimizer is a custom MAML optimizer, e.g. SGD
optimizer = torch_maml.IngraphGradientDescent(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
Create NaiveMAML and GradientCheckpointMAML for comparison
###Code
efficient_maml = torch_maml.GradientCheckpointMAML(
model, compute_loss, optimizer=optimizer, checkpoint_steps=5)
naive_maml = torch_maml.NaiveMAML(model, compute_loss, optimizer=optimizer)
###Output
_____no_output_____
###Markdown
Sanity check: small number of stepsBoth naive and memory-efficient maml should produce the same output.
###Code
# First, we set a max number of steps that fits in memory for naive MAML, to check the implementation
maml_steps = 10
# Clip meta-learning gradients by global norm to avoid explosion
max_grad_grad_norm = 1e2
# Generate batch for demonstration. Note that we support using different batches for each MAML step (a-la SGD)
x_batch, y_batch = torch.randn((16, 3, 224, 224)), torch.randint(0, 10, (16, ))
inputs = [(x_batch, y_batch)] * maml_steps # use the same batch for each step
updated_model, loss_history, _ = naive_maml(inputs, loss_kwargs={'device':device},
max_grad_grad_norm=max_grad_grad_norm)
final_loss = compute_loss(updated_model, (x_batch, y_batch), device=device)
final_loss.backward()
grads_naive = [params.grad for params in model.parameters()]
print("Loss naive: %.4f" % final_loss.item())
updated_model, loss_history, _ = efficient_maml(inputs, loss_kwargs={'device':device},
max_grad_grad_norm=max_grad_grad_norm)
final_loss = compute_loss(updated_model, (x_batch, y_batch), device=device)
final_loss.backward()
grads_efficient = [params.grad for params in model.parameters()]
print("Loss memory-efficient: %.4f" % final_loss.item())
for grad1, grad2 in zip(grads_naive, grads_efficient):
assert torch.allclose(grad1, grad2)
print("All grads match!")
# alternative: use rmsprop optimizer
rmsprop_maml = torch_maml.GradientCheckpointMAML(
model, compute_loss, optimizer=torch_maml.IngraphRMSProp(learning_rate=1e-3, beta=0.9, epsilon=1e-5),
checkpoint_steps=5)
updated_model, loss_history, _ = rmsprop_maml(inputs, loss_kwargs={'device':device},
max_grad_grad_norm=max_grad_grad_norm)
final_loss = compute_loss(updated_model, (x_batch, y_batch), device=device)
final_loss.backward()
grads_efficient = [params.grad for params in model.parameters()]
print("Loss RMSProp: %.4f" % final_loss.item())
###Output
Loss RMSProp: 0.0224
###Markdown
The real meta-learning: 100 steps and beyond
###Code
maml_steps = 100 # feel free to tweak (works with >500)
inputs = [(x_batch, y_batch)] * maml_steps
torch.cuda.empty_cache()
updated_model, loss_history, _ = efficient_maml(inputs, loss_kwargs={'device':device},
max_grad_grad_norm=max_grad_grad_norm)
final_loss = compute_loss(updated_model, (x_batch, y_batch), device=device)
final_loss.backward()
grads_efficient = [params.grad for params in model.parameters()]
plt.plot(loss_history)
print("Loss memory-efficient: %.4f" % final_loss.item())
# naive maml can't handle this...
updated_model, loss_history, _ = naive_maml(inputs, loss_kwargs={'device':device},
max_grad_grad_norm=max_grad_grad_norm)
final_loss = compute_loss(updated_model, (x_batch, y_batch), device=device)
final_loss.backward()
grads_naive = [params.grad for params in model.parameters()]
print("Loss naive: %.4f" % final_loss.item())
###Output
_____no_output_____
###Markdown
Example file meant to illustrate a basic use case for one of the Environments in ACME Gym.
###Code
import numpy as np
import gym
import acme_gym
def determine_step_size(mode, i, threshold=20):
"""
A helper function that determines the next action to take based on the designated mode.
Parameters
----------
mode (int)
Determines which option to choose.
i (int)
the current step number.
threshold (float)
The upper end of our control.
Returns
-------
decision (float)
The value to push/pull the cart by, positive values push to the right.
"""
if mode == 1:
return 0
if mode == 2:
return np.random.uniform(low=-threshold, high=threshold)
if mode == 3:
side = -1 if i%2 == 0 else 1
return threshold*side
if mode == 4:
inp_str = "Enter a float value from -{} to {}:\n".format(threshold, threshold)
return float(input(inp_str))
def run_gym_example():
"""
Implement as a function so that we can properly exit early if needed
without crashing our Kernel
"""
input_str = """Enter one of the following commands:
1) Do no action, ever
2) Choose the direction randomly
3) Alternate between left and right
4) Pick the direction at each state
5) Terminate
"""
mode = int(input(input_str))
if mode == 5:
return
env = gym.make('CartPoleContinuous-v0')
T = round(6/0.02)
# Initial state is drawn randomly, let the user pick a good starting point
init_state = True
while init_state:
obs = env.reset()
env.render()
print("X: {}, X': {}, θ: {}, θ': {}".format(obs[0], obs[1], obs[2], obs[3]))
init_state = input("Enter to begin simulation, anything else to pick new starting values:\n")
for i in range(T):
# Determine the step size based on our mode
step = np.array([determine_step_size(mode,i)])
# Step in designated direction and update the visual
obs, reward, state, info = env.step(step)
env.render()
if mode == 4:
exit = input("Enter q to exit, all else continues")
if exit == 'q':
env.close()
return
input("Enter any key to exit:")
env.close()
# WARNING! The pop up for rendering may not appear on the front of your screen
# check and see if it appeared underneath your files
run_gym_example()
###Output
_____no_output_____
###Markdown
Example of Correcting Seeing in Flare ObservationsJohn Armstrong, 08/12/2020The following notebook will demonstrate how to use the trained models from the MNRAS paper that can be downloaded as part of the v1.0 release or from Zenodo. Here we introduce the two ways to do inference with the trained models, which involve objects from the `inference.py` script: `Corrector` and `SpeedyCorrector`. The timings made using the magic method `%%time` are based on running the models on my 2017 13" MacBook Pro (non-touch bar) with timings from an NVIDIA Titan Xp quoted in the flavour text.
###Code
%matplotlib inline
import torch
from crispy.crisp import CRISP
from inference import *
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
There are two different ways to do the correction as stated above:1. `SpeedyCorrector`: this is the preferred GPU method for low-mid range GPUs as it utilises a [traced torchscript model](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html) with a fixed batch size of 16 to correct for seeing. This is typically faster on a GPU as it utilises torch's just-in-time (JIT) compiler to compile the network operations.2. `Corrector`: this is a normal class that invokes an instance of the full network and runs interpretively in Python. When the batch size needs to be altered or other scaling is needed, this is the route to go.Both methods have an error kwarg that can be assigned to add a previously pre-computed uncertainty to any estimations made.
###Code
sc_ha = SpeedyCorrector("traced_shaun.pt")
c_ha = Corrector(1,1,64,model_path="Halpha_final.pth")
###Output
loading model Halpha_final.pth
=> model loaded.
###Markdown
Next we load in the data using the [crispy](https://github.com/rhero12/crisPy2) package for optical imaging spectropolarimetric data. The first data we load is Hα. For this particular flare, the helioprojective plane is rotated with respect to the image plane meaning the observations have been rotated to be aligned with the helioprojective plane. This introduces a background padding that the network has not seen before. As such, we use the `rotate_crop` class method to obtain only the data from the cube with an accompanying dictionary added to transform the data back into the helioprojective frame.
###Code
c = CRISP("halpha_example.fits")
c[5].intensity_map()
c_rot, c_rot_dict = c.rotate_crop()
###Output
_____no_output_____
###Markdown
To correct for the seeing, we use the `mysticalman` class method for each of the types of correctors. This works by segmenting the image into 256 x 256 pixel tiles; each of these is corrected for the bad seeing before being mosaicked back together (this was a choice made in training due to limited GPU VRAM). The method takes a 3D data cube of the format (λ, y, x) as input and returns a cube of the same shape containing the corrected data.
###Code
%%time
out = sc_ha.mysticalman(c_rot)
%%time
out_slow = c_ha.mysticalman(c_rot)
###Output
Segmenting image cube: 100%|██████████| 15/15 [00:00<00:00, 41.10it/s]
###Markdown
The following is the result for correcting the Hα observation using both techniques. The images plotted correspond to Δλ = -0.4 Å from the line core of Hα.
###Code
fig = plt.figure(figsize=(14,10))
ax1 = fig.add_subplot(1,3,1)
ax1.imshow(c_rot[5], cmap="Greys_r", origin="lower")
ax1.set_title("Uncorrected")
ax2 = fig.add_subplot(1,3,2)
ax2.imshow(out[5], cmap="Greys_r", origin="lower")
ax2.set_title("Corrected using TorchScript")
ax3 = fig.add_subplot(1,3,3)
ax3.imshow(out_slow[5], cmap="Greys_r", origin="lower")
ax3.set_title("Corrected")
###Output
_____no_output_____
###Markdown
Next we will demonstrate the same principle but for the Ca II 8542 Å spectral line.
###Code
sc_ca = SpeedyCorrector("traced_shaun_ca8542.pt")
c_ca = Corrector(1,1,64,model_path="ca8542_final.pth")
ca = CRISP("ca8542_example.fits")
ca[10].intensity_map()
ca_rot, ca_rot_dict = ca.rotate_crop()
%%time
out_ca = sc_ca.mysticalman(ca_rot)
%%time
out_slow_ca = c_ca.mysticalman(ca_rot)
###Output
Segmenting image cube: 100%|██████████| 25/25 [00:00<00:00, 34.41it/s]
###Markdown
The images plotted correspond to Δλ = -0.1 Å from the line core of Ca II 8542 Å.
###Code
fig = plt.figure(figsize=(14,10))
ax1 = fig.add_subplot(1,3,1)
ax1.imshow(ca_rot[11], cmap="Greys_r", origin="lower")
ax1.set_title("Uncorrected")
ax2 = fig.add_subplot(1,3,2)
ax2.imshow(out_ca[11], cmap="Greys_r", origin="lower")
ax2.set_title("Corrected using TorchScript")
ax3 = fig.add_subplot(1,3,3)
ax3.imshow(out_slow_ca[11], cmap="Greys_r", origin="lower")
ax3.set_title("Corrected")
###Output
_____no_output_____
###Markdown
Plot Subpackage DescriptionThis file provides an example file for the functionality of the plot subpackage.The plot subpackage consists of two modules:**plotter.py**This module includes the Plotter parent class and includes the following:- Plotter(data, plot_title=None, label_names=None) - creates Plotter class object with input from a Pandas DataFrame- Plotter.add_title(title) - add or update the plot title- Plotter.add_label_names(x_label, y_label) - add or update plot x-axis and y-axis labels- Plotter.show_plot() - provides a plot of the Plotter class object- Plotter.save_plot(save_loc="", file_name=None)This Plotter class is not meant to be called directly but is used as the parent class for the grapher module.**grapher.py**This module includes the HistogramPlot, ScatterPlot, and ScatterMatrix child classes of the Plotter parent class as follows:- HistogramPlot(data, plot_title=None, label_names=None) - creates Histogram class object with input from a Pandas DataFrame - methods from Plotter class inherited- ScatterPlot(data, plot_title=None, label_names=None) - creates ScatterPlot class object with input from a Pandas DataFrame - methods from Plotter class inherited- ScatterMatrix(data, plot_title=None, label_names=None) - creates ScatterMatrix class object with input from a Pandas DataFrame - methods from Plotter class inherited ExampleExamples use the CarPrice.csv dataset saved in the data folder of this repo
###Code
# import packages
import pandas as pd
import quickscreen.plot.grapher as grapher
# load data
df = pd.read_csv("./data/CarPrice.csv")
# Histogram
hist_plot = grapher.HistogramPlot(data=df)
hist_plot.add_title("this is the title")
hist_plot.add_label_names("x axis", "y axis")
hist_plot.histogram("curbweight")
hist_plot.show_plot()
# Scatterplot
scatter_plot = grapher.ScatterPlot(data=df, plot_title = "this is the title", label_names = ("x axis", "y axis"))
scatter_plot.scatter("curbweight", "horsepower")
scatter_plot.show_plot()
# ScatterMatrix
scatter_matrix = grapher.ScatterMatrix(data=df)
scatter_matrix.scatter_matrix()
scatter_matrix.show_plot()
# save plot (MacOS only)
hist_plot = grapher.HistogramPlot(data=df)
hist_plot.add_title("this is the title")
hist_plot.add_label_names("x axis", "y axis")
hist_plot.histogram("curbweight")
# will also display the plot in a jupyter notebook
hist_plot.save_plot()
###Output
HistogramPlot_2020-12-02_13:31:28.826009
###Markdown
Summary Subpackage DescriptionThis provides examples of the modules and methods in the summary subpackage.The first module:**summary_classes.py**This module has the class:- Df_Info (df, type="columns") - Creates a Df_Info class object from a Pandas DataFrame. - This class is the building block for the Missing and Stats class.Which contains the methods:- Df_Info.total_max() - This returns the maximum value of the database.- Df_Info.total_min() - This returns the minimum value of the database.- Df_Info.total_mean() - This returns the average value of the database.- Df_Info.total_missing() - This returns the total amount of missing values. This module also has the class:- Missing (df, type="columns") - Creates a Missing class object from a Pandas DataFrame. - Inherits methods from the Df_Info class. - Returns the total missing values and the percentage of missing values for each column (or row if type specified "row") upon initializiation.This final class in this module is:- Stats(df, type="columns") - Creates a Stats class object from a Pandas Dataframe. - Inherits methods from the Df_Info class. - Returns the maximum, minimum, and average value for each column (or row if type specified "row") upon initialization. The second module:**summary_stats.py**This module has the methods:- missing_summary(df, type="columns") - Takes a Pandas Dataframe and generates a Missing class object from the summary_classes module. - Returns the total missing values and the percentage of missing values for each column (or row if type specified "row").- stats_summary(df, type="columns") - Takes a Pandas Dataframe and generates a Stats class object from the summary_classes module. - Returns the maximum, minimum, and average value for each column (or row if type specified "row").- all_summary(df, type="columns") - Takes a Pandas Dataframe and calls upon the missing_summary() and stats_summary methods.- simple_summary(df, type="columns") - Takes a Pandas Dataframe and generates a Df_Info class object from the summary_classes module. - Returns minimum, maximum, average, number of rows, number of columns, and number of missing values. ExamplesExamples use the CarPrice.csv dataset saved in the data folder of this repository
###Code
#import packages
import pandas as pd
import quickscreen.summary.summary_stats as ss
# load data
df = pd.read_csv("./data/Carprice.csv")
# missing summary
ss.missing_summary(df)
# stats summary
ss.stats_summary(df)
# stats summary by row
ss.stats_summary(df, "rows").head(5)
# all summary
ss.all_summary(df)
# simple summary
ss.simple_summary(df)
###Output
_____no_output_____
###Markdown
Analysis Subpackage DescriptionThis file provides an example file for the functionality of the analysis subpackage.The analysis subpackage consists of two modules:**datafill.py**This module includes the DataEdit parent class and includes the following:- DataEdit(data) - creates DataEdit class object with input from a Pandas DataFrame- DataEdit.display() - getter for the data attribute of the DataEdit instance- DataEdit.columntype(column) - returns the datatype of the column given (either as a column name or column index)- DataEdit.\_\_add__(other) - appends other (given as pandas.DataFrame) to DataEdit's data- DataEdit.\_\_sub__(other) - removes from DataEdit.data the rows it shares with other (other is given as a pandas.DataFrame object)- DataEdit.rm_duplicates() - removes duplicates from DataEdit.data- DataEdit.rm_nan() - removes rows that contain NaN/None values from DataEdit.data- DataEdit.quick_clean() - removes duplicate rows as well as removes rows with NaN/None values This DataEdit class is can be used directly or in conjunction with the Lm class.**linear_analysis.py**This module includes the Lm child class of the DataEdit parent class as follows:- Lm(data) - creates Lm class object with input from a Pandas DataFrame - methods from DataEdit class inherited- Lm.single_linear(predictor, estimator) - creates a single linear model between predictor and estimator (predictor is y, estimator is x) - returns the linear model's prediction on the estimator values- Lm.single_linear_plot(predictor, estimator) - creates a plot of predictor vs estimator, displays the data as well as the created best fit line- Lm.single_linear_eqn(predictor, estimator) - fits a single linear model to the predictor vs estimator - prints the equation of the line ExampleExamples use the CarPrice.csv dataset saved in the data folder of this repo
###Code
import pandas as pd
import numpy as np
import quickscreen.analysis.datafill as dfl
import quickscreen.analysis.linear_analysis as la
###Output
_____no_output_____
###Markdown
Initializing a DataEdit object
###Code
df = pd.read_csv("./data/CarPrice.csv")
de = dfl.DataEdit(df)
print(type(de).__name__)
###Output
DataEdit
###Markdown
Example of display
###Code
display(de.display().head())
###Output
_____no_output_____
###Markdown
Example of columntypeGetting the data type of a column using the index
###Code
print(de.data.columns[2])
print(de.columntype(2))
###Output
enginesize
int64
###Markdown
Getting the data type of a column using the column name
###Code
print(de.data.columns[2])
print(de.columntype("enginesize"))
###Output
enginesize
int64
###Markdown
Example of addition
###Code
# creating demo data
data1 = {
"a":[1,2,3],
"b":[11,12,13]
}
df = pd.DataFrame(data1, columns=["a", "b"])
de1 = dfl.DataEdit(df)
data2 = {
"a":[x for x in range(0, 3)],
"b":[2*x for x in range(0, 3)]
}
df2 = pd.DataFrame(data2, columns=["a", "b"])
# adding method
de2 = de1 + df2
display(de1.display())
display(df2.head(10))
display(de2.display())
###Output
_____no_output_____
###Markdown
Example of subtraction
###Code
# data set up
df = pd.read_csv("./data/CarPrice.csv")
de = dfl.DataEdit(df)
de1 = dfl.DataEdit(df)
# select a subset of 2 rows
two_row = (df.iloc[0:2])
# subtract method
d = de - two_row
print("number of rows before subtraction", df.shape[0])
print("number of rows after subtraction", d.data.shape[0])
print("we can see that indeed two rows have been subtracted from the data")
###Output
number of rows before subtraction 205
number of rows after subtraction 203
we can see that indeed two rows have been subtracted from the data
###Markdown
Example of dropping duplicates
###Code
# rm_duplicates example
data = {
"a":[1,2,3,4,5,6,4],
"b":[11,12,13,14,15,np.nan,14]
}
df = pd.DataFrame(data, columns=["a", "b"])
de = dfl.DataEdit(df)
print("before dropping duplicates")
print(de.data.head(10))
de_no_na = de.rm_duplicates()
print("\nafter dropping duplicates")
print(de_no_na.data.head(10))
###Output
before dropping duplicates
a b
0 1 11.0
1 2 12.0
2 3 13.0
3 4 14.0
4 5 15.0
5 6 NaN
6 4 14.0
after dropping duplicates
a b
0 1 11.0
1 2 12.0
2 3 13.0
3 4 14.0
4 5 15.0
5 6 NaN
###Markdown
We can see that the last row, a duplicate, has been removed Example of removing nan's
###Code
data = {
"a":[1,2,3,4,5,6,7],
"b":[11,12,13,14,15,np.nan,17]
}
df = pd.DataFrame(data, columns=["a", "b"])
de = dfl.DataEdit(df)
print(de.data.head(10))
print(" ")
de_no_na = de.rm_nan()
print(de_no_na.data.head(10))
###Output
a b
0 1 11.0
1 2 12.0
2 3 13.0
3 4 14.0
4 5 15.0
5 6 NaN
6 7 17.0
a b
0 1 11.0
1 2 12.0
2 3 13.0
3 4 14.0
4 5 15.0
6 7 17.0
###Markdown
We can see that the 5th row that contained a NaN has been removed Example of quick clean
###Code
data = {
"a":[1,2,3,4,5,6,4],
"b":[11,12,13,14,15,np.nan,14]
}
df = pd.DataFrame(data, columns=["a", "b"])
de = dfl.DataEdit(df)
print(de.data.head(10))
print(" ")
de_no_na = de.quick_clean()
print(de_no_na.data.head(10))
###Output
a b
0 1 11.0
1 2 12.0
2 3 13.0
3 4 14.0
4 5 15.0
5 6 NaN
6 4 14.0
a b
0 1 11.0
1 2 12.0
2 3 13.0
3 4 14.0
4 5 15.0
###Markdown
We can see that the duplicate and the NaN rows have been removed Example of initializing the linear model class
###Code
df = pd.read_csv("./data/CarPrice.csv")
lm = la.Lm(df)
###Output
_____no_output_____
###Markdown
Example of single linear regression
###Code
slr = lm.single_linear("horsepower", "enginesize")
print(slr[:5])
###Output
[[106.49522718]
[106.49522718]
[123.41237951]
[ 90.34703631]
[111.108996 ]]
###Markdown
Example of single linear plot
###Code
lm.single_linear_plot("horsepower", "enginesize")
###Output
_____no_output_____
###Markdown
Example of getting the parameters of a single linear regression model
###Code
lm.single_linear_eqn("horsepower", "enginesize")
###Output
y=0.77x+6.53
###Markdown
Setup
###Code
! [[ -d box-unet ]] || git clone --quiet https://github.com/sdll/box-unet.git
%cd box-unet
! [[ -f data.zip ]] || wget https://www.dropbox.com/s/m1ie2zq8nkburar/data.zip?raw=1 -O data.zip && unzip data.zip
! pip install -q gsheet-keyring ipython-secrets comet_ml tqdm
! python3 -m pip install -q git+https://github.com/shrubb/box-convolutions.git
###Output
Building wheel for box-convolution (setup.py) ... [?25l[?25hdone
###Markdown
Imports
###Code
from comet_ml import Experiment
import argparse
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
import torch.nn.functional as F
from tqdm import tqdm as tqdm_base
from box_unet import BoxUNet as Model
from ipython_secrets import get_secret
from pytorch_ssim import ssim
from timeit import default_timer as timer
sns.set()
def tqdm(*args, **kwargs):
if hasattr(tqdm_base, "_instances"):
for instance in list(tqdm_base._instances):
tqdm_base._decr_instances(instance)
return tqdm_base(*args, **kwargs)
###Output
_____no_output_____
###Markdown
Environment
###Code
DATA_PATH = "data"
GROUND_TRUTH_LABEL = "ground_truth"
NOISY_IMAGES_LABEL = "noisy"
TRAIN_LABEL = "train"
TEST_LABEL = "val"
TRAIN_POSTFIX = "normed_crops.33.tensor"
TEST_POSTFIX = "normalized_data.tensor"
TRAIN_GT_DATA = Path(DATA_PATH) / TRAIN_LABEL / GROUND_TRUTH_LABEL / TRAIN_POSTFIX
TRAIN_NOISY_DATA = Path(DATA_PATH) / TRAIN_LABEL / NOISY_IMAGES_LABEL / TRAIN_POSTFIX
TEST_GT_DATA = Path(DATA_PATH) / TEST_LABEL / GROUND_TRUTH_LABEL / TEST_POSTFIX
TEST_NOISY_DATA = Path(DATA_PATH) / TEST_LABEL / NOISY_IMAGES_LABEL / TEST_POSTFIX
PATCH_SIZE = (33, 33)
N_CROPS = 64
DEVICE = "cuda"
PROJECT = "fastrino"
COMET_ML_API_KEY = get_secret("comet-{}".format(PROJECT))
experiment = Experiment(
api_key=COMET_ML_API_KEY,
project_name=PROJECT,
workspace=PROJECT,
auto_output_logging=None,
)
###Output
COMET INFO: ----------------------------
COMET INFO: Comet.ml Experiment Summary:
COMET INFO: Data:
COMET INFO: url: https://www.comet.ml/fastrino/fastrino/64bf84ce12ed4dd09e69ab952575767f
COMET INFO: Metrics [count] (min, max):
COMET INFO: sys.cpu.percent.01 : (9.3, 9.3)
COMET INFO: sys.cpu.percent.02 : (7.8, 7.8)
COMET INFO: sys.cpu.percent.avg : (8.55, 8.55)
COMET INFO: sys.gpu.0.free_memory : (14872870912.0, 14872870912.0)
COMET INFO: sys.gpu.0.gpu_utilization: (0.0, 0.0)
COMET INFO: sys.gpu.0.total_memory : (17071734784.0, 17071734784.0)
COMET INFO: sys.gpu.0.used_memory : (2198863872.0, 2198863872.0)
COMET INFO: sys.ram.total : (13655232512.0, 13655232512.0)
COMET INFO: sys.ram.used : (6161641472.0, 6161641472.0)
COMET INFO: ----------------------------
COMET INFO: Experiment is live on comet.ml https://www.comet.ml/fastrino/fastrino/73dc5a6c39814356941977e22ffde4e3
###Markdown
Utilities
###Code
def get_arg_parser():
parser = argparse.ArgumentParser()
parser.add_argument("--max-input-h", type=int, default=64)
parser.add_argument("--max-input-w", type=int, default=64)
parser.add_argument("--lr", type=float, default=1e-4)
parser.add_argument("--batch-size", type=int, default=32)
parser.add_argument("--num-epochs", type=int, default=5)
parser.add_argument("--seed", type=int, default=42)
return parser
def get_criterion():
return nn.MSELoss()
def get_optimizer(model, lr=0.001):
return optim.Adam(model.parameters(), lr)
def psnr(prediction, target, max_pixel=255.0):
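    # PSNR = 10 * log10(MAX^2 / MSE), computed with torch tensor ops (.mean(), .log10())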
return 10.0 * ((max_pixel ** 2) / ((prediction - target) ** 2).mean()).log10()
def compute_padding(img_shape, padding_shape):
"""
x -> dim=-2
y -> dim=-1
"""
return_pad = [0, 0, 0, 0]
*_, im_x, im_y = img_shape
pad_x, pad_y = padding_shape
if (pad_x - (im_x % pad_x)) % 2 == 0:
return_pad[2] = (pad_x - (im_x % pad_x)) // 2
return_pad[3] = (pad_x - (im_x % pad_x)) // 2
else:
return_pad[2] = (pad_x - (im_x % pad_x)) // 2
return_pad[3] = (pad_x - (im_x % pad_x)) // 2 + 1
if (pad_y - (im_y % pad_y)) % 2 == 0:
return_pad[0] = (pad_y - (im_y % pad_y)) // 2
return_pad[1] = (pad_y - (im_y % pad_y)) // 2
else:
return_pad[0] = (pad_y - (im_y % pad_y)) // 2
return_pad[1] = (pad_y - (im_y % pad_y)) // 2 + 1
return return_pad
def split_image(image, patch_size=PATCH_SIZE, n_crops=N_CROPS):
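    # Pad the image so its spatial dims are multiples of the patch size, cut it into
    # (p_x, p_y) patches, and group the patches into chunks of n_crops for batched inference.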
p_x, p_y = patch_size
image = F.pad(
image,
compute_padding(image.shape, patch_size),
mode="constant",
value=image.mean(),
)
splits = torch.split(torch.stack(torch.split(image, p_x)), p_y, dim=-1)
crops = torch.stack(splits, dim=-1)
crops = crops.view(-1, 1, p_x, p_y)
crops = torch.split(crops, n_crops, dim=0)
return crops, image.shape
def combine_crops(crops, shape):
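    # Concatenate the predicted crops and reshape them back to the padded image shape.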
combined = torch.cat(
crops, dim=0
)
return combined.view(*shape)
def predict_image(model, image, patch_size=PATCH_SIZE, n_crops=N_CROPS):
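    # The model predicts the noise of each crop; subtracting it from the crop gives the
    # denoised patch, and the patches are reassembled into the full (padded) image.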
crops, shape = split_image(image, patch_size, n_crops)
return combine_crops(
[crop - model(crop.to(DEVICE)).data for crop in crops],
shape
)
class PlaneLoader(torch.utils.data.Dataset):
def __init__(self, gt_data, noisy_data):
self.gt_data = torch.load(gt_data)
self.noisy_data = torch.load(noisy_data)
def __len__(self):
return len(self.noisy_data)
def __getitem__(self, index):
noisy_image = self.noisy_data[index]
gt_image = self.gt_data[index]
noise = noisy_image - gt_image
return (
noisy_image,
noise
)
def train(experiment):
parser = get_arg_parser()
args = parser.parse_args(args=[])
train_loader = torch.utils.data.DataLoader(
PlaneLoader(TRAIN_GT_DATA, TRAIN_NOISY_DATA),
batch_size=args.batch_size,
shuffle=True,
)
image, noise = next(iter(train_loader))
args.in_channels = 1 if len(image.shape) == 3 else image.shape[1]
experiment.log_parameters(vars(args))
model = Model(
args.in_channels, args.in_channels, args.max_input_h, args.max_input_w,
).to(DEVICE)
criterion = get_criterion()
optimizer = get_optimizer(model, args.lr)
for epoch in tqdm(range(args.num_epochs), desc="Epoch", unit="epochs"):
with experiment.train():
model.train()
train_psnr = []
train_ssim = []
for image, noise in tqdm(train_loader, desc="Train images", unit="images"):
image = image.to(DEVICE)
noise = noise.to(DEVICE)
prediction = model(image)
loss = criterion(prediction, noise)
loss.backward()
optimizer.step()
optimizer.zero_grad()
current_psnr = psnr(image - prediction, image - noise).data.item()
current_ssim = ssim(image - prediction, image - noise).data.item()
train_psnr.append(current_psnr)
train_ssim.append(current_ssim)
experiment.log_metric("psnr", current_psnr)
experiment.log_metric("ssim", current_ssim)
experiment.log_metric("loss", loss.data.item())
experiment.log_metric("mean_psnr", np.mean(train_psnr))
experiment.log_metric("mean_ssim", np.mean(train_ssim))
return model
def test(experiment, model, patch_size=PATCH_SIZE, n_crops=N_CROPS):
test_loader = torch.utils.data.DataLoader(
PlaneLoader(TEST_GT_DATA, TEST_NOISY_DATA),
batch_size=1,
shuffle=False,
)
with experiment.test():
model.eval()
test_psnr = []
test_ssim = []
test_prediction_times = []
for image, noise in test_loader:
image = image.to(DEVICE)
noise = noise.to(DEVICE)
start = timer()
prediction = predict_image(model, image, patch_size, n_crops)
end = timer()
prediction_time = end - start
test_prediction_times.append(prediction_time)
experiment.log_metric("prediction_time", prediction_time)
gt_image = image - noise
gt_image_crops, gt_shape = split_image(gt_image)
gt_image = combine_crops(gt_image_crops, gt_shape)
assert (
gt_image.shape == prediction.shape
), "Prediction and ground truth do not match in size, aborting."
if len(gt_image.shape) == 3:
gt_image = gt_image[:, None, :, :]
prediction = prediction[:, None, :, :]
current_psnr = psnr(prediction, gt_image).data.item()
current_ssim = ssim(prediction, gt_image).data.item()
test_psnr.append(current_psnr)
test_ssim.append(current_ssim)
test_psnr = np.mean(test_psnr)
test_ssim = np.mean(test_ssim)
test_prediction_time = np.mean(test_prediction_times)
experiment.log_metric("mean_psnr", test_psnr)
experiment.log_metric("mean_ssim", test_ssim)
experiment.log_metric("mean_prediction_time", test_prediction_time)
return test_psnr, test_ssim, test_prediction_time
model = train(experiment)
test_psnr, test_ssim, test_prediction_time = test(experiment, model)
print(
"Mean Test PSNR: {}\nMean Test SSIM: {}\nMean Prediction Time: {}".format(
test_psnr, test_ssim, test_prediction_time
)
)
train_loader = torch.utils.data.DataLoader(
PlaneLoader(TRAIN_GT_DATA, TRAIN_NOISY_DATA),
batch_size=1,
shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
PlaneLoader(TEST_GT_DATA, TEST_NOISY_DATA),
batch_size=1,
shuffle=False)
train_it = iter(train_loader)
test_it = iter(test_loader)
image, noise = next(test_it)
fig = plt.figure(figsize=(17, 8))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.imshow((image - noise).squeeze(), interpolation='nearest', aspect='auto')
ax2.imshow(predict_image(model, image.to(DEVICE)).to("cpu").squeeze(),
interpolation='nearest', aspect='auto')
plt.show()
image.shape
###Output
_____no_output_____
###Markdown
------------------------------------
###Code
pxd = PixelDrill(nthreads=32)
%%time
pix1 = pxd.read(urls, pixel=test_coords['pixel'])
%%time
pix2 = pxd.read(urls, xy=test_coords['xy'])
plt.plot(pix1, 'ks', pix2, 'y.');
###Output
_____no_output_____
###Markdown
Serialize Class to TensorFlow Graph Francesco Saverio Zuppichini. Would it be cool to automatically bind class fields to tensorflow variables in a graph and restore them without manually getting each variable back from it? Imagine you have a `Model` class
###Code
import tensorflow as tf
class Model():
def __init__(self):
self.variable = None
def __call__(self):
self.variable = tf.Variable([1], name='variable')
###Output
/usr/local/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Usually, you first **build** your model and then you **train** it. After that, you want to **get** the old variables from the saved graph without rebuilding the whole model from scratch.
###Code
tf.reset_default_graph()
model = Model()
model() # now model.variable exists
print(model.variable)
###Output
<tf.Variable 'variable:0' shape=(1,) dtype=int32_ref>
###Markdown
Now, imagine we have just trained our model and we want to store it. The usual pattern is
###Code
EPOCHS = 10
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(EPOCHS):
# train
pass
saver.save(sess,'/tmp/model.ckpt')
###Output
_____no_output_____
###Markdown
Now you want to perform **inference**, aka get your stuff back, by loading the stored graph. In our case, we want the variable named `variable`
###Code
# reset the graph
tf.reset_default_graph()
with tf.Session() as sess:
saver = tf.train.import_meta_graph("{}.meta".format('/tmp/model.ckpt'))
saver.restore(sess, '/tmp/model.ckpt')
###Output
INFO:tensorflow:Restoring parameters from /tmp/model.ckpt
###Markdown
Now we can get back our `variable` from the graph
###Code
graph = tf.get_default_graph()
variable = graph.get_operation_by_name('variable')
print(variable)
###Output
name: "variable"
op: "VariableV2"
attr {
key: "container"
value {
s: ""
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
}
}
}
attr {
key: "shared_name"
value {
s: ""
}
}
###Markdown
But, what if we want to use our `model` class again? If we now try to call `model.variable`, we get `None`
###Code
model = Model() # recreate the model
print(model.variable)
###Output
None
###Markdown
One solution is to **build the whole model again** and restore the graph after that
###Code
# reset the graph
tf.reset_default_graph()
with tf.Session() as sess:
model = Model()
model()
saver = tf.train.import_meta_graph("{}.meta".format('/tmp/model.ckpt'))
saver.restore(sess, '/tmp/model.ckpt')
print(model.variable)
###Output
INFO:tensorflow:Restoring parameters from /tmp/model.ckpt
<tf.Variable 'variable:0' shape=(1,) dtype=int32_ref>
###Markdown
You can already see that this is a big waste of time. We can bind `model.variable` directly to the correct graph node by
###Code
model = Model()
model.variable = graph.get_operation_by_name('variable')
print(model.variable)
###Output
name: "variable"
op: "VariableV2"
attr {
key: "container"
value {
s: ""
}
}
attr {
key: "dtype"
value {
type: DT_INT32
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
}
}
}
attr {
key: "shared_name"
value {
s: ""
}
}
###Markdown
Now imagine we have a very big model with nested variables. In order to correctly restore each variable pointer in the model you need to: * name each variable * get the variables back from the graph. Would it be cool if we could automatically retrieve all the variables set as fields in the Model class? TFGraphConvertible I have created a class called `TFGraphConvertible`. You can use the `TFGraphConvertible` to automatically **serialize** and **deserialize** a class. Let's recreate our model
###Code
from TFGraphConvertible import TFGraphConvertible
class Model(TFGraphConvertible):
def __init__(self):
self.variable = None
def __call__(self):
self.variable = tf.Variable([1], name='variable')
model = Model()
model()
###Output
_____no_output_____
###Markdown
It exposes two methods: `to_graph` and `from_graph` Serialize - to_graph In order to **serialize a class** you can call the **to_graph** method, which creates a dictionary of field names -> tensorflow variable names. You need to pass a `fields` argument, a dictionary of the fields we want to serialize. In our case, we can just pass all of them.
###Code
serialized_model = model.to_graph(model.__dict__)
print(serialized_model)
###Output
{'variable': 'variable_2:0'}
###Markdown
It will create a dictionary with all the fields as keys and the corresponding tensorflow variables name as values Deserialize - from_graph In order to **deserialize a class** you can call the **from_graph** method that takes the previous created dictionary and bind each class fields to the correct tensorflow variables
###Code
model = Model() # simulate an empty model
print(model.variable)
model.from_graph(serialized_model, tf.get_default_graph())
model.variable # now it exists again
###Output
None
###Markdown
And now you have your `model` back! Full Example Let's see a more interesting example! We are going to train/restore a model for the MNIST dataset
###Code
class MNISTModel(Model):
def __call__(self, x, y, lr=0.001):
self.x = tf.cast(x, tf.float32)
self.x = tf.expand_dims(self.x, axis=-1) # add grey channel
self.lr = lr
self.y = tf.one_hot(y, N_CLASSES, dtype=tf.float32)
out = tf.layers.Conv2D(filters=32, kernel_size=5, activation=tf.nn.relu, padding="same", )(self.x)
out = tf.layers.MaxPooling2D(2, strides=2)(out)
out = tf.layers.Dropout(0.2)(out)
out = tf.layers.Conv2D(filters=64, kernel_size=5, activation=tf.nn.relu, padding="same", )(out)
out = tf.layers.MaxPooling2D(2, strides=2)(out)
out = tf.layers.Dropout(0.2)(out)
out = tf.layers.flatten(out)
out = tf.layers.Dense(units=512, activation=tf.nn.relu)(out)
out = tf.layers.Dropout(0.2)(out)
self.forward_raw = tf.layers.Dense(units=N_CLASSES)(out)
        forward = tf.nn.softmax(self.forward_raw)
self.accuracy = tf.reduce_mean(
tf.cast(tf.equal(tf.argmax(self.forward_raw, -1), tf.argmax(self.y, -1)), tf.float32))
self.loss = self.get_loss()
self.train_step = self.get_train()
return forward
def get_loss(self):
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=self.y, logits=self.forward_raw))
return loss
def get_train(self):
return tf.train.AdamOptimizer(self.lr).minimize(self.loss)
mnist_model = MNISTModel()
###Output
_____no_output_____
###Markdown
Let's get the dataset!
###Code
from keras.datasets import mnist
tf.reset_default_graph()
N_CLASSES = 10
train, test = mnist.load_data()
x_, y_ = tf.placeholder(tf.float32, shape=[None, 28, 28]), tf.placeholder(tf.uint8, shape=[None])
train_dataset = tf.data.Dataset.from_tensor_slices((x_, y_)).batch(64).shuffle(10000).repeat()
test_dataset = tf.data.Dataset.from_tensor_slices((x_, y_)).batch(64).repeat()
iter = tf.data.Iterator.from_structure(train_dataset.output_types,
train_dataset.output_shapes)
x, y = iter.get_next(name='iter_next')
train_init_op = iter.make_initializer(train_dataset)
test_init_op = iter.make_initializer(test_dataset)
###Output
Using TensorFlow backend.
###Markdown
Now it is time to train it
###Code
with tf.Session() as sess:
mnist_model(x, y) # build the model
sess.run(tf.global_variables_initializer())
sess.run(train_init_op, feed_dict={x_: train[0], y_: train[1]})
saver = tf.train.Saver()
for i in range(150):
acc, _ = sess.run([mnist_model.accuracy, mnist_model.train_step])
if i % 15 == 0:
print(acc)
saver.save(sess,'/tmp/model.ckpt')
###Output
0.125
0.46875
0.8125
0.953125
0.828125
0.890625
0.796875
0.9375
0.953125
0.921875
###Markdown
Perfect! Let's store the serialized model in memory
###Code
serialized_model = mnist_model.to_graph(mnist_model.__dict__)
print(serialized_model)
###Output
{'x': 'ExpandDims:0', 'y': 'one_hot:0', 'forward_raw': 'dense_1/BiasAdd:0', 'accuracy': 'Mean:0', 'loss': 'Mean_1:0', 'train_step': 'Adam'}
###Markdown
Then we reset the graph and recreate the model
###Code
tf.reset_default_graph()
mnist_model = MNISTModel()
with tf.Session() as sess:
saver = tf.train.import_meta_graph("{}.meta".format('/tmp/model.ckpt'))
saver.restore(sess, '/tmp/model.ckpt')
graph = tf.get_default_graph()
###Output
INFO:tensorflow:Restoring parameters from /tmp/model.ckpt
###Markdown
Of course, our variables in the `mnist_model` do not exist
###Code
mnist_model.accuracy
###Output
_____no_output_____
###Markdown
Let's recreate them by calling the `from_graph` method.
###Code
mnist_model.from_graph(serialized_model, tf.get_default_graph())
mnist_model.accuracy
###Output
_____no_output_____
###Markdown
Now `mnist_model` is ready to go, let's see the accuracy on a batch of the test set
###Code
with tf.Session() as sess:
saver = tf.train.import_meta_graph("{}.meta".format('/tmp/model.ckpt'))
saver.restore(sess, '/tmp/model.ckpt')
graph = tf.get_default_graph()
x, y = graph.get_tensor_by_name('iter_next:0'), graph.get_tensor_by_name('iter_next:1')
print(sess.run(mnist_model.accuracy, feed_dict={x: test[0][0:64], y: test[1][0:64]}))
###Output
INFO:tensorflow:Restoring parameters from /tmp/model.ckpt
1.0
###Markdown
Part 0 Create a **Pong** environment and import the required libraries
###Code
from advertorch.attacks import *
from atari_wrapper import wrap_deepmind
import copy
import torch
from drl_attacks.uniform_attack import uniform_attack_collector
from utils import A2CPPONetAdapter
def make_atari_env_watch(env_name):
return wrap_deepmind(env_name, frame_stack=4,
episode_life=False, clip_rewards=False)
# define Pong Atari environment
env = make_atari_env_watch("PongNoFrameskip-v4")
state_shape = env.observation_space.shape or env.observation_space.n
action_shape = env.env.action_space.shape or env.env.action_space.n
device = 'cuda' if torch.cuda.is_available() else 'cpu'
###Output
_____no_output_____
###Markdown
Part 1 Attack the **Pong-PPO** policy with **Uniform Attack** at 3 different attack frequencies: 0, 0.5, and 1.
###Code
# load pretrained Pong-PPO policy
ppo_pong_path = "log/PongNoFrameskip-v4/ppo/policy.pth"
ppo_policy, _ = torch.load(ppo_pong_path)
ppo_policy.to(device).init(device)
# adapt PPO policy to Advertorch library
ppo_adv_net = A2CPPONetAdapter(copy.deepcopy(ppo_policy)).to(device)
ppo_adv_net.eval()
# define image adversarial attack
eps = 0.1
obs_adv_atk = GradientSignAttack(ppo_adv_net, eps=eps*255,
clip_min=0, clip_max=255, targeted=False)
# define RL adversarial attack
collector = uniform_attack_collector(ppo_policy, env, obs_adv_atk,
perfect_attack=False,
atk_frequency=0.5,
device=device)
# perform uniform attack with attack frequency of 0.5
collector.atk_frequency = 0.5
test_adversarial_policy = collector.collect(n_episode=10)
avg_atk_rate = test_adversarial_policy['atk_rate(%)']
avg_rew = test_adversarial_policy['rew']
avg_num_atks = test_adversarial_policy['n_atks']
avg_succ_atks_rate = test_adversarial_policy['succ_atks(%)']
print("attack frequency (%) =", avg_atk_rate)
print("number of attacks =", avg_num_atks)
print("number of successful attacks (%) =", avg_succ_atks_rate)
print("reward =", avg_rew)
# perform uniform attack with attack frequency of 1
collector.atk_frequency = 1.
test_adversarial_policy = collector.collect(n_episode=10)
avg_atk_rate = test_adversarial_policy['atk_rate(%)']
avg_rew = test_adversarial_policy['rew']
avg_num_atks = test_adversarial_policy['n_atks']
avg_succ_atks_rate = test_adversarial_policy['succ_atks(%)']
print("attack frequency (%) =", avg_atk_rate)
print("number of attacks =", avg_num_atks)
print("number of successful attacks (%) =", avg_succ_atks_rate)
print("reward =", avg_rew)
# perform uniform attack with attack frequency of 0. (no attack is performed)
collector.atk_frequency = 0.
test_adversarial_policy = collector.collect(n_episode=10)
avg_atk_rate = test_adversarial_policy['atk_rate(%)']
avg_rew = test_adversarial_policy['rew']
avg_num_atks = test_adversarial_policy['n_atks']
avg_succ_atks_rate = test_adversarial_policy['succ_atks(%)']
print("attack frequency (%) =", avg_atk_rate)
print("number of attacks =", avg_num_atks)
print("number of successful attacks (%) =", avg_succ_atks_rate)
print("reward =", avg_rew)
###Output
attack frequency (%) = 0.0
number of attacks = 0.0
number of successful attacks (%) = 0
reward = 20.8
###Markdown
Part 2 Attack the **Pong-PPO** policy with **Uniform Attack** at an attack frequency of 0.5. Moreover, let's suppose we don't know that the agent's policy is PPO, and let's craft the attacks with an **A2C** policy trained on the same environment.
###Code
# load pretrained Pong-A2C policy
a2c_pong_path = "log/PongNoFrameskip-v4/a2c/policy.pth"
a2c_policy, _ = torch.load(a2c_pong_path)
a2c_policy.to(device).init(device)
# adapt A2C policy to Advertorch library
a2c_adv_net = A2CPPONetAdapter(copy.deepcopy(a2c_policy)).to(device)
a2c_adv_net.eval()
# define image adversarial attack
eps = 0.1
obs_adv_atk = GradientSignAttack(a2c_adv_net, eps=eps*255,
clip_min=0, clip_max=255, targeted=False)
# define RL adversarial attack
collector = uniform_attack_collector(ppo_policy, env, obs_adv_atk,
perfect_attack=False,
atk_frequency=0.5,
device=device)
# perform uniform attack with attack frequency of 0.5
collector.atk_frequency = 0.5
test_adversarial_policy = collector.collect(n_episode=10)
avg_atk_rate = test_adversarial_policy['atk_rate(%)']
avg_rew = test_adversarial_policy['rew']
avg_num_atks = test_adversarial_policy['n_atks']
avg_succ_atks_rate = test_adversarial_policy['succ_atks(%)']
print("attack frequency (%) =", avg_atk_rate)
print("number of attacks =", avg_num_atks)
print("number of successful attacks (%) =", avg_succ_atks_rate)
print("reward =", avg_rew)
###Output
attack frequency (%) = 0.5018479033404406
number of attacks = 706.1
number of successful attacks (%) = 0.7777935136666194
reward = -17.1
###Markdown
Adaptive-scheduler example [Read the documentation](https://adaptive-scheduler.readthedocs.io/en/latest/what-is-this) to see what this is all about. Step 1: define the simulation Often one wants to sweep a continuous 1D or 2D space for multiple parameters. [Adaptive](http://adaptive.readthedocs.io) is the ideal program to do this. We define a simulation by creating several `adaptive.Learners`. We **need** to define the following variables: * `learners`, a list of learners * `fnames`, a list of file names, one for each learner
###Code
%%writefile learners_file.py
import adaptive
from functools import partial
def h(x, width=0.01, offset=0):
import numpy as np
import random
for _ in range(10): # Burn some CPU time just because
np.linalg.eig(np.random.rand(1000, 1000))
a = width
return x + a ** 2 / (a ** 2 + (x - offset) ** 2)
offsets = [i / 10 - 0.5 for i in range(5)]
combos = adaptive.utils.named_product(offset=offsets, width=[0.01, 0.05])
learners = []
fnames = []
for combo in combos:
f = partial(h, **combo)
learner = adaptive.Learner1D(f, bounds=(-1, 1))
fnames.append(f"data/{combo}")
learners.append(learner)
###Output
_____no_output_____
###Markdown
Step 2: run the `learners_file` After defining the `learners` and `fnames` in a file (above) we can start to run these learners. We split up all learners into separate jobs; all you need to do is specify how many cores per job you want. Simple example
###Code
import adaptive_scheduler
def goal(learner):
return learner.npoints > 200
scheduler = adaptive_scheduler.scheduler.DefaultScheduler(
cores=10,
executor_type="ipyparallel",
) # PBS or SLURM
run_manager = adaptive_scheduler.server_support.RunManager(
scheduler=scheduler,
learners_file="learners_file.py",
goal=goal,
log_interval=30,
save_interval=30,
)
run_manager.start()
# See the current queue with
import pandas as pd
queue = scheduler.queue()
df = pd.DataFrame(queue).transpose()
df.head()
# Read the logfiles and put it in a `pandas.DataFrame`.
# This only returns something when there are log-files to parse!
# So after `run_manager.log_interval` has passed.
df = run_manager.parse_log_files()
df.head()
# See the database
df = run_manager.get_database() # or see `run_manager.database_manager.as_dict()`
df.head()
# After the calculation started and some data has been saved, we can display the learners
import adaptive
adaptive.notebook_extension()
learners = run_manager.learners_module.learners # or `from learners_file import learners`
combos = run_manager.learners_module.combos # or `from learners_file import combos`
run_manager.load_learners()
learner = adaptive.BalancingLearner(learners, cdims=combos)
learner.plot()
###Output
_____no_output_____
###Markdown
Simple sequential example Sometimes you cannot formulate your problem with Adaptive; instead you just want to run a function for a sequence of parameters. Surprisingly, this approach with a `SequenceLearner` [is slightly faster than `ipyparallel.Client.map`](https://github.com/python-adaptive/adaptive/pull/193#issuecomment-491062073).
###Code
%%writefile learners_file_sequence.py
import numpy as np
from adaptive import SequenceLearner
from adaptive_scheduler.utils import split, combo_to_fname
def g(xyz):
x, y, z = xyz
for _ in range(5): # Burn some CPU time just because
np.linalg.eig(np.random.rand(1000, 1000))
return x ** 2 + y ** 2 + z ** 2
xs = np.linspace(0, 10, 11)
ys = np.linspace(-1, 1, 11)
zs = np.linspace(-3, 3, 11)
xyzs = [(x, y, z) for x in xs for y in ys for z in zs]
# We have only one learner so one fname
learners = [SequenceLearner(g, sequence=xyzs)]
fnames = ['data/xyzs']
import adaptive_scheduler
def goal(learner):
return learner.done()
scheduler = adaptive_scheduler.scheduler.DefaultScheduler(
cores=10,
executor_type="ipyparallel",
) # PBS or SLURM
run_manager2 = adaptive_scheduler.server_support.RunManager(
scheduler=scheduler,
learners_file="learners_file_sequence.py",
goal=goal,
log_interval=30,
save_interval=30,
)
run_manager2.start()
run_manager2.load_learners()
learner = run_manager2.learners_module.learners[0]
try:
result = learner.result()
print(result)
except:
print('`learner.result()` is only available when all values are calculated.')
partial_data = learner.data
print(partial_data)
###Output
_____no_output_____
###Markdown
Extended example This example shows how to split up a list into 100 `SequenceLearner`s and run them in 100 jobs.
###Code
%%writefile learners_file_sequence2.py
import numpy as np
from adaptive import SequenceLearner
from adaptive_scheduler.utils import split, combo_to_fname
from adaptive.utils import named_product
def g(combo):
x, y, z = combo['x'], combo['y'], combo['z']
for _ in range(5): # Burn some CPU time just because
np.linalg.eig(np.random.rand(1000, 1000))
return x ** 2 + y ** 2 + z ** 2
combos = named_product(x=np.linspace(0, 10), y=np.linspace(-1, 1), z=np.linspace(-3, 3))
print(f"Length of combos: {len(combos)}.")
# We could run this as 1 job with N nodes, but we can also split it up in multiple jobs.
# This is desirable when you don't want to run a single job with, for example, 300 nodes.
njobs = 100
split_combos = list(split(combos, njobs))
print(f"Length of split_combos: {len(split_combos)} and length of split_combos[0]: {len(split_combos[0])}.")
learners, fnames = [], []
learners = [SequenceLearner(g, combos_part) for combos_part in split_combos]
fnames = [combo_to_fname(combos_part[0], folder="data") for combos_part in split_combos]
###Output
_____no_output_____
###Markdown
We now start the `RunManager` with a lot of arguments to showcase some of the options you can use to customize your run.
###Code
from functools import partial
import adaptive_scheduler
from adaptive_scheduler.scheduler import DefaultScheduler, PBS, SLURM
def goal(learner):
return learner.done() # the standard goal for a SequenceLearner
extra_scheduler = ["--exclusive", "--time=24:00:00"] if DefaultScheduler is SLURM else []
scheduler = adaptive_scheduler.scheduler.DefaultScheduler(
cores=10,
executor_type="ipyparallel",
extra_scheduler=extra_scheduler,
extra_env_vars=["PYTHONPATH='my_dir:$PYTHONPATH'"],
python_executable="~/miniconda3/bin/python",
log_folder="logs",
) # PBS or SLURM
run_manager3 = adaptive_scheduler.server_support.RunManager(
scheduler,
goal=goal,
log_interval=10,
save_interval=30,
runner_kwargs=dict(retries=5, raise_if_retries_exceeded=False),
kill_on_error="srun: error:", # cancel a job if this is inside a log
learners_file="learners_file_sequence2.py", # the file that has `learners` and `fnames`
job_name="example-sequence", # this is used to generate unqiue job names
db_fname="example-sequence.json", # the database keeps track of job_id <-> (learner, is_done)
start_job_manager_kwargs=dict(
max_fails_per_job=10, # the RunManager is cancelled after njobs * 10 fails
max_simultaneous_jobs=300, # limit the amount of simultaneous jobs
),
)
run_manager3.start()
df = run_manager3.parse_log_files()
df.head()
run_manager3.load_learners() # load the data into the learners
learners = run_manager3.learners_module.learners
result = sum([l.result() for l in learners], []) # combine all learner's result into 1 list
###Output
_____no_output_____
###Markdown
GradCAM
###Code
partial_gradcam_analyzer = GradCAM(
model=partial_model,
target_id=target_class,
layer_name=target_layer,
relu=use_relu,
)
analysis_partial_grad_cam = partial_gradcam_analyzer.analyze(input_imgs)
heatmap(analysis_partial_grad_cam[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
Guided Back Propagation
###Code
guidedbackprop_analyzer = GBP(
partial_model,
target_id=target_class,
relu=use_relu,
)
analysis_guidedbackprop = guidedbackprop_analyzer.analyze(input_imgs)
heatmap(analysis_guidedbackprop[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
GuidedGradCAM
###Code
guidedgradcam_analyzer = GuidedGradCAM(
partial_model,
target_id=target_class,
layer_name=target_layer,
relu=False,
)
analysis_guidedgradcam = guidedgradcam_analyzer.analyze(input_imgs)
heatmap(analysis_guidedgradcam[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
LRP
###Code
lrp_analyzer = LRP(
partial_model,
target_id=target_class,
relu=use_relu,
low=min_input,
high=max_input,
)
analysis_lrp = lrp_analyzer.analyze(input_imgs)
heatmap(analysis_lrp[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
CLRP
###Code
clrp_analyzer = CLRP(
partial_model,
target_id=target_class,
relu=use_relu,
low=min_input,
high=max_input,
)
analysis_clrp = clrp_analyzer.analyze(input_imgs)
heatmap(analysis_clrp[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
SGLRP
###Code
sglrp_analyzer = SGLRP(
partial_model,
target_id=target_class,
relu=use_relu,
low=min_input,
high=max_input,
)
analysis_sglrp = sglrp_analyzer.analyze(input_imgs)
heatmap(analysis_sglrp[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
SGLRP Sequential A
###Code
sglrpa_analyzer = SGLRPSeqA(
partial_model,
target_id=target_class,
relu=use_relu,
)
analysis_sglrpa = sglrpa_analyzer.analyze(input_imgs)
heatmap(analysis_sglrpa[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
SGLRP Sequential B
###Code
sglrpb_analyzer = SGLRPSeqB(
partial_model,
target_id=target_class,
relu=use_relu,
)
analysis_sglrpb = sglrpb_analyzer.analyze(input_imgs)
heatmap(analysis_sglrpb[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
LRP Sequential A
###Code
lrpa_analyzer = LRPA(
partial_model,
target_id=target_class,
relu=use_relu,
)
analysis_lrpa = lrpa_analyzer.analyze(input_imgs)
heatmap(analysis_lrpa[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
LRP Sequential B
###Code
lrpb_analyzer = LRPB(
partial_model,
target_id=target_class,
relu=use_relu,
)
analysis_lrpb = lrpb_analyzer.analyze(input_imgs)
heatmap(analysis_lrpb[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
LRP Epsilon
###Code
lrpe_analyzer = LRPE(
partial_model,
target_id=target_class,
relu=use_relu,
)
analysis_lrpe = lrpe_analyzer.analyze(input_imgs)
heatmap(analysis_lrpe[example_id].sum(axis=(2)))
plt.show()
###Output
_____no_output_____
###Markdown
`nx_force()` Provided a NetworkX graph, render it in JS using D3js. Required Arguments * `G`: a NetworkX graph. `'weight'` attributes on the edges cause D3 to draw a heavier line, and adding a `'group'` attribute to the nodes will have them appear in a different color. Keyword Arguments * *size*, a 2-tuple with the width and height in pixels. Default: (600, 400) * *labels* can be `None` or `'always'` * *linkdistance* is the relaxed link distance. Default: 30
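As an illustration of those attributes (assumed from the description above rather than checked against the `d3shims` source), a graph could be prepared like this before rendering:
```
# hypothetical example: heavier 'weight' edges and colored 'group' nodes
import networkx as nx

G = nx.Graph()
G.add_node('a', group=1)
G.add_node('b', group=1)
G.add_node('c', group=2)
G.add_edge('a', 'b', weight=3)  # drawn as a heavier line
G.add_edge('b', 'c', weight=1)
# d3shims.nx_force(G, size=(300, 200), labels='always', linkdistance=50)
```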
###Code
G = nx.Graph()
G.add_star(range(5))
G.add_cycle(range(4, 10))
d3shims.nx_force(G, size=(200, 200))
G = nx.read_dot('lesmis.dot')
d3shims.nx_force(G, size=(600, 600), labels='always', linkdistance=100)
###Output
_____no_output_____
###Markdown
pyiron example notebook This is an example notebook to demonstrate the functionality of the publication template. The notebook loads an existing Si calculation and calculates the total energy using LAMMPS and the quip potential provided as an additional resource in this repository. The calculation archive was created using the following commands:
```
from pyiron_atomistics import Project
pr = Project("old_calculation")
job = pr.create.job.Lammps(job_name="lmp_si")
job.structure = pr.create.structure.ase.bulk("Si")
job.run()
pr.pack(destination_path="save")
```
The pyiron project class is imported using:
###Code
from pyiron_atomistics import Project
###Output
_____no_output_____
###Markdown
To validate the previous calculation have been successfully imported:
###Code
pr_data = Project("pyiron/calculation")
pr_data.job_table()
###Output
_____no_output_____
###Markdown
Reload the existing calculation to continue with the previous structure:
###Code
job_reload = pr_data.load("lmp_si")
structure_reload = job_reload.get_structure()
###Output
_____no_output_____
###Markdown
Create a new LAMMPS job object and assign the structure from the previous calculation:
###Code
pr_new = Project("new_calculation")
job = pr_new.create.job.Lammps(job_name="lmp_quip")
job.structure = structure_reload
###Output
_____no_output_____
###Markdown
List all available interatomic potentials:
###Code
job.view_potentials()
###Output
_____no_output_____
###Markdown
Select the LAMMPS quip potential provided in the resource directory and execute the calculation:
###Code
job.potential = "Si-quip-xml"
job.run()
###Output
The job lmp_quip was saved and received the ID: 2
###Markdown
Print the total energies of both calculations:
###Code
print(job["output/generic/energy_tot"], job_reload["output/generic/energy_tot"])
###Output
[-8.66999651] [-8.67319651]
###Markdown
- square area = $(2 r)^2$ - circle area = $\pi r^2$ - circle / square = $\pi r^2 / (4 r^2) = \pi / 4$ - $\pi$ = 4 * (circle/square) ![Darts](https://coderefinery.github.io/jupyter/img/darts.svg) Here I import the random module
###Code
import random
from ipywidgets import interact
N = 100000
points = []
hits = 0
for i in range(N):
x, y = random.random(), random.random()
if x**2 + y**2 < 1.0:
hits += 1
points.append((x, y, True))
else:
points.append((x, y, False))
%matplotlib inline
from matplotlib import pyplot
x, y, colors = zip(*points)
pyplot.scatter(x, y, c=colors)
fraction = hits / N
print("pi="+ str(4 * fraction))
from ipywidgets import interact
@interact(x=True, y=1.0, s="Hello")
def g(x, y, s):
return (x, y, s)
###Output
_____no_output_____
###Markdown
Synthetic ExampleThis notebook shows how to use the algorithm for spike inference on a synthetic example.
###Code
from spikeFRInder import sliding_window_predict
import numpy as np
from scipy.signal import convolve
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Function to Generate Synthetic Signal
###Code
def generate_signal(FR, tau_decay, Fs, noise_sigma, duration):
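    # Draw a Bernoulli spike train at rate FR, convolve it with an exponential
    # calcium kernel (decay constant tau_decay), and add Gaussian noise.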
dt = 1 / Fs
N = int(duration / dt)
spikes = np.random.rand(N) < FR * dt
num_spikes = np.sum(spikes)
amplitudes = np.random.normal(loc=1, scale=0.5, size=(num_spikes,))
amplitudes[amplitudes<0.2] = 0.25
spike_train = np.zeros(spikes.shape)
spike_train[spikes==True] = amplitudes
t = np.arange(-duration//2, duration//2, dt)
exponential = np.zeros_like(t)
exponential[t>=0] = np.exp(-t[t>=0]/tau_decay)
signal = convolve(spikes, exponential, mode='same')
signal += np.random.normal(scale=noise_sigma, size=signal.size)
time = np.arange(0, duration, dt)
return signal, spikes, time, num_spikes
###Output
_____no_output_____
###Markdown
Estimate Spikes
###Code
# Generate calcium signal
np.random.seed(100)
FR = 1 # average firing rate over time
tau_decay = 0.25 # true decay rate of exponentials
Fs = 50 # sampling rate
noise_sigma = 0.15 # STD of gaussian noise
duration = 30 # full signal duration in seconds
signal, spikes, time, num_spikes = generate_signal(FR, tau_decay, Fs, noise_sigma, duration)
print('True number of spikes = {}'.format(num_spikes))
print('Assumed number of spikes input to the method = {}'.format(int(FR*duration)))
# Estimate spikes
output = sliding_window_predict(signal,
Fs=50,
K=FR*duration,
window_lengths=[101, 201, 301],
jump_size=15,
smoothing_sigma=1.5)
fig, ax = plt.subplots(3,1, figsize=(10, 5))
ax[0].plot(time, signal)
ax[0].set_title('Raw Calcium')
ax[1].stem(time, spikes, use_line_collection=True, markerfmt=" ", basefmt=" ")
ax[1].set_title('True Spike Locations')
ax[2].plot(time, output, 'g')
ax[2].set_title('Output')
ax[2].set_xlabel('Time (sec)')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Weighted spectral embedding This is an example of the weighted spectral embedding of a graph, using unit weights or internal node weights (node degrees for an unweighted graph).
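As a rough sketch of the underlying idea (my own reading, not the `spectral_embedding` implementation): the embedding can be viewed as the bottom eigenvectors of the Laplacian taken with respect to a diagonal node-weight matrix $W$, i.e. solutions of $L x = \lambda W x$, which reduces to the classical spectral embedding when all weights are one.
```
# Minimal sketch under the assumption above; assumes a connected graph with
# positive node weights. Not the spectral_embedding package itself.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def sketch_embedding(adjacency, dim=2, degree_weights=False):
    adjacency = adjacency.astype(float)
    degrees = np.asarray(adjacency.sum(axis=1)).ravel()
    weights = degrees if degree_weights else np.ones_like(degrees)
    laplacian = diags(degrees) - adjacency
    # symmetrize the generalized problem: W^{-1/2} L W^{-1/2} y = lambda y, with x = W^{-1/2} y
    w_inv_sqrt = diags(1.0 / np.sqrt(weights))
    _, eigvec = eigsh(w_inv_sqrt @ laplacian @ w_inv_sqrt, k=dim + 1, which='SM')
    return (w_inv_sqrt @ eigvec)[:, 1:]  # drop the trivial constant eigenvector
```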
###Code
from spectral_embedding import *
spectral = SpectralEmbedding()
weighted_spectral = SpectralEmbedding(node_weights = 'degree')
###Output
_____no_output_____
###Markdown
Toy example
###Code
import networkx as nx
graph = nx.karate_club_graph()
ground_truth_labels = list(nx.get_node_attributes(graph, 'club').values())
adjacency = nx.to_scipy_sparse_matrix(graph)
###Output
_____no_output_____
###Markdown
Embeddings
###Code
spectral.fit(adjacency)
weighted_spectral.fit(adjacency)
embedding = spectral.embedding_
weighted_embedding = weighted_spectral.embedding_
normalized_embedding = (embedding.T / np.linalg.norm(embedding,axis = 1)).T
normalized_weighted_embedding = (weighted_embedding.T / np.linalg.norm(weighted_embedding,axis = 1)).T
###Output
_____no_output_____
###Markdown
Clusterings
###Code
from sklearn.cluster import KMeans
n_clusters = 2
kmeans = KMeans(n_clusters)
kmeans.fit(embedding)
labels = list(kmeans.labels_)
kmeans.fit(normalized_embedding)
normalized_labels = list(kmeans.labels_)
kmeans.fit(weighted_embedding)
weighted_labels = list(kmeans.labels_)
kmeans.fit(normalized_weighted_embedding)
normalized_weighted_labels = list(kmeans.labels_)
# Ground truth
from collections import Counter
Counter(ground_truth_labels)
# Spectral embedding
Counter(labels), Counter(normalized_labels)
# Weighted spectral embedding
Counter(weighted_labels), Counter(normalized_weighted_labels)
###Output
_____no_output_____
###Markdown
Real data
###Code
import urllib.request
url = "http://perso.telecom-paristech.fr/~bonald/graphs/"
dataset = "openflights.graphml.gz"
download = urllib.request.urlretrieve(url + dataset, dataset)
graph = nx.read_graphml(dataset, node_type=int)
print(nx.info(graph))
adjacency = nx.to_scipy_sparse_matrix(graph)
###Output
_____no_output_____
###Markdown
Embeddings
###Code
spectral.fit(adjacency)
weighted_spectral.fit(adjacency)
embedding = spectral.embedding_
weighted_embedding = weighted_spectral.embedding_
normalized_embedding = (embedding.T / np.linalg.norm(embedding,axis = 1)).T
normalized_weighted_embedding = (weighted_embedding.T / np.linalg.norm(weighted_embedding,axis = 1)).T
###Output
_____no_output_____
###Markdown
Clusterings
###Code
from sklearn.cluster import KMeans
n_clusters = 10
kmeans = KMeans(n_clusters)
kmeans.fit(embedding)
labels = list(kmeans.labels_)
kmeans.fit(normalized_embedding)
normalized_labels = list(kmeans.labels_)
kmeans.fit(weighted_embedding)
weighted_labels = list(kmeans.labels_)
kmeans.fit(normalized_weighted_embedding)
normalized_weighted_labels = list(kmeans.labels_)
from collections import Counter
Counter(labels)
Counter(normalized_labels)
Counter(weighted_labels)
Counter(normalized_weighted_labels)
###Output
_____no_output_____
###Markdown
MovieLens Dataset Using the MovieLens 20M dataset for examples. You can download this data here: https://grouplens.org/datasets/movielens/20m/
###Code
ratings = pd.read_csv('../movie_similarity_flask_api/data/ml-20m/ratings.csv')
ratings = ratings.query('rating >=3')
ratings.reset_index(drop=True, inplace=True)
#only consider ratings from users who have rated over n movies
n=1000
users = ratings.userId.value_counts()
users = users[users>n].index.tolist()
ratings = ratings.query('userId in @users')
print(ratings.shape)
ratings.head(3)
# get movie features
rated_movies = ratings.movieId.tolist()
movies = pd.read_csv('../movie_similarity_flask_api/data/ml-20m/movies.csv')
movies = movies.query('movieId in @rated_movies')
movies.set_index("movieId", inplace=True, drop=True)
movies = movies.genres.str.split("|", expand=True)
movies.reset_index(inplace=True)
movies = pd.melt(movies, id_vars='movieId', value_vars=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
movies.drop_duplicates("movieId", inplace=True)
movies.set_index('movieId', inplace=True)
movies = pd.get_dummies(movies.value)
#movies = movies[['Action', 'Romance', 'Western', 'Comedy', 'Crime']]
movies.head()
###Output
_____no_output_____
###Markdown
Long Tail Plot Example
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15, 7))
recmetrics.long_tail_plot(df=ratings,
item_id_column="movieId",
interaction_type="movie ratings",
percentage=0.5,
x_labels=False)
###Output
_____no_output_____
###Markdown
Collaborative Filter Recommender Creating a simple CF to demonstrate recommender metrics in action. I've implemented collaborative filtering using an SVD approach in the surprise package. The surprise package also takes care of the train/test split. The collaborative filter transforms user-item interactions into a latent space and reconstructs the user-item matrix to impute missing movie ratings. The predicted rating is the dot product between the user and movie vectors in latent space.
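As a toy illustration of that last sentence (plain NumPy, independent of the surprise model trained below, and ignoring the bias terms an SVD recommender typically also learns):
```
# toy sketch: a predicted rating is the dot product of latent user/item factors
import numpy as np

np.random.seed(0)
n_users, n_items, n_factors = 4, 5, 3
user_factors = np.random.normal(size=(n_users, n_factors))  # one latent vector per user
item_factors = np.random.normal(size=(n_items, n_factors))  # one latent vector per movie

# reconstructed user-item matrix: every imputed rating at once
predicted_ratings = user_factors.dot(item_factors.T)

# the predicted rating of user 0 for movie 2 is just a dot product
print(predicted_ratings[0, 2], user_factors[0].dot(item_factors[2]))
```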
###Code
#format data for surprise
reader = Reader(rating_scale=(0, 5))
data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader)
trainset, testset = train_test_split(data, test_size=0.25)
#train SVD recommender
algo = SVD()
algo.fit(trainset)
#make predictions on test set.
test = algo.test(testset)
test = pd.DataFrame(test)
test.drop("details", inplace=True, axis=1)
test.columns = ['userId', 'movieId', 'actual', 'cf_predictions']
test.head()
#evaluate model with MSE and RMSE
print(recmetrics.mse(test.actual, test.cf_predictions))
print(recmetrics.rmse(test.actual, test.cf_predictions))
#create model (matrix of predicted values)
cf_model = test.pivot_table(index='userId', columns='movieId', values='cf_predictions').fillna(0)
def get_users_predictions(user_id, n, model):
recommended_items = pd.DataFrame(model.loc[user_id])
recommended_items.columns = ["predicted_rating"]
recommended_items = recommended_items.sort_values('predicted_rating', ascending=False)
recommended_items = recommended_items.head(n)
return recommended_items.index.tolist()
#get example prediction
get_users_predictions(156, 10, cf_model)
#format test data
test = test.copy().groupby('userId')['movieId'].agg({'actual': (lambda x: list(set(x)))})
#make recommendations for all members in the test data
cf_recs = []
for user in test.index:
cf_predictions = get_users_predictions(user, 10, cf_model)
cf_recs.append(cf_predictions)
test['cf_predictions'] = cf_recs
test.head()
###Output
/Users/clairelongo/Documents/Work/prof_dev/recmetrics/venv/lib/python2.7/site-packages/ipykernel_launcher.py:2: FutureWarning: using a dict on a Series for aggregation
is deprecated and will be removed in a future version
###Markdown
Popularity RecommenderCreating a simple popularity recommender to demonstrate recommender metrics in action. The popularity recommender simply recommends the top 10 movies to every user.
###Code
#make recommendations for all members in the test data
popularity_recs = ratings.movieId.value_counts().head(10).index.tolist()
pop_recs = []
for user in test.index:
pop_predictions = popularity_recs
pop_recs.append(pop_predictions)
test['pop_predictions'] = pop_recs
test.head()
###Output
_____no_output_____
###Markdown
Random RecommenderCreating a simple random recommender to demonstrate recommender metrics in action. The random recommender simply recommends 10 random movies to every user.
###Code
#make recommendations for all members in the test data
ran_recs = []
for user in test.index:
random_predictions = ratings.movieId.sample(10).values.tolist()
ran_recs.append(random_predictions)
test['random_predictions'] = ran_recs
test.head()
###Output
_____no_output_____
###Markdown
Recall
###Code
actual = test.actual.values.tolist()
cf_predictions = test.cf_predictions.values.tolist()
pop_predictions = test.pop_predictions.values.tolist()
random_predictions = test.random_predictions.values.tolist()
pop_mark = []
for K in np.arange(1, 11):
pop_mark.extend([recmetrics.mark(actual, pop_predictions, k=K)])
pop_mark
random_mark = []
for K in np.arange(1, 11):
random_mark.extend([recmetrics.mark(actual, random_predictions, k=K)])
random_mark
cf_mark = []
for K in np.arange(1, 11):
cf_mark.extend([recmetrics.mark(actual, cf_predictions, k=K)])
cf_mark
###Output
_____no_output_____
###Markdown
Mark Plot
###Code
mark_scores = [random_mark, pop_mark, cf_mark]
index = range(1,10+1)
names = ['Random Recommender', 'Popularity Recommender', 'Collaborative Filter']
fig = plt.figure(figsize=(15, 7))
recmetrics.mark_plot(mark_scores, model_names=names, k_range=index)
###Output
_____no_output_____
###Markdown
Prediction Coverage
###Code
catalog = ratings.movieId.unique().tolist()
random_coverage = recmetrics.prediction_coverage(ran_recs, catalog)
pop_coverage = recmetrics.prediction_coverage(pop_recs, catalog)
cf_coverage = recmetrics.prediction_coverage(cf_recs, catalog)
###Output
_____no_output_____
###Markdown
Catalog Coverage
###Code
# N=100 observed recommendation lists
random_cat_coverage = recmetrics.catalog_coverage(ran_recs, catalog, 100)
pop_cat_coverage = recmetrics.catalog_coverage(pop_recs, catalog, 100)
cf_cat_coverage = recmetrics.catalog_coverage(cf_recs, catalog, 100)
###Output
_____no_output_____
###Markdown
Coverage Plot
###Code
# plot of prediction coverage
coverage_scores = [random_coverage, pop_coverage, cf_coverage]
model_names = ['Random Recommender', 'Popularity Recommender', 'Collaborative Filter']
fig = plt.figure(figsize=(7, 5))
recmetrics.coverage_plot(coverage_scores, model_names)
###Output
_____no_output_____
###Markdown
Novelty
###Code
nov = ratings.movieId.value_counts()
pop = dict(nov)
random_novelty, random_mselfinfo_list = recmetrics.novelty(ran_recs, pop, len(users), 10)
pop_novelty, pop_mselfinfo_list = recmetrics.novelty(pop_recs, pop, len(users), 10)
cf_novelty, cf_mselfinfo_list = recmetrics.novelty(cf_recs, pop, len(users), 10)
print(random_novelty, pop_novelty, cf_novelty)
###Output
_____no_output_____
###Markdown
Personalization
###Code
example_predictions = [
['1', '2', 'C', 'D'],
['4', '3', 'm', 'X'],
['7', 'B', 't', 'X']
]
recmetrics.personalization(predicted=example_predictions)
###Output
_____no_output_____
###Markdown
Intra-list Similarity
###Code
example_predictions = [
[3, 7, 5, 9],
[9, 6, 12, 623],
[7, 894, 6, 623]
]
feature_df = movies[['Action', 'Comedy', 'Romance']]
recmetrics.intra_list_similarity(example_predictions, feature_df)
###Output
_____no_output_____
###Markdown
Classification Probability Plot
###Code
#make fake classification probability data
class_one_probs = np.random.normal(loc=.7, scale=0.1, size=1000)
class_zero_probs = np.random.normal(loc=.3, scale=0.1, size=1000)
actual = [1] * 1000
class_zero_actual = [0] * 1000
actual.extend(class_zero_actual)
pred_df = pd.DataFrame([np.concatenate((class_one_probs, class_zero_probs), axis=None), actual]).T
pred_df.columns = ["probability", "truth"]
pred_df.head()
recmetrics.class_separation_plot(pred_df, n_bins=45, class0_label="True class 0", class1_label="True class 1")
###Output
_____no_output_____
###Markdown
ROC Plot
###Code
model_probs = np.concatenate([np.random.normal(loc=.2, scale=0.5, size=500), np.random.normal(loc=.9, scale=0.5, size=500)])
actual = [0] * 500
class_zero_actual = [1] * 500
actual.extend(class_zero_actual)
recmetrics.roc_plot(actual, model_probs, model_names="one model", figsize=(10, 5))
###Output
_____no_output_____
###Markdown
Precision Recall Curve
###Code
recmetrics.precision_recall_plot(targs=actual, preds=model_probs)
###Output
_____no_output_____
###Markdown
Example Usage This is a basic example using the torchvision COCO dataset from coco.py; it assumes that you've already downloaded the COCO images and annotations JSON. You'll notice that the scale augmentations are quite extreme.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import cv2
import numpy as np
from copy_paste import CopyPaste
from coco import CocoDetectionCP
from visualize import display_instances
import albumentations as A
import random
from matplotlib import pyplot as plt
transform = A.Compose([
# A.RandomScale(scale_limit=(-0.9, 1), p=1), #LargeScaleJitter from scale of 0.1 to 2
A.RandomScale(scale_limit=(-0.9, 1), p=1), #LargeScaleJitter from scale of 0.1 to 2
# A.PadIfNeeded(256, 256, border_mode=0), #pads with image in the center, not the top left like the paper
# A.RandomCrop(256, 256),
A.PadIfNeeded(800, 1200, border_mode=0),
A.RandomCrop(800, 1200),
CopyPaste(blend=True, sigma=1, pct_objects_paste=0.8, p=1.) #pct_objects_paste is a guess
], bbox_params=A.BboxParams(format="coco", min_visibility=0.05)
)
data = CocoDetectionCP(
'../agilent-repos/mmdetection/data/bead_cropped_detection/images',
'../agilent-repos/mmdetection/data/custom/object-classes.json',
transform
)
f, ax = plt.subplots(1, 2, figsize=(16, 16))
#index = random.randint(0, len(data))
index = random.randint(0, 5) # We are testing on the 6 with annotations
img_data = data[index]
image = img_data['image']
masks = img_data['masks']
bboxes = img_data['bboxes']
empty = np.array([])
display_instances(image, empty, empty, empty, empty, show_mask=False, show_bbox=False, ax=ax[0])
if len(bboxes) > 0:
boxes = np.stack([b[:4] for b in bboxes], axis=0)
box_classes = np.array([b[-2] for b in bboxes])
mask_indices = np.array([b[-1] for b in bboxes])
show_masks = np.stack(masks, axis=-1)[..., mask_indices]
class_names = {k: data.coco.cats[k]['name'] for k in data.coco.cats.keys()}
display_instances(image, boxes, show_masks, box_classes, class_names, show_bbox=True, ax=ax[1])
else:
display_instances(image, empty, empty, empty, empty, show_mask=False, show_bbox=False, ax=ax[1])
###Output
_____no_output_____
###Markdown
basic use The next cells show how the %cache magic should be used. Note: This notebook requires the packages scikit-learn, numpy and cache_magic to be installed.
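If any of them are missing, an install along these lines should work (package names assumed to match the import names):
```
# assumed package names; run once in a notebook cell
!pip install cache_magic scikit-learn numpy
```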
###Code
import cache_magic
# delete everthing currently cached
%cache --reset
# store a new value for a
%cache a = "111"
# fetch the cached value for a
%cache a = "111"
# an example of an actual use-case
import cache_magic
import numpy as np
from sklearn import svm
%cache --reset
%timeit -n 1 -r 5 %cache -v 1 clf = svm.LinearSVC().fit(np.random.randint(5, size=(5000, 40)), np.random.randint(5, size=(5000)))
# the following 4 cases use the same version
%cache -r
# without explicit version, the expression (=right hand site of assignment) is used as version
%cache a = 0
# if parameter is an integer, it will be the version
%cache -v 0 a = 1
# if parameter is a variable name, it's value is used as version
my_version = 0
%cache -v my_version a = 1
# new and old version are converted into a string before comparing them
my_version_2 = "0"
%cache -v my_version_2 a = 1
# show everything, that is cached
%cache
# generate some variables
%cache b=3
def fun(x):
return x+1
%cache c = fun(b)
%cache -v c d = fun(1.1)
# show the new cache
%cache
###Output
_____no_output_____
###Markdown
power use The next cells show how the %cache magic can be used
###Code
import cache_magic
import numpy as np
from sklearn import svm
%cache --reset
# even if the expression changes, but not the version, the old value will still be loaded
# in which case there will be a warning
%cache -v 1 clf = svm.LinearSVC().fit(np.random.randint(5, size=(1000, 40)), np.random.randint(5, size=(1000)))
%cache -v 1 clf = "not a classifier"
print(clf.predict(np.random.randint(5,size=(1,40)))[0])
# without an expression, it will always try to reload the cached value
del clf
%cache -v 1 clf
print(clf.predict(np.random.randint(5,size=(1,40)))[0])
# you can store the current value of a var without an actual statement by assigning it to itself
clf="not a classifier"
%cache -v 2 clf=clf
print(clf)
# while the cache still exists in the file system, the cell can be executed alone
import cache_magic
%cache clf
print(clf)
# the cache is stored in the directory where the kernel was first started in
import cache_magic
import os
%cache -r
%cache a=1
%cache b=1
%cache c=1
%cache
for root, dirs, files in os.walk(".cache_magic"):
# there is one folder per cache variable
print(root)
# if the working dir changes, the .cache-dir stays where it is
%cd ..
%cache
for root, dirs, files in os.walk(".cache_magic"):
# no output, because no .cache-dir
print(root)
%cd -
%cache
for root, dirs, files in os.walk(".cache_magic"):
# now we see the cache directory againg
print(root)
# always store a new value and never read from cache
%cache -r a=1
# remove a single variable from cache
%cache -r a
# Error:
%cache a
# load last value if possible, and store new value on miss
%cache a = a
# load last value if possible, but don't store new value on miss
import cache_magic
del a
%cache a
# You can use this magic-module as a regular python module
from cache_magic import CacheCall
cache = CacheCall(get_ipython().kernel.shell)
# setting all parameter by name
cache(
version="*",
reset=False,
var_name="aaa",
var_value="1+1",
show_all=False,
set_debug=True)
# setting all parameter by ordering
cache("1",False,"bbb","1+1",False, False)
# setting parameter selectivly
cache(show_all=True)
###Output
creating new value for variable 'aaa'
creating new value for variable 'bbb'
###Markdown
development tests The next cells show how the %cache magic should not be used. These examples are for debug purposes only.
###Code
#testing successfull calls
import cache_magic
# Dev-Note: use reload, so you don't have to restart the kernel every time you change the module
from imp import reload
reload(cache_magic)
my_version = 3
%cache --reset
print(" exptecting: new values")
%cache -v 2 a = "ex3"
%cache -v my_version c = "ex3"
%cache --version my_version sadsda = "ex3"
%cache -v 3 a=""
%cache -v 3 -r a=""
%cache -v 3 -r a=""
print(" exptecting: warnings")
%cache -v 3 a= " _ "
%cache -v 3 sadsda = "ex4"
print(" exptecting: stored values")
%cache -v my_version sadsda = "ex3"
%cache -v 3 sadsda = "ex3"
# testing errors
import cache_magic
reload(cache_magic)
%cache -v "a" a = "ex3"
%cache -v a 1=a
# testing loading without storing
import cache_magic
reload(cache_magic)
%cache --reset
a=1
del a
# error:
%cache a
%cache a=1
del a
%cache a
del a
# error:
%cache -v '1' a
# error
%cache -v 1 a
%cache --reset
a=1
del a
# Error
%cache -v 0 a
%cache -v * a=1
# Error:
%cache -v * a
%cache -v "1" a
%cache -v 213 a = "1"
# get stored version via error message
%cache -v * a
# testing debug flag '-d'
%cache -d -v 1 -r a = "1"
%cache -d a = "1"
import cache_magic
from imp import reload
reload(cache_magic)
%cache -r a=1
%cache -r a
%cache -r a=1
import cache_magic
from imp import reload
reload(cache_magic)
%cache -r a = 1
%cache a
print (a)
import cache_magic
from imp import reload
reload(cache_magic)
def foo(x):
return x+1
%cache --reset
%cache -v * a= foo(3)
%cache -v * a= foo(3)
%cache -v * a= 2
#!pip install -e .
###Output
_____no_output_____
###Markdown
`ipython-gremlin`
###Code
%reload_ext gremlin
%matplotlib inline
import os
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
from draw_graph import draw_simple_graph # A utility function that uses NetworkX plotting API
###Output
_____no_output_____
###Markdown
Load up the Grateful Dead data into an instance of TinkerGraph
###Code
dir_path = os.path.dirname(os.path.realpath('__file__'))
file_path = os.path.join(dir_path, 'grateful-dead.xml')
%gremlin graph.io(graphml()).readGraph(file_path)
###Output
Alias-- localhost --created for database at ws://localhost:8182/gremlin
###Markdown
Get some basic stats
###Code
num_verts = %gremlin g.V().count()
num_verts
%gremlin g.E().count()
###Output
_____no_output_____
###Markdown
Get the degree distribution
###Code
deg_dist = %gremlin g.V().groupCount().by(both().count())
degree = map(lambda x: int(x), deg_dist.results.keys())
prob = map(lambda x: x / num_verts.results, deg_dist.results.values())
plt.scatter(list(degree), list(prob))
###Output
_____no_output_____
###Markdown
Count vertex labels
###Code
label_count = %gremlin g.V().label().groupCount()
label_count.dataframe.plot(kind='bar', color=['c', 'b'])
###Output
_____no_output_____
###Markdown
Count edge labels
###Code
label_count = %gremlin g.E().label().groupCount()
label_count.dataframe.plot(kind='bar', color=['c', 'b', 'r'])
###Output
_____no_output_____
###Markdown
Find the most prolific artist
###Code
artist = %gremlin g.V().hasLabel('artist').order().by(inE().count(), decr).limit(1)
vid = artist.results.id
%gremlin g.V(vid).valueMap(true)
%gremlin g.V(vid).inE().count()
jerrys_labels = %gremlin g.V(vid).inE().label().groupCount()
jerrys_labels.dataframe.plot(kind='bar', color=['c', 'b'])
###Output
_____no_output_____
###Markdown
Get Jerry's ego network
###Code
jerrys_ego_net = %gremlin g.V(vid).bothE()
graph = jerrys_ego_net.graph
print(len(graph.nodes()), len(graph.edges()))
nodes = graph.nodes()
names = %gremlin g.V(nodes).properties('name')
labels = %gremlin g.V(nodes).label()
# Add names/labels to nodes
name_map = {}
label_map = {}
for i in range(len(nodes)):
node = nodes[i]
name_map[node] = names[i].value
label_map[node] = labels[i]
nx.set_node_attributes(graph, 'name', name_map)
nx.set_node_attributes(graph, 'label', label_map)
plt.rcParams['figure.figsize'] = (18, 12)
draw_simple_graph(graph,
node_type_attr='label',
edge_label_attr='',
show_edge_labels=False,
label_attrs=['name'],
k=0.005)
###Output
_____no_output_____
###Markdown
That's a lot of vertices for matplotlib...maybe `ipython-gremlin` needs a D3 interface... Run some graph algos using NetworkX
###Code
edges = %gremlin g.E()
full_graph = edges.graph
print(len(full_graph.nodes()), len(full_graph.edges()))
bc = nx.betweenness_centrality(full_graph)
cc = nx.closeness_centrality(full_graph)
dc = nx.degree_centrality(full_graph)
cent_df = pd.DataFrame({'closeness': cc, 'betweenness': bc, 'degree': dc})
cent_df.describe()
###Output
_____no_output_____
###Markdown
**Pytorch implementation of StyleGAN2** source: https://arxiv.org/pdf/1912.04958.pdf
###Code
import torch
from torch import nn
import torch.nn.functional as F
import numpy as np
from modules import *
from loss import *
from misc import *
from torchvision.datasets import MNIST
import torchvision.transforms as T
import matplotlib.pyplot as plt
from IPython import display
from tqdm import tqdm
plt.rcParams['figure.figsize'] = (11,11)
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
Generator architecture
###Code
class Generator(nn.Module):
def __init__(self, min_res, max_res, min_fmaps, max_fmaps, act,
k_size, blocks, img_channels, latent_size, n_layers, style_mixing_prob = 0.8,
dlatent_avg_beta = 0.995, weights_avg_beta=0.99, **kwargs):
super().__init__()
dres = min_res*2**blocks - max_res
assert dres >= 0
# building mapping net
self.latent_size = latent_size
self.mapping = Mapping(n_layers, latent_size, act)
# learnable const
self.const = nn.Parameter(torch.randn(max_fmaps, min_res, min_res))
# building main layers
fmaps = np.linspace(max_fmaps, min_fmaps, blocks+1).astype('int')
self.layers = []
for i in range(blocks):
layer = G_Block(fmaps[i],fmaps[i+1], k_size, latent_size, act, img_channels=img_channels)
self.add_module(str(i), layer)
self.layers.append(layer)
if dres > 0:
self.crop = torch.nn.ZeroPad2d(-dres//2)
# style mixing
self.style_mixing_prob = style_mixing_prob
# running average of dlatents
self.dlatent_avg_beta = dlatent_avg_beta
self.register_buffer('dlatent_avg', torch.zeros(latent_size))
# running average of weights
self.weights_avg_beta = weights_avg_beta
self.Src_Net = deepcopy(self).apply(parameters_to_buffers)
self.Src_Net.train(False)
# update running average of weights
def update_avg_weights(self):
params = dict(self.named_parameters())
buffers = dict(self.named_buffers())
for n,b in self.Src_Net.named_buffers():
try:
b.data.copy_(self.weights_avg_beta*b + (1-self.weights_avg_beta)*params[n])
except:
b.data.copy_(buffers[n])
def load_avg_weights(self):
buffers = dict(self.Src_Net.named_buffers())
for n,p in self.named_parameters():
p.data.copy_(buffers[n])
# sample dlatents
def sample_dlatents(self, n):
v = self._sample_dlatents(n)
if self.training and self.style_mixing_prob > 0:
v = self._bcast_dlatents(v)
l = len(self.layers)
cut_off = torch.randint(l-1,())
v2 = self._bcast_dlatents(self._sample_dlatents(n))
mask = torch.empty(n, dtype=torch.bool).bernoulli_(self.style_mixing_prob).view(-1, 1) \
* (torch.arange(l)>cut_off)
v = torch.where(mask.unsqueeze(-1).to(device=v.device), v2, v)
return v
def _sample_dlatents(self, n):
device = self.const.device
z = torch.randn(n, self.latent_size).to(device)
v = self.mapping(z)
# update dlatent average
if self.training:
self.dlatent_avg = self.dlatent_avg_beta*self.dlatent_avg + (1-self.dlatent_avg_beta)*v.data.mean(0)
return v
def _bcast_dlatents(self, v):
# broadcast dlatents [N, dlatent_size] --> [N, num_layers, dlatent_size]
return v.unsqueeze(1).expand(-1, len(self.layers), -1)
# generate from dlatents and input noises (optionally)
def generate(self, v, input_noises=None):
x = self.const.expand(v.shape[0], *self.const.shape).contiguous()
input_noises = input_noises if input_noises else [None]*len(self.layers)
y = None
if v.ndim < 3:
v = self._bcast_dlatents(v)
for i,layer in enumerate(self.layers):
x, y = layer(x,v[:,i],y, input_noises[i])
if hasattr(self, 'crop'):
y = self.crop(y)
return y
# for training
def sample(self, n):
dlatents = self.sample_dlatents(n)
x = self.generate(dlatents)
return x
# for evaluation
def sample_images(self,n, truncation_psi=1):
with torch.no_grad():
v = self.Src_Net.sample_dlatents(n)
# truncation trick
if truncation_psi < 1:
v = self.dlatent_avg + truncation_psi*(v-self.dlatent_avg)
images = to_img(self.Src_Net.generate(v))
return images
###Output
_____no_output_____
###Markdown
Discriminator architecture
###Code
class Discriminator(nn.Module):
def __init__(self, min_res, max_res, min_fmaps, max_fmaps, act,
k_size, blocks, img_channels, dense_size=128, **kwargs):
super().__init__()
assert max_res <= min_res*2**blocks and max_res >= (min_res-1)*2**blocks
# building layers
fmaps = np.linspace(min_fmaps, max_fmaps, blocks+1).astype('int')
self.from_channels = nn.Conv2d(img_channels, fmaps[0], 1)
self.layers = []
for i in range(blocks):
layer = D_Block(fmaps[i],fmaps[i+1], k_size, act)
self.add_module(str(i), layer)
self.layers.append(layer)
self.minibatch_sttdev = Minibatch_Stddev()
self.conv = nn.Conv2d(fmaps[-1]+1,fmaps[-1], 3)
self.dense = nn.Linear(fmaps[-1]*(min_res-2)**2, dense_size)
self.output = nn.Linear(dense_size, 1)
self.act = act
def get_score(self, imgs):
x = self.act(self.from_channels(imgs))
for layer in self.layers:
x = layer(x)
x = self.minibatch_sttdev(x)
x = self.act(self.conv(x))
x = x.view(x.shape[0],-1)
x = self.act(self.dense(x))
x = self.output(x)
return x
###Output
_____no_output_____
###Markdown
Define training loop
###Code
def train(G, D, dataset, max_iter, batch_size,
G_opt_args, D_opt_args, mapping_opt_args,
D_steps, pl_weight, r1_weight,
r1_interval, pl_interval, val_interval, num_workers, pl_batch_part, checkpoint=None):
pl_batch = int(pl_batch_part*batch_size)
device = next(D.parameters()).device
Path_length_reg = Path_length_loss()
# create dataloader
dataloader = NextDataLoader(dataset, batch_size, num_workers=num_workers)
mean = dataset.transforms.transform.transforms[1].mean[0]
std = dataset.transforms.transform.transforms[1].std[0]
# load state
if checkpoint:
G.load_state_dict(checkpoint['G'])
D.load_state_dict(checkpoint['D'])
Path_length_reg.avg = checkpoint['pl_loss_avg']
# create optimizer
G_params = []
for n,m in G.named_children():
if n != 'mapping':
G_params.extend(m.parameters())
gen_optimizer = torch.optim.Adam([{'params': G_params},
{'params': G.mapping.parameters(), **mapping_opt_args},
{'params': G.const, **mapping_opt_args},
], **G_opt_args)
disc_optimizer = torch.optim.Adam(D.parameters(), **D_opt_args)
G.train()
D.train()
for i in tqdm(range(max_iter)):
# discriminator update
for j in range(D_steps):
real_imgs = next(dataloader)[0].to(device)
real_imgs.requires_grad = True
fake_imgs = G.sample(real_imgs.shape[0])
real_scores = D.get_score(real_imgs)
fake_scores = D.get_score(fake_imgs)
loss = D_logistic(real_scores, fake_scores)
if i % r1_interval == 0 and j == D_steps-1:
loss += r1_weight*r1_interval*R1_reg(real_imgs, real_scores)
real_imgs.requires_grad = False
disc_optimizer.zero_grad()
loss.backward()
disc_optimizer.step()
# generator update
dlatent = G.sample_dlatents(batch_size)
if i % pl_interval == 0:
# hack to compute path length loss with smaller minibatch (for reducing memory consumption)
dlatent_part1, dlatent_part_2 = dlatent[:pl_batch], dlatent[pl_batch:]
fake_imgs = G.generate(torch.cat((dlatent_part1, dlatent_part_2), 0))
fake_scores = D.get_score(fake_imgs)
loss = G_logistic_ns(fake_scores) \
+ pl_weight*pl_interval*Path_length_reg(dlatent_part1, fake_imgs[:pl_batch])
else:
fake_imgs = G.generate(dlatent)
fake_scores = D.get_score(fake_imgs)
loss = G_logistic_ns(fake_scores)
gen_optimizer.zero_grad()
loss.backward()
gen_optimizer.step()
# updating running average
G.update_avg_weights()
if i % val_interval == 0:
display.clear_output(wait=True)
# print pictures
gen = G.sample_images(32)*std+mean
plt.imshow(grid(gen).squeeze())
plt.show()
# print prob distribution
plt.figure(figsize=(5,5))
plt.title('Generated vs real data')
plt.hist(torch.sigmoid(real_scores.data).cpu().numpy(), label='D(x)', alpha=0.5,range=[0,1])
plt.hist(torch.sigmoid(fake_scores.data).cpu().numpy(), label='D(G(z))',alpha=0.5,range=[0,1])
plt.legend(loc='best')
plt.show()
if i % (20*val_interval) == 0:
torch.save({
'G': G.state_dict(),
'D': D.state_dict(),
'pl_loss_avg': Path_length_reg.avg.item()
}, 'checkpoint.pt')
###Output
_____no_output_____
###Markdown
Hyperparams
###Code
img_channels = 1
n_layers = 4 # number of layers in mapping from latents to dlatents
latent_size = 160 # for simplicity dim of latent space = dim of dlatent space
###Output
_____no_output_____
###Markdown
Parameters for building models.
###Code
min_res = 4 # resolution from which the synthesis starts
max_res = 28 # out resolution
blocks = 3 # number of building blocks for both the generator and discriminator
k_size = 3 # convolutions kernel size
max_fmaps = 128 # number of feature maps at the beginning of generation
min_fmaps = 64 # number of feature maps before going to the number of channels
weights_avg_beta=0.995 # beta for running average of generator weights
act = Scaled_Act(nn.LeakyReLU(0.2)) # activation function
device = 'cuda'
train_params = {'max_iter': 50000, 'batch_size' : 160,
'G_opt_args' : {'lr' : 0.001, 'betas' : (0.1, 0.99)},
'D_opt_args' : {'lr' : 0.001, 'betas' : (0, 0.99), 'eps' : 1e-08},
'mapping_opt_args' : {'lr' : 1e-5},
'D_steps': 1, 'pl_weight': 2, 'r1_weight': 8, 'pl_batch_part': 0.5,
'pl_interval': 4, 'r1_interval': 16, 'num_workers': 2, 'val_interval': 20}
###Output
_____no_output_____
###Markdown
Training
###Code
G = Generator(min_res, max_res, min_fmaps, max_fmaps, act,
k_size, blocks, img_channels, latent_size, n_layers, weights_avg_beta=weights_avg_beta).to(device)
D = Discriminator(min_res, max_res, min_fmaps, max_fmaps, act,
k_size, blocks, img_channels).to(device)
###Output
_____no_output_____
###Markdown
Equalized learning rate
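For context, a minimal sketch of the equalized-learning-rate idea from ProGAN/StyleGAN: store the weights as N(0, 1) and rescale them by the He constant at runtime, so every layer sees a similar effective learning rate. The `Equal_LR` wrapper imported from `modules.py` is not shown in this notebook and may differ in detail; this sketch only illustrates the technique.
```python
import math
import torch
from torch import nn
import torch.nn.functional as F

class EqualizedLinear(nn.Module):
    """Store weight ~ N(0, 1); multiply by the He constant in the forward pass."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.scale = math.sqrt(2.0 / in_features)  # He initialisation constant

    def forward(self, x):
        return F.linear(x, self.weight * self.scale, self.bias)
```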
###Code
G = Equal_LR('weight')(G)
D = Equal_LR('weight')(D)
###Output
_____no_output_____
###Markdown
Initialization of weights
###Code
def init_weights(m):
if hasattr(m, 'weight_orig'):
torch.nn.init.normal_(m.weight_orig)
if hasattr(m, 'bias'):
torch.nn.init.zeros_(m.bias)
G.apply(init_weights)
D.apply(init_weights);
###Output
_____no_output_____
###Markdown
Loading dataset
###Code
# Dataset
mean = 0.1307
std = 0.3081
dataset = MNIST('data', transform=T.Compose([T.ToTensor(), T.Normalize((mean,), (std,))]), download=True)
train(G, D, dataset, **train_params)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
checkpoint = torch.load('checkpoint.pt')
G.load_state_dict(checkpoint['G'])
G.load_avg_weights()
G.eval();
###Output
_____no_output_____
###Markdown
Generated with truncation trick $ \Psi = 0.9 $ and using running average weights
###Code
plt.title('Generated')
plt.imshow(grid(G.sample_images(32, truncation_psi=0.9)*std+mean).squeeze())
plt.show()
plt.title('Real data')
i = np.random.randint(50000)
real_imgs = dataset.data[i:32+i].unsqueeze(-1)
plt.imshow(grid(real_imgs).squeeze())
plt.show()
plt.title('Generated')
plt.imshow(grid(G.sample_images(128, truncation_psi=0.9)*std+mean, ncols=12).squeeze())
plt.show()
###Output
_____no_output_____
###Markdown
Truncation $ \Psi = 0.5$
###Code
plt.title('Generated')
plt.imshow(grid(G.sample_images(128, truncation_psi=0.5)*std+mean, ncols=12).squeeze())
plt.show()
###Output
_____no_output_____
###Markdown
Reverse mapping from images to latents: 1. With a feedforward model
###Code
class Reverse_Mapping(nn.Module):
def __init__(self, min_res, max_res, min_fmaps, max_fmaps, latent_size,
act, k_size, blocks, img_channels, dense_size=128, **kwargs):
super().__init__()
# building layers
dres = min_res*2**blocks - max_res
self.upsample = torch.nn.Upsample(size=32, mode='bilinear', align_corners=False)
fmaps = np.linspace(min_fmaps, max_fmaps, blocks+1).astype('int')
self.from_channels = nn.Conv2d(img_channels, fmaps[0], 1)
self.layers = []
self.noise_outs = []
for i in range(blocks):
noise_out = nn.Conv2d(fmaps[i],2, 3, padding=1)
self.add_module('noise'+str(i), noise_out)
self.noise_outs.append(noise_out)
layer = D_Block(fmaps[i],fmaps[i+1], k_size, act)
self.add_module(str(i), layer)
self.layers.append(layer)
self.conv = nn.Conv2d(fmaps[-1],fmaps[-1], 3)
self.dense = nn.Linear(fmaps[-1]*(min_res-2)**2, dense_size)
self.output = nn.Linear(dense_size, latent_size)
self.act = act
self.blocks = blocks
def predict_dlatents(self, imgs):
noises = []
x = self.upsample(imgs)
x = self.from_channels(x)
for i,layer in enumerate(self.layers):
noises.append(self.noise_outs[i](x).unsqueeze(2))
x = layer(x)
x = self.act(self.conv(x))
x = x.view(x.shape[0],-1)
x = self.act(self.dense(x))
dlatents = self.output(x)
return dlatents, reversed(noises)
###Output
_____no_output_____
###Markdown
Minimize $L_2$ loss between true dlatents and predicted dlatents and the same with noise maps
###Code
def train_reverse_mapping(G, E, max_iter, batch_size, E_opt_args, val_interval, noise_loss_weight=0.05, **kwargs):
G.eval()
optimizer = torch.optim.Adam(E.parameters(), **E_opt_args)
min_res = G.const.shape[-1]
noise_maps_shapes = [(batch_size, 2 , 1, min_res*2**i, min_res*2**i) for i in range(1,len(G.layers)+1)]
for i in tqdm(range(max_iter)):
with torch.no_grad():
dlatents = G.sample_dlatents(batch_size)
nmaps = [torch.randn(s, device=dlatents.device) for s in noise_maps_shapes]
fake_imgs = G.generate(dlatents, nmaps)
pred_dlatents, pred_nmaps = E.predict_dlatents(fake_imgs)
loss = torch.mean((pred_dlatents - dlatents)**2)
for nmap, pred_nmap in zip(nmaps, pred_nmaps):
loss += noise_loss_weight * torch.mean((pred_nmap - nmap)**2)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i % val_interval == 0:
print(loss.item())
display.clear_output(wait=True)
plt.imshow(grid(to_img(fake_imgs[:16])).squeeze())
plt.show()
plt.imshow(grid(to_img(G.generate(pred_dlatents[:16].data, [n[:16].data for n in pred_nmaps]))).squeeze())
plt.show()
E_opt_args = {'lr' : 0.0005, 'betas' : (0.9, 0.999)}
E = Reverse_Mapping(min_res, max_res, min_fmaps, max_fmaps, latent_size, act, k_size, blocks, img_channels).to(device)
train_reverse_mapping(G, E, 50000, 128, E_opt_args, 20)
torch.save(E.state_dict(),'E.pt')
E.load_state_dict(torch.load('E.pt'))
###Output
_____no_output_____
###Markdown
Generated target image
###Code
noise_maps_shapes = [(32, 2 , 1, min_res*2**i, min_res*2**i) for i in range(1,len(G.layers)+1)]
with torch.no_grad():
dlatents = G.sample_dlatents(32)
nmaps = [torch.randn(s, device=dlatents.device) for s in noise_maps_shapes]
fake_imgs = G.generate(dlatents, nmaps)
plt.imshow(grid(to_img(fake_imgs)).squeeze())
plt.show()
pred_dlatents, pred_nmaps = E.predict_dlatents(fake_imgs)
plt.title('Re-synthesized')
plt.imshow(grid(to_img(G.generate(pred_dlatents.data, [n.data for n in pred_nmaps]))).squeeze())
plt.show()
###Output
_____no_output_____
###Markdown
Real target images. Reconstructing real images is much harder for the feedforward model.
###Code
dataloader = NextDataLoader(dataset, 32, shuffle=True)
real_imgs = next(dataloader)[0]
plt.imshow(grid(to_img(real_imgs)).squeeze())
plt.show()
pred_dlatents, pred_nmaps = E.predict_dlatents(real_imgs.to(device))
plt.title('Re-synthesized')
plt.imshow(grid(to_img(G.generate(pred_dlatents.data, [n.data for n in pred_nmaps]))).squeeze())
plt.show()
###Output
_____no_output_____
###Markdown
2. Optimization (StyleGAN2). Here is the projection method described in the StyleGAN2 paper.
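A rough sketch of what such a projector typically does (illustrative only; the actual `Projector` in `projector.py` is not shown here and may differ): start from the average dlatent, then optimise the dlatents and the per-layer noise maps by gradient descent on an image distance, plus a small penalty on the noise maps (the paper uses a noise-autocorrelation penalty, sketched a few cells further below; here a plain L2 term stands in for it).
```python
import torch

def project_sketch(G, image_loss, targets, num_steps=200, lr=0.05):
    # start every target from the generator's running-average dlatent
    n = targets.shape[0]
    min_res = G.const.shape[-1]
    shapes = [(n, 2, 1, min_res * 2 ** i, min_res * 2 ** i)
              for i in range(1, len(G.layers) + 1)]
    dlatents = G.dlatent_avg.detach().clone().repeat(n, 1).requires_grad_(True)
    noise_maps = [torch.randn(s, device=targets.device, requires_grad=True) for s in shapes]
    opt = torch.optim.Adam([dlatents] + noise_maps, lr=lr)
    for _ in range(num_steps):
        synth = G.generate(dlatents, noise_maps)
        # for LPIPS one would typically upsample the small MNIST images first
        loss = image_loss(synth, targets).mean()
        loss = loss + 1e-5 * sum((nm ** 2).mean() for nm in noise_maps)  # crude noise penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    return dlatents.detach(), [nm.detach() for nm in noise_maps]
```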
###Code
from projector import *
###Output
_____no_output_____
###Markdown
The image quality term is the LPIPS distance. Source: https://github.com/richzhang/PerceptualSimilarity. By the way, LPIPS can also be used in the feedforward model above, but it doesn't provide much gain there.
###Code
import sys
sys.path.append("../PerceptualSimilarity/")
from models import PerceptualLoss
image_loss = PerceptualLoss(model='net-lin', net='squeeze', use_gpu=False)
# for some reason, a cuDNN error occurs when using GPU
G.cpu();
proj = Projector(G, image_loss)
###Output
_____no_output_____
###Markdown
Generated target
###Code
target_images = G.sample(24)
plt.imshow(grid(to_img(target_images)).squeeze())
plt.show()
###Output
_____no_output_____
###Markdown
The networks inside the perceptual loss expect much larger images than those in MNIST, so I use bilinear upsampling
###Code
dlatents, noise_maps = proj.run(target_images.data, num_steps=1000, upsample_size=100)
###Output
_____no_output_____
###Markdown
Real target images. This method performs much better than the previous one.
###Code
dataloader = NextDataLoader(dataset, 24, shuffle=True)
real_imgs = next(dataloader)[0]
plt.imshow(grid(to_img(real_imgs)).squeeze())
plt.show()
dlatents, noise_maps = proj.run(real_imgs.data, num_steps=1000, upsample_size=100)
###Output
_____no_output_____
###Markdown
In our case, $L_2$ loss works just as well.
###Code
proj.image_loss = lambda x, t : torch.mean((x-t)**2)
dlatents, noise_maps = proj.run(target_images.data, num_steps=1000)
###Output
_____no_output_____
###Markdown
Effect of noise regularization on sneaking signal
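For reference, the regulariser described in the StyleGAN2 paper penalises spatial autocorrelation of the (roughly zero-mean, unit-variance) noise maps at multiple scales, which is what stops the optimiser from sneaking image content into them. A rough sketch of that idea (illustrative only; the implementation inside `projector.py` may differ):
```python
import torch
import torch.nn.functional as F

def noise_autocorr_penalty(noise_map):
    # noise_map: any tensor whose last two dimensions are spatial (H, W)
    n = noise_map.reshape(-1, 1, noise_map.shape[-2], noise_map.shape[-1])
    penalty = 0.0
    while n.shape[-1] > 8:
        penalty = penalty \
            + torch.mean(n * torch.roll(n, shifts=1, dims=-1)) ** 2 \
            + torch.mean(n * torch.roll(n, shifts=1, dims=-2)) ** 2
        n = F.avg_pool2d(n, kernel_size=2)  # repeat the check at a coarser scale
    return penalty
```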
###Code
proj.show_images=False
proj.noise_reg_weight = 0 #disabled
_, noise_maps = proj.run(target_images.data, num_steps=500)
plt.title('noise maps without regularization')
plt.imshow(grid(to_img(noise_maps[-1][:,0])).squeeze())
plt.show()
proj.noise_reg_weight = 1e5 # default value
_, noise_maps = proj.run(target_images.data, num_steps=500)
plt.title('noise maps with regularization')
plt.imshow(grid(to_img(noise_maps[-1][:,0])).squeeze())
plt.show()
###Output
100%|██████████| 500/500 [01:19<00:00, 6.30it/s]
###Markdown
Get the list of conda packages installed
###Code
!conda list
###Output
# packages in environment at /opt/tljh/user:
#
# Name Version Build Channel
alembic 1.0.5 <pip>
asn1crypto 0.24.0 py36_0
async-generator 1.10 <pip>
backcall 0.1.0 <pip>
bleach 3.0.2 <pip>
ca-certificates 2018.11.29 ha4d7672_0 conda-forge
certifi 2018.11.29 py36_1000 conda-forge
cffi 1.11.5 py36h9745a5d_0
chardet 3.0.4 py36h0f667ec_1
conda 4.5.8 py36_1 conda-forge
conda-env 2.6.0 h36134e3_1
cryptography 2.2.2 py36h14c3975_0
decorator 4.3.0 <pip>
defusedxml 0.5.0 <pip>
entrypoints 0.2.3 <pip>
idna 2.6 py36h82fb2a8_1
ipykernel 5.1.0 <pip>
ipython 7.2.0 <pip>
ipython-genutils 0.2.0 <pip>
ipywidgets 7.4.2 <pip>
jedi 0.13.2 <pip>
Jinja2 2.10 <pip>
jsonschema 2.6.0 <pip>
jupyter-client 5.2.4 <pip>
jupyter-core 4.4.0 <pip>
jupyterhub 0.9.4 <pip>
jupyterlab 0.35.3 <pip>
jupyterlab-git 0.5.0 <pip>
jupyterlab-latex 0.4.1 <pip>
jupyterlab-server 0.2.0 <pip>
libedit 3.1.20170329 h6b74fdf_2
libffi 3.2.1 hd88cf55_4
libgcc-ng 7.2.0 hdf63c60_3
libstdcxx-ng 7.2.0 hdf63c60_3
Mako 1.0.7 <pip>
MarkupSafe 1.1.0 <pip>
mistune 0.8.4 <pip>
nbconvert 5.4.0 <pip>
nbformat 4.4.0 <pip>
nbgitpuller 0.6.1 <pip>
nbresuse 0.3.0 <pip>
ncurses 6.1 hf484d3e_0
notebook 5.7.0 <pip>
nteract-on-jupyter 1.9.12 <pip>
openssl 1.0.2p h470a237_1 conda-forge
pamela 0.3.0 <pip>
pandocfilters 1.4.2 <pip>
parso 0.3.1 <pip>
pexpect 4.6.0 <pip>
pickleshare 0.7.5 <pip>
pip 10.0.1 py36_0
prometheus-client 0.5.0 <pip>
prompt-toolkit 2.0.7 <pip>
psutil 5.4.8 <pip>
ptyprocess 0.6.0 <pip>
pycosat 0.6.3 py36h0a5515d_0
pycparser 2.18 py36hf9f622e_1
Pygments 2.3.1 <pip>
pyopenssl 18.0.0 py36_0
pysocks 1.6.8 py36_0
python 3.6.5 hc3d631a_2
python-dateutil 2.7.5 <pip>
python-editor 1.0.3 <pip>
python-oauth2 1.1.0 <pip>
pyzmq 17.1.2 <pip>
readline 7.0 ha6073c6_4
requests 2.18.4 py36he2e5f8d_1
ruamel_yaml 0.15.37 py36h14c3975_2
Send2Trash 1.5.0 <pip>
setuptools 39.2.0 py36_0
six 1.11.0 py36h372c433_1
SQLAlchemy 1.2.15 <pip>
sqlite 3.23.1 he433501_0
terminado 0.8.1 <pip>
testpath 0.4.2 <pip>
tk 8.6.7 hc745277_3
tornado 5.1.1 <pip>
traitlets 4.3.2 <pip>
urllib3 1.22 py36hbe7ace6_0
wcwidth 0.1.7 <pip>
webencodings 0.5.1 <pip>
wheel 0.31.1 py36_0
widgetsnbextension 3.4.2 <pip>
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zlib 1.2.11 ha838bed_2
###Markdown
Read an image and plot with imshow
###Code
from skimage import io
import matplotlib.pyplot as plt
%matplotlib inline
# https://directory.eoportal.org/web/eoportal/satellite-missions/c-missions/copernicus-sentinel-2
url="https://directory.eoportal.org/documents/163813/4091221/Sentinel2_Auto98.jpeg"
image = io.imread(url)
plt.imshow(image)
plt.title("Peruvian mountain scene, 14 July 2017, Sentinel-2\n (credit: ESA, processed by ESA, CC BY-SA 3.0 IGO)")
plt.show()
###Output
_____no_output_____
###Markdown
Using equations with LaTeX notation in markdown. The well-known Pythagorean theorem $x^2 + y^2 = z^2$ has no analogue for higher exponents (Fermat's Last Theorem), meaning the next equation has no positive integer solutions for $n > 2$: $ x^n + y^n = z^n $. You can also use the following notation for your equations:\begin{equation}x^2 + y^2 = z^2\end{equation}
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
title='About as simple as it gets, folks')
ax.grid()
fig.savefig("test.png")
plt.show()
###Output
_____no_output_____
###Markdown
Gym Environment Example. Basic run of a gym environment to collect and plot traces.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import gym
# Disable scientific printing
np.set_printoptions(threshold=10000, suppress=True, precision=5, linewidth=180)
env = gym.make('CartPole-v1')
print("Observation:")
print(env.observation_space)
print("Action:")
print(env.action_space)
ep_obs = list()
for k in range(2):
obs = list()
observation: np.ndarray = env.reset()
obs.append(observation)
for t in range(100):
# env.render()
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
obs.append(observation)
if done:
print(f"Episode {k} finished after {t+1} timesteps")
break
# Curate episode observations
obs = pd.DataFrame(obs, columns=['Cart Position', 'Cart Velocity', 'Pole Angle', 'Pole Tip Vel'])
obs['Episode'] = k
obs['Time Step'] = np.arange(len(obs))
ep_obs.append(obs)
env.close()
ep_obs = pd.concat(ep_obs)
ep_obs.sample(5)
melted_ep = pd.melt(ep_obs, id_vars=['Episode', 'Time Step'],
value_vars=['Cart Position', 'Cart Velocity', 'Pole Angle', 'Pole Tip Vel'],
var_name='Observation',
value_name='Value')
# classify each observation as a cart or pole signal (assumed intent; the original line referenced an undefined `melted_op`)
melted_ep['Type'] = np.where(melted_ep['Observation'].str.startswith('Cart'), 'Cart', 'Pole')
melted_ep.sample(5)
g = sns.relplot(x='Time Step', y='Value', hue='Observation', col='Episode', data=melted_ep)
g.savefig("example_plot.pdf", bbox_inches='tight')
melted_ep['Observation'].map()
###Output
_____no_output_____
###Markdown
Auxiliary Functions
###Code
from baselines.ViT.ViT_LRP import vit_base_patch16_224 as vit_LRP
from baselines.ViT.ViT_explanation_generator import LRP
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
# create heatmap from mask on image
def show_cam_on_image(img, mask):
heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET)
heatmap = np.float32(heatmap) / 255
cam = heatmap + np.float32(img)
cam = cam / np.max(cam)
return cam
# initialize ViT pretrained
model = vit_LRP(pretrained=True).cuda()
model.eval()
attribution_generator = LRP(model)
def generate_visualization(original_image, class_index=None):
transformer_attribution = attribution_generator.generate_LRP(original_image.unsqueeze(0).cuda(), method="transformer_attribution", index=class_index).detach()
transformer_attribution = transformer_attribution.reshape(1, 1, 14, 14)
transformer_attribution = torch.nn.functional.interpolate(transformer_attribution, scale_factor=16, mode='bilinear')
transformer_attribution = transformer_attribution.reshape(224, 224).cuda().data.cpu().numpy()
transformer_attribution = (transformer_attribution - transformer_attribution.min()) / (transformer_attribution.max() - transformer_attribution.min())
image_transformer_attribution = original_image.permute(1, 2, 0).data.cpu().numpy()
image_transformer_attribution = (image_transformer_attribution - image_transformer_attribution.min()) / (image_transformer_attribution.max() - image_transformer_attribution.min())
vis = show_cam_on_image(image_transformer_attribution, transformer_attribution)
vis = np.uint8(255 * vis)
vis = cv2.cvtColor(np.array(vis), cv2.COLOR_RGB2BGR)
return vis
CLS2IDX = {
243: 'bull mastiff',
282: 'tiger cat',
281: 'tabby, tabby cat',
285: 'Egyptian cat',
811: 'space heater',
340: 'zebra',
101: 'tusker',
386: 'African elephant, Loxodonta africana',
385: 'Indian elephant, Elephas maximus',
343: 'warthog',
}
def print_top_classes(predictions, **kwargs):
# Print Top-5 predictions
prob = torch.softmax(predictions, dim=1)
class_indices = predictions.data.topk(5, dim=1)[1][0].tolist()
max_str_len = 0
class_names = []
for cls_idx in class_indices:
class_names.append(CLS2IDX[cls_idx])
if len(CLS2IDX[cls_idx]) > max_str_len:
max_str_len = len(CLS2IDX[cls_idx])
print('Top 5 classes:')
for cls_idx in class_indices:
output_string = '\t{} : {}'.format(cls_idx, CLS2IDX[cls_idx])
output_string += ' ' * (max_str_len - len(CLS2IDX[cls_idx])) + '\t\t'
output_string += 'value = {:.3f}\t prob = {:.1f}%'.format(predictions[0, cls_idx], 100 * prob[0, cls_idx])
print(output_string)
###Output
_____no_output_____
###Markdown
Examples Cat-Dog
###Code
image = Image.open('samples/catdog.png')
dog_cat_image = transform(image)
fig, axs = plt.subplots(1, 3)
axs[0].imshow(image);
axs[0].axis('off');
output = model(dog_cat_image.unsqueeze(0).cuda())
print_top_classes(output)
# cat - the predicted class
cat = generate_visualization(dog_cat_image)
# dog
# generate visualization for class 243: 'bull mastiff'
dog = generate_visualization(dog_cat_image, class_index=243)
axs[1].imshow(cat);
axs[1].axis('off');
axs[2].imshow(dog);
axs[2].axis('off');
###Output
Top 5 classes:
282 : tiger cat value = 10.559 prob = 68.6%
281 : tabby, tabby cat value = 9.059 prob = 15.3%
285 : Egyptian cat value = 8.414 prob = 8.0%
243 : bull mastiff value = 7.425 prob = 3.0%
811 : space heater value = 5.152 prob = 0.3%
###Markdown
Tusker-Zebra
###Code
image = Image.open('samples/el2.png')
tusker_zebra_image = transform(image)
fig, axs = plt.subplots(1, 3)
axs[0].imshow(image);
axs[0].axis('off');
output = model(tusker_zebra_image.unsqueeze(0).cuda())
print_top_classes(output)
# tusker - the predicted class
tusker = generate_visualization(tusker_zebra_image)
# zebra
# generate visualization for class 340: 'zebra'
zebra = generate_visualization(tusker_zebra_image, class_index=340)
axs[1].imshow(tusker);
axs[1].axis('off');
axs[2].imshow(zebra);
axs[2].axis('off');
###Output
Top 5 classes:
101 : tusker value = 11.216 prob = 37.9%
340 : zebra value = 10.973 prob = 29.7%
386 : African elephant, Loxodonta africana value = 10.747 prob = 23.7%
385 : Indian elephant, Elephas maximus value = 9.547 prob = 7.2%
343 : warthog value = 5.566 prob = 0.1%
###Markdown
Run learners in job scripts. Define the learners. We need the following variables: `learners`, a list of learners, and `fnames`, a list of file names, one for each learner.
###Code
%%writefile learners_file.py
import adaptive
from functools import partial
def h(x, offset=0):
import numpy as np
import random
for _ in range(10): # Burn some CPU time just because
np.linalg.eig(np.random.rand(1000, 1000))
a = 0.01
return x + a ** 2 / (a ** 2 + (x - offset) ** 2)
offset = [i / 20 - 0.5 for i in range(20)]
combos = adaptive.utils.named_product(offset=offset)
learners = []
fnames = []
for i, combo in enumerate(combos):
f = partial(h, offset=combo["offset"])
learner = adaptive.Learner1D(f, bounds=(-1, 1))
fnames.append(f"data/{combo}")
learners.append(learner)
learner = adaptive.BalancingLearner(learners)
# Execute the previous code block and plot the learners
from learners_file import *
adaptive.notebook_extension()
learner.load(fnames)
learner.plot()
###Output
_____no_output_____
###Markdown
Option 1, the simple way. After defining the `learners` and `fnames` in a file (above) we can start to run these learners. We split up all learners into separate jobs; all you need to do is specify how many cores per job you want.
###Code
import adaptive_scheduler
def goal(learner):
return learner.npoints > 200
run_manager = adaptive_scheduler.server_support.RunManager(
learners_file="learners_file.py",
goal=goal,
cores_per_job=12,
log_interval=30,
save_interval=30,
)
run_manager.start()
# See the current queue with
import pandas as pd
pd.DataFrame(adaptive_scheduler.slurm.queue()).transpose()
# Read the logfiles and put it in a `pandas.DataFrame`.
# This only returns something when there are log-files to parse!
# So after `run_manager.log_interval` has passed.
run_manager.parse_log_files()
# See the database
pd.DataFrame(run_manager.get_database())
# Run this to STOP managing the database and jobs
run_manager.cancel(), run_manager.cleanup()
###Output
_____no_output_____
###Markdown
Option 2, the manual way The `adaptive_scheduler.server_support.RunManager` above essentially does everything we do below. The Python script that is run on the nodes
###Code
# Make sure to use the headnode's address in the next cell
from adaptive_scheduler import server_support
server_support.get_allowed_url()
%%writefile run_learner.py
import adaptive
from adaptive_scheduler import client_support
from mpi4py.futures import MPIPoolExecutor
from learners_file import learners, fnames
if __name__ == "__main__": # ← use this, see warning @ https://bit.ly/2HAk0GG
url = "tcp://10.75.0.5:57101"
learner, fname = client_support.get_learner(url, learners, fnames)
learner.load(fname)
runner = adaptive.Runner(
learner, executor=MPIPoolExecutor(), shutdown_executor=True, goal=None
)
runner.start_periodic_saving(dict(fname=fname), interval=600)
client_support.log_info(runner, interval=600) # log info in the job output script
runner.ioloop.run_until_complete(runner.task) # wait until runner goal reached
client_support.tell_done(url, fname)
###Output
_____no_output_____
###Markdown
Create a new database
###Code
from adaptive_scheduler import server_support
from learners_file import learners, fnames
db_fname = 'running.json'
server_support.create_empty_db(db_fname, fnames)
###Output
_____no_output_____
###Markdown
Check the running learners in the database. All the ones that are `None` are still `PENDING`, have reached their goal, or are not scheduled.
###Code
server_support.get_database(db_fname)
###Output
_____no_output_____
###Markdown
Start the job scripts
###Code
import asyncio
from adaptive_scheduler import server_support, slurm
from learners_file import learners, fnames
# create unique names for the jobs
job_names = [f"test-job-{i}" for i in range(len(learners))]
# start the "job manager" and the "database manager"
database_task = server_support.start_database_manager("tcp://10.75.0.5:57101", db_fname)
job_task = server_support.start_job_manager(
job_names,
db_fname=db_fname,
cores=2,
interval=60,
run_script="run_learner.py", # optional
job_script_function=slurm.make_job_script, # optional
)
job_task.print_stack()
database_task.print_stack()
# Run this to STOP managing the database and jobs
from adaptive_scheduler import cancel_jobs
job_task.cancel(), database_task.cancel(), cancel_jobs(job_names)
###Output
_____no_output_____
###Markdown
Method 1
###Code
from optionstat import optionstat
options = optionstat.Optionstat()
options.add_trade(45, 0.95, 5, 'Put')
options.add_trade(50, 2.75, -5, 'Put')
options.add_trade(55, 2.65, -5, 'Call')
options.add_trade(60, 0.8, 5, 'Call')
fig, ax = options.plot(current=47)
stat = options.stat()
print(stat)
###Output
{'legs': 4, 'max_profit': 1825.0, 'max_loss': -675.0, 'break_even': [46.35, 58.65]}
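###Markdown
The numbers above can be sanity-checked by hand, assuming the standard 100-share contract multiplier (an assumption about how `optionstat` scales its figures):
```python
credit = (2.75 - 0.95) + (2.65 - 0.80)  # net credit per share for the condor: 3.65
print(credit * 5 * 100)                 # 1825.0 -> matches max_profit
print(-(5 - credit) * 5 * 100)          # -675.0 -> matches max_loss (5 = wing width)
print(50 - credit, 55 + credit)         # 46.35, 58.65 -> matches break_even
```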
###Markdown
Method 2
###Code
from optionstat.optionstat import Optionstat
options = Optionstat()
option_trades = [(45, 0.95, 5, 'Put'),
(50, 2.75, -5, 'Put'),
(55, 2.65, -5, 'Call'),
(60, 0.8, 5, 'Call')]
options.load_from_list(option_trades)
fig, ax = options.plot(current=47)
###Output
_____no_output_____
###Markdown
Main Trading Bot Logic. The first algorithm we will test out is DQN. This is the de facto standard for single-agent RL algorithms at this point. Before we actually start working on the core algorithm we are going to use for the trading bot, we should probably make sure we can pull the appropriate data and clean it if necessary. Perhaps the most obvious place to start is [Yahoo! Finance](https://finance.yahoo.com/). We will set this up so we can run our algorithm with some input parameters, like the ticker code for a stock/crypto, and automate the cleaning and training process. Test on LunarLander
###Code
import gym
import numpy as np
import tensorflow as tf
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
# state_dim defines the number of days to take in as a single state
#TAU = 1e-3 # for soft update of target parameters
lunar_agent = agent.DQNAgent(
state_dim=8,
action_dim=4,
hidden_layer_sizes=[64,64],
buffer_size=10000,
batch_size=64,
discount=0.99,
learning_rate=5e-4,
learning_freq=4
)
# Evaluate untrained model
state = env.reset()
for j in range(200):
state = tf.reshape(state,shape=(1,-1))
action = lunar_agent.act(state, evaluation=True)
#env.render()
state, reward, done, _ = env.step(action)
print(reward)
if done:
break
#env.close()
from collections import deque
import numpy as np
def dqn(n_episodes=100, max_t=100, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
print(i_episode)
state = env.reset()
state = tf.reshape(state,shape=(1,-1))
score = 0
for t in range(max_t):
action = lunar_agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
next_state = tf.reshape(next_state,shape=(1,-1))
lunar_agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
# if np.mean(scores_window)>=200.0:
# print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
# torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
# break
return scores
dqn()
###Output
_____no_output_____
###Markdown
Trading Agent
###Code
from SmartTradingBot import agent, utils, trainer
from SmartTradingBot.utils import get_data
train, test = get_data(['BTC-USD'], start_date="2019-06-01", end_date="2020-09-01")
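# For reference, one way such a data helper could be built directly (an assumption:
# SmartTradingBot's get_data implementation is not shown in this notebook, and
# yfinance is an extra third-party dependency rather than part of this project):
def fetch_close_prices_sketch(ticker, start, end, split=0.8):
    import yfinance as yf                                   # install separately
    px = yf.download(ticker, start=start, end=end)["Close"].dropna()
    cut = int(len(px) * split)                              # chronological split
    return px.iloc[:cut], px.iloc[cut:]
# e.g. train, test = fetch_close_prices_sketch("BTC-USD", "2019-06-01", "2020-09-01")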
import seaborn as sns
#sns.lineplot(train.index, train)
normalised_train = utils.normalised_difference(data=train)
signorm_train = utils.sigmoid(normalised_train)
sns.lineplot(train.index[:-1],signorm_train)
trading_agent = agent.DQNAgent(
state_dim=10, # 10 days data is one "state1"/feature
action_dim=3, # [Hold,Buy,Sell] = [0,1,2]
hidden_layer_sizes=[128, 256, 256, 128],
buffer_size=1000,
batch_size=32,
discount=0.99,
learning_rate=1e-3,
learning_freq=4
)
n_episodes = 50
results=[]
for episode in range(1, n_episodes):
    result = trainer.train_bot(agent=trading_agent, data=signorm_train, episode=episode, n_episodes=n_episodes)
    results.append(result)  # collect whatever train_bot returns for this episode
results
###Output
_____no_output_____
###Markdown
Example Usage for `mix_gamma_vi`
###Code
from mix_gamma_vi import mix_gamma_vi
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
###Output
_____no_output_____
###Markdown
Generate Dataset. Generate 10,000 data points from a mixture of two gamma distributions. Call this tensor `x`.
###Code
N = 10000
pi_true = [0.5, 0.5]
a_true = [20, 80 ]
B_true = [20, 40 ]
mix_gamma = tfp.distributions.MixtureSameFamily(
mixture_distribution=tfp.distributions.Categorical(probs=pi_true),
components_distribution=tfp.distributions.Gamma(concentration=a_true, rate=B_true))
x = mix_gamma.sample(N)
###Output
_____no_output_____
###Markdown
Variational Inference Under the Shape-Mean Parameterisation (Recommended) The default parameterisation for the function `mix_gamma_vi` is the mean-shape parameterisation under which the variational approximations to the posterior are\begin{align*}q^*(\mathbf{\pi}) &= \mathrm{Dirichlet} \left( \zeta_1, ..., \zeta_K \right) , \\q^*(\alpha_k) &= \mathcal{N}(\hat{\alpha}_k, \sigma_k^2) , \\q^* (\mu_k) &= \operatorname{Inv-Gamma} \left( \gamma_k, \lambda_k \right) . \end{align*}The product approximates the joint posterior\begin{align*}p(\mathbf{\pi}, \mathbf{\alpha}, \mathbf{\mu} \mid \mathbf{x}) &\approx q^*(\mathbf{\pi}) \prod_{k=1}^K q^*(\alpha_k) q^*(\mu_k).\end{align*}
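For reference (a standard gamma-distribution fact, not specific to this package): the shape-mean and shape-rate parameterisations are linked through the mean, since for $X \sim \operatorname{Gamma}(\alpha, \beta)$ with rate $\beta$ we have $\mathbb{E}[X] = \alpha/\beta = \mu$, so fitted values can be translated via \begin{align*}\beta_k &= \frac{\alpha_k}{\mu_k}.\end{align*}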
###Code
# Fit a model
fit = mix_gamma_vi(x, 2)
# Get the fitted distribution
distribution = fit.distribution()
# Get the means of the parameters under the fitted posterior
distribution.mean()
# Get the posterior standard deviations
distribution.stddev()
###Output
_____no_output_____
###Markdown
Variational Inference Under the Shape-Rate Parameterisation (Not Recommended) The traditional parameterisation for the gamma distribution is the shape-rate parameterisation, which this package also supports (although it is not recommended). In this case, the variational approximations to the posterior are\begin{align*}q^*(\mathbf{\pi}) &= \mathrm{Dirichlet} \left( \zeta_1, ..., \zeta_K \right) , \\q^*(\alpha_k) &= \mathcal{N}(\hat{\alpha}_k, \sigma_k^2) , \\q^* (\beta_k) &= \operatorname{Gamma} \left( \gamma_k, \lambda_k \right) . \end{align*}The product approximates the joint posterior\begin{align*}p(\mathbf{\pi}, \mathbf{\alpha}, \mathbf{\beta} \mid \mathbf{x}) &\approx q^*(\mathbf{\pi}) \prod_{k=1}^K q^*(\alpha_k) q^*(\beta_k) .\end{align*}
###Code
# Fit a model
fit = mix_gamma_vi(x, 2, parameterisation="shape-rate")
# Get the fitted distribution
distribution = fit.distribution()
# Get the means of the parameters under the fitted posterior
distribution.mean()
# Get the posterior standard deviations
distribution.stddev()
###Output
_____no_output_____
###Markdown
Basic example. Without additional parameters, the PlotTiled class allows you to efficiently arrange subplots based on the dimensions specified by the index plot arguments (ind_pltx and ind_plty). Here, the emphasis lies on
###Code
reload(pltpg)
do = pltpg.PlotPageData.from_df(df=df_0, ind_pltx=['pt'], ind_plty=['nd'],
ind_axx=['swyr_vl'], series=['coarse_bins_0_6_12_24_48_168_10000'],
values=['eval_comp_net'])
page_kws = dict(page_dim=(5,3), dpi=100, left=0.1, right=0.9, bottom=0., top=0.9)
label_kws = dict(label_format=' ', label_subset=[-1])
plot = pltpg.PlotTiled(do, kind_def='StepPlot', **page_kws)
plt.show()
###Output
kwargs {}
Getting data from DataFrame.
comp_ichg 208
comp_idch 208
ichg 174
idch 183
iteration 3
min 208
nevent 923
res_ichg 140
res_idch 144
slot_max 6816
slot_min 6816
idch_final 3590
ichg_final 3133
eff 2
kind 5
run_id 11
swyr_vl 11
swmh_vl 1
pp_id 4
nd_id 2
pt_id 2
nd 2
pt 2
pp 4
fine_bins 271
fine_bins_fine 271
coarse_bins_0_6_12_24_48_168_10000 9
fine_bins_mid 271
fine_bins_fine_mid 271
dpi 100
{'StepPlot': [('eval_comp_net', '(0,10]'), ('eval_comp_net', '(0,6]'), ('eval_comp_net', '(10,120]'), ('eval_comp_net', '(12,24]'), ('eval_comp_net', '(120,10000]'), ('eval_comp_net', '(168,10000]'), ('eval_comp_net', '(24,48]'), ('eval_comp_net', '(48,168]'), ('eval_comp_net', '(6,12]')]}
not in pkwd.
StepPlot
Plotting ('HYD6_STO', 'CH0') ['pt'] ['nd'] StepPlot
{'xlabel': ['swyr_vl'], 'ylabel': ['eval_comp_net'], 'title': "('HYD6_STO',)\n('CH0',)", 'gridpos': (0, 0)}
not in pkwd.
StepPlot
Plotting ('HYD6_STO', 'DE0') ['pt'] ['nd'] StepPlot
{'xlabel': ['swyr_vl'], 'ylabel': ['eval_comp_net'], 'title': "('HYD6_STO',)\n('DE0',)", 'gridpos': (0, 1)}
not in pkwd.
StepPlot
Plotting ('LIO_STO', 'CH0') ['pt'] ['nd'] StepPlot
{'xlabel': ['swyr_vl'], 'ylabel': ['eval_comp_net'], 'title': "('LIO_STO',)\n('CH0',)", 'gridpos': (1, 0)}
not in pkwd.
StepPlot
Plotting ('LIO_STO', 'DE0') ['pt'] ['nd'] StepPlot
{'xlabel': ['swyr_vl'], 'ylabel': ['eval_comp_net'], 'title': "('LIO_STO',)\n('DE0',)", 'gridpos': (1, 1)}
###Markdown
Simple example. We first create some dummy data: weights for three treatments.
###Code
np.random.seed(0)
N = 8
group_names = ["Control", "Treatment 1", "Treatment 2"]
groups = pd.Series(
np.repeat(group_names, N),
index=[f"Participant_{i+1}" for i in range(N * len(group_names))],
name="Group",
)
weights = pd.Series(
data=np.random.randn(groups.shape[0], 1)[:, 0] * 5 + 15,
index=groups.index,
name="Weight",
)
# add difference between groups
weights += groups.map(dict(zip(group_names, [0, 0.1, 10]))).values
###Output
_____no_output_____
###Markdown
We can plot this data elegantly with seaborn
###Code
ax = sns.boxplot(y=weights, x=groups)
###Output
_____no_output_____
###Markdown
If you want to compute significance and show it on the plot, you can simply use statsplot with almost the same API
###Code
ax, stats = stp.statsplot(variable=weights, test_variable=groups)
stats
###Output
_____no_output_____
###Markdown
If you want to show the value instead of the stars, you can modify the significance labels.
###Code
ax, stats = stp.statsplot(
variable=weights,
test_variable=groups,
labelkws={"show_ns": True, "use_stars": False},
)
###Output
_____no_output_____
###Markdown
Example with nested groups
###Code
# create data from above groups with before treatment and after treatment time point
df = pd.DataFrame(groups).reset_index().rename(columns={"index": "Participant"})
df_before = df.copy()
df_before["Timepoint"] = "before"
df_before["Measurement"] = np.random.randn(df.shape[0], 1)[:, 0] * 5 + 12
df_after = df.copy()
df_after["Timepoint"] = "after"
df_after["Measurement"] = weights.values
df = pd.concat([df_before, df_after], ignore_index=True)
del df_before, df_after
df.index = "Sample_" + df.index.astype(str)
df.head()
ax = sns.boxplot(
data=df, y="Measurement", hue="Timepoint", x="Group", hue_order=["before", "after"]
)
# and here the statsplot version of it.
# note that we use a paired t-test as we compare the same patients before and after treatment
ax, stats = stp.statsplot(
variable=df.Measurement,
test_variable=df.Timepoint,
grouping_variable=df.Group,
test="ttest_rel",
order_test=["before", "after"],
)
stats
###Output
_____no_output_____
###Markdown
Example with many variables. If you have many similar variables, you can put them in a `StatsTable` and then apply the statistics once. This example is based on microbiome profiling
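For reference, the textbook centered log-ratio (CLR) transform used below is only a couple of lines of NumPy; `statsplot.transformations.clr` may handle zeros and pseudocounts differently, and this sketch assumes samples are rows:
```python
import numpy as np

def clr_sketch(relab, pseudocount=1e-6, log=np.log2):
    x = log(np.asarray(relab, dtype=float) + pseudocount)  # guard against log(0)
    return x - x.mean(axis=1, keepdims=True)               # subtract each sample's mean log
```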
###Code
relab = pd.read_table("test/data/micobiota_relab.tsv.gz", index_col=0)
Tax = pd.read_table("test/data/micobiota_taxonomy.tsv.gz", index_col=0)
metadata = pd.read_table("test/data/micobiota_metadata.tsv.gz", index_col=0)
# transform data with centered log transform
from statsplot import transformations
clr_data = transformations.clr(relab, log=np.log2)
# put everything together in a MetaTable
D = MetaTable(clr_data, obs=metadata, var=Tax)
# create stats table
ST = stp.StatsTable(
D,
test_variable="Group",
grouping_variable="Source",
label_variable="Label",
data_unit="centered log$_2$ ratio",
test="welch",
ref_group="RT",
)
ST.plot("MAG001")
plt.show()
ST.plot("MAG002")
# make a volcano plot
axes = ST.vulcanoplot(hue="phylum")
###Output
_____no_output_____
###Markdown
PCA. The following functions represent commonly used plots for dimensionality reduction
###Code
from statsplot import DimRed
pca = DimRed(clr_data)
pca.plot_explained_variance_ratio()
pca.plot_components(
plot_ellipse=True,
groups=metadata.Group,
order_groups=["RT", "Hot"],
colors=["grey", "darkred"],
)
pca.plot_components(label_points=True)
pca.plot_biplot(labels=Tax.Label)
###Output
automatic selection selected 10 to visualize, which is probably to much. I select only 8
###Markdown
Stats table with one grouping variable. This shows how to construct a `StatsTable` without the MetaTable, and is also used for testing.
###Code
# create stats table
ST = stp.StatsTable(
relab,
test_variable=metadata.Group,
label_variable=Tax.Label,
data_unit="Relative abundance",
test="mannwhitneyu",
ref_group="RT",
)
ST.vulcanoplot()
ST.stats
# keep in mind that the stats table here has one header row less than if used with a grouping variable
ST.plot("MAG001")
###Output
_____no_output_____
###Markdown
Univariate models. Gaussian observations. Locally constant (random walk). First, we start by creating a simple univariate Gaussian random walk. This will correspond to a dynamic linear model of the form\begin{align} y_t &\sim \mathcal{N}\left(\theta_t,V\right) \\\theta_t &\sim \mathcal{N}\left(\theta_{t-1},W\right)\end{align}We start by defining the variance of the latent states as $W=1.5$.
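For readers more comfortable with Python, the same generative model can be sketched in a few lines of NumPy (purely illustrative; the Scala code below is what actually generates the data used here):
```python
import numpy as np

rng = np.random.default_rng(0)
W, V, n = 1.5, 4.0, 1000
theta = np.cumsum(rng.normal(0.0, np.sqrt(W), size=n))  # theta_t = theta_{t-1} + N(0, W), theta_0 = 0
y = theta + rng.normal(0.0, np.sqrt(V), size=n)         # y_t = theta_t + N(0, V)
```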
###Code
val structure = UnivariateStructure.createLocallyConstant(W = 1.5)
###Output
_____no_output_____
###Markdown
And generate a chain of $n=1000$ states with an initial state of $\theta_0 = 0$.
###Code
val states = StateGenerator.states(nobs = 1000,
structure = structure,
state0 = DenseVector[Double](0.0))
###Output
Sep 26, 2018 9:57:40 PM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
Sep 26, 2018 9:57:40 PM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
Sep 26, 2018 9:57:40 PM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
Sep 26, 2018 9:57:40 PM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
###Markdown
We can now generate the observations from the states, using an observation variance $V=4.0$.
###Code
val observations = UnivariateGenerator.gaussian(states = states,
structure = structure,
V = 4.0)
import com.cibo.evilplot._
import com.cibo.evilplot.plot._
import com.cibo.evilplot.plot.aesthetics.DefaultTheme._
import com.cibo.evilplot.numeric.Point
val states_plot = ScatterPlot(Seq.tabulate(100) { i => Point(i.toDouble, states(i)(0)) })
val obs_plot = LinePlot(Seq.tabulate(100) { i => Point(i.toDouble, observations(i)) })
val plot = Overlay(states_plot, obs_plot).render()
publish.png(plot.asBufferedImage)
###Output
_____no_output_____
###Markdown
A stock exchange example. Let's assume we are following the end-of-day stock price of the company Foo Ltd. We will simulate a stock price history for 365 days (each timepoint is a day) for a relatively stable stock price, with no trend or seasonality but with natural fluctuation. We also assume that the stock's initial price at day 0 is around \$100. As such, we will set an initial state of $\theta_0 = 100$, a low underlying variance ($W=0.01$) and some noise in the data ($V=1.0$).
###Code
val stock_structure = UnivariateStructure.createLocallyConstant(W = 0.01)
val states = StateGenerator.states(nobs = 365,
structure = stock_structure,
state0 = DenseVector[Double](100.0))
val observations = UnivariateGenerator.gaussian(states = states,
structure = stock_structure,
V = 1.0)
Scatter((1 until 365), observations.toSeq).plot()
###Output
_____no_output_____
###Markdown
Now (ignoring the complexities of the stock market), let's suppose that on day $t=100$ this company announces a revolutionary breakthrough. We want to incorporate into our simulated data a jump to _twice_ its stock price, regardless of the value at $t=100$. We can do this by changing the data at the _state_ level at any point. First we create a chain for a "normal" random walk and then we append another one with a starting value of twice the last state of the first chain.
###Code
val states_pre = StateGenerator.states(nobs = 100, structure = stock_structure, state0 = DenseVector[Double](100.0))
val states_post = StateGenerator.states(nobs = 265, structure = stock_structure, state0 = states_pre.last * 2.0)
val states = states_pre ++ states_post
###Output
_____no_output_____
###Markdown
It is important to note that due to the Markovian nature of the SSM, changes at the state level will _propagate_ to future states. This means that the jump in stock price will be propagated to future values.
###Code
val observations = UnivariateGenerator.gaussian(states = states,
structure = stock_structure,
V = 1.0)
Scatter((1 until 365), observations.toSeq).plot()
###Output
_____no_output_____
###Markdown
Locally linear (mean and trend). For a locally linear model, we assume the observation and state matrices to be, respectively,$$ \mathsf{F} = \begin{bmatrix} 1 & 0 \end{bmatrix},\qquad \mathsf{G} = \begin{bmatrix} 1 & 1 \\ 0 & 1\end{bmatrix}.$$The latent state will then correspond to $\theta_t = \left(\mu, \tau\right)$, that is, two components representing the mean and the trend, respectively. The model will then take the form\begin{align}y_t \sim \mathcal{N}\left(\mathsf{F}\theta_t,V\right) \\\theta_t \sim \mathcal{N}\left(\mathsf{G}\theta_{t-1},\mathsf{W}\right)\end{align}The state covariance will now be a matrix$$ \mathsf{W} = \begin{bmatrix} W_{\tau} & 0 \\ 0 & W_{\mu} \end{bmatrix}$$representing the variances of the underlying trend and mean, respectively. Stock exchange (again). Let's now simulate the stock price of the Foo company, but assuming there is a trend in the values rather than a jump. We will create a mean that varies a bit ($W_{\mu}=0.5$) but a rather smooth trend ($W_{\tau}=0.05$), with some noise in the observations ($V=2.0$).
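To make the roles of $\mathsf{F}$ and $\mathsf{G}$ concrete, here is a tiny NumPy sketch using the textbook ordering $\theta_t = (\mu, \tau)$ (purely illustrative; the internal state ordering used by the library in the Scala code below may differ):
```python
import numpy as np

F = np.array([[1.0, 0.0]])       # observe only the mean component
G = np.array([[1.0, 1.0],        # the mean picks up the trend at every step
              [0.0, 1.0]])       # the trend itself follows a random walk

theta = np.array([100.0, -1.0])  # (mean, trend)
for _ in range(3):
    theta = G @ theta
    print(theta, "observed level:", float((F @ theta)[0]))
```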
###Code
val W = DenseMatrix.eye[Double](2)
W(0,0) = 0.05
W(1,1) = 0.5
val stock_structure = UnivariateStructure.createLocallyLinear(W = W)
val states = StateGenerator.states(nobs = 365,
structure = stock_structure,
state0 = DenseVector[Double](100.0, -1.0))
val observations = UnivariateGenerator.gaussian(states = states,
structure = stock_structure,
V = 2.0)
Scatter((1 until 365), observations.toSeq).plot()
###Output
_____no_output_____
###Markdown
One of the advantages of this formulation is that we can easily decompose the states into the mean and the trend. For instance:
###Code
val x = 1 until 365
val plot = Seq(
Scatter(
x, states.map(_(0)), name = "trend"
),
Scatter(
x, states.map(_(1)), name = "mean"
)
)
plot.plot(title = "Locally linear")
###Output
_____no_output_____
###Markdown
These clusters are just latent structure in the eigenspectrum of the Texan county graph; there is no data driving them. You can see this yourself just by using arbitrary data in place of X:
###Code
Xrandom = np.random.uniform(-1,1, size=X.shape) # we don't need to set a seed because it literally doesn't matter
votes.assign(labels=SPENC(n_clusters=10, gamma=0).fit(Xrandom, W=Wm).labels_)\
.plot("labels", cmap='rainbow')
###Output
_____no_output_____
###Markdown
Now, note the distribution of affinities in the final affinity matrix:
###Code
plt.hist(aspatial.affinity_matrix_.toarray()[aspatial.affinity_matrix_.nonzero()].flatten(), bins=100)
plt.xlim(-.1,1.1)
###Output
_____no_output_____
###Markdown
OK, let's spread that out a bit
###Code
# with a new gamma=200
g200 = SPENC(n_clusters=10, gamma=200).fit(X, W=Wm)
# with a new gamma=800
g800 = SPENC(n_clusters=10, gamma=800).fit(X, W=Wm)
plt.hist(g200.affinity_matrix_.toarray()[g200.affinity_matrix_.nonzero()].flatten(), bins=40, color='k')
plt.hist(g800.affinity_matrix_.toarray()[g800.affinity_matrix_.nonzero()].flatten(),
bins=40, alpha=.5, linewidth=3)
plt.xlim(-.1,1.1)
votes.assign(labels=g200.labels_).plot("labels", cmap='rainbow')
votes.assign(labels=g800.labels_).plot("labels", cmap='rainbow')
###Output
_____no_output_____
###Markdown
And, with a higher-order weight:
###Code
Wi_4 = lp.higher_order(Wm, 4).sparse
g200_eta4 = SPENC(n_clusters=10, gamma=200).fit(X, W=Wi_4)
votes.assign(labels=g200_eta4.labels_).plot("labels", cmap='rainbow')
###Output
_____no_output_____
###Markdown
Decomposing a unitary matrix into quantum gates. This tool is useful when you have a $2^n \times 2^n$ matrix representing a unitary operator acting on a register of $n$ qubits and want to implement this operator in Q#. This notebook demonstrates how to use it. Tl;DR
###Code
import numpy, quantum_decomp
SWAP = numpy.array([[1,0,0,0],[0,0,1,0],[0,1,0,0], [0,0,0,1]])
print(quantum_decomp.matrix_to_qsharp(SWAP, op_name='Swap'))
###Output
operation Swap (qs : Qubit[]) : Unit {
CNOT(qs[1], qs[0]);
CNOT(qs[0], qs[1]);
CNOT(qs[1], qs[0]);
}
###Markdown
Example. Consider the following matrix:$$A = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & e^{\frac{2\pi i}{3}} & e^{\frac{4 \pi i}{3}} & 0 \\ 1 & e^{\frac{4\pi i}{3}} & e^{\frac{2 \pi i}{3}} & 0 \\ 0 & 0 & 0 & -i \sqrt{3} \end{pmatrix}$$This is the $3\times 3$ [DFT matrix](https://en.wikipedia.org/wiki/DFT_matrix), padded to have shape $4 \times 4$. Implementing such a matrix was one way to solve problem B2 in the [Microsoft Q# Coding Contest - Winter 2019](https://codeforces.com/blog/entry/65579). [Here](https://assets.codeforces.com/rounds/1116/contest-editorial.pdf) you can find another approach to implementing this matrix, but let's see how we can implement it using our tool and Q#. First, let's construct this matrix:
###Code
import numpy as np
w = np.exp((2j / 3) * np.pi)
A = np.array([[1, 1, 1, 0],
[1, w, w * w, 0],
[1, w * w, w, 0],
[0, 0, 0, -1j*np.sqrt(3)]]) / np.sqrt(3)
print(A)
###Output
[[ 0.57735027+0.j 0.57735027+0.j 0.57735027+0.j 0. +0.j ]
[ 0.57735027+0.j -0.28867513+0.5j -0.28867513-0.5j 0. +0.j ]
[ 0.57735027+0.j -0.28867513-0.5j -0.28867513+0.5j 0. +0.j ]
[ 0. +0.j 0. +0.j 0. +0.j 0. -1.j ]]
###Markdown
Now, let's use the quantum_decomp library to construct the Q# code.
###Code
import quantum_decomp as qd
print(qd.matrix_to_qsharp(A))
###Output
operation ApplyUnitaryMatrix (qs : Qubit[]) : Unit {
CNOT(qs[1], qs[0]);
Controlled Ry([qs[0]], (-1.570796326794897, qs[1]));
X(qs[1]);
Controlled Ry([qs[1]], (-1.910633236249018, qs[0]));
X(qs[1]);
Controlled Rz([qs[0]], (-4.712388980384691, qs[1]));
Controlled Ry([qs[0]], (-1.570796326794897, qs[1]));
Controlled Rz([qs[0]], (-1.570796326794896, qs[1]));
Controlled Rz([qs[1]], (-1.570796326794897, qs[0]));
Controlled Ry([qs[1]], (-3.141592653589793, qs[0]));
Controlled Rz([qs[1]], (1.570796326794897, qs[0]));
}
###Markdown
As you can see from the code in the qsharp/ directory of this repository, this code indeed implements the given unitary matrix. You can also get the same sequence of operations as a sequence of gates, where each gate is an instance of GateFC or GateSingle, which are internal classes implementing a fully controlled gate or a gate acting on a single qubit.
###Code
gates = qd.matrix_to_gates(A)
print('\n'.join(map(str, gates)))
###Output
X on bit 0, fully controlled
Ry(1.5707963267948966) on bit 1, fully controlled
X on bit 1
Ry(1.9106332362490184) on bit 0, fully controlled
X on bit 1
Rz(4.712388980384691) on bit 1, fully controlled
Ry(1.5707963267948966) on bit 1, fully controlled
Rz(1.570796326794896) on bit 1, fully controlled
Rz(1.5707963267948972) on bit 0, fully controlled
Ry(3.141592653589793) on bit 0, fully controlled
Rz(-1.5707963267948972) on bit 0, fully controlled
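###Markdown
As a small illustration of the internal classes mentioned above (a sketch; `gates` comes from the previous cell):
###Code
# Each gate is either a GateSingle (acts on one qubit) or a GateFC (fully controlled).
print([type(gate).__name__ for gate in gates])
###Output
_____no_output_____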
###Markdown
This can be represented by a quantum circuit (made with [Qcircuit](http://physics.unm.edu/CQuIC/Qcircuit/)): This is how you can view the decomposition of the matrix into 2-level gates, which is used to build the sequence of gates.
###Code
print('\n'.join(map(str,qd.two_level_decompose_gray(A))))
###Output
[[0.+0.j 1.+0.j]
[1.+0.j 0.+0.j]] on (2, 3)
[[ 0.70710678-0.00000000e+00j 0.70710678-8.65956056e-17j]
[-0.70710678-8.65956056e-17j 0.70710678-0.00000000e+00j]] on (1, 3)
[[ 0.57735027-0.00000000e+00j 0.81649658-9.99919924e-17j]
[-0.81649658-9.99919924e-17j 0.57735027-0.00000000e+00j]] on (0, 1)
[[-7.07106781e-01+8.65956056e-17j -3.57316295e-16-7.07106781e-01j]
[ 3.57316295e-16-7.07106781e-01j -7.07106781e-01-8.65956056e-17j]] on (1, 3)
[[ 0.00000000e+00+0.j -5.31862526e-16-1.j]
[ 0.00000000e+00-1.j 0.00000000e+00+0.j]] on (2, 3)
###Markdown
Those matrices are ordered in the order they are applied, so to write them as a matrix product, we have to reverse them. This product can be written as follows: $$A = \begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix}_{2,3}\begin{pmatrix} -\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}i \\ -\frac{\sqrt{2}}{2}i & -\frac{\sqrt{2}}{2} \end{pmatrix}_{1,3}\begin{pmatrix} \sqrt{\frac{1}{3}} & \sqrt{\frac{2}{3}} \\ -\sqrt{\frac{2}{3}} & \sqrt{\frac{1}{3}} \end{pmatrix}_{0,1}\begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix}_{1,3}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}_{2,3}$$Or, in full form:$$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & -i & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -\frac{\sqrt{2}}{2} & 0 & -\frac{\sqrt{2}}{2}i \\ 0 & 0 & 1 & 0 \\ 0 & -\frac{\sqrt{2}}{2}i & 0 & -\frac{\sqrt{2}}{2} \end{pmatrix}\begin{pmatrix} \sqrt{\frac{1}{3}} & \sqrt{\frac{2}{3}} & 0 & 0 \\ -\sqrt{\frac{2}{3}} & \sqrt{\frac{1}{3}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} \\ 0 & 0 & 1 & 0 \\ 0 & -\frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$ Output sizeThe number of Q# commands this tool produces is proportional to the number of elements in the matrix, which is $O(4^n)$, where $n$ is the number of qubits in the register. More accurately, it's asymptotically $2 \cdot 4^n$. As it grows very fast, unfortunately this tool is useful only for small values of $n$.See a detailed experimental complexity analysis of this tool in [this notebook](https://github.com/fedimser/quantum_decomp/blob/master/complexity.ipynb). ImplementationImplementation is based on:* Article ["Decomposition of unitary matrices and quantum gates"](https://arxiv.org/pdf/1210.7366.pdf) by Chi-Kwong Li and Rebecca Roberts;* Book "Quantum Computing: From Linear Algebra to Physical Implementations" (chapter 4) by Mikio Nakahara and Tetsuo Ohmi.It consists of the following steps:1. Decomposing the matrix into 2-level unitary matrices;2. Using the Gray code to transform those matrices into matrices acting on states whose indices differ in only one bit;3. Implementing those matrices as fully controlled single-qubit gates;4. Implementing single-qubit gates as Rx, Ry and R1 gates;5. Optimizations: cancelling X gates and removing identity gates. Paper The algorithm used in this tool is outlined in detail in this [paper](https://github.com/fedimser/quantum_decomp/blob/master/res/Fedoriaka2019Decomposition.pdf). Updates Optimized algorithm for 4x4 unitaries (Dec 2019)In the case of a 4x4 unitary one can implement it in a much more efficient way. The generic algorithm described above will produce 18 controlled gates, each of which should be implemented with at least 2 CNOTs and 3 single-qubit gates.As proven in [this paper](https://arxiv.org/pdf/quant-ph/0308006.pdf), it's possible to implement any 4x4 unitary using no more than 3 CNOT gates and 15 elementary single-qubit Ry and Rz gates.An algorithm for such optimal decomposition is now implemented in this library. To use it, pass `optimize=True` to the functions performing decomposition.This example shows the optimized decomposition for the matrix A defined above.
###Code
qd.matrix_to_gates(A, optimize=True)
print(qd.matrix_to_qsharp(A, optimize=True))
###Output
operation ApplyUnitaryMatrix (qs : Qubit[]) : Unit {
Rz(2.700933836565789, qs[0]);
Ry(-1.201442806989828, qs[0]);
Rz(-0.974689532916684, qs[0]);
Rz(2.700933836565789, qs[1]);
Ry(-1.201442806989829, qs[1]);
Rz(-2.545485852364665, qs[1]);
CNOT(qs[1], qs[0]);
Rz(4.022910287637800, qs[0]);
Ry(-0.400926166464297, qs[1]);
CNOT(qs[0], qs[1]);
Ry(8.142534160257075, qs[1]);
CNOT(qs[1], qs[0]);
Rz(2.545485857153846, qs[0]);
Ry(-1.940149846599965, qs[0]);
Rz(-0.440658817024004, qs[0]);
R1(3.141592653589793, qs[0]);
Rz(0.974689528127503, qs[1]);
Ry(-1.940149846599965, qs[1]);
Rz(-3.582251470613797, qs[1]);
}
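###Markdown
As a rough, optional check of the $2 \cdot 4^n$ estimate from the "Output size" section, a sketch counting the gates produced for random unitaries (`unitary_group` from scipy is also used later in this notebook):
###Code
# Sketch: compare the actual number of produced gates with the asymptotic estimate 2 * 4^n.
from scipy.stats import unitary_group

for n in range(1, 4):
    U = unitary_group.rvs(2 ** n)
    gate_count = len(qd.matrix_to_gates(U))
    print(f"n={n}: {gate_count} gates, estimate ~{2 * 4 ** n}")
###Output
_____no_output_____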
###Markdown
Cirq support (Dec 2019)Now it's possible to convert a unitary matrix to a [Cirq](https://github.com/quantumlib/Cirq) circuit.You don't need to install Cirq to use the library, unless you want the output as a Cirq circuit.See the examples below.
###Code
print(qd.matrix_to_cirq_circuit(SWAP))
qd.matrix_to_cirq_circuit(A)
###Output
_____no_output_____
###Markdown
To verify it's correct, let's convert a random unitary to a Cirq circuit, then convert the circuit back to a matrix, and make sure we get the same matrix.
###Code
from scipy.stats import unitary_group
U = unitary_group.rvs(16)
np.linalg.norm(U - qd.matrix_to_cirq_circuit(U).unitary())
###Output
_____no_output_____
###Markdown
Qiskit support (Dec 2020)*Feature added by [Ryan Vandersmith](https://github.com/rvanasa).*
###Code
print(qd.matrix_to_qiskit_circuit(SWAP))
A_qiskit = qd.matrix_to_qiskit_circuit(A)
print(A_qiskit)
# Verify correctness of the decomposition.
import qiskit.quantum_info as qi
np.linalg.norm(qi.Operator(A_qiskit).data - A)
###Output
_____no_output_____
###Markdown
Dorado sensitivity calculator examples Imports
###Code
from astropy import units as u
from astropy.coordinates import GeocentricTrueEcliptic, get_sun, SkyCoord
from astropy.time import Time
from astropy.visualization import quantity_support
from matplotlib import pyplot as plt
import numpy as np
import synphot
import dorado.sensitivity
###Output
_____no_output_____
###Markdown
Plot filter efficiencyNote that this is converted from the effective area curve assuming a fiducial collecting area of 100 cm$^2$.
###Code
dorado.sensitivity.bandpasses.NUV_D.plot(ylog=True, title=r'$\mathrm{NUV}_\mathrm{D}$ sensitivity')
###Output
_____no_output_____
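###Markdown
The conversion mentioned above amounts to dividing the effective area curve by the fiducial 100 cm$^2$ collecting area. A minimal sketch with made-up numbers (the real $\mathrm{NUV}_\mathrm{D}$ curve ships with the package):
###Code
from astropy import units as u
import numpy as np

# Hypothetical effective-area samples; the real curve is bundled with dorado.sensitivity.
effective_area = np.array([10.0, 25.0, 18.0]) * u.cm**2
fiducial_area = 100 * u.cm**2

# Dimensionless efficiency = effective area / fiducial collecting area.
efficiency = (effective_area / fiducial_area).decompose()
print(efficiency)
###Output
_____no_output_____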
###Markdown
Example SNR calculationThis example is for a 10 minute observation of a flat-spectrum 21 AB mag source in "high" zodiacal light conditions (looking in the plane of the ecliptic, but anti-sunward), observing while on the night side of the Earth.
###Code
time = Time('2020-10-31 12:33:12')
sun = get_sun(time).transform_to(GeocentricTrueEcliptic(equinox=time))
coord = SkyCoord(sun.lon + 180*u.deg, 0*u.deg, frame=GeocentricTrueEcliptic(equinox=time))
source = synphot.SourceSpectrum(synphot.ConstFlux1D, amplitude=21 * u.ABmag)
dorado.sensitivity.get_snr(source, exptime=10*u.min, coord=coord, time=time, night=True)
###Output
_____no_output_____
###Markdown
Limiting magnitude calculationCalculate the SNR=5 limiting magnitude as a function of exposure time for a flat-spectrum source at the position of NGC 4993.
###Code
ax = plt.axes()
ax.invert_yaxis()
ax.set_xlabel('Exposure time (s)')
ax.set_ylabel('Limiting magnitude (AB)')
exptimes = np.linspace(0, 1000) * u.s
coord = SkyCoord.from_name('NGC 4993')
time = Time('2017-08-17 17:54:00')
for night in [False, True]:
limmags = dorado.sensitivity.get_limmag(
synphot.SourceSpectrum(synphot.ConstFlux1D, amplitude=0 * u.ABmag), snr=5, exptime=exptimes, coord=coord, time=time, night=night)
ax.plot(exptimes, limmags, label='night' if night else 'day')
ax.legend()
###Output
/Users/lpsinger/Library/Caches/pypoetry/virtualenvs/dorado-sensitivity-RYVm8gWH-py3.8/lib/python3.8/site-packages/astropy/units/quantity.py:477: RuntimeWarning: divide by zero encountered in true_divide
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
/Users/lpsinger/Library/Caches/pypoetry/virtualenvs/dorado-sensitivity-RYVm8gWH-py3.8/lib/python3.8/site-packages/astropy/units/quantity.py:477: RuntimeWarning: divide by zero encountered in true_divide
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
###Markdown
Round trip checkCheck that `get_limmag` is the inverse of `get_snr`.
###Code
for exptime, limmag in zip(exptimes, limmags):
print(dorado.sensitivity.get_snr(
synphot.SourceSpectrum(synphot.ConstFlux1D, amplitude=limmag),
exptime=exptime, coord=coord, time=time, night=night))
###Output
/Users/lpsinger/Library/Caches/pypoetry/virtualenvs/dorado-sensitivity-RYVm8gWH-py3.8/lib/python3.8/site-packages/astropy/units/quantity.py:477: RuntimeWarning: invalid value encountered in multiply
result = super().__array_ufunc__(function, method, *arrays, **kwargs)
nan
5.000000000000003
5.000000000000003
4.999999999999996
5.000000000000002
5.000000000000002
5.000000000000003
5.000000000000002
5.000000000000001
4.999999999999989
5.0000000000000036
5.000000000000003
5.0
5.000000000000002
5.0
5.000000000000008
5.000000000000008
4.999999999999997
4.999999999999985
5.000000000000001
5.000000000000003
4.9999999999999964
4.999999999999996
5.000000000000004
4.999999999999993
4.999999999999998
5.000000000000001
4.999999999999989
4.999999999999994
4.999999999999994
4.999999999999989
4.999999999999991
4.999999999999999
5.000000000000005
5.000000000000003
4.999999999999992
5.000000000000006
5.0
5.000000000000007
4.999999999999994
4.9999999999999725
4.999999999999992
4.99999999999999
5.000000000000008
4.999999999999994
5.000000000000003
5.000000000000009
4.999999999999994
4.999999999999999
4.9999999999999964
###Markdown
Example Usage of the Corpus PipelineWe have an input directory which contains .nena formatted texts, `example_texts`, and an output directory, `example_out`. The pipeline class, `CorpusPipeline`, is instantiated on a configuration file, which links to a bunch of definitions needed by the various parsers. All data up to the static search tools is produced with `.build_corpus`. This method requires an in-directory (NENA texts) and an out-directory, which is populated with documentation.md, tf, and search_tool.
###Code
from pipeline.corpus_pipeline import CorpusPipeline
cp = CorpusPipeline('config.json')
cp.build_corpus('example_texts', 'example_out')
###Output
Beginning parsing of NENA formatted texts...
parsing example_texts/A Close Shave.nena...
parsing example_texts/A Cure for a Husband’s Madness.nena...
parsing example_texts/A Donkey Knows Best.nena...
parsing example_texts/A Dragon in the Well.nena...
parsing example_texts/A Dutiful Son.nena...
parsing example_texts/A Frog Wants a Husband.nena...
parsing example_texts/A Hundred Gold Coins.nena...
parsing example_texts/A Lost Donkey.nena...
parsing example_texts/A Lost Ring.nena...
parsing example_texts/A Man Called Čuxo.nena...
parsing example_texts/A Painting of the King of Iran.nena...
parsing example_texts/A Pound of Flesh.nena...
parsing example_texts/A Sweater to Pay Off a Debt.nena...
parsing example_texts/A Tale of Two Kings.nena...
parsing example_texts/A Tale of a Prince and a Princess.nena...
parsing example_texts/A Thousand Dinars.nena...
parsing example_texts/A Visit From Harun Ar-Rashid.nena...
parsing example_texts/Agriculture and Village Life.nena...
parsing example_texts/Am I Dead?.nena...
parsing example_texts/An Orphan Duckling.nena...
parsing example_texts/Axiqar.nena...
parsing example_texts/Baby Leliθa.nena...
parsing example_texts/Bread_and_cheese.nena...
parsing example_texts/Dəmdəma.nena...
parsing example_texts/Events in 1946 on the Urmi Plain.nena...
parsing example_texts/Games.nena...
parsing example_texts/Gozali and Nozali.nena...
parsing example_texts/Hunting.nena...
parsing example_texts/I Am Worth the Same as a Blind Wolf.nena...
parsing example_texts/I Have Died.nena...
parsing example_texts/Ice for Dinner.nena...
parsing example_texts/Is There a Man With No Worries?.nena...
parsing example_texts/Kindness to a Donkey.nena...
parsing example_texts/Lost Money.nena...
parsing example_texts/Man Is Treacherous.nena...
parsing example_texts/Measure for Measure.nena...
parsing example_texts/Mistaken Identity.nena...
parsing example_texts/Much Ado About Nothing.nena...
parsing example_texts/Nanno and Jəndo.nena...
parsing example_texts/Nipuxta.nena...
parsing example_texts/No Bread Today.nena...
parsing example_texts/Problems Lighting a Fire.nena...
parsing example_texts/Qaṭina Rescues His Nephew From Leliθa.nena...
parsing example_texts/Sour Grapes.nena...
parsing example_texts/St. Zayya’s Cake Dough.nena...
parsing example_texts/Star-Crossed Lovers.nena...
parsing example_texts/Stomach Trouble.nena...
parsing example_texts/Tales From the 1001 Nights.nena...
parsing example_texts/The Adventures of Ashur.nena...
parsing example_texts/The Adventures of Two Brothers.nena...
parsing example_texts/The Adventures of a Princess.nena...
parsing example_texts/The Angel of Death.nena...
parsing example_texts/The Assyrians of Armenia.nena...
parsing example_texts/The Assyrians of Urmi.nena...
parsing example_texts/The Bald Child and the Monsters.nena...
parsing example_texts/The Bald Man and the King.nena...
parsing example_texts/The Battle With Yuwanəs the Armenian.nena...
parsing example_texts/The Bear and the Fox.nena...
parsing example_texts/The Bird and the Fox.nena...
parsing example_texts/The Brother of Giants.nena...
parsing example_texts/The Cat and the Mice.nena...
parsing example_texts/The Cat’s Dinner.nena...
parsing example_texts/The Cooking Pot.nena...
parsing example_texts/The Cow and the Poor Girl.nena...
parsing example_texts/The Crafty Hireling.nena...
parsing example_texts/The Crow and the Cheese.nena...
parsing example_texts/The Daughter of the King.nena...
parsing example_texts/The Dead Rise and Return.nena...
parsing example_texts/The Fisherman and the Princess.nena...
parsing example_texts/The Fox and the Lion.nena...
parsing example_texts/The Fox and the Miller.nena...
parsing example_texts/The Fox and the Stork.nena...
parsing example_texts/The Giant One-Eyed Demon.nena...
parsing example_texts/The Giant’s Cave.nena...
parsing example_texts/The Girl and the Seven Brothers.nena...
parsing example_texts/The King With Forty Sons.nena...
parsing example_texts/The Leliθa From č̭āl.nena...
parsing example_texts/The Lion King.nena...
parsing example_texts/The Lion With a Swollen Leg.nena...
parsing example_texts/The Little Prince and the Snake.nena...
parsing example_texts/The Loan of a Cooking Pot.nena...
parsing example_texts/The Man Who Cried Wolf.nena...
parsing example_texts/The Man Who Wanted to Complain to God.nena...
parsing example_texts/The Man Who Wanted to Work.nena...
parsing example_texts/The Monk Who Wanted to Know When He Would Die.nena...
parsing example_texts/The Monk and the Angel.nena...
parsing example_texts/The Old Man and the Fish.nena...
parsing example_texts/The Priest and the Mullah.nena...
parsing example_texts/The Purchase of a Donkey.nena...
parsing example_texts/The Sale of an Ox.nena...
parsing example_texts/The Scorpion and the Snake.nena...
parsing example_texts/The Selfish Neighbour.nena...
parsing example_texts/The Sisisambər Plant.nena...
parsing example_texts/The Snake’s Dilemma.nena...
parsing example_texts/The Story With No End.nena...
parsing example_texts/The Stupid Carpenter.nena...
parsing example_texts/The Tale of Farxo and Səttiya.nena...
parsing example_texts/The Tale of Mămo and Zine.nena...
parsing example_texts/The Tale of Mərza Pămət.nena...
parsing example_texts/The Tale of Nasimo.nena...
parsing example_texts/The Tale of Parizada, Warda and Nargis.nena...
parsing example_texts/The Tale of Rustam (1).nena...
parsing example_texts/The Tale of Rustam (2).nena...
parsing example_texts/The Wife Who Learns How to Work (2).nena...
parsing example_texts/The Wife Who Learns How to Work.nena...
parsing example_texts/The Wife’s Condition.nena...
parsing example_texts/The Wise Brother.nena...
parsing example_texts/The Wise Daughter of the King.nena...
parsing example_texts/The Wise Snake.nena...
parsing example_texts/The Wise Young Daughter.nena...
parsing example_texts/The Wise Young Man.nena...
parsing example_texts/Trickster.nena...
parsing example_texts/Two Birds Fall in Love.nena...
parsing example_texts/Two Wicked Daughters-In-Law.nena...
parsing example_texts/Village Life (2).nena...
parsing example_texts/Village Life (3).nena...
parsing example_texts/Village Life (4).nena...
parsing example_texts/Village Life (5).nena...
parsing example_texts/Village Life (6).nena...
parsing example_texts/Village Life.nena...
parsing example_texts/Vineyards.nena...
parsing example_texts/Weddings and Festivals.nena...
parsing example_texts/Weddings.nena...
parsing example_texts/When Shall I Die?.nena...
parsing example_texts/Women Are Stronger Than Men.nena...
parsing example_texts/Women Do Things Best.nena...
parsing example_texts/Šošət Xere.nena...
DONE parsing all .nena texts!
Indexing new corpus data...
This is Text-Fabric 8.5.12
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
26 features found and 0 ignored
0.00s Importing data from walking through the source ...
| 0.00s Preparing metadata...
| 0.00s No structure nodes will be set up
| SECTION TYPES: dialect, text, line
| SECTION FEATURES: dialect, title, line_number
| STRUCTURE TYPES:
| STRUCTURE FEATURES:
| TEXT FEATURES:
| | text-orig-full text, text_end
| | text-orig-lite lite, lite_end
| | text-trans-full full, full_end
| | text-trans-fuzzy fuzzy, fuzzy_end
| | text-trans-lite lite, lite_end
| 0.01s OK
| 0.00s Following director...
0.00s indexing all dialects / texts...
| 0.00s indexing alquosh, Bread and cheese...
| 0.06s indexing barwar, A Hundred Gold Coins...
| 0.11s indexing barwar, A Man Called Čuxo...
| 0.21s indexing barwar, A Tale of Two Kings...
| 0.28s indexing barwar, A Tale of a Prince and a Princess...
| 0.57s indexing barwar, Baby Leliθa...
| 0.70s indexing barwar, Dəmdəma...
| 0.79s indexing barwar, Gozali and Nozali...
| 1.39s indexing barwar, I Am Worth the Same as a Blind Wolf...
| 1.46s indexing barwar, Man Is Treacherous...
| 1.50s indexing barwar, Measure for Measure...
| 1.52s indexing barwar, Nanno and Jəndo...
| 1.62s indexing barwar, Qaṭina Rescues His Nephew From Leliθa...
| | 0.00s force-closing subsentence in §0.80
| | 0.00s force-closing sentence in §0.80
| | 0.01s force-closing subsentence in §1.57
| | 0.01s force-closing sentence in §1.57
| | 0.02s force-closing subsentence in §2.17
| | 0.02s force-closing sentence in §2.17
| | 0.03s force-closing sentence in §3.68
| | 0.03s force-closing subsentence in §4.38
| | 0.03s force-closing sentence in §4.38
| | 0.04s force-closing subsentence in §5.20
| | 0.04s force-closing sentence in §5.20
| | 0.04s force-closing subsentence in §6.17
| | 0.04s force-closing sentence in §6.17
| | 0.05s force-closing subsentence in §7.13
| | 0.05s force-closing sentence in §7.13
| | 0.05s force-closing subsentence in §8.19
| | 0.05s force-closing sentence in §8.19
| 1.71s indexing barwar, Sour Grapes...
| 1.72s indexing barwar, Tales From the 1001 Nights...
| 2.20s indexing barwar, The Battle With Yuwanəs the Armenian...
| | 0.58s force-closing subsentence in §1.17
| | 0.58s force-closing sentence in §1.17
| | 0.64s force-closing subsentence in §2.513
| | 0.64s force-closing sentence in §2.513
| | 0.65s force-closing subsentence in §3.37
| | 0.65s force-closing sentence in §3.37
| | 0.66s force-closing sentence in §4.127
| | 0.67s force-closing subsentence in §5.10
| | 0.67s force-closing sentence in §5.10
| 2.30s indexing barwar, The Bear and the Fox...
| 2.36s indexing barwar, The Brother of Giants...
| 2.43s indexing barwar, The Cat and the Mice...
| 2.45s indexing barwar, The Cooking Pot...
| 2.49s indexing barwar, The Crafty Hireling...
| 2.70s indexing barwar, The Crow and the Cheese...
| | 1.07s force-closing subsentence in §0.76
| | 1.07s force-closing sentence in §0.76
| 2.71s indexing barwar, The Daughter of the King...
| 2.91s indexing barwar, The Fox and the Lion...
| 2.92s indexing barwar, The Fox and the Miller...
| 3.36s indexing barwar, The Fox and the Stork...
| 3.37s indexing barwar, The Giant’s Cave...
| 3.45s indexing barwar, The Girl and the Seven Brothers...
| | 1.82s force-closing sentence in §0.61
| | 1.83s force-closing subsentence in §1.8
| | 1.83s force-closing sentence in §1.8
| 3.57s indexing barwar, The King With Forty Sons...
| 3.95s indexing barwar, The Leliθa From č̭āl...
| 3.99s indexing barwar, The Lion King...
| 4.00s indexing barwar, The Lion With a Swollen Leg...
| 4.06s indexing barwar, The Man Who Cried Wolf...
| 4.09s indexing barwar, The Man Who Wanted to Work...
| 4.29s indexing barwar, The Monk Who Wanted to Know When He Would Die...
| 4.35s indexing barwar, The Monk and the Angel...
| 4.45s indexing barwar, The Priest and the Mullah...
| 4.51s indexing barwar, The Sale of an Ox...
| 4.70s indexing barwar, The Scorpion and the Snake...
| 4.73s indexing barwar, The Selfish Neighbour...
| 4.76s indexing barwar, The Sisisambər Plant...
| | 3.14s force-closing subsentence in §2.109
| | 3.14s force-closing sentence in §2.109
| | 3.14s force-closing subsentence in §3.17
| | 3.15s force-closing sentence in §3.17
| | 3.18s force-closing subsentence in §4.162
| | 3.18s force-closing sentence in §4.162
| | 3.19s force-closing subsentence in §5.13
| | 3.19s force-closing sentence in §5.13
| 4.84s indexing barwar, The Story With No End...
| 4.89s indexing barwar, The Tale of Farxo and Səttiya...
| 5.33s indexing barwar, The Tale of Mămo and Zine...
| 5.75s indexing barwar, The Tale of Mərza Pămət...
| 5.92s indexing barwar, The Tale of Nasimo...
| | 4.30s force-closing sentence in §0.87
| | 4.30s force-closing subsentence in §1.38
| | 4.30s force-closing sentence in §1.38
| | 4.31s force-closing sentence in §2.21
| 5.99s indexing barwar, The Tale of Parizada, Warda and Nargis...
| 6.29s indexing barwar, The Tale of Rustam (1)...
| 6.45s indexing barwar, The Tale of Rustam (2)...
| 6.76s indexing barwar, The Wise Daughter of the King...
| 6.83s indexing barwar, The Wise Snake...
| | 5.20s force-closing sentence in §0.15
| 6.98s indexing barwar, The Wise Young Man...
| 7.17s indexing barwar, Šošət Xere...
| | 5.56s force-closing subsentence in §0.204
| | 5.56s force-closing sentence in §0.204
| | 5.58s force-closing sentence in §2.42
| 7.24s indexing urmi_c, A Close Shave...
| 7.25s indexing urmi_c, A Cure for a Husband’s Madness...
| 7.46s indexing urmi_c, A Donkey Knows Best...
| 7.48s indexing urmi_c, A Dragon in the Well...
| 7.57s indexing urmi_c, A Dutiful Son...
| 7.73s indexing urmi_c, A Frog Wants a Husband...
| 7.80s indexing urmi_c, A Lost Donkey...
| 7.81s indexing urmi_c, A Lost Ring...
| 7.82s indexing urmi_c, A Painting of the King of Iran...
| 7.94s indexing urmi_c, A Pound of Flesh...
| 8.05s indexing urmi_c, A Sweater to Pay Off a Debt...
| 8.07s indexing urmi_c, A Thousand Dinars...
| | 6.47s foreign letter ŏ̀ encountered...
| 8.15s indexing urmi_c, A Visit From Harun Ar-Rashid...
| 8.22s indexing urmi_c, Agriculture and Village Life...
| 8.63s indexing urmi_c, Am I Dead?...
| 8.66s indexing urmi_c, An Orphan Duckling...
| 8.69s indexing urmi_c, Axiqar...
| 9.12s indexing urmi_c, Events in 1946 on the Urmi Plain...
| 9.21s indexing urmi_c, Games...
| 9.34s indexing urmi_c, Hunting...
| 9.47s indexing urmi_c, I Have Died...
| 9.48s indexing urmi_c, Ice for Dinner...
| 9.50s indexing urmi_c, Is There a Man With No Worries?...
| 9.63s indexing urmi_c, Kindness to a Donkey...
| 9.64s indexing urmi_c, Lost Money...
| 9.64s indexing urmi_c, Mistaken Identity...
| 9.66s indexing urmi_c, Much Ado About Nothing...
| | 8.04s foreign letter ä encountered...
| | 8.05s foreign letter ä̀ encountered...
| | 8.06s foreign letter ä encountered...
| | 8.06s foreign letter ä̀ encountered...
| 9.77s indexing urmi_c, Nipuxta...
| 9.83s indexing urmi_c, No Bread Today...
| 9.87s indexing urmi_c, Problems Lighting a Fire...
| 9.89s indexing urmi_c, St. Zayya’s Cake Dough...
| 9.98s indexing urmi_c, Star-Crossed Lovers...
| 10s indexing urmi_c, Stomach Trouble...
| 10s indexing urmi_c, The Adventures of Ashur...
| | 8.45s foreign letter ǜ encountered...
| 11s indexing urmi_c, The Adventures of Two Brothers...
| 11s indexing urmi_c, The Adventures of a Princess...
| | 9.45s foreign letter ǘ encountered...
| | 9.45s foreign letter ǘ encountered...
| | 9.45s foreign letter ü encountered...
| | 9.45s foreign letter ü encountered...
| | 9.47s foreign letter ǘ encountered...
| | 9.47s foreign letter ǘ encountered...
| | 9.48s foreign letter ü encountered...
| 11s indexing urmi_c, The Angel of Death...
| 11s indexing urmi_c, The Assyrians of Armenia...
| 11s indexing urmi_c, The Assyrians of Urmi...
| 13s indexing urmi_c, The Bald Child and the Monsters...
| 13s indexing urmi_c, The Bald Man and the King...
| 13s indexing urmi_c, The Bird and the Fox...
| 13s indexing urmi_c, The Cat’s Dinner...
| 13s indexing urmi_c, The Cow and the Poor Girl...
| 13s indexing urmi_c, The Dead Rise and Return...
| 13s indexing urmi_c, The Fisherman and the Princess...
| 13s indexing urmi_c, The Giant One-Eyed Demon...
| 14s indexing urmi_c, The Little Prince and the Snake...
| 14s indexing urmi_c, The Loan of a Cooking Pot...
| 14s indexing urmi_c, The Man Who Wanted to Complain to God...
| 14s indexing urmi_c, The Old Man and the Fish...
| 14s indexing urmi_c, The Purchase of a Donkey...
| 14s indexing urmi_c, The Snake’s Dilemma...
| 14s indexing urmi_c, The Stupid Carpenter...
| | 12s foreign letter ã̀ encountered...
| 14s indexing urmi_c, The Wife Who Learns How to Work (2)...
| 14s indexing urmi_c, The Wife Who Learns How to Work...
| 14s indexing urmi_c, The Wife’s Condition...
| 14s indexing urmi_c, The Wise Brother...
| 14s indexing urmi_c, The Wise Young Daughter...
| 14s indexing urmi_c, Trickster...
| 15s indexing urmi_c, Two Birds Fall in Love...
| 15s indexing urmi_c, Two Wicked Daughters-In-Law...
| | 13s foreign letter ü encountered...
| | 13s foreign letter ü encountered...
| 15s indexing urmi_c, Village Life (2)...
| 15s indexing urmi_c, Village Life (3)...
| | 13s foreign letter ý encountered...
| 15s indexing urmi_c, Village Life (4)...
| 15s indexing urmi_c, Village Life (5)...
| 15s indexing urmi_c, Village Life (6)...
| 16s indexing urmi_c, Village Life...
| 16s indexing urmi_c, Vineyards...
| 16s indexing urmi_c, Weddings and Festivals...
| 16s indexing urmi_c, Weddings...
| 16s indexing urmi_c, When Shall I Die?...
| | 15s foreign letter ŏ́ encountered...
| 16s indexing urmi_c, Women Are Stronger Than Men...
| 16s indexing urmi_c, Women Do Things Best...
| 16s "edge" actions: 0
| 16s "feature" actions: 4304221
| 16s "node" actions: 295347
| 16s "resume" actions: 0
| 16s "slot" actions: 541384
| 16s "terminate" actions: 836857
| 3 x "dialect" node
| 36594 x "inton" node
| 541384 x "letter" node = slot type
| 2587 x "line" node
| 351 x "paragraph" node
| 16369 x "sentence" node
| 94101 x "stress" node
| 24617 x "subsentence" node
| 127 x "text" node
| 120598 x "word" node
| 836731 nodes of all types
| 17s OK
| 0.00s checking for nodes and edges ...
| 0.00s OK
| 0.00s checking features ...
| 0.00s OK
| 0.00s reordering nodes ...
| 0.15s Sorting 3 nodes of type "dialect"
| 0.19s Sorting 36594 nodes of type "inton"
| 0.30s Sorting 2587 nodes of type "line"
| 0.34s Sorting 351 nodes of type "paragraph"
| 0.38s Sorting 16369 nodes of type "sentence"
| 0.45s Sorting 94101 nodes of type "stress"
| 0.63s Sorting 24617 nodes of type "subsentence"
| 0.76s Sorting 127 nodes of type "text"
| 0.80s Sorting 120598 nodes of type "word"
| 1.01s Max node = 836731
| 1.01s OK
| 0.00s reassigning feature values ...
| | 16s node feature "dialect" with 130 nodes
| | 16s node feature "full" with 661982 nodes
| | 16s node feature "full_end" with 120586 nodes
| | 17s node feature "fuzzy" with 661982 nodes
| | 17s node feature "fuzzy_end" with 120598 nodes
| | 17s node feature "lang" with 120598 nodes
| | 17s node feature "line_number" with 2587 nodes
| | 17s node feature "lite" with 661982 nodes
| | 17s node feature "lite_end" with 120586 nodes
| | 17s node feature "phonation" with 307888 nodes
| | 17s node feature "phonetic_class" with 541366 nodes
| | 17s node feature "phonetic_manner" with 312113 nodes
| | 17s node feature "phonetic_place" with 312113 nodes
| | 17s node feature "place" with 127 nodes
| | 17s node feature "speaker" with 120598 nodes
| | 17s node feature "speakers" with 126 nodes
| | 17s node feature "text" with 661982 nodes
| | 18s node feature "text_end" with 120598 nodes
| | 18s node feature "text_id" with 126 nodes
| | 18s node feature "text_nostress" with 661982 nodes
| | 18s node feature "text_nostress_end" with 120586 nodes
| | 18s node feature "timestamp" with 447 nodes
| | 18s node feature "title" with 127 nodes
| 1.90s OK
0.00s Exporting 24 node and 1 edge and 1 config features to example_out/tf:
0.00s VALIDATING oslots feature
0.10s VALIDATING oslots feature
0.10s maxSlot= 541384
0.10s maxNode= 836731
0.15s OK: oslots is valid
| 0.00s T dialect to example_out/tf
| 0.69s T full to example_out/tf
| 0.12s T full_end to example_out/tf
| 0.64s T fuzzy to example_out/tf
| 0.12s T fuzzy_end to example_out/tf
| 0.11s T lang to example_out/tf
| 0.00s T line_number to example_out/tf
| 0.63s T lite to example_out/tf
| 0.16s T lite_end to example_out/tf
| 0.20s T otype to example_out/tf
| 0.39s T phonation to example_out/tf
| 0.55s T phonetic_class to example_out/tf
| 0.32s T phonetic_manner to example_out/tf
| 0.33s T phonetic_place to example_out/tf
| 0.00s T place to example_out/tf
| 0.13s T speaker to example_out/tf
| 0.00s T speakers to example_out/tf
| 0.68s T text to example_out/tf
| 0.13s T text_end to example_out/tf
| 0.00s T text_id to example_out/tf
| 0.74s T text_nostress to example_out/tf
| 0.15s T text_nostress_end to example_out/tf
| 0.01s T timestamp to example_out/tf
| 0.00s T title to example_out/tf
| 1.12s T oslots to example_out/tf
| 0.00s M otext to example_out/tf
7.43s Exported 24 node features and 1 edge features and 1 config features to example_out/tf
SUCCESS! TF corpus built.
Loading TF data and building documentation...
This is Text-Fabric 8.5.12
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
26 features found and 0 ignored
0.00s loading features ...
| 0.29s T otype from example_out/tf
| 4.55s T oslots from example_out/tf
| 0.00s Dataset without structure sections in otext:no structure functions in the T-API
| 1.35s T text from example_out/tf
| 0.22s T text_end from example_out/tf
| 0.00s T dialect from example_out/tf
| 0.01s T line_number from example_out/tf
| 1.18s T lite from example_out/tf
| 1.17s T fuzzy from example_out/tf
| 1.25s T full from example_out/tf
| 0.21s T full_end from example_out/tf
| 0.22s T fuzzy_end from example_out/tf
| 0.22s T lite_end from example_out/tf
| 0.00s T title from example_out/tf
| | 0.20s C __levels__ from otype, oslots, otext
| | 10s C __order__ from otype, oslots, __levels__
| | 0.40s C __rank__ from otype, __order__
| | 7.86s C __levUp__ from otype, oslots, __rank__
| | 2.32s C __levDown__ from otype, __levUp__, __rank__
| | 2.49s C __boundary__ from otype, oslots, __rank__
| | 0.01s C __sections__ from otype, oslots, otext, __levUp__, __levels__, dialect, title, line_number
34s All features loaded/computed - for details use loadLog()
0.00s loading features ...
| 0.22s T lang from example_out/tf
| 0.73s T phonation from example_out/tf
| 0.97s T phonetic_class from example_out/tf
| 0.74s T phonetic_manner from example_out/tf
| 0.74s T phonetic_place from example_out/tf
| 0.00s T place from example_out/tf
| 0.23s T speaker from example_out/tf
| 0.00s T speakers from example_out/tf
| 0.00s T text_id from example_out/tf
| 1.24s T text_nostress from example_out/tf
| 0.22s T text_nostress_end from example_out/tf
| 0.00s T timestamp from example_out/tf
5.10s All additional features loaded - for details use loadLog()
done!
Building search tool...
This is Text-Fabric 8.5.12
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
26 features found and 0 ignored
0.00s loading features ...
| 0.00s Dataset without structure sections in otext:no structure functions in the T-API
1.73s All features loaded/computed - for details use loadLog()
0.00s loading features ...
0.28s All additional features loaded - for details use loadLog()
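###Markdown
A minimal sketch of inspecting the freshly built corpus directly with Text-Fabric (already a dependency of the pipeline); the feature and node-type names below come from the build log above:
###Code
# Sketch: load a few features from the exported TF dataset and count nodes per type.
from tf.fabric import Fabric

TF = Fabric(locations='example_out/tf')
api = TF.load('dialect title speaker text')
F = api.F

for otype in F.otype.all:
    print(otype, len(list(F.otype.s(otype))))
###Output
_____no_output_____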
###Markdown
Check sites
###Code
def part_gradient(part_id):
part_slice = meta[meta.ID == part_id].sort_values(by='WAVE')
time_span = 3*(part_slice.WAVE.iloc[-1] - part_slice.WAVE.iloc[0])
first_index = meta[meta.ID == part_id].sort_values(by='WAVE')['index'].iloc[0]
last_index = meta[meta.ID == part_id].sort_values(by='WAVE')['index'].iloc[-1]
gradient = (data[last_index] - data[first_index]) / time_span
return gradient
def check_sites(part_id , top_sites=True, bottom=True, n_std=4):
# Extract participant data
part_slice = meta[meta.ID == part_id].sort_values(by='WAVE')
part_data = data[part_slice['index'],]
# Compute evolution of methylation between first and last timepoint
meth_evolution = part_gradient(part_id)
mean = np.nanmean(meth_evolution)
std = np.nanstd(meth_evolution)
# Find locations with large gradients
top_sites = np.where(meth_evolution > mean + n_std*std)[0]
bottom_sites = np.where(meth_evolution < mean - n_std*std)[0]
fig, (ax1, ax2) = plt.subplots(2, 1)
# Plot top sites
for site in top_sites:
ax1.plot(part_slice['WAVE'], part_data[:,site])
# Plot bottom sites
for site in bottom_sites:
ax2.plot(part_slice['WAVE'], part_data[:,site])
return fig, (top_sites, bottom_sites)
# Compute evolution of methylation between first and last timepoint
meth_evolution = part_gradient('LBC0001A')
mean = np.nanmean(meth_evolution)
std = np.nanstd(meth_evolution)
print(f'Mean: {mean} -- 2 Std: {2*std}')
box = sns.boxplot(x=meth_evolution)
# Extract and plot top and bottom sites
fig, sites = check_sites('LBC0001A')
# Extract and plot top and bottom sites
fig_2, sites_2 = check_sites('LBC0251K')
###Output
_____no_output_____
###Markdown
Predicting the presence of mutations based on the longitudinal evolution of mutations Preparing a dataset
###Code
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from keras.datasets import mnist
#loading dataset
(train_X, train_y), (val_X, val_y) = mnist.load_data()
#normalizing the dataset
train_X, val_X = train_X/255, val_X/255
# visualizing 9 random digits from the dataset
for i in range(331,340):
plt.subplot(i)
a = np.random.randint(0, train_X.shape[0], 1)
plt.imshow(train_X[a[0]], cmap = plt.get_cmap('binary'))
plt.tight_layout()
plt.show()
train_X.shape
###Output
_____no_output_____
###Markdown
Example from [scikit-image plot-label](https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_label.html)- define data
###Code
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from skimage import data
from skimage.filters import threshold_otsu
from skimage.segmentation import clear_border
from skimage.measure import label, regionprops
from skimage.morphology import closing, square
from skimage.color import label2rgb
from skimage.transform import resize
image = data.coins()[50:-50, 50:-50]
image = resize(image, (256, 256))
# apply threshold
thresh = threshold_otsu(image)
bw = closing(image > thresh, square(3))
# remove artifacts connected to image border
cleared = clear_border(bw)
###Output
_____no_output_____
###Markdown
Running scikit-image
###Code
# label image regions
label_image = label(cleared.copy())
# to make the background transparent, pass the value of `bg_label`,
# and leave `bg_color` as `None` and `kind` as `overlay`
image_label_overlay = label2rgb(label_image, image=image, bg_label=0)
###Output
_____no_output_____
###Markdown
Running cc_torch
###Code
import torch
from cc_torch import connected_components_labeling
cleared_torch = torch.from_numpy(cleared.copy()).to("cuda", torch.uint8)
cc_out = connected_components_labeling(cleared_torch)
cc_out = cc_out.cpu().numpy()
cc_image_overlay = label2rgb(cc_out, image=image, bg_label=0)
###Output
_____no_output_____
###Markdown
Plot
###Code
fig, axes = plt.subplots(1, 2, figsize=(10, 6))
def show_ax(ax, title, image, label):
ax.set_title(title)
ax.imshow(image)
for region in regionprops(label):
# take regions with large enough areas
if region.area >= 100:
# draw rectangle around segmented coins
minr, minc, maxr, maxc = region.bbox
rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
fill=False, edgecolor='red', linewidth=2)
ax.add_patch(rect)
ax.set_axis_off()
show_ax(axes[0], "scikit-image", image_label_overlay, label_image)
show_ax(axes[1], "cc_torch", cc_image_overlay, cc_out)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
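###Markdown
A quick sanity check (a sketch reusing the arrays computed above): both labelings should find the same number of foreground components, even though the label ids themselves differ between the two implementations.
###Code
import numpy as np

# scikit-image assigns consecutive labels 1..N, so the maximum equals the component count;
# cc_torch labels are not consecutive, so count the distinct non-zero values instead.
n_skimage = int(label_image.max())
n_cctorch = int((np.unique(cc_out) != 0).sum())
print(n_skimage, n_cctorch)
###Output
_____no_output_____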
###Markdown
Small scale example
###Code
# Assumed setup for this notebook: numpy, TensorFlow 1.x (graph mode + sessions),
# and the tf_einsum_opt package demonstrated below.
import numpy as np
import tensorflow as tf
import tf_einsum_opt

sess = tf.Session()

def func(a, b, c):
res = tf.einsum('ijk,ja,kb->iab', a, b, c) + 1
res = tf.einsum('iab,kb->iak', res, c)
return res
a = tf.random_normal((10, 11, 12))
b = tf.random_normal((11, 13))
c = tf.random_normal((12, 14))
# res = func(a, b, c)
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c)
res1 = func(a, b, c)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c)
%timeit sess.run(res2)
# Check that the results of optimized and the original function are the same.
np.testing.assert_allclose(*sess.run([res1, res2]), rtol=1e-5, atol=1e-5)
###Output
_____no_output_____
###Markdown
Example with more savings, but slower to optimize
###Code
def func(a, b, c, d):
res = tf.einsum('si,sj,sk,ij->s', a, b, d, c)
res += tf.einsum('s,si->s', res, a)
return res
a = tf.random_normal((100, 101))
b = tf.random_normal((100, 102))
c = tf.random_normal((101, 102))
d = tf.random_normal((100, 30))
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c, d)
res1 = func(a, b, c, d)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c, d)
%timeit sess.run(res2)
###Output
The slowest run took 28.74 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 767 µs per loop
###Markdown
Look at the recommendations:
###Code
orders
###Output
_____no_output_____
###Markdown
Example notebook
###Code
import dask
print(dask.__version__)
dask.config.config
###Output
_____no_output_____
###Markdown
Imports and Data Loading Import pandas for data manipulation, plotly for plotting, and molplotly for visualising structures!
###Code
import pandas as pd
import plotly.express as px
import molplotly
###Output
_____no_output_____
###Markdown
Let's load the ESOL dataset from [ESOL: Estimating Aqueous Solubility Directly from Molecular Structure](https://doi.org/10.1021/ci034243x) - helpfully hosted by the [deepchem](https://github.com/deepchem/deepchem) team but also included as `example.csv` in the repo.
###Code
# df_esol = pd.read_csv('example.csv')
df_esol = pd.read_csv(
'https://raw.githubusercontent.com/deepchem/deepchem/master/datasets/delaney-processed.csv')
df_esol['y_pred'] = df_esol['ESOL predicted log solubility in mols per litre']
df_esol['y_true'] = df_esol['measured log solubility in mols per litre']
###Output
_____no_output_____
###Markdown
Simple Examples Let's make a scatter plot comparing the measured vs predicted solubilities using [`plotly`](https://plotly.com/python/)
###Code
df_esol['delY'] = df_esol["y_pred"] - df_esol["y_true"]
fig_scatter = px.scatter(df_esol,
x="y_true",
y="y_pred",
color='delY',
title='ESOL Regression (default plotly)',
labels={'y_pred': 'Predicted Solubility',
'y_true': 'Measured Solubility',
'delY': 'ΔY'},
width=1200,
height=800)
# This adds a dashed line for what a perfect model _should_ predict
y = df_esol["y_true"].values
fig_scatter.add_shape(
type="line", line=dict(dash='dash'),
x0=y.min(), y0=y.min(),
x1=y.max(), y1=y.max()
)
fig_scatter.show()
###Output
_____no_output_____
###Markdown
now all we have to do is `add_molecules`!
###Code
fig_scatter.update_layout(title='ESOL Regression (with add_molecules!)')
app_scatter = molplotly.add_molecules(fig=fig_scatter,
df=df_esol,
smiles_col='smiles',
title_col='Compound ID'
)
# change the arguments here to run the dash app on an external server and/or change the size of the app!
app_scatter.run_server(mode='inline', port=8001, height=1000)
###Output
_____no_output_____
###Markdown
Cool right? Let's explore some more options:Apart from showing the $(x,y)$ coordinates (you can turn them off using `show_coords=False`), we can add extra values to show up in the mouse tooltip by specifying `caption_cols` - the values in these columns of `df_esol` are also shown in the hover box.We can also apply some function transformations to the captions via `caption_transform` - in this example, rounding all our numbers to 2 decimal places.
###Code
fig_scatter.update_layout(
title='ESOL Regression (with add_molecules & extra captions)')
app_scatter_with_captions = molplotly.add_molecules(fig=fig_scatter,
df=df_esol,
smiles_col='smiles',
title_col='Compound ID',
caption_cols=['Molecular Weight', 'Number of Rings'],
caption_transform={'Predicted Solubility': lambda x: f"{x:.2f}",
'Measured Solubility': lambda x: f"{x:.2f}",
'Molecular Weight': lambda x: f"{x:.2f}"
},
show_coords=True)
app_scatter_with_captions.run_server(mode='inline', port=8002, height=1000)
###Output
_____no_output_____
###Markdown
What about adding colors? Here I've made an arbitrary random split of the dataset into `train` and `test`. When plotting, this leads to two separate plotly "curves" so the condition determining the color of the points needs to be passed in to the `add_molecules` function in order for the correct SMILES to be selected for visualisation - this is done via `color_col`. Notice that the `title` for the molecules in the hover box have the same color as the data point! For fun I also used the `size` argument in the scatter plot to change the size of the markers in proportion to the molecular weight.(notice I've been choosing different `port` numbers in all my plots, this is so that they don't interfere with each other!)
###Code
from sklearn.model_selection import train_test_split
train_inds, test_inds = train_test_split(df_esol.index)
df_esol['dataset'] = [
'Train' if x in train_inds else 'Test' for x in df_esol.index]
fig_train_test = px.scatter(df_esol,
x="y_true",
y="y_pred",
size='Molecular Weight',
color='dataset',
title='ESOL Regression (colored by random train/test split)',
labels={'y_pred': 'Predicted Solubility',
'y_true': 'Measured Solubility'},
width=1200,
height=800)
# fig.show()
app_train_test = molplotly.add_molecules(fig=fig_train_test,
df=df_esol,
smiles_col='smiles',
title_col='Compound ID',
color_col='dataset')
app_train_test.run_server(mode='inline', port=8003, height=1000)
###Output
_____no_output_____
###Markdown
More complex examplesLet's go beyond scatter plots and explore a few other graphs that might be relevant for cheminformatics, hopefully letting you see how `molplotly` could be useful for you when looking through (messy) data! Strip plotsStrip plots are useful for visualising how the same property is distributed between data from different groups. Here I plot how the measured solubility changes with the number of rings on a molecule (it goes down, surprising I know).Violin plots can also be useful for this purpose, but they're not compatible with `molplotly` (see section ["violin plots"](violin))
###Code
fig_strip = px.strip(df_esol.sort_values('Number of Rings'), # sorting so that the colorbar is sorted!
x='Number of Rings',
y='y_true',
color='Number of Rings',
labels={'y_true': 'Measured Solubility'},
width=1000,
height=800)
app_strip = molplotly.add_molecules(fig=fig_strip,
df=df_esol,
smiles_col='smiles',
title_col='Compound ID',
color_col='Number of Rings',
caption_transform={'Measured Solubility': lambda x: f"{x:.2f}"},
wrap=True,
wraplen=25,
width=150,
show_coords=True)
app_strip.run_server(mode='inline', port=8004, height=850)
###Output
_____no_output_____
###Markdown
Scatter MatricesFor visualising the relationship between multiple variables at once, use a matrix of scatter plots!Here I've increased the width of the hover box using the `width` parameter because the caption titles were getting long; also I've used `show_coords=False` because $(x, y)$ coordinates for non-trivial scatter plots become messy.
###Code
features = ['Number of H-Bond Donors',
'Number of Rings',
'Number of Rotatable Bonds',
'Polar Surface Area']
fig_matrix = px.scatter_matrix(df_esol,
dimensions=features,
width=1200,
height=800,
title='Scatter matrix of molecular properties')
app_matrix = molplotly.add_molecules(fig=fig_matrix,
df=df_esol,
smiles_col='smiles',
title_col='Compound ID',
caption_cols=features,
width=200,
show_coords=False)
# Only show informative lower triangle
fig_matrix.update_traces(diagonal_visible=False, showupperhalf=False)
app_matrix.run_server(mode='inline', port=8005, height=1000)
###Output
_____no_output_____
###Markdown
Visualising MorganFP PCA componentsA common way to visualise a molecular dataset is to calculate the Morgan fingerprints of the molecules and visualise them in a 2D embedding (e.g. PCA/t-SNE). In this example I'm going to plot the 2 largest PCA components for ESOL and inspect the data. Let's calculate the PCA components first!
###Code
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs
from sklearn.decomposition import PCA
def smi_to_fp(smi):
fp = AllChem.GetMorganFingerprintAsBitVect(
Chem.MolFromSmiles(smi), 2, nBits=1024)
arr = np.zeros((0,), dtype=np.int8)
DataStructs.ConvertToNumpyArray(fp, arr)
return arr
esol_fps = np.array([smi_to_fp(smi) for smi in df_esol['smiles']])
pca = PCA(n_components=2)
components = pca.fit_transform(esol_fps.reshape(-1, 1024))
df_esol['PCA-1'] = components[:, 0]
df_esol['PCA-2'] = components[:, 1]
###Output
_____no_output_____
###Markdown
And now let's look at them! With `molplotly`, it's super easy to see which molecules are where - steroid molecules at the top, alcohols in the bottom left, chlorinated aromatic compounds in the bottom right.
###Code
fig_pca = px.scatter(df_esol,
x="PCA-1",
y="PCA-2",
color='y_true',
title='ESOL PCA of morgan fingerprints',
labels={'y_true': 'Measured Solubility'},
width=1200,
height=800)
app_pca = molplotly.add_molecules(fig=fig_pca,
df=df_esol.rename(columns={'y_true': 'Measured Solubility'}),
smiles_col='smiles',
title_col='Compound ID',
caption_cols=['Measured Solubility'],
caption_transform={'Measured Solubility': lambda x: f"{x:.2f}"},
color_col='Measured Solubility',
show_coords=False)
app_pca.run_server(mode='inline', port=8006, height=850)
###Output
_____no_output_____
###Markdown
ClusteringLet's do some clustering of the ESOL molecules, borrowing useful functions from Pat Walters' excellent blog post on [clustering](http://practicalcheminformatics.blogspot.com/2021/11/picking-highest-scoring-molecules-from.html).
###Code
from rdkit.ML.Cluster import Butina
def smi2fp(smi):
fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2)
return fp
def taylor_butina_clustering(fp_list, cutoff=0.35):
dists = []
nfps = len(fp_list)
for i in range(1, nfps):
sims = DataStructs.BulkTanimotoSimilarity(fp_list[i], fp_list[:i])
dists.extend([1-x for x in sims])
mol_clusters = Butina.ClusterData(dists, nfps, cutoff, isDistData=True)
return mol_clusters
cluster_res = taylor_butina_clustering(
[smi2fp(smi) for smi in df_esol['smiles']])
cluster_id_list = np.zeros(len(df_esol), dtype=int)
for cluster_num, cluster in enumerate(cluster_res):
for member in cluster:
cluster_id_list[member] = cluster_num
df_esol['cluster'] = cluster_id_list
###Output
_____no_output_____
###Markdown
Now let's make a strip plot of the top-10 clusters, to see what they look like and how soluble they are!
###Code
df_cluster = df_esol.query('cluster < 10').copy().reset_index()
# sorting is needed to make the legend appear in order!
df_cluster = df_cluster.sort_values('cluster')
fig_cluster = px.strip(df_cluster,
y='y_true',
color='cluster',
labels={'y_true': 'Measured Solubility'},
width=1000,
height=800)
app_cluster = molplotly.add_molecules(fig=fig_cluster,
df=df_cluster,
smiles_col='smiles',
title_col='Compound ID',
color_col='cluster'
)
app_cluster.run_server(mode='inline', port=8007, height=850)
###Output
_____no_output_____
###Markdown
Incompatible `plotly` functionality with molplotly`Plotly` is a graphing library that does far more than just scatter plots - it has lots of cool functionalities that unfortunately clash with how `molplotly` implements the hover box (for now at least). Here are some examples of known incompatibilities, which are still very useful data visualisations in vanilla `plotly`! Marginals on scatter plots I like having marginals on the sides by default because the data density in a dataset can often vary a lot. Anything to do with histogram/violin plots doesn't work yet with `molplotly`.
###Code
fig_marginal = px.scatter(df_esol,
x="y_true",
y="y_pred",
title='ESOL Regression (with histogram marginals)',
labels={'y_pred': 'Predicted Solubility',
'y_true': 'Measured Solubility'},
marginal_x='violin',
marginal_y='histogram',
width=1200,
height=800)
fig_marginal.show()
###Output
_____no_output_____
###Markdown
Violin plotsThe aesthetic of violin plots is nice, especially when there are a lot of datapoints, but if there's not much data (often the case in drug discovery!) then those nice smooth KDE curves can be misleading, so I usually prefer strip plots. `plotly` has cool mouseover data on violin plots which is incompatible with `molplotly`, but at least when there's enough data that I'd prefer a violin plot, it's probably too memory-consuming to run a strip plot with `molplotly` anyway!
###Code
fig_violin = px.violin(df_esol,
y="y_true",
title='ESOL violin plot of measured solubility',
labels={'y_true': 'Measured Solubility'},
box=True,
points='all',
width=1200,
height=800)
fig_violin.show()
###Output
_____no_output_____
###Markdown
Augmentation example
###Code
import cv2
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
from compose import Compose
from affine_transform.rotate import RandomRotate
from affine_transform.translate import RandomTranslate
from affine_transform.shear import RandomXShear, RandomYShear
from affine_transform.scale import RandomScale
from affine_transform.flip import RandomHorizontalFlip, RandomVerticalFlip
from visual_effect.histogram_equalize import CLAHE
from visual_effect.adjust_brightness import RandomAdjustBrightness
from visual_effect.adjust_hue import RandomAdjustHue
from visual_effect.adjust_saturation import RandomAdjustSaturation
def visualize(image, target):
fig = plt.figure()
ax = plt.axes()
ax.imshow(image.transpose(1,2,0))
bboxes = target['boxes']
for bbox in bboxes:
r = patches.Rectangle(xy=(bbox[0], bbox[1]), width=bbox[2] - bbox[0], height=bbox[3] - bbox[1], color='lightgreen', fill=False)
ax.add_patch(r)
plt.show()
def sample_dataset(image_path, transform=None):
origin_image = cv2.imread(image_path)[:,:,::-1]
image = origin_image.transpose(2,0,1)
target = {}
target["boxes"] = np.array([[230, 220, 350, 390], [0, 0, 50, 50], [462, 462, 512, 512]], np.float32)
target["labels"] = np.array([1, 0, 0], np.float32)
target["image_id"] = np.array([1], np.float32)
if transform is not None:
image, target = transform(image, target)
return image, target
image_path = './lena_color.tiff'
transforms = Compose(
[RandomRotate(-10, 10),
RandomTranslate((50, 50)),
RandomXShear(-10, 10),
RandomYShear(-10, 10),
RandomScale(0.9, 1.1),
RandomHorizontalFlip(0.5),
RandomVerticalFlip(0.5),
CLAHE(clip_limit=1.0),
RandomAdjustBrightness(0.4, 1.0),
RandomAdjustHue(-20, 20),
RandomAdjustSaturation(0.95, 1.05)
])
image, target = sample_dataset(image_path, transforms)
visualize(image, target)
###Output
_____no_output_____
###Markdown
Simple Tar Dataset - examplesThis notebook will go through a few common use cases. All the needed Tar files are very minimal and included with the library. Just load the imagesThe default `TarDataset` simply loads all PNG, JPG and JPEG images from a Tar file, and allows you to iterate them.Images are returned as `Tensor`. Here some RGB values are printed.
###Code
from tardataset import TarDataset
dataset = TarDataset('example-data/colors.tar')
for (idx, image) in enumerate(dataset):
print(f"Image #{idx}, color: {image[:,0,0]}")
###Output
Image #0, color: tensor([0., 0., 1.])
Image #1, color: tensor([0., 1., 0.])
Image #2, color: tensor([1., 0., 0.])
###Markdown
Folders as class labels (like torchvision's ImageFolder)Similarly to [`ImageFolder`](https://pytorch.org/vision/stable/datasets.htmlimagefolder), `TarImageFolder` assumes that each top-level folder contains all samples of a different class.In this example, the Tar archive has this structure:- `red/a.png`- `green/b.png`- `blue/c.png`
###Code
from tarimagefolder import TarImageFolder
dataset = TarImageFolder('example-data/colors.tar')
for (idx, (image, label)) in enumerate(dataset):
print(f"Image #{idx}, label: {label} "
f"({dataset.idx_to_class[label]}), color: {image[:,0,0]}")
###Output
Image #0, label: 0 (blue), color: tensor([0., 0., 1.])
Image #1, label: 1 (green), color: tensor([0., 1., 0.])
Image #2, label: 2 (red), color: tensor([1., 0., 0.])
###Markdown
Use a DataLoader (multiple processes) and return a mini-batchUsing a `DataLoader` is the same as with a standard `Dataset`. The library supports various multiprocessing configurations without extra code.
###Code
from torch.utils.data import DataLoader
if __name__ == '__main__': # needed for dataloaders
dataset = TarImageFolder('example-data/colors.tar')
loader = DataLoader(dataset, batch_size=3, num_workers=2, shuffle=True)
for (image, label) in loader:
print(f"Dimensions of image batch: {image.shape}")
print(f"Labels in batch: {label}")
###Output
Dimensions of image batch: torch.Size([3, 3, 8, 8])
Labels in batch: tensor([2, 1, 0])
###Markdown
Load videos as stacks of frames (custom Tar structures)To have more control over how files in the Tar archive are related to iterated samples, you can subclass `TarDataset`.Here we consider each folder starting with `'vid'` as a sample, load 3 sequentially-named frames from it, and return the stacked frames.
###Code
import torch
class VideoDataset(TarDataset):
"""Example video dataset, each folder has the frames of a video"""
def __init__(self, archive):
super().__init__(archive=archive,
is_valid_file=lambda m: m.isdir() and m.name.startswith('vid'))
def __getitem__(self, index):
"""Load and return a stack of 3 frames from this folder"""
folder = self.samples[index]
images = [self.get_image(f"{folder}/{frame:02}.png")
for frame in range(3)]
return torch.stack(images)
dataset = VideoDataset('example-data/videos.tar')
for (idx, video) in enumerate(dataset):
print(f"Video #{idx}, stack of frames with dims: {video.shape}")
###Output
Video #0, stack of frames with dims: torch.Size([3, 3, 8, 8])
Video #1, stack of frames with dims: torch.Size([3, 3, 8, 8])
###Markdown
Load non-image files, such as pickled Python objectsYou can choose the loaded file types with `extensions` (or the more advanced `is_valid_file`, as above).You can also use `get_file` to load arbitrary files as data streams, completely in-memory (without writing them to disk). You can plug this in to Pickle or JSON modules.
###Code
import pickle
class PickleDataset(TarDataset):
"""Example non-image dataset"""
def __init__(self, archive):
super().__init__(archive=archive, extensions=('.pickle'))
def __getitem__(self, index):
"""Return a pickled Python object"""
filename = self.samples[index]
return pickle.load(self.get_file(filename))
dataset = PickleDataset('example-data/objects.tar')
for (idx, obj) in enumerate(dataset):
print(f"Sample #{idx}, object: {obj}")
###Output
Sample #0, object: {'id': 0, 'content': 'one sample'}
Sample #1, object: {'id': 1, 'content': 'another sample'}
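###Markdown
The JSON case mentioned above works the same way. A sketch of an analogous dataset class (note: no JSON example archive ships with the library, so the usage line is hypothetical):
###Code
import json

class JsonDataset(TarDataset):
    """Sketch: same pattern as PickleDataset, but parsing JSON files"""
    def __init__(self, archive):
        super().__init__(archive=archive, extensions=('.json',))

    def __getitem__(self, index):
        """Read the file as text from the archive and parse it as JSON"""
        filename = self.samples[index]
        return json.loads(self.get_text_file(filename))

# Hypothetical usage, given a Tar archive of .json files:
# dataset = JsonDataset('example-data/objects-json.tar')
###Output
_____no_output_____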
###Markdown
Load custom meta-data files (e.g. ground truth information)Often datasets come with various pieces of information in different files. You can easily read a text file from the Tar archive into a string with `get_text_file`, either at initialisation or during iteration. For more general binary files, use `get_file` as above.In this example we read a text file from the archive, which contains the file name of each image and its label `'red'` or `'not-red'` (one per line). When the dataset is iterated, `__getitem__` then returns the image and this custom label as a boolean.
###Code
class RedDataset(TarDataset):
"""Example dataset, which loads from a text file a binary label of
whether each image is red or not."""
def __init__(self, archive):
super().__init__(archive=archive)
self.image_is_red = {}
for line in self.get_text_file('custom-data.txt').splitlines():
(name, redness) = line.split(',')
self.image_is_red[name] = (redness == 'red')
def __getitem__(self, index):
"""Return the image and the binary label"""
filename = self.samples[index]
image = self.get_image(filename)
is_red = self.image_is_red[filename]
return (image, is_red)
dataset = RedDataset('example-data/colors.tar')
for (idx, (image, label)) in enumerate(dataset):
print(f"Image #{idx}, redness: {label}, color: {image[:,0,0]}")
###Output
Image #0, redness: False, color: tensor([0., 0., 1.])
Image #1, redness: False, color: tensor([0., 1., 0.])
Image #2, redness: True, color: tensor([1., 0., 0.])
###Markdown
Step 1: Simply define your PyTorch model like usual, and create an instance of it.
###Code
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6,3, 5)
#self.fc1 = nn.Linear(16*5*5, 120)
#self.fc2 = nn.Linear(120, 84)
#self.fc3 = nn.Linear(84, 10)
def forward(self, x):
out = F.relu(self.conv1(x))
out = F.max_pool2d(out, 2)
out = F.relu(self.conv2(out))
out = F.max_pool2d(out, 2)
#out = out.view(out.size(0), -1)
#out = F.relu(self.fc1(out))
#out = F.relu(self.fc2(out))
#out = self.fc3(out)
return out
pytorch_network = LeNet()
###Output
_____no_output_____
###Markdown
Step 2: Determine the names of the layers. For the above model example this is very straightforward, but if you use param groups it may be a little more involved. The following commands are useful for determining the layer names:
###Code
# The most useful, just print the network
print(pytorch_network)
# Also useful: will only print those layers with params
# (assumes `util` from the nn-transfer package is available, e.g. `from nn_transfer import transfer, util`)
state_dict = pytorch_network.state_dict()
print(util.state_dict_layer_names(state_dict))
###Output
LeNet(
(conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 3, kernel_size=(5, 5), stride=(1, 1))
)
['conv1', 'conv2']
###Markdown
Step 3: Define an equivalent Keras network. Use the built-in `name` keyword argument for each layer with params.
###Code
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import numpy as np  # used below for transposing the PyTorch weights
#K.set_image_data_format('channels_first')
def lenet_keras():
model = Sequential()
model.add(Conv2D(6, kernel_size=(5, 5),
activation='relu',
input_shape=(32,32,1),
name='conv1'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(3, (5, 5), activation='relu', name='conv2'))
model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Flatten())
#model.add(Dense(120, activation='relu', name='fc1'))
#model.add(Dense(84, activation='relu', name='fc2'))
#model.add(Dense(10, activation=None, name='fc3'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta())
return model
keras_network = lenet_keras()
# Inspect the Keras layers and the PyTorch state_dict before copying weights
keras_network.layers[-3].get_weights()
keras_network.layers
state_dict.keys()
# Manually copy conv2: PyTorch stores conv weights as (out, in, kH, kW),
# Keras expects (kH, kW, in, out), so transpose before set_weights
w = np.transpose(state_dict['conv2.weight'].numpy(), [2, 3, 1, 0])
b = state_dict['conv2.bias'].numpy()
w.shape
keras_network.layers[2].set_weights([w, b])
# The fully-connected layers are commented out in this LeNet, so 'fc1.weight'
# is not in the state_dict; accessing it would raise a KeyError
#state_dict['fc1.weight'].shape
###Output
_____no_output_____
###Markdown
Step 4: Now simply convert!
###Code
#transfer.keras_to_pytorch(keras_network, pytorch_network)
###Output
_____no_output_____
###Markdown
Done! Now let's check whether it was successful. If it was, both networks should have the same output.
###Code
import matplotlib.pyplot as plt  # needed for the plots below

# Create dummy data
data = torch.rand(6, 1, 32, 32)
datat = data.reshape(6, 32, 32, 1)  # channels-last view for Keras (valid here since there is a single channel)
data_keras = datat.numpy()
data_pytorch = Variable(data, requires_grad=False)
# Do a forward pass in both frameworks
keras_pred = keras_network.predict(data_keras)
pytorch_pred = pytorch_network(data_pytorch).data.numpy()
#assert keras_pred.shape == pytorch_pred.shape
plt.axis('Off')
plt.imshow(keras_pred[0,])
plt.show()
plt.axis('Off')
plt.imshow(pytorch_pred[0].transpose(1, 2, 0))  # move channels last so imshow accepts the array
plt.show()
plt.imshow(data[0,0,:])
plt.imshow(datat[0,:,:,0])
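# Numeric comparison (a sketch added for illustration, not in the original notebook):
# only conv2 was copied above, so the outputs are not expected to match yet;
# this difference should approach zero once every layer's weights are transferred.
# The PyTorch output is channels-first and the Keras output channels-last, hence the transpose.
max_abs_diff = np.abs(keras_pred - pytorch_pred.transpose(0, 2, 3, 1)).max()
print("max abs difference between Keras and PyTorch outputs:", max_abs_diff)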
###Output
_____no_output_____
###Markdown
Receparser: an electronic medical receipt (rezept) parser library. Receparser is a Python parser for reading electronic receipt files. It reads an electronic receipt file and converts it into a human-readable form. Medical (ika) receipts and DPC receipts are currently supported. An electronic receipt file looks like this: ```RE,1,1127,42806,サンプルDPC01,1,3160822,,,,,,,1111,,,,,0,,,,,59,,,,HO,06132013,1234567,1,5,57706,,3,2072,,,44400,,,,1080KO,80137045,2222222,,5,57706,,,,,0,0,0BU,110290XX99X00X,4280617,4280621,6,SB,5849004,,,N178,01,,SB,5849004,,,N178,11,,SB,5849004,,,N178,21,,SB,4280005,,,I500,31,,SB,4280005,,,I500,41,,SB,8843935,,,I352,42,,SB,8836695,,,I050,43,,KK,,,2,4280429,1,74,,,,,,,``` This is hardly a human-readable format, and it cannot be handled directly in Python either. `Receparser` converts it into dictionary-like objects that are easy to work with in Python. Each line starts with letters such as `RE`, `HO`, or `SB`. These are called **records**, and they determine what kind of data is stored on that line. Read with `Receparser`, the `RE` line of the file above looks as follows (the keys are the Japanese field names defined by the receipt specification): ```{'レコード識別番号': 'RE', 'レセプト番号': '1', 'レセプト種別': '1127','診療年月': '42806', '氏名': 'サンプルDPC01', '男女区分': '1','生年月日': '3160822'...}``` Overview: receparser.Rece reads receipt data one receipt at a time and returns it as a dictionary-like object keyed by **record**. receparser.MonthlyRece reads a whole file and returns a dictionary-like object keyed by **chart number**; each key holds the `Rece` object of the corresponding receipt. The first argument is the file to read, and the second argument, the `codes` option, selects the electronic receipt format: specify `codes="ika"` for medical receipts and `codes="dpc"` for DPC receipts. References: - Specification index https://shinryohoshu.mhlw.go.jp/shinryohoshu/receMenu/doReceInfo - Medical receipt specification https://shinryohoshu.mhlw.go.jp/shinryohoshu/file/spec/R02bt1_1_kiroku.pdf - DPC receipt specification https://shinryohoshu.mhlw.go.jp/shinryohoshu/file/spec/R02bt1_2_kiroku_dpc.pdf Usage
###Code
from receparser import MonthlyRece,Rece
# Imported here for explanation only.
# You normally do not need to import receparser.codes explicitly.
from receparser.codes import dpc_codes,ika_codes
# For example, the RE line of a DPC receipt file has this structure.
dpc_codes['RE']
# Load the sample file.
# When loading, pass either "dpc" or "ika" to the codes option.
dpc = MonthlyRece('dpcsample.csv',codes="dpc")
# .keys() lists the chart numbers.
# The object behaves like a dictionary; .items() and .values() also work.
dpc.keys()
# Selecting a record shows its contents. A record is always returned as a list of dictionaries.
dpc['1111']['RE']
# This is a case where multiple records are stored.
dpc['1111']['SB']
import pandas as pd
# With pandas, a record can easily be converted into a DataFrame or Series.
pd.DataFrame(dpc['1111']['SB'])
pd.Series(dpc['1111']['RE'])
# Reading a medical (ika) file works the same way.
ik = MonthlyRece('ikasample.csv',codes="ika")
ik.keys()
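# A sketch for illustration (assuming, as described above, that .values() yields
# the Rece objects and that every receipt carries an 'RE' header record):
# gather the RE header of each receipt in the DPC file into a single DataFrame.
re_headers = pd.DataFrame([rece['RE'][0] for rece in dpc.values()])
re_headers.head()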
###Output
_____no_output_____
###Markdown
Build a POMDP environment: Pendulum-V (only observe the velocity)
###Code
cuda_id = 0 # -1 if using cpu
ptu.set_gpu_mode(torch.cuda.is_available() and cuda_id >= 0, cuda_id)
env_name = "Pendulum-V-v0"
env = gym.make(env_name)
max_trajectory_len = env._max_episode_steps
act_dim = env.action_space.shape[0]
obs_dim = env.observation_space.shape[0]
print(env, obs_dim, act_dim, max_trajectory_len)
###Output
<TimeLimit<POMDPWrapper<TimeLimit<PendulumEnv<Pendulum-V-v0>>>>> 1 1 200
###Markdown
Build a recurrent model-free RL agent: separate architecture, `lstm` encoder, `oar` policy input space, `td3` RL algorithm (context length set later)
###Code
agent = Policy_RNN(
obs_dim=obs_dim,
action_dim=act_dim,
encoder="lstm",
algo="td3",
action_embedding_size=8,
state_embedding_size=32,
reward_embedding_size=8,
rnn_hidden_size=128,
dqn_layers=[128, 128],
policy_layers=[128, 128],
lr=0.0003,
gamma=0.9,
tau=0.005,
).to(ptu.device)
###Output
Critic_RNN(
(observ_embedder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=32, bias=True)
)
(action_embedder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(reward_embedder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(rnn): LSTM(48, 128)
(current_observ_action_embedder): FeatureExtractor(
(fc): Linear(in_features=2, out_features=48, bias=True)
)
(qf1): FlattenMlp(
(fc0): Linear(in_features=176, out_features=128, bias=True)
(fc1): Linear(in_features=128, out_features=128, bias=True)
(last_fc): Linear(in_features=128, out_features=1, bias=True)
)
(qf2): FlattenMlp(
(fc0): Linear(in_features=176, out_features=128, bias=True)
(fc1): Linear(in_features=128, out_features=128, bias=True)
(last_fc): Linear(in_features=128, out_features=1, bias=True)
)
)
Actor_RNN(
(observ_embedder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=32, bias=True)
)
(action_embedder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(reward_embedder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(rnn): LSTM(48, 128)
(current_observ_embedder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=32, bias=True)
)
(policy): DeterministicPolicy(
(fc0): Linear(in_features=160, out_features=128, bias=True)
(fc1): Linear(in_features=128, out_features=128, bias=True)
(last_fc): Linear(in_features=128, out_features=1, bias=True)
)
)
###Markdown
Define other training parameters such as context length and training frequency
###Code
num_updates_per_iter = 1.0 # training frequency
sampled_seq_len = 64 # context length
buffer_size = 1e6
batch_size = 32
num_iters = 150
num_init_rollouts_pool = 5
num_rollouts_per_iter = 1
total_rollouts = num_init_rollouts_pool + num_iters * num_rollouts_per_iter
n_env_steps_total = max_trajectory_len * total_rollouts
_n_env_steps_total = 0
print("total env episodes", total_rollouts, "total env steps", n_env_steps_total)
###Output
total env episodes 155 total env steps 31000
###Markdown
Define key functions: collect rollouts and policy update
###Code
@torch.no_grad()
def collect_rollouts(
num_rollouts, random_actions=False, deterministic=False, train_mode=True
):
"""collect num_rollouts of trajectories in task and save into policy buffer
    :param random_actions: if True, sample actions uniformly from the action space instead of using the policy
    :param deterministic: whether to select actions deterministically
    :param train_mode: whether to train (rollouts are stored in the buffer) or test
"""
if not train_mode:
assert random_actions == False and deterministic == True
total_steps = 0
total_rewards = 0.0
for idx in range(num_rollouts):
steps = 0
rewards = 0.0
obs = ptu.from_numpy(env.reset())
obs = obs.reshape(1, obs.shape[-1])
done_rollout = False
# get hidden state at timestep=0, None for mlp
action, reward, internal_state = agent.get_initial_info()
if train_mode:
# temporary storage
obs_list, act_list, rew_list, next_obs_list, term_list = (
[],
[],
[],
[],
[],
)
while not done_rollout:
if random_actions:
action = ptu.FloatTensor([env.action_space.sample()]) # (1, A)
else:
# policy takes hidden state as input for rnn, while takes obs for mlp
(action, _, _, _), internal_state = agent.act(
prev_internal_state=internal_state,
prev_action=action,
reward=reward,
obs=obs,
deterministic=deterministic,
)
# observe reward and next obs (B=1, dim)
next_obs, reward, done, info = utl.env_step(env, action.squeeze(dim=0))
done_rollout = False if ptu.get_numpy(done[0][0]) == 0.0 else True
# update statistics
steps += 1
rewards += reward.item()
# early stopping env: such as rmdp, pomdp, generalize tasks. term ignores timeout
term = (
False
if "TimeLimit.truncated" in info or steps >= max_trajectory_len
else done_rollout
)
if train_mode:
# append tensors to temporary storage
obs_list.append(obs) # (1, dim)
act_list.append(action) # (1, dim)
rew_list.append(reward) # (1, dim)
term_list.append(term) # bool
next_obs_list.append(next_obs) # (1, dim)
# set: obs <- next_obs
obs = next_obs.clone()
if train_mode:
# add collected sequence to buffer
policy_storage.add_episode(
observations=ptu.get_numpy(torch.cat(obs_list, dim=0)), # (L, dim)
actions=ptu.get_numpy(torch.cat(act_list, dim=0)), # (L, dim)
rewards=ptu.get_numpy(torch.cat(rew_list, dim=0)), # (L, dim)
terminals=np.array(term_list).reshape(-1, 1), # (L, 1)
next_observations=ptu.get_numpy(
torch.cat(next_obs_list, dim=0)
), # (L, dim)
)
print(
"Mode:",
"Train" if train_mode else "Test",
"env_steps",
steps,
"total rewards",
rewards,
)
total_steps += steps
total_rewards += rewards
if train_mode:
return total_steps
else:
return total_rewards / num_rollouts
def update(num_updates):
rl_losses_agg = {}
# print(num_updates)
for update in range(num_updates):
# sample random RL batch: in transitions
batch = ptu.np_to_pytorch_batch(policy_storage.random_episodes(batch_size))
# RL update
rl_losses = agent.update(batch)
for k, v in rl_losses.items():
if update == 0: # first iterate - create list
rl_losses_agg[k] = [v]
else: # append values
rl_losses_agg[k].append(v)
# statistics
for k in rl_losses_agg:
rl_losses_agg[k] = np.mean(rl_losses_agg[k])
return rl_losses_agg
###Output
_____no_output_____
###Markdown
Train and evaluate the agent: this should take less than 20 minutes
###Code
policy_storage = SeqReplayBuffer(
max_replay_buffer_size=int(buffer_size),
observation_dim=obs_dim,
action_dim=act_dim,
sampled_seq_len=sampled_seq_len,
sample_weight_baseline=0.0,
)
env_steps = collect_rollouts(
num_rollouts=num_init_rollouts_pool, random_actions=True, train_mode=True
)
_n_env_steps_total += env_steps
# evaluation parameters
last_eval_num_iters = 0
log_interval = 5
eval_num_rollouts = 10
learning_curve = {
"x": [],
"y": [],
}
while _n_env_steps_total < n_env_steps_total:
env_steps = collect_rollouts(num_rollouts=num_rollouts_per_iter, train_mode=True)
_n_env_steps_total += env_steps
train_stats = update(int(num_updates_per_iter * env_steps))
current_num_iters = _n_env_steps_total // (
num_rollouts_per_iter * max_trajectory_len
)
if (
current_num_iters != last_eval_num_iters
and current_num_iters % log_interval == 0
):
last_eval_num_iters = current_num_iters
average_returns = collect_rollouts(
num_rollouts=eval_num_rollouts,
train_mode=False,
random_actions=False,
deterministic=True,
)
learning_curve["x"].append(_n_env_steps_total)
learning_curve["y"].append(average_returns)
print(_n_env_steps_total, average_returns)
###Output
Mode: Train env_steps 200 total rewards -1215.5405168533325
Mode: Train env_steps 200 total rewards -1309.3240714073181
Mode: Train env_steps 200 total rewards -1070.255422860384
Mode: Train env_steps 200 total rewards -1716.9817371368408
Mode: Train env_steps 200 total rewards -1348.119238615036
Mode: Train env_steps 200 total rewards -1794.5983276367188
Mode: Train env_steps 200 total rewards -1641.6694905161858
Mode: Train env_steps 200 total rewards -1590.8518767878413
Mode: Train env_steps 200 total rewards -1717.778513431549
Mode: Train env_steps 200 total rewards -1716.919951915741
Mode: Test env_steps 200 total rewards -1690.6299517154694
Mode: Test env_steps 200 total rewards -1667.401160120964
Mode: Test env_steps 200 total rewards -1683.2179251909256
Mode: Test env_steps 200 total rewards -1629.752505838871
Mode: Test env_steps 200 total rewards -1730.7712788581848
Mode: Test env_steps 200 total rewards -1709.7121629714966
Mode: Test env_steps 200 total rewards -1737.636411190033
Mode: Test env_steps 200 total rewards -1724.8275074958801
Mode: Test env_steps 200 total rewards -1644.5090357661247
Mode: Test env_steps 200 total rewards -1670.3785852193832
2000 -1688.8836524367332
Mode: Train env_steps 200 total rewards -1675.8528361320496
Mode: Train env_steps 200 total rewards -1658.8392679691315
Mode: Train env_steps 200 total rewards -1519.6182126998901
Mode: Train env_steps 200 total rewards -1543.8249187469482
Mode: Train env_steps 200 total rewards -1378.7394891306758
Mode: Test env_steps 200 total rewards -1243.581422328949
Mode: Test env_steps 200 total rewards -1279.0839395523071
Mode: Test env_steps 200 total rewards -1115.5180749297142
Mode: Test env_steps 200 total rewards -1240.0015530586243
Mode: Test env_steps 200 total rewards -1131.4246773123741
Mode: Test env_steps 200 total rewards -1271.0484585762024
Mode: Test env_steps 200 total rewards -1296.8658256530762
Mode: Test env_steps 200 total rewards -1268.0181958675385
Mode: Test env_steps 200 total rewards -1105.4287464022636
Mode: Test env_steps 200 total rewards -1221.9913232326508
3000 -1217.29622169137
Mode: Train env_steps 200 total rewards -1086.907365836203
Mode: Train env_steps 200 total rewards -809.5890567302704
Mode: Train env_steps 200 total rewards -1509.1656613349915
Mode: Train env_steps 200 total rewards -875.1950886547565
Mode: Train env_steps 200 total rewards -883.6977178305387
Mode: Test env_steps 200 total rewards -932.8838503956795
Mode: Test env_steps 200 total rewards -916.5262511968613
Mode: Test env_steps 200 total rewards -853.4724770113826
Mode: Test env_steps 200 total rewards -972.6363238096237
Mode: Test env_steps 200 total rewards -916.7851620316505
Mode: Test env_steps 200 total rewards -892.7446937561035
Mode: Test env_steps 200 total rewards -911.9960522651672
Mode: Test env_steps 200 total rewards -862.5102658420801
Mode: Test env_steps 200 total rewards -909.3836004137993
Mode: Test env_steps 200 total rewards -902.3712181299925
4000 -907.1309894852341
Mode: Train env_steps 200 total rewards -896.5191862247884
Mode: Train env_steps 200 total rewards -1148.8554611206055
Mode: Train env_steps 200 total rewards -919.8976370096207
Mode: Train env_steps 200 total rewards -894.6185926496983
Mode: Train env_steps 200 total rewards -777.0896812826395
Mode: Test env_steps 200 total rewards -800.0095049291849
Mode: Test env_steps 200 total rewards -729.1357635855675
Mode: Test env_steps 200 total rewards -790.4656649529934
Mode: Test env_steps 200 total rewards -658.2100356258452
Mode: Test env_steps 200 total rewards -678.3389454782009
Mode: Test env_steps 200 total rewards -764.867270976305
Mode: Test env_steps 200 total rewards -711.1784103494138
Mode: Test env_steps 200 total rewards -704.299937158823
Mode: Test env_steps 200 total rewards -703.3847205489874
Mode: Test env_steps 200 total rewards -769.4560797959566
5000 -730.9346333401278
Mode: Train env_steps 200 total rewards -774.3973034918308
Mode: Train env_steps 200 total rewards -863.303290605545
Mode: Train env_steps 200 total rewards -754.3786760801449
Mode: Train env_steps 200 total rewards -787.7701032310724
Mode: Train env_steps 200 total rewards -814.8449696339667
Mode: Test env_steps 200 total rewards -641.1826608031988
Mode: Test env_steps 200 total rewards -673.1848703697324
Mode: Test env_steps 200 total rewards -636.2317231073976
Mode: Test env_steps 200 total rewards -636.3841380421072
Mode: Test env_steps 200 total rewards -634.7440396994352
Mode: Test env_steps 200 total rewards -1434.365993976593
Mode: Test env_steps 200 total rewards -639.5609966111369
Mode: Test env_steps 200 total rewards -638.4026339892298
Mode: Test env_steps 200 total rewards -629.0861927568913
Mode: Test env_steps 200 total rewards -635.3440890386701
6000 -719.8487338394392
Mode: Train env_steps 200 total rewards -624.8576611503959
Mode: Train env_steps 200 total rewards -731.2055732905865
Mode: Train env_steps 200 total rewards -643.7517330273986
Mode: Train env_steps 200 total rewards -512.888639099896
Mode: Train env_steps 200 total rewards -678.9873680695891
Mode: Test env_steps 200 total rewards -649.3965282291174
Mode: Test env_steps 200 total rewards -541.0664244294167
Mode: Test env_steps 200 total rewards -656.5433887466788
Mode: Test env_steps 200 total rewards -701.5938144102693
Mode: Test env_steps 200 total rewards -570.9794048666954
Mode: Test env_steps 200 total rewards -526.0970221487805
Mode: Test env_steps 200 total rewards -528.7169065512717
Mode: Test env_steps 200 total rewards -791.1858232319355
Mode: Test env_steps 200 total rewards -760.1559834107757
Mode: Test env_steps 200 total rewards -796.3674455285072
7000 -652.2102741553448
Mode: Train env_steps 200 total rewards -575.0728849545121
Mode: Train env_steps 200 total rewards -538.9270869866014
Mode: Train env_steps 200 total rewards -703.1943583320826
Mode: Train env_steps 200 total rewards -522.5574248465709
Mode: Train env_steps 200 total rewards -526.6231522634625
Mode: Test env_steps 200 total rewards -471.21681063994765
Mode: Test env_steps 200 total rewards -407.10355828516185
Mode: Test env_steps 200 total rewards -429.82667701132596
Mode: Test env_steps 200 total rewards -396.4019733443856
Mode: Test env_steps 200 total rewards -1491.0763459205627
Mode: Test env_steps 200 total rewards -326.2651424361393
Mode: Test env_steps 200 total rewards -464.98171285912395
Mode: Test env_steps 200 total rewards -392.0769012141973
Mode: Test env_steps 200 total rewards -269.7005622461438
Mode: Test env_steps 200 total rewards -509.407666021958
8000 -515.8057349978947
Mode: Train env_steps 200 total rewards -639.5204429877922
Mode: Train env_steps 200 total rewards -396.447283314541
Mode: Train env_steps 200 total rewards -519.2145761235151
Mode: Train env_steps 200 total rewards -386.9386151973158
Mode: Train env_steps 200 total rewards -393.6131444051862
Mode: Test env_steps 200 total rewards -136.34055368886766
Mode: Test env_steps 200 total rewards -130.04246410355336
Mode: Test env_steps 200 total rewards -137.05444939476
Mode: Test env_steps 200 total rewards -134.1194399067317
Mode: Test env_steps 200 total rewards -131.07375583963585
Mode: Test env_steps 200 total rewards -130.39294535505906
Mode: Test env_steps 200 total rewards -256.4807607967232
Mode: Test env_steps 200 total rewards -133.45546923366783
Mode: Test env_steps 200 total rewards -137.30824294477497
Mode: Test env_steps 200 total rewards -397.2588393399783
9000 -172.3526920603752
Mode: Train env_steps 200 total rewards -260.3047589848429
Mode: Train env_steps 200 total rewards -260.44967386405915
Mode: Train env_steps 200 total rewards -9.588460055063479
Mode: Train env_steps 200 total rewards -503.4001742233813
Mode: Train env_steps 200 total rewards -132.90466969866975
Mode: Test env_steps 200 total rewards -245.46063787024468
Mode: Test env_steps 200 total rewards -258.87249805172905
Mode: Test env_steps 200 total rewards -253.1965181294363
Mode: Test env_steps 200 total rewards -256.33532144408673
Mode: Test env_steps 200 total rewards -122.02367229596712
Mode: Test env_steps 200 total rewards -378.40153571846895
Mode: Test env_steps 200 total rewards -129.97556851245463
Mode: Test env_steps 200 total rewards -256.6560115632601
Mode: Test env_steps 200 total rewards -128.58447807095945
Mode: Test env_steps 200 total rewards -468.4694554193411
10000 -249.79756970759482
Mode: Train env_steps 200 total rewards -253.84205745416693
Mode: Train env_steps 200 total rewards -258.597339340964
Mode: Train env_steps 200 total rewards -249.67442950383338
Mode: Train env_steps 200 total rewards -264.99233946722234
Mode: Train env_steps 200 total rewards -123.49480776841665
Mode: Test env_steps 200 total rewards -386.33284205210657
Mode: Test env_steps 200 total rewards -374.89824844955365
Mode: Test env_steps 200 total rewards -127.82263034246353
Mode: Test env_steps 200 total rewards -3.396543635226408
Mode: Test env_steps 200 total rewards -0.3892205822030519
Mode: Test env_steps 200 total rewards -127.58443048472691
Mode: Test env_steps 200 total rewards -123.29965032166001
Mode: Test env_steps 200 total rewards -405.617472100781
Mode: Test env_steps 200 total rewards -131.20015325089298
Mode: Test env_steps 200 total rewards -270.9554879873649
11000 -195.1496679206979
Mode: Train env_steps 200 total rewards -128.46735045554306
Mode: Train env_steps 200 total rewards -385.3559364905559
Mode: Train env_steps 200 total rewards -133.3203926575943
Mode: Train env_steps 200 total rewards -130.180486971527
Mode: Train env_steps 200 total rewards -129.11331324546154
Mode: Test env_steps 200 total rewards -259.27573602375924
Mode: Test env_steps 200 total rewards -127.15911891811993
Mode: Test env_steps 200 total rewards -131.78587026067544
Mode: Test env_steps 200 total rewards -124.41451870201854
Mode: Test env_steps 200 total rewards -120.47274359833682
Mode: Test env_steps 200 total rewards -124.89280595941818
Mode: Test env_steps 200 total rewards -121.65913894737605
Mode: Test env_steps 200 total rewards -249.62018572923262
Mode: Test env_steps 200 total rewards -1.0191547659342177
Mode: Test env_steps 200 total rewards -130.19940298219444
12000 -139.04986758870655
Mode: Train env_steps 200 total rewards -130.7861404924015
Mode: Train env_steps 200 total rewards -128.20895186233065
Mode: Train env_steps 200 total rewards -240.80124919944137
Mode: Train env_steps 200 total rewards -127.05305419189972
Mode: Train env_steps 200 total rewards -389.74735507116566
Mode: Test env_steps 200 total rewards -125.799274083809
Mode: Test env_steps 200 total rewards -126.80654663550376
Mode: Test env_steps 200 total rewards -128.47082148335176
Mode: Test env_steps 200 total rewards -125.38395279903489
Mode: Test env_steps 200 total rewards -265.4943495452462
Mode: Test env_steps 200 total rewards -391.3820340028615
Mode: Test env_steps 200 total rewards -124.5938728672918
Mode: Test env_steps 200 total rewards -115.8693172446583
Mode: Test env_steps 200 total rewards -121.6324416497664
Mode: Test env_steps 200 total rewards -403.91459427748487
13000 -192.93472045890084
Mode: Train env_steps 200 total rewards -120.75656462824372
Mode: Train env_steps 200 total rewards -244.2110134603572
Mode: Train env_steps 200 total rewards -271.4861283576247
Mode: Train env_steps 200 total rewards -299.46712611912517
Mode: Train env_steps 200 total rewards -276.9068454174121
Mode: Test env_steps 200 total rewards -130.26577123824973
Mode: Test env_steps 200 total rewards -122.85300587835081
Mode: Test env_steps 200 total rewards -125.84164321703429
Mode: Test env_steps 200 total rewards -127.25999846162449
Mode: Test env_steps 200 total rewards -245.0846909333195
Mode: Test env_steps 200 total rewards -251.7522211139776
Mode: Test env_steps 200 total rewards -117.7094244834152
Mode: Test env_steps 200 total rewards -249.07677362083632
Mode: Test env_steps 200 total rewards -259.21219713821483
Mode: Test env_steps 200 total rewards -118.03599187266809
14000 -174.7091717957691
Mode: Train env_steps 200 total rewards -242.31402633567632
Mode: Train env_steps 200 total rewards -127.27280326851178
Mode: Train env_steps 200 total rewards -243.62500214390457
Mode: Train env_steps 200 total rewards -126.50611761247274
Mode: Train env_steps 200 total rewards -123.3945286332164
Mode: Test env_steps 200 total rewards -257.4191315458156
Mode: Test env_steps 200 total rewards -119.91926783090457
Mode: Test env_steps 200 total rewards -4.727449198719114
Mode: Test env_steps 200 total rewards -378.35922101838514
Mode: Test env_steps 200 total rewards -123.7072509995196
Mode: Test env_steps 200 total rewards -280.62047006061766
Mode: Test env_steps 200 total rewards -248.55686107743531
Mode: Test env_steps 200 total rewards -125.25552876619622
Mode: Test env_steps 200 total rewards -245.17300941608846
Mode: Test env_steps 200 total rewards -263.7774709605146
15000 -204.75156608741963
Mode: Train env_steps 200 total rewards -369.5970004310366
Mode: Train env_steps 200 total rewards -117.8776598579716
Mode: Train env_steps 200 total rewards -266.6137974287849
Mode: Train env_steps 200 total rewards -247.84643931523897
Mode: Train env_steps 200 total rewards -133.65093973837793
Mode: Test env_steps 200 total rewards -132.58213516324759
Mode: Test env_steps 200 total rewards -317.6314685828984
Mode: Test env_steps 200 total rewards -120.63207617402077
Mode: Test env_steps 200 total rewards -134.50522946193814
Mode: Test env_steps 200 total rewards -249.93733799178153
Mode: Test env_steps 200 total rewards -126.03254494443536
Mode: Test env_steps 200 total rewards -127.51484705973417
Mode: Test env_steps 200 total rewards -133.02907354477793
Mode: Test env_steps 200 total rewards -131.04472528398037
Mode: Test env_steps 200 total rewards -133.04624734260142
16000 -160.59556855494156
Mode: Train env_steps 200 total rewards -131.4692294076085
Mode: Train env_steps 200 total rewards -257.0220946841873
Mode: Train env_steps 200 total rewards -132.60133136808872
Mode: Train env_steps 200 total rewards -252.69747569982428
Mode: Train env_steps 200 total rewards -122.5156181063503
Mode: Test env_steps 200 total rewards -120.0488967075944
Mode: Test env_steps 200 total rewards -125.59240189334378
Mode: Test env_steps 200 total rewards -122.92463257256895
Mode: Test env_steps 200 total rewards -266.6653274325654
Mode: Test env_steps 200 total rewards -129.52725801430643
Mode: Test env_steps 200 total rewards -386.4986750278622
Mode: Test env_steps 200 total rewards -127.47746223770082
Mode: Test env_steps 200 total rewards -131.84532477753237
Mode: Test env_steps 200 total rewards -123.68566208239645
Mode: Test env_steps 200 total rewards -133.80112480558455
17000 -166.80667655514554
Mode: Train env_steps 200 total rewards -130.1032104054466
Mode: Train env_steps 200 total rewards -5.792526931327302
Mode: Train env_steps 200 total rewards -129.94445695829927
Mode: Train env_steps 200 total rewards -1.8074299860745668
Mode: Train env_steps 200 total rewards -371.67741363390815
Mode: Test env_steps 200 total rewards -129.01796465553343
Mode: Test env_steps 200 total rewards -255.2657772154198
Mode: Test env_steps 200 total rewards -124.8317355401814
Mode: Test env_steps 200 total rewards -127.61366206099046
Mode: Test env_steps 200 total rewards -130.1721339863725
Mode: Test env_steps 200 total rewards -128.43343426752836
Mode: Test env_steps 200 total rewards -264.26960422779666
Mode: Test env_steps 200 total rewards -3.667812744155526
Mode: Test env_steps 200 total rewards -251.8668613290938
Mode: Test env_steps 200 total rewards -251.72904552519321
18000 -166.68680315522653
Mode: Train env_steps 200 total rewards -129.41188386362046
Mode: Train env_steps 200 total rewards -122.25436197966337
Mode: Train env_steps 200 total rewards -132.0075741810724
Mode: Train env_steps 200 total rewards -125.08316496918269
Mode: Train env_steps 200 total rewards -120.87805001712695
Mode: Test env_steps 200 total rewards -130.77035507211986
Mode: Test env_steps 200 total rewards -130.97795120121737
Mode: Test env_steps 200 total rewards -285.9067427550326
Mode: Test env_steps 200 total rewards -130.19821366295218
Mode: Test env_steps 200 total rewards -248.72471698420122
Mode: Test env_steps 200 total rewards -131.5111675742737
Mode: Test env_steps 200 total rewards -252.134106502519
Mode: Test env_steps 200 total rewards -249.68509305920452
Mode: Test env_steps 200 total rewards -259.2564549049275
Mode: Test env_steps 200 total rewards -131.86590750053256
19000 -195.10307092169805
Mode: Train env_steps 200 total rewards -336.72006702711224
Mode: Train env_steps 200 total rewards -3.6598976548411883
Mode: Train env_steps 200 total rewards -128.5459162555635
Mode: Train env_steps 200 total rewards -389.0736679392867
Mode: Train env_steps 200 total rewards -132.46394797693938
Mode: Test env_steps 200 total rewards -127.63480124925263
Mode: Test env_steps 200 total rewards -132.9844055683352
Mode: Test env_steps 200 total rewards -350.4678683485836
Mode: Test env_steps 200 total rewards -1491.0205211639404
Mode: Test env_steps 200 total rewards -123.56267284578644
Mode: Test env_steps 200 total rewards -253.39906679093838
Mode: Test env_steps 200 total rewards -131.26202398515306
Mode: Test env_steps 200 total rewards -375.1163965202868
Mode: Test env_steps 200 total rewards -132.37188876396976
Mode: Test env_steps 200 total rewards -254.79661067272536
20000 -337.26162559089715
Mode: Train env_steps 200 total rewards -127.21233860775828
Mode: Train env_steps 200 total rewards -397.9239173475653
Mode: Train env_steps 200 total rewards -261.70106873475015
Mode: Train env_steps 200 total rewards -136.95836029946804
Mode: Train env_steps 200 total rewards -130.52756336517632
Mode: Test env_steps 200 total rewards -127.33369559422135
Mode: Test env_steps 200 total rewards -283.45684512890875
Mode: Test env_steps 200 total rewards -136.14634452015162
Mode: Test env_steps 200 total rewards -137.2795043103397
Mode: Test env_steps 200 total rewards -248.97463169554248
Mode: Test env_steps 200 total rewards -8.958229891955853
Mode: Test env_steps 200 total rewards -10.105981927365065
Mode: Test env_steps 200 total rewards -132.38649014476687
Mode: Test env_steps 200 total rewards -133.52735120104626
Mode: Test env_steps 200 total rewards -132.87370552495122
21000 -135.1042779939249
Mode: Train env_steps 200 total rewards -135.44952426105738
Mode: Train env_steps 200 total rewards -136.6360167451203
Mode: Train env_steps 200 total rewards -126.07958034798503
Mode: Train env_steps 200 total rewards -129.10063152387738
Mode: Train env_steps 200 total rewards -254.23420189972967
Mode: Test env_steps 200 total rewards -9.132988084107637
Mode: Test env_steps 200 total rewards -122.19331623334438
Mode: Test env_steps 200 total rewards -253.2292528897524
Mode: Test env_steps 200 total rewards -291.03938596788794
Mode: Test env_steps 200 total rewards -127.90111041348428
Mode: Test env_steps 200 total rewards -7.189530588919297
Mode: Test env_steps 200 total rewards -122.86703424248844
Mode: Test env_steps 200 total rewards -252.5274507328868
Mode: Test env_steps 200 total rewards -126.35793518205173
Mode: Test env_steps 200 total rewards -252.72059313277714
22000 -156.51585974677
Mode: Train env_steps 200 total rewards -132.3777971100062
Mode: Train env_steps 200 total rewards -263.93837735801935
Mode: Train env_steps 200 total rewards -380.18561655655503
Mode: Train env_steps 200 total rewards -408.3316973443143
Mode: Train env_steps 200 total rewards -134.41268048726488
Mode: Test env_steps 200 total rewards -252.1836907789111
Mode: Test env_steps 200 total rewards -136.87916581658646
Mode: Test env_steps 200 total rewards -130.30568698607385
Mode: Test env_steps 200 total rewards -295.1264161616564
Mode: Test env_steps 200 total rewards -285.27469485998154
Mode: Test env_steps 200 total rewards -257.36417460720986
Mode: Test env_steps 200 total rewards -122.39938643248752
Mode: Test env_steps 200 total rewards -136.13417248800397
Mode: Test env_steps 200 total rewards -251.1970808338374
Mode: Test env_steps 200 total rewards -135.31905758287758
23000 -200.21835265476255
Mode: Train env_steps 200 total rewards -265.19849015702493
Mode: Train env_steps 200 total rewards -268.84571858868003
Mode: Train env_steps 200 total rewards -137.15437516197562
Mode: Train env_steps 200 total rewards -131.01147694559768
Mode: Train env_steps 200 total rewards -389.00455401837826
Mode: Test env_steps 200 total rewards -123.15574537939392
Mode: Test env_steps 200 total rewards -264.8135799880838
Mode: Test env_steps 200 total rewards -359.71586162620224
Mode: Test env_steps 200 total rewards -121.86481238342822
Mode: Test env_steps 200 total rewards -134.40076231583953
Mode: Test env_steps 200 total rewards -127.8359218480764
Mode: Test env_steps 200 total rewards -252.95195665210485
Mode: Test env_steps 200 total rewards -133.68351730890572
Mode: Test env_steps 200 total rewards -249.9511700947769
Mode: Test env_steps 200 total rewards -416.6168870218098
24000 -218.49902146186213
Mode: Train env_steps 200 total rewards -133.75552151724696
Mode: Train env_steps 200 total rewards -249.84270376106724
Mode: Train env_steps 200 total rewards -119.0928434144007
Mode: Train env_steps 200 total rewards -252.1334647499025
Mode: Train env_steps 200 total rewards -4.308382875751704
Mode: Test env_steps 200 total rewards -250.32012339681387
Mode: Test env_steps 200 total rewards -130.86303978820797
Mode: Test env_steps 200 total rewards -268.61977915861644
Mode: Test env_steps 200 total rewards -256.51407427561935
Mode: Test env_steps 200 total rewards -268.53248357982375
Mode: Test env_steps 200 total rewards -131.89295327838045
Mode: Test env_steps 200 total rewards -247.8418615491828
Mode: Test env_steps 200 total rewards -132.06573122669943
Mode: Test env_steps 200 total rewards -246.07906676083803
Mode: Test env_steps 200 total rewards -128.755500536412
25000 -206.1484613550594
Mode: Train env_steps 200 total rewards -268.73735208273865
Mode: Train env_steps 200 total rewards -249.699738193769
Mode: Train env_steps 200 total rewards -257.7146478953655
Mode: Train env_steps 200 total rewards -132.48573947069235
Mode: Train env_steps 200 total rewards -117.73745695047546
Mode: Test env_steps 200 total rewards -117.13273281010333
Mode: Test env_steps 200 total rewards -125.37805172341177
Mode: Test env_steps 200 total rewards -246.70760537590832
Mode: Test env_steps 200 total rewards -126.25057095201919
Mode: Test env_steps 200 total rewards -356.92420602519996
Mode: Test env_steps 200 total rewards -247.3438758761622
Mode: Test env_steps 200 total rewards -123.14953158609569
Mode: Test env_steps 200 total rewards -127.49349682836328
Mode: Test env_steps 200 total rewards -130.86493495781906
Mode: Test env_steps 200 total rewards -131.28574351139832
26000 -173.2530749646481
Mode: Train env_steps 200 total rewards -129.1364300606656
Mode: Train env_steps 200 total rewards -131.16975290200207
Mode: Train env_steps 200 total rewards -121.95525176647061
Mode: Train env_steps 200 total rewards -347.63898885797244
Mode: Train env_steps 200 total rewards -1516.262550830841
Mode: Test env_steps 200 total rewards -125.29759021170321
Mode: Test env_steps 200 total rewards -116.29971585396561
Mode: Test env_steps 200 total rewards -132.65588944178307
Mode: Test env_steps 200 total rewards -242.80255369469523
Mode: Test env_steps 200 total rewards -120.76851275190711
Mode: Test env_steps 200 total rewards -129.98449951899238
Mode: Test env_steps 200 total rewards -263.6801114343107
Mode: Test env_steps 200 total rewards -133.65415045432746
Mode: Test env_steps 200 total rewards -247.21006692014635
Mode: Test env_steps 200 total rewards -117.64420653533307
27000 -162.99972968171642
Mode: Train env_steps 200 total rewards -130.20218588324497
Mode: Train env_steps 200 total rewards -118.29003828013083
Mode: Train env_steps 200 total rewards -247.1906664679991
Mode: Train env_steps 200 total rewards -251.76994302743697
Mode: Train env_steps 200 total rewards -380.8231740617193
Mode: Test env_steps 200 total rewards -128.14449329604395
Mode: Test env_steps 200 total rewards -133.00257929693907
Mode: Test env_steps 200 total rewards -121.33280960656703
Mode: Test env_steps 200 total rewards -117.21745651622768
Mode: Test env_steps 200 total rewards -260.304541438818
Mode: Test env_steps 200 total rewards -129.4903052574955
Mode: Test env_steps 200 total rewards -123.66184103582054
Mode: Test env_steps 200 total rewards -4.47467941895593
Mode: Test env_steps 200 total rewards -136.82730377465487
Mode: Test env_steps 200 total rewards -128.40459588193335
28000 -128.2860605523456
Mode: Train env_steps 200 total rewards -359.02930258901324
Mode: Train env_steps 200 total rewards -126.99004180729389
Mode: Train env_steps 200 total rewards -130.01239318959415
Mode: Train env_steps 200 total rewards -132.86401597573422
Mode: Train env_steps 200 total rewards -131.5378251487855
Mode: Test env_steps 200 total rewards -377.7228271923959
Mode: Test env_steps 200 total rewards -388.79292901046574
Mode: Test env_steps 200 total rewards -134.40097275190055
Mode: Test env_steps 200 total rewards -121.09551488608122
Mode: Test env_steps 200 total rewards -238.15228960616514
Mode: Test env_steps 200 total rewards -131.88327238895
Mode: Test env_steps 200 total rewards -246.09436088893563
Mode: Test env_steps 200 total rewards -5.141647985205054
Mode: Test env_steps 200 total rewards -130.17426304146647
Mode: Test env_steps 200 total rewards -125.60784388473257
29000 -189.90659216362982
Mode: Train env_steps 200 total rewards -376.1674876296893
Mode: Train env_steps 200 total rewards -375.34097828599624
Mode: Train env_steps 200 total rewards -127.59093644656241
Mode: Train env_steps 200 total rewards -136.18268738826737
Mode: Train env_steps 200 total rewards -129.42341559915803
Mode: Test env_steps 200 total rewards -391.77343064476736
Mode: Test env_steps 200 total rewards -254.3057643007487
Mode: Test env_steps 200 total rewards -134.01842796755955
Mode: Test env_steps 200 total rewards -391.50856303423643
Mode: Test env_steps 200 total rewards -265.35276218969375
Mode: Test env_steps 200 total rewards -136.64729456044734
Mode: Test env_steps 200 total rewards -133.1267894115299
Mode: Test env_steps 200 total rewards -5.491715028416365
Mode: Test env_steps 200 total rewards -133.11291719414294
Mode: Test env_steps 200 total rewards -127.73738552071154
30000 -197.3075049852254
Mode: Train env_steps 200 total rewards -121.78846242744476
Mode: Train env_steps 200 total rewards -131.7180840705987
Mode: Train env_steps 200 total rewards -3.245894107458298
Mode: Train env_steps 200 total rewards -129.29797964007594
Mode: Train env_steps 200 total rewards -379.41606050374685
Mode: Test env_steps 200 total rewards -121.7213050108403
Mode: Test env_steps 200 total rewards -131.86788710579276
Mode: Test env_steps 200 total rewards -264.3296286612749
Mode: Test env_steps 200 total rewards -126.13307171873748
Mode: Test env_steps 200 total rewards -269.3273641727865
Mode: Test env_steps 200 total rewards -126.06584425829351
Mode: Test env_steps 200 total rewards -138.2838618159294
Mode: Test env_steps 200 total rewards -128.50390940532088
Mode: Test env_steps 200 total rewards -255.43328048475087
Mode: Test env_steps 200 total rewards -273.4956193007529
31000 -183.51617719344796
###Markdown
Draw the learning curve
###Code
import matplotlib.pyplot as plt
plt.plot(learning_curve["x"], learning_curve["y"])
plt.xlabel("env steps")
plt.ylabel("return")
plt.show()
###Output
_____no_output_____
###Markdown
Example to use HW-NAS-Bench under NAS-Bench-201's Space
###Code
from hw_nas_bench_api import HWNASBenchAPI as HWAPI
hw_api = HWAPI("HW-NAS-Bench-v1_0.pickle", search_space="nasbench201")
# Example to get all the hardware metrics in the No.0,1,2 architectures under NAS-Bench-201's Space
print("===> Example to get all the hardware metrics in the No.0,1,2 architectures under NAS-Bench-201's Space")
for idx in range(3):
for dataset in ["cifar10", "cifar100", "ImageNet16-120"]:
HW_metrics = hw_api.query_by_index(idx, dataset)
print("The HW_metrics (type: {}) for No.{} @ {} under NAS-Bench-201: {}".format(type(HW_metrics),
idx,
dataset,
HW_metrics))
# Example of using the hardware metrics of the No.0 architecture on CIFAR-10 under NAS-Bench-201's Space
print("===> Example of using the hardware metrics of the No.0 architecture on CIFAR-10 under NAS-Bench-201's Space")
HW_metrics = hw_api.query_by_index(0, "cifar10")
for k in HW_metrics:
if 'average' in k:
print("{}: {}".format(k, HW_metrics[k]))
continue
elif "latency" in k:
unit = "ms"
else:
unit = "mJ"
print("{}: {} ({})".format(k, HW_metrics[k], unit))
# Create the network
config = hw_api.get_net_config(0, "cifar10")
print(config)
from hw_nas_bench_api.nas_201_models import get_cell_based_tiny_net
network = get_cell_based_tiny_net(config) # create the network from configuration
print(network) # show the structure of this architecture
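# Sanity-check sketch (added for illustration, not part of the HW-NAS-Bench API):
# push a dummy CIFAR-10-sized batch through the created network. In the
# NAS-Bench-201 code base the forward pass may return a (features, logits)
# tuple, so both cases are handled below.
import torch
with torch.no_grad():
    out = network(torch.randn(2, 3, 32, 32))
logits = out[1] if isinstance(out, tuple) else out
print("logits shape:", logits.shape)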
###Output
{'name': 'infer.tiny', 'C': 16, 'N': 5, 'arch_str': '|avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|', 'num_classes': 10}
TinyNetwork(
TinyNetwork(C=16, N=5, L=17)
(stem): Sequential(
(0): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(cells): ModuleList(
(0): InferCell(
info :: nodes=4, inC=16, outC=16, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(1): InferCell(
info :: nodes=4, inC=16, outC=16, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(2): InferCell(
info :: nodes=4, inC=16, outC=16, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(3): InferCell(
info :: nodes=4, inC=16, outC=16, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(4): InferCell(
info :: nodes=4, inC=16, outC=16, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(5): ResNetBasicblock(
ResNetBasicblock(inC=16, outC=32, stride=2)
(conv_a): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv_b): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(downsample): Sequential(
(0): AvgPool2d(kernel_size=2, stride=2, padding=0)
(1): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
(6): InferCell(
info :: nodes=4, inC=32, outC=32, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(7): InferCell(
info :: nodes=4, inC=32, outC=32, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(8): InferCell(
info :: nodes=4, inC=32, outC=32, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(9): InferCell(
info :: nodes=4, inC=32, outC=32, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(10): InferCell(
info :: nodes=4, inC=32, outC=32, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(11): ResNetBasicblock(
ResNetBasicblock(inC=32, outC=64, stride=2)
(conv_a): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(conv_b): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(downsample): Sequential(
(0): AvgPool2d(kernel_size=2, stride=2, padding=0)
(1): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
(12): InferCell(
info :: nodes=4, inC=64, outC=64, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(13): InferCell(
info :: nodes=4, inC=64, outC=64, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(14): InferCell(
info :: nodes=4, inC=64, outC=64, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(15): InferCell(
info :: nodes=4, inC=64, outC=64, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
(16): InferCell(
info :: nodes=4, inC=64, outC=64, [1<-(I0-L0) | 2<-(I0-L1,I1-L2) | 3<-(I0-L3,I1-L4,I2-L5)], |avg_pool_3x3~0|+|nor_conv_1x1~0|skip_connect~1|+|nor_conv_1x1~0|skip_connect~1|skip_connect~2|
(layers): ModuleList(
(0): POOLING(
(op): AvgPool2d(kernel_size=3, stride=1, padding=1)
)
(1): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): Identity()
(3): ReLUConvBN(
(op): Sequential(
(0): ReLU()
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): Identity()
(5): Identity()
)
)
)
(lastact): Sequential(
(0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(global_pooling): AdaptiveAvgPool2d(output_size=1)
(classifier): Linear(in_features=64, out_features=10, bias=True)
)
###Markdown
Example to use HW-NAS-Bench under FBNet's Space
###Code
# The index in FBNet Space is not a number but a list with 22 elements, and each element ranges from 0 to 8
from hw_nas_bench_api import HWNASBenchAPI as HWAPI
hw_api = HWAPI("HW-NAS-Bench-v1_0.pickle", search_space="fbnet")
# Example to get all the hardware metrics of 3 specific architectures under FBNet's Space
print("===> Example to get all the hardware metrics of 3 specific architectures under FBNet's Space")
for idx in [[0]*22, [0]*21+[1]*1, [0]*20+[1]*2]:
for dataset in ["cifar100", "ImageNet"]:
HW_metrics = hw_api.query_by_index(idx, dataset)
print("The HW_metrics (type: {}) for No.{} @ {} under NAS-Bench-201: {}".format(type(HW_metrics),
idx,
dataset,
HW_metrics))
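# (Illustrative addition, not part of the original example.) Since an FBNet
# architecture index is just a list of 22 integers, each between 0 and 8, a
# random architecture can be sampled and queried in exactly the same way:
import random
random_idx = [random.randint(0, 8) for _ in range(22)]
print("Randomly sampled FBNet architecture index:", random_idx)
print(hw_api.query_by_index(random_idx, "cifar100"))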
# Example to use the hardware metrics of one specific architecture (on CIFAR-100) under FBNet's Space
print("===> Example to use the hardware metrics of the No.0 architecture on CIFAR-100 under FBNet's Space")
HW_metrics = hw_api.query_by_index([0]*22, "cifar100")
for k in HW_metrics:
if 'average' in k:
print("{}: {}".format(k, HW_metrics[k]))
continue
elif "latency" in k:
unit = "ms"
else:
unit = "mJ"
print("{}: {} ({})".format(k, HW_metrics[k], unit))
# Create the network
config = hw_api.get_net_config([0]*22, "cifar100")
print(config)
from hw_nas_bench_api.fbnet_models import FBNet_Infer
network = FBNet_Infer(config) # create the network from configuration
print(network) # show the structure of this architecture
###Output
{'dataset': 'cifar100', 'num_classes': 100, 'op_idx_list': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'arch_str': ['k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1', 'k3_e1']}
FBNet_Infer(
(stem): ConvNorm(
(conv): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(cells): ModuleList(
(0): ConvBlock(
(conv1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(1): ConvBlock(
(conv1): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False)
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(16, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(2): ConvBlock(
(conv1): Conv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=24, bias=False)
(bn2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(3): ConvBlock(
(conv1): Conv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=24, bias=False)
(bn2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(4): ConvBlock(
(conv1): Conv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(24, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=24, bias=False)
(bn2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(5): ConvBlock(
(conv1): Conv2d(24, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(24, 24, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=24, bias=False)
(bn2): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(24, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(6): ConvBlock(
(conv1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(7): ConvBlock(
(conv1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(8): ConvBlock(
(conv1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(9): ConvBlock(
(conv1): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(10): ConvBlock(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(11): ConvBlock(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(12): ConvBlock(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(13): ConvBlock(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(14): ConvBlock(
(conv1): Conv2d(112, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(112, 112, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=112, bias=False)
(bn2): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(112, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(15): ConvBlock(
(conv1): Conv2d(112, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(112, 112, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=112, bias=False)
(bn2): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(112, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(16): ConvBlock(
(conv1): Conv2d(112, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(112, 112, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=112, bias=False)
(bn2): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(112, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(17): ConvBlock(
(conv1): Conv2d(112, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(112, 112, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=112, bias=False)
(bn2): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(112, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(18): ConvBlock(
(conv1): Conv2d(184, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False)
(bn2): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(184, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(19): ConvBlock(
(conv1): Conv2d(184, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False)
(bn2): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(184, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(20): ConvBlock(
(conv1): Conv2d(184, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False)
(bn2): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(184, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
(21): ConvBlock(
(conv1): Conv2d(184, 184, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(184, 184, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=184, bias=False)
(bn2): BatchNorm2d(184, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(184, 352, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(nl): ReLU(inplace=True)
)
)
(header): ConvNorm(
(conv): Conv2d(352, 1504, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(1504, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc): Linear(in_features=1504, out_features=100, bias=True)
)
###Markdown
gridfinder

Run through the full gridfinder model from data input to final guess for Burundi.

Note that the 'truth' data used for the grid here is very bad, so the accuracy results don't mean much.
###Code
import os
from pathlib import Path
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.animation as animation
import seaborn as sns
from IPython.display import display, Markdown
import numpy as np
import rasterio
import geopandas as gpd
import gridfinder as gf
from gridfinder import save_raster
###Output
_____no_output_____
###Markdown
Set folders and parameters
###Code
folder_inputs = Path('test_data')
folder_ntl_in = folder_inputs / 'ntl'
aoi_in = folder_inputs / 'gadm.gpkg'
roads_in = folder_inputs / 'roads.gpkg'
pop_in = folder_inputs / 'pop.tif'
grid_truth = folder_inputs / 'grid.gpkg'
folder_out = Path('test_output')
folder_ntl_out = folder_out / 'ntl_clipped'
raster_merged_out = folder_out / 'ntl_merged.tif'
targets_out = folder_out / 'targets.tif'
targets_clean_out = folder_out / 'targets_clean.tif'
roads_out = folder_out / 'roads.tif'
dist_out = folder_out / 'dist.tif'
guess_out = folder_out / 'guess.tif'
guess_skeletonized_out = folder_out / 'guess_skel.tif'
guess_nulled = folder_out / 'guess_nulled.tif'
guess_vec_out = folder_out / 'guess.gpkg'
animate_out = folder_out / 'animated'
percentile = 70 # percentile value to use when merging monthly NTL rasters
ntl_threshold = 0.1 # threshold when converting filtered NTL to binary (probably shouldn't change)
upsample_by = 2 # factor by which to upsample before processing roads (both dimensions are scaled by this)
cutoff = 0.0 # cutoff to apply to output dist raster, values below this are considered grid
###Output
_____no_output_____
###Markdown
Clip and merge monthly rasters
###Code
gf.clip_rasters(folder_ntl_in, folder_ntl_out, aoi_in)
raster_merged, affine = gf.merge_rasters(folder_ntl_out, percentile=percentile)
save_raster(raster_merged_out, raster_merged, affine)
print('Merged')
plt.imshow(raster_merged, vmin=0, vmax=1)
###Output
_____no_output_____
###Markdown
Create filter
###Code
ntl_filter = gf.create_filter()
X = np.fromfunction(lambda i, j: i, ntl_filter.shape)
Y = np.fromfunction(lambda i, j: j, ntl_filter.shape)
fig = plt.figure()
sns.set()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(X, Y, ntl_filter, cmap=cm.coolwarm, linewidth=0, antialiased=False)
###Output
_____no_output_____
###Markdown
Clip, filter and resample NTL
###Code
ntl_thresh, affine = gf.prepare_ntl(raster_merged_out,
aoi_in,
ntl_filter=ntl_filter,
threshold=ntl_threshold,
upsample_by=upsample_by)
save_raster(targets_out, ntl_thresh, affine)
print('Targets prepared')
plt.imshow(ntl_thresh, cmap='viridis')
###Output
_____no_output_____
###Markdown
Remove target areas with no underlying population
###Code
targets_clean = gf.drop_zero_pop(targets_out, pop_in, aoi_in)
save_raster(targets_clean_out, targets_clean, affine)
print('Removed zero pop')
plt.imshow(targets_clean, cmap='viridis')
###Output
_____no_output_____
###Markdown
Roads: assign values, clip and rasterize
###Code
roads_raster, affine = gf.prepare_roads(roads_in,
aoi_in,
targets_out)
save_raster(roads_out, roads_raster, affine, nodata=-1)
print('Costs prepared')
plt.imshow(roads_raster, cmap='viridis', vmin=0, vmax=1)
###Output
_____no_output_____
###Markdown
Get targets and costs and run algorithm
###Code
targets, costs, start, affine = gf.get_targets_costs(targets_clean_out, roads_out)
est_mem = gf.estimate_mem_use(targets, costs)
print(f'Estimated memory usage: {est_mem:.2f} GB')
dist = gf.optimise(targets, costs, start,
jupyter=True,
animate=True,
affine=affine,
animate_path=animate_out)
save_raster(dist_out, dist, affine)
plt.imshow(dist)
###Output
_____no_output_____
###Markdown
Filter dist results to grid guess
###Code
guess, affine = gf.threshold(dist_out, cutoff=cutoff)
save_raster(guess_out, guess, affine)
print('Got guess')
plt.imshow(guess, cmap='viridis')
###Output
_____no_output_____
###Markdown
Check results
###Code
true_pos, false_neg = gf.accuracy(grid_truth, guess_out, aoi_in)
print(f'Points identified as grid that are grid: {100*true_pos:.0f}%')
print(f'Actual grid that was missed: {100*false_neg:.0f}%')
###Output
_____no_output_____
###Markdown
Skeletonize
###Code
guess_skel, affine = gf.thin(guess_out)
save_raster(guess_skeletonized_out, guess_skel, affine)
print('Skeletonized')
plt.imshow(guess_skel)
###Output
_____no_output_____
###Markdown
Convert to geometry
###Code
guess_gdf = gf.raster_to_lines(guess_skeletonized_out)
guess_gdf.to_file(guess_vec_out, driver='GPKG')
print('Converted to geom')
guess_gdf.plot()
###Output
_____no_output_____
###Markdown
Adding palettes
###Code
list_palettes()
main = (0, 73, 114) # dark blue
warm = (114, 0, 16) # red
cold = (114, 98, 0) # ugly green
save_palette(colors=[warm, cold, main], name='book') # here warm replaces red, cold replaces green, main replaces blue
list_palettes()
remove_palette('book')
list_palettes()
save_palette(colors=[warm, cold, main], name='book')
list_palettes()
colors = load_palette('book')
colors
###Output
_____no_output_____
###Markdown
Changing colors
###Code
# Load original as a numpy array
img = load_sample(nbr=0) # 0-2
print(f'Image type is {type(img)} of which shape is {img.shape}')
show_img(img, size=(10,10))
# Replace colors
adjusted_img = rgb2(img, *load_palette('default'))
show_img(adjusted_img, size=(10,10))
# Inspect alpha channel
show_img(adjusted_img[:,:,3],size=(10, 10))
save_img('adjusted.png', adjusted_img)
# Note: Gimp's 'Mean Curvature Blur' filter does wonders on such images.
# It not only smooths the colors naturally but also cleans up the edges if you
# apply it to the alpha channel as well.
###Output
_____no_output_____
###Markdown
Example Task

As an example use case, we will select the best pretrained model for the task of contextual emotion detection from text. The collection of pretrained models is formed from the development history of a participant in the [EmoContext](https://www.humanizing-ai.com/emocontext.html) task at SemEval 2019. Changes at each development step include adding word representations such as ELMo and GloVe and leveraging speaker embeddings and/or the universal sentence encoder, which creates performance differences among the models. Our goal is to select the best pretrained model for making predictions on the unlabelled instances by labelling only a very few of the 5,509 instances via ```modelpicker```.

Model Picker

Run this command on the terminal:
###Code
# Modelpicker takes the following arguments in order: --path to prediction file (CSV) --path to labelspace file (CSV) --an integer budget
%run modelpicker data/emocontext/predictions data/emocontext/labelspace 5
###Output
Please enter the label for the instance with ID 1346:
###Markdown
Or take numpy arrays as inputs in your code

Load data

Here we load the predictions matrix and label space contained in the ```data/emocontext/``` path. They are both ```CSV``` files and the labels are coded as integers. Map your labels to integers before you proceed.

In the example below, we have 8 different models; the data consists of their predictions on 5,509 unlabeled data instances.
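If your raw labels are strings rather than integers, a minimal sketch of such a mapping could look like the following (the label names and order here are purely illustrative, not the mapping used in the provided files):

```python
# Purely illustrative label-to-integer mapping
label_to_int = {"others": 0, "happy": 1, "sad": 2, "angry": 3}
raw_labels = ["happy", "others", "angry"]
encoded = [label_to_int[label] for label in raw_labels]
print(encoded)  # [1, 0, 3]
```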
###Code
import numpy as np
from pathlib import Path

# Set filenames
filename_predictions = 'predictions'
filename_labelspace = 'labelspace'
# Model collections and label set
datapath = Path('data/emocontext')  # set path
mypredictions = np.loadtxt(datapath / f'{filename_predictions}.csv', delimiter=",")  # read predictions
mylabelspace = np.loadtxt(datapath / f'{filename_labelspace}.csv', delimiter=",")  # read label space
###Output
_____no_output_____
###Markdown
Run Model Picker

The ```modelpicker``` algorithm takes the following inputs:

**ARGUMENTS**

- _predictions_ The name of your CSV file of model predictions. This is a 2D array of model predictions on your freshly collected data with size 𝑁×𝑘, where 𝑁 is the number of unlabeled instances available at time 𝑡 and 𝑘 is the number of models. Each prediction is mapped to an integer.
- _labelset_ The name of your CSV file containing the elements of the label space. For instance, for a dataset with 4 classes, a possible label space is {0,1,2,3}. These labels should be consistent with the mapping used for the prediction matrix.
- _budget_ An integer that indicates how many instances the user wants to label.

At the output, the algorithm returns the following:

**OUTPUTS**

- _bestmodel_ ID of the winning model based on the requested labels.
- _beliefs_ An array of size $k$ that quantifies the posterior belief in each model being the best one. The posterior belief also hints at the ranking of the models: a higher value indicates a stronger belief that the model is the best.
###Code
## Set budget
budget = 5
## Run model picker
(bestmodel, beliefs) = modelpicker(mypredictions, mylabelspace, budget)
# Note: for the sake of completeness, we added the ground truth labels for this dataset (see data/emocontext/oracle.csv).
# For your own dataset, labeling is left to the user. The labeling shows below is based on the ground truths.
print('ID of best model: ' + str(bestmodel))
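# (Illustrative addition, assuming `beliefs` is a 1-D array with one posterior
# value per model, as described above.) The beliefs can also be used to rank
# all models, highest posterior first:
ranking = np.argsort(beliefs)[::-1]
for rank, model_id in enumerate(ranking, start=1):
    print(f"Rank {rank}: model {model_id} (posterior belief {beliefs[model_id]:.3f})")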
###Output
_____no_output_____
###Markdown
vari'art: Example of latent analysis of a rap clip
###Code
import numpy as np
import pandas as pd
import random
from sklearn.utils import shuffle
import tensorflow as tf
from tensorflow.keras.layers import (
InputLayer,
Dense,
Reshape,
Flatten,
Dropout,
Conv2D,
Conv2DTranspose,
MaxPool2D,
BatchNormalization,
LeakyReLU
)
from tensorflow.keras.optimizers import Adam
from variart.preprocessing import ArtVideo
from variart.model import VAE, GAN
from variart.latent import Latent
###Output
_____no_output_____
###Markdown
1. Load data and preprocessing
###Code
# Load video
name = 'DrillFR4'
filename = 'inputs/DrillFR4.mp4'
DrillFR4 = ArtVideo(name, filename)
DrillFR4.load_video()
# Crop images as squares
DrillFR4.square()
# Resize images
size = 128
new_shape=(size,size)
DrillFR4.resize(new_shape=new_shape)
# Rescale pixels in (0,1)
DrillFR4.rescale_images()
# Input data shape
print(f"Shape {DrillFR4.name}: {DrillFR4.shape}")
# Show a random image
DrillFR4.show_random_image()
###Output
_____no_output_____
###Markdown
2. Train GAN
###Code
# Parameters
batch_size = 128
noise_dim = 256
learning_rate=1e-4
wgan=False # Wasserstein GAN configuration if True
# Prepare data for training
data_train = DrillFR4.X.astype('float32')
data_train = shuffle(data_train, random_state=0)
input_shape_tuple = data_train.shape[1:]
train_dataset = tf.data.Dataset.from_tensor_slices(data_train).batch(batch_size)
# Definition of functions to create the generator and the discriminator
def make_generator_model():
dim=int(size/4)
generative_net = tf.keras.Sequential(
[
Dense(units=dim*dim*32, use_bias=False, input_shape=(noise_dim,)),
BatchNormalization(),
LeakyReLU(),
Reshape(target_shape=(dim, dim, 32)),
Conv2DTranspose(filters=64, kernel_size=3, strides=2, padding='same', use_bias=False),
BatchNormalization(),
LeakyReLU(),
Conv2DTranspose(filters=32, kernel_size=3, strides=2, padding='same', use_bias=False),
BatchNormalization(),
LeakyReLU(),
Conv2DTranspose(filters=3, kernel_size=3, strides=1, padding='same', use_bias=False, activation='tanh'),
]
)
return generative_net
def make_discriminator_model(wgan=False):
discriminative_net = tf.keras.Sequential([
Conv2D(16, (5, 5), strides=(2, 2), padding='same', input_shape=[size, size, 3]),
LeakyReLU(),
Dropout(0.3),
Conv2D(32, (5, 5), strides=(2, 2), padding='same'),
LeakyReLU(),
Dropout(0.3),
Flatten(),
])
if wgan:
discriminative_net.add(Dense(1))
else:
discriminative_net.add(Dense(1, activation='sigmoid'))
return discriminative_net
# Generator and discriminator
generator = make_generator_model()
discriminator = make_discriminator_model(wgan=wgan)
# Create GAN object
gan_model = GAN(DrillFR4.name, noise_dim, input_shape_tuple, generator, discriminator, learning_rate=learning_rate, wgan=wgan)
# Train GAN
gan_model.train(train_dataset, epochs=1000, n_steps_gen=1, n_steps_disc=1, freq_plot=10, n_to_plot=4)
# Generate images
gan_model.generate_and_plot(n_to_plot=4)
###Output
_____no_output_____
###Markdown
3. Train VAE
###Code
# Prepare data for training
data = DrillFR4.X.astype('float32')
data = shuffle(data, random_state=0)
TRAIN_BUF = int(data.shape[0]*0.9)
data_train = data[:TRAIN_BUF]
data_validation = data[TRAIN_BUF:]
# Parameters
batch_size = 128
epochs = 10000
early_stop_patience = 15
latent_dim = 16
optimizer = Adam(1e-3)
train_dataset = tf.data.Dataset.from_tensor_slices(data_train).batch(batch_size)
validation_dataset = tf.data.Dataset.from_tensor_slices(data_validation).batch(batch_size)
nb_features = data.shape[1]*data.shape[2]*data.shape[3]
input_shape = (batch_size, data.shape[1], data.shape[2], data.shape[3])
# Encoder and decoder networks (inference and generative)
inference_net = tf.keras.Sequential(
[
InputLayer(input_shape=(data.shape[1], data.shape[2], data.shape[3])),
Conv2D(filters=4, kernel_size=3, strides=(1, 1), activation='tanh'),
MaxPool2D((2,2)),
BatchNormalization(),
Conv2D(filters=8, kernel_size=3, strides=(1, 1), activation='tanh'),
MaxPool2D((2,2)),
BatchNormalization(),
Flatten(),
Dense(latent_dim + latent_dim),
]
)
generative_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim,)),
Dense(units=data.shape[1]*data.shape[2]*4, activation='tanh'),
BatchNormalization(),
Reshape(target_shape=(data.shape[1], data.shape[2], 4)),
Conv2DTranspose(
filters=8,
kernel_size=3,
strides=(1, 1),
padding="SAME",
activation='tanh'),
BatchNormalization(),
Conv2DTranspose(
filters=4,
kernel_size=3,
strides=(1, 1),
padding="SAME",
activation='tanh'),
BatchNormalization(),
Conv2DTranspose(
filters=3, kernel_size=3, strides=(1, 1), padding="SAME"),
]
)
# Model definition
model = VAE(DrillFR4.name, latent_dim, input_shape, inference_net, generative_net)
# Train
model = model.train(optimizer,
train_dataset,
validation_dataset,
epochs,
batch_size,
early_stop_patience = early_stop_patience,
freq_plot = 25,
plot_test = True,
n_to_plot = 4)
###Output
_____no_output_____
###Markdown
4. Latent analysis
###Code
# Create latent object
LatentDrillFR4 = Latent(data, model)
# Encode and decode data
LatentDrillFR4.encode_data()
LatentDrillFR4.decode_data()
# Create tsne representation of data in latent space
LatentDrillFR4.latent_tsne()
LatentDrillFR4.plot_latent_tsne()
# Compute distributions of latent space dimensions
LatentDrillFR4.compute_dist_coord()
LatentDrillFR4.plot_latent_dist_coord()
# Perform clustering in latent space, testing the number of clusters on a grid
LatentDrillFR4.latent_space_clustering(grid=range(5,100,5))
LatentDrillFR4.plot_silhouette_score()
# Select number of clusters
n_clusters = 5
clusterer = LatentDrillFR4.dico_clust[n_clusters]['clusterer']
LatentDrillFR4.plot_latent_tsne(clusterer=clusterer)
# Show images for a given cluster
label = 0
list_id = [i for i,l in enumerate(clusterer.labels_) if l==label][0:5]
LatentDrillFR4.plot_encoded_decoded(list_id=list_id)
# Generate images by sampling from distributions in the latent space
list_z, fig = LatentDrillFR4.generate_image(n=5, method='dist')
fig.show()
# Create a GIF from generated images
filename = f"outputs/gif_{LatentDrillFR4.name}.gif"
LatentDrillFR4.create_gif(list_z, filename)
###Output
_____no_output_____
###Markdown
Import dependencies
###Code
# Import all the module dependencies of this script
import json
import pandas
import getpass
import requests
import sys
import msrest
# Import the python autorest wrappers
from emsapi import emsapi
###Output
_____no_output_____
###Markdown
System Configuration / Constants
###Code
# The URL of EFOQA EMS API
api_url = "https://ems.efoqa.com/api/"
###Output
_____no_output_____
###Markdown
Gather User Credentials

One day we could pull them from a credential store or key vault.
###Code
# Query the user for the credentials for the ems.efoqa.com website.
efoqa_user = input('Enter your EFOQA username: ')
efoqa_pass = getpass.getpass(prompt = 'Enter your EFOQA password: ')
###Output
_____no_output_____
###Markdown
API Session set up
###Code
# Use a username and password combination to create an api client
myapi = emsapi.create(efoqa_user, efoqa_pass, api_url)
###Output
_____no_output_____
###Markdown
Query API for EMS Systems
###Code
# Print the systems the user has access to in order to demonstrate the API.
systems = myapi.ems_system.get_ems_systems()
# Create a list out of the systems list that contains only the information we want.
sysList = list(map(lambda system: [system.id, system.name, system.description], systems))
df = pandas.DataFrame(sysList,columns=['id', 'name', 'description'])
print("You have access to the following systems:")
df
###Output
_____no_output_____
###Markdown
Query API for time-series data

Let's pull a little bit of data. We'll pick 'baro-corrected altitude' for a particular flight on the demo system. We'll extract 100 points evenly spread through the entire flight.

The altitudeId value below was obtained by using the REST explorer to search for the parameter on EMS Online: https://ems.efoqa.com/Docs/Rest/Explorer

The output of this block of code should be an altitude chart that looks familiar.
###Code
# Baro-corrected altitude
altitudeId = "H4sIAAAAAAAEAG2Q0QuCMBDG34P+B/HdbZVUiApBPQT2kgi9rrn0YM7aZvbnN5JVUvdwfHD34/vu4iPXrbjTs+D7kksDF+DKezRC6ggSvzbmGmHc9z3qF6hVFZ4TMsOnQ5azmjc0AKkNlYz7A/Mm9GusUUkNZa00ijLj+BCTFd6UgApF/XQ68bx4SMHVvkyd1GjX6KytgFER46+FEZBfObOZ2db6eBBJEIlvVGfz4P+LhYRbZ29NyVCzgJD1MgitDIhrrj6+P/h04obj36VPLpuOeVIBAAA="
# EMS7 - the demo system.
emsId = myapi.find_ems_system_id('ems7-app')
# A flight that is known to exist
flightId = 190
# Pull out altitude with 100 samples through the file.
query = {
"select": [
{
"analyticId": altitudeId
}
],
"size": 100
}
# Execute the API call.
altitude = myapi.analytic.get_query_results(emsId, flightId, query)
# Offsets accessible using altitude.offsets
# Create a new data frame with the altitude in it.
altitudeDataFrame = pandas.DataFrame();
altitudeDataFrame["Altitude"] = altitude.results[0].values
line = altitudeDataFrame.plot.line()
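# (Illustrative addition, assuming `altitude.offsets` holds one time offset per
# returned sample, as the comment above suggests.) Using the offsets as the
# index makes the x-axis reflect position in the flight rather than sample number.
altitudeDataFrame.index = altitude.offsets
altitudeDataFrame.plot.line()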
###Output
_____no_output_____
###Markdown
Ray End-to-End NLP Example

**GOAL:** In this example, we will go through how to use Ray to implement an end-to-end NLP pipeline. Specifically, we will cover:

- How to use RaySGD to scale the training of the HuggingFace Transformers library.
- How to serve the trained model with Ray Serve.

First we install some dependencies:
###Code
# !pip install uvicorn
# !pip install blist
###Output
_____no_output_____
###Markdown
And also import the libraries needed for the example:
###Code
import os
import time
import math
import random
import argparse
import json
from filelock import FileLock
import numpy as np
import ray
from ray import serve
from ray.util.sgd.torch import TrainingOperator
from ray.util.sgd import TorchTrainer
import requests
import torch
import torch.distributed as dist
from torch.utils.data import (DataLoader, RandomSampler,
SequentialSampler, TensorDataset)
from torch.utils.tensorboard import SummaryWriter
from transformers import (
AdamW,
GPT2LMHeadModel,
GPT2Tokenizer,
CONFIG_MAPPING,
MODEL_WITH_LM_HEAD_MAPPING,
AutoConfig,
AutoModelWithLMHead,
AutoTokenizer,
DataCollatorForLanguageModeling,
HfArgumentParser,
LineByLineTextDataset,
PreTrainedTokenizer,
TextDataset,
Trainer,
TrainingArguments,
get_linear_schedule_with_warmup,
)
try:
from apex import amp
except ImportError:
amp = None
###Output
_____no_output_____
###Markdown
We also initialize Ray to use RaySGD and Ray Serve later.
###Code
ray.init(address="auto")
###Output
_____no_output_____
###Markdown
Dataset

Download the dataset. Here we use the wikitext-2 dataset as a demonstrative example; any text dataset would work.
###Code
!wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip
!unzip wikitext-2-v1.zip
###Output
--2020-06-05 22:32:11-- https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.217.40.150
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.217.40.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4475746 (4.3M) [application/zip]
Saving to: ‘wikitext-2-v1.zip.1’
wikitext-2-v1.zip.1 100%[===================>] 4.27M 6.45MB/s in 0.7s
2020-06-05 22:32:12 (6.45 MB/s) - ‘wikitext-2-v1.zip.1’ saved [4475746/4475746]
Archive: wikitext-2-v1.zip
creating: wikitext-2/
inflating: wikitext-2/wiki.test.tokens
inflating: wikitext-2/wiki.valid.tokens
inflating: wikitext-2/wiki.train.tokens
###Markdown
Parallel Training with RaySGD

In this section, we show how to use RaySGD to scale up the training of the HuggingFace Transformers library.

First we define the arguments for training:
###Code
# Training arguments (from hugging face)
training_arguments = TrainingArguments(
output_dir = "/home/ubuntu/ray-e2e-nlp-example/output_dir/",
learning_rate = 2e-5,
num_train_epochs = 3,
fp16 = True,
do_train = True,
do_eval = True
)
args = argparse.Namespace(**vars(training_arguments))
# args = training_arguments
# Model arguments
args.model_name_or_path = "gpt2"
args.model_type = "gpt2"
args.config_name = None
args.tokenizer_name = None
args.cache_dir = None
# Data processing arguments
args.train_data_file = "/home/ubuntu/ray-e2e-nlp-example/wikitext-2/wiki.train.tokens"
args.eval_data_file = "/home/ubuntu/ray-e2e-nlp-example/wikitext-2/wiki.test.tokens"
args.line_by_line = False
args.block_size = 128
args.overwrite_cache = False
args.tensorboard_dir = "/home/ubuntu/ray_results/ray-e2e-nlp-example/"
# Ray arguments
args.num_workers = 4
args.address = "auto"
use_gpu = torch.cuda.is_available() and not args.no_cuda
args.device = torch.device("cuda" if use_gpu else "cpu")
args
###Output
_____no_output_____
###Markdown
Here we set the random seeds for reproducibility:
###Code
def set_seed(args):
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed)
set_seed(args)
###Output
_____no_output_____
###Markdown
Data creator

Then we define the data creator for the trainer. The data creator builds the training data loader. Note that we do not need to wrap the data loader with a distributed sampler, since the RaySGD trainer does that automatically.
###Code
def data_creator(config):
args = config["args"]
tokenizer = AutoTokenizer.from_pretrained(
args.tokenizer_name
if args.tokenizer_name else args.model_name_or_path,
cache_dir=args.cache_dir if args.cache_dir else None,
)
if args.block_size <= 0:
args.block_size = tokenizer.max_len
# Our input block size will be the max possible for the model
else:
args.block_size = min(args.block_size, tokenizer.max_len)
train_dataset = TextDataset(
tokenizer=tokenizer, file_path=args.train_data_file,
block_size=args.block_size, overwrite_cache=args.overwrite_cache
)
train_sampler = RandomSampler(train_dataset) if not dist.is_initialized() else None
train_loader = DataLoader(
train_dataset,
sampler=train_sampler,
batch_size=args.per_device_train_batch_size
)
return train_loader
###Output
_____no_output_____
###Markdown
Model creator

The model creator creates a model for each training worker. Here we initialize it with a pretrained GPT-2 model.
###Code
def model_creator(config):
with FileLock(os.path.expanduser("~/.download.lock")):
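        # The file lock ensures that, on each machine, only one training worker
        # downloads the pretrained weights and tokenizer at a time, so the
        # shared cache is not corrupted by concurrent downloads.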
args = config["args"]
tokenizer = AutoTokenizer.from_pretrained(
args.tokenizer_name
if args.tokenizer_name else args.model_name_or_path,
cache_dir=args.cache_dir if args.cache_dir else None,
)
model_config = AutoConfig.from_pretrained(
args.config_name if args.config_name else args.model_name_or_path,
cache_dir=args.cache_dir if args.cache_dir else None,
)
model = AutoModelWithLMHead.from_pretrained(
args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=model_config,
cache_dir=args.cache_dir if args.cache_dir else None,
)
model.resize_token_embeddings(len(tokenizer))
return model
###Output
_____no_output_____
###Markdown
Optimizer creator

We use the AdamW optimizer for training. In the following code, we split the parameters into two groups, one with weight decay and one without (biases and LayerNorm weights), for better training accuracy.
###Code
def optimizer_creator(model, config):
args = config["args"]
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [
p for n, p in model.named_parameters()
if not any(nd in n for nd in no_decay)
],
"weight_decay": args.weight_decay,
},
{
"params": [
p for n, p in model.named_parameters()
if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0
},
]
return AdamW(
optimizer_grouped_parameters,
lr=args.learning_rate,
eps=args.adam_epsilon)
###Output
_____no_output_____
###Markdown
Training operator

Next we define the training operator. The training operator defines a custom training loop that includes gradient accumulation (i.e. performing a gradient update only after a certain number of forward and backward passes). It also defines the warmup learning-rate scheduler for the AdamW optimizer.
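As a minimal sketch of the gradient-accumulation idea only (the operator below additionally handles fp16 scaling and gradient clipping), assuming `model`, `optimizer` and `train_loader` are defined as in the creators above:

```python
accumulation_steps = 4
optimizer.zero_grad()
for step, batch in enumerate(train_loader):
    loss = model(input_ids=batch, labels=batch)[0]
    (loss / accumulation_steps).backward()  # scale so the accumulated gradients average out
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # one parameter update per `accumulation_steps` mini-batches
        optimizer.zero_grad()
```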
###Code
def announce_training(args, dataset_len, t_total):
# Train!
print("***** Running training *****")
print("CUDA_VISIBLE_DEVICES", os.environ["CUDA_VISIBLE_DEVICES"])
print(" Num examples = %d" % dataset_len)
print(" Num Epochs = %d" % args.num_train_epochs)
print(" Instantaneous batch size per GPU = %d" %
args.per_device_train_batch_size)
    print(
        " Total train batch size (w. parallel, distributed & accum) = %d" %
        (args.per_device_train_batch_size * args.gradient_accumulation_steps *
         args.num_workers)
    )
print(" Gradient Accumulation steps = %d" %
args.gradient_accumulation_steps)
print(" Total optimization steps = %d" % t_total)
class TransformerOperator(TrainingOperator):
def setup(self, config):
self.args = args = config["args"]
self.tokenizer = AutoTokenizer.from_pretrained(
args.tokenizer_name
if args.tokenizer_name else args.model_name_or_path,
cache_dir=args.cache_dir if args.cache_dir else None,
)
self.train_data_len = len(self.train_loader)
self._warmup_scheduler = get_linear_schedule_with_warmup(
self.optimizer,
num_warmup_steps=args.warmup_steps,
num_training_steps=self.calculate_t_total())
self._global_step = 0
announce_training(args, self.train_data_len, self.calculate_t_total())
def train_batch(self, batch, batch_info=None):
args = self.args
model = self.model
optimizer = self.optimizer
step = batch_info["batch_idx"]
model.train()
batch = batch.to(self.device)
outputs = model(input_ids=batch, labels=batch)
# model outputs are always tuple in transformers (see doc)
loss = outputs[0]
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
if args.fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
batch_loss = loss.item()
# last step in epoch but step is always smaller
# than gradient_accumulation_steps
ending = (self.train_data_len <= args.gradient_accumulation_steps
and (step + 1) == self.train_data_len)
if (step + 1) % args.gradient_accumulation_steps == 0 or ending:
if args.fp16:
torch.nn.utils.clip_grad_norm_(
amp.master_params(optimizer), args.max_grad_norm)
else:
torch.nn.utils.clip_grad_norm_(model.parameters(),
args.max_grad_norm)
self.optimizer.step()
self._warmup_scheduler.step() # Update learning rate schedule
model.zero_grad()
self._global_step += 1
learning_rate_scalar = self._warmup_scheduler.get_lr()[0]
return {"learning_rate": learning_rate_scalar, "loss": batch_loss}
def calculate_t_total(self):
args = self.args
grad_accum_steps = args.gradient_accumulation_steps
train_data_len = len(self.train_loader)
if args.max_steps > 0:
t_total = args.max_steps
args.num_train_epochs = args.max_steps // (
train_data_len // grad_accum_steps) + 1
else:
t_total = (
train_data_len // grad_accum_steps * args.num_train_epochs)
return t_total
###Output
_____no_output_____
###Markdown
RaySGD Torch Trainer

Finally, we define a RaySGD Torch trainer to perform distributed training.
###Code
trainer = TorchTrainer(
model_creator=model_creator,
data_creator=data_creator,
optimizer_creator=optimizer_creator,
training_operator_cls=TransformerOperator,
use_fp16=args.fp16,
apex_args={"opt_level": args.fp16_opt_level},
num_workers=args.num_workers,
use_gpu=use_gpu,
use_tqdm=False,
config={"args": args}
)
###Output
_____no_output_____
###Markdown
Evaluation

Here we define the evaluate function, which evaluates the trained model on the evaluation dataset.
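Since the returned value is the average cross-entropy loss, a common derived metric is perplexity, which is simply its exponential. For example, a validation loss of about 2.92 (as in the run further below) corresponds to:

```python
import math
print(math.exp(2.92))  # ≈ 18.5 perplexity
```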
###Code
def evaluate(args, model, tokenizer):
# Loop to handle MNLI double evaluation (matched, mis-matched)
results = {}
eval_dataset = TextDataset(
tokenizer=tokenizer, file_path=args.eval_data_file,
block_size=args.block_size, overwrite_cache=args.overwrite_cache
)
args.eval_batch_size = args.per_device_eval_batch_size
eval_sampler = SequentialSampler(eval_dataset)
eval_dataloader = DataLoader(
eval_dataset,
sampler=eval_sampler,
batch_size=args.eval_batch_size)
eval_loss = 0.0
nb_eval_steps = 0
for batch in eval_dataloader:
model.eval()
batch = batch.to(args.device)
with torch.no_grad():
outputs = model(input_ids=batch, labels=batch)
tmp_eval_loss = outputs[0]
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
return {"loss": eval_loss}
###Output
_____no_output_____
###Markdown
Training Loop

We define the training loop here. We evaluate the model on the validation set every epoch. We also log the results to TensorBoard, so we can view the training curve by clicking the TensorBoard button on the Anyscale dashboard.
###Code
tokenizer = trainer.get_local_operator().tokenizer
local_model = trainer.get_model()
epochs_trained = 0
train_iterator = range(
epochs_trained,
int(args.num_train_epochs)
)
tensorboard_writer = SummaryWriter(log_dir=args.tensorboard_dir, flush_secs=30)
if args.do_train:
for _ in train_iterator:
train_stats = trainer.train()
eval_stats = evaluate(args, local_model, tokenizer)
print("Training stats:", train_stats)
print("Validation stats:", eval_stats)
tensorboard_writer.add_scalar('Loss/train', train_stats['loss'], train_stats["epoch"])
tensorboard_writer.add_scalar('Loss/eval', eval_stats['loss'], train_stats["epoch"])
###Output
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
[2m[36m(pid=33032)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
[2m[36m(pid=33029)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
[2m[36m(pid=33058)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
[2m[36m(pid=33029)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
[2m[36m(pid=33032)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
[2m[36m(pid=33058)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
[2m[36m(pid=33032)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
[2m[36m(pid=33029)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
[2m[36m(pid=33058)[0m Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Training stats: {'num_samples': 2392, 'epoch': 1, 'batch_count': 598, 'learning_rate': 1.6661092530657763e-05, 'last_learning_rate': 1.3333333333333333e-05, 'loss': 3.389202869456747, 'last_loss': 3.241966962814331}
Validation stats: {'loss': 2.9518456591041855}
Training stats: {'num_samples': 2392, 'epoch': 2, 'batch_count': 598, 'learning_rate': 9.994425863991069e-06, 'last_learning_rate': 6.666666666666667e-06, 'loss': 3.210772928984269, 'last_loss': 3.298543930053711}
Validation stats: {'loss': 2.9231047490063835}
Training stats: {'num_samples': 2392, 'epoch': 3, 'batch_count': 598, 'learning_rate': 3.3277591973244156e-06, 'last_learning_rate': 0.0, 'loss': 3.161288238289364, 'last_loss': 3.2020671367645264}
Validation stats: {'loss': 2.9164522288167354}
###Markdown
When the training finishes, we save the model to disk and shut down the trainer to release the GPUs for serving.
###Code
def save_model(args, model, tokenizer):
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
print("Saving model checkpoint to %s" % args.output_dir)
model.save_pretrained(args.output_dir)
tokenizer.save_pretrained(args.output_dir)
torch.save(args, os.path.join(args.output_dir, "training_args.bin"))
save_model(args, local_model, tokenizer)
trainer.shutdown()
###Output
Saving model checkpoint to /home/ubuntu/ray-e2e-nlp-example/output_dir/
###Markdown
Serving

Here we demonstrate how to use Ray Serve to serve the model we just trained.

First we define a serving backend, which is a Ray actor that processes incoming requests. We assume each request is a prefix of an English sentence, and we use our model to predict the next word of the input segment.
###Code
serve.init()
class NextWord:
def __init__(self, args):
self.args = args
self.model = AutoModelWithLMHead.from_pretrained(args.output_dir)
self.tokenizer = AutoTokenizer.from_pretrained(args.output_dir)
self.model.to(args.device)
def __call__(self, flask_request):
input_sentence = flask_request.data.decode("utf-8")
generated = self.tokenizer.encode(input_sentence)
context = torch.tensor([generated]).to(args.device)
past = None
output, past = self.model(context, past=past)
token = torch.argmax(output[..., -1, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
sequence = self.tokenizer.decode(generated)
return sequence
# If a backend with this name has been defined before, delete it before creating a new one.
# serve.delete_endpoint("nextword")
serve.create_backend("nextword", NextWord, args, ray_actor_options={"num_gpus": 1})
###Output
_____no_output_____
###Markdown
Now we create a serving endpoint at `/nextword`.
###Code
# Similarly, if an endpoint with this name has been defined before, delete it before creating a new one.
# serve.delete_endpoint("nextword")
serve.create_endpoint("nextword", "/nextword", methods=["GET", "POST"])
###Output
[2m[36m(pid=33037)[0m 2020-06-05 22:57:53,092 INFO master.py:536 -- Registering route /nextword to endpoint nextword with methods ['GET', 'POST'].
###Markdown
Connect the endpoint with the backend.
###Code
serve.set_traffic("nextword", {"nextword": 1.0})
###Output
_____no_output_____
###Markdown
Now we can send the request to the server and receive the results:
###Code
r = requests.post("http://127.0.0.1:8000/nextword", data="The Manhattan bridge is a major")
r.text
###Output
_____no_output_____
###Markdown
Prerequisite: download ```pluto.py``` and the ```models``` folder, and put them both in the same directory as this file.
/base_dir
|-- pluto.py
|-- /models
| |-- fbm2.pt
| |-- wa1.pt
| ...
|
    |-- example.ipynb <-- you are here
In addition to that, you might want to download some example images from the [GitHub repo](https://github.com/Patzold/Pluto) or the [screenshots dataset](https://www.kaggle.com/patzold/screenshots-dataset).
###Code
# (optional) install dependencies
!pip install numpy
!pip install matplotlib
!pip install opencv-python
!pip install torch
!pip install easyocr
import pluto as pl
###Output
_____no_output_____
###Markdown
Let's start by loading a screenshot and displaying it.
###Code
path = "example images/6.jpg" # This image is from the Pluto GitHub repo
img = pl.read_image(path)
pl.show_image(img)
###Output
_____no_output_____
###Markdown
Next, let's run the core method for the category-specific feature. All features, like the Facebook, New York Times or FB Messenger one, have a to_json() method. It performs the core function of Pluto, which is to extract information from a screenshot and return it as a .json file.
###Code
post = pl.Facebook(img)
post.to_json()
###Output
_____no_output_____
###Markdown
In the example above, we specify neither an image nor an output path, so the image from the object initialization is used and the output is simply printed.
Let's perform the same action, but this time with a different image and the same object.
###Code
img2 = pl.read_image("example images/5.jpg")
pl.show_image(img2)
# check if you are running on cuda
print(post.determine_device())
post.to_json(img2)
###Output
_____no_output_____
###Markdown
Again, this is the basic workflow for all features in Pluto.
###Code
# This is what it would look like
# pl.Facebook(img).to_json()
# pl.NYT(img).to_json()
# pl.WPost(img).to_json()
# pl.WELT(img).to_json()
# pl.Discord(img).to_json()
# pl.FBM(img).to_json()
# pl.WhatsApp(img).to_json()
# pl.Tagesschau(img).to_json()
###Output
_____no_output_____
###Markdown
Let's look at the New York Times feature.
###Code
img = pl.read_image("example images/NYT_Example_3.jpg")
pl.show_image(img)
article = pl.NYT(img)
article.to_json(img, "NYT_3_out.json")
###Output
_____no_output_____
###Markdown
Check your directory. You should find a file called 'NYT_3_out.json'. But let's go a little bit further than that: what if we want to find the original article shown in the screenshot?
###Code
article.open_search()
###Output
_____no_output_____
###Markdown
DWD_historical_weather: example notebook

Set the federal state (Bundesland) as a global parameter
###Code
BUNDESLAND = 'Berlin'
from DWD_hist_weather import tagestemp_land, tageswerte_land
import pandas as pd
import pickle
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
###Output
_____no_output_____
###Markdown
The actual data loading: if a pickle file exists, read the data from it; otherwise call **tageswerte_land** from the module and fetch the data from the DWD
###Code
pickle_dateiname = f'{BUNDESLAND}.pickle'
try:
tageswerte = pickle.load(open(pickle_dateiname, 'rb'))
print(f'Wetterdaten für {BUNDESLAND} aus pickle eingelesen.')
except (OSError, IOError):
tageswerte = tageswerte_land(BUNDESLAND)
pickle.dump(tageswerte, open(pickle_dateiname, 'wb'))
print(f'\nWetterdaten für {BUNDESLAND} in pickle geschrieben.')
###Output
Wetterdaten für Berlin in pickle geschrieben.
###Markdown
Display the DataFrame
###Code
display(tageswerte)
###Output
_____no_output_____
###Markdown
Heatmap of the daily mean temperatures
###Code
ana = tageswerte.pivot(index='Jahr', columns='Tag_des_Jahres', values='TempMean')
f, ax = plt.subplots(figsize=(20, 10))
sns.heatmap(ana, vmin=-10, vmax=23, cmap="RdBu_r")
ax.axes.set_title("Tagesmitteltemperaturen", y=1.01)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_minor_locator(mdates.DayLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b'))
###Output
_____no_output_____
###Markdown
Annual mean temperatures plus 5-year rolling mean
###Code
ana = tageswerte.pivot(index='Jahr', columns='Tag_des_Jahres', values='TempMean')
ana['Jahresmittel'] = ana.mean(axis=1)
ana['Jahresmittel5'] = ana['Jahresmittel'].rolling(5).mean()
plt.subplots(figsize=(20, 10))
sns.lineplot(data=ana, x='Jahr', y='Jahresmittel')
sns.lineplot(data=ana, x='Jahr', y='Jahresmittel5', color='red')
###Output
_____no_output_____
###Markdown
This notebook is intended to show the usage of our proposed RegGNN and sample selection module in interactive notebook environments (e.g. Jupyter Notebook, Google Colab).

Installation

First, we install the required packages that are not already installed on our runtime. The following cell covers the packages that are not preinstalled on Colab.
###Code
import torch
torch, cuda = torch.__version__.split('+')  # e.g. '1.9.0+cu102' -> ('1.9.0', 'cu102')
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{torch}+{cuda}.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{torch}+{cuda}.html
!pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{torch}+{cuda}.html
!pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{torch}+{cuda}.html
!pip install torch-geometric
!pip install pymanopt
###Output
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-scatter
[?25l Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_scatter-2.0.7-cp37-cp37m-linux_x86_64.whl (2.6MB)
[K |████████████████████████████████| 2.6MB 2.6MB/s
[?25hInstalling collected packages: torch-scatter
Successfully installed torch-scatter-2.0.7
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-sparse
[?25l Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_sparse-0.6.10-cp37-cp37m-linux_x86_64.whl (1.4MB)
[K |████████████████████████████████| 1.4MB 2.6MB/s
[?25hRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from torch-sparse) (1.4.1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.7/dist-packages (from scipy->torch-sparse) (1.19.5)
Installing collected packages: torch-sparse
Successfully installed torch-sparse-0.6.10
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-cluster
[?25l Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_cluster-1.5.9-cp37-cp37m-linux_x86_64.whl (926kB)
[K |████████████████████████████████| 931kB 2.7MB/s
[?25hInstalling collected packages: torch-cluster
Successfully installed torch-cluster-1.5.9
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-spline-conv
[?25l Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_spline_conv-1.2.1-cp37-cp37m-linux_x86_64.whl (368kB)
[K |████████████████████████████████| 368kB 2.7MB/s
[?25hInstalling collected packages: torch-spline-conv
Successfully installed torch-spline-conv-1.2.1
Collecting torch-geometric
[?25l Downloading https://files.pythonhosted.org/packages/33/4b/9f6bb94ccd93f3c9324cb6b7c5742dfaf3c3a5127604cf5195a1901d048c/torch_geometric-1.7.1.tar.gz (222kB)
[K |████████████████████████████████| 225kB 5.0MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.19.5)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (4.41.1)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.4.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.5.1)
Requirement already satisfied: python-louvain in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.15)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.22.2.post1)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.23.0)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.1.5)
Collecting rdflib
[?25l Downloading https://files.pythonhosted.org/packages/d0/6b/6454aa1db753c0f8bc265a5bd5c10b5721a4bb24160fb4faf758cf6be8a1/rdflib-5.0.0-py3-none-any.whl (231kB)
[K |████████████████████████████████| 235kB 9.9MB/s
[?25hRequirement already satisfied: googledrivedownloader in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.11.3)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx->torch-geometric) (4.4.2)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->torch-geometric) (1.0.1)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (2021.5.30)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (1.24.3)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->torch-geometric) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->torch-geometric) (2018.9)
Collecting isodate
[?25l Downloading https://files.pythonhosted.org/packages/9b/9f/b36f7774ff5ea8e428fdcfc4bb332c39ee5b9362ddd3d40d9516a55221b2/isodate-0.6.0-py2.py3-none-any.whl (45kB)
[K |████████████████████████████████| 51kB 5.3MB/s
[?25hRequirement already satisfied: pyparsing in /usr/local/lib/python3.7/dist-packages (from rdflib->torch-geometric) (2.4.7)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from rdflib->torch-geometric) (1.15.0)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->torch-geometric) (2.0.1)
Building wheels for collected packages: torch-geometric
Building wheel for torch-geometric (setup.py) ... [?25l[?25hdone
Created wheel for torch-geometric: filename=torch_geometric-1.7.1-cp37-none-any.whl size=381206 sha256=25a8c9e057dba845e9808ea5d955acbae27c4f40e78fa000993caeda60cf8dc9
Stored in directory: /root/.cache/pip/wheels/f3/97/91/7572ed6157a4c1ccef22a91a7ae9365413b57bb1a65d6056fa
Successfully built torch-geometric
Installing collected packages: isodate, rdflib, torch-geometric
Successfully installed isodate-0.6.0 rdflib-5.0.0 torch-geometric-1.7.1
Collecting pymanopt
[?25l Downloading https://files.pythonhosted.org/packages/2e/fc/836f55664c3142c606d1e2e974e987ac2a81d6faf055cd1fdff3e4757e4a/pymanopt-0.2.5-py3-none-any.whl (59kB)
[K |████████████████████████████████| 61kB 1.9MB/s
[?25hRequirement already satisfied: numpy>=1.16 in /usr/local/lib/python3.7/dist-packages (from pymanopt) (1.19.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from pymanopt) (1.4.1)
Installing collected packages: pymanopt
Successfully installed pymanopt-0.2.5
###Markdown
Then, we clone the repository and move the files into the working directory.
###Code
!git clone https://github.com/basiralab/reggnn.git
!mv reggnn/* .
###Output
Cloning into 'reggnn'...
remote: Enumerating objects: 94, done.[K
remote: Counting objects: 100% (94/94), done.[K
remote: Compressing objects: 100% (68/68), done.[K
remote: Total 94 (delta 48), reused 64 (delta 25), pack-reused 0[K
Unpacking objects: 100% (94/94), done.
###Markdown
Help For Arguments: The help menu that lists valid argument values is displayed below.
###Code
!python demo.py -h
###Output
2021-06-20 21:50:21.288811: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
usage: demo.py [-h] [--mode {data,infer}] [--model {CPM,PNA,RegGNN}]
[--data-source {simulated,saved}]
[--measure {abs,geo,tan,node,eigen,close,concat_orig,concat_scale}]
optional arguments:
-h, --help show this help message and exit
--mode {data,infer} Creates data and topological features OR make
inferences on data
--model {CPM,PNA,RegGNN}
Chooses the inference model that will be used
--data-source {simulated,saved}
Simulates random data or loads from path in config
--measure {abs,geo,tan,node,eigen,close,concat_orig,concat_scale}
Chooses the topological measure to be used
###Markdown
Data Preparation: The following command will generate data according to the ```config.py``` file and extract eigenvector centrality features from it, saving everything in the current directory.
###Code
!python demo.py --mode data --data-source simulated --measure eigen
###Output
2021-06-20 21:50:28.435885: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
'simulated' data will be used with 'eigen' measure.
Starting topological feature extraction...
100% 30/30 [01:39<00:00, 3.33s/it]
Data and topological features are created and saved at ./simulated_data/ successfully.
###Markdown
Making Inferences: The following command will make inferences on the generated data, report the errors, and save the predictions in the working directory.
###Code
!python demo.py --mode infer --model RegGNN
###Output
2021-06-20 21:52:14.305496: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
RegGNN will be run on the data.
Cross Validation Fold 1/5
Cross Validation Fold 2/5
Cross Validation Fold 3/5
Cross Validation Fold 4/5
Cross Validation Fold 5/5
For k in [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]:
Mean MAE +- std over k: 6.559 +- 0.105
Min, Max MAE over k: 6.410, 6.856
Mean RMSE +- std over k: 8.320 +- 0.143
Min, Max RMSE over k: 8.174, 8.733
Predictions are successfully saved at ./.
###Markdown
The content of a cell magic is included above the input file, to allow for extra formatting.
###Code
%%tikz -i example.tikz --no-wrap -e example.pdf
\tikzset{every node/.style={font=\sffamily,white}}
###Output
_____no_output_____
###Markdown
Gapminder example
###Code
library('gapminder')
data(gapminder)
head(gapminder)
###Output
_____no_output_____
###Markdown
Exercise 1: Create a scatter plot of life expectancy against year, colored by continent.
###Code
library('ggplot2')
ggplot(data=gapminder, aes(x=year, y=lifeExp, colour=continent)) + geom_point()
###Output
_____no_output_____
###Markdown
Exercise 2: Compute the mean life expectancy for the Asian continent.
###Code
mean(gapminder$lifeExp[gapminder$continent == 'Asia'])
###Output
_____no_output_____
###Markdown
Exercise 3: Compute the mean life expectancy for the Americas.
###Code
mean(gapminder$lifeExp[gapminder$continent == 'Americas'])
###Output
_____no_output_____
###Markdown
Example notebook on how to read and plot the available data.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
plt.rc('text', usetex=True)
mvir_mass_bins = np.linspace(13.0,14.25,4)
mass_label = 'M_{\mathrm{vir}}'
###Output
_____no_output_____
###Markdown
mu: stellar mass surface density profiles
###Code
data_location = 'data/HSC/mu/'
files=['hsc_s16a_mu_logmvir_13.00_13.42.npy',
'hsc_s16a_mu_logmvir_13.42_13.83.npy',
'hsc_s16a_mu_logmvir_13.83_14.25.npy']
hsc_mu_arrays = [np.load(data_location+file) for file in files]
fig = plt.figure(figsize=(10*6, 12))
gs1 = gridspec.GridSpec(1, 3)
gs1.update(left=0.05, right=0.48, wspace=0.0, hspace=0.0)
ax1 = plt.subplot(gs1[0, 0])
ax2 = plt.subplot(gs1[0, 1])
ax3 = plt.subplot(gs1[0, 2])
ax1.plot(hsc_mu_arrays[0]['r_mpc'], hsc_mu_arrays[0]['median_mu'],
linestyle='-', linewidth=5.0, c='k', alpha=1, zorder=2, label='HSC')
ax2.plot(hsc_mu_arrays[1]['r_mpc'], hsc_mu_arrays[1]['median_mu'],
linestyle='-', linewidth=5.0, c='k', alpha=1, zorder=2, label='HSC')
ax3.plot(hsc_mu_arrays[2]['r_mpc'], hsc_mu_arrays[2]['median_mu'],
linestyle='-', linewidth=5.0, c='k', alpha=1, zorder=2, label='HSC')
ax1.fill_between(hsc_mu_arrays[0]['r_mpc'],
hsc_mu_arrays[0]['median_mu']+hsc_mu_arrays[0]['std_mu'],
hsc_mu_arrays[0]['median_mu']-hsc_mu_arrays[0]['std_mu'],
alpha=0.4, color='k', zorder=1, linewidth=3)
ax2.fill_between(hsc_mu_arrays[1]['r_mpc'],
hsc_mu_arrays[1]['median_mu']+hsc_mu_arrays[1]['std_mu'],
hsc_mu_arrays[1]['median_mu']-hsc_mu_arrays[1]['std_mu'],
alpha=0.4, color='k', zorder=1, linewidth=3)
ax3.fill_between(hsc_mu_arrays[2]['r_mpc'],
hsc_mu_arrays[2]['median_mu']+hsc_mu_arrays[2]['std_mu'],
hsc_mu_arrays[2]['median_mu']-hsc_mu_arrays[2]['std_mu'],
alpha=0.4, color='k', zorder=1, linewidth=3)
######################################################################################################################
# plot details
######################################################################################################################
# # X-Y limits
ax1.set_xlim(0.9, 3.9)
ax1.set_ylim(4.1, 10)
ax2.set_xlim(0.9, 3.9)
ax2.set_ylim(4.1, 10)
ax3.set_xlim(0.9, 3.9)
ax3.set_ylim(4.1, 10)
ax1.tick_params(axis='y', which='major', labelsize=35)
ax1.tick_params(axis='x', which='major', labelsize=35)
ax2.tick_params(axis='x', which='major', labelsize=35)
ax3.tick_params(axis='x', which='major', labelsize=35)
ax1.text(1.65, 4.2, r'${0}<{1}<{2}$'.format(round(mvir_mass_bins[0],2),mass_label, round(mvir_mass_bins[1],2)),
size=32)
ax2.text(1.65, 4.2, r'${0}<{1}<{2}$'.format(round(mvir_mass_bins[1],2),mass_label, round(mvir_mass_bins[2],2)),
size=32)
ax3.text(1.65, 4.2, r'${0}<{1}<{2}$'.format(round(mvir_mass_bins[2],2),mass_label, round(mvir_mass_bins[3],2)),
size=32)
ax1.legend(loc= 'upper center', fontsize=35)
#add twin x axis in kpc
x1, x2 = ax1.get_xlim()
ax1_twin = ax1.twiny()
ax1_twin.set_xlim(x1, x2)
ax1_twin.figure.canvas.draw()
ax1_twin.xaxis.set_ticks([2**0.25, 5**0.25, 10**0.25, 50**0.25, 100**0.25, 200**0.25])
ax1_twin.xaxis.set_ticklabels([r'$2$', r'$5$', r'$10$', r'$50$', r'$100$', r'$200$'])
ax1_twin.tick_params(axis='both', which='major', labelsize=35)
ax1_twin.set_xlabel(r'$R \: [kpc]$', fontsize=40)
x1, x2 = ax2.get_xlim()
ax2_twin = ax2.twiny()
ax2_twin.set_xlim(x1, x2)
ax2_twin.figure.canvas.draw()
ax2_twin.xaxis.set_ticks([2**0.25, 5**0.25, 10**0.25, 50**0.25, 100**0.25, 200**0.25])
ax2_twin.xaxis.set_ticklabels([r'$2$', r'$5$', r'$10$', r'$50$', r'$100$', r'$200$'])
ax2_twin.tick_params(axis='both', which='major', labelsize=35)
ax2_twin.set_xlabel(r'$R \: [kpc]$', fontsize=40)
x1, x2 = ax3.get_xlim()
ax3_twin = ax3.twiny()
ax3_twin.set_xlim(x1, x2)
ax3_twin.figure.canvas.draw()
ax3_twin.xaxis.set_ticks([2**0.25, 5**0.25, 10**0.25, 50**0.25, 100**0.25, 200**0.25])
ax3_twin.xaxis.set_ticklabels([r'$2$', r'$5$', r'$10$', r'$50$', r'$100$', r'$200$'])
ax3_twin.tick_params(axis='both', which='major', labelsize=35)
ax3_twin.set_xlabel(r'$R \: [\mathrm{kpc}]$', fontsize=40)
######################################################################################################################
#axis labels and vertical lines
ax1.set_xlabel(r'$R^{1/4} \: [\mathrm{kpc}^{1/4}]$', fontsize=40)
ax2.set_xlabel(r'$R^{1/4} \: [\mathrm{kpc}^{1/4}]$', fontsize=40)
ax3.set_xlabel(r'$R^{1/4} \: [\mathrm{kpc}^{1/4}]$', fontsize=40)
ax1.set_ylabel(r'$\mu_{\star}\ [\log (M_{\odot}/\mathrm{kpc}^2)]$', fontsize=40)
#vertical lines for HSC limits
ax1.axvline(100.0 ** 0.25, linestyle='--', linewidth=3.0, alpha=0.5, c='k')
ax1.axvline(6.0 ** 0.25, linestyle='--', linewidth=3.0, alpha=0.5, c='k')
ax2.axvline(100.0 ** 0.25, linestyle='--', linewidth=3.0, alpha=0.5, c='k')
ax2.axvline(6.0 ** 0.25, linestyle='--', linewidth=3.0, alpha=0.5, c='k')
ax3.axvline(100.0 ** 0.25, linestyle='--', linewidth=3.0, alpha=0.5, c='k')
ax3.axvline(6.0 ** 0.25, linestyle='--', linewidth=3.0, alpha=0.5, c='k')
#grey out psf region
ax1.axvspan(0, 6**0.25, alpha=0.25, color='grey')
ax2.axvspan(0, 6**0.25, alpha=0.25, color='grey')
ax3.axvspan(0, 6**0.25, alpha=0.25, color='grey')
######################################################################################################################
#adjustments to ticks and space between subplots
plt.setp(ax2.get_yticklabels(), visible=False)
plt.setp(ax3.get_yticklabels(), visible=False)
plt.show()
###Output
_____no_output_____
###Markdown
delta_sigma: weak lensing profiles
###Code
data_location = 'data/HSC/delta_sigma/'
files=['hsc_s16a_dsigma_logmvir_13.00_13.42.npy',
'hsc_s16a_dsigma_logmvir_13.42_13.83.npy',
'hsc_s16a_dsigma_logmvir_13.83_14.25.npy']
hsc_dsigma_arrays = [np.load(data_location+file) for file in files]
fig = plt.figure(figsize=(12*3, 10))
axes = [plt.subplot(1,3,i) for i in [1,2,3]]
for i in range(3):
axes[i].loglog()
#plot HSC
axes[i].errorbar(hsc_dsigma_arrays[i]['r_mpc'], hsc_dsigma_arrays[i]['dsigma_lr'],
hsc_dsigma_arrays[i]['dsigma_err_jk'], c='k', markersize=10, marker='o',
linewidth=4.0, alpha=0.75, label= 'HSC', zorder=10)
#text label
axes[i].text(0.05, 0.1, r'${0}<\log \left(\mathrm{{M_{{vir}}}}\right)<{1}$'.format(round(mvir_mass_bins[i],2),
round(mvir_mass_bins[i+1],2)),
size=34, transform=axes[i].transAxes) #transform to axis coords rather than data coordinates
axes[i].tick_params(axis='both', which='major', labelsize=30)
axes[i].set_ylim([7*10**-2, 4*10**2])
axes[i].set_xlabel(r'$\mathrm{R \ [Mpc]}$', fontsize=40)
axes[i].set_ylabel(r'$\Delta\Sigma \ [(M_{\odot})/\mathrm{pc}^2]$', fontsize=40)
axes[0].legend(fontsize=30)
plt.show()
###Output
_____no_output_____
###Markdown
Advanced Lane Finding Project
The goals / steps of this project are the following:
* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a perspective transform to rectify the binary image ("birds-eye view").
* Detect lane pixels and fit to find the lane boundary.
* Determine the curvature of the lane and vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image.
* Output a visual display of the lane boundaries and a numerical estimation of lane curvature and vehicle position.
---
First, I'll compute the camera calibration using chessboard images.
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# %matplotlib inline
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
def compute_object_points():
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')
shape = None
# Step through the list and search for chessboard corners
for fname in images:
img = mpimg.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
if shape == None:
shape = gray.shape[::-1]
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
return objpoints, imgpoints, shape
###Output
_____no_output_____
###Markdown
And so on and so forth...
###Code
# Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
objpoints, imgpoints, shape = compute_object_points()
def calibrate_camera(objpoints, imgpoints, shape):
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, shape, None, None)
return mtx, dist
# Apply a distortion correction to raw images.
def cal_undistort(img, mtx, dist):
dst = cv2.undistort(img, mtx, dist, None, mtx)
return dst
mtx, dist = calibrate_camera(objpoints, imgpoints, shape)
images = glob.glob('camera_cal/calibration*.jpg')
for fname in images:
img = mpimg.imread(fname)
uimg = cal_undistort(img, mtx, dist)
fn = fname.split('/')[-1]
print('undistorted_images/'+fn)
cv2.imwrite('undistorted_images/'+fn,uimg)
# Use color transforms, gradients, etc., to create a thresholded binary image.
def compute_thresholded_binary_image(img, sx_thresh=(10, 100)):
img = np.copy(img)
# Convert to HLS color space and separate the V channel
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
# Sobel x
sobelx = cv2.Sobel(s_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
return sxbinary*255
images = glob.glob('test_images/*.jpg')
for fname in images:
img = mpimg.imread(fname)
uimg = cal_undistort(img, mtx, dist)
fn = fname.split('/')[-1]
cv2.imwrite('undistorted_images/'+fn,uimg)
bimg = compute_thresholded_binary_image(uimg)
print('thresholded_binary_images/'+fn)
cv2.imwrite('thresholded_binary_images/'+fn,bimg)
# Apply a perspective transform to rectify binary image ("birds-eye view").
img = mpimg.imread('thresholded_binary_images/straight_lines1.jpg')
src = np.float32([[200, 720],[590, 450],[690, 450], [1110, 720]])
dest = np.float32([[350,720],[350, 0],[950, 0],[950, 720]])
def compute_perspective_transform(img, src, dest):
# plt.plot(200, 720, 'ro')
# plt.plot(590, 450, 'ro')
# plt.plot(690, 450, 'ro')
# plt.plot(1110, 720, 'ro')
# plt.imshow(img)
# plt.show()
M = cv2.getPerspectiveTransform(src, dest)
# Warp the image using OpenCV warpPerspective()
img_size = (img.shape[1], img.shape[0])
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST)
return warped
warped = compute_perspective_transform(img, src, dest)
cv2.imwrite('transformed_images/straight_lines1.jpg',warped)
plt.imshow(warped)
# Detect lane pixels and fit to find the lane boundary.
def find_lane_pixels(binary_warped):
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
window_height = np.int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Identify the nonzero pixels in x and y within the window #
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
# Find our lane pixels first
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
# Fit a second order polynomial to each using `np.polyfit`
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
window_img = np.zeros_like(out_img)
leftlane = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
rightlane = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
lanespan = np.hstack((leftlane, rightlane))
cv2.fillPoly(window_img, np.int_([lanespan]), (0,255, 0))
result = cv2.addWeighted(out_img, 1, window_img, 1, 0)
# Plots the left and right polynomials on the lane lines
# plt.plot(left_fitx, ploty, color='yellow')
# plt.plot(right_fitx, ploty, color='yellow')
return ploty, left_fit, right_fit, result
ploty, left_fitx, right_fitx, out_img = fit_polynomial(warped)
cv2.imwrite('poly_images/straight_lines1.jpg',out_img)
plt.imshow(out_img)
# Determine the curvature of the lane and vehicle position with respect to center.
def measure_curvature_real(ploty, left_fit_cr, right_fit_cr):
'''
Calculates the curvature of polynomial functions in meters.
'''
# Define conversions in x and y from pixels space to meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/600 # meters per pixel in x dimension
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
# Calculation of R_curve (radius of curvature)
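    # For a quadratic fit x = A*y^2 + B*y + C, the radius of curvature is
    # R = (1 + (2*A*y + B)**2)**1.5 / |2*A|; below it is evaluated at the
    # bottom of the image (y = y_eval, converted to meters via ym_per_pix).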
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
return left_curverad, right_curverad
# Calculate the radius of curvature in meters for both lane lines
left_curverad, right_curverad = measure_curvature_real(ploty, left_fitx, right_fitx)
# Warp the detected lane boundaries back onto the original image.
unwarped = compute_perspective_transform(out_img, dest, src)
plt.imshow(unwarped)
def overlay(original, addon):
return cv2.addWeighted(original, 1, addon, 0.5, 0)
img = mpimg.imread('test_images/straight_lines1.jpg')
ov = overlay(img, unwarped)
plt.imshow(ov)
# caliberation
objpoints, imgpoints, shape = compute_object_points()
mtx, dist = calibrate_camera(objpoints, imgpoints, shape)
# pipline on all test images
src = np.float32([[200, 720],[590, 450],[690, 450], [1110, 720]])
dest = np.float32([[350,720],[350, 0],[950, 0],[950, 720]])
images = glob.glob('test_images/straight_lines2.jpg')
def pipline(img):
uimg = cal_undistort(img, mtx, dist)
bimg = compute_thresholded_binary_image(uimg)
warped = compute_perspective_transform(bimg, src, dest)
ploty, left_fitx, right_fitx, out_img = fit_polynomial(warped)
left_curverad, right_curverad = measure_curvature_real(ploty, left_fitx, right_fitx)
unwarped = compute_perspective_transform(out_img, dest, src)
ov = overlay(img, unwarped)
text = 'left curverad: ' + str(left_curverad) + ' right_curverad: ' + str(right_curverad)
cv2.putText(ov,text,(100,100), cv2.FONT_HERSHEY_SIMPLEX, 1,(255,255,255),2,cv2.LINE_AA)
return ov
for fname in images:
img = mpimg.imread(fname)
ov = pipline(img)
plt.imshow(ov)
fn = fname.split('/')[-1]
print('overlay_images/'+fn)
cv2.imwrite('overlay_images/'+fn, ov)
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
return pipline(image)
white_output = 'project_video_lane.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("project_video.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
white_output = 'challenge_video_lane.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("challenge_video.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
_____no_output_____
###Markdown
Interactive Labeling: In many applications creating the label taxonomy for your problem is the hard part. In this notebook we show some tricks that can help you with that.
###Code
%load_ext autoreload
%autoreload 2
import altair as alt
from datasets import load_dataset
import pandas as pd
from whatlies.language import UniversalSentenceLanguage, BytePairLanguage
from whatlies.transformers import Umap
from ipysheet.pandas_loader import from_dataframe, to_dataframe
from sklearn.metrics.pairwise import cosine_similarity
###Output
_____no_output_____
###Markdown
Example Dataset
###Code
ds = load_dataset("bing_coronavirus_query_set", queries_by="state", start_date="2020-09-01", end_date="2020-09-30")
df = ds.data['train'].to_pandas()
us = (
df
.loc[lambda d: d['Country']=="United States"]
.value_counts("Query")
.reset_index()
.rename({"Query":"query",0:"counts"},axis=1)
)
us
###Output
_____no_output_____
###Markdown
Trick 1: Align Regex Matches. This makes it much easier to eyeball what is matched by a pattern.
###Code
from rich.console import Console
import re
def print_matches_centered(texts,pattern,left=45,right=45,style="bold blue underline",max_lines=None):
console = Console(highlight=False)
n_matches = 0
max_lines = max_lines if max_lines else len(texts)
# we shuffle
for text in pd.Series(texts).sample(frac=1):
match = re.search(pattern,text)
if match:
start, end = match.span()
length = end - start
if start > left:
prefix = text[(start-left):start]
else:
prefix = " "*(left-start) + text[:start]
processed_text = prefix+f"[{style}]" + text[start:end] + "[/]" + text[end:(end+right-length)]
console.print(processed_text)
n_matches+=1
if n_matches >= max_lines:
break
print_matches_centered(us['query'],"mask",max_lines=20)
###Output
_____no_output_____
###Markdown
Load embeddings: We use the whatlies package as a convenience wrapper around our sentence embedders and dimensionality reducers. In practice you want to try out different embeddings, different dimensionality reducers, and different hyperparameters for the latter. Clustering in practice is like reading tea leaves: you need to stir the cup every now and then to see what new patterns emerge.
###Code
lang = BytePairLanguage("en") # Use UniversalSentenceLanguage() for better results
embset = lang[[s for s in us['query']]]
embs = embset.to_X()
umapped = embset.transform(Umap(2)).to_X() # Umap has kwargs that you can play with, or try PCA
us[['dim0','dim1']] = umapped
###Output
_____no_output_____
###Markdown
Plot: This plot is also available in the whatlies package, where it is called the brush_plot; we create it ourselves here so we can edit it interactively more easily.
###Code
us['label'] = "Missing" # initialize labels
x_axis='dim0'
y_axis='dim1'
x_label = "X"
y_label = "Y"
color="label"
tooltip=["query",'label','counts']
title="hello"
n_show=15
result = (
alt.Chart(us)
.mark_circle(size=60,opacity=.2)
.encode(
x=alt.X(x_axis, axis=alt.Axis(title=x_label)),
y=alt.X(y_axis, axis=alt.Axis(title=y_label)),
tooltip=tooltip,
color=alt.Color(":N", legend=None) if not color else alt.Color(color),
)
.properties(title=title)
)
brush = alt.selection(type="interval")
ranked_text = (
alt.Chart(us)
.mark_text()
.encode(
y=alt.Y("row_number:O", axis=None),
color=alt.Color(":N", legend=None) if not color else alt.Color(color),
)
.transform_window(row_number="row_number()")
.transform_filter(brush)
.transform_window(rank="rank(row_number)")
.transform_filter(alt.datum.rank < n_show)
)
text_plt = ranked_text.encode(text="query:N").properties(
width=250, title="Text Selection"
)
result.add_selection(brush) | text_plt
###Output
_____no_output_____
###Markdown
Assign labels: This is just an example. Here we greedily assign labels: a row gets the label of the last pattern that was matched. This isn't perfect; I'd prefer to assign each label to a separate column and then do some manual refinement afterwards.
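Below is a minimal sketch of that per-column alternative; it is illustrative only (the `match_*` column names are made up here) and reuses two of the patterns defined in the next cell.

```python
# One boolean column per pattern: overlapping matches stay visible and can
# be resolved by hand afterwards instead of being overwritten greedily.
example_patterns = {
    "county": r"county",
    "mask": r"mask|shield|face\b|cover",
}
for name, pat in example_patterns.items():
    us[f"match_{name}"] = us["query"].str.contains(pat, regex=True)

# Rows matching more than one pattern can then be inspected explicitly.
ambiguous = us[us.filter(like="match_").sum(axis=1) > 1]
```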
###Code
# us['label'] = "Missing" # Uncomment if you want to remove all previous labels
patterns = [
"county",
"mask|shield|face\b|cover",
# states_pattern, # Defined below
"quarantine",
"football|nfl|ball"
]
for pat in patterns:
us.loc[[True if re.search(pat, q) else False for q in us['query']], 'label'] = pat
us['label'].value_counts(normalize=True).reset_index().assign(label = lambda d: d.label.apply(lambda f: f"{f:.2f}"))
states = """Alabama
Alaska
Arizona
Arkansas
California
Colorado
Connecticut
Delaware
Florida
Georgia
Hawaii
Idaho
Illinois
Indiana
Iowa
Kansas
Kentucky
Louisiana
Maine
Maryland
Massachusetts
Michigan
Minnesota
Mississippi
Missouri
Montana
Nebraska
Nevada
New Hampshire
New Jersey
New Mexico
New York
North Carolina
North Dakota
Ohio
Oklahoma
Oregon
Pennsylvania
Rhode Island
South Carolina
South Dakota
Tennessee
Texas
Utah
Vermont
Virginia
Washington
West Virginia
Wisconsin
Wyoming
District of Columbia
Puerto Rico
Guam
American Samoa
U.S. Virgin Islands
Northern Mariana Islands
"""
states_pattern = "|".join(s.lower() for s in states.splitlines())
###Output
_____no_output_____
###Markdown
Similarity Search: Sometimes you can't describe a rule as a regex pattern. Here we show how you can write an example sentence, display similar sentences in an editable table, and then assign a label to the rows that match it well. Ideally you would also add these as separate columns and then resolve which label(s) fit best.
###Code
ex = "face mask"
us['sims'] = cosine_similarity(lang[ex].vector.reshape((1,-1)),embs).reshape((-1,))
manual_label = (
us
.nlargest(50,columns="sims")
.sort_values(by="dim0")
.assign(relevant = "X")
[["query","label","relevant"]]
)
sheet = from_dataframe(manual_label)
sheet
out = to_dataframe(sheet)
us.loc[
out.query("relevant == 'X'").index.astype(int), "label"] = "manual_mask"
###Output
_____no_output_____
###Markdown
Example notebook for the _MOSES Sternfahrt 4_. This notebook downloads data from the MOSES Sternfahrt 4 mission using the REST API and gets the model parameters of the ECOSMO model via the `ecosmo.api` module. The MOSES 4 Show Case: We select some sensors from the MOSES 4 Sternfahrt. These have been taken from the O2A Data Web Services at https://dashboard.awi.de/data-xxl
###Code
sensors = [
"small_scale_facility:pfb_awi_751801:longitude_0001",
"small_scale_facility:pfb_awi_751801:latitude_0001",
"small_scale_facility:pfb_awi_751801:temperature_0001",
"small_scale_facility:pfb_awi_751801:temperature_sbe45_0001",
"small_scale_facility:pfb_awi_751801:salinity_sbe45_0001",
]
###Output
_____no_output_____
###Markdown
The MOSES 4 Show Case: To download the data, we use the O2A data-web-services package (https://github.com/o2a-data/o2a-data-dws).
###Code
from dws import dws
df = dws.get(sensors, "2020-01-01", "2020-12-31")
df.columns = [col.split(":")[-1].replace("_0001", "").replace(" (mean) ", " ") for col in df.columns]
df
###Output
_____no_output_____
###Markdown
The ECOSMO Backend Module: We created a backend module that loads climate model data and extracts the data along a specific path.
```python
def get_model_parameter(
    names: Union[Parameter, List[Parameter]],
    time: List[datetime.datetime],
    lat: List[float],
    lon: List[float],
) -> DataFrame:
    """Get ECOSMO model parameters for a given 3D path.

    This function takes a certain list of parameters and a 3D path, denoted by
    time, latitude and longitude, and extracts the data from the high
    resolution model output.

    Parameters
    ----------
    names : Union[Parameter, List[Parameter]]
        The parameter names to extract, see :class:`Parameter`.
    time : List[datetime.datetime]
        The list of times for each point.
    lat : List[float]
        The list of latitudes in degrees North for each point. Must be of the
        same length as `time` and `lon`.
    lon : List[float]
        The list of longitudes in degrees East for each point. Must be of the
        same length as `time` and `lat`.

    Returns
    -------
    DataFrame
        The dataframe with the selected model parameters for the given path.
    """
    if isinstance(names, str):
        names = [names]
    params = [param.value for param in map(Parameter, names)]
    if len(lat) != len(lon) or len(lat) != len(time):
        raise ValueError("lat, time and lon must all be of the same length!")
    with xr.open_dataset(
        osp.join(data_dir, "ecosmo-ute-daewel-20200501-20200531.nc")
    ) as ds:
        ds = ds[params].isel(layer=0).drop_vars("layer")
        lon_da = xr.DataArray(lon, dims="path")
        lat_da = xr.DataArray(lat, dims="path")
        time_da = xr.DataArray(time, dims="path")
        ds = ds.interp(lon=lon_da, lat=lat_da, time=time_da)
        # convert the data to a pandas dataframe
        return ds.to_dataframe()
```
Using the ECOSMO api module: Now we call the backend module (assuming that it is connected already).
###Code
from ecosmo.api import get_model_parameter
model_data = get_model_parameter(
["temp", "salt"],
df.datetime.to_list(),
df["latitude [degree]"].to_list(),
df["longitude [degree]"].to_list()
)
model_data
###Output
_____no_output_____
###Markdown
Model vs. Observations Now we can merge the `temp` and `salt` columns into our observations and plot them.
###Code
df["ecosmo_temperature"] = model_data.temp.values
df["ecosmo_salinity"] = model_data.salt.values
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 3, figsize=(14, 4), dpi=150)
df.plot.scatter("temperature [°C]", "ecosmo_temperature", ax=axes[0])
df.plot.scatter("temperature_sbe45 [°C]", "ecosmo_temperature", ax=axes[1])
df.plot.scatter("salinity_sbe45 [PSU]", "ecosmo_salinity", ax=axes[2])
###Output
_____no_output_____
###Markdown
1) Initial Population
   a) Heuristic
   b) Randomized ✔
2) Selection
   a) Roulette Wheel Selection ✔
   b) Rank Selection ✔
   c) Steady State Selection ✔
   d) Tournament Selection ✔
   e) Elitism Selection ✔
   f) Boltzmann Selection
3) Reproduction
   a) One-point crossover ✔
   b) k-point crossover ✔
   c) Uniform crossover ✔
4) Mutation
   a) Bit string mutation
   b) Flip Bit
   c) Boundary ✔
   d) Non-Uniform ✔
   e) Uniform ✔
   f) Gaussian ✔
   g) Shrink ✔
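To make a couple of the operators above concrete, here is a minimal, self-contained sketch of tournament selection and one-point crossover on plain hyperparameter lists. It is an illustration under simple assumptions, not the implementation inside `GeneticCVSearch`.

```python
import random

def tournament_select(population, fitnesses, k=2):
    # Draw k random contenders and keep the fittest one.
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]

def one_point_crossover(parent_a, parent_b):
    # Swap the tails of the two parents at a random cut point.
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

# Toy chromosomes encoded as [max_depth, n_estimators, learning_rate, gamma];
# fitness is e.g. negative MAE, so higher is better.
pop = [[5, 566, 0.021, 2], [4, 554, 0.022, 1], [8, 300, 0.050, 5]]
fit = [-2855.2, -2805.2, -3100.0]
p1, p2 = tournament_select(pop, fit), tournament_select(pop, fit)
child_a, child_b = one_point_crossover(p1, p2)
```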
###Code
from GeneticCVSearch import GeneticCVSearch
search_space = {
'size':3,
'max_depth':(1, 16),
'n_estimators':(100, 1000),
'learning_rate':(0.001, 0.1),
'gamma':(1, 10)
}
gcvs = GeneticCVSearch()
gcvs.seq_evo(
search_space=search_space,
pop_size=30,
estimator=xgbr,
cv=5,
scoring='neg_mean_absolute_error',
select_fn='R',
Tournament_size=2,
offspring_size=27,
c_pt=1,
dominance=True,
weighted=False,
n_elite=0.1,
mu_method='Non-Uniform',
epsilon=.1,
momentum=.05,
verbose=1)
###Output
----------------------------------------Generation 1----------------------------------------
3 elite(s) pass througth to the next generation.
Selection Pressure: 0
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.1
Best Chromosome: {'max_depth': 5, 'n_estimators': 566, 'learning_rate': 0.020932045479663798, 'gamma': 2}
Best Score: -2855.177082543963
Mean Fitness: -3434.7262299434165
----------------------------------------Generation 2----------------------------------------
2 elite(s) pass througth to the next generation.
Selection Pressure: 0
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.15000000000000002
Best Chromosome: {'max_depth': 5, 'n_estimators': 566, 'learning_rate': 0.020932045479663798, 'gamma': 2}
Best Score: -2855.177082543963
Mean Fitness: -3168.0145976295044
----------------------------------------Generation 3----------------------------------------
2 elite(s) pass througth to the next generation.
Selection Pressure: 0
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.2
Best Chromosome: {'max_depth': 5, 'n_estimators': 566, 'learning_rate': 0.020932045479663798, 'gamma': 2}
Best Score: -2855.177082543963
Mean Fitness: -3146.5496420618088
----------------------------------------Generation 4----------------------------------------
2 elite(s) pass througth to the next generation.
Selection Pressure: 0
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.25
Best Chromosome: {'max_depth': 5, 'n_estimators': 559, 'learning_rate': 0.02046597819895803, 'gamma': 2}
Best Score: -2858.1542515030524
Mean Fitness: -3101.1837254286743
No Improvement.
----------------------------------------Generation 5----------------------------------------
2 elite(s) pass througth to the next generation.
Selection Pressure: 1
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.3
Best Chromosome: {'max_depth': 5, 'n_estimators': 559, 'learning_rate': 0.02046597819895803, 'gamma': 2}
Best Score: -2858.1542515030524
Mean Fitness: -3014.6323553115053
No Improvement.
----------------------------------------Generation 6----------------------------------------
2 elite(s) pass througth to the next generation.
Selection Pressure: 2
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.35
Best Chromosome: {'max_depth': 5, 'n_estimators': 549, 'learning_rate': 0.02169151679269741, 'gamma': 1}
Best Score: -2855.2520681617384
Mean Fitness: -2947.749357159837
----------------------------------------Generation 7----------------------------------------
2 elite(s) pass througth to the next generation.
Selection Pressure: 1
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.39999999999999997
Best Chromosome: {'max_depth': 4, 'n_estimators': 554, 'learning_rate': 0.02169151679269741, 'gamma': 1}
Best Score: -2805.180134038712
Mean Fitness: -2905.095074638363
----------------------------------------Generation 8----------------------------------------
2 elite(s) pass througth to the next generation.
Selection Pressure: 0
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.44999999999999996
Best Chromosome: {'max_depth': 4, 'n_estimators': 551, 'learning_rate': 0.021385132144262565, 'gamma': 1}
Best Score: -2786.0187179971226
Mean Fitness: -2811.441048278729
----------------------------------------Generation 9----------------------------------------
1 elite(s) pass througth to the next generation.
Selection Pressure: -1
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.49999999999999994
Best Chromosome: {'max_depth': 4, 'n_estimators': 552, 'learning_rate': 0.02153832446847999, 'gamma': 1}
Best Score: -2781.0630918301104
Mean Fitness: -2797.4418371800757
----------------------------------------Generation 10----------------------------------------
1 elite(s) pass througth to the next generation.
Selection Pressure: -2
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.5499999999999999
Best Chromosome: {'max_depth': 4, 'n_estimators': 552, 'learning_rate': 0.02153832446847999, 'gamma': 1}
Best Score: -2781.0630918301104
Mean Fitness: -2791.6356450561602
----------------------------------------Generation 11----------------------------------------
1 elite(s) pass througth to the next generation.
Selection Pressure: -3
Mutation occured (Non-Uniform/Uniform).
Mutation Rate: 0.6
Best Chromosome: {'max_depth': 4, 'n_estimators': 551, 'learning_rate': 0.02150002638742563, 'gamma': 1}
Best Score: -2783.1602258599105
Mean Fitness: -2797.736570132671
No Improvement.
No improvement in mean fitness.
###Markdown
`ifsFractals` Example Notebook
###Code
from ifsFractals import *
T_0 = Translate(1/3, 0) @ ShearX(1/3) @ Scale(1/3)
T_1 = Rotate(pi/3) @ T_0
T_2 = Rotate(2*pi/3) @ T_0
T_3 = Rotate(3*pi/3) @ T_0
T_4 = Rotate(4*pi/3) @ T_0
T_5 = Rotate(5*pi/3) @ T_0
T = [T_0, T_1, T_2, T_3, T_4, T_5]
fractal = Fractal(T)
fractal.check_transformations(verbose=True)
fractal.plot_figures()
fractal.add_points(100_000)
fractal.display()
fractal.export()
fractal.embed_web()
###Output
_____no_output_____
###Markdown
Decomposing a unitary matrix into quantum gates. This tool is useful when you have a $2^n \times 2^n$ matrix representing a unitary operator acting on a register of $n$ qubits and want to implement this operator in Q#. This notebook demonstrates how to use it. Tl;DR
###Code
import numpy, quantum_decomp
SWAP = numpy.array([[1,0,0,0],[0,0,1,0],[0,1,0,0], [0,0,0,1]])
print(quantum_decomp.matrix_to_qsharp(SWAP, op_name='Swap'))
###Output
operation Swap (qs : Qubit[]) : Unit {
CNOT(qs[1], qs[0]);
CNOT(qs[0], qs[1]);
CNOT(qs[1], qs[0]);
}
###Markdown
Example. Consider the following matrix:$$A = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & e^{\frac{2\pi i}{3}} & e^{\frac{4 \pi i}{3}} & 0 \\ 1 & e^{\frac{4\pi i}{3}} & e^{\frac{2 \pi i}{3}} & 0 \\ 0 & 0 & 0 & -i \sqrt{3} \end{pmatrix}$$This is the $3\times 3$ [DFT matrix](https://en.wikipedia.org/wiki/DFT_matrix), padded to have shape $4 \times 4$. Implementing such a matrix was one way to solve problem B2 in the [Microsoft Q# Coding Contest - Winter 2019](https://codeforces.com/blog/entry/65579). [Here](https://assets.codeforces.com/rounds/1116/contest-editorial.pdf) you can find another approach to implementing this matrix, but let's see how we can implement it using our tool and Q#. First, let's construct this matrix:
###Code
import numpy as np
w = np.exp((2j / 3) * np.pi)
A = np.array([[1, 1, 1, 0],
[1, w, w * w, 0],
[1, w * w, w, 0],
[0, 0, 0, -1j*np.sqrt(3)]]) / np.sqrt(3)
print(A)
###Output
[[ 0.57735027+0.j 0.57735027+0.j 0.57735027+0.j 0. +0.j ]
[ 0.57735027+0.j -0.28867513+0.5j -0.28867513-0.5j 0. +0.j ]
[ 0.57735027+0.j -0.28867513-0.5j -0.28867513+0.5j 0. +0.j ]
[ 0. +0.j 0. +0.j 0. +0.j 0. -1.j ]]
###Markdown
Now, let's use the quantum_decomp library to construct Q# code.
###Code
import quantum_decomp as qd
print(qd.matrix_to_qsharp(A))
###Output
operation ApplyUnitaryMatrix (qs : Qubit[]) : Unit {
CNOT(qs[1], qs[0]);
Controlled Ry([qs[0]], (-1.570796326794897, qs[1]));
X(qs[1]);
Controlled Ry([qs[1]], (-1.910633236249018, qs[0]));
X(qs[1]);
Controlled Rz([qs[0]], (-4.712388980384691, qs[1]));
Controlled Ry([qs[0]], (-1.570796326794897, qs[1]));
Controlled Rz([qs[0]], (-1.570796326794896, qs[1]));
Controlled Rz([qs[1]], (-1.570796326794897, qs[0]));
Controlled Ry([qs[1]], (-3.141592653589793, qs[0]));
Controlled Rz([qs[1]], (1.570796326794897, qs[0]));
}
###Markdown
As you can see from the code in the qsharp/ directory of this repository, this code indeed implements the given unitary matrix. You can also get the same sequence of operations as a sequence of gates, where each gate is an instance of GateFC or GateSingle, which are internal classes implementing a fully controlled gate or a gate acting on a single qubit.
###Code
gates = qd.matrix_to_gates(A)
print('\n'.join(map(str, gates)))
###Output
X on bit 0, fully controlled
Ry(1.5707963267948966) on bit 1, fully controlled
X on bit 1
Ry(1.9106332362490184) on bit 0, fully controlled
X on bit 1
Rz(4.712388980384691) on bit 1, fully controlled
Ry(1.5707963267948966) on bit 1, fully controlled
Rz(1.570796326794896) on bit 1, fully controlled
Rz(1.5707963267948972) on bit 0, fully controlled
Ry(3.141592653589793) on bit 0, fully controlled
Rz(-1.5707963267948972) on bit 0, fully controlled
###Markdown
This can be represented by a quantum circuit (made with [Q-circuit](http://physics.unm.edu/CQuIC/Qcircuit/)). This is how you can view the decomposition of the matrix into two-level gates, which is used to build the sequence of gates:
###Code
print('\n'.join(map(str,qd.two_level_decompose_gray(A))))
###Output
[[0.+0.j 1.+0.j]
[1.+0.j 0.+0.j]] on (2, 3)
[[ 0.70710678-0.00000000e+00j 0.70710678-8.65956056e-17j]
[-0.70710678-8.65956056e-17j 0.70710678-0.00000000e+00j]] on (1, 3)
[[ 0.57735027-0.00000000e+00j 0.81649658-9.99919924e-17j]
[-0.81649658-9.99919924e-17j 0.57735027-0.00000000e+00j]] on (0, 1)
[[-7.07106781e-01+8.65956056e-17j -3.57316295e-16-7.07106781e-01j]
[ 3.57316295e-16-7.07106781e-01j -7.07106781e-01-8.65956056e-17j]] on (1, 3)
[[ 0.00000000e+00+0.j -5.31862526e-16-1.j]
[ 0.00000000e+00-1.j 0.00000000e+00+0.j]] on (2, 3)
###Markdown
Those matrices are ordered in the order they are applied, so to write them as a matrix product we have to reverse them. This product can be written as follows: $$A = \begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix}_{2,3}\begin{pmatrix} -\frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2}i \\ -\frac{\sqrt{2}}{2}i & -\frac{\sqrt{2}}{2} \end{pmatrix}_{1,3}\begin{pmatrix} \sqrt{\frac{1}{3}} & \sqrt{\frac{2}{3}} \\ -\sqrt{\frac{2}{3}} & \sqrt{\frac{1}{3}} \end{pmatrix}_{0,1}\begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix}_{1,3}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}_{2,3}$$Or, in full form:$$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -i \\ 0 & 0 & -i & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -\frac{\sqrt{2}}{2} & 0 & -\frac{\sqrt{2}}{2}i \\ 0 & 0 & 1 & 0 \\ 0 & -\frac{\sqrt{2}}{2}i & 0 & -\frac{\sqrt{2}}{2} \end{pmatrix}\begin{pmatrix} \sqrt{\frac{1}{3}} & \sqrt{\frac{2}{3}} & 0 & 0 \\ -\sqrt{\frac{2}{3}} & \sqrt{\frac{1}{3}} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} \\ 0 & 0 & 1 & 0 \\ 0 & -\frac{\sqrt{2}}{2} & 0 & \frac{\sqrt{2}}{2} \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$

Output size: The number of Q# commands this tool produces is proportional to the number of elements in the matrix, which is $O(4^n)$, where $n$ is the number of qubits in the register. More accurately, it is asymptotically $2 \cdot 4^n$. As it grows very fast, this tool is unfortunately useful only for small values of $n$. See a detailed experimental complexity analysis of this tool in [this notebook](complexity.ipynb).

Implementation: The implementation is based on:
* the article ["Decomposition of unitary matrices and quantum gates"](https://arxiv.org/pdf/1210.7366.pdf) by Chi-Kwong Li and Rebecca Roberts;
* the book "Quantum Computing: From Linear Algebra to Physical Implementations" (chapter 4) by Mikio Nakahara and Tetsuo Ohmi.
It consists of the following steps:
1. Decomposing the matrix into two-level unitary matrices;
2. Using Gray code to transform those matrices into matrices acting on states whose indices differ in only one bit;
3. Implementing those matrices as fully controlled single-qubit gates;
4. Implementing single-qubit gates as Rx, Ry and R1 gates;
5. Optimizations: cancelling X gates and removing identity gates.

Paper: The algorithm used in this tool is outlined in detail in this [paper](res/Fedoriaka2019Decomposition.pdf).

Updates: Optimized algorithm for 4x4 unitaries (Dec 2019). In the case of a 4x4 unitary one can implement it in a much more efficient way. The generic algorithm described above produces 18 controlled gates, each of which must be implemented with at least 2 CNOTs and 3 single-qubit gates. As proven in [this paper](https://arxiv.org/pdf/quant-ph/0308006.pdf), it is possible to implement any 4x4 unitary using no more than 3 CNOT gates and 15 elementary single-qubit Ry and Rz gates. An algorithm for such an optimal decomposition is now implemented in this library. To use it, pass `optimize=True` to the functions performing decomposition. This example shows the optimized decomposition for the matrix A defined above.
###Code
qd.matrix_to_gates(A, optimize=True)
print(qd.matrix_to_qsharp(A, optimize=True))
###Output
operation ApplyUnitaryMatrix (qs : Qubit[]) : Unit {
Rz(2.700933836565789, qs[0]);
Ry(-1.201442806989828, qs[0]);
Rz(-0.974689532916684, qs[0]);
Rz(2.700933836565789, qs[1]);
Ry(-1.201442806989829, qs[1]);
Rz(-2.545485852364665, qs[1]);
CNOT(qs[1], qs[0]);
Rz(4.022910287637800, qs[0]);
Ry(-0.400926166464297, qs[1]);
CNOT(qs[0], qs[1]);
Ry(8.142534160257075, qs[1]);
CNOT(qs[1], qs[0]);
Rz(2.545485857153846, qs[0]);
Ry(-1.940149846599965, qs[0]);
Rz(-0.440658817024004, qs[0]);
R1(3.141592653589793, qs[0]);
Rz(0.974689528127503, qs[1]);
Ry(-1.940149846599965, qs[1]);
Rz(-3.582251470613797, qs[1]);
}
###Markdown
Cirq support (Dec 2019). It is now possible to convert a unitary matrix to a [Cirq](https://github.com/quantumlib/Cirq) circuit. You don't need to install Cirq to use the library, unless you want to have the output as a Cirq circuit. See the examples below.
###Code
print(qd.matrix_to_cirq_circuit(SWAP))
qd.matrix_to_cirq_circuit(A, optimize=True)
###Output
_____no_output_____
###Markdown
To verify it is correct, let's convert a random unitary to a Cirq circuit, then convert the circuit back to a matrix and make sure we get the same matrix.
###Code
from scipy.stats import unitary_group
U = unitary_group.rvs(16)
np.linalg.norm(U - qd.matrix_to_cirq_circuit(U).unitary())
###Output
_____no_output_____
###Markdown
Installation
###Code
!pip install target-describe
###Output
Collecting target-describe
Using cached target_describe-0.0.1-py3-none-any.whl (6.6 kB)
Requirement already satisfied: nbformat==5.1.3 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from target-describe) (5.1.3)
Requirement already satisfied: plotly==5.6.0 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from target-describe) (5.6.0)
Requirement already satisfied: numpy==1.22.2 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from target-describe) (1.22.2)
Requirement already satisfied: tenacity==8.0.1 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from target-describe) (8.0.1)
Requirement already satisfied: pandas==1.4.1 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from target-describe) (1.4.1)
Requirement already satisfied: jupyter-core in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from nbformat==5.1.3->target-describe) (4.9.2)
Requirement already satisfied: traitlets>=4.1 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from nbformat==5.1.3->target-describe) (5.1.1)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from nbformat==5.1.3->target-describe) (4.4.0)
Requirement already satisfied: ipython-genutils in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from nbformat==5.1.3->target-describe) (0.2.0)
Requirement already satisfied: pytz>=2020.1 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from pandas==1.4.1->target-describe) (2021.3)
Requirement already satisfied: python-dateutil>=2.8.1 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from pandas==1.4.1->target-describe) (2.8.2)
Requirement already satisfied: six in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from plotly==5.6.0->target-describe) (1.16.0)
Requirement already satisfied: attrs>=17.4.0 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat==5.1.3->target-describe) (21.4.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /home/daniel/miniconda3/envs/variable2/lib/python3.9/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat==5.1.3->target-describe) (0.18.1)
Installing collected packages: target-describe
Successfully installed target-describe-0.0.1
###Markdown
Data loading
###Code
import pandas as pd
from target_describe import targetDescribe
df = pd.read_csv(
"https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv"
)
###Output
None
###Markdown
We generate the description of some variables for both the _Survived = 0_ and _Survived = 1_ cases.
###Code
td = targetDescribe(df,"Survived", problem="binary_classification")
td.describe_some(["Sex"])
td.describe_some(["Sex"],target_value_described="0")
# td.all_associations()
###Output
_____no_output_____
###Markdown
We generate the description of all variables.
###Code
td.all_associations()
###Output
_____no_output_____
###Markdown
Load and preprocess the data
###Code
graph = load_dataset('data/cora.npz')
adj_matrix = graph['adj_matrix']
labels = graph['labels']
adj_matrix, labels = standardize(adj_matrix, labels)
n_nodes = adj_matrix.shape[0]
###Output
_____no_output_____
###Markdown
Set hyperparameters
###Code
n_flips = 1000
dim = 32
window_size = 5
###Output
_____no_output_____
###Markdown
Generate candidate edge flips
###Code
candidates = generate_candidates_removal(adj_matrix=adj_matrix)
###Output
_____no_output_____
###Markdown
Compute simple baselines
###Code
b_eig_flips = baseline_eigencentrality_top_flips(adj_matrix, candidates, n_flips)
b_deg_flips = baseline_degree_top_flips(adj_matrix, candidates, n_flips, True)
b_rnd_flips = baseline_random_top_flips(candidates, n_flips, 0)
###Output
_____no_output_____
###Markdown
Compute adversarial flips using eigenvalue perturbation
###Code
our_flips = perturbation_top_flips(adj_matrix, candidates, n_flips, dim, window_size)
###Output
_____no_output_____
###Markdown
Evaluate classification performance using the skipgram objective
###Code
for flips, name in zip([None, b_rnd_flips, b_deg_flips, None, our_flips],
['cln', 'rnd', 'deg', 'eig', 'our']):
if flips is not None:
adj_matrix_flipped = flip_candidates(adj_matrix, flips)
else:
adj_matrix_flipped = adj_matrix
embedding = deepwalk_skipgram(adj_matrix_flipped, dim, window_size=window_size)
f1_scores_mean, _ = evaluate_embedding_node_classification(embedding, labels)
print('{}, F1: {:.2f} {:.2f}'.format(name, f1_scores_mean[0], f1_scores_mean[1]))
###Output
cln, F1: 0.81 0.78
rnd, F1: 0.80 0.77
deg, F1: 0.77 0.74
eig, F1: 0.81 0.78
our, F1: 0.72 0.69
###Markdown
Evaluate classification performance using the SVD objective
###Code
for flips, name in zip([None, b_rnd_flips, b_deg_flips, None, our_flips],
['cln', 'rnd', 'deg', 'eig', 'our']):
if flips is not None:
adj_matrix_flipped = flip_candidates(adj_matrix, flips)
else:
adj_matrix_flipped = adj_matrix
embedding, _, _, _ = deepwalk_svd(adj_matrix_flipped, window_size, dim)
f1_scores_mean, _ = evaluate_embedding_node_classification(embedding, labels)
print('{}, F1: {:.2f} {:.2f}'.format(name, f1_scores_mean[0], f1_scores_mean[1]))
###Output
cln, F1: 0.82 0.80
rnd, F1: 0.81 0.79
deg, F1: 0.79 0.76
eig, F1: 0.82 0.80
our, F1: 0.76 0.74
###Markdown
Building a Sample Factor Graph. A factor graph in this framework is composed of two primary classes of components: factors and variables. **Factors** are the functions we use to compute our priors, and **Variables** represent the posterior probabilities we want to compute. As an example, let's build a factor graph over student test scores on a 5-point exam. Here we take as our posterior the probability that a student gets a certain score. For the sake of our example, we're going to assume that scores are correlated with two different factors: the student's aptitude and, because the test may be more or less difficult, the scores of the other students. To begin, let's import everything we will need and create an empty `FactorGraph` object.
###Code
from pfg.factor_graph import FactorGraph, Variable, Factor, FactorCategory
import numpy as np
import pprint
fg = FactorGraph()
###Output
_____no_output_____
###Markdown
VariablesWe must then declare all of the variables that we will need. A `Variable` is defined by 2 things:- A unique name. Uniqueness is important for indexing into the results after inference has been performed.- A dimension. Factor graphs operate over discrete probability mass functions, and the dimension represents the number of possible states for the variable.In this case, we will have three variables, each representing the three students.
###Code
var_a = Variable('Alice', 5)
var_b = Variable('Bob', 5)
var_c = Variable('Carol', 5)
###Output
_____no_output_____
###Markdown
We have given each variable a dimension of 5, because there are 5 possible scores on the exam. FactorsNow we must include the factors that will be used to compute the posteriors during inference. A `Factor` connects to some arbitrary number of variables, and it's important when declaring factors to keep the dimensions of the factor consistent with the variables it will connect to.Specifically, a `Factor` must be declared with the following parameters:- The values of the factor. This is a rank `N` tensor (i.e. `N` dimensional) where `N` is the number of variables that will be connected to the factor. The length of each dimension must match the dimension of the variable associated with it.- An optional name for the factor.- An optional category for the factor. Categories are used for scheduling during loopy belief propagation. This will be discussed later in the document.The first of the factors we will add is the aptitude of the students towards the material. This might be determined from, for example, their past grades.There will be a separate factor for each student. Since the aptitude factor only connects to a single student, the value will be of shape `[5]`, to match the 5 possible scores a student can receive.
###Code
factor_apt_a = Factor(np.array([0.05, 0.05, 0.3, 0.3, 0.3]), name='Aptitude_Alice')
factor_apt_b = Factor(np.array([0.2, 0.3, 0.3, 0.2, 0.0]), name='Aptitude_Bob')
factor_apt_c = Factor(np.array([0.2, 0.2, 0.2, 0.2, 0.2]), name='Aptitude_Carol')
fg.add_factor([var_a], factor_apt_a)
fg.add_factor([var_b], factor_apt_b)
fg.add_factor([var_c], factor_apt_c)
###Output
_____no_output_____
###Markdown
In this construction, the factors are probability distributions over the 5 possible test score values. Note that factors do not need to be normalized to 1, but doing so may improve numerical stability. In our example, Alice is a good student, being much more likely to score a 3 or better. Bob is a poor student, tending towards lower scores, and Carol is a new student for whom nothing is known, so she has a uniform prior over test scores.After we create the factors, we add them to the graph by connecting each factor to its affiliated variable. Variables are automatically added to the graph when their factors are added, but they could have also been explicitly added using the `add_variable()` or `add_variables_from_list()` methods of the `FactorGraph` class.We additionally add a factor that we connect to all the students. This is the "correlation factor", which indicates that all of the students' scores are generally correlated. This could be because, for instance, the test was either easier or harder than other tests. We do this by creating a function `correlation_value(a, b, c)`, which takes in 3 possible test scores and returns the prior probability of seeing those scores. This function is then used to fill a tensor of shape `[5, 5, 5]` which models the factor of the student score correlations.For the sake of this example, we will use a symmetric function that does not bias towards one student doing better than another.
###Code
def correlation_value(a, b, c):
return 1 - 0.1 * (abs(a - b) + abs(b - c) + abs(a - c))
corr_values = np.zeros([5, 5, 5])
for a in range(5):
for b in range(5):
for c in range(5):
corr_values[a, b, c] = correlation_value(a, b, c)
print('Correlation Tensor:')
print(corr_values)
# ----------
corr_factor = Factor(corr_values, name='Correlation')
fg.add_factor([var_a, var_b, var_c], corr_factor)
###Output
Correlation Tensor:
[[[1. 0.8 0.6 0.4 0.2]
[0.8 0.8 0.6 0.4 0.2]
[0.6 0.6 0.6 0.4 0.2]
[0.4 0.4 0.4 0.4 0.2]
[0.2 0.2 0.2 0.2 0.2]]
[[0.8 0.8 0.6 0.4 0.2]
[0.8 1. 0.8 0.6 0.4]
[0.6 0.8 0.8 0.6 0.4]
[0.4 0.6 0.6 0.6 0.4]
[0.2 0.4 0.4 0.4 0.4]]
[[0.6 0.6 0.6 0.4 0.2]
[0.6 0.8 0.8 0.6 0.4]
[0.6 0.8 1. 0.8 0.6]
[0.4 0.6 0.8 0.8 0.6]
[0.2 0.4 0.6 0.6 0.6]]
[[0.4 0.4 0.4 0.4 0.2]
[0.4 0.6 0.6 0.6 0.4]
[0.4 0.6 0.8 0.8 0.6]
[0.4 0.6 0.8 1. 0.8]
[0.2 0.4 0.6 0.8 0.8]]
[[0.2 0.2 0.2 0.2 0.2]
[0.2 0.4 0.4 0.4 0.4]
[0.2 0.4 0.6 0.6 0.6]
[0.2 0.4 0.6 0.8 0.8]
[0.2 0.4 0.6 0.8 1. ]]]
###Markdown
Notice how the correlation tensor has value `1` at the indices `(i, i, i)` to indicate a preference for all scores the same, and has no values of `0`, since these would make those score combinations impossible. Having added all of our factors, our graph now looks like this:![title](images/sample.png)Notice how the graph could actually be viewed as a tree, with the "correlation factor" as the root. InferenceWe finish by performing belief propagation (BP) to compute the posterior distributions using the factors we've constructed. There are two methods that can be run to perform belief propagation:- `belief_propagation_iteration()`: This performs a single iteration of belief propagation, according to the defined schedule (schedules will be explained in the next section. The schedule defaults to the order in which factors were added). For a general graph, it is not possible to use the belief propagation algorithm to compute the exact posteriors of the distribution. That said, good approximations are possible, and often multiple iterations of BP can yield better results. In practice, therefore, one would usually call this multiple times.- `belief_propagation_tree()`: If the factor graph is actually a tree (as our graph is), then an exact solution to the posterior distribution is possible. In that case, this method can be called to achieve an exact solution in a single iteration.
###Code
fg.belief_propagation_tree()
###Output
_____no_output_____
###Markdown
After the belief propagation is performed, the posteriors for the variables can be queried all at once or individually.
###Code
print('All Posteriors:')
pprint.pprint(fg.posterior_for_all_variables())
print()
print('Posterior for Alice:')
print(fg.posterior_for_variable(var_a))
###Output
All Posteriors:
{'Alice': array([0.04666667, 0.05777778, 0.35777778, 0.31333333, 0.22444444]),
'Bob': array([0.13703704, 0.28888889, 0.34444444, 0.22962963, 0. ]),
'Carol': array([0.15407407, 0.20962963, 0.23925926, 0.22444444, 0.17259259])}
Posterior for Alice:
[0.04666667 0.05777778 0.35777778 0.31333333 0.22444444]
###Markdown
Note how Alice's chance of getting a higher score has gone down. This is because the other students are less likely to do well, and we have constrained all of the scores to be positively correlated. SchedulingAs mentioned above, the BP algorithm is not guaranteed to converge for general undirected graphs. In fact, there are scenarios where you will get different results depending on the order of message passing between variables and factors.To handle this, the `pfg` library allows you to optionally set a schedule for belief propagation. This is done through the use of the `FactorCategory` class. A `FactorCategory` instance is essentially just a unique identifier for a set of `Factor`s. A schedule can then be composed as a list of `FactorCategory` instances. Factor categories are useful in that they allow associated but disparate factors to be grouped together (e.g. the "aptitude" factors in our example).To explain using a simple example, we first rebuild the previous factor graph, this time assigning each factor to a category. To make this example a little more complicated and break the tree structure, we add a third "anti-correlation" factor between Alice and Bob, indicating that if one does better on the exam, the other is likely to do worse.
###Code
# New categories for factors to use in scheduling
apt_factor_category = FactorCategory('Aptitude')
corr_factor_category = FactorCategory('Correlation')
anticorr_factor_category = FactorCategory('Anti-Correlation')
# ------- Identical to above calls, but with categories --------
fg = FactorGraph()
var_a = Variable('Alice', 5)
var_b = Variable('Bob', 5)
var_c = Variable('Carol', 5)
factor_apt_a = Factor(np.array([0.05, 0.05, 0.3, 0.3, 0.3]), name='Aptitude_Alice',
category=apt_factor_category)
factor_apt_b = Factor(np.array([0.2, 0.3, 0.3, 0.2, 0.0]), name='Aptitude_Bob',
category=apt_factor_category)
factor_apt_c = Factor(np.array([0.2, 0.2, 0.2, 0.2, 0.2]), name='Aptitude_Carol',
category=apt_factor_category)
fg.add_factor([var_a], factor_apt_a)
fg.add_factor([var_b], factor_apt_b)
fg.add_factor([var_c], factor_apt_c)
corr_values = np.zeros([5, 5, 5])
for a in range(5):
for b in range(5):
for c in range(5):
corr_values[a, b, c] = correlation_value(a, b, c)
corr_factor = Factor(corr_values, name='Correlation', category=corr_factor_category)
fg.add_factor([var_a, var_b, var_c], corr_factor)
# ----------- New factor to make schedule more interesting ---------
anticorr_values = np.zeros([5, 5])
for a in range(5):
for b in range(5):
anticorr_values[a, b] = 1. - correlation_value(a, b, a)
anti_corr_factor = Factor(anticorr_values, name='Anti-Correlation', category=anticorr_factor_category)
fg.add_factor([var_a, var_b], anti_corr_factor)
###Output
_____no_output_____
###Markdown
Our new graph looks as follows (note the lack of tree structure):![title](images/sample2.png)Now that we have put every factor into a category, we can create a schedule simply by indicating in what order we want to operate on the categories:
###Code
fg.set_schedule([apt_factor_category, anticorr_factor_category, corr_factor_category])
fg.belief_propagation_iteration()
pprint.pprint(fg.posterior_for_all_variables())
###Output
{'Alice': array([0.05511022, 0.0490982 , 0.27054108, 0.29458918, 0.33066132]),
'Bob': array([0.2244898, 0.3 , 0.2755102, 0.2 , 0. ]),
'Carol': array([0.16923077, 0.21538462, 0.23076923, 0.21538462, 0.16923077])}
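###Markdown
Since loopy belief propagation is not guaranteed to converge in a single pass, a common pattern is to run `belief_propagation_iteration()` repeatedly and stop once the posteriors stop changing. The following is only a minimal sketch of that pattern (the convergence check is ours, not part of the `pfg` API shown above):
###Code
import numpy as np
max_iters = 50
tol = 1e-6
prev = None
for it in range(max_iters):
    fg.belief_propagation_iteration()
    post = fg.posterior_for_all_variables()
    if prev is not None:
        # largest absolute change in any posterior since the previous iteration
        delta = max(np.max(np.abs(post[name] - prev[name])) for name in post)
        if delta < tol:
            print('converged after {} iterations'.format(it + 1))
            break
    prev = {name: np.array(values) for name, values in post.items()}
###Output
_____no_output_____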
###Markdown
Find Iris with Daugman algorithm example
###Code
import matplotlib.pyplot as plt
import cv2
import numpy as np
from daugman import find_iris
# read, square crop and grayscale image of an eye
img = cv2.imread('eye.jpg')
img = img[20:130, 20:130]
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_ = plt.imshow(gray_img, cmap='gray')
###Output
_____no_output_____
###Markdown
We are considering every pixel in the central third of the image as a possible iris center.But we could reduce that number with `points_step`. *It has a linear correlation with overall iris search speed.*For each possible iris center, we will consider different radii, given as `range(daugman_start, daugman_end, daugman_step)`. *The `daugman_step` has a linear correlation with overall iris search speed.*See `daugman_visual_explanation.ipynb` for details and intuition
###Code
# minimal iris radius -- 10px
# maximal iris radius -- 30px
answer = find_iris(gray_img, daugman_start=10, daugman_end=30, daugman_step=1, points_step=3)
print(answer)
iris_center, iris_rad = answer
# plot result
out = img.copy()
cv2.circle(out, iris_center, iris_rad, (0, 0, 255), 1)
_ = plt.imshow(out[::,::,::-1])
###Output
_____no_output_____
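###Markdown
To make the role of the two step parameters concrete, the search can be pictured as two nested loops: candidate centers thinned by `points_step`, and candidate radii thinned by `daugman_step`. The snippet below is only an illustrative sketch of that structure (it just enumerates candidate centers; it is not the actual `find_iris` implementation):
###Code
# enumerate candidate iris centers in the central third of the image,
# thinned by points_step (here 3); for each of them find_iris evaluates the
# radii range(daugman_start, daugman_end, daugman_step) and keeps the best fit
points_step = 3
h, w = gray_img.shape
centers = [(x, y)
           for x in range(w // 3, w * 2 // 3, points_step)
           for y in range(h // 3, h * 2 // 3, points_step)]
print(len(centers), 'candidate centers')
###Output
_____no_output_____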
###Markdown
Speed measurementPlay with `daugman_step` and `points_step` params.
###Code
%%timeit
find_iris(gray_img, daugman_start=10, daugman_end=30, daugman_step=1, points_step=3)
###Output
87.1 ms ± 5.98 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Function profiling
###Code
import cProfile
cProfile.run('find_iris(gray_img, daugman_start=10, daugman_end=30, daugman_step=1, points_step=3)')
###Output
17582 function calls (17244 primitive calls) in 0.124 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
169 0.000 0.000 0.002 0.000 <__array_function__ internals>:2(argmax)
169 0.000 0.000 0.001 0.000 <__array_function__ internals>:2(copyto)
169 0.000 0.000 0.001 0.000 <__array_function__ internals>:2(empty_like)
169 0.000 0.000 0.003 0.000 <__array_function__ internals>:2(zeros_like)
1 0.000 0.000 0.124 0.124 <string>:1(<module>)
1 0.001 0.001 0.124 0.124 daugman.py:63(find_iris)
169 0.081 0.000 0.123 0.001 daugman.py:8(daugman)
169 0.000 0.000 0.000 0.000 fromnumeric.py:1115(_argmax_dispatcher)
169 0.000 0.000 0.001 0.000 fromnumeric.py:1119(argmax)
169 0.000 0.000 0.001 0.000 fromnumeric.py:52(_wrapfunc)
169 0.000 0.000 0.000 0.000 multiarray.py:1054(copyto)
169 0.000 0.000 0.000 0.000 multiarray.py:75(empty_like)
169 0.000 0.000 0.000 0.000 numeric.py:71(_zeros_like_dispatcher)
169 0.001 0.000 0.003 0.000 numeric.py:75(zeros_like)
169 0.003 0.000 0.003 0.000 {GaussianBlur}
169 0.000 0.000 0.000 0.000 {built-in method builtins.abs}
1 0.000 0.000 0.124 0.124 {built-in method builtins.exec}
169 0.000 0.000 0.000 0.000 {built-in method builtins.getattr}
1 0.000 0.000 0.000 0.000 {built-in method builtins.max}
169 0.001 0.000 0.001 0.000 {built-in method numpy.array}
676/338 0.002 0.000 0.004 0.000 {built-in method numpy.core._multiarray_umath.implement_array_function}
169 0.001 0.000 0.001 0.000 {built-in method numpy.zeros}
3380 0.010 0.000 0.010 0.000 {circle}
3718 0.001 0.000 0.001 0.000 {method 'append' of 'list' objects}
169 0.001 0.000 0.001 0.000 {method 'argmax' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
3380 0.006 0.000 0.006 0.000 {method 'fill' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {method 'index' of 'list' objects}
3380 0.015 0.000 0.015 0.000 {method 'reduce' of 'numpy.ufunc' objects}
###Markdown
Dataset 🧾
###Code
!gdown --id 1DHjy_vso6DhuN3xMsz6_TerdgHibVuqi
###Output
Downloading...
From: https://drive.google.com/uc?id=1DHjy_vso6DhuN3xMsz6_TerdgHibVuqi
To: /content/Musical_instruments_reviews.csv
0.00B [00:00, ?B/s]
6.09MB [00:00, 95.4MB/s]
###Markdown
Reading the data 💿
###Code
import pandas as pd
df = pd.read_csv('/content/Musical_instruments_reviews.csv')
df.head()
df = df[['reviewText', 'overall']]
df.head()
###Output
_____no_output_____
###Markdown
OneShot is all you need 🤫
###Code
!pip install git+https://github.com/nakshatrasinghh/OneShot.git
!python -m spacy download en_core_web_md
import OneShot as osx
import re
def get_clean(x):
x = str(x).lower().replace('\\', '').replace('_', ' ')
x = osx.cont_exp(x)
x = osx.remove_emails(x)
x = osx.remove_urls(x)
x = osx.remove_html_tags(x)
x = osx.remove_rt(x)
x = osx.remove_mentions(x)
x = osx.remove_accented_chars(x)
x = osx.remove_special_chars(x)
x = osx.remove_stopwords(x)
x = osx.remove_dups_char(x)
x = re.sub("(.)\\1{2,}", "\\1", x)
return x
df['Cleaned_reviewText'] = df['reviewText'].apply(lambda x: get_clean(x))
df.head()
###Output
_____no_output_____
###Markdown
###Code
!pip3 install sRemoAPI
import os
access_token = os.environ["sRemo_Access_Token"]
device_identifier = os.environ["sRemo_Device_Identifier"]
from sRemo import sRemoAPI
api = sRemoAPI(access_token, device_identifier)
def 電気を消す():  # turn the light off
appliance_number = "3"
appliance_type = "light"
signal = "2"
api.send_signal(appliance_number, appliance_type, signal)
def 電気をつける():  # turn the light on
appliance_number = "3"
appliance_type = "light"
signal = "3"
api.send_signal(appliance_number, appliance_type, signal)
電気をつける()
電気を消す()
###Output
_____no_output_____
###Markdown
Data augmentation analysis Data import Load data from OpenML
###Code
from utils import data_import, data_augmentation
import imageio
import numpy as np
from sklearn.datasets import fetch_openml
from matplotlib import pyplot as plt
%matplotlib inline
X, y = fetch_openml('Fashion-MNIST', return_X_y=True)
selected_class = '4'
image_size = (28,28)
proportion = 0.2
aug_type = "rotate"
np.unique(y, return_counts=True)
image = np.resize(X[0], image_size)
plt.imshow(image, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Generate balanced dictionary from the fetched data
###Code
d_balanced = data_import.generate_balanced_dictionary(X,y)
d_balanced, d_test = data_import.train_test_split_dictionary(d_balanced)
X0, y0 = data_import.lists_from_dict(d_balanced)
###Output
_____no_output_____
###Markdown
Generate unbalanced dictionary by reducing the `selected_class` examples to its `proportion`.
###Code
d_unbalanced, _ = data_import.reduce_class_samples(d_balanced,
label_key=selected_class,
proportion=proportion)
###Output
_____no_output_____
###Markdown
Label sample distribution in the unbalanced data
###Code
X1, y1 = data_import.lists_from_dict(d_unbalanced)
np.unique(y1, return_counts=True)
###Output
_____no_output_____
###Markdown
Data augmentation Generate augmentation with [Augmentor](https://github.com/mdbloice/Augmentor). We have to save the selected images to file (PNG lossless) and load back the augmented images
###Code
data_augmentation.remove_directory("augment/")
data_augmentation.save_label_to_file(d_unbalanced,selected_class,image_size)
###Output
_____no_output_____
###Markdown
The number of required new images is the difference between the balanced and the unbalanced
###Code
missing_image_num = len(d_balanced[selected_class]) - len(d_unbalanced[selected_class])
data_augmentation.generate_augmented_data(missing_image_num, aug_type)
d_augmented= data_augmentation.load_augmented_data(d_unbalanced, selected_class, aug_type)
X2, y2 = data_import.lists_from_dict(d_augmented)
np.unique(y2, return_counts=True)
X_test_selected_class = d_test.pop(selected_class, None)
y_test_selected_class = [selected_class]*len(X_test_selected_class)
X_test, y_test = data_import.lists_from_dict(d_test)
np.unique(y_test, return_counts=True)
###Output
_____no_output_____
###Markdown
Models
###Code
from sklearn.linear_model import SGDClassifier as SGD
from sklearn.svm import SVC, LinearSVC
sgd = SGD(loss="log", max_iter=100)
sgd.fit(X0,y0)
sgd.score(X_test,y_test)
sgd.score(X_test_selected_class, y_test_selected_class)
sgd_unbalanced = SGD(loss="log", max_iter=100)
sgd_unbalanced.fit(X1,y1)
sgd_unbalanced.score(X_test,y_test)
sgd_unbalanced.score(X_test_selected_class, y_test_selected_class)
sgd_augmented = SGD(loss="log", max_iter=100)
sgd_augmented.fit(X2,y2)
sgd_augmented.score(X_test, y_test)
sgd_augmented.score(X_test_selected_class, y_test_selected_class)
svc = LinearSVC()
svc.fit(X0, y0)
print(svc.score(X_test, y_test))
print(svc.score(X_test_selected_class, y_test_selected_class))
svc = LinearSVC()
svc.fit(X1, y1)
print(svc.score(X_test, y_test))
print(svc.score(X_test_selected_class, y_test_selected_class))
svc = LinearSVC()
svc.fit(X2, y2)
print(svc.score(X_test, y_test))
print(svc.score(X_test_selected_class, y_test_selected_class))
###Output
_____no_output_____
###Markdown
Vertica ML Python ExampleThis notebook is an example of how to use the Vertica ML Python library. It will use the Titanic dataset to introduce you to the library. The purpose is to predict the passengers' survival. InitializationLet's create a connection and load the dataset.
###Code
from vertica_ml_python.utilities import vertica_cursor
from vertica_ml_python.learn.datasets import load_titanic
cur = vertica_cursor("VerticaDSN")
titanic = load_titanic(cur)
print(titanic)
###Output
_____no_output_____
###Markdown
Data Exploration and PreparationLet's explore the data by displaying descriptive statistics of all the columns.
###Code
titanic.describe(method = "categorical")
###Output
_____no_output_____
###Markdown
The column "body" is useless as it is only the ID of the passengers. Besides, it has too much missing values. The column "home.dest" will not influence the survival as it is from where the passengers embarked and where they are going to. We can have the same conclusion with "embarked" which is the port of embarkation. The column 'ticket' which is the ticket ID will also not give us information on the survival. Let's analyze the columns "name" and "cabin to see if we can extract some information. Let's first look at the passengers 'name'.
###Code
from vertica_ml_python.learn.preprocessing import CountVectorizer
CountVectorizer("name_voc", cur).fit("titanic", ["Name"]).to_vdf()
###Output
_____no_output_____
###Markdown
It is possible to extract from the 'name' the title of the passengers. Let's now look at the 'cabins'.
###Code
from vertica_ml_python.learn.preprocessing import CountVectorizer
CountVectorizer("cabin_voc", cur).fit("titanic", ["cabin"]).to_vdf()
###Output
_____no_output_____
###Markdown
We can extract the cabin position (the letter which represents the position in the boat) and look at the number of occurrences.
###Code
CountVectorizer("cabin_voc", cur).fit("titanic", ["cabin"]).to_vdf()["token"].str_slice(1, 1).groupby(
columns = ["token"], expr = ["SUM(cnt)"]).head(30)
###Output
_____no_output_____
###Markdown
The NULL values possibly represent passengers having no cabin (MNAR = missing not at random). The same goes for the column "boat": its NULL values represent passengers who did not get a spot on a lifeboat. We can drop the useless columns and encode the others.
###Code
titanic.drop(["body", "home.dest", "embarked", "ticket"])
titanic["cabin"].str_slice(1, 1)["name"].str_extract(' ([A-Za-z]+)\.')["boat"].fillna(
method = "0ifnull")["cabin"].fillna("No Cabin")
###Output
795 elements were filled
948 elements were filled
###Markdown
We can notice that our assumption about the cabin is wrong, as passengers in first class must have had a cabin. This column has values missing at random (MAR), and too many of them. We can drop it.
###Code
titanic["cabin"].drop()
###Output
_____no_output_____
###Markdown
Let's look at descriptive statistics of the entire Virtual Dataframe.
###Code
titanic.statistics()
###Output
_____no_output_____
###Markdown
This method gives us a lot of relevant information. We can notice, for example, that the 'age' of the passengers follows more or less a normal distribution (kurtosis and skewness around 0).
###Code
x = titanic["age"].hist()
###Output
_____no_output_____
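###Markdown
As a quick reminder of the heuristic used above (skewness and excess kurtosis close to 0 suggest an approximately normal shape), here is a small illustration on synthetic data with SciPy; this is purely for intuition and is not part of the Vertica workflow:
###Code
import numpy as np
from scipy.stats import skew, kurtosis
rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=30, scale=10, size=10_000)
skewed_sample = rng.exponential(scale=10, size=10_000)
# both values are close to 0 for the normal sample, clearly not for the exponential one
print(skew(normal_sample), kurtosis(normal_sample))
print(skew(skewed_sample), kurtosis(skewed_sample))
###Output
_____no_output_____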
###Markdown
The column 'fare' has many outliers (the maximum, 512.33, is much greater than 79.13, the 9th decile). Most of the passengers traveled in 3rd class (median of pclass = 3), and much more... Since 'sibsp' represents the number of siblings and 'parch' the number of parents and children, it can be relevant to build a new feature 'family_size'.
###Code
titanic.eval("family_size", "parch + sibsp + 1")
###Output
The new vColumn "family_size" was added to the vDataframe.
###Markdown
Let's deal with the outliers. There are many methods to find them (Local Outlier Factor, DBSCAN, KMeans...), but we will just winsorize the 'fare' distribution, which is the column most affected by this anomaly (some passengers could have paid a very expensive fare, but outliers could destroy our model's predictions).
###Code
titanic["fare"].fill_outliers(method = "winsorize", alpha = 0.03)
###Output
_____no_output_____
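###Markdown
For intuition, winsorizing with alpha = 0.03 simply clips the values lying outside the 3% tails back to the corresponding quantiles. A minimal NumPy sketch of the idea (the notebook itself does this in-database with `fill_outliers`):
###Code
import numpy as np
def winsorize(values, alpha=0.03):
    # clip everything below the alpha quantile and above the (1 - alpha) quantile
    low, high = np.quantile(values, [alpha, 1 - alpha])
    return np.clip(values, low, high)
rng = np.random.default_rng(0)
fares = rng.lognormal(mean=3, sigma=1, size=1000)  # heavy right tail, like 'fare'
print(fares.max(), winsorize(fares).max())  # the most extreme values are pulled back
###Output
_____no_output_____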
###Markdown
Let's encode the column 'sex' to be able to use it with numerical methods.
###Code
titanic["sex"].label_encode()
###Output
_____no_output_____
###Markdown
The column 'age' has too many missing values and we need to impute them. Let's impute them by the average of passengers having the same 'pclass' and the same 'sex'.
###Code
titanic["age"].fillna(method = "mean", by = ["pclass", "sex"])
###Output
237 elements were filled
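###Markdown
For readers more used to in-memory data frames, the same imputation strategy can be expressed with pandas, as sketched below (a toy data frame stands in for the Vertica table; the notebook itself does this directly in the database):
###Code
import pandas as pd
toy = pd.DataFrame({
    'pclass': [1, 1, 3, 3, 3],
    'sex':    [0, 0, 1, 1, 1],
    'age':    [38.0, None, 22.0, None, 30.0],
})
# fill missing ages with the mean age of passengers sharing the same pclass and sex
toy['age'] = toy['age'].fillna(toy.groupby(['pclass', 'sex'])['age'].transform('mean'))
print(toy)
###Output
_____no_output_____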
###Markdown
We can draw the correlation matrix to see what other information we can get.
###Code
titanic.corr(method = "spearman")
###Output
_____no_output_____
###Markdown
The fare is highly correlated with the family size. This makes sense: the bigger the family, the more tickets they have to buy (and so the higher the fare). Survival is highly correlated with 'boat'. With a linear model we would never be able to predict the survival of a passenger having no lifeboat. To build a real predictive model, we must split the study into 2 use cases: passengers having no lifeboat, and passengers having a lifeboat. We did a lot of operations to clean this table and nothing was saved in the DB! We can look at the Virtual Dataframe relation to be sure.
###Code
print(titanic.current_relation())
###Output
(SELECT COALESCE("age", AVG("age") OVER (PARTITION BY pclass, sex)) AS "age",
"survived" AS "survived",
DECODE("sex", 'female', 0, 'male', 1, 2) AS "sex",
"pclass" AS "pclass",
"parch" AS "parch",
(CASE
WHEN "fare" < -176.6204982585513 THEN -176.6204982585513
WHEN "fare" > 244.5480856064831 THEN 244.5480856064831
ELSE "fare"
END) AS "fare",
REGEXP_SUBSTR("name", ' ([A-Za-z]+)\.') AS "name",
DECODE("boat", NULL, 0, 1) AS "boat",
"sibsp" AS "sibsp",
parch + sibsp + 1 AS "family_size"
FROM
(SELECT "age" AS "age",
"survived" AS "survived",
"sex" AS "sex",
"pclass" AS "pclass",
"parch" AS "parch",
"fare" AS "fare",
"name" AS "name",
"boat" AS "boat",
"sibsp" AS "sibsp",
0 AS "family_size"
FROM "public"."titanic") t1) final_table
###Markdown
Let's see what happens when we aggregate and turn on the SQL generation.
###Code
titanic.sql_on_off().avg()
###Output
_____no_output_____
###Markdown
Vertica ML Python does SQL generation during the entire process and keeps track of all the user's modifications.
###Code
titanic.sql_on_off().info()
###Output
The vDataframe was modified many times:
* {Fri Mar 20 20:52:40 2020} [Drop]: vColumn '"body"' was deleted from the vDataframe.
* {Fri Mar 20 20:52:40 2020} [Drop]: vColumn '"home.dest"' was deleted from the vDataframe.
* {Fri Mar 20 20:52:40 2020} [Drop]: vColumn '"embarked"' was deleted from the vDataframe.
* {Fri Mar 20 20:52:40 2020} [Drop]: vColumn '"ticket"' was deleted from the vDataframe.
* {Fri Mar 20 20:52:40 2020} [SUBSTR(, 1, 1)]: The vColumn 'cabin' was transformed with the func 'x -> SUBSTR(x, 1, 1)'.
* {Fri Mar 20 20:52:40 2020} [REGEXP_SUBSTR(, ' ([A-Za-z]+)\.')]: The vColumn 'name' was transformed with the func 'x -> REGEXP_SUBSTR(x, ' ([A-Za-z]+)\.')'.
* {Fri Mar 20 20:52:40 2020} [Fillna]: 795 missing values of the vColumn '"boat"' were filled.
* {Fri Mar 20 20:52:40 2020} [Fillna]: 948 missing values of the vColumn '"cabin"' were filled.
* {Fri Mar 20 20:52:41 2020} [Drop]: vColumn '"cabin"' was deleted from the vDataframe.
* {Fri Mar 20 20:52:47 2020} [Eval]: A new vColumn '"family_size"' was added to the vDataframe.
* {Fri Mar 20 20:52:47 2020} [(CASE WHEN < -176.6204982585513 THEN -176.6204982585513 WHEN > 244.5480856064831 THEN 244.5480856064831 ELSE END)]: The vColumn 'fare' was transformed with the func 'x -> (CASE WHEN x < -176.6204982585513 THEN -176.6204982585513 WHEN x > 244.5480856064831 THEN 244.5480856064831 ELSE x END)'.
* {Fri Mar 20 20:52:48 2020} [Label Encoding]: Label Encoding was applied to the vColumn '"sex"' using the following mapping:
female => 0 male => 1
* {Fri Mar 20 20:52:48 2020} [Fillna]: 237 missing values of the vColumn '"age"' were filled.
###Markdown
You already love the Virtual Dataframe, do you? &128540; If you want to share the object with a member of the team, you can use the following method.
###Code
x = titanic.to_vdf("titanic")
###Output
_____no_output_____
###Markdown
We created a .vdf file which can be read with the 'read_vdf' function:
###Code
from vertica_ml_python.utilities import read_vdf
titanic2 = read_vdf("titanic.vdf", cur)
print(titanic2)
###Output
_____no_output_____
###Markdown
Let's now save the vDataframe in the Database to fulfill the next step: Data Modelling.
###Code
from vertica_ml_python.utilities import drop_view
drop_view("titanic_boat", cur)
drop_view("titanic_not_boat", cur)
x = titanic.save().filter("boat = 1").to_db("titanic_boat").load().filter("boat = 0").to_db("titanic_not_boat")
###Output
The view titanic_boat was successfully dropped.
The view titanic_not_boat was successfully dropped.
795 elements were filtered
439 elements were filtered
###Markdown
Machine Learning Passengers with a lifeboat First let's look at the number of survivors in this dataset.
###Code
from vertica_ml_python import vDataframe
titanic_boat = vDataframe("titanic_boat", cur)
titanic_boat["survived"].describe()
###Output
_____no_output_____
###Markdown
We only have 9 deaths. Let's try to understand why these passengers died.
###Code
titanic_boat.filter("survived = 0").head(10)
###Output
430 elements were filtered
###Markdown
These passengers had no particular reason to die, except the ones in third class. Building a model for this part of the data is useless. Passengers without a lifeboat Let's now look at the passengers without a lifeboat.
###Code
from vertica_ml_python import vDataframe
titanic_boat = vDataframe("titanic_not_boat", cur)
titanic_boat["survived"].describe()
###Output
_____no_output_____
###Markdown
Only 20 survived. Let's see why.
###Code
titanic_boat.filter("survived = 1").head(20)
###Output
775 elements were filtered
###Markdown
They are mostly women. The famous quotation "Women and children first" was therefore respected. Let's build a model to get more insights. As predictors, we have one categorical column and several correlated features, so it is preferable to work with a non-linear classifier which can handle that. Random Forest seems perfect for this study. Let's evaluate it with a cross-validation.
###Code
from vertica_ml_python.learn.ensemble import RandomForestClassifier
from vertica_ml_python.learn.model_selection import cross_validate
from vertica_ml_python.utilities import drop_model
predictors = titanic.get_columns()
predictors.remove('"survived"')
response = "survived"
relation = "titanic_not_boat"
drop_model("rf_titanic", cur)
model = RandomForestClassifier("rf_titanic", cur, n_estimators = 40, max_depth = 4)
cross_validate(model, relation, predictors, response)
###Output
The model rf_titanic was successfully dropped.
###Markdown
As the dataset is unbalanced, the AUC is a good way to evaluate the model. The model is very good, with an average AUC greater than 0.9! We can now build a model on the entire dataset.
###Code
model.fit(relation, predictors, response)
###Output
_____no_output_____
###Markdown
Let's look at the features importance.
###Code
model.features_importance()
###Output
_____no_output_____
###Markdown
Example of Data Analysis with DCD Hub Data First, we import the Python SDK
###Code
from dcd.entities.thing import Thing
###Output
_____no_output_____
###Markdown
We provide the thing ID and access token (replace with yours)
###Code
from dotenv import load_dotenv
import os
load_dotenv()
THING_ID = os.environ['THING_ID']
THING_TOKEN = os.environ['THING_TOKEN']
###Output
_____no_output_____
###Markdown
We instantiate a Thing with its credentials, then we fetch its details
###Code
my_thing = Thing(thing_id=THING_ID, token=THING_TOKEN)
my_thing.read()
###Output
INFO:dcd:things:my-test-thing-27aa:Initialising MQTT connection for Thing 'dcd:things:my-test-thing-27aa'
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): dwd.tudelft.nl:443
DEBUG:urllib3.connectionpool:https://dwd.tudelft.nl:443 "GET /api/things/dcd:things:my-test-thing-27aa HTTP/1.1" 200 1290
###Markdown
What does a Thing look like?
###Code
my_thing.to_json()
###Output
_____no_output_____
###Markdown
Which property do we want to explore and over which time frame?
###Code
from datetime import datetime
# What dates?
START_DATE = "2019-10-08 21:17:00"
END_DATE = "2019-11-08 21:25:00"
from datetime import datetime
DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
from_ts = datetime.timestamp(datetime.strptime(START_DATE, DATE_FORMAT)) * 1000
to_ts = datetime.timestamp(datetime.strptime(END_DATE, DATE_FORMAT)) * 1000
###Output
_____no_output_____
###Markdown
Let's find this property and read the data.
###Code
PROPERTY_NAME = "My Random Property"
my_property = my_thing.find_property_by_name(PROPERTY_NAME)
my_property.read(from_ts, to_ts)
###Output
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): dwd.tudelft.nl:443
DEBUG:urllib3.connectionpool:https://dwd.tudelft.nl:443 "GET /api/things/dcd:things:my-test-thing-27aa/properties/my-random-property-6820?from=1570562220000.0&to=1573244700000.0 HTTP/1.1" 200 2
###Markdown
How many data points did we get?
###Code
print(len(my_property.values))
###Output
_____no_output_____
###Markdown
Display values
###Code
my_property.values
###Output
_____no_output_____
###Markdown
From CSV
###Code
from numpy import genfromtxt
import pandas as pd
data = genfromtxt('data.csv', delimiter=',')
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')), columns = ['x', 'y', 'z'])
data_frame
###Output
_____no_output_____
###Markdown
Plot some charts with MatplotlibIn this example we plot the signal over time and a histogram of the distribution of all values across dimensions.
###Code
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import numpy as np
data = np.array(my_property.values)
figure(num=None, figsize=(15, 5))
t = data_frame.index
plt.plot(t, data_frame.x, t, data_frame.y, t, data_frame.z)
plt.hist(data[:,1:])
plt.show()
###Output
_____no_output_____
###Markdown
Generate statistics with NumPy and Pandas
###Code
import numpy as np
from scipy.stats import kurtosis, skew
np.min(data[:,1:4], axis=0)
skew(data[:,1:4])
###Output
_____no_output_____
###Markdown
You can select a column (slice) of data, or a subset of data. In the example below we select the first 10 rows and the columns from 1 onwards (i.e. skipping the first column, which represents the time).
###Code
data[:10,1:]
###Output
_____no_output_____
###Markdown
Out of the box, Pandas gives you some statistics; do not forget to convert your array into a DataFrame first.
###Code
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')))
pd.DataFrame.describe(data_frame)
data_frame.rolling(10).std()
###Output
_____no_output_____
###Markdown
Rolling / Sliding WindowTo apply statistics on a sliding (or rolling) window, we can use the rolling() function of a data frame. In the examples below, we roll with a 2-second window to apply a std() and with a window of 100 data points to apply a skew().
###Code
rolling2s = data_frame.rolling('2s').std()
plt.plot(rolling2s)
plt.show()
rolling100_data_points = data_frame.rolling(100).skew()
plt.plot(rolling100_data_points)
plt.show()
###Output
_____no_output_____
###Markdown
Zero Crossing
###Code
# indices where the x-axis signal changes sign (zero crossings)
plt.hist(np.where(np.diff(np.sign(data[:, 1]))))
plt.show()
###Output
_____no_output_____
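###Markdown
If, instead of a histogram of crossing positions, you want the number of zero crossings per axis, a small helper does the job (a sketch reusing the same `data` array):
###Code
import numpy as np
def count_zero_crossings(signal):
    # a crossing happens wherever consecutive samples change sign
    return int(np.count_nonzero(np.diff(np.sign(signal))))
for axis, column in zip(['x', 'y', 'z'], range(1, 4)):
    print(axis, count_zero_crossings(data[:, column]))
###Output
_____no_output_____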
###Markdown
Treex**Main features**:* Modules contain their parameters* Easy transfer learning* Simple initialization* No metaclass magic* No apply methodTo demonstrate the above, we will start by creating a very contrived but complete module which uses everything: parameters, states, and random state:
###Code
from typing import Tuple
import jax
import jax.numpy as jnp
import numpy as np
import treex as tx
class NoisyStatefulLinear(tx.Module):
# tree parts are defined by treex annotations
w: tx.Parameter
b: tx.Parameter
count: tx.State
rng: tx.Rng
# other annotations are possible but ignored by type
name: str
def __init__(self, din, dout, name="noisy_stateful_linear"):
self.name = name
# Initializers only expect RNG key
self.w = tx.Initializer(lambda k: jax.random.uniform(k, shape=(din, dout)))
self.b = tx.Initializer(lambda k: jax.random.uniform(k, shape=(dout,)))
# random state is JUST state, we can keep it locally
self.rng = tx.Initializer(lambda k: k)
# if value is known there is no need for an Initiaizer
self.count = jnp.array(1)
def __call__(self, x: np.ndarray) -> np.ndarray:
assert isinstance(self.count, jnp.ndarray)
assert isinstance(self.rng, jnp.ndarray)
# state can easily be updated
self.count = self.count + 1
# random state is no different :)
key, self.rng = jax.random.split(self.rng, 2)
# your typical linear operation
y = jnp.dot(x, self.w) + self.b
# add noise for fun
state_noise = 1.0 / self.count
random_noise = 0.8 * jax.random.normal(key, shape=y.shape)
return y + state_noise + random_noise
def __repr__(self) -> str:
return f"NoisyStatefulLinear(w={self.w}, b={self.b}, count={self.count}, rng={self.rng})"
linear = NoisyStatefulLinear(1, 1)
linear
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
InitializationAs advertised, initialization is easy, the only thing you need to do is to call `init` on your module with a random key:
###Code
import jax
linear = linear.init(key=jax.random.PRNGKey(42))
linear
###Output
_____no_output_____
###Markdown
Modules are PytreesIt's fundamentally important that modules are also Pytrees; we can check that they are by using `tree_map` with an arbitrary function:
###Code
# its a pytree alright
doubled = jax.tree_map(lambda x: 2 * x, linear)
doubled
###Output
_____no_output_____
###Markdown
Modules can be slicedAn important feature of this Module system is that it can be sliced based on the type of its parameters; the `slice` method does exactly that:
###Code
params = linear.slice(tx.Parameter)
states = linear.slice(tx.State)
print(f"{params=}")
print(f"{states=}")
###Output
params=NoisyStatefulLinear(w=[[0.91457367]], b=[0.42094743], count=Nothing, rng=Nothing)
states=NoisyStatefulLinear(w=Nothing, b=Nothing, count=1, rng=[1371681402 3011037117])
###Markdown
Notice the following:* Both `params` and `states` are `NoisyStatefulLinear` objects; their type doesn't change after being sliced.* The fields that are filtered out by the `slice` get a special value of type `tx.Nothing`.Why is this important? As we will see later, it is useful to keep parameters and state separate as they will crucially flow through different parts of `value_and_grad`. Modules can be mergedThis is just the inverse operation to `slice`; `merge` behaves like dict's `update` but returns a new module, leaving the original modules intact:
###Code
linear = params.merge(states)
linear
###Output
_____no_output_____
###Markdown
Modules composeAs you'd expect, you can have modules inside other modules; as before, the key is to annotate the class fields. Here we will create an `MLP` class that uses two `NoisyStatefulLinear` modules:
###Code
class MLP(tx.Module):
linear1: NoisyStatefulLinear
linear2: NoisyStatefulLinear
def __init__(self, din, dmid, dout):
self.linear1 = NoisyStatefulLinear(din, dmid, name="linear1")
self.linear2 = NoisyStatefulLinear(dmid, dout, name="linear2")
def __call__(self, x: np.ndarray) -> np.ndarray:
x = jax.nn.relu(self.linear1(x))
x = self.linear2(x)
return x
def __repr__(self) -> str:
return f"MLP(linear1={self.linear1}, linear2={self.linear2})"
model = MLP(din=1, dmid=2, dout=1).init(key=42)
model
###Output
_____no_output_____
###Markdown
Full ExampleUsing the previous `model` we will show how to train it using the proposed Module system. First let's get some data:
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
def get_data(dataset_size: int) -> Tuple[np.ndarray, np.ndarray]:
x = np.random.normal(size=(dataset_size, 1))
y = 5 * x - 2 + 0.4 * np.random.normal(size=(dataset_size, 1))
return x, y
def get_batch(
data: Tuple[np.ndarray, np.ndarray], batch_size: int
) -> Tuple[np.ndarray, np.ndarray]:
idx = np.random.choice(len(data[0]), batch_size)
return jax.tree_map(lambda x: x[idx], data)
data = get_data(1000)
plt.scatter(data[0], data[1])
plt.show()
###Output
_____no_output_____
###Markdown
Now we will be reusing the previous MLP model, and we will create an optax optimizer that will be used to train the model:
###Code
import optax
optimizer = optax.adam(1e-2)
params = model.slice(tx.Parameter)
states = model.slice(tx.State)
opt_state = optimizer.init(params)
###Output
_____no_output_____
###Markdown
Notice that we are already splitting the model into `params` and `states`, since we only need to pass the `params` to the optimizer. Next we will create the loss function; it will take the model parts and the data parts and return the loss plus the new states:
###Code
from functools import partial
@partial(jax.value_and_grad, has_aux=True)
def loss_fn(params: MLP, states: MLP, x, y):
# merge params and states to get a full model
model: MLP = params.merge(states)
# apply model
pred_y = model(x)
# MSE loss
loss = jnp.mean((y - pred_y) ** 2)
# new states
states = model.slice(tx.State)
return loss, states
###Output
_____no_output_____
###Markdown
Notice that the first thing we are doing is merging the `params` and `states` into the complete model, since we need everything in place to perform the forward pass. Also, we return the updated states from the model; this is needed because JAX's functional API requires us to be explicit about state management.**Note**: inside `loss_fn` (which is wrapped by `value_and_grad`) the module can behave like a regular mutable Python object; however, every time it is treated as a pytree a new reference will be created, as happens in `jit`, `grad`, `vmap`, etc. It's important to keep this in mind when using functions like `vmap` inside a module, as certain bookkeeping will be needed to manage state correctly.Next we will implement the `update` function; it will look indistinguishable from your standard Haiku update, which also separates weights into `params` and `states`:
###Code
@jax.jit
def update(params: MLP, states: MLP, opt_state, x, y):
(loss, states), grads = loss_fn(params, states, x, y)
updates, opt_state = optimizer.update(grads, opt_state, params)
# use regular optax
params = optax.apply_updates(params, updates)
return params, states, opt_state, loss
###Output
_____no_output_____
###Markdown
Finally we create a simple training loop that performs a few thousand updates and merges `params` and `states` back into a single `model` at the end:
###Code
steps = 10_000
for step in range(steps):
x, y = get_batch(data, batch_size=32)
params, states, opt_state, loss = update(params, states, opt_state, x, y)
if step % 1000 == 0:
print(f"[{step}] loss = {loss}")
# get the final model
model = params.merge(states)
###Output
[0] loss = 36.88694763183594
###Markdown
Now let's generate some test data and see how our model performed:
###Code
import matplotlib.pyplot as plt
X_test = np.linspace(data[0].min(), data[0].max(), 100)[:, None]
y_pred = model(X_test)
plt.scatter(data[0], data[1], label="data", color="k")
plt.plot(X_test, y_pred, label="prediction")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Load network dataset and extract ARW input data
###Code
path = './datasets/acl.pkl'
network = ig.Graph.Read_Pickle(path)
print(network.summary())
attr = 'single_attr' if network['attributed'] else None
input_data = utils.extract_arw_input_data(network, 'time', 0.00, 0.01, debug=False, attrs=attr)
###Output
IGRAPH DN-- 18665 115311 --
+ attr: attributed (g), attributes (g), single_attr (g), attrs (v), id (v), name (v), single_attr (v), time (v), venue_id (v)
###Markdown
Generate ARW graph with fitted parameters
###Code
params = dict(p_diff=0.08, p_same=0.06, jump=0.42, out=1)
arw_graph = arw.RandomWalkSingleAttribute(params['p_diff'], params['p_same'],
params['jump'], params['out'],
input_data['gpre'], attr_name=attr)
arw_graph.add_nodes(input_data['chunk_sizes'], input_data['mean_outdegs'],
chunk_attr_sampler=input_data['chunk_sampler'] if attr else None)
arw_graphs[network] = arw_graph
###Output
Total chunks: 44
3 7 11 15 19 23 27 31 35 39 43
IGRAPH D--- 18665 119370 --
+ attr: chunk_id (v), single_attr (v)
###Markdown
Compare graph statistics
###Code
utils.plot_deg_and_cc_and_deg_cc([arw_graph.g, network], ['ARW', 'Dataset'], get_atty=network['attributed'])
###Output
ARW: 0.063
Dataset: 0.067
###Markdown
This is a Jupyter Notebook that shows examples of all the functions in the maccorcyclingdata package.
###Code
import maccorcyclingdata.testdata as testdata
import maccorcyclingdata.schedules as schedules
import maccorcyclingdata.validate as validate
import importlib
df = testdata.import_maccor_data('example_data/', 'testdata.csv')
df.head(5)
mult_df = testdata.import_multiple_csv_data('example_data/multiple_csv/')
mult_df.head(5)
cycles = testdata.get_num_cycles(mult_df)
print("The number of cycles in testdata.csv: " + str(cycles))
data = testdata.get_cycle_data(df, ['current_ma', 'voltage_v'], [1, 39], [11, 13])
data.head(5)
del_df = testdata.delete_cycle_steps(df, [1, 2])
del_df.head(5)
print(len(df))
print(len(del_df))
del_df_shifted = testdata.delete_cycle_steps(df, [1,2], True)
del_df_shifted.head(5)
print(len(del_df_shifted))
schedule_df = schedules.import_schedules('example_data/','schedule.csv')
schedule_df
validation_df = validate.validate_test_data(schedule_df, df, 108, 60, 5, 50, False, 3, 2)
validation_df
schedule_df_errors = schedules.import_schedules('example_data/','schedule_errors.csv')
schedule_df_errors
df_errors = testdata.import_maccor_data('example_data/', 'testdata_errors.csv')
df_errors
importlib.reload(validate)
validation_df = validate.validate_test_data(schedule_df_errors, df_errors, 108, 60, 5, 50, False, 3, 2)
validation_df
###Output
_____no_output_____
###Markdown
The following preparation will be done during pre-processing:
###Code
# x_test = x_test[:1000]
# y_test = y_test[:1000]
dataset = x_test
dataset_labels = y_test
del x_train
del y_train
###Output
_____no_output_____
###Markdown
Make sure "python setup_deepeverest_index.py build" is run ahead of time.
###Code
layer_name = "activation_12"
layer_id = all_layer_names.index(layer_name)
import ctypes
lib_file = "/Users/donghe/GoogleDrive/Projects/uwdb-deep-everest/index/build/lib.macosx-10.7-x86_64-3.7/deepeverst_index.cpython-37m-darwin.so"
index_lib = ctypes.CDLL(lib_file)
import math
from utils import *
n_images = len(dataset)
n_partitions= 64
batch_size = 64
ratio = 0.05
bits_per_image = math.ceil(math.log(n_partitions, 2))
layer_result = get_layer_result_by_layer_id(model, dataset, layer_id, batch_size=batch_size)
from DeepEverest import *
rev_act, rev_idx_act, rev_bit_arr, rev_idx_idx, par_low_bound, par_upp_bound = construct_index(
index_lib=index_lib,
n_images=n_images,
ratio=ratio,
n_partitions=n_partitions,
bits_per_image=bits_per_image,
layer_result=layer_result)
###Output
_____no_output_____
###Markdown
The indexes can be persisted to disk with np.save() or pickle.dump() for convenient re-use later.
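###Markdown
For example, here is a sketch of saving and reloading the structures returned by `construct_index` (assuming they are picklable, e.g. NumPy arrays; the file name is arbitrary):
###Code
import pickle
index_parts = {
    "rev_act": rev_act,
    "rev_idx_act": rev_idx_act,
    "rev_bit_arr": rev_bit_arr,
    "rev_idx_idx": rev_idx_idx,
    "par_low_bound": par_low_bound,
    "par_upp_bound": par_upp_bound,
}
with open("deepeverest_index.pkl", "wb") as f:
    pickle.dump(index_parts, f)
# later, load the index back instead of rebuilding it
with open("deepeverest_index.pkl", "rb") as f:
    index_parts = pickle.load(f)
###Output
_____no_output_____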
###Code
label_predicted = np.argmax(model.predict(dataset), axis=1)
label_test = np.argmax(dataset_labels, axis=1)
###Output
_____no_output_____
###Markdown
At query time:
###Code
misclassified_mask = label_predicted[:1000] != label_test[:1000]
np.where(misclassified_mask)
image_ids = [193, 412, 582, 659, 938]
for image_id in image_ids:
prediction = np.argmax(model.predict(x_test[image_id]), axis=1).item()
plot_mnist(x_test, label_test, image_id, prediction)
import heapq
def get_topk_activations_given_images(model, dataset, image_ids, layer_name, k):
res = list()
image_samples = list()
for image_sample_id in image_ids:
image_samples.append(dataset[image_sample_id])
layer_result_image_samples = model.get_layer_result_by_layer_name(image_samples, layer_name)
for idx, image_sample_id in enumerate(image_ids):
heap = list()
for neuron_idx, activation in np.ndenumerate(layer_result_image_samples[idx]):
if len(heap) < k:
heapq.heappush(heap, (activation, neuron_idx))
elif (activation, neuron_idx) > heap[0]:
heapq.heapreplace(heap, (activation, neuron_idx))
res.append(sorted(heap, reverse=True))
return res
image_ids = [659]
k_global = 20
topk_activations = get_topk_activations_given_images(model, x_test, image_ids, layer_name, k_global)[0]
topk_activations_neurons = [x[1] for x in topk_activations]
topk_activations
from NeuronGroup import *
image_sample_id = 659
neuron_group = NeuronGroup(model.model, layer_id, neuron_idx_list=topk_activations_neurons[:3])
top_k, exit_msg, is_in_partition_0, n_images_rerun = answer_query_with_guarantee(
model, dataset, rev_act, rev_idx_act, rev_bit_arr, rev_idx_idx,
par_low_bound, par_upp_bound, image_sample_id,
neuron_group, k_global, n_partitions, bits_per_image,
BATCH_SIZE=batch_size, batch_size=batch_size)
top_k = sorted(top_k)
top_k, exit_msg
for neg_dist, image_id in top_k:
prediction = np.argmax(model.predict(x_test[image_id]), axis=1).item()
plot_mnist(x_test, label_test, image_id, prediction)
def predict_2_as_7(image_id):
return label_predicted[image_id] == 7 and label_test[image_id] == 2
def predict_7_as_7(image_id):
return label_predicted[image_id] == 7 and label_test[image_id] == 7
def predict_2_as_2(image_id):
return label_predicted[image_id] == 2 and label_test[image_id] == 2
def predict_7_as_2(image_id):
return label_predicted[image_id] == 2 and label_test[image_id] == 7
for neg_dist, image_id in top_k:
prediction = np.argmax(model.predict(x_test[image_id]), axis=1).item()
plot_mnist(dataset, label_test, image_id, prediction)
seven_as_two = -1
two_as_seven = -1
two_as_two = -1
seven_as_seven = -1
for image_id in range(x_test.shape[0]):
if seven_as_two < 0 and predict_7_as_2(image_id):
seven_as_two = image_id
if two_as_seven < 0 and predict_2_as_7(image_id):
two_as_seven = image_id
if two_as_two < 0 and predict_2_as_2(image_id):
two_as_two = image_id
if seven_as_seven < 0 and predict_7_as_7(image_id):
seven_as_seven = image_id
if seven_as_two > 0 and two_as_seven > 0 and two_as_two > 0 and seven_as_seven > 0:
break
image_ids = [two_as_two, seven_as_seven, two_as_seven, seven_as_two]
for image_id in image_ids:
prediction = np.argmax(model.predict(x_test[image_id]), axis=1).item()
plot_mnist(x_test, label_test, image_id, prediction)
k_global = 20
topk_activations = get_topk_activations_given_images(model, x_test, image_ids, layer_name, k_global)
topk_activations
neuron_cnt = dict()
for topk_activation in topk_activations:
for activation, neuron_idx in topk_activation:
if neuron_idx in neuron_cnt:
neuron_cnt[neuron_idx] += 1
else:
neuron_cnt[neuron_idx] = 1
sorted_neurons = [(k, v) for k, v in sorted(neuron_cnt.items(), key=lambda item: item[1], reverse=True)]
sorted_neurons_idx = [x[0] for x in sorted_neurons]
sorted_neurons
image_sample_id = seven_as_two
layer_id = all_layer_names.index(layer_name)
neuron_group = NeuronGroup(model.model, layer_id, neuron_idx_list=sorted_neurons_idx[:1])
top_k, exit_msg, is_in_partition_0, n_images_run = answer_query_with_guarantee(
model, dataset, rev_act, rev_idx_act, rev_bit_arr, rev_idx_idx,
par_low_bound, par_upp_bound, image_sample_id,
neuron_group, k_global, n_partitions, bits_per_image,
BATCH_SIZE=batch_size, batch_size=batch_size)
top_k = sorted(top_k)
for neg_dist, image_id in top_k:
prediction = np.argmax(model.predict(x_test[image_id]), axis=1).item()
plot_mnist(x_test, label_test, image_id, prediction)
layer_id = all_layer_names.index(layer_name)
neuron_group = NeuronGroup(model.model, layer_id, neuron_idx_list=[(1, 0, 441)])
top_k, exit_msg, is_in_partition_0, n_images_run = answer_query_with_guarantee(
model, dataset, rev_act, rev_idx_act, rev_bit_arr, rev_idx_idx,
par_low_bound, par_upp_bound, image_sample_id,
neuron_group, k_global, n_partitions, bits_per_image,
BATCH_SIZE=batch_size, batch_size=batch_size)
top_k = sorted(top_k)
for neg_dist, image_id in top_k:
prediction = np.argmax(model.predict(x_test[image_id]), axis=1).item()
plot_mnist(x_test, label_test, image_id, prediction)
confusion_activations = [topk_activations[2], topk_activations[3]]
neuron_cnt = dict()
for topk_activation in confusion_activations:
for activation, neuron_idx in topk_activation:
if neuron_idx in neuron_cnt:
neuron_cnt[neuron_idx] += 1
else:
neuron_cnt[neuron_idx] = 1
{k: v for k, v in sorted(neuron_cnt.items(), key=lambda item: item[1], reverse=True)}
layer_id = all_layer_names.index(layer_name)
neuron_group = NeuronGroup(model.model, layer_id, dimension_ranges=[(1, 2), (1, 2), (62, 130)])
top_k, exit_msg, is_in_partition_0, n_images_run = answer_query_with_guarantee(
model, dataset, rev_act, rev_idx_act, rev_bit_arr, rev_idx_idx,
par_low_bound, par_upp_bound, image_sample_id,
neuron_group, k_global, n_partitions, bits_per_image,
BATCH_SIZE=batch_size, batch_size=batch_size)
top_k = sorted(top_k)
for neg_dist, image_id in top_k:
prediction = np.argmax(model.predict(x_test[image_id]), axis=1).item()
plot_mnist(x_test, label_test, image_id, prediction)
###Output
image 5654, size of neuron group 68
threshold: 0.16640746593475342, max in answer: 1.3392893075942993, images run: 3862
threshold: 0.47820723056793213, max in answer: 1.3304256200790405, images run: 6197
threshold: 0.6547818779945374, max in answer: 1.3304256200790405, images run: 7604
threshold: 0.7672882676124573, max in answer: 1.3304256200790405, images run: 8434
threshold: 0.9051432609558105, max in answer: 1.3304256200790405, images run: 8940
threshold: 1.0185017585754395, max in answer: 1.3304256200790405, images run: 9284
threshold: 1.142806053161621, max in answer: 1.3304256200790405, images run: 9485
threshold: 1.2024894952774048, max in answer: 1.3304256200790405, images run: 9608
threshold: 1.3079551458358765, max in answer: 1.3304256200790405, images run: 9700
threshold: 1.3763256072998047, max in answer: 1.3304256200790405, images run: 9765
======================= NTA exited =======================
###Markdown
U-net - Example application*Marcos R. A. Conceição* U-net architectureA U-net is a state-of-the-art fully convolutional neural network (FCNN) first described by Ronneberger *et al.* (2015). Such a network is based on three major pillars: an encoder-decoder structure, multi-scale analysis and skip connections. A typical U-net takes high-resolution $N$-dimensional tensor data as input (e.g., time series, images and volumes), which undergoes multiple filtering operations by a number of trainable convolutional kernels, is passed through an activation function and is reduced along its dimensions for subsequent lower-resolution processing. This sequence is applied, scale after scale, down to the last one. This process is called encoding, as the lower-resolution layers hold condensed representations of abstract features present in the original data. This condition is enforced during training, since such low-resolution representations need to be decompressed by the network in the later decoding steps. Each of these steps resizes the inputs to the resolution used on the upper scale and performs another set of similar filtering operations. A key improvement used in U-nets is the so-called skip connections. At each scale, before filtering the decompressed data, it is concatenated with the last outputs obtained for the same scale back in the compression stage. This simple addition lets previously available higher-resolution features be recalled by the network, making U-net results remarkably precise at locating events compared to other networks (even common FCNNs). The last step in this model is the application of $M$ convolutional filters to the outputs --- now at the original scale --- which produce $M$ outputs at each data position. These outputs may represent the probability of belonging to each of the $M$ considered classes when segmentation is the task, or the $M$ different channels of an output image. Example problem Here we will use a U-net over scikit-learn's 8x8 digits dataset to perform segmentation of the zeros in the input images. Importing libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import optimizers, callbacks
import sklearn.datasets as skds
from sklearn.model_selection import train_test_split
from unet import make_unet
###Output
_____no_output_____
###Markdown
Setting up the data
###Code
X, y = skds.load_digits(n_class=10, return_X_y=True)
X = X.reshape(-1, 8, 8, 1)
X_mean = X.mean(axis=(1,2,3))[:,None,None,None]
X_std = X.std(axis=(1,2,3))[:,None,None,None]
X_norm = (X-X_mean)/X_std
thresh = .5
Y = X_norm >= thresh
num_masked = 0
Y_num = Y * (y==num_masked)[:, None, None, None]
del(X_mean)
del(X_std)
del(X_norm)
X_true = X
Y_true = Y_num
###Output
_____no_output_____
###Markdown
Balancing dataset
###Code
idx_masked, = np.nonzero(y==num_masked)
idx_not_masked, = np.nonzero(y!=num_masked)
idx_masked.size / y.size
idx_balance = np.concatenate([idx_masked, idx_not_masked[:idx_masked.size]])
np.random.shuffle(idx_balance)
X_true = X_true[idx_balance]
Y_true = Y_true[idx_balance]
###Output
_____no_output_____
###Markdown
Showing data
###Code
i0 = 1
nrows = 5
ncols = 3
fig, axes = plt.subplots(nrows, 2*ncols, figsize=(8, 6))
for j in range(0, axes.shape[1], 2):
axes[0, j+0].set_title('Input')
axes[0, j+1].set_title('Expected')
for i in range(axes.shape[0]):
axes[i, j+0].imshow(X_true[ncols*j+i0+i, ..., 0])
axes[i, j+1].imshow(Y_true[ncols*j+i0+i, ..., 0], vmin=0, vmax=1)
plt.show()
###Output
_____no_output_____
###Markdown
Defining U-net
###Code
model = make_unet(
X.shape[1:],
nout=1,
scales=2,
nconvs_by_scale=2,
base_filters=3,
kernel_size=3,
activation='relu',
first_activation='tanh',
last_activation='sigmoid',
interpolator='bilinear',
last_interpolator=None,
norm=True,
dropout=False,
norm_at_start=True,
nconvs_bottom=None,
use_skip_connections=True,
return_encoders=False,
verbose=True,
)
###Output
start (None, 8, 8, 1)
prepare (None, 8, 8, 3)
downward (None, 4, 4, 6)
downward (None, 2, 2, 12)
upward (None, 4, 4, 6)
upward (None, 8, 8, 3)
out (None, 8, 8, 1)
###Markdown
U-net predictions - before training
###Code
Y_pred_proba = model.predict(X_true)  # predict on the balanced data so indices match X_true/Y_true below
i0 = 1
nrows = 5
ncols = 3
fig, axes = plt.subplots(nrows, 3*ncols, figsize=(10, 6))
for j in range(0, axes.shape[1], 3):
axes[0, j+0].set_title('Input')
axes[0, j+1].set_title('Expected')
axes[0, j+2].set_title('Predicted')
for i in range(axes.shape[0]):
axes[i, j+0].imshow(X_true[ncols*j+i0+i, ..., 0])
axes[i, j+1].imshow(Y_true[ncols*j+i0+i, ..., 0], vmin=0, vmax=1)
axes[i, j+2].imshow(Y_pred_proba[ncols*j+i0+i, ..., 0], vmin=0, vmax=1)
plt.show()
###Output
/home/marcosrdac/.local/share/python_envs/m/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py:4211: UserWarning: Even though the `tf.config.experimental_run_functions_eagerly` option is set, this option does not apply to tf.data functions. To force eager execution of tf.data functions, please use `tf.data.experimental.enable_debug_mode()`.
warnings.warn(
###Markdown
Preparing holdout validation
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X_true,
Y_true,
test_size=1 / 4)
print(f'n_train: {X_train.shape[0]}')
print(f'n_test: {X_test.shape[0]}\n')
###Output
n_train: {X_train.shape[0]}
n_test: {X_test.shape[0]}
###Markdown
Training U-net
###Code
learning_rate = 0.0005
max_epochs = 2000
batch_size = X_train.shape[0]
# batch_size = 100
optim = optimizers.Adam(learning_rate)
model.compile(optimizer=optim, loss='binary_crossentropy', metrics=['accuracy'])
callback_list = [
callbacks.EarlyStopping(
mode='min',
monitor='val_loss',
patience=80,
min_delta=0,
verbose=1,
baseline=None,
restore_best_weights=True,
)
]
train = model.fit(X_train,
Y_train,
epochs=max_epochs,
validation_data=(X_test, Y_test),
callbacks=callback_list,
batch_size=batch_size)
###Output
Epoch 1/2000
1/1 [==============================] - 0s 349ms/step - loss: 0.8827 - accuracy: 0.4099 - val_loss: 0.6550 - val_accuracy: 0.6987
Epoch 2/2000
1/1 [==============================] - 0s 206ms/step - loss: 0.8569 - accuracy: 0.4265 - val_loss: 0.6544 - val_accuracy: 0.6986
Epoch 3/2000
1/1 [==============================] - 0s 225ms/step - loss: 0.8336 - accuracy: 0.4400 - val_loss: 0.6537 - val_accuracy: 0.6979
Epoch 4/2000
1/1 [==============================] - 0s 262ms/step - loss: 0.8135 - accuracy: 0.4550 - val_loss: 0.6529 - val_accuracy: 0.6979
Epoch 5/2000
1/1 [==============================] - 0s 226ms/step - loss: 0.7962 - accuracy: 0.4642 - val_loss: 0.6520 - val_accuracy: 0.6950
Epoch 6/2000
1/1 [==============================] - 0s 168ms/step - loss: 0.7816 - accuracy: 0.4747 - val_loss: 0.6511 - val_accuracy: 0.6952
Epoch 7/2000
1/1 [==============================] - 0s 244ms/step - loss: 0.7685 - accuracy: 0.4834 - val_loss: 0.6502 - val_accuracy: 0.6936
Epoch 8/2000
1/1 [==============================] - 0s 215ms/step - loss: 0.7565 - accuracy: 0.4906 - val_loss: 0.6493 - val_accuracy: 0.6921
Epoch 9/2000
1/1 [==============================] - 0s 252ms/step - loss: 0.7454 - accuracy: 0.4958 - val_loss: 0.6484 - val_accuracy: 0.6929
Epoch 10/2000
1/1 [==============================] - 0s 210ms/step - loss: 0.7348 - accuracy: 0.5048 - val_loss: 0.6474 - val_accuracy: 0.6915
Epoch 11/2000
1/1 [==============================] - 0s 201ms/step - loss: 0.7248 - accuracy: 0.5114 - val_loss: 0.6465 - val_accuracy: 0.6914
Epoch 12/2000
1/1 [==============================] - 0s 174ms/step - loss: 0.7152 - accuracy: 0.5188 - val_loss: 0.6456 - val_accuracy: 0.6910
Epoch 13/2000
1/1 [==============================] - 0s 184ms/step - loss: 0.7060 - accuracy: 0.5263 - val_loss: 0.6448 - val_accuracy: 0.6901
Epoch 14/2000
1/1 [==============================] - 0s 186ms/step - loss: 0.6973 - accuracy: 0.5331 - val_loss: 0.6440 - val_accuracy: 0.6877
Epoch 15/2000
1/1 [==============================] - 0s 168ms/step - loss: 0.6891 - accuracy: 0.5368 - val_loss: 0.6433 - val_accuracy: 0.6861
Epoch 16/2000
1/1 [==============================] - 0s 174ms/step - loss: 0.6817 - accuracy: 0.5411 - val_loss: 0.6427 - val_accuracy: 0.6845
Epoch 17/2000
1/1 [==============================] - 0s 168ms/step - loss: 0.6748 - accuracy: 0.5448 - val_loss: 0.6422 - val_accuracy: 0.6838
Epoch 18/2000
1/1 [==============================] - 0s 177ms/step - loss: 0.6685 - accuracy: 0.5471 - val_loss: 0.6418 - val_accuracy: 0.6836
Epoch 19/2000
1/1 [==============================] - 0s 170ms/step - loss: 0.6625 - accuracy: 0.5507 - val_loss: 0.6415 - val_accuracy: 0.6817
Epoch 20/2000
1/1 [==============================] - 0s 152ms/step - loss: 0.6567 - accuracy: 0.5551 - val_loss: 0.6413 - val_accuracy: 0.6805
Epoch 21/2000
1/1 [==============================] - 0s 178ms/step - loss: 0.6510 - accuracy: 0.5602 - val_loss: 0.6412 - val_accuracy: 0.6796
Epoch 22/2000
1/1 [==============================] - 0s 176ms/step - loss: 0.6453 - accuracy: 0.5650 - val_loss: 0.6412 - val_accuracy: 0.6798
Epoch 23/2000
1/1 [==============================] - 0s 193ms/step - loss: 0.6396 - accuracy: 0.5696 - val_loss: 0.6413 - val_accuracy: 0.6803
Epoch 24/2000
1/1 [==============================] - 0s 182ms/step - loss: 0.6342 - accuracy: 0.5741 - val_loss: 0.6414 - val_accuracy: 0.6794
Epoch 25/2000
1/1 [==============================] - 0s 200ms/step - loss: 0.6289 - accuracy: 0.5790 - val_loss: 0.6415 - val_accuracy: 0.6789
Epoch 26/2000
1/1 [==============================] - 0s 260ms/step - loss: 0.6239 - accuracy: 0.5843 - val_loss: 0.6417 - val_accuracy: 0.6791
Epoch 27/2000
1/1 [==============================] - 0s 175ms/step - loss: 0.6192 - accuracy: 0.5880 - val_loss: 0.6419 - val_accuracy: 0.6794
Epoch 28/2000
1/1 [==============================] - 0s 195ms/step - loss: 0.6147 - accuracy: 0.5933 - val_loss: 0.6421 - val_accuracy: 0.6800
Epoch 29/2000
1/1 [==============================] - 0s 177ms/step - loss: 0.6104 - accuracy: 0.5972 - val_loss: 0.6422 - val_accuracy: 0.6798
Epoch 30/2000
1/1 [==============================] - 0s 178ms/step - loss: 0.6062 - accuracy: 0.6019 - val_loss: 0.6423 - val_accuracy: 0.6803
Epoch 31/2000
1/1 [==============================] - 0s 182ms/step - loss: 0.6021 - accuracy: 0.6062 - val_loss: 0.6424 - val_accuracy: 0.6814
Epoch 32/2000
1/1 [==============================] - 0s 178ms/step - loss: 0.5980 - accuracy: 0.6104 - val_loss: 0.6425 - val_accuracy: 0.6815
Epoch 33/2000
1/1 [==============================] - 0s 182ms/step - loss: 0.5941 - accuracy: 0.6144 - val_loss: 0.6425 - val_accuracy: 0.6821
Epoch 34/2000
1/1 [==============================] - 0s 189ms/step - loss: 0.5902 - accuracy: 0.6185 - val_loss: 0.6425 - val_accuracy: 0.6838
Epoch 35/2000
1/1 [==============================] - 0s 175ms/step - loss: 0.5863 - accuracy: 0.6219 - val_loss: 0.6426 - val_accuracy: 0.6824
Epoch 36/2000
1/1 [==============================] - 0s 159ms/step - loss: 0.5826 - accuracy: 0.6256 - val_loss: 0.6426 - val_accuracy: 0.6815
Epoch 37/2000
1/1 [==============================] - 0s 178ms/step - loss: 0.5790 - accuracy: 0.6297 - val_loss: 0.6426 - val_accuracy: 0.6817
Epoch 38/2000
1/1 [==============================] - 0s 169ms/step - loss: 0.5754 - accuracy: 0.6339 - val_loss: 0.6427 - val_accuracy: 0.6814
Epoch 39/2000
1/1 [==============================] - 0s 206ms/step - loss: 0.5719 - accuracy: 0.6378 - val_loss: 0.6427 - val_accuracy: 0.6812
Epoch 40/2000
1/1 [==============================] - 0s 177ms/step - loss: 0.5685 - accuracy: 0.6423 - val_loss: 0.6428 - val_accuracy: 0.6803
Epoch 41/2000
1/1 [==============================] - 0s 175ms/step - loss: 0.5652 - accuracy: 0.6458 - val_loss: 0.6428 - val_accuracy: 0.6794
Epoch 42/2000
1/1 [==============================] - 0s 159ms/step - loss: 0.5619 - accuracy: 0.6513 - val_loss: 0.6429 - val_accuracy: 0.6796
Epoch 43/2000
1/1 [==============================] - 0s 187ms/step - loss: 0.5586 - accuracy: 0.6554 - val_loss: 0.6430 - val_accuracy: 0.6798
Epoch 44/2000
1/1 [==============================] - 0s 171ms/step - loss: 0.5554 - accuracy: 0.6588 - val_loss: 0.6431 - val_accuracy: 0.6807
Epoch 45/2000
1/1 [==============================] - 0s 187ms/step - loss: 0.5522 - accuracy: 0.6615 - val_loss: 0.6432 - val_accuracy: 0.6808
Epoch 46/2000
1/1 [==============================] - 0s 179ms/step - loss: 0.5491 - accuracy: 0.6647 - val_loss: 0.6433 - val_accuracy: 0.6794
Epoch 47/2000
1/1 [==============================] - 0s 157ms/step - loss: 0.5460 - accuracy: 0.6680 - val_loss: 0.6434 - val_accuracy: 0.6794
Epoch 48/2000
1/1 [==============================] - 0s 182ms/step - loss: 0.5430 - accuracy: 0.6718 - val_loss: 0.6435 - val_accuracy: 0.6787
Epoch 49/2000
1/1 [==============================] - 0s 183ms/step - loss: 0.5399 - accuracy: 0.6747 - val_loss: 0.6437 - val_accuracy: 0.6784
Epoch 50/2000
1/1 [==============================] - 0s 209ms/step - loss: 0.5369 - accuracy: 0.6788 - val_loss: 0.6438 - val_accuracy: 0.6777
Epoch 51/2000
1/1 [==============================] - 0s 178ms/step - loss: 0.5338 - accuracy: 0.6829 - val_loss: 0.6440 - val_accuracy: 0.6752
Epoch 52/2000
1/1 [==============================] - 0s 170ms/step - loss: 0.5308 - accuracy: 0.6864 - val_loss: 0.6441 - val_accuracy: 0.6743
Epoch 53/2000
1/1 [==============================] - 0s 178ms/step - loss: 0.5278 - accuracy: 0.6891 - val_loss: 0.6442 - val_accuracy: 0.6745
Epoch 54/2000
1/1 [==============================] - 0s 173ms/step - loss: 0.5248 - accuracy: 0.6932 - val_loss: 0.6443 - val_accuracy: 0.6735
Epoch 55/2000
1/1 [==============================] - 0s 180ms/step - loss: 0.5218 - accuracy: 0.6958 - val_loss: 0.6443 - val_accuracy: 0.6728
Epoch 56/2000
1/1 [==============================] - 0s 199ms/step - loss: 0.5188 - accuracy: 0.6997 - val_loss: 0.6443 - val_accuracy: 0.6731
Epoch 57/2000
1/1 [==============================] - 0s 177ms/step - loss: 0.5159 - accuracy: 0.7043 - val_loss: 0.6443 - val_accuracy: 0.6721
Epoch 58/2000
1/1 [==============================] - 0s 157ms/step - loss: 0.5129 - accuracy: 0.7080 - val_loss: 0.6442 - val_accuracy: 0.6728
Epoch 59/2000
1/1 [==============================] - 0s 189ms/step - loss: 0.5100 - accuracy: 0.7110 - val_loss: 0.6441 - val_accuracy: 0.6724
Epoch 60/2000
1/1 [==============================] - 0s 202ms/step - loss: 0.5071 - accuracy: 0.7144 - val_loss: 0.6439 - val_accuracy: 0.6717
Epoch 61/2000
1/1 [==============================] - 0s 182ms/step - loss: 0.5042 - accuracy: 0.7173 - val_loss: 0.6437 - val_accuracy: 0.6710
Epoch 62/2000
1/1 [==============================] - 0s 209ms/step - loss: 0.5013 - accuracy: 0.7213 - val_loss: 0.6435 - val_accuracy: 0.6705
Epoch 63/2000
1/1 [==============================] - 0s 177ms/step - loss: 0.4984 - accuracy: 0.7255 - val_loss: 0.6433 - val_accuracy: 0.6713
Epoch 64/2000
1/1 [==============================] - 0s 185ms/step - loss: 0.4956 - accuracy: 0.7286 - val_loss: 0.6431 - val_accuracy: 0.6721
Epoch 65/2000
1/1 [==============================] - 0s 260ms/step - loss: 0.4928 - accuracy: 0.7323 - val_loss: 0.6428 - val_accuracy: 0.6726
Epoch 66/2000
1/1 [==============================] - 0s 188ms/step - loss: 0.4900 - accuracy: 0.7362 - val_loss: 0.6425 - val_accuracy: 0.6731
Epoch 67/2000
1/1 [==============================] - 0s 236ms/step - loss: 0.4873 - accuracy: 0.7400 - val_loss: 0.6421 - val_accuracy: 0.6743
Epoch 68/2000
1/1 [==============================] - 0s 161ms/step - loss: 0.4846 - accuracy: 0.7433 - val_loss: 0.6417 - val_accuracy: 0.6763
Epoch 69/2000
1/1 [==============================] - 0s 197ms/step - loss: 0.4820 - accuracy: 0.7466 - val_loss: 0.6413 - val_accuracy: 0.6784
Epoch 70/2000
1/1 [==============================] - 0s 208ms/step - loss: 0.4794 - accuracy: 0.7494 - val_loss: 0.6408 - val_accuracy: 0.6785
Epoch 71/2000
1/1 [==============================] - 0s 227ms/step - loss: 0.4769 - accuracy: 0.7529 - val_loss: 0.6403 - val_accuracy: 0.6794
Epoch 72/2000
1/1 [==============================] - 0s 207ms/step - loss: 0.4743 - accuracy: 0.7556 - val_loss: 0.6397 - val_accuracy: 0.6794
Epoch 73/2000
1/1 [==============================] - 0s 152ms/step - loss: 0.4718 - accuracy: 0.7593 - val_loss: 0.6390 - val_accuracy: 0.6821
Epoch 74/2000
1/1 [==============================] - 0s 268ms/step - loss: 0.4692 - accuracy: 0.7617 - val_loss: 0.6383 - val_accuracy: 0.6829
Epoch 75/2000
1/1 [==============================] - 0s 221ms/step - loss: 0.4667 - accuracy: 0.7636 - val_loss: 0.6375 - val_accuracy: 0.6845
Epoch 76/2000
1/1 [==============================] - 0s 242ms/step - loss: 0.4642 - accuracy: 0.7665 - val_loss: 0.6367 - val_accuracy: 0.6859
Epoch 77/2000
1/1 [==============================] - 0s 196ms/step - loss: 0.4617 - accuracy: 0.7707 - val_loss: 0.6358 - val_accuracy: 0.6882
Epoch 78/2000
1/1 [==============================] - 0s 209ms/step - loss: 0.4593 - accuracy: 0.7735 - val_loss: 0.6348 - val_accuracy: 0.6907
Epoch 79/2000
1/1 [==============================] - 0s 246ms/step - loss: 0.4568 - accuracy: 0.7774 - val_loss: 0.6338 - val_accuracy: 0.6922
Epoch 80/2000
1/1 [==============================] - 0s 209ms/step - loss: 0.4544 - accuracy: 0.7805 - val_loss: 0.6327 - val_accuracy: 0.6950
Epoch 81/2000
1/1 [==============================] - 0s 206ms/step - loss: 0.4520 - accuracy: 0.7839 - val_loss: 0.6315 - val_accuracy: 0.6965
Epoch 82/2000
1/1 [==============================] - 0s 185ms/step - loss: 0.4497 - accuracy: 0.7870 - val_loss: 0.6303 - val_accuracy: 0.6973
Epoch 83/2000
1/1 [==============================] - 0s 239ms/step - loss: 0.4474 - accuracy: 0.7901 - val_loss: 0.6290 - val_accuracy: 0.7001
Epoch 84/2000
1/1 [==============================] - 0s 208ms/step - loss: 0.4451 - accuracy: 0.7932 - val_loss: 0.6276 - val_accuracy: 0.7035
Epoch 85/2000
1/1 [==============================] - 0s 234ms/step - loss: 0.4428 - accuracy: 0.7961 - val_loss: 0.6260 - val_accuracy: 0.7061
Epoch 86/2000
1/1 [==============================] - 0s 190ms/step - loss: 0.4406 - accuracy: 0.7980 - val_loss: 0.6244 - val_accuracy: 0.7100
Epoch 87/2000
1/1 [==============================] - 0s 198ms/step - loss: 0.4385 - accuracy: 0.8004 - val_loss: 0.6227 - val_accuracy: 0.7152
Epoch 88/2000
1/1 [==============================] - 0s 214ms/step - loss: 0.4363 - accuracy: 0.8031 - val_loss: 0.6208 - val_accuracy: 0.7191
Epoch 89/2000
1/1 [==============================] - 0s 200ms/step - loss: 0.4342 - accuracy: 0.8052 - val_loss: 0.6189 - val_accuracy: 0.7212
Epoch 90/2000
1/1 [==============================] - 0s 230ms/step - loss: 0.4321 - accuracy: 0.8075 - val_loss: 0.6169 - val_accuracy: 0.7252
Epoch 91/2000
1/1 [==============================] - 0s 180ms/step - loss: 0.4301 - accuracy: 0.8097 - val_loss: 0.6148 - val_accuracy: 0.7291
Epoch 92/2000
1/1 [==============================] - 0s 223ms/step - loss: 0.4280 - accuracy: 0.8123 - val_loss: 0.6126 - val_accuracy: 0.7337
Epoch 93/2000
1/1 [==============================] - 0s 235ms/step - loss: 0.4260 - accuracy: 0.8136 - val_loss: 0.6104 - val_accuracy: 0.7377
Epoch 94/2000
1/1 [==============================] - 0s 205ms/step - loss: 0.4241 - accuracy: 0.8157 - val_loss: 0.6081 - val_accuracy: 0.7423
Epoch 95/2000
1/1 [==============================] - 0s 178ms/step - loss: 0.4221 - accuracy: 0.8172 - val_loss: 0.6058 - val_accuracy: 0.7449
Epoch 96/2000
1/1 [==============================] - 0s 248ms/step - loss: 0.4202 - accuracy: 0.8191 - val_loss: 0.6034 - val_accuracy: 0.7489
Epoch 97/2000
1/1 [==============================] - 0s 207ms/step - loss: 0.4182 - accuracy: 0.8210 - val_loss: 0.6010 - val_accuracy: 0.7505
Epoch 98/2000
1/1 [==============================] - 0s 217ms/step - loss: 0.4163 - accuracy: 0.8225 - val_loss: 0.5986 - val_accuracy: 0.7537
Epoch 99/2000
1/1 [==============================] - 0s 195ms/step - loss: 0.4144 - accuracy: 0.8240 - val_loss: 0.5961 - val_accuracy: 0.7556
Epoch 100/2000
1/1 [==============================] - 0s 238ms/step - loss: 0.4125 - accuracy: 0.8257 - val_loss: 0.5936 - val_accuracy: 0.7588
Epoch 101/2000
1/1 [==============================] - 0s 230ms/step - loss: 0.4107 - accuracy: 0.8272 - val_loss: 0.5910 - val_accuracy: 0.7642
Epoch 102/2000
1/1 [==============================] - 0s 199ms/step - loss: 0.4088 - accuracy: 0.8292 - val_loss: 0.5884 - val_accuracy: 0.7679
Epoch 103/2000
1/1 [==============================] - 0s 186ms/step - loss: 0.4069 - accuracy: 0.8311 - val_loss: 0.5859 - val_accuracy: 0.7704
Epoch 104/2000
1/1 [==============================] - 0s 181ms/step - loss: 0.4051 - accuracy: 0.8332 - val_loss: 0.5833 - val_accuracy: 0.7725
Epoch 105/2000
1/1 [==============================] - 0s 200ms/step - loss: 0.4033 - accuracy: 0.8343 - val_loss: 0.5806 - val_accuracy: 0.7739
Epoch 106/2000
1/1 [==============================] - 0s 211ms/step - loss: 0.4015 - accuracy: 0.8370 - val_loss: 0.5781 - val_accuracy: 0.7772
Epoch 107/2000
1/1 [==============================] - 0s 271ms/step - loss: 0.3997 - accuracy: 0.8381 - val_loss: 0.5755 - val_accuracy: 0.7798
Epoch 108/2000
1/1 [==============================] - 0s 271ms/step - loss: 0.3979 - accuracy: 0.8401 - val_loss: 0.5730 - val_accuracy: 0.7821
Epoch 109/2000
1/1 [==============================] - 0s 241ms/step - loss: 0.3962 - accuracy: 0.8419 - val_loss: 0.5704 - val_accuracy: 0.7853
Epoch 110/2000
1/1 [==============================] - 0s 226ms/step - loss: 0.3944 - accuracy: 0.8432 - val_loss: 0.5679 - val_accuracy: 0.7884
Epoch 111/2000
1/1 [==============================] - 0s 188ms/step - loss: 0.3927 - accuracy: 0.8445 - val_loss: 0.5655 - val_accuracy: 0.7916
Epoch 112/2000
1/1 [==============================] - 0s 202ms/step - loss: 0.3911 - accuracy: 0.8460 - val_loss: 0.5630 - val_accuracy: 0.7944
Epoch 113/2000
1/1 [==============================] - 0s 169ms/step - loss: 0.3894 - accuracy: 0.8475 - val_loss: 0.5605 - val_accuracy: 0.7963
Epoch 114/2000
1/1 [==============================] - 0s 220ms/step - loss: 0.3877 - accuracy: 0.8490 - val_loss: 0.5581 - val_accuracy: 0.7976
Epoch 115/2000
###Markdown
Showing history
###Code
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
epochs = 1 + np.arange(len(train.history['loss']))
axes[0].set_title('Loss')
axes[0].plot(epochs, train.history['loss'], label='Train')
axes[0].plot(epochs, train.history['val_loss'], label='Test')
# axes[0].set_yscale('log')
axes[1].set_title('Accuracy')
axes[1].plot(epochs, train.history['accuracy'], label='Train')
axes[1].plot(epochs, train.history['val_accuracy'], label='Test')
for ax in axes:
ax.set_xlabel('Epochs')
ax.set_ylabel('Metric')
ax.grid()
ax.axvline(
epochs[np.argmin(train.history['val_loss'])],
label='Best model',
c='k',
ls='--',
)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
U-net predictions - after training
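Besides the visual comparison below, an overlap metric such as intersection over union (IoU) can quantify how well the thresholded masks match the targets. A minimal sketch, assuming boolean arrays shaped like `Y_true` and the thresholded `Y_pred` built in the next cells:
```
# Illustrative sketch: intersection over union (IoU) for binary masks.
import numpy as np

def iou(y_true, y_pred):
    # Overlapping foreground pixels divided by the union of both foregrounds
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return intersection / union if union > 0 else 1.0

# e.g. iou(Y_true, Y_pred) once Y_pred has been thresholded
```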
###Code
Y_pred_proba = model.predict(X_true)
Y_pred = Y_pred_proba >= .5
i0 = 1
nrows = 5
ncols = 3
fig, axes = plt.subplots(nrows, 3*ncols, figsize=(10, 6))
for j in range(0, axes.shape[1], 3):
axes[0, j+0].set_title('Input')
axes[0, j+1].set_title('Expected')
axes[0, j+2].set_title('Predicted')
for i in range(axes.shape[0]):
axes[i, j+0].imshow(X_true[ncols*j+i0+i, ..., 0])
axes[i, j+1].imshow(Y_true[ncols*j+i0+i, ..., 0], vmin=0, vmax=1)
axes[i, j+2].imshow(Y_pred_proba[ncols*j+i0+i, ..., 0], vmin=0, vmax=1)
plt.show()
i0 = 1
nrows = 5
ncols = 3
fig, axes = plt.subplots(nrows, 3*ncols, figsize=(10, 6))
for j in range(0, axes.shape[1], 3):
axes[0, j+0].set_title('Input')
axes[0, j+1].set_title('Expected')
axes[0, j+2].set_title('Predicted')
for i in range(axes.shape[0]):
axes[i, j+0].imshow(X_true[ncols*j+i0+i, ..., 0])
axes[i, j+1].imshow(Y_true[ncols*j+i0+i, ..., 0], vmin=0, vmax=1)
axes[i, j+2].imshow(Y_pred[ncols*j+i0+i, ..., 0], vmin=0, vmax=1)
plt.show()
###Output
_____no_output_____
###Markdown
continuous-build Hi friend! You've found the continuous-build example notebook, provided in [this Github repository](https://www.github.com/binder-examples/continuous-build) and the container it builds. For complete documentation, we will direct you to the [repo2docker](https://repo2docker.readthedocs.io/en/latest/deploy.html) pages on ReadTheDocs. This Notebook This notebook is a very simple dummy example that shows how the `requirements.txt` served in the repository root is used to install dependencies into the container, allowing you to see some fabulous pokemon ascii in the sections below. This is all made possible by [repo2docker](https://www.github.com/jupyter/repo2docker), and you are encouraged to ship your notebooks with this software to ensure reproducible use and mitigate the hassle of installing dependencies. If you want to get help, or request an example, please let us know on the continuous-build [issues board](https://www.github.com/binder-examples/continuous-build/issues) or the [repo2docker](https://www.github.com/jupyter/repo2docker/issues) issues board, depending on your issue.
###Code
# Here is a dependency in our requirements.txt, pokemon!
import pokemon
# What is your Pokemon avatar?
from pokemon.skills import get_avatar
avatar = get_avatar('dinosaur')
###Output
@@@@@@@@@@@@.?%..%%.@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@%*.........****.*%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@?.+S%?%.+++++.....++?.*..+@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@.......%...**+.++..%+.+##.#@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@,.%......%...S....?+%SSSS+S.@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@SS*..+***..........*+?+S+?.S@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@:+*..,,,:::*S+......*.++++.*@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@%S@,,,:::#.:%........+++++@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@S,,,::::.?..*........++++,@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@,,::::,,::::**....+%*.+++,@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@.S,@+,:::::::*:*.....%++S@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@.:::::.*...*..S++@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@,:::,@.%+++++S+@@@@@@@@@@@@@@@@@@.*..?@@@@
@@@@@@@@@@@@@@@@@@,:::,@@@@@@@@@@@@@@@@@@@@@@@@@@@+....++@@@
@@@@@@@@@@@@@@@S..,::::S%*@@@@@@@@@@@@@@@@@@@@+...++++++++++
@@@@@@@@:.,,,,:+::::::::*%+:*,,#@@@@@@@@@@@@@+....+.#S.S+...
@@@@@@@,,,:.,,:,::::::,::::::,#::,.@@@@@@@@@@@.++++%*.#+++++
@@@@@@@@.,,,,,,:?,,,,,,?::::::::,@@@@@@@@@@@@@@@@@S?::S+,,@@
@@@@@@@*,,,:,++SS:,,,,,?S+%:::::,,@@@@@@@@@@@@@@@@,,::@@@@@@
@@@@@@@%+%?++....*@,,,,SS+#S?S+@@@@@@@@@@@@@@@@@@+,,:.@@@@@@
@@@@:.+..S+*........*.S...S%S++@@@@@@@@@@@@@@@@@.,,:.@@@@@@@
@@@,....S%%............?.+.++S#%@@@@@@@@@@@@@@+,,,::@@@@@@@@
@@@@%S+S.+S..............?S+?.++%@@@@@@@@@@+,,,,::%@@@@@@@@@
@@@@@@.......................++++%@@@@:%*,,,,,::,:@@@@@@@@@@
@@@@@@?...+?++.....+++.......+++++.******::::::?@@@@@@@@@@@@
@@@@@@:....+?+++++++++?.....++++++%******::::%@@@@@@@@@@@@@@
@@@@@@@@.+++++#++++++++.++++++S++++:******::@@@@@@@@@@@@@@@@
@@@@@@@@@@@.++++..++++++.++++++++++:***:%@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@.+++.@@@@*????#%++++++?@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@*..+++:@@@@@@@@@@@@@@++++#@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@%.S%S.++++?@@@@@@@@@@@@@@@S+++++@@@@@@@@@@@@@@@@@@@@@@
@@@S.S?+SSS+++++*@@@@@@@@@@@@@%?...%?.@@@@@@@@@@@@@@@@@@@@@@
@%S?+SSSSSS+.@@@@@@@@@@@@@@@.*SS++%SSS+@@@@@@@@@@@@@@@@@@@@@
@,.SSS%.:@@@@@@@@@@@@@@@@@@S.SSSS#+SSSS+@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@.SSSS,%SSSSSS@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@,S+.@@@%SS+?@@@@@@@@@@@@@@@@@@@@@@
dinosaur
###Markdown
How does this function work? 1. Name Conversion First we convert the trainer name into a number.
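As a general illustration of what a deterministic name-to-number conversion can look like, here is a small sketch based on hashing. This is only an assumption for demonstration; it is not necessarily how `pokemon.skills.get_trainer` is implemented, and it will not reproduce the number shown below.
```
# Illustration only: map a name to a reproducible integer via hashing.
# This is NOT the pokemon package's algorithm.
import hashlib

def name_to_number(name, digits=8):
    digest = hashlib.md5(name.encode('utf-8')).hexdigest()
    return int(digest, 16) % 10**digits

print(name_to_number('dinosaur'))
```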
###Code
from pokemon.skills import get_trainer
# The name of the trainer (me) is dinosaur
name = 'dinosaur'
# Here is the number for the trainer
trainer = get_trainer(name)
print('%s is trainer number %s' %(name, trainer))
###Output
dinosaur is trainer number 89518204
###Markdown
2. Catch 'em All We then catch a complete list of Pokemon and derive a unique index into it using the trainer identification number.
###Code
from pokemon.skills import catch_em_all
# Then we get a complete
pokemons = catch_em_all()
# The IDs are numbers between 1 and the max
number_pokemons = len(pokemons)
pid = str(trainer % number_pokemons)
print('The pid of trainer %s is %s' %(name, pid))
###Output
The pid of trainer dinosaur is 286
###Markdown
3. Catch Away! We then retrieve the pokemon, including its ascii art and complete metadata.
###Code
from pokemon.skills import get_pokemon
import json # for pretty printing
# Here is the complete pokemon
pokemon = get_pokemon(pid=pid,pokemons=pokemons)
# And this is the avatar printed to the screen, with the addition of the name
avatar = pokemon[pid]["ascii"]
# Let's remove the avatar string (it's long and ugly) and print the remaining data, followed by the avatar!
del pokemon[pid]['ascii']
print(json.dumps(pokemon, indent=4, sort_keys=True))
###Output
{
"286": {
"abilities": [
"effect spore",
"poison heal",
"technician"
],
"height": 1.19,
"id": 286,
"japanese": "Kinogassa",
"link": "http://pokemondb.net/pokedex/breloom",
"name": "Breloom",
"type": [
"grass",
"fighting"
],
"weight": 86.4
}
}
###Markdown
And here is our Pokemon, who we now know to be "Breloom," the grass fighter! :D
###Code
print(avatar)
###Output
@@@@@@@@@@@@.?%..%%.@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@%*.........****.*%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@?.+S%?%.+++++.....++?.*..+@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@.......%...**+.++..%+.+##.#@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@,.%......%...S....?+%SSSS+S.@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@SS*..+***..........*+?+S+?.S@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@:+*..,,,:::*S+......*.++++.*@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@%S@,,,:::#.:%........+++++@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@S,,,::::.?..*........++++,@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@,,::::,,::::**....+%*.+++,@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@.S,@+,:::::::*:*.....%++S@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@.:::::.*...*..S++@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@,:::,@.%+++++S+@@@@@@@@@@@@@@@@@@.*..?@@@@
@@@@@@@@@@@@@@@@@@,:::,@@@@@@@@@@@@@@@@@@@@@@@@@@@+....++@@@
@@@@@@@@@@@@@@@S..,::::S%*@@@@@@@@@@@@@@@@@@@@+...++++++++++
@@@@@@@@:.,,,,:+::::::::*%+:*,,#@@@@@@@@@@@@@+....+.#S.S+...
@@@@@@@,,,:.,,:,::::::,::::::,#::,.@@@@@@@@@@@.++++%*.#+++++
@@@@@@@@.,,,,,,:?,,,,,,?::::::::,@@@@@@@@@@@@@@@@@S?::S+,,@@
@@@@@@@*,,,:,++SS:,,,,,?S+%:::::,,@@@@@@@@@@@@@@@@,,::@@@@@@
@@@@@@@%+%?++....*@,,,,SS+#S?S+@@@@@@@@@@@@@@@@@@+,,:.@@@@@@
@@@@:.+..S+*........*.S...S%S++@@@@@@@@@@@@@@@@@.,,:.@@@@@@@
@@@,....S%%............?.+.++S#%@@@@@@@@@@@@@@+,,,::@@@@@@@@
@@@@%S+S.+S..............?S+?.++%@@@@@@@@@@+,,,,::%@@@@@@@@@
@@@@@@.......................++++%@@@@:%*,,,,,::,:@@@@@@@@@@
@@@@@@?...+?++.....+++.......+++++.******::::::?@@@@@@@@@@@@
@@@@@@:....+?+++++++++?.....++++++%******::::%@@@@@@@@@@@@@@
@@@@@@@@.+++++#++++++++.++++++S++++:******::@@@@@@@@@@@@@@@@
@@@@@@@@@@@.++++..++++++.++++++++++:***:%@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@.+++.@@@@*????#%++++++?@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@*..+++:@@@@@@@@@@@@@@++++#@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@%.S%S.++++?@@@@@@@@@@@@@@@S+++++@@@@@@@@@@@@@@@@@@@@@@
@@@S.S?+SSS+++++*@@@@@@@@@@@@@%?...%?.@@@@@@@@@@@@@@@@@@@@@@
@%S?+SSSSSS+.@@@@@@@@@@@@@@@.*SS++%SSS+@@@@@@@@@@@@@@@@@@@@@
@,.SSS%.:@@@@@@@@@@@@@@@@@@S.SSSS#+SSSS+@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@.SSSS,%SSSSSS@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@,S+.@@@%SS+?@@@@@@@@@@@@@@@@@@@@@@
###Markdown
Initialization Import relevant packages
###Code
import os
import numpy as np
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt
import settree
###Output
_____no_output_____
###Markdown
Create dataset Create a synthetic dataset of 2D points, following exp. 1 in the paper (first quadrant). The dataset is comprised of sets of 2D points: a positive set contains a single point from the first quadrant, while a negative set contains no points from the first quadrant. We also configure a SetDataset object to be used in conjunction with SetTree; this object stores the sets in a convenient way.
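To make the labelling rule concrete, here is a small NumPy sketch that builds such sets by hand; the notebook itself uses the `settree.get_first_quarter_data` helper below.
```
# Illustration: a positive set has a point with both coordinates > 0
# (first quadrant), a negative set has none.
import numpy as np

rng = np.random.default_rng(0)

def make_set(positive, set_size=5):
    points = rng.uniform(-1, 1, size=(set_size, 2))
    points[:, 0] = -np.abs(points[:, 0])        # start with no first-quadrant points
    if positive:
        points[0] = rng.uniform(0, 1, size=2)   # place a single point in the first quadrant
    return points

pos_set, neg_set = make_set(True), make_set(False)
print(np.any(np.all(pos_set > 0, axis=1)), np.any(np.all(neg_set > 0, axis=1)))  # True False
```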
###Code
# Data params
SET_SIZE = 5
ITEM_DIM = 2
N_TRAIN = 1000
N_TEST = 1000
x_train, y_train = settree.get_first_quarter_data(N_TRAIN, min_items_set=SET_SIZE, max_items_set=SET_SIZE+1, dim=ITEM_DIM)
x_test, y_test = settree.get_first_quarter_data(N_TEST, min_items_set=SET_SIZE, max_items_set=SET_SIZE+1, dim=ITEM_DIM)
ds_train = settree.SetDataset(records=x_train, is_init=True)
ds_test = settree.SetDataset(records=x_test, is_init=True)
print('Train dataset object: ' + str(ds_train))
print('Test dataset object: ' + str(ds_test))
###Output
Train dataset object: SetDataset(num_records=1000, num_features=2)
Test dataset object: SetDataset(num_records=1000, num_features=2)
###Markdown
Configure the desired set-compatible split criteria for SetTree.
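Each of these operations summarizes one feature over all items of a set, so a candidate split can be read as "aggregate a feature with an operation, then compare against a threshold". A minimal sketch of that idea (an illustration only, not the library's internal splitter):
```
# Illustration: a set-compatible split aggregates a feature over each set
# and routes the set according to a threshold on the aggregated value.
import numpy as np

def set_split(sets, feature, op, threshold):
    # sets: a list of (n_items, n_features) arrays; one aggregated value per set
    aggregated = np.array([op(s[:, feature]) for s in sets])
    return aggregated > threshold               # which sets go to the right child

example_sets = [np.array([[-0.5, 0.2], [0.9, 0.8]]),
                np.array([[-0.3, -0.7], [-0.1, 0.4]])]
print(set_split(example_sets, feature=0, op=np.max, threshold=0.0))  # [ True False]
```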
###Code
list_of_operations = settree.OPERATIONS
print(settree.OPERATIONS)
###Output
[Op (min), Op (max), Op (sum), Op (mean), Op (sec_mom_mean), Op (harm_mean), Op (geo_mean)]
###Markdown
Configure and train Set-Tree model
###Code
# Model params
ATTN_SET_LIMIT = 3
USE_ATTN_SET = True
USE_ATTN_SET_COMP = True
MAX_DEPTH = 6
SEED = 0
set_tree_model = settree.SetTree(classifier=True,
criterion='entropy',
splitter='sklearn',
max_features=None,
min_samples_split=2,
operations=list_of_operations,
use_attention_set=USE_ATTN_SET,
use_attention_set_comp=USE_ATTN_SET_COMP,
attention_set_limit=ATTN_SET_LIMIT,
max_depth=MAX_DEPTH,
min_samples_leaf=None,
random_state=SEED)
set_tree_model.fit(ds_train, y_train)
set_tree_test_acc = (set_tree_model.predict(ds_test) == y_test).mean()
print('Set-Tree: Test accuracy: {:.4f}'.format(set_tree_test_acc))
###Output
Set-Tree: Test accuracy: 0.9980
###Markdown
Configure and train vanilla decision tree model
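The baseline tree works on fixed-length feature vectors, so each set first has to be flattened, presumably by applying the aggregation operations to each feature (this is what `settree.flatten_datasets` is used for below). A rough sketch of the idea, not the library's code:
```
# Illustration: flatten a variable-size set of items into a fixed-length
# vector by applying each aggregation operation to each feature column.
import numpy as np

def flatten_set(points, ops=(np.min, np.max, np.sum, np.mean)):
    return np.concatenate([op(points, axis=0) for op in ops])

example = np.array([[-0.5, 0.2], [0.9, 0.8], [-0.1, -0.3]])
print(flatten_set(example))   # length = n_ops * n_features = 4 * 2 = 8
```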
###Code
x_train_flat, x_test_flat = settree.flatten_datasets(ds_train, ds_test, list_of_operations)
tree_model = DecisionTreeClassifier(criterion="gini",
splitter="best",
max_depth=MAX_DEPTH,
min_samples_split=2,
min_samples_leaf=1,
min_weight_fraction_leaf=0.,
max_features=None,
random_state=SEED)
tree_model.fit(x_train_flat, y_train)
tree_test_acc = (tree_model.predict(x_test_flat) == y_test).mean()
print('Vanilla decision tree: Test accuracy: {:.4f}'.format(tree_test_acc))
###Output
Vanilla decision tree: Test accuracy: 0.7040
###Markdown
Visualize Set-Tree In order to plot the tree's structure, please install pydotplus: `conda install -c anaconda pydotplus`
###Code
from exps.eval_utils.plotting import save_dt_plot
print('The trained model has {} nodes and {} leafs'.format(set_tree_model.n_nodes,
set_tree_model.n_leafs))
save_dt_plot(set_tree_model, features_list=None, dir='', file_name='dt_graph.jpg')
fig=plt.figure(figsize=(12,8), dpi= 100, facecolor='w', edgecolor='k')
plt.imshow(plt.imread(os.path.join(os.getcwd(), 'dt_graph.jpg')))
plt.xticks([]), plt.yticks([])
plt.show()
###Output
The trained model has 4 nodes and 5 leafs
###Markdown
Visualize the items' importance
###Code
SCALE = 1e3
N = 2
SAMPLE_LABEL = 1
test_indx = np.where(y_test == SAMPLE_LABEL)[0][N]
sample_record = x_test[test_indx]
point2rank = settree.get_item2rank_from_tree(set_tree_model, settree.SetDataset(records=[sample_record], is_init=True))
min_val = SCALE * 2**(-max(list(point2rank.values())))
fig=plt.figure(figsize=(4,4), dpi= 100, facecolor='w', edgecolor='k')
for i, point in enumerate(sample_record):
if i in point2rank:
plt.scatter(point[0], point[1], s=SCALE * 2**(-point2rank[i]), color='orange')
else:
plt.scatter(point[0], point[1], s=min_val, color='blue')
plt.hlines(0, -1, 1, colors='black')
plt.vlines(0, -1, 1, colors='black')
plt.show()
print('This is a visualization of a sample test set of points in 2D.\nEach circle represents a point from the set and '
      'its scale is proportional to its importance rank.')
print('Legend:\nOrange points: appear in the model\'s attention-sets\nBlue points: don\'t appear in the model\'s attention-sets\n'
      'The scale of the points is proportional to their relative importance: a larger circle means the point'
      ' is more important in the decision process of the model.')
###Output
_____no_output_____
###Markdown
Keras Trainer An abstraction to train Keras CNN models for image classification. To use it, the `keras-model-specs` package must also be installed. The supported models are the following: `vgg16`, `vgg19`, `resnet50`, `resnet152`, `mobilenet_v1`, `xception`, `inception_resnet_v2`, `inception_v3`, `inception_v4`, `nasnet_large`, `nasnet_mobile`, `densenet_169`, `densenet_121`, `densenet_201`. The defaults are specified [here](https://github.com/triagemd/keras-model-specs/blob/master/keras_model_specs/model_specs.json). This will get the default model spec of the `mobilenet_v1` architecture:
###Code
model_spec = ModelSpec.get('mobilenet_v1')
###Output
_____no_output_____
###Markdown
Here you can see the contents:
###Code
print(json.dumps(model_spec.as_json(), indent=True))
###Output
{
"name": "mobilenet_v1",
"klass": "keras.applications.mobilenet.MobileNet",
"target_size": [
224,
224,
3
],
"preprocess_func": "between_plus_minus_1",
"preprocess_args": null
}
###Markdown
You can override the defaults by passing different parameters. Let's use `preprocess_func='mean_subtraction'` as the image preprocessing function, and let's also set the mean to subtract with `preprocess_args=dataset_mean`.
###Code
dataset_mean = [142.69182214, 119.05833338, 106.89884415]
model_spec = ModelSpec.get('mobilenet_v1', preprocess_func='mean_subtraction', preprocess_args=dataset_mean)
###Output
_____no_output_____
###Markdown
We'll see the changes now:
###Code
print(json.dumps(model_spec.as_json(), indent=True))
###Output
{
"name": "mobilenet_v1",
"klass": "keras.applications.mobilenet.MobileNet",
"target_size": [
224,
224,
3
],
"preprocess_func": "mean_subtraction",
"preprocess_args": [
142.69182214,
119.05833338,
106.89884415
]
}
###Markdown
Keras Trainer definition These are the default options:
###Code
Trainer.OPTIONS
###Output
_____no_output_____
###Markdown
Setting up the training data To train a model, the first thing you need is to have the data ready. There must be a parent folder containing one folder per class, e.g. for the cats vs dogs classification problem: `'data/train/cats'`, `'data/train/dogs'`. A validation set is also needed: `'data/valid/cats'`, `'data/valid/dogs'`.
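For reference, this layout can be created programmatically; a small sketch using only the folder names from the example above:
```
# Illustration: create the folder layout the trainer expects,
# one subfolder per class under the train and validation roots.
import os

for split in ('data/train', 'data/valid'):
    for class_name in ('cats', 'dogs'):
        os.makedirs(os.path.join(split, class_name), exist_ok=True)

# Folders for the trained models and the training logs
os.makedirs('output/models', exist_ok=True)
os.makedirs('output/logs', exist_ok=True)
```
You will need to specify these directories under `train_dataset_dir` and `val_dataset_dir`. You will also need to specify paths for the model outputs and logs, `output_model_dir` and `output_logs_dir`: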
###Code
train_dataset_dir = 'data/train/'
val_dataset_dir = 'data/valid/'
output_model_dir = 'output/models/'
output_logs_dir = 'output/logs/'
###Output
_____no_output_____
###Markdown
By default Keras Trainer will use Keras generators with data augmentation as follows:
```
train_data_generator = image.ImageDataGenerator(
    rotation_range=180,
    width_shift_range=0,
    height_shift_range=0,
    preprocessing_function=self.model_spec.preprocess_input,
    shear_range=0,
    zoom_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode='nearest'
)
```
You can instead define custom generators and pass them as the `train_data_generator` and `val_data_generator` parameters if you want different data augmentation, or pass complete iterators as `train_generator` and `val_generator`. Setting up the model, fine tuning a pre-trained model By default, weights from ImageNet will be loaded (`weights='imagenet'`) and the top dense layers will not be included (`include_top=False`), allowing you to define new top layers to fine-tune the network. You can choose `weights='None'` to train from scratch. You can specify the layers to put on top by passing a list of Keras layers as `top_layers`:
###Code
from keras.layers import Dense, Dropout, Activation
# Create a dropout layer with dropout rate 0.5
dropout = Dropout(0.5)
# Create a dense layer with 10 outputs
dense = Dense(10, name='dense')
# Create a softmax activation layer
softmax = Activation('softmax', name='softmax_activation')
top_layers = [dropout, dense, softmax]
###Output
_____no_output_____
###Markdown
If you don't, by default we'll add a `Dense` linear layer with `num_classes` outputs followed by a `Softmax` activation layer. Optimizers, Callbacks, Metrics and Loss Functions By default the SGD optimizer will be used with the default parameters shown in the OPTIONS:
```
self.optimizer = self.optimizer or optimizers.SGD(
    lr=self.sgd_lr,
    decay=self.decay,
    momentum=self.momentum,
    nesterov=True)
```
However, we allow the use of any optimizer: define it and pass it with the `optimizer` variable. Moreover, you can define variable learning rates in the form of a Keras Callback. You can define as many callbacks as you want! They go under `callback_list`. Let's see an example:
###Code
from keras.callbacks import LearningRateScheduler
from keras import backend as K
from keras import optimizers

# Decrease the learning rate by a factor of 10 at epochs 10 and 20.
# Note: `model` is assumed to refer to the Keras model being trained
# (e.g. trainer.model once the Trainer has been created).
def scheduler(epoch):
    if epoch == 10 or epoch == 20:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr / 10)
        print("lr changed to {}".format(lr / 10))
    return K.get_value(model.optimizer.lr)

schedule_lr = LearningRateScheduler(scheduler)
callback_list = [schedule_lr]

optim = optimizers.SGD(lr=0.001, decay=0.0005, momentum=0.9, nesterov=True)
###Output
_____no_output_____
###Markdown
You can also define a dictionary of class weights:
###Code
class_weights = {0: 13.883058178601447, 1: 1.4222778260019158}
###Output
_____no_output_____
###Markdown
Any custom metrics or loss functions can also be defined via `metrics` or `loss_function`; by default we will use `accuracy` and categorical cross-entropy, respectively. Creating the Trainer Once everything is ready, we create the trainer object:
###Code
trainer = Trainer(model_spec=model_spec,
train_dataset_dir=train_dataset_dir,
val_dataset_dir=val_dataset_dir,
output_model_dir=output_model_dir,
output_logs_dir=output_logs_dir,
batch_size=32,
epochs=10,
workers=16,
max_queue_size=128,
num_gpus=0,
optimizer=optim,
class_weights=class_weights,
verbose=False,
input_shape=(None, None, 3)
)
###Output
Training data
Found 23000 images belonging to 2 classes.
Validation data
Found 2000 images belonging to 2 classes.
###Markdown
The trainer object contains the model, and you can access it directly:
###Code
trainer.model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, None, None, 3) 0
_________________________________________________________________
conv1_pad (ZeroPadding2D) (None, None, None, 3) 0
_________________________________________________________________
conv1 (Conv2D) (None, None, None, 32) 864
_________________________________________________________________
conv1_bn (BatchNormalization (None, None, None, 32) 128
_________________________________________________________________
conv1_relu (ReLU) (None, None, None, 32) 0
_________________________________________________________________
conv_dw_1 (DepthwiseConv2D) (None, None, None, 32) 288
_________________________________________________________________
conv_dw_1_bn (BatchNormaliza (None, None, None, 32) 128
_________________________________________________________________
conv_dw_1_relu (ReLU) (None, None, None, 32) 0
_________________________________________________________________
conv_pw_1 (Conv2D) (None, None, None, 64) 2048
_________________________________________________________________
conv_pw_1_bn (BatchNormaliza (None, None, None, 64) 256
_________________________________________________________________
conv_pw_1_relu (ReLU) (None, None, None, 64) 0
_________________________________________________________________
conv_pad_2 (ZeroPadding2D) (None, None, None, 64) 0
_________________________________________________________________
conv_dw_2 (DepthwiseConv2D) (None, None, None, 64) 576
_________________________________________________________________
conv_dw_2_bn (BatchNormaliza (None, None, None, 64) 256
_________________________________________________________________
conv_dw_2_relu (ReLU) (None, None, None, 64) 0
_________________________________________________________________
conv_pw_2 (Conv2D) (None, None, None, 128) 8192
_________________________________________________________________
conv_pw_2_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_pw_2_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_dw_3 (DepthwiseConv2D) (None, None, None, 128) 1152
_________________________________________________________________
conv_dw_3_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_dw_3_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_pw_3 (Conv2D) (None, None, None, 128) 16384
_________________________________________________________________
conv_pw_3_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_pw_3_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_pad_4 (ZeroPadding2D) (None, None, None, 128) 0
_________________________________________________________________
conv_dw_4 (DepthwiseConv2D) (None, None, None, 128) 1152
_________________________________________________________________
conv_dw_4_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_dw_4_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_pw_4 (Conv2D) (None, None, None, 256) 32768
_________________________________________________________________
conv_pw_4_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_pw_4_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_dw_5 (DepthwiseConv2D) (None, None, None, 256) 2304
_________________________________________________________________
conv_dw_5_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_dw_5_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_pw_5 (Conv2D) (None, None, None, 256) 65536
_________________________________________________________________
conv_pw_5_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_pw_5_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_pad_6 (ZeroPadding2D) (None, None, None, 256) 0
_________________________________________________________________
conv_dw_6 (DepthwiseConv2D) (None, None, None, 256) 2304
_________________________________________________________________
conv_dw_6_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_dw_6_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_pw_6 (Conv2D) (None, None, None, 512) 131072
_________________________________________________________________
conv_pw_6_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_6_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_7 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_7_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_7_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_7 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_7_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_7_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_8 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_8_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_8_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_8 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_8_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_8_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_9 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_9_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_9_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_9 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_9_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_9_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_10 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_10_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_10_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_10 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_10_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_10_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_11 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_11_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_11_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_11 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_11_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_11_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pad_12 (ZeroPadding2D) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_12 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_12_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_12_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_12 (Conv2D) (None, None, None, 1024) 524288
_________________________________________________________________
conv_pw_12_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_pw_12_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
conv_dw_13 (DepthwiseConv2D) (None, None, None, 1024) 9216
_________________________________________________________________
conv_dw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_dw_13_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
conv_pw_13 (Conv2D) (None, None, None, 1024) 1048576
_________________________________________________________________
conv_pw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_pw_13_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
global_average_pooling2d_1 ( (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 2) 2050
_________________________________________________________________
act_softmax (Activation) (None, 2) 0
=================================================================
Total params: 3,230,914
Trainable params: 3,209,026
Non-trainable params: 21,888
_________________________________________________________________
###Markdown
Freeze Layers You can also freeze the layers you don't want to train by passing a list of their names or their indices:
###Code
# Let's freeze the first 10 layers
layers_to_freeze = np.arange(0,10,1)
print(layers_to_freeze)
trainer = Trainer(model_spec=model_spec,
train_dataset_dir=train_dataset_dir,
val_dataset_dir=val_dataset_dir,
output_model_dir=output_model_dir,
output_logs_dir=output_logs_dir,
batch_size=32,
epochs=2,
workers=16,
max_queue_size=128,
num_gpus=1,
optimizer=optim,
class_weights=class_weights,
verbose=False,
input_shape=(None, None, 3),
freeze_layers_list=layers_to_freeze
)
trainer.model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, None, None, 3) 0
_________________________________________________________________
conv1_pad (ZeroPadding2D) (None, None, None, 3) 0
_________________________________________________________________
conv1 (Conv2D) (None, None, None, 32) 864
_________________________________________________________________
conv1_bn (BatchNormalization (None, None, None, 32) 128
_________________________________________________________________
conv1_relu (ReLU) (None, None, None, 32) 0
_________________________________________________________________
conv_dw_1 (DepthwiseConv2D) (None, None, None, 32) 288
_________________________________________________________________
conv_dw_1_bn (BatchNormaliza (None, None, None, 32) 128
_________________________________________________________________
conv_dw_1_relu (ReLU) (None, None, None, 32) 0
_________________________________________________________________
conv_pw_1 (Conv2D) (None, None, None, 64) 2048
_________________________________________________________________
conv_pw_1_bn (BatchNormaliza (None, None, None, 64) 256
_________________________________________________________________
conv_pw_1_relu (ReLU) (None, None, None, 64) 0
_________________________________________________________________
conv_pad_2 (ZeroPadding2D) (None, None, None, 64) 0
_________________________________________________________________
conv_dw_2 (DepthwiseConv2D) (None, None, None, 64) 576
_________________________________________________________________
conv_dw_2_bn (BatchNormaliza (None, None, None, 64) 256
_________________________________________________________________
conv_dw_2_relu (ReLU) (None, None, None, 64) 0
_________________________________________________________________
conv_pw_2 (Conv2D) (None, None, None, 128) 8192
_________________________________________________________________
conv_pw_2_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_pw_2_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_dw_3 (DepthwiseConv2D) (None, None, None, 128) 1152
_________________________________________________________________
conv_dw_3_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_dw_3_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_pw_3 (Conv2D) (None, None, None, 128) 16384
_________________________________________________________________
conv_pw_3_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_pw_3_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_pad_4 (ZeroPadding2D) (None, None, None, 128) 0
_________________________________________________________________
conv_dw_4 (DepthwiseConv2D) (None, None, None, 128) 1152
_________________________________________________________________
conv_dw_4_bn (BatchNormaliza (None, None, None, 128) 512
_________________________________________________________________
conv_dw_4_relu (ReLU) (None, None, None, 128) 0
_________________________________________________________________
conv_pw_4 (Conv2D) (None, None, None, 256) 32768
_________________________________________________________________
conv_pw_4_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_pw_4_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_dw_5 (DepthwiseConv2D) (None, None, None, 256) 2304
_________________________________________________________________
conv_dw_5_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_dw_5_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_pw_5 (Conv2D) (None, None, None, 256) 65536
_________________________________________________________________
conv_pw_5_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_pw_5_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_pad_6 (ZeroPadding2D) (None, None, None, 256) 0
_________________________________________________________________
conv_dw_6 (DepthwiseConv2D) (None, None, None, 256) 2304
_________________________________________________________________
conv_dw_6_bn (BatchNormaliza (None, None, None, 256) 1024
_________________________________________________________________
conv_dw_6_relu (ReLU) (None, None, None, 256) 0
_________________________________________________________________
conv_pw_6 (Conv2D) (None, None, None, 512) 131072
_________________________________________________________________
conv_pw_6_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_6_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_7 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_7_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_7_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_7 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_7_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_7_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_8 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_8_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_8_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_8 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_8_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_8_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_9 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_9_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_9_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_9 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_9_bn (BatchNormaliza (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_9_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_10 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_10_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_10_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_10 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_10_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_10_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_11 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_11_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_11_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_11 (Conv2D) (None, None, None, 512) 262144
_________________________________________________________________
conv_pw_11_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_pw_11_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pad_12 (ZeroPadding2D) (None, None, None, 512) 0
_________________________________________________________________
conv_dw_12 (DepthwiseConv2D) (None, None, None, 512) 4608
_________________________________________________________________
conv_dw_12_bn (BatchNormaliz (None, None, None, 512) 2048
_________________________________________________________________
conv_dw_12_relu (ReLU) (None, None, None, 512) 0
_________________________________________________________________
conv_pw_12 (Conv2D) (None, None, None, 1024) 524288
_________________________________________________________________
conv_pw_12_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_pw_12_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
conv_dw_13 (DepthwiseConv2D) (None, None, None, 1024) 9216
_________________________________________________________________
conv_dw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_dw_13_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
conv_pw_13 (Conv2D) (None, None, None, 1024) 1048576
_________________________________________________________________
conv_pw_13_bn (BatchNormaliz (None, None, None, 1024) 4096
_________________________________________________________________
conv_pw_13_relu (ReLU) (None, None, None, 1024) 0
_________________________________________________________________
global_average_pooling2d_2 ( (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 2) 2050
_________________________________________________________________
act_softmax (Activation) (None, 2) 0
=================================================================
Total params: 3,230,914
Trainable params: 3,205,570
Non-trainable params: 25,344
_________________________________________________________________
###Markdown
Model Training Now, let's train the model
###Code
trainer.run()
###Output
Epoch 1/2
718/718 [==============================] - 57s 80ms/step - loss: 0.8580 - acc: 0.8762 - val_loss: 0.3752 - val_acc: 0.8740
Epoch 00001: val_acc improved from -inf to 0.87399, saving model to output/models/model_max_acc.hdf5
Epoch 00001: val_loss improved from inf to 0.37516, saving model to output/models/model_min_loss.hdf5
Epoch 2/2
718/718 [==============================] - 51s 71ms/step - loss: 0.4881 - acc: 0.9212 - val_loss: 0.1630 - val_acc: 0.9345
Epoch 00002: val_acc improved from 0.87399 to 0.93445, saving model to output/models/model_max_acc.hdf5
Epoch 00002: val_loss improved from 0.37516 to 0.16299, saving model to output/models/model_min_loss.hdf5
###Markdown
After the model is trained we can access its history:
###Code
history = trainer.history
###Output
_____no_output_____
###Markdown
And inside the history we can find the training stats:
###Code
for i in range(0,len(history.history['val_acc'])):
print('Epoch %d' %i)
print('Training Accuracy was %.3f' %history.history['acc'][i])
print('Training Loss was %.3f' %history.history['loss'][i])
print('Validation Accuracy was %.3f' %history.history['val_acc'][i])
print('Validation Loss was %.3f' %history.history['val_loss'][i])
print()
###Output
Epoch 0
Training Accuracy was 0.876
Training Loss was 0.858
Validation Accuracy was 0.874
Validation Loss was 0.375
Epoch 1
Training Accuracy was 0.921
Training Loss was 0.488
Validation Accuracy was 0.934
Validation Loss was 0.163
###Markdown
Training with multiple losses
###Code
import keras
from keras.applications import mobilenet
from keras_trainer.losses import entropy_penalty_loss
model = mobilenet.MobileNet(alpha=0.25, weights='imagenet', include_top=False, pooling='avg')
top_layers = [keras.layers.Dense(2, name='dense'),
keras.layers.Activation('softmax', name='act_softmax')]
# Layer Assembling
for i, layer in enumerate(top_layers):
if i == 0:
top_layers[i] = layer(model.output)
else:
top_layers[i] = layer(top_layers[i - 1])
model = keras.models.Model(model.input, [top_layers[-1], top_layers[-1]])
trainer = Trainer(custom_model=model,
model_spec=ModelSpec.get('custom', preprocess_func='between_plus_minus_1', target_size=[224, 224, 3]),
train_dataset_dir=train_dataset_dir,
val_dataset_dir=val_dataset_dir,
output_model_dir=output_model_dir,
output_logs_dir=output_logs_dir,
batch_size=128,
epochs=2,
workers=16,
num_classes=2,
max_queue_size=128,
num_gpus=1,
optimizer=optim,
loss_function=['categorical_crossentropy', entropy_penalty_loss],
loss_weights=[1.0, 0.25],
verbose=False,
input_shape=(None, None, 3),
)
trainer.run()
###Output
Epoch 1/2
179/179 [==============================] - 56s 313ms/step - loss: 0.3902 - act_softmax_loss: -0.4200 - act_softmax_acc: 0.7750 - act_softmax_acc_1: 0.7750 - val_loss: 0.1962 - val_act_softmax_loss: -0.3289 - val_act_softmax_acc: 0.8844 - val_act_softmax_acc_1: 0.8844
Epoch 00001: val_act_softmax_acc improved from -inf to 0.88438, saving model to output/models/model_max_acc.hdf5
Epoch 00001: val_loss improved from inf to 0.19624, saving model to output/models/model_min_loss.hdf5
Epoch 2/2
179/179 [==============================] - 45s 253ms/step - loss: 0.2323 - act_softmax_loss: -0.3623 - act_softmax_acc: 0.8602 - act_softmax_acc_1: 0.8602 - val_loss: 0.1724 - val_act_softmax_loss: -0.2929 - val_act_softmax_acc: 0.8926 - val_act_softmax_acc_1: 0.8926
Epoch 00002: val_act_softmax_acc improved from 0.88438 to 0.89263, saving model to output/models/model_max_acc.hdf5
Epoch 00002: val_loss improved from 0.19624 to 0.17236, saving model to output/models/model_min_loss.hdf5
###Markdown
Training with probabilistic labels using Dataframes
###Code
train_catdog_dataset_path = os.path.abspath(os.path.join('tests', 'files', 'catdog', 'train'))
train_catdog_dataframe_path = os.path.abspath(os.path.join('tests', 'files', 'catdog', 'train_data.json'))
val_catdog_dataset_path = os.path.abspath(os.path.join('tests', 'files', 'catdog', 'val'))
val_catdog_dataframe_path = os.path.abspath(os.path.join('tests', 'files', 'catdog', 'val_data.json'))
trainer = Trainer(model_spec=model_spec,
train_dataset_dir=train_catdog_dataset_path,
train_dataset_dataframe=train_catdog_dataframe_path,
val_dataset_dir=val_catdog_dataset_path,
val_dataset_dataframe=val_catdog_dataframe_path,
output_model_dir=output_model_dir,
output_logs_dir=output_logs_dir,
batch_size=1,
epochs=3,
workers=16,
num_classes=2,
max_queue_size=128,
num_gpus=1,
optimizer=optim,
verbose=False,
input_shape=(None, None, 3),
)
trainer.train_dataset_dataframe
trainer.run()
###Output
Epoch 1/3
6/6 [==============================] - 3s 471ms/step - loss: 1.0266 - acc: 0.3333 - val_loss: 2.0801 - val_acc: 0.5000
Epoch 00001: val_acc improved from -inf to 0.50000, saving model to output/models/model_max_acc.hdf5
Epoch 00001: val_loss improved from inf to 2.08009, saving model to output/models/model_min_loss.hdf5
Epoch 2/3
6/6 [==============================] - 0s 30ms/step - loss: 0.9251 - acc: 0.5000 - val_loss: 1.6879 - val_acc: 0.5000
Epoch 00002: val_acc did not improve from 0.50000
Epoch 00002: val_loss improved from 2.08009 to 1.68788, saving model to output/models/model_min_loss.hdf5
Epoch 3/3
6/6 [==============================] - 0s 29ms/step - loss: 0.6888 - acc: 0.5000 - val_loss: 1.2694 - val_acc: 0.5000
Epoch 00003: val_acc did not improve from 0.50000
Epoch 00003: val_loss improved from 1.68788 to 1.26939, saving model to output/models/model_min_loss.hdf5
###Markdown
Load dataset
###Code
g = load_dataset('data/cora_ml.npz')
A, X, z = g['A'], g['X'], g['z']
###Output
_____no_output_____
###Markdown
Train a model and evaluate the link prediction performance
###Code
g2g = Graph2Gauss(A=A, X=X, L=64, verbose=True, p_val=0.10, p_test=0.05)
sess = g2g.train()
test_auc, test_ap = score_link_prediction(g2g.test_ground_truth, sess.run(g2g.neg_test_energy))
print('test_auc: {:.4f}, test_ap: {:.4f}'.format(test_auc, test_ap))
###Output
test_auc: 0.9753, test_ap: 0.9766
###Markdown
Train another model and evaluate the node classification performance
###Code
g2g = Graph2Gauss(A=A, X=X, L=64, verbose=True, p_val=0.0, p_test=0.00, max_iter=150)
sess = g2g.train()
mu, sigma = sess.run([g2g.mu, g2g.sigma])
f1_micro, f1_macro = score_node_classification(mu, z, n_repeat=1, norm=True)
print('f1_micro: {:.4f}, f1_macro: {:.4f}'.format(f1_micro, f1_macro))
###Output
f1_micro: 0.8349, f1_macro: 0.8220
###Markdown
Load and preprocess the data
###Code
data_dir = os.path.expanduser("~/data/cora/")
cora_location = os.path.expanduser(os.path.join(data_dir, "cora.cites"))
g_nx = nx.read_edgelist(path=cora_location)
adj_matrix = nx.to_numpy_array(g_nx)
adj_matrix = sparse.csr_matrix(adj_matrix)
adj_matrix.shape
type(adj_matrix)
if False:
graph = load_dataset('data/cora.npz')
adj_matrix = graph['adj_matrix']
labels = graph['labels']
adj_matrix, labels = standardize(adj_matrix, labels)
n_nodes = adj_matrix.shape[0]
###Output
_____no_output_____
###Markdown
Set hyperparameters
###Code
n_flips = 1000
dim = 32
window_size = 5
###Output
_____no_output_____
###Markdown
Generate candidate edge flips
###Code
candidates = generate_candidates_removal(adj_matrix=adj_matrix)
###Output
_____no_output_____
###Markdown
Compute simple baselines
###Code
b_eig_flips = baseline_eigencentrality_top_flips(adj_matrix, candidates, n_flips)
b_deg_flips = baseline_degree_top_flips(adj_matrix, candidates, n_flips, True)
b_rnd_flips = baseline_random_top_flips(candidates, n_flips, 0)
###Output
_____no_output_____
###Markdown
Compute adversarial flips using eigenvalue perturbation
###Code
our_flips = perturbation_top_flips(adj_matrix, candidates, n_flips, dim, window_size)
our_flips
###Output
_____no_output_____
###Markdown
Evaluate classification performance using the skipgram objective
###Code
for flips, name in zip([None, b_rnd_flips, b_deg_flips, None, our_flips],
['cln', 'rnd', 'deg', 'eig', 'our']):
if flips is not None:
adj_matrix_flipped = flip_candidates(adj_matrix, flips)
else:
adj_matrix_flipped = adj_matrix
embedding = deepwalk_skipgram(adj_matrix_flipped, dim, window_size=window_size)
f1_scores_mean, _ = evaluate_embedding_node_classification(embedding, labels)
print('{}, F1: {:.2f} {:.2f}'.format(name, f1_scores_mean[0], f1_scores_mean[1]))
###Output
cln, F1: 0.81 0.77
###Markdown
Evaluate classification performance using the SVD objective
###Code
for flips, name in zip([None, b_rnd_flips, b_deg_flips, None, our_flips],
['cln', 'rnd', 'deg', 'eig', 'our']):
if flips is not None:
adj_matrix_flipped = flip_candidates(adj_matrix, flips)
else:
adj_matrix_flipped = adj_matrix
embedding, _, _, _ = deepwalk_svd(adj_matrix_flipped, window_size, dim)
f1_scores_mean, _ = evaluate_embedding_node_classification(embedding, labels)
print('{}, F1: {:.2f} {:.2f}'.format(name, f1_scores_mean[0], f1_scores_mean[1]))
###Output
_____no_output_____
###Markdown
Store attacked graph
###Code
def attack_graph(adj_matrix, n_flips, dim, window_size, seed=0, method="add"):
if method=="add":
candidates = generate_candidates_addition(adj_matrix=adj_matrix,
n_candidates=n_flips,
seed=seed)
else:
candidates = generate_candidates_removal(adj_matrix=adj_matrix,
seed=seed)
our_flips = perturbation_top_flips(adj_matrix, candidates, n_flips, dim, window_size)
    # NOTE: the lines below flip every candidate edge, not only the top `our_flips` selected above
A = np.array(adj_matrix.todense())
A_flipped = A.copy()
A_flipped[candidates[:, 0], candidates[:, 1]] = 1 - A[candidates[:, 0], candidates[:, 1]]
A_flipped[candidates[:, 1], candidates[:, 0]] = 1 - A[candidates[:, 1], candidates[:, 0]]
return A_flipped
n_flips = 1000
dim = 32
window_size = 5
candidates = generate_candidates_removal(adj_matrix=adj_matrix)
our_flips = perturbation_top_flips(adj_matrix, candidates, n_flips, dim, window_size)
data = 'cora'
ele = 'attack'
#corrupted_A = corrupt_adjacency(A, ele, l)
dir_name = os.path.join("attacked_datasets",data,ele)
print(dir_name)
i = 1
print(type(adj_matrix))
A = np.array(adj_matrix.todense())
print(type(A))
# NOTE: the code below flips every candidate edge; flip_candidates (commented out) would apply only the selected our_flips
#adj_matrix_flipped = flip_candidates(A, our_flips)
A_flipped = A.copy()
A_flipped[candidates[:, 0], candidates[:, 1]] = 1 - A[candidates[:, 0], candidates[:, 1]]
A_flipped[candidates[:, 1], candidates[:, 0]] = 1 - A[candidates[:, 1], candidates[:, 0]]
if not os.path.exists(dir_name):
os.makedirs(dir_name)
file_name = data + "_" + ele + "_"+str(n_flips)+"_v"+str(i)
print(f"file_name: {file_name}")
np.save(os.path.join(dir_name,file_name), A_flipped)
num_flips = [ -2000, -1000, -500, 500, 1000, 2000, 5000 ]
for n_flips in num_flips:
print(f"Calculating for n_flips={n_flips}")
if n_flips < 0:
method = "remove"
n_flips = -n_flips
else:
method = "add"
A_flipped = attack_graph(adj_matrix=adj_matrix,
n_flips=n_flips,
dim=dim,
window_size=window_size,
method=method,
seed=0)
if not os.path.exists(dir_name):
os.makedirs(dir_name)
file_name = data + "_" + ele + "_"+str(n_flips)+"_"+method #+"_v"+str(i)
print(f"file_name: {file_name}")
np.save(os.path.join(dir_name,file_name), A_flipped)
graph = nx.from_numpy_array(A_flipped)
file_name += ".gpickle"
nx.write_gpickle(graph, os.path.join(dir_name, file_name))
adj_matrix.shape
###Output
_____no_output_____
###Markdown
Example Document This is an example notebook to try out the ["Notebook as PDF"](https://github.com/betatim/notebook-as-pdf) extension. It contains a few plots from the excellent [matplotlib gallery](https://matplotlib.org/3.1.1/gallery/index.html).
To try out the extension click "File -> Download as -> PDF via HTML". This will convert this notebook into a PDF. This extension has three new features compared to the official "save as PDF" extension:
* it produces a PDF with the smallest number of page breaks,
* the original notebook is attached to the PDF; and
* this extension does not require LaTeX.
The created PDF will have as few pages as possible, in many cases only one. This is useful if you are exporting your notebook to a PDF for sharing with others who will view it on a screen.
To make it easier to reproduce the contents of the PDF at a later date, the original notebook is attached to the PDF. Not all PDF viewers know how to deal with attachments. This means you need to use Acrobat Reader or pdf.js to be able to get the attachment from the PDF. Preview for OSX does not know how to display/give you access to PDF attachments.
###Code
import numpy as np
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
# Compute pie slices
N = 20
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
colors = plt.cm.viridis(radii / 10.)
ax = plt.subplot(111, projection='polar')
ax.bar(theta, radii, width=width, bottom=0.0, color=colors, alpha=0.5)
###Output
_____no_output_____
###Markdown
Below we show some more lines that go up and go down. These are noisy lines because we use a random number generator to create them. Fantastic isn't it?
###Code
x = np.linspace(0, 10)
# Fixing random state for reproducibility
np.random.seed(19680801)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x) + x + np.random.randn(50))
ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) - 2 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + np.random.randn(50));
###Output
_____no_output_____
###Markdown
Author: **Rodrigo C Boufleur (c)** | Date: March, 2021 | Email: rcboufleur at gmail.com | Version: 1.0
The code aims to detrend periodic signals of binary stars in light curves. The data is modeled after the following equation:
\\[ y(t) = x(t) + a(t) + \epsilon(t) \\]
where \\(x(t)\\) describes the underlying periodic signal, \\(a(t)\\) describes the trend in the data, and \\(\epsilon(t)\\) is the error associated with each data point.
The code does not aim to minimize the error function. Instead, it assesses the common variations present in the data using phase-folding methods. Once we are close to the periodic solution, the mean phase-folded light curve is calculated and subtracted from the original signal. The resulting curve is an estimate of the trend present in the data. A new period can then be computed from the original data with the trend subtracted. (A toy sketch of this phase-folding idea is given after the imports below.)
Eclipses can be masked, with the corresponding regions being interpolated. Import dependencies
###Code
from PeriodicDetrend import DetrendLightCurve
import numpy as np
%matplotlib widget
# auto reload local modules
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
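As a rough illustration of the phase-folding idea described above (a toy sketch under simplified assumptions, not the package's implementation), one can fold a synthetic light curve at a known period, take the mean folded curve as the periodic part, and read the residual as a trend estimate:
```
import numpy as np

# Toy sketch: fold at the (known) period, estimate the periodic part as the mean folded
# curve, and treat the residual as the trend estimate (up to a constant offset).
period = 2.0
t = np.linspace(0, 20, 2000)
flux = np.sin(2 * np.pi * t / period) + 0.05 * t + 0.05 * np.random.randn(t.size)

phase = (t % period) / period
n_bins = 50
idx = np.digitize(phase, np.linspace(0, 1, n_bins + 1)) - 1
mean_folded = np.array([flux[idx == i].mean() for i in range(n_bins)])

trend_estimate = flux - mean_folded[idx]   # approximates the 0.05*t drift plus noise
```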
###Markdown
Read the input data
###Code
file = r'k2sc_240253681.txt'
x, y = np.loadtxt(file, delimiter=',', skiprows=1, unpack=True)
###Output
_____no_output_____
###Markdown
Initialize the ApplicationInstantiate the DetrendLightCurve object passing the parameters time (x), flux (y), and a name used to save the data.```lcd = DetrendLightCurve(x, y, name='k2sc_235009762')```Then, display the application```lcd.display()```
###Code
lc = DetrendLightCurve(x, y, name='k2sc_240253681')
# If the period is already known, it can be passed at instantiation
# lc = DetrendLightCurve(x, y, name='k2sc_240253681', period=2.431)
lc.display()
###Output
_____no_output_____
###Markdown
Analyzing Further Residuals
###Code
# let's retrieve the residuals from the calculations done above
residual = lc.trend
residual_lc = DetrendLightCurve(x, residual, name='k2sc_240253681_residual')
residual_lc.display()
###Output
_____no_output_____
###Markdown
Pandas Highcharts Example
* Use [Highcharts](http://highcharts.com) to plot [pandas](http://pandas.pydata.org) DataFrame
* Code on Github at [pandas-highcharts](https://github.com/gtnx/pandas-highcharts)
Import
###Code
%load_ext autoreload
%autoreload 2
import pandas as pd
import datetime
import os
import numpy as np
from pandas.compat import StringIO
from pandas.io.common import urlopen
from IPython.display import display, display_pretty, Javascript, HTML
from pandas_highcharts.core import serialize
from pandas_highcharts.display import display_charts
import matplotlib.pyplot as plt
# Data retrieved from http://www.quandl.com/api/v1/datasets/ODA/DEU_PCPIPCH.csv?column=1
data = """Date,Value\n2019-12-31,1.7\n2018-12-31,1.7\n2017-12-31,1.7\n2016-12-31,1.5\n2015-12-31,1.247\n2014-12-31,0.896\n2013-12-31,1.601\n2012-12-31,2.13\n2011-12-31,2.498\n2010-12-31,1.158\n2009-12-31,0.226\n2008-12-31,2.738\n2007-12-31,2.285\n2006-12-31,1.784\n2005-12-31,1.92\n2004-12-31,1.799\n2003-12-31,1.022\n2002-12-31,1.346\n2001-12-31,1.904\n2000-12-31,1.418\n1999-12-31,0.626\n1998-12-31,0.593\n1997-12-31,1.542\n1996-12-31,1.19\n1995-12-31,1.733\n1994-12-31,2.717\n1993-12-31,4.476\n1992-12-31,5.046\n1991-12-31,3.474\n1990-12-31,2.687\n1989-12-31,2.778\n1988-12-31,1.274\n1987-12-31,0.242\n1986-12-31,-0.125\n1985-12-31,2.084\n1984-12-31,2.396\n1983-12-31,3.284\n1982-12-31,5.256\n1981-12-31,6.324\n1980-12-31,5.447\n"""
df = pd.read_csv(StringIO(data), index_col=0, parse_dates=True)
df = df.sort_index()
###Output
_____no_output_____
###Markdown
Basic examples
###Code
display_charts(df, title="Germany inflation rate")
display_charts(df, chart_type="stock", title="Germany inflation rate")
display_charts(df, kind="bar", title="Germany inflation rate")
display_charts(df, kind="barh", title="Germany inflation rate")
display_charts(df, title="Germany inflation rate", legend=None, kind="bar", figsize = (400, 200))
display_charts(df, title="Germany inflation rate", kind="bar", render_to="chart5", zoom="xy")
# Data retrieved from https://www.quandl.com/api/v1/datasets/CVR/ANGEL_SECTORS.csv
data = """Year,Software,Healthcare,Hardware,Biotech,Telecom,Manufacturing,Financial Products and Services,IT Services,Industrial/Energy,Retail,Media\n2013-12-31,23.0,14.0,,11.0,,,7.0,,,7.0,16.0\n2012-12-31,23.0,14.0,,11.0,,,,,7.0,12.0,7.0\n2011-12-31,23.0,19.0,,13.0,,,,7.0,13.0,,5.0\n2010-12-31,16.0,30.0,,15.0,,,,5.0,8.0,5.0,\n2009-12-31,19.0,17.0,,8.0,,,5.0,,17.0,9.0,\n2008-12-31,13.0,16.0,,11.0,,,,,8.0,12.0,7.0\n2007-12-31,27.0,19.0,,12.0,,,,,8.0,6.0,5.0\n2006-12-31,18.0,21.0,,18.0,,,6.0,,6.0,8.0,\n2005-12-31,18.0,20.0,8.0,12.0,,,,6.0,6.0,,6.0\n2004-12-31,22.0,16.0,10.0,10.0,6.0,,8.0,8.0,,7.0,\n2003-12-31,26.0,13.0,12.0,11.0,5.0,12.0,,,,,\n2002-12-31,40.0,14.0,5.0,5.0,5.0,,,,,,\n"""
df3 = pd.read_csv(StringIO(data), index_col=0, parse_dates=True)
df3 = df3.fillna(0) / 100
df4 = pd.DataFrame(df3.mean(), columns=['ratio'])
df4['total'] = 1
display_charts(df4, kind='pie', y=['ratio'], title='Angel Deals By Sector', tooltip={'pointFormat': '{series.name}: <b>{point.percentage:.1f}%</b>'})
###Output
_____no_output_____
###Markdown
Highcharts specific
###Code
df4 = pd.DataFrame(df3.sum(), columns=['sum'])
#df4.to_dict('series').items()[0][1].tolist()
display_charts(df4, polar=True, kind='bar', ylim=(0, 2.3), title='Angel Deals By Sector')
###Output
_____no_output_____
###Markdown
Overview This example demonstrates how to scan query history from a data warehouse and save it in the data lineage app. The app automatically parses and extracts data lineage from the queries.
The example consists of the following sequence of operations:
* Start docker containers containing a demo. Refer to [docs](https://tokern.io/docs/data-lineage/installation) for detailed instructions on installing demo-wikimedia.
* Scan and send queries from query history to the data lineage app.
* Visualize the graph by visiting the Tokern UI.
* Analyze the graph
Installation This demo requires the wikimedia demo to be running. Start the demo using the following instructions:
In a new directory run
    wget https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/wikimedia-demo.yml
or run
    curl https://raw.githubusercontent.com/tokern/data-lineage/master/install-manifests/docker-compose/wikimedia-demo.yml -o docker-compose.yml
Run docker-compose
    docker-compose up -d
Verify containers are running
    docker container ls | grep tokern
###Code
# Required configuration for API and wikimedia database network address
docker_address = "http://127.0.0.1:8000"
wikimedia_db = {
"username": "etldev",
"password": "3tld3v",
"uri": "tokern-demo-wikimedia",
"port": "5432",
"database": "wikimedia"
}
# Setup a connection to catalog using the SDK.
from data_lineage import Catalog
catalog = Catalog(docker_address)
# Register wikimedia datawarehouse with data-lineage app.
source = catalog.add_source(name="wikimedia", source_type="postgresql", **wikimedia_db)
# Scan the wikimedia data warehouse and register all schemata, tables and columns.
catalog.scan_source(source)
import json
with open("test/queries.json", "r") as file:
queries = json.load(file)
from data_lineage import Parser
parser = Parser(docker_address)
for query in queries:
print(query)
parser.parse(**query, source=source)
###Output
_____no_output_____
###Markdown
**conformalMaps** An interactive package for interactive use of conformal mappings
* Function **w = f(z) should be entered in standard Pythonic form**, (ex: z**2 for $z^2$)
* Functions entered should be available in the SymPy lib and must be entered in the same form, because internally it uses sympy for symbolic conversion (see the short sketch after this cell).
* The entered function w can be a function of z or of the form x + i y. 'x' and 'y' are the real and imaginary variables respectively.
* Typical usage
```
z**2
x**2 + I*y**2
tan(z)
```
* **Note: use 'I' for the imaginary number $\rm{i}$ iota**
* Use the transformation slider to see the transformation
* Limit range limits the grid to $\pm$ slider value
* Ticks increases the number of gridlines
Supported Grids to transform
* **Rectangle**
* **Square**
* **Donut**
* **Circle**
* **Single circle**
Advanced builtin functions for w
* **Rectangle to Eccentric Annulus**
* **Rectangle to Elliptic Annulus**
* **Concentric Annulus To Eccentric Annulus**
**Run the below Cells First** **If you have installed all the dependencies, or are opening this repo with Binder, skip the next cell**
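As a rough sketch of that symbolic conversion (an illustration only; the package's internal code may differ), a user-entered string such as 'z**2' can be turned into a numerical map with SymPy:
```
import sympy as sym

# Sketch: evaluate a user-entered expression such as 'z**2' numerically via sympy.
# (Illustrative only; conformalMaps' internal conversion may differ.)
z = sym.symbols('z')
w = sym.sympify('z**2')
f = sym.lambdify(z, w, 'numpy')
print(f(1 + 2j))   # (-3+4j)
```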
###Code
!pip install -r requirements.txt
!jupyter labextension install @jupyter-widgets/jupyterlab-manager
!jupyter nbextension enable --py widgetsnbextension
from conformalMaps.grids import *
from conformalMaps.mappings import RectangleToEccentricAnnulus, RectangleToEllipticAnnulus, ConcentricAnnulusToEccentricAnnulus
from ipywidgets import widgets
from ipywidgets import HBox,VBox
###Output
_____no_output_____
###Markdown
Using Rectangle grid
###Code
rect = Rectangle()
left = widgets.FloatSlider(min=-10, max=10, value=-1, description='left')
bottom = widgets.FloatSlider(min=-10, max=10, value=-1, description='bottom')
top = widgets.FloatSlider(min=-10, max=10, value=1, description='top')
right = widgets.FloatSlider(min=-10, max=10, value=1, description='right')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
Hticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Hticks')
Vticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Vticks')
function = widgets.Text( value = 'z**2' , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 5, description='anim')
play = widgets.Play(min= 0, max = 100, step = 5)
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(rect.updateFunc,
w = function,
left = left,
right = right,
top= top,
bottom = bottom,
fine = fine,
Hticks = Hticks,
Vticks = Vticks,
frame = frame
)
w1 = VBox([ left, right])
w2 = VBox([top,bottom])
w3 = VBox([Hticks,Vticks])
w4 = HBox([w1,w2,w3])
w5 = HBox([function, fine])
anim_slider = HBox([play, frame])
w = VBox([w4, w5, anim_slider, rect.show()])
w
rect.check_analytic()
###Output
The function is conformal, angles are preserved :)
###Markdown
Using Square Grid
###Code
sq = Square()
side = widgets.FloatSlider(min=0.01, max=10, value=1, description='side')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
Hticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Hticks')
Vticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Vticks')
function = widgets.Text( value = 'z**2' , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 5, description='anim')
play = widgets.Play(min= 0, max = 100, step = 5)
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(sq.updateFunc,
w = function,
side = side,
fine = fine,
Hticks = Hticks,
Vticks = Vticks,
frame = frame
)
# w1 = VBox([ left, right])
# w2 = VBox([top,bottom])
box1 = HBox([side, Hticks,Vticks])
box2 = HBox([function, fine])
anim_slider = HBox([play, frame])
w = VBox([box1, box2, anim_slider, sq.show()])
w
sq.check_analytic()
r = sym.sqrt(x**2+y**2)
f = x*(sym.sqrt(x**2+y**2-x**2*y**2))/r + sym.I*y*(sym.sqrt(x**2+y**2-x**2*y**2))/r # transforms unit square
sq2 = Square()
side = widgets.FloatSlider(min=0.01, max=10, value=1, description='side')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
Hticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Hticks')
Vticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Vticks')
function = widgets.Text( value = '%s' %(f) , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 5, description='anim')
play = widgets.Play(min= 0, max = 100, step = 5)
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(sq2.updateFunc,
w = function,
side = side,
fine = fine,
Hticks = Hticks,
Vticks = Vticks,
frame = frame
)
# w1 = VBox([ left, right])
# w2 = VBox([top,bottom])
box1 = HBox([side, Hticks,Vticks])
box2 = HBox([function, fine])
anim_slider = HBox([play, frame])
w = VBox([box1, box2, anim_slider, sq2.show()])
w
sq2.check_analytic()
###Output
The function is not conformal, angles are not preserved ...
###Markdown
Using Donut Grid
###Code
donut = Donut()
rin = widgets.FloatSlider(min=0, max=10, value=1, description='Rin')
rout = widgets.FloatSlider(min=1, max=20, value=3, description='Rout')
x0 = widgets.FloatSlider(min=-10, max=10, value=0, description='x0')
y0 = widgets.FloatSlider(min=-10, max=10, value=0, description='y0')
cticks = widgets.IntSlider(min = 2, max = 50, value=4, description='cticks')
rticks = widgets.IntSlider(min = 2, max = 50, value=4, description='rticks')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
function = widgets.Text( value = 'z**2' , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 2, description='anim')
play = widgets.Play(min= 0, max = 100, step = 5)
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(donut.updateFunc,
rin = rin,
rout = rout,
x0 = x0,
y0 = y0,
fine = fine,
cticks = cticks,
rticks = rticks,
w = function,
frame = frame)
radius = VBox([rin, rout])
offset = VBox([x0, y0])
ticks = VBox([cticks, rticks])
group = HBox([radius, offset,ticks])
animation = HBox([play, frame])
w1 = VBox([group, HBox([fine, function]), animation, donut.show()])
w1
donut.check_analytic()
###Output
The function is conformal, angles are preserved :)
###Markdown
Using Circle Grid
###Code
circle = Circle()
r = widgets.FloatSlider(min=0.1, max=10, value=1, description='R')
x0 = widgets.FloatSlider(min=-10, max=10, value=0, description='x0')
y0 = widgets.FloatSlider(min=-10, max=10, value=0, description='y0')
cticks = widgets.IntSlider(min = 2, max = 50, value=4, description='cticks')
rticks = widgets.IntSlider(min = 0, max = 50, value=4, description='rticks')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
function = widgets.Text( value = 'z**2' , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 2, description='anim')
play = widgets.Play(min= 0, max = 100, step = 2)
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(circle.updateFunc,
r = r,
x0 = x0,
y0 = y0,
fine = fine,
cticks = cticks,
rticks = rticks,
w = function,
frame = frame)
radius = VBox([r, fine])
offset = VBox([x0, y0])
ticks = VBox([cticks, rticks])
group = HBox([radius, offset,ticks])
animation = HBox([play, frame])
w1 = VBox([group, function, animation, circle.show()])
w1
# display(interactive_plot,circle.show())
circle.check_analytic()
###Output
The function is conformal, angles are preserved :)
###Markdown
Using Single_circle
###Code
single = Single_circle(rticks=0)
r = widgets.FloatSlider(min=0.1, max=10, value=1.08, description='R')
x0 = widgets.FloatSlider(min=-10, max=10, value=-0.08, description='x0')
y0 = widgets.FloatSlider(min=-10, max=10, value=0.08, description='y0')
rticks = widgets.IntSlider(min = 0, max = 50, value=0, description='rticks')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
function = widgets.Text( value = 'z+1/z' , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 2, description='anim')
play = widgets.Play(min= 0, max = 100, step = 2)
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(single.updateFunc,
r = r,
x0 = x0,
y0 = y0,
fine = fine,
rticks = rticks,
w = function,
frame = frame)
radius = VBox([r, fine])
offset = VBox([x0, y0])
# ticks = VBox([cticks, rticks])
group = HBox([radius, offset,rticks])
animation = HBox([play, frame])
w1 = VBox([group, function, animation, single.show()])
w1
single.check_analytic()
###Output
The function is conformal, angles are preserved :)
###Markdown
Using Builtin complicated functions for w In engineering one may be interested in solving the Laplace or Poisson equation in "complicated" domains such as eccentric annuli or elliptic annuli. With the help of the builtin functions from ```conformalMaps``` one can see how those domains are conformally related to simple domains such as concentric annuli or rectangles. Using EccentricAnnulus as w Mapping a certain rectangle to a specific eccentric annulus (donut)
###Code
R1 = 4 # inner radius of target eccentric annulus
R2 = 7.6 # outer radius of target eccentric annulus
ep = 0.7 # relative eccentricity of target eccentric annulus
trans = RectangleToEccentricAnnulus(R1, R2, ep)
rect = Rectangle()
left = widgets.FloatSlider(min=-10, max=10, value=-pi, description='left')
right = widgets.FloatSlider(min=-10, max=10, value=pi, description='right')
top = widgets.FloatSlider(min=-10, max=10, value=1.5, description='top')
bottom = widgets.FloatSlider(min=-10, max=10, value=0.8, description='bottom')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
Hticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Hticks')
Vticks = widgets.IntSlider(min = 2, max = 50, value=20, description='Vticks')
function = widgets.Text( value = '{0}'.format(trans.mapping(z)) , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 5, description='anim')
play = widgets.Play(min= 0, max = 100, step = 5)
# widgets.jslink((frame, 'value'), (play, 'value'))
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(rect.updateFunc,
w = function,
left = left,
right = right,
top= top,
bottom = bottom,
fine = fine,
Hticks = Hticks,
Vticks = Vticks,
frame = frame
)
w1 = VBox([ left, right])
w2 = VBox([top,bottom])
w3 = VBox([Hticks,Vticks])
w4 = HBox([w1,w2,w3])
w5 = HBox([function, fine])
anim_slider = HBox([play, frame])
w = VBox([w4, w5, anim_slider, rect.show()])
w
###Output
_____no_output_____
###Markdown
Using EccentricAnnulus as w Mapping a certain donut or concentric annulus to a specific eccentric annulus (donut)
###Code
R1 = 4 # inner radius of target eccentric annulus
R2 = 7.6 # outer radius of target eccentric annulus
ep = 0.7 # relative eccentricity of target eccentric annulus
trans = ConcentricAnnulusToEccentricAnnulus(R1, R2, ep)
donut = Donut()
rin = widgets.FloatSlider(min=0, max=10, value=trans.rin, description='Rin')
rout = widgets.FloatSlider(min=1, max=20, value=trans.rout, description='Rout')
x0 = widgets.FloatSlider(min=-10, max=10, value=0, description='x0')
y0 = widgets.FloatSlider(min=-10, max=10, value=0, description='y0')
cticks = widgets.IntSlider(min = 2, max = 50, value=20, description='cticks')
rticks = widgets.IntSlider(min = 2, max = 50, value=20, description='rticks')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
function = widgets.Text( value = '%s' % (trans.mapping(z)) , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 2, description='anim')
play = widgets.Play(min= 0, max = 100, step = 5)
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(donut.updateFunc,
rin = rin,
rout = rout,
x0 = x0,
y0 = y0,
fine = fine,
cticks = cticks,
rticks = rticks,
w = function,
frame = frame)
radius = VBox([rin, rout])
offset = VBox([x0, y0])
ticks = VBox([cticks, rticks])
group = HBox([radius, offset,ticks])
animation = HBox([play, frame])
w1 = VBox([group, HBox([fine, function]), animation, donut.show()])
w1
###Output
_____no_output_____
###Markdown
Using EllipticAnnulus as w Mapping a certain rectangle to a specific elliptic annulus (donut)
###Code
a = 5 # half axis of outer ellipse
b = 3.6 # half axis of inner ellipse
trans = RectangleToEllipticAnnulus(b, a)
rect = Rectangle()
left = widgets.FloatSlider(min=-10, max=10, value=trans.left, description='left')
right = widgets.FloatSlider(min=-10, max=10, value=trans.right, description='right')
top = widgets.FloatSlider(min=-10, max=10, value=trans.top, description='top')
bottom = widgets.FloatSlider(min=-10, max=10, value=trans.bottom, description='bottom')
fine = widgets.IntSlider(min = 20, max = 100, value=50, description='Fine')
Hticks = widgets.IntSlider(min = 2, max = 50, value=10, description='Hticks')
Vticks = widgets.IntSlider(min = 2, max = 50, value=20, description='Vticks')
function = widgets.Text( value = '{0}'.format(trans.mapping(z)) , description='w : ')
frame = widgets.FloatSlider(min=0, max=100, value=100, step = 5, description='anim')
play = widgets.Play(min= 0, max = 100, step = 5)
# widgets.jslink((frame, 'value'), (play, 'value'))
widgets.jslink((play, 'value'), (frame, 'value'))
interactive_plot = widgets.interactive(rect.updateFunc,
w = function,
left = left,
right = right,
top= top,
bottom = bottom,
fine = fine,
Hticks = Hticks,
Vticks = Vticks,
frame = frame
)
w1 = VBox([ left, right])
w2 = VBox([top,bottom])
w3 = VBox([Hticks,Vticks])
w4 = HBox([w1,w2,w3])
w5 = HBox([function, fine])
anim_slider = HBox([play, frame])
w = VBox([w4, w5, anim_slider, rect.show()])
w
###Output
_____no_output_____
###Markdown
SuNBEaM (S)pectral (N)on-(B)acktracking (E)mbedding (A)nd Pseudo-(M)etric, or SuNBEaM for short. The non-backtracking matrix is a matrix representation of a graph that has deep connections with the homotopy theory of graphs, in particular the length spectrum function. The eigenvalues of the non-backtracking matrix can be effectively used to compute dissimilarity scores (or distances) between graphs. An old version of our manuscript can be found at the following link. (Newer version currently under review.)
> Leo Torres, P. Suárez Serrato, and T. Eliassi-Rad, **Graph Distance from the Topological View of Non-Backtracking Cycles**, preprint, arXiv:1807.09592 [cs.SI], (2018).
###Code
from sunbeam import *
import numpy as np
import networkx as nx
import matplotlib.pylab as plt
###Output
_____no_output_____
###Markdown
The Non-Backtracking Matrix The non-backtracking matrix is the (unnormalized) transition matrix of a random walker that does not backtrack, that is, it never traverses the same edge twice in succession. It can be used to, among other things, compute the number of non-backtracking walks in a graph. The non-backtracking matrix of a cycle graph is always a permutation matrix.
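For reference, a dense non-backtracking (Hashimoto) matrix can be built directly from its definition: B[(u→v), (v→w)] = 1 whenever w ≠ u. The sketch below (the helper name is just for illustration and is only suitable for small graphs; `fast_hashimoto` used next is presumably more efficient) shows the idea:
```
import numpy as np
import networkx as nx

def naive_hashimoto(graph):
    """Dense non-backtracking matrix built from its definition:
    B[(u, v), (x, w)] = 1  iff  v == x and w != u."""
    edges = [(u, v) for u, v in graph.edges()] + [(v, u) for u, v in graph.edges()]
    index = {e: i for i, e in enumerate(edges)}
    B = np.zeros((len(edges), len(edges)))
    for (u, v) in edges:
        for w in graph.neighbors(v):
            if w != u:
                B[index[(u, v)], index[(v, w)]] = 1
    return B

B = naive_hashimoto(nx.cycle_graph(5))
print(B.shape)         # (10, 10): two directed edges per undirected edge
print(B.sum(axis=1))   # every row sums to 1 on a cycle -> a permutation matrix
```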
###Code
graph = nx.cycle_graph(5)
nbm = fast_hashimoto(graph)
nbm.sum(axis=1).T, nbm.sum(axis=0)
###Output
_____no_output_____
###Markdown
The diagonal elements of powers of the non-backtracking matrix can be used to compute the number of non-backtracking cycles. For example, the trace of the cube gives the number of triangles.
###Code
graph = nx.erdos_renyi_graph(100, 0.1)
nbm = fast_hashimoto(graph)
directed_triangles = (nbm.dot(nbm).dot(nbm)).diagonal().sum()
undirected_triangles = sum(nx.triangles(graph).values())
directed_triangles == 2*undirected_triangles
###Output
_____no_output_____
###Markdown
Eigenvalues Non-backtracking cycles are topologically informative, so we wish to count how many of them exist in a graph. The above procedure gives one way to do it in the case of triangles. However, to compute larger cycles, we need the traces of higher powers of the non-backtracking matrix. These can be computed using the eigenvalues of the matrix. SuNBEaM provides this functionality.
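The underlying identity is simply that tr(B^k) equals the sum of the k-th powers of B's eigenvalues. A quick numpy check illustrates this (using a random 0/1 matrix as a stand-in for a small non-backtracking matrix):
```
import numpy as np

# Identity used above: trace(B^k) equals the sum of the k-th powers of B's eigenvalues.
rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(8, 8)).astype(float)   # stand-in for a small non-backtracking matrix
vals = np.linalg.eigvals(B)
k = 3
lhs = np.trace(np.linalg.matrix_power(B, k))
rhs = np.sum(vals**k).real
print(np.isclose(lhs, rhs))   # True
```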
###Code
eigs = nbvals(graph, 50, fmt='2D') # Compute the largest 50 eigenvalues
plt.scatter(eigs.T[0], eigs.T[1])
plt.gca().set_aspect('equal')
plt.xlabel('Real')
plt.ylabel('Imaginary')
plt.show()
###Output
_____no_output_____
###Markdown
Geometric features of the eigenvalue distribution in the complex plane are correlated to structural graph features. In the next plot we show the largest 200 eigenvalues of six different random graph models.
###Code
from matplotlib.lines import Line2D
options = [{'color': '#1f77b4', 'label': 'Erdos-Renyi'},
{'color': '#ff7f0e', 'label': 'Kronecker'},
{'color': '#2ca02c', 'label': 'Barabasi-Albert'},
{'color': '#d62728', 'label': 'Configuration Model'},
{'color': '#9467bd', 'label': 'Watts-Strogatz'},
{'color': '#17becf', 'label': 'Hyperbolic Graph'}]
def make_plot(data, get_xy, size=0.2):
"""Plot eigenvalue data."""
handles = []
for i in range(6):
rows = data[50*i : 50*(i+1)]
xx, yy = get_xy(rows)
plt.scatter(xx, yy, s=size, color=options[i]['color'])
handles.append(Line2D([], [], marker='o', markersize=8, color='w',
label=options[i]['label'],
markerfacecolor=options[i]['color']))
plt.gca().set_aspect('equal')
plt.legend(handles=handles)
plt.xlabel('Real')
plt.ylabel('Imaginary')
random_eigs = np.load('data.npy')
plt.figure(figsize=(10, 10))
make_plot(random_eigs, lambda rows: (rows[:, :200], rows[:, 200:]))
plt.xlim(-30, 30)
plt.ylim(-30, 30)
plt.title('Non-Backtracking Eigenvalues of Random Graphs')
plt.show()
###Output
_____no_output_____
###Markdown
Distance The theory of the length spectrum predicts that the eigenvalues of the non-backtracking matrix will be effective at computing distance between graphs.
###Code
import timeit
start = timeit.default_timer()
er = nx.erdos_renyi_graph(300, 0.05)
ba = nx.barabasi_albert_graph(300, 3)
dist1 = nbd(er, er)
dist2 = nbd(ba, ba)
dist3 = nbd(er, ba)
end = timeit.default_timer()
print("{:.3f}, {:.3f}, {:.3f}".format(dist1, dist2, dist3))
print("Elapsed time: {}".format(end - start))
###Output
0.000, 0.044, 1.434
Elapsed time: 7.474624859023606
###Markdown
To avoid computing the eigenvalues each time, we may pre-compute them and use the `vals` keyword as follows.
###Code
start = timeit.default_timer()
er_vals = nbvals(er, fmt="2D", batch=20)
ba_vals = nbvals(ba, fmt="2D", batch=20)
dist1 = nbd(er, er, vals=(er_vals, er_vals))
dist2 = nbd(ba, ba, vals=(ba_vals, ba_vals))
dist3 = nbd(er, ba, vals=(er_vals, ba_vals))
end = timeit.default_timer()
print("{:.3f}, {:.3f}, {:.3f}".format(dist1, dist2, dist3))
print("Elapsed time: {}".format(end - start))
###Output
0.000, 0.000, 1.622
Elapsed time: 1.2504457570030354
###Markdown
Embedding We also use the eigenvectors of the non-backtracking matrix in order to compute an edge embedding of a graph. That is, given a graph with $m$ edges, we compute a 2D point for each directed edge using the `nbed` function.
###Code
emb = nbed(ba)
print(2*ba.size(), emb.shape[0])
###Output
1782 1782
###Markdown
We can then visualize this embedding to understand the underlying structure of the network. Each edge in the graph is represented by two points in the following plots, one for each orientation.
###Code
visualize_nbed(ba, emb=emb, color='source', log=True)
visualize_nbed(er, color='target', log=False)
###Output
_____no_output_____
###Markdown
Example KNN In this notebook, we will go through two examples on how to use the class KNN. We will first apply it on a toy example using our own generated data. Then, we will use the class to classify cancer by predicting if it is malignant or benign.
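For reference, the core rule that a k-nearest-neighbours classifier implements can be written in a few lines. This is a sketch only (the function name is hypothetical); the `KNN` class used below wraps the same idea with its own data format and interface:
```
import numpy as np

# Minimal k-NN rule: majority vote among the k training points closest to the query.
# (Sketch only -- not the implementation of the KNN class imported below.)
def knn_predict(train_X, train_y, query, k=5):
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

train_X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
train_y = np.array([0, 0, 1, 1])
print(knn_predict(train_X, train_y, np.array([4.8, 5.1]), k=3))   # 1
```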
###Code
# Import useful libraries
from knn import KNN
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
SEED = 42
###Output
_____no_output_____
###Markdown
Example I - Toy example with randomly generated data In this example, we generate data from six different multivariate Gaussian distributions, all with the same covariance structure. Then, we use KNN to classify three arbitrary points.
###Code
# Generate data from 6 multivariate normal distributions
n = 50
mu = [[0, 0], [2,2], [5,3], [-3, 2], [0,2], [0, 5]]
# Set seed for reproducibility
np.random.seed(SEED)
# Concatenate all data
data = np.concatenate((np.random.randn(n,2)/2 + mu[0], np.random.randn(n,2)/2 + mu[1]))
for i in range(2,6):
data = np.concatenate((data, np.random.randn(n,2)/2 + mu[i]))
labels = np.repeat([i for i in range(6)], n)
colors = np.array(['red', 'blue', 'green', 'yellow', 'purple', 'orange'])
plt.scatter(data[:,0], data[:,1], c=colors[labels])
# Append labels to satisfy the format-requirement of the class KNN
data = np.append(data, np.reshape(labels, (labels.shape[0],1)), axis=1)
# Create a model
model = KNN(data, k=5)
# Generate new points
new_points = np.random.randn(3,2)*3
# Predict labels on new points
predictions = model.predict(new_points)
print(predictions)
# Plot the predictions together with the data
predicted_col = [colors[int(prediction)] for prediction in predictions]
plt.scatter(data[:,0], data[:,1], c=colors[labels])
plt.scatter(new_points[:,0], new_points[:,1], c=predicted_col, marker='x', s=100)
###Output
_____no_output_____
###Markdown
We see that the classifications (illustrated as crosses) are reasonable and what we would expect from the KNN algorithm. Example II In this example we use the class KNN to predict if cancer is malignant or benign. We perform hyper-parameter optimization using Leave One Out Cross-Validation (LOOCV). Then, we train a final model and test it on test data.
###Code
# Load data
all_data = load_breast_cancer()
print(f"We have {len(all_data['feature_names'])} recorded features.")
print(all_data['feature_names'])
print(f"We have {np.sum(np.isnan(data))} missing values.")
###Output
We have 0 missing values.
###Markdown
We see that we have 30 recorded features with no missing values. We continue by standardizing the values. Then we proceed with finding the optimal value of k by using LOOCV. We try every odd k from 3 to 19.
###Code
# Extract features (X) and labels (y)
X = all_data['data']
y = all_data['target']
# Standardize
X_std = (X - np.mean(X,axis=0)) / np.std(X, axis=0)
# Split all data into training data (85%) and test data (15%)
X_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size=int(X_std.shape[0]*0.15), random_state=SEED)
# Try every odd k from 3 to 19
cv_error_all_k = []
for k in range(3, 20, 2):
# We calculate the average misclassification rate and let that represent the cross-validation error
cv_error = np.ones(X_train.shape[0])
for i in range(X_train.shape[0]):
train_data = np.append(np.delete(X_train,i,axis=0), np.reshape(np.delete(y_train,i), (X_train.shape[0]-1,1)), axis=1)
model = KNN(train_data, k)
cv_error[i] = (model.predict(np.reshape(X_train[i,:], (1, X_train[i,:].shape[0]))) != y_train[i])
cv_error_all_k.append(np.sum(cv_error)/len(cv_error)*100)
# Plot the missclassification rate for each k
plt.scatter(range(3, 20, 2), cv_error_all_k)
plt.xticks(range(3, 20, 2))
plt.xlabel("k")
plt.ylabel("Missclassification Rate (%)")
# Mark the lowest k as red
plt.scatter(3 + np.argmin(cv_error_all_k)*2, np.min(cv_error_all_k), c='red')
###Output
_____no_output_____
###Markdown
We see that we had the lowest misclassification rate with k=9. Now, we train a model with k=9 and test it on the test data. We evaluate the final model using a confusion matrix.
###Code
# Now we test our model with k=9 on the test data
# Train final model
final_model = KNN(np.append(X_train, np.reshape(y_train, (len(y_train),1)), axis=1), k=9)
# Predict on test data
predictions = final_model.predict(X_test)
# Calculate accuracy
accuracy = np.sum(predictions==y_test)/len(y_test)
print(f"The final model has an accuracy of: {accuracy*100:.2f}%")
# A function for printing a confusion matrix
def print_confusion_matrix(true, predictions):
print(" TRUE")
print(" 1 0")
print("---------------------------")
print(f"Predicted 1| {sum(y_test[predictions==1]==1)} {sum(y_test[predictions==1]==0)}")
print(f" 0| {sum(y_test[predictions==0]==1)} {sum(y_test[predictions==0]==0)}")
print_confusion_matrix(y_test, predictions)
###Output
TRUE
1 0
---------------------------
Predicted 1| 52 2
0| 2 29
###Markdown
Examples using jointcd In this notebook we illustrate how the jointcd package can be used for change detection and change point estimation by applying it to synthetic data.
###Code
from jointcd import ChangeDetector, ChangePointEstimator
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
np.random.seed(420)
###Output
_____no_output_____
###Markdown
Change Detection Here we use jointcd to detect which signals experienced change. A synthetic data set is created, comprising two types of signals. Each signal is a sinusoidal oscillation corrupted by noise. The second type of signal has a shifted mean. A change signal is a transition from one type to the other.
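The detector used below ranks each signal by a Mahalanobis-type distance to the bulk of the data (the 'robust' method presumably uses a robust covariance estimate). A sketch of the plain version of that distance, with a hypothetical helper name, conveys the idea:
```
import numpy as np

# Squared Mahalanobis distance of each row of X to the sample mean,
# using the ordinary covariance estimate (jointcd's 'robust' variant differs).
def mahalanobis_sq(X):
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # small ridge for stability
    inv = np.linalg.inv(cov)
    diff = X - mu
    return np.einsum('ij,jk,ik->i', diff, inv, diff)

print(mahalanobis_sq(np.random.randn(100, 5)).shape)   # (100,)
```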
###Code
n1, n2, len_signals = 1000,200, 120
n_change_signals = 100
step = 3.0
amplitude = 5.0
# create the synthetic data set
oscil = amplitude*np.sin(np.linspace(0,100,len_signals))
type1 = np.random.randn(n1, len_signals) + oscil
type2 = np.random.randn(n2, len_signals) + step + oscil
change = np.concatenate([
    np.random.randn(n_change_signals, len_signals // 2),
    np.random.randn(n_change_signals, len_signals // 2) + step], axis=1) + oscil
X = np.concatenate([
type1,
type2,
change
], axis=0)
# create the labels. 0 -> no change, 1 -> change
y = np.zeros(X.shape[0])
y[-n_change_signals:] = 1
plt.plot(X[-1,:])
plt.title('example random change signal')
plt.show()
cd = ChangeDetector(method='robust')
threshold = 300
pred, dists = cd.fit(X).predict(X, threshold)
sns.distplot(dists[y==0], label='no-change')
sns.distplot(dists[y==1], label='change')
plt.legend()
plt.xlabel("Mahalanobis Distance")
plt.ylabel("Density")
plt.axvline(threshold)
plt.title("Density plots of distance")
plt.show()
###Output
_____no_output_____
###Markdown
Change Point Estimation Using the same data set, we will estimate the time of change using the ChangePointEstimator class. Note that all time series have the same change point (t=60) to simplify the analysis.
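A change point estimator of this kind scores every candidate split of a signal and picks the best one. A much-simplified mean-shift version of that idea (jointcd scores partitions with a Mahalanobis-based statistic instead; the function below is a hypothetical sketch) looks like this:
```
import numpy as np

# Simplified change-point score: for each candidate index t, measure how far the mean
# before t is from the mean after t, and pick the index with the largest gap.
def naive_change_point(signal):
    n = len(signal)
    scores = np.full(n, -np.inf)
    for t in range(5, n - 5):                      # keep a few samples on each side
        scores[t] = abs(signal[:t].mean() - signal[t:].mean())
    return int(np.argmax(scores)), scores

signal = np.concatenate([np.random.randn(60), np.random.randn(60) + 3.0])
t_hat, _ = naive_change_point(signal)
print(t_hat)   # close to the true change point at t=60
```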
###Code
cpe = ChangePointEstimator(method='robust')
change_points, distance_signals = cpe.fit(X).predict(change)
plt.plot(distance_signals.T)
plt.title("Mahalanobis distance for partitioning at a given time index")
plt.xlabel("Time index")
plt.ylabel("Mahalanobis distance")
plt.show()
sns.distplot(change_points - len_signals/2)
plt.title("density plot of difference between predicted and actual change point")
plt.ylabel("density")
plt.xlabel("error")
plt.show()
###Output
_____no_output_____
###Markdown
Set-up (for colab)---
###Code
# %%capture
# !pip install pymc3==3.11
###Output
_____no_output_____
###Markdown
PyShopper example---- This notebook contains a quick example of PyShopper that includes:
1. Loading data
2. Instantiating and fitting the Shopper model via MCMC sampling or variational inference
3. Inference diagnostics
4. Prediction on unseen test data
###Code
# Imports
import numpy as np
import pandas as pd
import pymc3 as pm
import filelock
import warnings
import theano
from pyshopper import shopper
from scipy import stats
from tqdm.notebook import tqdm
# Ignore FutureWarning and UserWarning
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=UserWarning)
import logging
logger = logging.getLogger('filelock')
logger.setLevel(logging.WARNING)
# URL to datasets
DATA_URL = 'https://github.com/topher-lo/PyShopper/blob/main/data'
###Output
_____no_output_____
###Markdown
1. Load data---
###Code
# Load data
data = shopper.load_data(f'{DATA_URL}/train.tsv?raw=true', f'{DATA_URL}/prices.tsv?raw=true')
unique_items = sorted(data['item_id'].unique())
sessions_list = sorted(data['session_id'].unique())
# Limit data to C (most frequent) items and W sessions
# Note: we filter for trailing sessions because the test dataset's sessions begin at the end of
# the training dataset's sessions
C = 3
W = 400
# Filter data
X_train = (data.loc[data['item_id'].isin(unique_items[:C])]
.loc[data['session_id'].isin(sessions_list[-W:])]
.reset_index(drop=True))
X_train
###Output
_____no_output_____
###Markdown
2. Instantiate and fit model---
###Code
# Create Shopper instance
model = shopper.Shopper(X_train)
# # Fit model with MCMC sampling
# mcmc_res = model.fit(N=10000, method='MCMC')
# # Results summary:
# # Summary of common posterior statistics
# # and sampling diagnostics
# mcmc_res.summary()
# Fit model with ADVI approximation
advi_res = model.fit(N=50000, method='ADVI')
# # Results summary:
# # Summary of common posterior statistics
# # Note: must define number of draws from approximated posterior distribution
# summary = advi_res.summary(draws=100)
# summary
###Output
_____no_output_____
###Markdown
3. Diagnostics---
###Code
# # Sampling trace plot
# mcmc_res.trace_plot()
# ELBO plot (ADVI)
fig = advi_res.elbo_plot()
# ADVI posterior sampling trace plot
fig = advi_res.trace_plot(draws=5000)
###Output
_____no_output_____
###Markdown
4. Prediction---
###Code
# Load test data
test_data = shopper.load_data(f'{DATA_URL}/test.tsv?raw=true',
f'{DATA_URL}/prices.tsv?raw=true')
test_sessions_list = sorted(test_data['session_id'].unique())
W_test = int(0.33*W)
# Limit data to C items and W_test sessions
X_test = (test_data.loc[test_data['item_id'].isin(unique_items[:C])]
.loc[test_data['session_id'].isin(test_sessions_list[-W_test:])]
.reset_index(drop=True))
X_test.iloc[np.r_[0:4, -4:0]]
# ADVI Predictions
preds = advi_res.predict(X_test, draws=5000)
sampled_preds = pd.DataFrame(preds['y'])
# Labels
test_labels = pd.Series(pd.Categorical(X_test['item_id']).codes)
test_labels.name = 'test_labels'
# Number of correctly labelled outcomes
(sampled_preds.mode() == test_labels).T.value_counts()
# Sanity check
sampled_preds.mode().T.join(test_labels)
###Output
_____no_output_____
###Markdown
Johns Hopkins University COVID-19 data viewer Reads the time series csv data available from here: https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series
The CSV files:
 - time_series_covid19_confirmed_global.csv
 - time_series_covid19_deaths_global.csv
 - time_series_covid19_recovered_global.csv
Useful methods:
 - loadData(path = ''): parses the CSV files stored in the given location and generates dictionaries with the relevant time series. If no path is given it downloads the files into the current working directory and then reads them.
 - getData(country, province = ''): returns a self-explanatory dict with data for the specified country and province
 - getCountries(): returns names of all countries for which there are data
 - getProvinces(country): returns names of provinces for a given country
 - estimateTrueCases(country, province = '', fatalityRate = 0.01, timeToDeath = 17.3): returns an estimate of the true case count based on the fatality rate and the average time from infection to death (https://medium.com/@tomaspueyo/coronavirus-act-today-or-people-will-die-f4d3d9cd99ca)
 - estimateGrowthRate(country, province, minCases = 50, averagingInterval = 1): returns day-to-day changes in numbers of cases expressed as relative growth in percent. It can also calculate the average growth rate over a set amount of days (given by averagingInterval, needs to be an integer)
###Code
import sys
import importlib
sys.path.append("/Users/karel/software/JHUreader/")
import readerJHU as reader
importlib.reload(reader)
import matplotlib.pyplot as plt
%matplotlib inline
# download data and load it into the jhu, the csv files are saved in the current working directory
jhu = reader.CovidData()
jhu.loadData()
# to load local data uncomment the lines below:
#path = '/your/path/to/the/csv/files/'
#jhu.loadData(path)
###Output
_____no_output_____
###Markdown
Plot some data
###Code
# plot confirmed cases for several countries
countries = ['Germany', 'Italy', 'US', 'Japan', 'Czechia', 'Spain']
for country in countries:
data = jhu.getData(country)
plt.semilogy(data['confirmed'], label = data['country'])
plt.legend()
plt.xlabel('day')
plt.ylabel('cases')
plt.title('Confirmed Cases')
plt.show()
# plot numbers of dead for the same countries
for country in countries:
data = jhu.getData(country)
plt.semilogy(data['dead'], label = data['country'])
plt.legend()
plt.xlabel('day')
plt.ylabel('cases')
plt.title('Deaths')
plt.show()
###Output
_____no_output_____
###Markdown
True case estimate for the UK: estimate the true case count from the fatality count
###Code
country = 'United Kingdom'
data = jhu.getData(country)
estimate = jhu.estimateTrueCases(country)
plt.semilogy(data['confirmed'],"o", label = data['country'] + ' - confirmed')
plt.semilogy(data['dead'],"o", label = data['country'] + ' - dead')
plt.semilogy(estimate['estimate'],"o", label = estimate['country'] + ' - estimate')
plt.legend()
plt.xlabel('day')
plt.ylabel('cases')
plt.title('True Case Estimate')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Estimate of growth rate. Let's plot the relative changes in the numbers of cases/deaths from day to day. $$\Delta_{rel}(d) = \left(\frac{N(d)}{N(d-1)} - 1 \right) \times 100$$ $N(d)$ is the number of cases on day $d$; $\Delta_{rel}(d)$ is the relative increase (in percent) in the number of cases from day $d-1$ to day $d$. This is a good parameter for describing exponential growth. In the example below one can see that Italy is getting the outbreak under control, but the UK and the US still have quite a way to go.
###Code
countries = [ 'Italy', 'United Kingdom', 'US']
for country in countries:
gr = jhu.estimateGrowthRate(country)
plt.plot(gr['days'], gr['confirmedRC'],"o", label = country + ' - confirmed')
plt.plot(gr['days'], gr['deadRC'],"o", label = country + ' - dead')
#plt.plot(gr['recoveredRC'],"o", label = data['country'] + ' - recovered')
plt.ylim([-5,50])
plt.xlim(30,80)
plt.grid()
plt.legend()
plt.title(country)
plt.xlabel('day')
plt.ylabel('day-to-day change (%)')
plt.show()
###Output
_____no_output_____
###Markdown
Average growth rates. The above plots have quite a bit of scatter, so let's look at average growth rates in the same countries, now averaged over the week preceding the given day. (Note: if you take a moving arithmetic average of the data in the plots above, you will not get what is plotted below. Geometric averaging of the day-to-day ratios is more appropriate, and this is how the software does it. The average day-to-day ratio is then expressed as a daily growth rate in percent.)
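For instance, a minimal sketch of the geometric averaging described above (assuming `N` is a NumPy array of cumulative case counts):
```
import numpy as np

N = np.array([100., 120., 150., 180., 210., 260., 320., 400.])  # made-up cumulative counts
ratios = N[1:] / N[:-1]                           # day-to-day ratios
avg_ratio = np.exp(np.mean(np.log(ratios[-7:])))  # geometric mean over the last 7 days
growth_rate_percent = (avg_ratio - 1) * 100
```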
###Code
countries = [ 'Italy', 'United Kingdom', 'US']
averagingInterval = 7
for country in countries:
gr = jhu.estimateGrowthRate(country, averagingInterval = averagingInterval)
plt.plot(gr['days'], gr['confirmedRC'],"o", label = country + ' - confirmed')
plt.plot(gr['days'], gr['deadRC'],"o", label = country + ' - dead')
#plt.plot(gr['recoveredRC'],"o", label = data['country'] + ' - recovered')
plt.ylim([-5,50])
plt.xlim(30,80)
plt.grid()
plt.legend()
plt.title(country)
plt.xlabel('day')
plt.ylabel('day-to-day change (%)')
plt.show()
# have a look at data for one country
# similar dicts are generated by estimateTrueCases and estimateGrowthRate methods
data = jhu.getData('Spain')
for key in data.keys():
print(key)
print(data[key])
print()
# available countries
print(jhu.getCountries())
# provinces in China
print(jhu.getProvinces('China'))
###Output
['Anhui', 'Beijing', 'Chongqing', 'Fujian', 'Gansu', 'Guangdong', 'Guangxi', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hong Kong', 'Hubei', 'Hunan', 'Inner Mongolia', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Macau', 'Ningxia', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanghai', 'Shanxi', 'Sichuan', 'Tianjin', 'Tibet', 'Xinjiang', 'Yunnan', 'Zhejiang']
###Markdown
IPython extension version_information. Use the '%version_information' IPython magic extension in a notebook to display information about the versions of the dependency packages that were used to run the notebook. Installation: install the `version_information` package using pip: pip install version_information or, alternatively, use the `%install_ext` IPython command (deprecated): %install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py Use
###Code
%load_ext version_information
%version_information
%version_information scipy, numpy, Cython, matplotlib, qutip, version_information
###Output
_____no_output_____
###Markdown
How to use KITTI scan unfolding Make sure to install package `unfolding` first.```pip install git+https://github.com/ltriess/kitti_scan_unfolding```
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import unfolding
data_dir = os.path.join(os.getcwd(), "data") # path to sample data
###Output
_____no_output_____
###Markdown
Load the raw KITTI data from the binary file. Given are N points with x, y, z, and remission. Beware, this code only works for the raw KITTI point clouds saved in their original format. It is not suitable for ego-motion corrected data, nor for any other dataset.
###Code
file = os.path.join(data_dir, "sample_raw.bin")
scan = np.fromfile(file, dtype=np.float32).reshape((-1, 4))
print("scan", scan.shape, scan.dtype)
points = scan[:, :3]
remissions = scan[:, 3]
print("--> points {}, remissions {}".format(points.shape, remissions.shape))
###Output
scan (124668, 4) float32
--> points (124668, 3), remissions (124668,)
###Markdown
Using custom projection. You can simply use `unfolding` to get the indices for the respective rows and columns with `get_kitti_rows()` and `get_kitti_columns()`. You can then proceed with your own projection mechanism (a minimal sketch is included in the code cell below). The package also provides a complete projection into the image-like structure (see the next section).
###Code
rows = unfolding.get_kitti_rows(points)
columns = unfolding.get_kitti_columns(points)
print("rows shape: {}, min: {}, max: {}".format(rows.shape, np.min(rows), np.max(rows)))
print("cols shape: {}, min: {}, max: {}".format(rows.shape, np.min(columns), np.max(columns)))
# Put your own projection here.
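# For instance, a minimal custom projection sketch (an assumption, not part of the
# unfolding package): scatter per-point range into a (64, 2000) image; points that
# fall onto the same pixel simply overwrite each other here.
custom_depth = np.linalg.norm(points, axis=1)
custom_image = np.full((64, 2000), -1.0, dtype=np.float32)
custom_image[rows, columns] = custom_depth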
###Output
rows shape: (124668,), min: 0, max: 63
cols shape: (124668,), min: 0, max: 1999
###Markdown
Using the `unfolding` projection. The default image size for KITTI is `(64, 2000)` for one 360 degree scan. This follows from the sensor's 64 layers and its revolution rate of 10 Hz.
###Code
image_size = (64, 2000)
projection = unfolding.projection(points, image_size=image_size)
###Output
_____no_output_____
###Markdown
The function returns a dictionary. The dictionary contains the projected input points under the key `points`, but it also returns additional useful information, as described below. Get the projected points and depth
###Code
proj_points = projection["points"] # the projected points
proj_depth = projection["depth"] # the projected depth
print("key 'points' {} {} --> projection of the point cloud".format(proj_points.shape, proj_points.dtype))
print("key 'depth' {} {} --> projected depth".format(proj_depth.shape, proj_depth.dtype))
# Visualization the projected depth.
plt.figure(figsize=(20, 4))
plt.imshow(proj_depth, cmap="magma_r")
plt.title("depth")
plt.show()
###Output
_____no_output_____
###Markdown
Get projection information. Sometimes it is useful to know to which location a point has been projected, or how to restore the original point list from the image-like projection. The following three entries, `indices`, `inverse`, and `active`, provide all the information needed for transformations in both directions (a small sketch is included in the code cell below).
###Code
indices = projection["indices"] # the image location for each point it is projected into
inverse = projection["inverse"] # the index of the respective point in the point cloud for each image location
active = projection["active"] # whether a point is actively used in the projection (multiple point occlusions)
print("key 'indices' {} {} --> row and column indices for each point".format(indices.shape, indices.dtype))
print("key 'inverse' {} {} --> point indices for each projected location".format(inverse.shape, inverse.dtype))
print("key 'active' {} {} --> activity flag in the projection for each point".format(active.shape, active.dtype))
###Output
key 'indices' (124668, 2) int32 --> row and column indices for each point
key 'inverse' (64, 2000) int32 --> point indices for each projected location
key 'active' (124668,) bool --> activity flag in the projection for each point
###Markdown
Project additional channels. In case you wondered whether I forgot about the `remissions`: no, I did not. The function offers the possibility to feed in any number of additional channels. The channels will then be projected in the same way as `points`. It is necessary that all channels have the same first dimension size as `points`. Take a look at how to add `remissions` to the projection function.
###Code
projection = unfolding.projection(points, remissions, image_size=image_size)
proj_channels = projection["channels"]
###Output
_____no_output_____
###Markdown
The function returns a list of the projections of all additional channels. We added one additional channel, i.e. `remissions`, therefore `len(proj_channels) == 1`.
###Code
proj_remissions = proj_channels[0]
print("remissions", remissions.shape, remissions.dtype, "-->", proj_remissions.shape, proj_remissions.dtype)
###Output
remissions (124668,) float32 --> (64, 2000) float32
###Markdown
If you are using the SemanticKITTI dataset with point-wise labels, simply add them as additional channels to the function.
###Code
file = os.path.join(data_dir, "sample.label")
labels = np.fromfile(file, dtype=np.int32)
labels = labels.reshape((-1))
semantic_ids = labels & 0xFFFF
instance_ids = labels >> 16
projection = unfolding.projection(points, remissions, semantic_ids, instance_ids, image_size=image_size)
proj_depth = projection["depth"]
proj_channels = projection["channels"]
# Visualization of the additional channels.
fig = plt.figure(figsize=(20, 6))
ax0 = fig.add_subplot(411)
ax1 = fig.add_subplot(412)
ax2 = fig.add_subplot(413)
ax3 = fig.add_subplot(414)
ax0.imshow(proj_depth, cmap="magma_r")
ax1.imshow(proj_channels[0], cmap="viridis")
ax2.imshow(proj_channels[1], cmap="terrain")
ax3.imshow(proj_channels[2], cmap="terrain")
ax0.title.set_text("depth")
ax1.title.set_text("remissions")
ax2.title.set_text("semantics")
ax3.title.set_text("instances")
plt.show()
###Output
_____no_output_____
###Markdown
Testing with ego-motion corrected data. Take a look at [README.md](data/README.md) for more information on the difference between `sample_raw.bin` and `sample_ego.bin`.
###Code
file = os.path.join(data_dir, "sample_ego.bin")
scan = np.fromfile(file, dtype=np.float32).reshape((-1, 4))
rows = unfolding.get_kitti_rows(scan[:, :3])
columns = unfolding.get_kitti_columns(scan[:, :3])
###Output
_____no_output_____
###Markdown
The functions do not perform any checks or print warnings. However, looking at the output ranges, you can see that we receive more rows than the sensor has layers. The number of columns is correct, since the function simply divides the data into equal bins over 360 degrees.
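A simple sanity check along these lines could be added on top (a sketch; the package itself performs no such check):
```
if np.max(rows) >= 64:
    print("Warning: more rows than sensor layers; the input is probably not raw KITTI data.")
```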
###Code
print("rows shape: {s}, min: {min}, max: {max}".format(s=rows.shape, min=np.min(rows), max=np.max(rows)))
print("cols shape: {s}, min: {min}, max: {max}".format(s=rows.shape, min=np.min(columns), max=np.max(columns)))
###Output
rows shape: (124668,), min: 0, max: 115
cols shape: (124668,), min: 0, max: 1999
###Markdown
Dummy data
###Code
np.random.seed(1)
docs = [
'A p-value is a measure of the probability that an observed difference could have occurred just by random chance',
'A p-value is a measure of the probability that an observed difference could have occurred just by random chance',
'In null hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the results actually observed',
'A p-value, or probability value, is a number describing how likely it is that your data would have occurred by random chance',
'A p-value is used in hypothesis testing to help you support or reject the null hypothesis',
'The P-value, or calculated probability, is the probability of finding the observed, or more extreme, results when the null hypothesis',
'A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes',
'An artificial neural network is an interconnected group of nodes, inspired by a simplification of neurons in a brain',
'Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning ',
'Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely',
'Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns']
stopwords = ['this', 'is', 'a', 'the', 'of', 'an', 'that', 'or']
docs_toks = [doc.lower().replace(',', '').replace('.', '').split() for doc in docs]
docs_toks = [[w for w in doc if w not in stopwords] for doc in docs_toks]
###Output
_____no_output_____
###Markdown
The input should be a list of documents, where each document itself is a list of tokens. The model itself doesn't do any preprocessing, only indexing of tokens. Init model and train
###Code
mgp_ar = MovieGroupProcessArray(K=10, alpha=0.1, beta=0.1, n_iters=22)
mgp = MovieGroupProcess(K=10, alpha=0.1, beta=0.1, n_iters=22)
y = mgp_ar.fit(docs_toks)
y_old = mgp.fit(docs_toks, len(set([item for sublist in docs_toks for item in sublist])))
###Output
In stage 0: transferred 6 clusters with 4 clusters populated
In stage 1: transferred 0 clusters with 4 clusters populated
In stage 2: transferred 2 clusters with 5 clusters populated
In stage 3: transferred 3 clusters with 5 clusters populated
In stage 4: transferred 3 clusters with 5 clusters populated
In stage 5: transferred 1 clusters with 4 clusters populated
In stage 6: transferred 1 clusters with 5 clusters populated
In stage 7: transferred 3 clusters with 5 clusters populated
In stage 8: transferred 3 clusters with 5 clusters populated
In stage 9: transferred 3 clusters with 5 clusters populated
In stage 10: transferred 3 clusters with 5 clusters populated
In stage 11: transferred 3 clusters with 5 clusters populated
In stage 12: transferred 1 clusters with 4 clusters populated
In stage 13: transferred 0 clusters with 4 clusters populated
In stage 14: transferred 0 clusters with 4 clusters populated
In stage 15: transferred 2 clusters with 5 clusters populated
In stage 16: transferred 3 clusters with 5 clusters populated
In stage 17: transferred 1 clusters with 4 clusters populated
In stage 18: transferred 0 clusters with 4 clusters populated
In stage 19: transferred 0 clusters with 4 clusters populated
In stage 20: transferred 1 clusters with 5 clusters populated
In stage 21: transferred 1 clusters with 4 clusters populated
###Markdown
See topics
###Code
#array version skips topics where 0 docs clustered
pprint(mgp_ar.top_words())
pprint(mgp.top_words())
mgp_ar.choose_best_label('p-value is a measure of the probability'.split())
mgp.choose_best_label('p-value is a measure of the probability'.split())
###Output
_____no_output_____
###Markdown
Speed comparison - 20NewsGroups. The topics themselves are not really of interest here; the data would probably need more cleaning.
###Code
categories = ['alt.atheism', 'comp.graphics',
'rec.sport.hockey', 'sci.crypt', 'talk.religion.misc']
newsgroups = fetch_20newsgroups(categories=categories)
y_true = newsgroups.target
###Output
_____no_output_____
###Markdown
preprocess data - this takes some time
###Code
class TextPreprocessor(TransformerMixin):
def __init__(self, text_attribute):
self.text_attribute = text_attribute
def transform(self, X, *_):
X_copy = X.copy()
X_copy[self.text_attribute] = X_copy[self.text_attribute].apply(self._preprocess_text)
return X_copy
def _preprocess_text(self, text):
return self._lemmatize(self._leave_letters_only(self._clean(text)))
def _clean(self, text):
bad_symbols = '!"#%&\'*+,-<=>?[\\]^_`{|}~'
text_without_symbols = text.translate(str.maketrans('', '', bad_symbols))
text_without_bad_words = ''
for line in text_without_symbols.split('\n'):
if not line.lower().startswith('from:') and not line.lower().endswith('writes:'):
text_without_bad_words += line + '\n'
clean_text = text_without_bad_words
email_regex = r'([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)'
regexes_to_remove = [email_regex, r'Subject:', r'Re:']
for r in regexes_to_remove:
clean_text = re.sub(r, '', clean_text)
return clean_text
def _leave_letters_only(self, text):
text_without_punctuation = text.translate(str.maketrans('', '', string.punctuation))
return ' '.join(re.findall("[a-zA-Z]+", text_without_punctuation))
def _lemmatize(self, text):
doc = nlp(text)
words = [x.lemma_ for x in [y for y in doc if not y.is_stop and y.pos_ != 'PUNCT'
and y.pos_ != 'PART' and y.pos_ != 'X']]
return words
def fit(self, *_):
return self
nlp = spacy.load("en_core_web_sm")
df=pd.DataFrame({'text':newsgroups['data']})
text_preprocessor = TextPreprocessor(text_attribute='text')
df_preprocessed = text_preprocessor.transform(df)
docs=df_preprocessed.text.tolist()
docs[0][:10]
###Output
_____no_output_____
###Markdown
train models
###Code
mgp_20news = MovieGroupProcess(K=5, alpha=0.1, beta=0.1, n_iters=22)
mgp_20news_ar = MovieGroupProcessArray(K=5, alpha=0.1, beta=0.1, n_iters=22)
%time y = mgp_20news.fit(df_preprocessed.text.tolist(), len(set([item for sublist in docs for item in sublist])))
%time y = mgp_20news_ar.fit(df_preprocessed.text.tolist())
###Output
In stage 0: transferred 1972 clusters with 5 clusters populated
In stage 1: transferred 606 clusters with 5 clusters populated
In stage 2: transferred 213 clusters with 5 clusters populated
In stage 3: transferred 99 clusters with 5 clusters populated
In stage 4: transferred 60 clusters with 5 clusters populated
In stage 5: transferred 37 clusters with 5 clusters populated
In stage 6: transferred 41 clusters with 5 clusters populated
In stage 7: transferred 39 clusters with 5 clusters populated
In stage 8: transferred 27 clusters with 5 clusters populated
In stage 9: transferred 19 clusters with 5 clusters populated
In stage 10: transferred 27 clusters with 5 clusters populated
In stage 11: transferred 30 clusters with 5 clusters populated
In stage 12: transferred 17 clusters with 5 clusters populated
In stage 13: transferred 16 clusters with 5 clusters populated
In stage 14: transferred 32 clusters with 5 clusters populated
In stage 15: transferred 49 clusters with 5 clusters populated
In stage 16: transferred 56 clusters with 5 clusters populated
In stage 17: transferred 46 clusters with 5 clusters populated
In stage 18: transferred 40 clusters with 5 clusters populated
In stage 19: transferred 35 clusters with 5 clusters populated
In stage 20: transferred 32 clusters with 5 clusters populated
In stage 21: transferred 33 clusters with 5 clusters populated
Wall time: 11.8 s
###Markdown
Compare topics
###Code
pprint(mgp_20news.top_words())
pprint(mgp_20news_ar.top_words())
###Output
{0: ' Organization Lines April know University',
1: ' Windows version Organization University Lines',
2: ' Organization game Lines University team',
3: ' Organization Lines University file know',
4: ' Organization people Lines think key'}
###Markdown
Save and load saved model
###Code
#save to a folder which model creates
mgp_ar.save('example_model')
#load model from folder
mgp_ar_loaded=MovieGroupProcessArray.load('example_model')
pprint(mgp_ar_loaded.top_words())
#original words
pprint(mgp_ar.top_words())
mgp_ar_loaded.choose_best_label('p-value is a measure of the probability'.split())
#original model topic and probability
mgp_ar.choose_best_label('p-value is a measure of the probability'.split())
###Output
_____no_output_____
###Markdown
Templot Examples. We start by installing the package from PyPI; this is not necessary if the templot folder is present in the same directory as this notebook.
###Code
!pip install --user templot
###Output
_____no_output_____
###Markdown
Importing dependencies
###Code
%matplotlib inline
import os
import pandas as pd
import matplotlib.animation as animation
import matplotlib.pyplot as plt
from IPython.display import HTML #in order to visualize animations
###Output
_____no_output_____
###Markdown
Data Downloading and preprocessing. We can use the `download_irep` function to download and parse the IREP dataset.
###Code
from templot import download_irep
filepath = os.path.join('.', 'templot', 'data', 'df.csv')
if not os.path.exists(filepath):
download_irep(filepath)
df = pd.read_csv(filepath)
df.head()
###Output
_____no_output_____
###Markdown
We can use the `add_regions` function, which adds the corresponding Region, Department, and/or Commune by looking at the longitude and latitude columns.
###Code
from templot import add_regions
df = add_regions(df, "LLX", "LLY", add=["regions", "departements"])
df.head()
###Output
_____no_output_____
###Markdown
Some of the functions require the dataset to be in a melted form, i.e. having one column containing the values and another containing the corresponding year.
###Code
df_melted = pd.melt(
df,
id_vars=df.columns & [
'Identifiant',
'Nom_Etablissement_x',
'LLX',
'LLY',
'Regions',
'Departements',
'Communes'
],
var_name='Annee',
value_name='Quantite',
)
df_melted = df_melted[df_melted.Quantite != 0]
df_melted['Annee'] = df_melted['Annee'].apply(lambda x: int(x[-4:]))
df_melted.head()
###Output
_____no_output_____
###Markdown
Example 1: Plot Aggregated Map. Here we create a map that shows the evolution of the average quantity in every region. The color reflects the average cumulative quantity over the entire period.
###Code
from templot import plot_aggregated_map
my_map = plot_aggregated_map(
data=df,
variables=[
"Quantite2004",
"Quantite2005",
"Quantite2006",
"Quantite2007",
"Quantite2008",
"Quantite2009",
],
group="Regions",
aggregation_method="average",
height=300)
my_map
#If the map is not displayed (e.g. if you're using Edge or Chrome) uncomment the following lines
# from IPython.display import IFrame
# my_map.save("map.html")
# IFrame("map.html", width='100%', height=750)
###Output
_____no_output_____
###Markdown
We can also look at the evolution of the number of polluting companies in each department. The function will warn us if the number of departments is too high.
###Code
my_map_dep = plot_aggregated_map(
data=df,
variables=[col for col in df.columns if "Quantite" in col], # All years
group="Departements",
aggregation_method="count",
height=300)
my_map_dep
#If the map is not displayed (e.g. if you're using Edge or Chrome) uncomment the following lines
# from IPython.display import IFrame
# my_map_dep.save("map_dep.html")
# IFrame("map_dep.html", width='100%', height=750)
###Output
_____no_output_____
###Markdown
Example 2: Plot Polar Bar Evolution Here we can create an animation that shows the evolution of the maximum quantities per region across all years.
###Code
from templot import plot_polar_bar_evolution
anim = plot_polar_bar_evolution(
df_melted,
var="Quantite",
year="Annee",
agr="max",
y_grid=False,
x_grid=False,
y_ticks=False,
)
HTML(anim.to_jshtml())
###Output
_____no_output_____
###Markdown
Example 3: Plot Polar Bar Evolution Interactive. We can also look at an interactive version of that graph.
###Code
from templot import plot_polar_bar_evolution_interactive
fig = plot_polar_bar_evolution_interactive(df_melted,
var="Quantite",
year="Annee",
agr="max")
fig
###Output
_____no_output_____
###Markdown
Example 4: Plot Pie Chart Interactive. Here we look closely at two specific years and compare them.
###Code
from templot import plot_pie_chart_interactive
fig = plot_pie_chart_interactive(df, "Quantite", 2004, 2005, "Regions")
fig
###Output
_____no_output_____
###Markdown
Example 5: Plot Top 10 Barchart. Finally, we take a look at the top 10 worst offenders every year.
###Code
from templot import plot_top10_barchart
# Delete a few outliers/incorrect values
df_melted.drop(df_melted.index[[80307, 78095, 73504]], inplace=True)
# Plot animation from 2003 to 2017 :
fig, ax = plt.subplots(figsize=(16, 9), dpi=220, facecolor='#F8F7F7')
ani = animation.FuncAnimation(
fig,
plot_top10_barchart,
frames=range(2003, 2018),
interval=1500,
fargs=[
df_melted,
"Quantite",
"Annee",
"Regions",
'Nom_Etablissement_x',
'Les établissement émettant le plus de déchets dangereux de 2003 à 2017',
'Déchets dangereux (t/an)',
],
)
HTML(ani.to_jshtml())
###Output
_____no_output_____
###Markdown
Example and demo of collaboration_count function
###Code
from knetwork import collaboration_count as colcnt
import matplotlib.pyplot as plt
#Enter data_source, year_list, country_list and column_name:
data_source = 'web of science'
year_list = range(2000,2017)
country_list = ['USA','Mexico','Canada','Guatemala','Cuba','Dominican Republic','Haiti','Honduras','El Salvador','Nicaragua','Costa Rica','Panama','Jamaica','Trinidad',
'Brazil','Colombia','Argentina','Venezuela','Peru','Chile','Ecuador','Bolivia','Paraguay','Uruguay',
'Nigeria','Algeria','Congo','Sudan','Chad','Niger','Angola','Mali','South Africa','Ethiopia','Egypt','Tanzania','Morocco','Kenya','Uganda','Ghana','Mozambique','Madagascar','Cameroon','Ivory Coast','Zambia','Zimbabwe','Malawi','Senegal','Somalia',
'China','India','Indonesia','Pakistan','Bangladesh','Russia','Japan','Philippines','Vietnam','Turkey','Iran','Thailand','Myanmar','Korea','Iraq','Arabia','Malaysia','Uzbekistan','Nepal','Afghanistan','Yemen','Syria','Sri Lanka','Cambodia','Azerbaijan','Emirates','Tajikistan','Israel','Laos','Jordan','Singapore','Lebanon','Kuwait','Oman','Qatar','Bahrain','Taiwan',
'Germany','France','Kingdom','Italy','UK','Spain','Ukraine','Poland','Romania','Netherlands','Belgium','Greece','Czech','Portugal','Hungary','Sweden','Austria','Belarus','Switzerland','Bulgaria','Denmark','Slovakia','Finland','Norway','Georgia','Ireland','Croatia','Bosnia','Moldova','Lithuania','Latvia','Macedonia','Slovenia','Estonia','Cyprus','Montenegro','Luxembourg','Malta','Iceland','Andorra','Liechtenstein','San Marino','Monaco','Vatican',
'Australia','New Zealand','Papau New Guinea'
]
column_name = 'C1'
continent={}
for i in range(country_list.index('USA'),country_list.index('Trinidad')+1):
continent[country_list[i]]='North America'
for i in range(country_list.index('Brazil'),country_list.index('Uruguay')+1):
continent[country_list[i]]='South America'
for i in range(country_list.index('Nigeria'),country_list.index('Somalia')+1):
continent[country_list[i]]='Africa'
for i in range(country_list.index('China'),country_list.index('Taiwan')+1):
continent[country_list[i]]='Asia'
for i in range(country_list.index('Germany'),country_list.index('Vatican')+1):
continent[country_list[i]]='Europe'
for i in range(country_list.index('Australia'),country_list.index('Papau New Guinea')+1):
continent[country_list[i]]='Oceania'
list_of_year_wise_results = colcnt.count_all_years(data_source,year_list,country_list,column_name)
#Year wise results
#Enter any year between 2000-2016 to see result matrix
a = 2003
#Get results
b=list_of_year_wise_results[a-2000]
list_of_year_wise_results[2][0][0]
yearwise_count={}
country=[]
edge_label=[]
edge_weight=[]
node_weight=[]
year=[]
year_total=[]
cont=[]
percent_of_total_pubs=[]
rank=[]
c=0
#Rank of country
v={}
val1={}
for c in year_list:
val1[c-2000]={}
for i in range(len(country_list)):
val1[c-2000][country_list[i]]=list_of_year_wise_results[c-2000][i][i]
v[c-2000]={key: rank for rank, key in enumerate(sorted(val1[c-2000], key=val1[c-2000].get, reverse=True), 1)}
#Total count in a year
for c in year_list:
val=0
for i in range(len(country_list)):
val=val+list_of_year_wise_results[c-2000][i][i]
yearwise_count[c]=val
#Creating columns of csv
for c in year_list:
for i in range(len(country_list)):
for j in range(i+1,len(country_list)):
country.append(country_list[i])
country.append(country_list[j])
edge_label.append('%s & %s'%(country_list[i],country_list[j]))
edge_label.append('%s & %s'%(country_list[i],country_list[j]))
edge_weight.append(list_of_year_wise_results[c-2000][i][j])
edge_weight.append(list_of_year_wise_results[c-2000][i][j])
node_weight.append(list_of_year_wise_results[c-2000][i][i])
node_weight.append(list_of_year_wise_results[c-2000][j][j])
year.append(c)
year.append(c)
year_total.append(yearwise_count[c])
year_total.append(yearwise_count[c])
cont.append(continent[country_list[i]])
cont.append(continent[country_list[j]])
percent_of_total_pubs.append(list_of_year_wise_results[c-2000][i][i]*100/yearwise_count[c])
percent_of_total_pubs.append(list_of_year_wise_results[c-2000][j][j]*100/yearwise_count[c])
rank.append('%d/%d'%(v[c-2000][country_list[i]],len(country_list)))
rank.append('%d/%d'%(v[c-2000][country_list[j]],len(country_list)))
import pandas as pd
df=pd.DataFrame(data=[year,country,country,edge_weight,node_weight,edge_label,year_total,cont,percent_of_total_pubs,rank]).T
df.rename(columns={0:'Year',1:'Country',2:'Country1',3:'No. of Collaborations',4:'No. of Publications',5:'Collaborators',6:'Total No. of Publications',7:'Continent',8:'Percent of Total Publications',9:'Rank'},inplace=True)
df=df.loc[(df!=0).all(1)]
df.to_csv('knetwork1.csv')
df
#Function to create and export dataframe as CSV
def get_csv(list_of_year_wise_results,country_list):
x=[]
columns=[]
for i in range(len(country_list)):
for j in range(len(country_list)):
x.append([list_of_year_wise_results[k][country_list.index(country_list[i])][country_list.index(country_list[j])] for k in range(len(year_list))])
if i == j:
columns.append('%s'%(country_list[i]))
else:
columns.append('%s & %s'%(country_list[i],country_list[j]))
data = pd.DataFrame(x,columns=range(2000,2017)).T
data.columns = [columns]
data.index.name = 'Year'
data.to_csv('collaboration_data.csv')
get_csv(list_of_year_wise_results,country_list)
#Simple bar graph for any 4 countries
#Enter 4 countries as a list
country_names = ['USA','China','India','Russia']
#Code to plot bar graphs of 4 countries specified above
country1 = [list_of_year_wise_results[i][country_list.index(country_names[0])][country_list.index(country_names[0])] for i in range(len(year_list))]
country2 = [list_of_year_wise_results[i][country_list.index(country_names[1])][country_list.index(country_names[1])] for i in range(len(year_list))]
country3 = [list_of_year_wise_results[i][country_list.index(country_names[2])][country_list.index(country_names[2])] for i in range(len(year_list))]
country4 = [list_of_year_wise_results[i][country_list.index(country_names[3])][country_list.index(country_names[3])] for i in range(len(year_list))]
fig = plt.figure(figsize=(12,6))
fig.suptitle('Year-wise papers in Nuclear Science and Technology', fontsize=14, fontweight='bold')
plt.rcParams.update({'font.size': 8})
axes = fig.add_subplot(221)
axes.bar(year_list, country1, 0.5, color='r')
axes.set_ylabel('Publications')
axes.set_ylim(0,500)
axes.set_title('%s'%country_names[0])
axes = fig.add_subplot(222)
axes.bar(year_list, country2, 0.5, color='b')
axes.set_ylabel('Publications')
axes.set_ylim(0,500)
axes.set_title('%s'%country_names[1])
axes = fig.add_subplot(223)
axes.bar(year_list, country3, 0.5, color='g')
axes.set_ylabel('Publications')
axes.set_ylim(0,500)
axes.set_title('%s'%country_names[2])
axes = fig.add_subplot(224)
axes.bar(year_list, country4, 0.5, color='y')
axes.set_ylabel('Publications')
axes.set_ylim(0,500)
axes.set_title('%s'%country_names[3])
plt.tight_layout(pad=4, w_pad=4)
plt.show()
#Collaboration between 3 countries
#Enter 3 countries as a list:
coll_countries = ['USA','China','India']
count1 = [list_of_year_wise_results[i][country_list.index(coll_countries[0])][country_list.index(coll_countries[1])] for i in range(len(year_list))]
count2 = [list_of_year_wise_results[i][country_list.index(coll_countries[1])][country_list.index(coll_countries[2])] for i in range(len(year_list))]
count3 = [list_of_year_wise_results[i][country_list.index(coll_countries[2])][country_list.index(coll_countries[0])] for i in range(len(year_list))]
fig = plt.figure(figsize=(12,3))
fig.suptitle('Year-wise collaboration in Nuclear Science and Technology', fontsize=14, fontweight='bold')
plt.rcParams.update({'font.size': 8})
axes = fig.add_subplot(131)
axes.bar(year_list, count1, 0.5, color='r')
axes.set_ylabel('Publications')
axes.set_ylim(0,max(count1))
axes.set_title('%s and %s'%(coll_countries[0],coll_countries[1]))
axes = fig.add_subplot(132)
axes.bar(year_list, count2, 0.5, color='r')
axes.set_ylabel('Publications')
axes.set_ylim(0,max(count2))
axes.set_title('%s and %s'%(coll_countries[1],coll_countries[2]))
axes = fig.add_subplot(133)
axes.bar(year_list, count3, 0.5, color='r')
axes.set_ylabel('Publications')
axes.set_ylim(0,max(count3))
axes.set_title('%s and %s'%(coll_countries[2],coll_countries[0]))
plt.tight_layout(pad=4, w_pad=2)
plt.show()
###Output
_____no_output_____
###Markdown
Get the example data at: https://doi.org/10.6084/m9.figshare.12646217
###Code
import pandas as pd
from datetime import timedelta
import devicely
tag_file = 'data/Tags/tags.csv'
empatica_folder = 'data/Empatica'
faros_file = 'data/Faros/faros.EDF'
everion_folder = 'data/Everion'
spacelabs_file = 'data/SpaceLabs/spacelabs.abp'
shimmer_file = 'data/Shimmer/shimmer.csv'
shift = pd.Timedelta(15,unit='d')
###Output
_____no_output_____
###Markdown
Read Tags Data
###Code
tags = devicely.TagReader(tag_file)
tags.data.head()
###Output
_____no_output_____
###Markdown
Timeshift and Write Tags Data
###Code
tags.timeshift(shift)
tags.data.head()
tags.write('tags.csv')
###Output
_____no_output_____
###Markdown
Read Bittium Faros 180 Data
###Code
faros = devicely.FarosReader(faros_file)
faros.data.head()
faros.data['acc_mag'].interpolate(method="time").plot()
###Output
_____no_output_____
###Markdown
Timeshift and Write Faros Data
###Code
faros.timeshift(shift)
faros.data.head()
faros.write('faros.csv')
###Output
_____no_output_____
###Markdown
Read Empatica E4 Data
###Code
empatica = devicely.EmpaticaReader(empatica_folder)
empatica.sample_freqs
empatica.start_times
# empatica.ACC
# empatica.EDA
# empatica.BVP
# empatica.HR
empatica.IBI.head()
empatica.data.head()
empatica.data['acc_mag'].interpolate().plot()
###Output
_____no_output_____
###Markdown
Timeshift and Write Empatica Data
###Code
empatica.timeshift(shift)
empatica.data.head()
empatica.write('Empatica')
###Output
_____no_output_____
###Markdown
Read Biovotion Everion Data
###Code
everion = devicely.EverionReader(everion_folder)
everion.data.head(1)
everion.data['heart_rate'].plot(style='.')
###Output
_____no_output_____
###Markdown
Timeshift and Write Everion Data
###Code
everion.timeshift(shift)
everion.data.head()
everion.write('Everion')
###Output
_____no_output_____
###Markdown
Read Spacelabs
###Code
spacelabs = devicely.SpacelabsReader(spacelabs_file)
spacelabs.data.head()
spacelabs.data.plot.scatter('DIA(mmHg)', 'SYS(mmHg)')
###Output
_____no_output_____
###Markdown
Set Window and Drop EB on Spacelabs
###Code
spacelabs.set_window(timedelta(seconds=30), 'ffill')
spacelabs.data.head(1)
spacelabs.drop_EB()
spacelabs.data.head(1)
###Output
_____no_output_____
###Markdown
Timeshift, Deidentify and Write Spacelabs Data
###Code
spacelabs.deidentify('001')
spacelabs.timeshift(shift)
spacelabs.data.head()
spacelabs.write('spacelabs.abp')
###Output
_____no_output_____
###Markdown
Read Shimmer Consensys GSR (Shimmer3 GSR Development Kit)
###Code
shimmer_plus = devicely.ShimmerPlusReader(shimmer_file, delimiter=';')
shimmer_plus.data.head(1)
shimmer_plus.data['acc_mag'].interpolate(method="time").plot()
###Output
_____no_output_____
###Markdown
Timeshift and Write Shimmer Data
###Code
shimmer_plus.timeshift(shift)
shimmer_plus.data.head()
shimmer_plus.write('shimmer_plus.csv')
###Output
_____no_output_____
###Markdown
Examples for the Dynamic Solow Model This relates to the paper: "Capital Demand Driven Business Cycles: Mechanism and Effects" by Naumann-Woleske et al. 2021. Import some basic functions to manipulate our outputs
###Code
from matplotlib import pyplot as plt
from matplotlib import rc
import pandas as pd
import numpy as np
import demandSolow as ds
import solowModel as sm
###Output
_____no_output_____
###Markdown
The General Case Choose the default parameters for the model and the noise setup, as shown in the paper:
###Code
parameters = dict(rho=0.33, epsilon=2.5e-5, tau_y=1e3, dep=2e-4,
tau_h=25, tau_s=250, c1=3, c2=7e-4, beta1=1.1,
beta2=1.0, gamma=2000, saving0=0.15, h_h=10)
noise = dict(decay=0.2, diffusion=1.0)
general_model = sm.SolowModel(parameters, noise)
###Output
_____no_output_____
###Markdown
Set the starting values of the model to be in the capital supply regime. Note the order of the starting values is [y, ks, kd, s, h, switch, xi]
###Code
start = np.array([1, 10, 9, 0, 0, 1, 0])
start[0] = 1e-5 + (min(start[1:3]) / 3)
###Output
_____no_output_____
###Markdown
Simulate a path for the general Solow Model
###Code
path = general_model.simulate(start, t_end=1e7, seed=0)
###Output
_____no_output_____
###Markdown
Visualise some of the output dynamics, in particular the production, capital demand and supply, and the sentiment
###Code
fig = plt.figure(figsize=(10,10))
# Which periods to show
start = 0
end = 5e5
# Set up the axes
ax_s = fig.add_subplot(3, 1, 3)
ax_y = fig.add_subplot(3, 1, 1, sharex=ax_s)
ax_k = fig.add_subplot(3, 1, 2, sharex=ax_s)
# Production
ax_y.plot(path.y.loc[start:end], color='navy', linewidth=0.8)
ax_y.set_ylabel(r'$y$', rotation=0)
# Capital Timeseries
ax_k.plot(path.ks.loc[start:end], label=r'Supply', color='black', linewidth=0.8)
ax_k.plot(path.kd.loc[start:end], label=r'Demand', color='firebrick', linewidth=0.8)
ax_k.legend(frameon=False, loc='upper left', ncol=2, bbox_to_anchor=(0, 1.0))
ax_k.set_ylabel(r'$k$', rotation=0)
# Sentiment timeseries
ax_s.plot(path.s.loc[start:end], color='black', linewidth=0.8)
ax_s.set_ylabel(r'$s$', rotation=0)
ax_s.set_ylim(-1, 1)
# Formatting
ax_s.set_xlim(start, end)
fig.align_ylabels()
fig.tight_layout()
###Output
_____no_output_____
###Markdown
One can then also analyse the distribution of the sentiment to show its bistability, which depends on the regime. In particular, in the demand regime it is strongly bistable with two wells; in the supply case it retains a positive peak (needed to reach the supply limit) but becomes centered around 0.
###Code
fig, ax = plt.subplots(ncols=3, figsize=(15,5))
bins = np.linspace(-1.0, 1.0, 100)
ax[0].hist(path.s, bins=bins, color='navy')
ax[0].set_title("Sentiment in the general case")
ax[1].hist(path.s.loc[path.kd<path.ks], bins=bins, color='navy')
ax[1].set_title("Sentiment in the kd<ks case")
ax[2].hist(path.s.loc[path.kd>=path.ks], bins=bins, color='navy')
ax[2].set_title("Sentiment in the kd>ks case")
plt.show()
###Output
_____no_output_____
###Markdown
Vietnamese Financial Report Data. This notebook is provided as a demo of how to use the data.
###Code
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import os
###Output
_____no_output_____
###Markdown
Display config
###Code
def format_float(float_num):
return '{:,.2f}'.format(float_num).replace(',', ' ')
pd.set_option('max_colwidth', None)
pd.set_option('max_rows', None)
pd.set_option('float_format', format_float)
###Output
_____no_output_____
###Markdown
Load data
###Code
BALANCE_SHEET_PATH = os.path.join('data', 'Balance Sheet', 'csv')
bs4 = pd.read_csv(os.path.join(BALANCE_SHEET_PATH, 'Q4 2021.csv'), encoding='utf8', index_col='ID')
bs4.head()
print(bs4.index)
bs4.columns
###Output
_____no_output_____
###Markdown
Balance sheet of a specific company (AAA)
###Code
bs_aaa = bs4['AAA'].dropna()
bs_aaa.reset_index()
###Output
_____no_output_____
###Markdown
Transpose data. In many cases, you'll want to use sections of the balance sheet as features. In those circumstances, you might need to transpose the data. In this example, we will plot the top 10 companies that have the highest `TỔNG CỘNG NGUỒN VỐN` (total capital sources)
###Code
t_bs4 = bs4.T
t_bs4.head()
top10 = t_bs4['TỔNG CỘNG NGUỒN VỐN'].nlargest(10)
top10
plt.figure(figsize=(15, 8))
sns.barplot(x=top10.index, y=top10, palette="Blues_d")
plt.show()
###Output
_____no_output_____
###Markdown
Generating White-Box Heatmaps. This notebook illustrates how to generate the heatmaps appearing in the paper. You will need to import a white-box network, an attribution method, and the function `html_heatmap`.
###Code
from models.whitebox import CounterRNN
from attribution import IGAttribution, LRPAttribution
from attribution.src.heatmap import html_heatmap
from IPython.core.display import display, HTML
###Output
_____no_output_____
###Markdown
Attribution scores are produced using attribution objects, which are initialized with a model.
###Code
model = CounterRNN()
ig = IGAttribution(model)
lrp = LRPAttribution(model)
###Output
_____no_output_____
###Markdown
You can compute attribution scores by directly calling the attribution object on a string. Use `html_heatmap` to generate a heatmap.
###Code
ig_scores = ig("aaabb")
lrp_scores = lrp("aaabb")
display(HTML(html_heatmap("aaabb", ig_scores)))
display(HTML(html_heatmap("aaabb", lrp_scores)))
###Output
_____no_output_____
###Markdown
You can specify a target class using the `target` keyword argument.
###Code
ig_scores = ig("aaabb", target=3)
lrp_scores = lrp("aaabb", target=2)
display(HTML(html_heatmap("aaabb", ig_scores)))
display(HTML(html_heatmap("aaabb", lrp_scores)))
###Output
_____no_output_____
###Markdown
Use `model.y_stoi` to see the output class indices and `model.x_stoi` to see the one-hot vector indices.
###Code
model.y_stoi
###Output
_____no_output_____
###Markdown
Let's see another example.
###Code
from models.whitebox import BracketRNN
bracket_model = BracketRNN(50)
bracket_lrp = LRPAttribution(bracket_model)
lrp_scores = bracket_lrp("[[(()")
display(HTML(html_heatmap("[[(()", lrp_scores)))
###Output
_____no_output_____
###Markdown
###Code
!pip3 install git+https://github.com/k-timy/SimpleCPUMonitor.git
from SimpleCPUMonitor import CPUMonitor
import time
monitor = CPUMonitor()
# run a time consuming thread...
for i in range(10):
time.sleep(0.6)
# done with the process.
monitor.stop()
###Output
_____no_output_____
###Markdown
Example of how to use the bank record importer. First initialize it; we also add some styles so things look nicer.
###Code
%%html
<style>
.table_basic, .table_basic td, .table_basic th {
text-align: left;
}
td.number_cell {
text-align: right;
}
td.comment_cell {
    width: 25%;
}
</style>
from IPython.core.display import display
from statement_reader import csv2bookings, pdf2bookings, txt2bookings
###Output
_____no_output_____
###Markdown
Load a pdf bank statement and display it
###Code
bookings1 = pdf2bookings('AZG114123440_003_20190329.pdf')
display(bookings1)
###Output
_____no_output_____
###Markdown
Manually edit the text data from a bank statement and load it. Sometimes it is useful to first convert the PDF to text, edit it, and afterwards import it. To convert the PDF run `pdftotext -layout statement.pdf statement.txt`. Afterwards you can use sed to remove the blocks that are not needed: `cat statement.txt | sed 's/^[ ]*\([0-3][0-9][.][0-1][0-9].[ ]*\) /\1 /' | sed -ne '/[0-3][0-9][.][0-1][0-9].[ ]\{1,4\}/,/^[_ \t-]*\(SALDO NEU.*\)\{0,1\}$/ p' | sed '/^[_ \t-]*$/ d' > imports.txt`
###Code
bookings2 = txt2bookings('imports.txt')
###Output
_____no_output_____
###Markdown
Loading a CSV is also super easy
###Code
bookings3 = csv2bookings('bookings.csv')
###Output
_____no_output_____
###Markdown
An example of clinical concept extraction with visualization. We highly recommend our [sentence segment tool](https://github.com/noc-lab/simple_sentence_segment) for detecting sentence boundaries if the text contains arbitrary line breaks, such as the sample text below. To use this package, just run```pip install git+https://github.com/noc-lab/simple_sentence_segment.git```Alternatively, you can use the sentence segmentation tools in NLTK or spaCy. You can also use tokenization tools other than NLTK, but this example uses NLTK for illustrative purposes.
###Code
import nltk
import re
from spacy import displacy
from IPython.core.display import display, HTML
from simple_sentence_segment import sentence_segment
from clinical_concept_extraction import clinical_concept_extraction
# An example of a discharge summary that contains arbitrary line breaks. This report is made up.
sample_text = """
This is an 119 year old woman with a history of diabetes
who has a CT-scan at 2020-20-20. Insulin is prescribed
for the type-2 diabetes. Within the past year, the diabetic
symptoms have progressively gotten worse.
"""
def parse_text(text):
# Perform sentence segmentation, tokenization and return the lists of tokens,
# spans, and text for every sentence respectively
tokenizer = nltk.tokenize.TreebankWordTokenizer()
all_sentences = []
all_spans = []
start = 0
normalized_text = ''
for span in sentence_segment(text):
sentence = sample_text[span[0]:span[1]]
sentence = re.sub('\n', ' ', sentence)
sentence = re.sub(r'\ +', ' ', sentence)
sentence = sentence.strip()
if len(sentence) > 0:
tokens_span = tokenizer.span_tokenize(sentence)
tokens = []
spans = []
for span in tokens_span:
tokens.append(sentence[span[0]:span[1]])
spans.append([start + span[0], start + span[1]])
all_sentences.append(tokens)
all_spans.append(spans)
start += len(sentence) + 1
normalized_text += sentence + '\n'
return all_sentences, all_spans, normalized_text.strip()
tokenized_sentences, all_spans, normalized_text = parse_text(sample_text)
print('Variable tokenized_sentences contains token lists for every sentence:')
for tokens in tokenized_sentences:
print(tokens)
print('')
print('Variable all_spans contains lists of token spans for every sentence:')
for spans in all_spans:
print(spans)
print('')
print('Variable normalized_text contains strings for every sentence concatenated by line breaks:')
print(normalized_text)
# function clinical_concept_extraction takes the lists of tokens as input and outputs the annotations
all_annotations = clinical_concept_extraction(tokenized_sentences)
# see annotations for each tokens
for sent_, ann_ in zip(tokenized_sentences, all_annotations):
for t, a in zip(sent_, ann_):
print('%30s %s' % (t, a))
print('='*61)
def build_display_elements(tokens, annotations, spans):
# convert the annotations to the format used in displacy
all_ann = []
for sent_id, sent_info in enumerate(tokens):
sent_length = len(tokens[sent_id])
last_ann = 'O'
last_start = None
last_end = None
for token_id in range(sent_length):
this_ann = annotations[sent_id][token_id]
# separated cases:
if this_ann != last_ann:
if last_ann != 'O':
# write last item
new_ent = {}
new_ent['start'] = last_start
new_ent['end'] = last_end
new_ent['label'] = last_ann[2:]
all_ann.append(new_ent)
# record this instance
last_ann = 'O' if this_ann == 'O' else 'I' + this_ann[1:]
last_start = spans[sent_id][token_id][0]
last_end = spans[sent_id][token_id][1]
else:
last_ann = this_ann
last_end = spans[sent_id][token_id][1]
if last_ann != 'O':
new_ent = {}
new_ent['start'] = last_start
new_ent['end'] = last_end
new_ent['label'] = last_ann[2:]
all_ann.append(new_ent)
return all_ann
ent = build_display_elements(tokenized_sentences, all_annotations, all_spans)
ent_inp = {
'text': normalized_text,
'ents': ent,
'title': ''
}
colors = {'PROBLEM': '#fe4a49', 'TEST': '#fed766', 'TREATMENT': '#2ab7ca'}
options = {'colors': colors}
html = displacy.render(ent_inp, style='ent', manual=True, options=options)
display(HTML(html))
###Output
_____no_output_____
###Markdown
Using equations with LaTeX notation in a markdown cell. The well known Pythagorean theorem $x^2 + y^2 = z^2$ has no analogue for higher exponents: by Fermat's Last Theorem, the following equation has no positive integer solutions for $n > 2$:\begin{equation} x^n + y^n = z^n \end{equation}
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2 * np.pi * t)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
title='About as simple as it gets, folks')
ax.grid()
fig.savefig("test.png")
plt.show()
# !conda list
###Output
_____no_output_____
###Markdown
Settings
###Code
params = TrainerParameters()
params.folder = 'results/'
params.comment = 'BasicIllustration'
params.debug = False
params.Iterations = 15000
params.identifier = 'highres32'
params.trainer['lr_init'] = 1e-2
params.trainer['N_PE_updates'] = 3
params.trainer['N_monte_carlo_analysis'] = 64
params.trainer['N_monte_carlo_analysis_final'] = 1024
params.trainer['N_monitor_interval'] = 1000
params.trainer['N_PE_updates_final'] = 250
params.trainer['N_tensorboard_logging_interval'] = 1000
params.margs['dim_latent'] = 16
params.margs['ptype'] = 'NDP'
params.margs['device'] = 'best'
params.trainer['N_vo_update_interval'] = 250
params.trainer['N_vo_holdoff'] = 250 # 1000
params.trainer['N_monte_carlo_vo'] = 128
params.scheduler['milestones'] = [250, 1500]
params.scheduler['factor'] = math.sqrt(0.1)
########### DATA AND VO ##############
params.data['N_u'] = 1024
params.data['N_s'] = 128
params.data['N_u_max'] = 2048
params.data['N_s_max'] = 128
params.data['N_vo_max'] = 128
params.data['N_vo'] = 0
params.data['N_val'] = 128
params.data['armortized_bs'] = 64
params.data['vo_spec'] = dict()
###Output
_____no_output_____
###Markdown
Create trainer. Load the data and create the trainer (the generative model is created in the background). Stochastic variational inference is used to optimize the ELBO.
###Code
# get data for training and validation
df = DataFactory.FromIdentifier(params.identifier)
dl, dlu = df.setup()
# create trainer
trainer = CreateTrainer(params, dl, dlu)
# run trainer
trainer.run(params.Iterations, verbose=False)
###Output
100%|██████████| 15000/15000 [07:01<00:00, 35.57it/s]
###Markdown
Training
###Code
trainer.plot_elbo(figsize=(6,4))
print("Achieved relative error: {}".format(trainer.results()['r2_y']))
print("Achieved predictive logscore: {}".format(trainer.results()['logscore_y']))
###Output
Achieved relative error: 0.9799582958221436
Achieved predictive logscore: 2.329190492630005
###Markdown
Examples: Mean prediction vs. reference (on validation dataset)
###Code
Plot2D(trainer, [0,7,8])
###Output
_____no_output_____
###Markdown
Example of anom_detect Usage. Below I use the commonly used sunspots dataset to show some features of the anomaly detection library, especially some of the plotting functionality. If you want to run the example, download the data set from the commented link below and then run the cells.
###Code
from anom_detect import anom_detect
import pandas as pd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load data set into Pandas
###Code
#!wget -c http://www-personal.umich.edu/~mejn/cp/data/sunspots.txt -P .
df = pd.read_csv('sunspots.txt', sep='\t', header=None, index_col=0)  # DataFrame.from_csv is deprecated
df.index.name = 'time'
df.columns = ['sunspots']
df.head()
###Output
_____no_output_____
###Markdown
Evaluate for Anomalies. There are a number of options available in the anom_detect method; the short descriptions below explain them (an example with non-default values follows the list):
- method : the data-filtering method used; for the moment only 'average' is available, representing the moving-average method. In the future more data-modelling techniques will be implemented.
- max_outliers : defaults to None, which means that the maximum number of outliers is set to the size of your data set. For more efficient computation this should be limited.
- window : the window size for the moving average, defaulted to 5.
- alpha : the significance level used for the ESD test.
- mode : the method used in the discrete linear convolution for dealing with boundaries (please read the separate documentation). The default is 'same', which means that the averaging window must intersect the data points over a length of more than len(window)/2.
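For example, a non-default configuration might look like the following (the parameter names are taken from the list above; that they are passed to the constructor as shown is an assumption, and the values are arbitrary):
```
an_custom = anom_detect(method='average', max_outliers=100, window=7, alpha=0.025, mode='same')
an_custom.evaluate(df)
```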
###Code
# Use default values
an = anom_detect()
# Find the anomalies and print them
an.evaluate(df)
an.plot()
an.plot(left=200,right=400,top=200,bottom=0)
###Output
_____no_output_____
###Markdown
Accessing data
###Code
# The graph values can be accessed using 'results'.
an.results.head()
# Anomalous data points can be printed from anoma_points.
an.anoma_points.head()
###Output
_____no_output_____
###Markdown
Check Normality of Residual. In order to use the ESD test, it is important that the quantity being tested is approximately normally distributed. You can use the normality function to check this through two plots. In this implementation we calculate a residual between the approximating curve (in this case the 5-day moving average) and the actual data: residual = (actual data point) - (estimated value from moving average). The plots are simple, qualitative checks for normality:
- Distribution of residuals : a histogram of the residuals in 100 bins.
- Probability plot : plots the actual data against its corresponding normal approximation (uses scipy.stats.probplot). A perfectly normal data set would lie along the straight line.
A by-hand version of the same check is sketched below.
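A sketch of that by-hand check, using a centered 5-point rolling mean in place of the library's internal filter (an assumption about the filter; the column name comes from the dataframe loaded above):
```
from scipy import stats
import matplotlib.pyplot as plt

residual = (df['sunspots'] - df['sunspots'].rolling(window=5, center=True).mean()).dropna()
residual.hist(bins=100)                          # distribution of residuals
plt.figure()
stats.probplot(residual, dist='norm', plot=plt)  # probability plot against the normal distribution
plt.show()
```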
###Code
an.normality()
###Output
_____no_output_____
###Markdown
Example 1. Glass: https://refractiveindex.info/?shelf=3d&book=glass&page=BK7
###Code
# Load Data
df = pd.read_csv('data/glass.csv').interpolate().dropna()
df['w'] = df['Photon energy, eV']*8065.5
w = df['w'].values
n = df['n'].values
k = df['k'].values
# Setup Layers
nk_vacuum = constant_refractive_index(1, w)
nk_glass = n + 1j * k
layers = [set_layer(nk_vacuum, thickness=0.0, coherence=True),
set_layer(nk_glass, thickness=0.05, coherence=False),
set_layer(nk_vacuum, thickness=0.0, coherence=True)]
# Incidence angle and polarization
incidence_angle = 0
polarization = 'p'
# Calculate T and R
TR = get_TR(layers, layers[0]['refractive_index'], incidence_angle, w, sp=polarization)
T = TR['T']
R = TR['R']
# Plot
plt.figure(figsize=(20, 4))
plt.subplot(131)
graph_nk(w, n, k, title='Glass')
plt.subplot(132)
graph_TR(w, T, R, title='Glass')
plt.show()
###Output
_____no_output_____
###Markdown
Example 2. Water: https://refractiveindex.info/?shelf=3d&book=liquids&page=water
###Code
# Load Data
df = pd.read_csv('data/water.csv').interpolate().dropna()
df['w'] = df['Photon energy, eV']*8065.5
w = df['w'].values
n = df['n'].values
k = df['k'].values
# Setup Layers
nk_vacuum = constant_refractive_index(1, w)
nk_water = n + 1j * k
layers = [set_layer(nk_vacuum, thickness=0.0, coherence=True),
set_layer(nk_water, thickness=0.05, coherence=False),
set_layer(nk_vacuum, thickness=0.0, coherence=True)]
# Incidence angle and polarization
incidence_angle = 0
polarization = 's'
# Calculate T and R
TR = get_TR(layers, layers[0]['refractive_index'], incidence_angle, w, sp=polarization)
T = TR['T']
R = TR['R']
# Plot
plt.figure(figsize=(20, 4))
plt.subplot(131)
graph_nk(w, n, k, title='Water')
plt.subplot(132)
graph_TR(w, T, R, title='Water')
plt.show()
###Output
_____no_output_____
###Markdown
Example 3. Silicon: https://refractiveindex.info/?shelf=main&book=Si&page=Green-2008
###Code
# Load Data
df = pd.read_csv('data/silicon.csv').interpolate().dropna()
df['w'] = df['Photon energy, eV']*8065.5
w = df['w'].values
n = df['n'].values
k = df['k'].values
# Setup Layers
nk_vacuum = constant_refractive_index(1, w)
nk_si = n + 1j * k
layers = [set_layer(nk_vacuum, thickness=0.0, coherence=True),
set_layer(nk_si, thickness=0.05, coherence=False),
set_layer(nk_vacuum, thickness=0.0, coherence=True)]
# Incidence angle and polarization
incidence_angle = 0
polarization = 's'
# Calculate T and R
TR = get_TR(layers, layers[0]['refractive_index'], incidence_angle, w, sp=polarization)
T = TR['T']
R = TR['R']
# Plot
plt.figure(figsize=(20, 4))
plt.subplot(131)
graph_nk(w, n, k, title='Silicon')
plt.subplot(132)
graph_TR(w, T, R, title='Silicon')
plt.show()
###Output
_____no_output_____
###Markdown
Example 4. 300nm SiO2 on Silicon. Silicon: https://refractiveindex.info/?shelf=main&book=Si&page=Green-2008 SiO2: https://refractiveindex.info/?shelf=main&book=SiO2&page=Lemarchand
###Code
# Load Data
df_1 = pd.read_csv('data/silicon.csv')
df_2 = pd.read_csv('data/sio2.csv')
df = df_1.merge(df_2, left_on=['Photon energy, eV'], right_on=['Photon energy, eV']).interpolate().dropna()
df['w'] = df['Photon energy, eV']*8065.5
w = df['w'].values
n_si = df['n_x'].values
k_si = df['k_x'].values
n_sio2 = df['n_y'].values
k_sio2 = df['k_y'].values
# Setup Layers
nk_vacuum = constant_refractive_index(1, w)
nk_si = n_si + 1j * k_si
nk_sio2 = n_sio2 + 1j * k_sio2
layers = [set_layer(nk_vacuum, thickness=0.0, coherence=True),
set_layer(nk_sio2, thickness=3e-5, coherence=True),
set_layer(nk_si, thickness=0.05, coherence=False),
set_layer(nk_vacuum, thickness=0.0, coherence=True)]
# Incidence angle and polarization
incidence_angle = 0
polarization = 's'
# Calculate T and R
TR = get_TR(layers, layers[0]['refractive_index'], incidence_angle, w, sp=polarization)
T = TR['T']
R = TR['R']
# Plot
plt.figure(figsize=(20, 4))
plt.subplot(131)
graph_nk(w, n_si, k_si, title='Silicon')
plt.subplot(132)
graph_nk(w, n_sio2, k_sio2, title='SiO2')
plt.subplot(133)
graph_TR(w, T, R, title='SiO2 on Silicon')
plt.show()
###Output
_____no_output_____
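###Markdown
Since the incident power splits between transmission, reflection, and absorption, the absorptance of the stack follows from energy conservation as $A = 1 - T - R$. The short cell below is an addition to the original examples; it reuses the `T`, `R`, and `w` arrays computed for the SiO2-on-silicon stack above (`w` is the photon energy converted to wavenumbers).
###Code
# Absorptance from energy conservation: whatever is neither transmitted nor reflected.
A = 1 - T - R
plt.figure(figsize=(7, 4))
plt.plot(w, A)
plt.xlabel('Wavenumber (cm$^{-1}$)')
plt.ylabel('Absorptance')
plt.title('SiO2 on Silicon')
plt.show()
###Output
_____no_output_____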
###Markdown
INBOX: (I)nspect the (N)on-(B)acktracking (o)r (X)-centrality of graphs This notebook contains example usage of all functions found in the `inbox` package. It is meant to be additional documentation on top of the docstrings provided in the source code. This document is not meant to contain a deep explanation of the underlying concepts. For that, please see the paper.
###Code
import inbox
import numpy as np
import networkx as nx
import scipy.sparse as sparse
import matplotlib.pylab as plt
###Output
_____no_output_____
###Markdown
For all our examples we will use the Karate Club network,
###Code
graph = nx.karate_club_graph()
###Output
_____no_output_____
###Markdown
`inbox` provides functions that compute three different related topics: matrices, centralities, and targeted immunization, presented in the following sections. If you plan on using `inbox` for heavy duty computing (large and/or many networks), please also read the final section "Implementation Notes". ----- Matrices Non-Backtracking matrix The fundamental matrix used is the Non-Backtracking matrix (or NB-matrix). The NB-matrix of a graph is computed using `nb_matrix`. This matrix has a number of rows and columns equal to twice the number of edges in the graph.
###Code
nbm = inbox.nb_matrix(graph)
2*graph.size(), nbm.shape
###Output
_____no_output_____
###Markdown
A different version of the NB-matrix is the auxiliary NB-matrix. This is a smaller matrix, with size equal to twice the number of nodes of the graph.
###Code
aux = inbox.nb_matrix(graph, aux=True)
2*graph.order(), aux.shape
###Output
_____no_output_____
###Markdown
The utility of the auxiliary version is that all of its eigenvalues are also eigenvalues of the NB-matrix.
###Code
nbm_vals = sparse.linalg.eigs(nbm, k=10, return_eigenvectors=False)
aux_vals = sparse.linalg.eigs(aux, k=10, return_eigenvectors=False)
nbm_vals.sort()
aux_vals.sort()
np.allclose(nbm_vals, aux_vals)
###Output
_____no_output_____
###Markdown
The rows and columns of the NB-matrix are indexed by the directed edges of the graph, even if the graph is undirected, and are by default sorted as follows. The first $m$ rows correspond to the edges in the orientation found in the NetworkX graph. The last $m$ rows correspond to the opposite orientations, in the same order. That is to say, if the first edge returned by `graph.edges()` is `(u, v)`, then the first row corresponds to the directed edge `u -> v`, while the row $m$ positions below it corresponds to the directed edge `v -> u`. This row order creates a rather appealing visual structure in the matrix.
###Code
plt.imshow(nbm.A);
###Output
_____no_output_____
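###Markdown
To make the ordering concrete, here is a small illustration (added here; it is not taken from the `inbox` source) that writes out the directed-edge order described above and checks that the first row and the row $m$ positions below it carry opposite orientations of the same edge.
###Code
# Rows 0..m-1: edges in the NetworkX orientation; rows m..2m-1: the reversed copies.
edges = list(graph.edges())
m = len(edges)
directed_edges = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
u, v = directed_edges[0]
print(directed_edges[0], directed_edges[m], directed_edges[m] == (v, u))
###Output
_____no_output_____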
###Markdown
The auxiliary NB-matrix is a $2 \times 2$ block matrix whose bottom right block is the adjacency matrix of the graph.
###Code
_, axes = plt.subplots(1, 2);
aux_plot = aux.A
aux_plot[aux_plot == 0] = np.nan
axes[0].imshow(aux_plot);
axes[0].set_title('Auxiliary NB-matrix')
adj_plot = nx.adjacency_matrix(graph).A.astype('d')
adj_plot[adj_plot == 0] = np.nan
axes[1].imshow(adj_plot);
axes[1].set_title('Adjacency matrix');
###Output
_____no_output_____
###Markdown
As can be seen, there is a very rich structure in the rows and columns of these matrices. The permutation matrix The NB-matrix is not symmetric and therefore its spectral analysis can become cumbersome. However, it contains non-standard forms of symmetry. Concretely, a permutation of its rows and columns will make it symmetric.
###Code
perm = inbox.perm_matrix(nbm.shape[0] // 2)
_, axes = plt.subplots(1, 3, sharey=True);
axes[0].imshow(perm.A);
axes[0].set_title(r'Permutation $P$');
axes[1].imshow(nbm.A);
axes[1].set_title(r'NB-matrix $B$');
axes[2].imshow(perm.dot(nbm).A);
axes[2].set_title(r'$PB$');
###Output
_____no_output_____
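###Markdown
As a quick numerical check (an addition, not part of the original notebook), we can confirm that the permuted matrix $PB$ plotted above is indeed symmetric.
###Code
# PB should equal its own transpose.
PB = perm.dot(nbm)
print(np.allclose(PB.A, PB.A.T))
###Output
_____no_output_____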
###Markdown
Note that the order of rows and columns is extremely important in all the above computations, and choosing a different basis will invalidate these properties. Half incidence matrices The half incidence matrices are two rectangular matrices that are used when computing the NB-matrix and in other associated computations. They store information about the incidence of directed edges to their (source and target) endpoints. By default, the columns are sorted in the same way as the NB-matrix. Once again, the order is extremely important.
###Code
source, target = inbox.half_incidence(graph)
_, axes = plt.subplots(1, 2, sharey=True);
axes[0].imshow(source.A);
axes[0].set_title(r'Source $S$');
axes[1].imshow(target.A);
axes[1].set_title(r'Target $T$');
###Output
_____no_output_____
###Markdown
Note that the product of the source and target matrices is *almost* the NB-matrix, but not quite.
###Code
_, axes = plt.subplots(1, 2, sharey=True)
axes[0].imshow(source.T.dot(target).A);
axes[0].set_title(r'Product $ST$')
axes[1].imshow(nbm.A);
axes[1].set_title(r'NB-matrix $B$');
###Output
_____no_output_____
###Markdown
In fact, the product of source and target minus the permutation matrix equals the NB-matrix:
###Code
np.allclose((source.T.dot(target) - perm).A, nbm.A)
###Output
_____no_output_____
###Markdown
X matrix The `X` matrix is used when defining the `X`-centrality framework, in particular the `X`-Non-Backtracking centrality and `X`-degree centrality. `inbox` can compute the `X` matrix in both the node-removal and node-addition cases. Further, `inbox` can also compute the `X`-centrality measures, as discussed in the Centralities section below. Node removal Removing a node from the graph is equivalent to removing some rows from the NB-matrix. By re-arranging the rows and columns, we can get a nice block form for the NB-matrix. However, this must be done carefully since the row order is so important. To see what these blocks are when removing a node, we can do the following,
###Code
node_to_remove = 2
B, D, E, F = inbox.x_matrix(graph, remove_node=node_to_remove, return_all=True)
###Output
_____no_output_____
###Markdown
Now, the block matrix $\begin{bmatrix} B' & D \\ E & F \end{bmatrix}$ is the same as the NB-matrix, but with reordered rows and columns. `E` is indexed in the rows by the directed edges that would be removed when removing the node, and the same is true for the columns of `D`. `F` is completely removed when removing the node.
###Code
_, axes = plt.subplots(1, 2)
axes[0].imshow(sparse.bmat([[B, D],
                            [E, F]]).A);
axes[0].set_title(r'Block form')
axes[1].imshow(nbm.A);
axes[1].set_title(r'Standard row order');
###Output
_____no_output_____
###Markdown
Therefore, the NB-matrix of the graph after removing the node is exactly equal to `B'`, the top-left block, but we did not need to recompute the new order of rows and columns. Finally, the `X` matrix is defined as the product of `D`, `F`, `E`. In fact, it can be computed directly by using `return_all=False`,
###Code
X = D.dot(F).dot(E)
# Use return_all=False to get only the X matrix
X2 = inbox.x_matrix(graph, remove_node=node_to_remove, return_all=False)
np.allclose(X2.A, X.A)
###Output
_____no_output_____
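###Markdown
As a sanity check (an addition to the original text), the claim that the block `B` equals the NB-matrix of the graph with the node removed can be verified spectrally, which sidesteps having to match the row order explicitly.
###Code
# The leading eigenvalue of the block B should match that of the NB-matrix of the
# reduced graph (assumes both are sparse matrices accepted by sparse.linalg.eigs).
reduced = graph.copy()
reduced.remove_node(node_to_remove)
lead = lambda M: sparse.linalg.eigs(M.asfptype(), k=1, return_eigenvectors=False)[0].real
print(np.isclose(lead(B), lead(inbox.nb_matrix(reduced))))
###Output
_____no_output_____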
###Markdown
Node addition When adding a new node to the graph, the NB-matrix can be put in a similar block form as before. In this case, `F` is a whole new block of the new matrix, `E` is indexed in the rows by the newly added directed edges, and the same is true for the columns of `D`. The NB-matrix of the graph after node addition is $\begin{bmatrix} B & D \\ E & F \end{bmatrix}$, where `B` is the NB-matrix of the original graph before node addition. `x_matrix` can also compute the blocks in this case, by specifying the neighbors of the node to be added rather than a node to be removed. This is done via `add_neighbors`.
###Code
B, D, E, F = inbox.x_matrix(graph, add_neighbors=[0, 13, 30], return_all=True)
X = D.dot(F).dot(E)
X2 = inbox.x_matrix(graph, add_neighbors=[0, 13, 30], return_all=False)
np.allclose(X.A, X2.A)
###Output
_____no_output_____
###Markdown
Note that `x_matrix` function never adds or removes a node from the graph, but only returns the `X` matrix, or the blocks `B`, `D`, `E`, `F` in an appropriate row order. ----- Centralities `inbox` contains functionality to compute several centrality measures. Notably, it can compute X-degree and X-NB centrality. To compute these, it is always better to use the following functions, rather than computing the `X` matrix and directly operating with it. X-NB centrality The first is X-NB centrality, which uses the `X` matrix from above. It is an aggregation of the NB-centralities of a node's neighbors. For details, see the paper.
###Code
xnb_cent = inbox.x_nb_centrality(graph)
###Output
_____no_output_____
###Markdown
X-degree centrality The second is X-degree centrality, which is an aggregation of a node's neighbors' degrees.
###Code
xdeg_cent = inbox.x_degree(graph)
###Output
_____no_output_____
###Markdown
General X-centrality One can also compute arbitrary centrality measures using the `X` matrix. If `vector` contains a centrality value for each directed edge (with elements sorted in the same row order as the NB-matrix), then one can use the following to transform these values into node centralities,
###Code
directed_edge_centralities = np.random.random(size=2*graph.size())
x_cent = inbox.x_centrality(graph, directed_edge_centralities)
###Output
_____no_output_____
###Markdown
NB centrality NB-centrality was first proposed by [[1](ref-1)] as an alternative to the standard eigenvector centrality that is more robust to localization.
###Code
nb_cent = inbox.nb_centrality(graph)
###Output
_____no_output_____
###Markdown
NB-centrality is by default normalized in an appropriate way (see paper for details). An unnormalized version is also available. The unnormalized version is slightly more efficient to compute (in the order of $O(n)$).
###Code
nb_cent_unnormalized = inbox.nb_centrality(graph, normalized=False)
###Output
_____no_output_____
###Markdown
Note: in the course of computing `nb_centrality`, the leading eigenvalue of the NB-matrix of the graph is computed, and can also be returned by using the option `return_eigenvalue=True`. Collective Influence Finally, collective influence was proposed by [[3](ref-3)] as another centrality measure based on the NB-matrix. `inbox` considers only the immediate neighbors of a node to compute its collective influence, but generalizations are possible.
###Code
ci_cent = inbox.collective_influence(graph)
###Output
_____no_output_____
###Markdown
Visualization When putting together all centrality measures we get the following picture of the network.
###Code
# As a baseline, also show degree
deg_cent = dict(graph.degree())
scatter = lambda c, l: plt.scatter([n for n in graph], [c[n] for n in graph], label=l)
scatter(deg_cent, 'Degree');
scatter(nb_cent, 'NB');
scatter(xnb_cent, 'X-NB');
scatter(xdeg_cent, 'X-deg');
scatter(ci_cent, 'CI');
plt.yscale('symlog');
plt.xlabel('Node label');
plt.ylabel('Centrality');
plt.legend();
###Output
_____no_output_____
###Markdown
As can be seen, they are all highly correlated to each other. CI and X-deg can be computed most efficiently. X-NB and X-deg are the best choices for immunization purposes. ----- Immunization Targeted immunization works by (i) computing a score of each node, (ii) removing the node with the highest score, and (iii) iterating until the target number of nodes has been removed. Importantly, the score has to be recomputed at each step. `inbox` provides functionality to perform targeted immunization using all of the above centrality measures to compute the score. Among these, X-degree and CI are the fastest computationally, though X-NB was observed to be the most effective.
###Code
removed_nodes, new_graph = inbox.immunize(graph, 5, strategy='xdeg')
###Output
_____no_output_____
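###Markdown
For intuition, the iterate-and-recompute loop described above can be sketched naively as follows. This cell is an addition; the real `inbox.immunize` updates the scores incrementally (see the Implementation Notes below) and is much faster.
###Code
# Naive sketch of targeted immunization: recompute X-degree from scratch at every
# step and remove the current maximizer. Ties are broken arbitrarily, as discussed
# in the Implementation Notes below.
def naive_immunize(g, n_remove):
    g = g.copy()
    removed = []
    for _ in range(n_remove):
        scores = inbox.x_degree(g)                       # score every remaining node
        best = max(g.nodes(), key=lambda n: scores[n])   # highest score wins
        g.remove_node(best)
        removed.append(best)
    return removed, g

print(naive_immunize(nx.karate_club_graph(), 5)[0])
###Output
_____no_output_____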
###Markdown
`inbox.immunize` supports the following strategies for computing the score: `deg` (degree), `core` ($k$-core index, or coreness), `nb` (NB centrality), `xnb` (X-NB centrality), `xdeg` (X-degree centrality), `ci` (Collective Influence), `ns` (NetShield). NetShield [[4](ref-4)] is an efficient algorithm based on the adjacency matrix, not on the NB-matrix. In our work, we evaluate the effectiveness of targeted immunization by computing the difference between the leading eigenvalue of the NB-matrix of the graph before and after immunization. We call this difference the eigen-drop.
###Code
eig = lambda g: sparse.linalg.eigs(
inbox.nb_matrix(g, aux=True),
k=1, return_eigenvectors=False,
tol=1e-4)[0].real
eig_before = eig(graph)
all_strategies = ['xdeg', 'ci', 'deg', 'ns', 'core', 'nb', 'xnb']
eigen_drop = {}
for strategy in all_strategies:
_, new_graph = inbox.immunize(graph, 3, strategy=strategy)
eigen_drop[strategy] = eig_before - eig(new_graph)
###Output
_____no_output_____
###Markdown
A larger eigen-drop means more efficient immunization:
###Code
order = sorted(eigen_drop, key=eigen_drop.get, reverse=True)
for s in order:
print('{}\teigen-drop: {:.3f}.'.format(s, eigen_drop[s]))
###Output
xdeg eigen-drop: 3.229.
xnb eigen-drop: 3.229.
nb eigen-drop: 3.229.
ns eigen-drop: 3.229.
ci eigen-drop: 3.229.
deg eigen-drop: 2.554.
core eigen-drop: 1.196.
###Markdown
A few strategies achieve the same eigen-drop because they are identifying the exact same nodes for removal (possibly in different order). However, in a more involved experiment below, using Barabasi-Albert networks, the strategies start to differ, as shown by the average eigen-drop. **Warning: the following cell may take several minutes to compute**.
###Code
import os
from multiprocessing import Pool
def run_all(idx):
graph = nx.barabasi_albert_graph(1000, 4)
eig_before = eig(graph)
eigen_drop = {s: 0 for s in all_strategies}
for strategy in all_strategies:
_, new_graph = inbox.immunize(graph, 10, strategy=strategy)
eigen_drop[strategy] = eig_before - eig(new_graph)
return eigen_drop
num_graphs = 30
with Pool(processes=os.cpu_count() - 1) as pool:
results = pool.map(run_all, range(num_graphs))
eigen_drop = {s: sum(r[s] for r in results) / num_graphs for s in all_strategies}
order = sorted(eigen_drop, key=eigen_drop.get, reverse=True)
print('Strategy\tAverage eigen-drop')
for s in order:
print('{}\t\t{:.3f}'.format(s, eigen_drop[s]))
###Output
Strategy Average eigen-drop
nb 5.390
xnb 5.384
xdeg 5.383
ci 5.375
deg 5.324
ns 5.313
core 4.355
###Markdown
Minimum degree of nodes for immunization Nodes of degree 1 always have a zero value of X-degree, X-NB centrality, NB-centrality, and Collective Influence. Therefore, they will never be picked for immunization. For this purpose, `inbox` allows the user to specify the minimum degree of nodes to be considered. When using the aforementioned strategies, this is always faster and will yield the same output. For other strategies, this is always faster though the output may differ.
###Code
# Remove nodes of degree 0 or 1
%timeit inbox.immunize(nx.barabasi_albert_graph(1000, 4), 10, min_deg=2)
# Faster: remove nodes of degree less than 8
%timeit inbox.immunize(nx.barabasi_albert_graph(1000, 4), 10, min_deg=8)
###Output
52.6 ms ± 16.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
----- Implementation Notes Here we document some implementation details as well as unexpected behavior or known bugs. Immunization: Queues and dictionaries `inbox.immunization` provides two different versions for the strategies `deg`, `ci`, and `xdeg`. One uses an indexed priority queue to store and update the values at each iteration, while the other uses a standard python dictionary. The dictionary version has a better worst-case scenario runtime, while the queue version was observed to be faster in practice. The queue version is the default, though one can use the dictionary version by setting the parameter `queue` to `False`. See the paper for more details on the runtime complexity.
###Code
print(inbox.immunize(graph, 5, strategy='xdeg', queue=True)[0])
print(inbox.immunize(graph, 5, strategy='xdeg', queue=False)[0])
###Output
[2, 33, 0, 30, 23]
[2, 33, 0, 30, 23]
###Markdown
Immunization: Tie Breaking Ties are broken arbitrarily, i.e., when immunizing using strategy `xdeg`, if two nodes have the exact same value of X-degree, either one can be chosen for immunization, and **it is not guaranteed that the same node will be chosen when running the same algorithm twice**. In particular, using the queue or dictionary versions of `deg`, `ci`, or `xdeg` may yield different results, as the underlying data structures may break ties in different ways. In the example below, the first 8 nodes are removed in the same order by the queue and dictionary versions.
###Code
graph = nx.karate_club_graph()
print(inbox.immunize(graph, 8, strategy='xdeg', queue=True)[0])
print(inbox.immunize(graph, 8, strategy='xdeg', queue=False)[0])
###Output
[2, 33, 0, 30, 23, 31, 7, 13]
[2, 33, 0, 30, 23, 31, 7, 13]
###Markdown
However, the ninth node removed is different.
###Code
immunized = graph.copy()
immunized.remove_nodes_from([2, 33, 0, 30, 23, 31, 7, 13])
print(inbox.immunize(immunized, 1, strategy='xdeg', queue=True)[0])
print(inbox.immunize(immunized, 1, strategy='xdeg', queue=False)[0])
###Output
[6]
[5]
###Markdown
This occurs because the nodes 5 and 6 have the same X-degree centrality after removing the first eight nodes. The queue and map break the tie differently.
###Code
xdeg = inbox.x_degree(immunized)
print(xdeg[5], xdeg[6])
###Output
10 10
###Markdown
Further, removing either node 5 or node 6 has different impact on the X-degree of remaining nodes. Accordingly, the nodes removed thereafter are different.
###Code
print(inbox.immunize(immunized, 3, strategy='xdeg', queue=True)[0])
print(inbox.immunize(immunized, 3, strategy='xdeg', queue=False)[0])
###Output
[6, 10, 16]
[5, 32, 1]
###Markdown
In the Karate Club case, this does not have a large impact on the final result, and we do not foresee this becoming a problem for larger graphs either. A deep analysis of tie-breaking strategies is out of scope at this time. Centrality: Connected Components Matrix computations in `inbox` should work for graphs with multiple connected components. The largest eigenvalue, and the corresponding eigenvector which in turn determines NB-centrality and X-NB centrality, always corresponds to the largest component. However, there is one case to be aware of. When the graph has two connected components **whose 2-cores are isomorphic**, the principal eigenvector is no longer well-defined. In particular, the NB-centrality and X-NB centralities are no longer well-defined. However, this will happen only in the rarest of cases. Surprisingly, it does occur when immunizing the Karate Club graph.
###Code
graph = nx.karate_club_graph()
_, immunized = inbox.immunize(graph, 5, strategy='xnb')
no_isolates = immunized.subgraph(n for n in immunized if immunized.degree(n) > 0)
_, axes = plt.subplots(1, 2, figsize=(12, 5))
nx.draw(no_isolates, node_size=30, ax=axes[0], with_labels=False,
node_color=['k', 'b', 'b', 'b', 'k', 'k', 'b', 'k', 'k', 'b', 'b', 'b', 'b', 'k', 'b', 'k', 'k', 'k', 'b'])
axes[0].set_title('Karate Club after immunizing\n5 nodes by X-NB centrality\n2-core of two largest components in blue\n(Isolate nodes not shown)');
nx.draw(nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (4, 1)]), node_size=100, ax=axes[1], node_color='b')
axes[1].set_title('Note the two largest components\nhave 2-cores isomorphic to this graph');
plt.subplots_adjust(wspace=0.8)
###Output
_____no_output_____
###Markdown
The above plot shows that the 2-cores of the two largest components of the Karate Club graph after immunizing 5 nodes using strategy `xnb` are isomorphic. In this case, both components determine the largest eigenvalues of the NB-matrix, and the principal eigenvector is no longer well-defined. Accordingly, computing the NB-centrality or X-NB-centrality of the graph is no longer supported (the behavior is undefined). In fact, computing the NB-centrality of a node in this pathological case may return different values.
###Code
print(inbox.nb_centrality(immunized)[4],
inbox.nb_centrality(immunized)[4])
###Output
-0.5334363446339787 0.6467638885214784
###Markdown
And ditto for X-NB centrality.
###Code
print(inbox.x_nb_centrality(immunized)[4],
inbox.x_nb_centrality(immunized)[4])
###Output
0.7210095375238408 0.20176071663935585
###Markdown
Note that in this case, only the `nb` and `xnb` strategies are affected. The Degree, Core, Collective Influence, and X-degree values are well defined in all cases and there is no problem in continuing to immunize the graph with the corresponding strategies.
###Code
print(inbox.x_degree(immunized)[4],
inbox.x_degree(immunized)[4])
###Output
4 4
###Markdown
Examples
- api.py: contains Colectica API methods
- colectica.py: contains functions to retrieve items using the Colectica API methods

Some scripts:
- get_question_groups.py: get all question groups
- instrument_to_dict.py: pull raw JSON-formatted items
###Code
import colectica
from colectica import ColecticaObject
import api
import pprint
import pandas as pd
pp = pprint.PrettyPrinter(depth=4)
hostname = "discovery-pp.closer.ac.uk"
username = None
password = None
if not hostname:
hostname = input ("enter the url of the site: ")
if not username:
username = input("enter your username: ")
if not password:
password = input("enter your password: ")
C = ColecticaObject(hostname, username, password)
# Instrument
agency = "uk.cls.nextsteps"
Id_instrument = "a6f96245-5c00-4ad3-89e9-79afaefa0c28"
df_instrument, instrument_info = C.item_info_set(agency, Id_instrument)
print(df_instrument.head(2))
pp.pprint(instrument_info)
# Mode of Data Collection for a study
mode = C.item_to_dict('uk.cls.bcs70', 'f3a09755-23db-45df-bab3-387f1fa66790')
pp.pprint(mode)
# all question group
r = C.general_search('5cc915a1-23c9-4487-9613-779c62f8c205', '')
print(r['TotalResults'])
pp.pprint(r['Results'][0])
###Output
/usr/lib/python3.9/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 'discovery-pp.closer.ac.uk'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
warnings.warn(
###Markdown
Example how to use bslib for SPI calculation
###Code
from openbatlib import controller
from openbatlib import model
import numpy as np
###Output
_____no_output_____
###Markdown
Choose system and start simulation
###Code
c = controller.Controller()
c.sim(system="H", ref_case="2", dt=1)
###Output
_____no_output_____
###Markdown
Show results
###Code
c.print_E()
###Output
Name MWh
El 9.3944
Epv 10.3806
Ebatin 2.5485
Ebatout 2.4687
Eac2g 4.8544
Eg2ac 4.5598
Eg2l 4.4519
Eperi 0.0312
Ect 0.1563
Epvs 10.1415
Eac2bs 2.7531
Ebs2ac 2.3005
Epvs2l 2.6692
Epvs2bs 2.6451
Eg2bs 0.1079
Epvs2g 4.8272
Ebs2l 2.2734
Ebs2g 0.0272
###Markdown
Basic Usages To Create a Line Plot
###Code
pt.lines_from_csv('examples/example_data.csv').draw()
###Output
_____no_output_____
###Markdown
To Set Labels
###Code
pt.lines_from_csv('examples/example_data.csv') \
.x_label('Time (seconds)') \
.x_label_size(15) \
.y_label('Performance') \
.y_label_size(15) \
.draw()
###Output
_____no_output_____
###Markdown
To Show Legend
###Code
pt.lines_from_csv('examples/example_data.csv') \
.show_legend() \
.draw()
###Output
_____no_output_____
###Markdown
To Pull the Legend Out
###Code
pt.lines_from_csv('examples/example_data.csv') \
.legend_out() \
.draw()
###Output
_____no_output_____
###Markdown
To Change Colors of the Lines
###Code
pt.lines_from_csv('examples/example_data.csv') \
.colors(['#008828', '#121259', '#df5349']) \
.legend_out() \
.draw()
###Output
_____no_output_____
###Markdown
To Add Markers to the Lines To check marker options, please see [the documentation of matplotlib](https://matplotlib.org/3.2.2/api/markers_api.html).
###Code
pt.lines_from_csv('examples/example_data.csv') \
.markers(['o', '^', 's']) \
.legend_out() \
.draw()
###Output
_____no_output_____
###Markdown
To Change Line Styles To check style options, please see [the documentation of matplotlib](https://matplotlib.org/gallery/lines_bars_and_markers/line_styles_reference.html).
###Code
pt.lines_from_csv('examples/example_data.csv') \
.line_styles(['--', ':', '-.']) \
.legend_out() \
.draw()
###Output
_____no_output_____
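###Markdown
The options above can also be combined in a single chain. The cell below is an addition to the examples and assumes the fluent setters compose freely, since each one is shown above returning the plot object.
###Code
pt.lines_from_csv('examples/example_data.csv') \
    .colors(['#008828', '#121259', '#df5349']) \
    .markers(['o', '^', 's']) \
    .line_styles(['--', ':', '-.']) \
    .x_label('Time (seconds)') \
    .y_label('Performance') \
    .legend_out() \
    .draw()
###Output
_____no_output_____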
###Markdown
The random style persists until another WB_Augmenter object is initialized
###Code
print(transform.style)
img_out= transform(img)
print(type(img_out))
plt.imshow(img_out)
###Output
<class 'PIL.Image.Image'>
###Markdown
The goal of this notebook is to create a flow for a data scientist to be able to:
- [x] Create a wallet, see balances
- [x] Get tokens from a faucet using links (the faucets appear to discourage automated methods)
- [ ] Search for a dataset on Ocean
- [ ] Download the dataset

Original notebook found here: https://github.com/AlgoveraAI/generative-art/blob/main/notebooks/1-cryptopunks-dataset.ipynb
IPFS code is found here: https://docs.ipfs.io/how-to/command-line-quick-start/take-your-node-online
###Code
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Create IPFS Node
###Code
# Note: each `!` command runs in its own subshell, so `cd` would not persist between lines.
!curl -O https://dist.ipfs.io/go-ipfs/v0.11.0/go-ipfs_v0.11.0_darwin-amd64.tar.gz
!tar -xvzf go-ipfs_v0.11.0_darwin-amd64.tar.gz
!bash go-ipfs/install.sh
!ipfs --version
!ipfs init
!ipfs cat /ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc/readme
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 23.1M 100 23.1M 0 0 13.2M 0 0:00:01 0:00:01 --:--:-- 12.9M 0:00:01 0:00:01 --:--:-- 13.2M
tar: Error opening archive: Failed to open 'ipfs/go-ipfs_v0.11.0_darwin-amd64.tar.gz'
zsh:cd:1: no such file or directory: go-ipfs
bash: go-ipfs/install.sh: No such file or directory
ipfs version 0.11.0
Error: ipfs daemon is running. please stop it to run this command
Use 'ipfs init --help' for information about this command
Hello and Welcome to IPFS!
██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗ ███████╗
██║██╔═══╝ ██╔══╝ ╚════██║
██║██║ ██║ ███████║
╚═╝╚═╝ ╚═╝ ╚══════╝
If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!
-------------------------------------------------------
| Warning: |
| This is alpha software. Use at your own discretion! |
| Much is missing or lacking polish. There are bugs. |
| Not yet secure. Read the security notes for more. |
-------------------------------------------------------
Check out some of the other files in this directory:
./about
./help
./quick-start <-- usage examples
./readme <-- this file
./security-notes
###Markdown
Starting the IPFS Node (must complete before proceeding)
* Create a terminal window
* Navigate to File -> New -> Terminal
* Run **ipfs daemon** to start the IPFS node initialized above

IPFS
###Code
from dataset.exdataset import Datasets
from storage.ipfs import IPFS
from datamarket.ocean import Ocean
from wallet.ethwallet import Wallet
Ocean.get_example_datasets()
wallet = Wallet.create_wallet()
#Get and create samples
Datasets.create_np_ones_file(10,10,fn="data/numpyarray.txt")
Datasets.create_test_file(10,fn="data/numpyarray.dat")
example_image_hash = Datasets.get_example_image_hash()
# a = IPFS.add("nparray.txt")
file_hash = IPFS.get_file(example_image_hash)
###Output
Retrieved file hash QmSgvgwxZGaBLqkGyWemEDqikCqU52XxsYLKtdy3vGZ8uq from IPFS - Response 200
###Markdown
Need Help Here
###Code
# I am struggling to figure out how to read the bytes that are returned from get_file
Image(file_hash.content)
import numpy as np
# file_hash is an HTTP response object; its raw bytes live in file_hash.content,
# so write them in binary mode.
with open("file.txt", "wb") as f:
    f.write(file_hash.content)
# np.load only works if the retrieved file is actually a NumPy array;
# the example hash used above points to an image.
np.load("file.txt", allow_pickle=True)
import pandas as pd
#Other Live Peers
peers = IPFS.get_peers()
df = pd.json_normalize(pd.DataFrame.from_dict(peers)["Peers"])
df
###Output
_____no_output_____
###Markdown
Used https://flyingzumwalt.gitbooks.io/decentralized-web-primer/content/files-on-ipfs/lessons/add-and-retrieve-file-content.html Add and retrieve file from IPFS
###Code
# Don't proceed
assert 1 == 0
###Output
_____no_output_____
###Markdown
Dat - https://github.com/hypercore-protocol/cli i hyp daemon
###Code
# !npm install -g @hyperspace/cli
###Output
npm WARN deprecated dat-encoding@5.0.2: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
npm WARN deprecated debug@3.2.6: Debug versions >=3.2.0 <3.2.7 || >=4 <4.3.1 have a low-severity ReDos regression when used in a Node.js environment. It is recommended you upgrade to 3.2.7 or 4.3.1. (https://github.com/visionmedia/debug/issues/797)
npm WARN deprecated fsevents@2.1.3: "Please update to latest v2.3 or v2.2"
npm WARN deprecated cross-spawn-async@2.2.5: cross-spawn no longer requires a build toolchain, use it instead
npm WARN checkPermissions Missing write access to /usr/local/lib/node_modules
npm ERR! code EACCES
npm ERR! syscall access
npm ERR! path /usr/local/lib/node_modules
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, access '/usr/local/lib/node_modules'
npm ERR! [Error: EACCES: permission denied, access '/usr/local/lib/node_modules'] {
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'access',
npm ERR!   path: '/usr/local/lib/node_modules'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/adamgoldstein/.npm/_logs/2022-01-12T03_25_10_376Z-debug.log
###Markdown
Setup If you are running this generator locally (i.e. in a Jupyter notebook in conda), just make sure you have installed:
- RDKit
- DeepChem 2.5.0 & above
- TensorFlow 2.4.0 & above

Then, please skip the following part and continue from `Data Preparations`. To increase efficiency, we recommend running this molecule generator in Colab. In that case, we'll first need to run the following lines of code, which download conda with the DeepChem environment in Colab.
###Code
#!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
#import conda_installer
#conda_installer.install()
#!/root/miniconda/bin/conda info -e
#!pip install --pre deepchem
#import deepchem
#deepchem.__version__
###Output
_____no_output_____
###Markdown
Data PreparationsNow we are ready to import some useful functions/packages, along with our model. Import Data
###Code
import model##our model
from rdkit import Chem
from rdkit.Chem import AllChem
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import deepchem as dc
###Output
_____no_output_____
###Markdown
Then, we are ready to import our dataset for training. Here, for demonstration, we'll be using this dataset of an in-vitro assay that detects inhibition of the SARS-CoV 3CL protease via fluorescence. The dataset is originally from [PubChem AID1706](https://pubchem.ncbi.nlm.nih.gov/bioassay/1706), previously processed by the [JClinic AIcure](https://www.aicures.mit.edu/) team at MIT into this [binarized label form](https://github.com/yangkevin2/coronavirus_data/blob/master/data/AID1706_binarized_sars.csv).
###Code
df = pd.read_csv('AID1706_binarized_sars.csv')
###Output
_____no_output_____
###Markdown
Observe the data above: it contains a 'smiles' column, which holds the SMILES representation of the molecules, and an 'activity' column, which is the label specifying whether that molecule is considered a hit for the protein. Here, we only need the 405 molecules considered as hits, and we'll be extracting features from them to generate new molecules that may also be hits.
###Code
true = df[df['activity']==1].copy()  # .copy() avoids pandas SettingWithCopyWarning when we add a column later
###Output
_____no_output_____
###Markdown
Set Minimum Length for molecules Since we'll be using a graph neural network, it is more helpful and efficient if our graph data are of the same size; thus, we'll eliminate the molecules from the training set that are shorter (i.e. lacking enough atoms) than our desired minimum size.
###Code
num_atoms = 6  # here the minimum length of molecules is 6
input_df = true['smiles']
df_length = []
for smiles in input_df:
    df_length.append(Chem.MolFromSmiles(smiles).GetNumAtoms())
true['length'] = df_length  # create a new column containing each molecule's length
true = true[true['length'] > num_atoms]  # here we keep only the molecules longer than 6 atoms
input_df = true['smiles']
input_df_smiles = input_df.apply(Chem.MolFromSmiles)  # convert the smiles representations into rdkit molecules
###Output
_____no_output_____
###Markdown
Now, we are ready to apply the `featurizer` function to our molecules to convert them into graphs with nodes and edges for training.
###Code
#input_df = input_df.apply(Chem.MolFromSmiles)
train_set = input_df_smiles.apply( lambda x: model.featurizer(x,max_length = num_atoms))
train_set
###Output
_____no_output_____
###Markdown
We'll take one more step to make the train_set into separate nodes and edges, which fits the format later to supply to the model for training
###Code
nodes_train, edges_train = list(zip(*train_set) )
###Output
_____no_output_____
###Markdown
Training Now, we're finally ready to generate new molecules. We'll first import some necessary functions from TensorFlow.
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
The network here we'll be using is Generative Adversarial Network, as mentioned in the project introduction. Here's a great [introduction](https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/). ![Screen Shot 2021-06-08 at 7 40 49 PM](https://user-images.githubusercontent.com/67823308/121178738-709bbd80-c891-11eb-91dc-d45e69f8f4d5.png) Here we'll first initiate a discriminator and a generator model with the corresponding functions in the package.
###Code
disc = model.make_discriminator(num_atoms)
gene = model.make_generator(num_atoms, noise_input_shape = 100)
###Output
_____no_output_____
###Markdown
Then, with the `train_batch` function, we'll supply the necessary inputs and train our network. After some experimentation, around 160 epochs works well for this dataset.
###Code
generator_trained = model.train_batch(
disc, gene,
np.array(nodes_train), np.array(edges_train),
noise_input_shape = 100, EPOCH = 160, BATCHSIZE = 2,
plot_hist = True, temp_result = False
)
###Output
>0, d1=0.221, d2=0.833 g=0.681, a1=100, a2=0
>1, d1=0.054, d2=0.714 g=0.569, a1=100, a2=0
>2, d1=0.026, d2=0.725 g=0.631, a1=100, a2=0
>3, d1=0.016, d2=0.894 g=0.636, a1=100, a2=0
>4, d1=0.016, d2=0.920 g=0.612, a1=100, a2=0
>5, d1=0.012, d2=0.789 g=0.684, a1=100, a2=0
>6, d1=0.014, d2=0.733 g=0.622, a1=100, a2=0
>7, d1=0.056, d2=0.671 g=0.798, a1=100, a2=100
>8, d1=0.029, d2=0.587 g=0.653, a1=100, a2=100
>9, d1=0.133, d2=0.537 g=0.753, a1=100, a2=100
>10, d1=0.049, d2=0.640 g=0.839, a1=100, a2=100
>11, d1=0.056, d2=0.789 g=0.836, a1=100, a2=0
>12, d1=0.086, d2=0.564 g=0.916, a1=100, a2=100
>13, d1=0.067, d2=0.550 g=0.963, a1=100, a2=100
>14, d1=0.062, d2=0.575 g=0.940, a1=100, a2=100
>15, d1=0.053, d2=0.534 g=1.019, a1=100, a2=100
>16, d1=0.179, d2=0.594 g=1.087, a1=100, a2=100
>17, d1=0.084, d2=0.471 g=0.987, a1=100, a2=100
>18, d1=0.052, d2=0.366 g=1.226, a1=100, a2=100
>19, d1=0.065, d2=0.404 g=1.220, a1=100, a2=100
>20, d1=0.044, d2=0.311 g=1.274, a1=100, a2=100
>21, d1=0.015, d2=0.231 g=1.567, a1=100, a2=100
>22, d1=0.010, d2=0.222 g=1.838, a1=100, a2=100
>23, d1=0.007, d2=0.177 g=1.903, a1=100, a2=100
>24, d1=0.004, d2=0.139 g=2.155, a1=100, a2=100
>25, d1=0.132, d2=0.111 g=2.316, a1=100, a2=100
>26, d1=0.004, d2=0.139 g=2.556, a1=100, a2=100
>27, d1=0.266, d2=0.133 g=2.131, a1=100, a2=100
>28, d1=0.001, d2=0.199 g=2.211, a1=100, a2=100
>29, d1=0.000, d2=0.252 g=2.585, a1=100, a2=100
>30, d1=0.000, d2=0.187 g=2.543, a1=100, a2=100
>31, d1=0.002, d2=0.081 g=2.454, a1=100, a2=100
>32, d1=0.171, d2=0.061 g=2.837, a1=100, a2=100
>33, d1=0.028, d2=0.045 g=2.858, a1=100, a2=100
>34, d1=0.011, d2=0.072 g=2.627, a1=100, a2=100
>35, d1=2.599, d2=0.115 g=1.308, a1=0, a2=100
>36, d1=0.000, d2=0.505 g=0.549, a1=100, a2=100
>37, d1=0.000, d2=1.463 g=0.292, a1=100, a2=0
>38, d1=0.002, d2=1.086 g=0.689, a1=100, a2=0
>39, d1=0.153, d2=0.643 g=0.861, a1=100, a2=100
>40, d1=0.000, d2=0.353 g=1.862, a1=100, a2=100
>41, d1=0.034, d2=0.143 g=2.683, a1=100, a2=100
>42, d1=0.003, d2=0.110 g=2.784, a1=100, a2=100
>43, d1=0.093, d2=0.058 g=2.977, a1=100, a2=100
>44, d1=0.046, d2=0.051 g=3.051, a1=100, a2=100
>45, d1=0.185, d2=0.062 g=2.922, a1=100, a2=100
>46, d1=0.097, d2=0.070 g=2.670, a1=100, a2=100
>47, d1=0.060, d2=0.073 g=2.444, a1=100, a2=100
>48, d1=0.093, d2=0.156 g=2.385, a1=100, a2=100
>49, d1=0.785, d2=0.346 g=1.026, a1=0, a2=100
>50, d1=0.057, d2=0.869 g=0.667, a1=100, a2=0
>51, d1=0.002, d2=1.001 g=0.564, a1=100, a2=0
>52, d1=0.000, d2=0.764 g=1.047, a1=100, a2=0
>53, d1=0.010, d2=0.362 g=1.586, a1=100, a2=100
>54, d1=0.033, d2=0.230 g=2.469, a1=100, a2=100
>55, d1=0.179, d2=0.134 g=2.554, a1=100, a2=100
>56, d1=0.459, d2=0.103 g=2.356, a1=100, a2=100
>57, d1=0.245, d2=0.185 g=1.769, a1=100, a2=100
>58, d1=0.014, d2=0.227 g=1.229, a1=100, a2=100
>59, d1=0.016, d2=0.699 g=0.882, a1=100, a2=0
>60, d1=0.002, d2=0.534 g=1.192, a1=100, a2=100
>61, d1=0.010, d2=0.335 g=1.630, a1=100, a2=100
>62, d1=0.019, d2=0.283 g=2.246, a1=100, a2=100
>63, d1=0.240, d2=0.132 g=2.547, a1=100, a2=100
>64, d1=0.965, d2=0.219 g=1.534, a1=0, a2=100
>65, d1=0.040, d2=0.529 g=0.950, a1=100, a2=100
>66, d1=0.012, d2=0.611 g=0.978, a1=100, a2=100
>67, d1=0.015, d2=0.576 g=1.311, a1=100, a2=100
>68, d1=0.102, d2=0.214 g=1.840, a1=100, a2=100
>69, d1=0.020, d2=0.140 g=2.544, a1=100, a2=100
>70, d1=5.089, d2=0.314 g=1.231, a1=0, a2=100
>71, d1=0.026, d2=0.700 g=0.556, a1=100, a2=0
>72, d1=0.005, d2=1.299 g=0.460, a1=100, a2=0
>73, d1=0.009, d2=1.033 g=0.791, a1=100, a2=0
>74, d1=0.013, d2=0.343 g=1.408, a1=100, a2=100
>75, d1=0.247, d2=0.267 g=1.740, a1=100, a2=100
>76, d1=0.184, d2=0.172 g=2.105, a1=100, a2=100
>77, d1=0.150, d2=0.133 g=2.297, a1=100, a2=100
>78, d1=0.589, d2=0.112 g=2.557, a1=100, a2=100
>79, d1=0.477, d2=0.232 g=1.474, a1=100, a2=100
>80, d1=0.173, d2=0.360 g=1.034, a1=100, a2=100
>81, d1=0.052, d2=0.790 g=0.936, a1=100, a2=0
>82, d1=0.042, d2=0.537 g=1.135, a1=100, a2=100
>83, d1=0.296, d2=0.363 g=1.152, a1=100, a2=100
>84, d1=0.157, d2=0.377 g=1.283, a1=100, a2=100
>85, d1=0.139, d2=0.436 g=1.445, a1=100, a2=100
>86, d1=0.163, d2=0.343 g=1.370, a1=100, a2=100
>87, d1=0.189, d2=0.290 g=1.576, a1=100, a2=100
>88, d1=1.223, d2=0.548 g=0.822, a1=0, a2=100
>89, d1=0.016, d2=1.042 g=0.499, a1=100, a2=0
>90, d1=0.013, d2=1.033 g=0.829, a1=100, a2=0
>91, d1=0.006, d2=0.589 g=1.421, a1=100, a2=100
>92, d1=0.054, d2=0.160 g=2.414, a1=100, a2=100
>93, d1=0.214, d2=0.070 g=3.094, a1=100, a2=100
>94, d1=0.445, d2=0.089 g=2.564, a1=100, a2=100
>95, d1=2.902, d2=0.180 g=1.358, a1=0, a2=100
>96, d1=0.485, d2=0.684 g=0.625, a1=100, a2=100
>97, d1=0.287, d2=1.296 g=0.405, a1=100, a2=0
>98, d1=0.159, d2=1.149 g=0.689, a1=100, a2=0
>99, d1=0.021, d2=0.557 g=1.405, a1=100, a2=100
>100, d1=0.319, d2=0.243 g=1.905, a1=100, a2=100
>101, d1=0.811, d2=0.241 g=1.523, a1=0, a2=100
>102, d1=0.469, d2=0.439 g=0.987, a1=100, a2=100
>103, d1=0.073, d2=0.760 g=0.698, a1=100, a2=0
>104, d1=0.040, d2=0.762 g=0.869, a1=100, a2=0
>105, d1=0.073, d2=0.444 g=1.453, a1=100, a2=100
>106, d1=0.455, d2=0.272 g=1.632, a1=100, a2=100
>107, d1=0.320, d2=0.365 g=1.416, a1=100, a2=100
>108, d1=0.245, d2=0.409 g=1.245, a1=100, a2=100
>109, d1=0.258, d2=0.572 g=1.146, a1=100, a2=100
>110, d1=0.120, d2=0.447 g=1.538, a1=100, a2=100
>111, d1=2.707, d2=0.376 g=1.343, a1=0, a2=100
>112, d1=3.112, d2=0.604 g=0.873, a1=0, a2=100
>113, d1=0.107, d2=0.750 g=0.873, a1=100, a2=0
>114, d1=0.284, d2=0.682 g=0.905, a1=100, a2=100
>115, d1=1.768, d2=0.717 g=0.824, a1=0, a2=0
>116, d1=0.530, d2=0.822 g=0.560, a1=100, a2=0
>117, d1=0.424, d2=0.984 g=0.613, a1=100, a2=0
>118, d1=1.608, d2=1.398 g=0.244, a1=0, a2=0
>119, d1=4.422, d2=2.402 g=0.135, a1=0, a2=0
>120, d1=0.011, d2=1.998 g=0.321, a1=100, a2=0
>121, d1=0.085, d2=1.066 g=0.815, a1=100, a2=0
>122, d1=0.895, d2=0.444 g=1.495, a1=0, a2=100
>123, d1=2.659, d2=0.288 g=1.417, a1=0, a2=100
>124, d1=1.780, d2=0.450 g=0.869, a1=0, a2=100
>125, d1=2.271, d2=1.046 g=0.324, a1=0, a2=0
>126, d1=0.836, d2=1.970 g=0.123, a1=0, a2=0
>127, d1=0.108, d2=2.396 g=0.103, a1=100, a2=0
>128, d1=0.146, d2=2.371 g=0.174, a1=100, a2=0
>129, d1=0.189, d2=1.623 g=0.424, a1=100, a2=0
>130, d1=0.508, d2=0.877 g=0.876, a1=100, a2=0
>131, d1=0.723, d2=0.423 g=1.367, a1=0, a2=100
>132, d1=1.306, d2=0.292 g=1.445, a1=0, a2=100
>133, d1=0.920, d2=0.318 g=1.378, a1=0, a2=100
>134, d1=1.120, d2=0.481 g=0.827, a1=0, a2=100
>135, d1=0.278, d2=0.763 g=0.562, a1=100, a2=0
>136, d1=0.134, d2=0.901 g=0.555, a1=100, a2=0
>137, d1=0.061, d2=0.816 g=0.864, a1=100, a2=0
>138, d1=0.057, d2=0.451 g=1.533, a1=100, a2=100
>139, d1=0.111, d2=0.214 g=2.145, a1=100, a2=100
>140, d1=0.260, d2=0.107 g=2.451, a1=100, a2=100
>141, d1=4.498, d2=0.209 g=1.266, a1=0, a2=100
>142, d1=0.016, d2=0.681 g=0.672, a1=100, a2=100
>143, d1=0.007, d2=0.952 g=0.702, a1=100, a2=0
>144, d1=0.008, d2=0.624 g=1.337, a1=100, a2=100
>145, d1=0.010, d2=0.241 g=2.114, a1=100, a2=100
>146, d1=2.108, d2=0.121 g=2.536, a1=0, a2=100
>147, d1=4.086, d2=0.111 g=2.315, a1=0, a2=100
>148, d1=1.247, d2=0.177 g=1.781, a1=0, a2=100
>149, d1=2.684, d2=0.377 g=1.026, a1=0, a2=100
>150, d1=0.572, d2=0.701 g=0.710, a1=100, a2=0
>151, d1=0.608, d2=0.899 g=0.571, a1=100, a2=0
>152, d1=0.118, d2=0.904 g=0.592, a1=100, a2=0
>153, d1=0.228, d2=0.837 g=0.735, a1=100, a2=0
>154, d1=0.353, d2=0.671 g=0.912, a1=100, a2=100
>155, d1=0.959, d2=0.563 g=0.985, a1=0, a2=100
>156, d1=0.427, d2=0.478 g=1.184, a1=100, a2=100
>157, d1=0.307, d2=0.348 g=1.438, a1=100, a2=100
>158, d1=0.488, d2=0.286 g=1.383, a1=100, a2=100
>159, d1=0.264, d2=0.333 g=1.312, a1=100, a2=100
###Markdown
There are two common kinds of failure for a GAN model: mode collapse and failure to converge. Mode collapse means that the generator is unable to produce diverse outcomes. Failure of convergence between the generator and the discriminator can typically be identified by the discriminator loss going to zero, or close to zero. Observe the plots generated above: in the upper plot, the discriminator loss has not gone to zero or close to zero, indicating that the model has likely found a balance between the generator and the discriminator. In the lower plot, the accuracy fluctuates between 1 and 0, indicating possible variability within the generated data. Therefore, it is reasonable to conclude that, within the explored range of epochs and other parameters, the model has avoided the two common types of failure associated with GANs. Rewarding Phase The above `train_batch` function is set to return a trained generator. Thus, we can use that function directly and observe the possible molecules we get from it.
###Code
no, ed = generator_trained(np.random.randint(0,20
, size =(1,100)))#generated nodes and edges
abs(no.numpy()).astype(int).reshape(num_atoms), abs(ed.numpy()).astype(int).reshape(num_atoms,num_atoms)
###Output
_____no_output_____
###Markdown
With the `de_featurizer`, we could convert the generated matrix into a smiles molecule and plot it out=)
###Code
cat, dog = model.de_featurizer(abs(no.numpy()).astype(int).reshape(num_atoms), abs(ed.numpy()).astype(int).reshape(num_atoms,num_atoms))
Chem.MolToSmiles(cat)
Chem.MolFromSmiles(Chem.MolToSmiles(cat))
###Output
RDKit ERROR: [14:09:13] Explicit valence for atom # 1 O, 5, is greater than permitted
###Markdown
Brief Result Analysis
###Code
from rdkit import DataStructs
###Output
_____no_output_____
###Markdown
Using RDKit's fingerprint-similarity function, we'll demonstrate a preliminary analysis of the molecule we've generated. With the "CCO" molecule as a control, we can observe that the newly generated molecule is more similar to a randomly selected molecule (the fourth molecule) from the initial training set. This may indicate that our model has indeed extracted some features from our original dataset and generated a relevant new molecule.
###Code
DataStructs.FingerprintSimilarity(Chem.RDKFingerprint(Chem.MolFromSmiles("[Li]NBBC=N")), Chem.RDKFingerprint(Chem.MolFromSmiles("CCO")))# compare with the control
#compare with one from the original data
DataStructs.FingerprintSimilarity(Chem.RDKFingerprint(Chem.MolFromSmiles("[Li]NBBC=N")), Chem.RDKFingerprint(Chem.MolFromSmiles("CCN1C2=NC(=O)N(C(=O)C2=NC(=N1)C3=CC=CC=C3)C")))
###Output
_____no_output_____
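###Markdown
To go beyond a single comparison, one could compute the similarity of the generated molecule against every molecule in the training set. This cell is an addition to the original analysis and reuses the `input_df` series of SMILES from above.
###Code
# Distribution of Tanimoto similarities between the generated molecule and all training molecules.
gen_fp = Chem.RDKFingerprint(Chem.MolFromSmiles("[Li]NBBC=N"))
similarities = [DataStructs.FingerprintSimilarity(gen_fp, Chem.RDKFingerprint(mol))
                for mol in input_df.apply(Chem.MolFromSmiles)]
plt.hist(similarities, bins=20)
plt.xlabel("Tanimoto similarity to the generated molecule")
plt.ylabel("Count")
plt.show()
###Output
_____no_output_____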
###Markdown
Example of simple use of active learning API Compare 3 query strategies: random sampling, uncertainty sampling, and active search. Observe how we trade off between finding targets and accuracy. Imports
###Code
import warnings
warnings.filterwarnings(action='ignore', category=RuntimeWarning)
from matplotlib import pyplot as plt
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_moons
from sklearn.svm import SVC
import active_learning
from active_learning.utils import *
from active_learning.query_strats import random_sampling, uncertainty_sampling, active_search
%matplotlib inline
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Load toy data Have a little binary classification task that is not linearly separable.
###Code
X, y = make_moons(noise=0.1, n_samples=200)
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
###Output
_____no_output_____
###Markdown
Training Models
###Code
# Our basic classifier will be a SVM with rbf kernel
base_clf = SVC(probability=True)
# size of the initial labeled set
init_L_size = 5
# Make 30 queries
n_queries = 30
# set random state for consistency in training data
random_state = 123
###Output
_____no_output_____
###Markdown
Random Sampling
###Code
random_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=random_sampling,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
###Output
100%|██████████| 30/30 [00:00<00:00, 650.20it/s]
###Markdown
Uncertainty Sampling
###Code
uncertainty_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=uncertainty_sampling,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
###Output
100%|██████████| 30/30 [00:00<00:00, 506.46it/s]
###Markdown
Active Search
###Code
as_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=active_search,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
###Output
100%|██████████| 30/30 [00:10<00:00, 3.00it/s]
###Markdown
Compare
###Code
xx = np.arange(n_queries)
plt.plot(xx, random_experiment_data["accuracy"], label="Random")
plt.plot(xx, uncertainty_experiment_data["accuracy"], label="Uncertainty")
plt.plot(xx, as_experiment_data["accuracy"], label="AS")
plt.title("Accuracy on Test Set vs Num Queries")
plt.ylabel("accuracy")
plt.xlabel("# queries")
plt.legend()
plt.show()
plt.plot(xx, random_experiment_data["history"], label="Random")
plt.plot(xx, uncertainty_experiment_data["history"], label="Uncertainty")
plt.plot(xx, as_experiment_data["history"], label="AS")
plt.title("Number of targets found")
plt.ylabel("# of targets")
plt.xlabel("# queries")
plt.legend()
###Output
_____no_output_____
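###Markdown
The curves above can also be summarized numerically; this small cell is an addition that prints the final accuracy and total targets found for each strategy.
###Code
# Final values behind the two plots above.
for name, result in [("Random", random_experiment_data),
                     ("Uncertainty", uncertainty_experiment_data),
                     ("Active search", as_experiment_data)]:
    print('{}: final accuracy {:.3f}, targets found {}'.format(
        name, result["accuracy"][-1], result["history"][-1]))
###Output
_____no_output_____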
###Markdown
Example of Data Analysis with DCD Hub Data First, we import the Python SDK
###Code
from dcd.entities.thing import Thing
###Output
_____no_output_____
###Markdown
We provide the thing ID and access token (replace with yours)
###Code
from dotenv import load_dotenv
import os
load_dotenv()
THING_ID = os.environ['THING_ID']
THING_TOKEN = os.environ['THING_TOKEN']
###Output
_____no_output_____
###Markdown
We instantiate a Thing with its credential, then we fetch its details
###Code
my_thing = Thing(thing_id=THING_ID, token=THING_TOKEN)
my_thing.read()
###Output
INFO:dcd:things:my-test-thing-9b80:Initialising MQTT connection for Thing 'dcd:things:my-test-thing-9b80'
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): dwd.tudelft.nl:443
DEBUG:urllib3.connectionpool:https://dwd.tudelft.nl:443 "GET /api/things/dcd:things:my-test-thing-9b80 HTTP/1.1" 200 3739
###Markdown
What does a Thing look like?
###Code
my_thing.to_json()
###Output
_____no_output_____
###Markdown
Which property do we want to explore and over which time frame?
###Code
from datetime import datetime
# What dates?
START_DATE = "2019-10-08 21:17:00"
END_DATE = "2019-11-08 21:25:00"
from datetime import datetime
DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
from_ts = datetime.timestamp(datetime.strptime(START_DATE, DATE_FORMAT)) * 1000
to_ts = datetime.timestamp(datetime.strptime(END_DATE, DATE_FORMAT)) * 1000
###Output
_____no_output_____
###Markdown
Let's find this property and read the data.
###Code
PROPERTY_NAME = "IMU"
my_property = my_thing.find_property_by_name(PROPERTY_NAME)
my_property.read(from_ts, to_ts)
###Output
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): dwd.tudelft.nl:443
DEBUG:urllib3.connectionpool:https://dwd.tudelft.nl:443 "GET /api/things/dcd:things:my-test-thing-9b80/properties/imu-dc94?from=1570562220000.0&to=1573244700000.0 HTTP/1.1" 200 294149
###Markdown
How many data points did we get?
###Code
print(len(my_property.values))
###Output
3331
###Markdown
Display values
###Code
my_property.values
###Output
_____no_output_____
###Markdown
From CSV
###Code
from numpy import genfromtxt
import pandas as pd
data = genfromtxt('data.csv', delimiter=',')
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')), columns = ['x', 'y', 'z'])
data_frame
###Output
_____no_output_____
###Markdown
Plot some charts with Matplotlib In this example we plot the three dimensions over time, followed by a histogram showing the distribution of values for each dimension.
###Code
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import numpy as np

data = np.array(my_property.values)

# Plot the three dimensions over time
figure(num=None, figsize=(15, 5))
t = data_frame.index
plt.plot(t, data_frame.x, t, data_frame.y, t, data_frame.z)
plt.show()

# Histogram of the value distribution for each dimension
figure(num=None, figsize=(15, 5))
plt.hist(data[:, 1:])
plt.show()
###Output
_____no_output_____
###Markdown
Generate statistics with NumPy and Pandas
###Code
import numpy as np
from scipy.stats import kurtosis, skew
np.min(data[:,1:4], axis=0)
skew(data[:,1:4])
###Output
_____no_output_____
###Markdown
You can select a column (slice) of data, or a subset of data. In the example below we select the first 10 rows and the columns from 1 onwards (i.e. skipping the first column, which represents the time).
###Code
data[:10,1:]
###Output
_____no_output_____
###Markdown
Out of the box, Pandas gives you some statistics; do not forget to convert your array into a DataFrame first.
###Code
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')))
pd.DataFrame.describe(data_frame)
data_frame.rolling(10).std()
###Output
_____no_output_____
###Markdown
Rolling / Sliding Window To apply statistics on a sliding (or rolling) window, we can use the rolling() function of a data frame. In the examples below, we roll with a 2-second time window to apply std(), and with a window of 100 data points to apply skew().
###Code
rolling2s = data_frame.rolling('2s').std()
plt.plot(rolling2s)
plt.show()
rolling100_data_points = data_frame.rolling(100).skew()
plt.plot(rolling100_data_points)
plt.show()
###Output
_____no_output_____
###Markdown
Zero Crossing
###Code
plt.hist(np.where(np.diff(np.sign(data[:,1]))))
plt.show()
###Output
_____no_output_____
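###Markdown
The histogram above shows where the sign of the first signal column changes. To reduce this to a single zero-crossing count, a small addition (assuming the same `data` array) is enough.
###Code
# Count the zero crossings of the first signal column (column 0 is the timestamp).
zero_crossings = np.count_nonzero(np.diff(np.sign(data[:, 1])))
print(zero_crossings)
###Output
_____no_output_____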
###Markdown
Advanced Lane Finding Project
The goals / steps of this project are the following:
* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a perspective transform to rectify binary image ("birds-eye view").
* Detect lane pixels and fit to find the lane boundary.
* Determine the curvature of the lane and vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image.
* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

--- First, I'll compute the camera calibration using chessboard images
###Code
def cal_undistort(img, objpoints, imgpoints):
    # cv2.calibrateCamera expects the image size as (width, height), so reverse the first two shape entries
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img.shape[1::-1], None, None)
    undist = cv2.undistort(img, mtx, dist, None, mtx)
    return undist
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib inline
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('./camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
cv2.imshow('img',img)
cv2.waitKey(500)
cv2.destroyAllWindows()
###Output
_____no_output_____
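###Markdown
Since `cal_undistort` above re-runs `cv2.calibrateCamera` on every call, it can be worth calibrating once and reusing the result. This cell is an optional addition, not part of the original write-up.
###Code
# Calibrate once against the size of a sample calibration image and keep mtx/dist around.
sample = cv2.imread(images[0])
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, sample.shape[1::-1], None, None)

def undistort_cached(img):
    """Undistort using the calibration computed once above."""
    return cv2.undistort(img, mtx, dist, None, mtx)
###Output
_____no_output_____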
###Markdown
Next, we check the undistortion on a calibration image, then define gradient and color thresholding helpers, a perspective transform, and the start of the lane-pixel search.
###Code
img =cv2.imread("./camera_cal/calibration1.jpg")
img = cal_undistort(img, objpoints, imgpoints)
plt.imshow(img)
def abs_sobel_thresh(img, orient='x', sobel_kernel=3, thresh=(0, 255)):
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# 2) Take the derivative in x or y given orient = 'x' or 'y'
if orient == 'x':
sobel = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
else:
sobel = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
# 3) Take the absolute value of the derivative or gradient
abs_sobel = np.absolute(sobel)
# 4) Scale to 8-bit (0 - 255) then convert to type = np.uint8
scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
# 5) Create a mask of 1's where the scaled gradient magnitude
# is > thresh_min and < thresh_max
binary_output = np.zeros_like(scaled_sobel)
binary_output[(scaled_sobel >= thresh[0]) & (scaled_sobel <= thresh[1])] = 1
# 6) Return this mask as your binary_output image
return binary_output
def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):
# Apply the following steps to img
# 1) Convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
# 2) Take the gradient in x and y separately
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# 3) Take the absolute value of the x and y gradients
abs_soblx = np.absolute(sobelx)
abs_sobly = np.absolute(sobely)
# 4) Use np.arctan2(abs_sobely, abs_sobelx) to calculate the direction of the gradient
direction = np.arctan2(abs_sobly, abs_soblx)
# 5) Create a binary mask where direction thresholds are met
binary_output = np.zeros_like(direction)
binary_output[(direction >= thresh[0]) & (direction <= thresh[1])] = 1
# 6) Return this mask as your binary_output image
return binary_output
def mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)):
# Apply the following steps to img
# 1) Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# 2) Take the gradient in x and y separately
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0,ksize=sobel_kernel)
sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1,ksize=sobel_kernel)
# 3) Calculate the magnitude
    sobelx_sq = np.square(sobelx)
    sobely_sq = np.square(sobely)
    abs_sobelxy = np.sqrt(sobelx_sq + sobely_sq)
# 4) Scale to 8-bit (0 - 255) and convert to type = np.uint8
scaled_sobelxy = np.uint8(255*abs_sobelxy/np.max(abs_sobelxy))
# 5) Create a binary mask where mag thresholds are met
binary_output = np.zeros_like(scaled_sobelxy)
binary_output[(scaled_sobelxy >= mag_thresh[0]) & (scaled_sobelxy <= mag_thresh[1])] = 1
# 6) Return this mask as your binary_output image
return binary_output
def pipeline(img, s_thresh=(170, 255), sx_thresh=(30, 200)):
img = np.copy(img)
# Convert to HLS color space and separate the V channel
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
combined = np.zeros_like(s_binary)
combined[(s_binary == 1) | (sxbinary == 1)] = 1
return color_binary, combined
img = plt.imread("./test_images/test2.jpg")
img = cal_undistort(img, objpoints, imgpoints)
result, combined = pipeline(img)
# Plot the result
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(24, 9))
f.tight_layout()
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=40)
ax2.imshow(result)
ax2.set_title('Pipeline Result', fontsize=40)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
ax3.imshow(combined,cmap="gray")
ax3.set_title('combined', fontsize=40)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
def warper(img):
bottomY = 720
topY = 455
offset = 200
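    # src: a hand-picked trapezoid around the lane in the original image;
    # dst: the corresponding rectangle in the bird's-eye view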
src = np.float32([
[585, topY],
[705, topY],
[1130, bottomY],
[190, bottomY]])
dst = np.float32([
[offset, 0],
[img.shape[1]-offset, 0],
[img.shape[1]-offset, img.shape[0]],
[offset, img.shape[0]]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0])) # keep same size as input image
return warped
img = plt.imread("./test_images/test2.jpg")
img = cal_undistort(img, objpoints, imgpoints)
test = warper(img)
result, combined = pipeline(test)
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(24, 9))
f.tight_layout()
ax1.imshow(test)
ax1.set_title('Warped', fontsize=40)
ax2.imshow(result)
ax2.set_title('Pipeline Result', fontsize=40)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
ax3.imshow(combined,cmap="gray")
ax3.set_title('combined', fontsize=40)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
def find_lane_pixels(binary_warped):
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
    midpoint = int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
    window_height = int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
# Identify the nonzero pixels in x and y within the window #
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on their mean position
if len(good_left_inds) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
# Find our lane pixels first
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
    # Fit a second order polynomial to each lane line using np.polyfit
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
print(left_fit)
print(right_fit)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Plots the left and right polynomials on the lane lines
plt.plot(left_fitx, ploty, color='yellow')
plt.plot(right_fitx, ploty, color='yellow')
return out_img
img = plt.imread("./test_images/test2.jpg")
out_img = fit_polynomial(combined)
plt.imshow(out_img)
def fit_polynomial_line(binary_warped):
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
###Fit a second order polynomial to each using `np.polyfit` ###
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
return left_fit, right_fit
def window_search(img):
img = cal_undistort(img, objpoints, imgpoints)
img = warper(img)
result, binary_warped = pipeline(img)
left_fit, right_fit = fit_polynomial_line(binary_warped)
return left_fit, right_fit
img = plt.imread("./test_images/test3.jpg")
window_search(img)
def fit_poly(img_shape, leftx, lefty, rightx, righty):
###Fit a second order polynomial to each with np.polyfit() ###
left_fit = np.polyfit(lefty,leftx,2)
right_fit = np.polyfit(righty,rightx,2)
# Generate x and y values for plotting
ploty = np.linspace(0, img_shape[0]-1, img_shape[0])
    # Evaluate both polynomials on ploty (handy for plotting); the fit coefficients are returned
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    return left_fit, right_fit, ploty
def search_around_poly(img,left_fit, right_fit):
# HYPERPARAMETER
# Choose the width of the margin around the previous polynomial to search
margin = 100
# warp the image
img = cal_undistort(img, objpoints, imgpoints)
img = warper(img)
result, binary_warped = pipeline(img)
# Grab activated pixels
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
    # Set the search area to the activated pixels that lie within +/- margin
    # of the previously fitted polynomials
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
    # Fit new polynomials to the pixels found within the margin (fit_poly returns the coefficients)
    left_fit_new, right_fit_new, ploty = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty)
    return left_fit_new, right_fit_new
left_fit, right_fit = window_search(img)
search_around_poly(img,left_fit, right_fit)
def measure_curvature_real(left_fit,right_fit,shape_img):
'''
Calculates the curvature of polynomial functions in meters.
'''
# Define conversions in x and y from pixels space to meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/700 # meters per pixel in x dimension
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = shape_img[0] - 1
    # Calculate R_curve (radius of curvature) at y_eval for each lane line
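    # For x = A*y^2 + B*y + C, the radius of curvature is
    #   R = (1 + (2*A*y + B)**2)**1.5 / |2*A|
    # Note: the coefficients below were fit in pixel space and only y is rescaled here,
    # so the result is an approximation; refitting with both x and y converted to meters
    # would give a more accurate radius.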
left_curverad = ((1 + (2*left_fit[0]*y_eval*ym_per_pix + left_fit[1])**2)**1.5) / np.absolute(2*left_fit[0])
right_curverad = ((1 + (2*right_fit[0]*y_eval*ym_per_pix + right_fit[1])**2)**1.5) / np.absolute(2*right_fit[0])
return left_curverad, right_curverad
img = plt.imread("./test_images/test1.jpg")
left_fit, right_fit = window_search(img)
measure_curvature_real(left_fit,right_fit,img.shape)
# Example output of the call above: (1238.7524942803361, 1723.66434908264)
def drawLine(img, left_fit, right_fit):
"""
Draw the lane lines on the image `img` using the poly `left_fit` and `right_fit`.
"""
yMax = img.shape[0]
ploty = np.linspace(0, yMax - 1, yMax)
color_warp = np.zeros_like(img).astype(np.uint8)
# Calculate points.
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
print(color_warp.shape)
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
bottomY = 720
topY = 455
offset = 200
src = np.float32([
[585, topY],
[705, topY],
[1130, bottomY],
[190, bottomY]])
dst = np.float32([
[offset, 0],
[img.shape[1]-offset, 0],
[img.shape[1]-offset, img.shape[0]],
[offset, img.shape[0]]])
M = cv2.getPerspectiveTransform(src, dst)
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, np.linalg.inv(M), (img.shape[1], img.shape[0]))
return cv2.addWeighted(img, 1, newwarp, 0.3, 0)
img = plt.imread("./test_images/test6.jpg")
left_fit, right_fit = window_search(img)
output = drawLine(img, left_fit, right_fit)
plt.imshow(output)
###Output
(720, 1280, 3)
###Markdown
Normal Tolerance Interval Example
###Code
import normtolint as nti
###Output
_____no_output_____
###Markdown
Example 4.8 from Meeker, William Q.; Hahn, Gerald J.; Escobar, Luis A. Statistical Intervals: A Guide for Practitioners and Researchers (Wiley Series in Probability and Statistics) (p. 54). Wiley. Kindle Edition.

Assume an electronic circuit is designed to produce an output voltage. For $n = 5$ units, the voltages show mean $\bar{x} = 50.10$ volts and standard deviation $s = 1.31$ volts. Suppose now that the manufacturer wanted a two-sided 95% confidence tolerance interval to contain a proportion 0.90 of the distribution of the broader population of shipping units under a normal distribution assumption.
###Code
n = 5
x_bar = 50.1
s = 1.31
coverage = 0.90
confidence = 0.95
###Output
_____no_output_____
###Markdown
Compute the appropriate tolerance factor.
###Code
k = nti.tolerance_factor(n, coverage, confidence)
###Output
_____no_output_____
###Markdown
Form the interval.
###Code
(x_bar - k * s, x_bar + k * s)
###Output
_____no_output_____
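###Markdown
As a rough cross-check (not the method used by `normtolint`), Howe's (1969) closed-form approximation to the two-sided normal tolerance factor can be computed with SciPy, which is assumed to be available; the helper name below is invented for this sketch. For $n = 5$, 90% coverage and 95% confidence it gives $k \approx 4.27$, which should be close to the factor returned above.
###Code
from scipy import stats
import numpy as np

def howe_tolerance_factor(n, coverage, confidence):
    # Howe's approximation: k ~ sqrt( (n-1)*(1 + 1/n) * z^2 / chi2_{1-confidence, n-1} )
    z = stats.norm.ppf((1 + coverage) / 2)          # central coverage quantile
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)    # lower chi-square quantile
    return np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2)

k_approx = howe_tolerance_factor(n, coverage, confidence)
(x_bar - k_approx * s, x_bar + k_approx * s)
###Output
_____no_output_____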
###Markdown
Imports
###Code
import copy
import requests
from validscrape.utils.data_munge import (clean_text, checkbox_boolean,
parse_datetime, parse_date)
from validscrape import target
from validscrape import extract
###Output
_____no_output_____
###Markdown
Date/datetime schema fragments adapted from pupa
###Code
#from pupa.scrape.schemas.common import fuzzy_date, fuzzy_datetime_blank
fuzzy_date = {
"type": "string",
"pattern": "(^[0-9]{4})?(-[0-9]{2}){0,2}$"
}
fuzzy_date_blank = {
"type": "string",
"pattern": "(^[0-9]{4})?(-[0-9]{2}){0,2}$",
"blank": True
}
fuzzy_datetime_blank = {
"type": "string",
"pattern": "(^[0-9]{4})?(-[0-9]{2}){0,2}( [0-9]{2}:[0-9]{2}:[0-9]{2})?$",
"blank": True
}
def pupa_date(parse_properties):
pd = copy.deepcopy(fuzzy_date)
pd.update(parse_properties)
return pd
def pupa_datetime_blank(parse_properties):
pd = copy.deepcopy(fuzzy_datetime_blank)
pd.update(parse_properties)
return pd
###Output
_____no_output_____
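###Markdown
As a quick illustrative check (the `path`/`parser` values below are made up for the example), `pupa_date` simply layers extraction hints on top of the base `fuzzy_date` schema:
###Code
example_field = pupa_date({'path': 'TerminationDate', 'parser': parse_date})
example_field
# expected shape:
# {'type': 'string', 'pattern': '(^[0-9]{4})?(-[0-9]{2}){0,2}$',
#  'path': 'TerminationDate', 'parser': <function parse_date>}
###Output
_____no_output_____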
###Markdown
Reference data
###Code
# scrapers_us_federal: unitedstates.ref.sopr_lobbying_reference
FILING_TYPES = [
{
"action": "registration",
"code": "1",
"name": "REGISTRATION"
},
{
"action": "registration_amendment",
"code": "2",
"name": "REGISTRATION AMENDMENT"
},
{
"action": "report",
"code": "3",
"name": "MID-YEAR REPORT"
},
{
"action": "report",
"code": "4",
"name": "MID-YEAR (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "5",
"name": "MID-YEAR AMENDMENT"
},
{
"action": "termination",
"code": "6",
"name": "MID-YEAR TERMINATION"
},
{
"action": "termination_letter",
"code": "7",
"name": "MID-YEAR TERMINATION LETTER"
},
{
"action": "termination_amendment",
"code": "8",
"name": "MID-YEAR TERMINATION AMENDMENT"
},
{
"action": "report",
"code": "9",
"name": "YEAR-END REPORT"
},
{
"action": "report",
"code": "10",
"name": "YEAR-END (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "11",
"name": "YEAR-END AMENDMENT"
},
{
"action": "termination",
"code": "12",
"name": "YEAR-END TERMINATION"
},
{
"action": "termination_letter",
"code": "13",
"name": "YEAR-END TERMINATION LETTER"
},
{
"action": "termination_amendment",
"code": "14",
"name": "YEAR-END TERMINATION AMENDMENT"
},
{
"action": "termination",
"code": "15",
"name": "YEAR-END TERMINATION (NO ACTIVITY)"
},
{
"action": "termination",
"code": "16",
"name": "MID-YEAR TERMINATION (NO ACTIVITY)"
},
{
"action": "misc_termination",
"code": "17",
"name": "MISC TERM"
},
{
"action": "misc_document",
"code": "18",
"name": "MISC. DOC"
},
{
"action": "termination_amendment",
"code": "19",
"name": "MID-YEAR TERMINATION AMENDMENT (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "20",
"name": "MID-YEAR AMENDMENT (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "21",
"name": "YEAR-END AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_amendment",
"code": "22",
"name": "YEAR-END TERMINATION AMENDMENT (NO ACTIVITY)"
},
{
"action": "misc_update",
"code": "29",
"name": "UPDATE PAGE IN A REPORT"
},
{
"action": "report",
"code": "51",
"name": "FIRST QUARTER REPORT"
},
{
"action": "report",
"code": "52",
"name": "FIRST QUARTER (NO ACTIVITY)"
},
{
"action": "termination",
"code": "53",
"name": "FIRST QUARTER TERMINATION"
},
{
"action": "termination",
"code": "54",
"name": "FIRST QUARTER TERMINATION (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "55",
"name": "FIRST QUARTER AMENDMENT"
},
{
"action": "report_amendment",
"code": "56",
"name": "FIRST QUARTER AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_amendment",
"code": "57",
"name": "FIRST QUARTER TERMINATION AMENDMENT"
},
{
"action": "termination_amendment",
"code": "58",
"name": "FIRST QUARTER TERMINATION AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_letter",
"code": "59",
"name": "FIRST QUARTER TERMINATION LETTER"
},
{
"action": "report",
"code": "60",
"name": "SECOND QUARTER REPORT"
},
{
"action": "report",
"code": "61",
"name": "SECOND QUARTER (NO ACTIVITY)"
},
{
"action": "termination",
"code": "62",
"name": "SECOND QUARTER TERMINATION"
},
{
"action": "termination",
"code": "63",
"name": "SECOND QUARTER TERMINATION (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "64",
"name": "SECOND QUARTER AMENDMENT"
},
{
"action": "report_amendment",
"code": "65",
"name": "SECOND QUARTER AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_amendment",
"code": "66",
"name": "SECOND QUARTER TERMINATION AMENDMENT"
},
{
"action": "termination_amendment",
"code": "67",
"name": "SECOND QUARTER TERMINATION AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_letter",
"code": "68",
"name": "SECOND QUARTER TERMINATION LETTER"
},
{
"action": "report",
"code": "69",
"name": "THIRD QUARTER REPORT"
},
{
"action": "report",
"code": "70",
"name": "THIRD QUARTER (NO ACTIVITY)"
},
{
"action": "termination",
"code": "71",
"name": "THIRD QUARTER TERMINATION"
},
{
"action": "termination",
"code": "72",
"name": "THIRD QUARTER TERMINATION (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "73",
"name": "THIRD QUARTER AMENDMENT"
},
{
"action": "report_amendment",
"code": "74",
"name": "THIRD QUARTER AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_amendment",
"code": "75",
"name": "THIRD QUARTER TERMINATION AMENDMENT"
},
{
"action": "termination_amendment",
"code": "76",
"name": "THIRD QUARTER TERMINATION AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_letter",
"code": "77",
"name": "THIRD QUARTER TERMINATION LETTER"
},
{
"action": "report",
"code": "78",
"name": "FOURTH QUARTER REPORT"
},
{
"action": "report",
"code": "79",
"name": "FOURTH QUARTER (NO ACTIVITY)"
},
{
"action": "termination",
"code": "80",
"name": "FOURTH QUARTER TERMINATION"
},
{
"action": "termination",
"code": "81",
"name": "FOURTH QUARTER TERMINATION (NO ACTIVITY)"
},
{
"action": "report_amendment",
"code": "82",
"name": "FOURTH QUARTER AMENDMENT"
},
{
"action": "report_amendment",
"code": "83",
"name": "FOURTH QUARTER AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_amendment",
"code": "84",
"name": "FOURTH QUARTER TERMINATION AMENDMENT"
},
{
"action": "termination_amendment",
"code": "85",
"name": "FOURTH QUARTER TERMINATION AMENDMENT (NO ACTIVITY)"
},
{
"action": "termination_letter",
"code": "86",
"name": "FOURTH QUARTER TERMINATION LETTER"
}
]
GENERAL_ISSUE_CODES = [
{
"issue_code": "ACC",
"description": "Accounting"
},
{
"issue_code": "CPI",
"description": "Computer Industry"
},
{
"issue_code": "AER",
"description": "Aerospace"
},
{
"issue_code": "REL",
"description": "Religion"
},
{
"issue_code": "MIA",
"description": "Media (Information/Publishing)"
},
{
"issue_code": "DOC",
"description": "District of Columbia"
},
{
"issue_code": "CAW",
"description": "Clean Air & Water (Quality)"
},
{
"issue_code": "CPT",
"description": "Copyright/Patent/Trademark"
},
{
"issue_code": "ANI",
"description": "Animals"
},
{
"issue_code": "TOB",
"description": "Tobacco"
},
{
"issue_code": "FUE",
"description": "Fuel/Gas/Oil"
},
{
"issue_code": "TOU",
"description": "Travel/Tourism"
},
{
"issue_code": "CIV",
"description": "Civil Rights/Civil Liberties"
},
{
"issue_code": "NAT",
"description": "Natural Resources"
},
{
"issue_code": "BAN",
"description": "Banking"
},
{
"issue_code": "BEV",
"description": "Beverage Industry"
},
{
"issue_code": "AGR",
"description": "Agriculture"
},
{
"issue_code": "DEF",
"description": "Defense"
},
{
"issue_code": "CON",
"description": "Constitution"
},
{
"issue_code": "MMM",
"description": "Medicare/Medicaid"
},
{
"issue_code": "GOV",
"description": "Government Issues"
},
{
"issue_code": "SCI",
"description": "Science/Technology"
},
{
"issue_code": "URB",
"description": "Urban Development/Municipalities"
},
{
"issue_code": "TAR",
"description": "Miscellaneous Tariff Bills"
},
{
"issue_code": "COM",
"description": "Communications/Broadcasting/Radio/TV"
},
{
"issue_code": "TAX",
"description": "Taxation/Internal Revenue Code"
},
{
"issue_code": "TEC",
"description": "Telecommunications"
},
{
"issue_code": "ROD",
"description": "Roads/Highway"
},
{
"issue_code": "POS",
"description": "Postal"
},
{
"issue_code": "RET",
"description": "Retirement"
},
{
"issue_code": "TOR",
"description": "Torts"
},
{
"issue_code": "GAM",
"description": "Gaming/Gambling/Casino"
},
{
"issue_code": "SMB",
"description": "Small Business"
},
{
"issue_code": "FAM",
"description": "Family Issues/Abortion/Adoption"
},
{
"issue_code": "WAS",
"description": "Waste (hazardous/solid/interstate/nuclear)"
},
{
"issue_code": "UTI",
"description": "Utilities"
},
{
"issue_code": "DIS",
"description": "Disaster Planning/Emergencies"
},
{
"issue_code": "WEL",
"description": "Welfare"
},
{
"issue_code": "RRR",
"description": "Railroads"
},
{
"issue_code": "BUD",
"description": "Budget/Appropriations"
},
{
"issue_code": "MON",
"description": "Minting/Money/Gold Standard"
},
{
"issue_code": "ADV",
"description": "Advertising"
},
{
"issue_code": "VET",
"description": "Veterans"
},
{
"issue_code": "HOM",
"description": "Homeland Security"
},
{
"issue_code": "TRU",
"description": "Trucking/Shipping"
},
{
"issue_code": "UNM",
"description": "Unemployment"
},
{
"issue_code": "FOR",
"description": "Foreign Relations"
},
{
"issue_code": "ENG",
"description": "Energy/Nuclear"
},
{
"issue_code": "FIR",
"description": "Firearms/Guns/Ammunition"
},
{
"issue_code": "EDU",
"description": "Education"
},
{
"issue_code": "IMM",
"description": "Immigration"
},
{
"issue_code": "CHM",
"description": "Chemicals/Chemical Industry"
},
{
"issue_code": "TRD",
"description": "Trade (Domestic & Foreign)"
},
{
"issue_code": "BNK",
"description": "Bankruptcy"
},
{
"issue_code": "HCR",
"description": "Health Issues"
},
{
"issue_code": "HOU",
"description": "Housing"
},
{
"issue_code": "AUT",
"description": "Automotive Industry"
},
{
"issue_code": "ENV",
"description": "Environmental/Superfund"
},
{
"issue_code": "RES",
"description": "Real Estate/Land Use/Conservation"
},
{
"issue_code": "FOO",
"description": "Food Industry (Safety, Labeling, etc.)"
},
{
"issue_code": "FIN",
"description": "Financial Institutions/Investments/Securities"
},
{
"issue_code": "CSP",
"description": "Consumer Issues/Safety/Protection"
},
{
"issue_code": "MED",
"description": "Medical/Disease Research/Clinical Labs"
},
{
"issue_code": "MAR",
"description": "Marine/Maritime/Boating/Fisheries"
},
{
"issue_code": "ART",
"description": "Arts/Entertainment"
},
{
"issue_code": "INT",
"description": "Intelligence and Surveillance"
},
{
"issue_code": "APP",
"description": "Apparel/Clothing Industry/Textiles"
},
{
"issue_code": "TRA",
"description": "Transportation"
},
{
"issue_code": "ALC",
"description": "Alcohol & Drug Abuse"
},
{
"issue_code": "INS",
"description": "Insurance"
},
{
"issue_code": "CDT",
"description": "Commodities (Big Ticket)"
},
{
"issue_code": "LBR",
"description": "Labor Issues/Antitrust/Workplace"
},
{
"issue_code": "AVI",
"description": "Aviation/Aircraft/Airlines"
},
{
"issue_code": "ECN",
"description": "Economics/Economic Development"
},
{
"issue_code": "IND",
"description": "Indian/Native American Affairs"
},
{
"issue_code": "SPO",
"description": "Sports/Athletics"
},
{
"issue_code": "LAW",
"description": "Law Enforcement/Crime/Criminal Justice"
},
{
"issue_code": "PHA",
"description": "Pharmacy"
},
{
"issue_code": "MAN",
"description": "Manufacturing"
}
]
###Output
_____no_output_____
###Markdown
Schemas
###Code
sopr_general_issue_codes = [i['issue_code'] for i in GENERAL_ISSUE_CODES]
###Output
_____no_output_____
###Markdown
LD1
###Code
ld1_schema = {
"title": "Lobbying Registration",
"description": "Lobbying Disclosure Act of 1995 (Section 4)",
"type": "object",
"properties": {
"_meta": {
"type": "object",
"properties": {
"document_id": {
"type": "string",
"format": "uuid_hex",
},
}
},
"affiliated_organizations_url": {
"type": ["null", "string"],
"format": "url_http",
"missing": True,
"blank": True,
'path': '/html/body/table[15]/tbody/td[2]/div',
'parser': clean_text
},
"signature": {
"type": "string",
"blank": False,
'path': '/html/body/table[20]/tbody/tr/td[2]/div',
'parser': clean_text
},
"datetimes": {
"type": "object",
"properties": {
"signature_date": pupa_datetime_blank({
'path': '/html/body/table[20]/tbody/tr/td[4]/div',
'parser': parse_datetime
}),
"effective_date": pupa_datetime_blank({
'path': '/html/body/table[2]/tbody/tr[1]/td[3]/div',
'parser': parse_datetime
})
}
},
"registration_type": {
"type": "object",
"properties": {
"new_registrant": {
"type": "boolean",
'path': '/html/body/div[1]/input[1]',
'parser': checkbox_boolean
},
"new_client_for_existing_registrant": {
"type": "boolean",
'path': '/html/body/div[1]/input[2]',
'parser': checkbox_boolean
},
"is_amendment": {
"type": "boolean",
'path': '/html/body/div[1]/input[3]',
'parser': checkbox_boolean
}
}
},
"registrant": {
"type": "object",
"properties": {
"organization_or_lobbying_firm": {
"type": "boolean",
'path': '/html/body/p[3]/input[1]',
'parser': checkbox_boolean
},
"self_employed_individual": {
"type": "boolean",
'path': '/html/body/p[3]/input[2]',
'parser': checkbox_boolean
},
"registrant_org_name": {
"type": ["null", "string"],
'path': '/html/body/table[3]/tbody/tr/td[contains(.,"Organization")]/following-sibling::td[1]/div',
'parser': clean_text,
'missing': True,
},
"registrant_individual_prefix": {
"type": ["null", "string"],
'path': '/html/body/table[3]/tbody/tr/td[contains(.,"Prefix")]/following-sibling::td[1]/div',
'parser': clean_text,
'missing': True,
},
"registrant_individual_firstname": {
"type": ["null", "string"],
'path': '/html/body/table[3]/tbody/tr/td[5]/div',
'parser': clean_text,
'missing': True,
},
"registrant_individual_lastname": {
"type": ["null", "string"],
'path': '/html/body/table[3]/tbody/tr/td[7]/div',
'parser': clean_text,
'missing': True,
},
"registrant_address_one": {
"type": "string",
'path': '/html/body/table[4]/tbody/tr/td[2]/div',
'parser': clean_text
},
"registrant_address_two": {
"type": "string",
"blank": True,
'path': '/html/body/table[4]/tbody/tr/td[4]/div',
'parser': clean_text
},
"registrant_city": {
"type": "string",
'path': '/html/body/table[5]/tbody/tr/td[2]/div',
'parser': clean_text
},
"registrant_state": {
"type": "string",
"blank": True,
'path': '/html/body/table[5]/tbody/tr/td[4]/div',
'parser': clean_text
},
"registrant_zip": {
"type": "string",
"blank": True,
'path': '/html/body/table[5]/tbody/tr/td[6]/div',
'parser': clean_text
},
"registrant_country": {
"type": "string",
'path': '/html/body/table[5]/tbody/tr/td[8]/div',
'parser': clean_text
},
"registrant_ppb_city": {
"type": "string",
"blank": True,
'path': '/html/body/table[6]/tbody/tr/td[2]/div',
'parser': clean_text
},
"registrant_ppb_state": {
"type": "string",
"blank": True,
'path': '/html/body/table[6]/tbody/tr/td[4]/div',
'parser': clean_text
},
"registrant_ppb_zip": {
"type": "string",
"blank": True,
'path': '/html/body/table[6]/tbody/tr/td[6]/div',
'parser': clean_text
},
"registrant_ppb_country": {
"type": "string",
"blank": True,
'path': '/html/body/table[6]/tbody/tr/td[8]/div',
'parser': clean_text
},
"registrant_international_phone": {
"type": "boolean",
'path': '/html/body/table[7]/tbody/tr/td[2]/input',
'parser': checkbox_boolean
},
"registrant_contact_name": {
"type": "string",
'path': '/html/body/table[8]/tbody/tr/td[2]/div',
'parser': clean_text
},
"registrant_contact_phone": {
"type": "string",
'path': '/html/body/table[8]/tbody/tr/td[4]/div',
'parser': clean_text
},
"registrant_contact_email": {
"type": "string",
"format": "email",
'path': '/html/body/table[8]/tbody/tr/td[6]/div',
'parser': clean_text
},
"registrant_general_description": {
"type": "string",
'path': '/html/body/div[2]',
'parser': clean_text
},
"registrant_house_id": {
"type": "string",
"blank": True,
'path': '/html/body/table[2]/tbody/tr[2]/td[2]/div',
'parser': clean_text
},
"registrant_senate_id": {
"type": "string",
'path': '/html/body/table[2]/tbody/tr[2]/td[5]/div',
'parser': clean_text
}
}
},
"client": {
"type": "object",
"properties": {
"client_self": {
"type": "boolean",
'path': '/html/body/p[4]/input',
'parser': checkbox_boolean
},
"client_name": {
"type": "string",
'path': '/html/body/table[9]/tbody/tr[1]/td[2]/div',
'parser': clean_text
},
"client_general_description": {
"type": "string",
"blank": True,
'path': '/html/body/div[3]',
'parser': clean_text
},
"client_address": {
"type": "string",
"blank": True,
'path': '/html/body/table[9]/tbody/tr[2]/td[2]/div',
'parser': clean_text
},
"client_city": {
"type": "string",
"blank": True,
'path': '/html/body/table[10]/tbody/tr/td[2]/div',
'parser': clean_text
},
"client_state": {
"type": "string",
"blank": True,
'path': '/html/body/table[10]/tbody/tr/td[4]/div',
'parser': clean_text
},
"client_zip": {
"type": "string",
"blank": True,
'path': '/html/body/table[10]/tbody/tr/td[6]/div',
'parser': clean_text
},
"client_country": {
"type": "string",
"blank": True,
'path': '/html/body/table[10]/tbody/tr/td[8]/div',
'parser': clean_text
},
"client_ppb_city": {
"type": "string",
"blank": True,
'path': '/html/body/table[11]/tbody/tr/td[2]/div',
'parser': clean_text
},
"client_ppb_state": {
"type": "string",
"blank": True,
'path': '/html/body/table[11]/tbody/tr/td[4]/div',
'parser': clean_text
},
"client_ppb_zip": {
"type": "string",
"blank": True,
'path': '/html/body/table[11]/tbody/tr/td[6]/div',
'parser': clean_text
},
"client_ppb_country": {
"type": "string",
"blank": True,
'path': '/html/body/table[11]/tbody/tr/td[8]/div',
'parser': clean_text
}
}
},
"lobbying_issues_detail": {
"type": "string",
"blank": True,
'path': '/html/body/p[10]',
'parser': clean_text
},
"lobbying_issues": {
"type": "array",
'even_odd': False,
'path': '/html/body/table[13]/tbody',
"items": {
"type": "object",
"path": "tr//td/div",
"properties": {
"general_issue_area": {
"type": ["string"],
"enum": sopr_general_issue_codes,
'path': '.',
'parser': clean_text,
'blank': True
}
}
}
},
"affiliated_organizations": {
"type": "array",
'even_odd': True,
'path': '/html/body/table[16]/tbody',
"items": {
"type": "object",
'path': 'tr[position() > 3]',
'missing': True,
"properties": {
"affiliated_organization_name": {
"type": "string",
"even_odd": "even",
'path': 'td[1]/div',
'parser': clean_text
},
"affiliated_organization_address": {
"type": "string",
"even_odd": "even",
'path': 'td[2]/div',
'parser': clean_text
},
"affiliated_organization_city": {
"type": "string",
"even_odd": "odd",
'path': 'td[2]/table/tbody/tr/td[1]/div',
'parser': clean_text
},
"affiliated_organization_state": {
"type": "string",
"blank": True,
"even_odd": "odd",
'path': 'td[2]/table/tbody/tr/td[2]/div',
'parser': clean_text
},
"affiliated_organization_zip": {
"type": "string",
"blank": True,
"even_odd": "odd",
'path': 'td[2]/table/tbody/tr/td[3]/div',
'parser': clean_text
},
"affiliated_organization_country": {
"type": "string",
"even_odd": "odd",
'path': 'td[2]/table/tbody/tr/td[4]/div',
'parser': clean_text
},
"affiliated_organization_ppb_state": {
"type": "string",
"blank": True,
"even_odd": "odd",
'path': 'td[3]/table/tbody/tr/td[2]/div',
'parser': clean_text
},
"affiliated_organization_ppb_city": {
"type": "string",
"blank": True,
"even_odd": "even",
'path': 'td[3]/table/tbody/tr/td[2]/div',
'parser': clean_text
},
"affiliated_organization_ppb_country": {
"type": "string",
"blank": True,
"even_odd": "odd",
'path': 'td[3]/table/tbody/tr/td[4]/div',
'parser': clean_text
}
}
}
},
'foreign_entities_no': {
'type': 'boolean',
'path': '/html/body/table[17]/tbody/tr/td[1]/input',
'parser': checkbox_boolean
},
'foreign_entities_yes': {
'type': 'boolean',
'path': '/html/body/table[17]/tbody/tr/td[3]/input',
'parser': checkbox_boolean
},
"foreign_entities": {
"type": "array",
'even_odd': True,
'path': '/html/body/table[19]/tbody',
'missing': True,
"items": {
"type": "object",
"path": "tr",
'missing': True,
"properties": {
"foreign_entity_name": {
"type": "string",
"even_odd": "odd",
'path': 'td[1]/div',
'parser': clean_text
},
"foreign_entity_address": {
"type": "string",
"even_odd": "even",
'path': 'td[2]/div',
'parser': clean_text
},
"foreign_entity_city": {
"type": "string",
"even_odd": "odd",
'path': 'td[2]/table/tbody/tr/td[1]/div',
'parser': clean_text
},
"foreign_entity_state": {
"type": "string",
"even_odd": "odd",
"blank": True,
'path': 'td[2]/table/tbody/tr/td[2]/div',
'parser': clean_text
},
"foreign_entity_country": {
"type": "string",
"even_odd": "odd",
'path': 'td[2]/table/tbody/tr/td[3]/div',
'parser': clean_text
},
"foreign_entity_ppb_city": {
"type": "string",
"even_odd": "even",
"blank": True,
'path': 'td[3]/table/tbody/tr/td[2]/div',
'parser': clean_text
},
"foreign_entity_ppb_state": {
"type": "string",
"even_odd": "odd",
"blank": True,
'path': 'td[3]/table/tbody/tr/td[2]/div',
'parser': clean_text
},
"foreign_entity_ppb_country": {
"type": "string",
"even_odd": "odd",
"blank": True,
'path': 'td[3]/table/tbody/tr/td[4]/div',
'parser': clean_text
},
"foreign_entity_amount": {
"type": "string",
"even_odd": "odd",
"blank": True,
'path': 'td[4]/div',
'parser': clean_text
},
"foreign_entity_ownership_percentage": {
"type": "string",
"even_odd": "odd",
"blank": True,
'path': 'td[5]/div',
'parser': clean_text
}
}
}
},
"lobbyists": {
"type": "array",
'path': '/html/body/table[12]/tbody',
"items": {
"type": "object",
"path": "tr[position() > 2]",
"properties": {
"lobbyist_suffix": {
"type": "string",
"blank": True,
'path': 'td[3]',
'parser': clean_text
},
"lobbyist_first_name": {
"type": "string",
'path': 'td[1]',
'parser': clean_text
},
"lobbyist_last_name": {
"type": "string",
'path': 'td[2]',
"blank": True,
'parser': clean_text
},
"lobbyist_covered_official_position": {
"type": "string",
"blank": True,
'path': 'td[4]',
'parser': clean_text
}
}
}
},
}
}
###Output
_____no_output_____
###Markdown
House Post-Employment
###Code
post_employment_schema = {
"title": "House Post-Employment Lobbying Restriction",
"description": "Lobbying restriction reported by the House Clerk's Office",
"type": "object",
"object_path": "/PostEmployment/Employee",
"properties": {
"_meta": {
"type": "object",
"properties": {
"document_id": {
"type": "string",
"format": "uuid_hex",
},
}
},
"employee_name": {
"type": "string",
'path': 'EmployeeName',
'parser': clean_text,
},
"office_name": {
"type": ["string"],
'path': 'OfficeName',
'parser': clean_text,
},
"termination_date": pupa_date({
'path': 'TerminationDate',
'parser': parse_date
}),
"lobbying_eligibility_date": pupa_date({
'path': 'LobbyingEligibilityDate',
'parser': parse_date
}),
}
}
###Output
_____no_output_____
###Markdown
Validscrape Setup Targets
###Code
class LobbyingRegistrationTarget(target.Target):
schema = ld1_schema
class PostEmploymentTarget(target.Target):
schema = post_employment_schema
###Output
_____no_output_____
###Markdown
Extractors
###Code
lobbying_registration_extractor = extract.HTMLSchemaExtractor(LobbyingRegistrationTarget)
postemployment_extractor = extract.XMLSchemaExtractor(PostEmploymentTarget)
###Output
_____no_output_____
###Markdown
Extracting Registration (HTML)
###Code
ld1_eg = 'http://soprweb.senate.gov/index.cfm?event=getFilingDetails&filingID=e031bb00-861b-4121-b3d6-e609e3afe62b&filingTypeID=1'
resp = requests.get(ld1_eg)
type(resp.content)
from io import BytesIO
r_targets = [t for t in lobbying_registration_extractor.do_extract(resp.content)]
r_targets
r_target = r_targets[0]
r_target.record
###Output
_____no_output_____
###Markdown
Post-Employment
###Code
with open('/home/blannon/og_data/post-employment/house/PostEmployment.xml') as fin:
pe_targets = [t for t in postemployment_extractor.do_extract(fin)]
pe_targets[:10]
pe_target = pe_targets[0]
pe_target.record
###Output
_____no_output_____
###Markdown
Deep Prior Distribution of Relaxation Times

In this tutorial we will reproduce Figure 2 in Liu, J., & Ciucci, F. (2020). The Deep-Prior Distribution of Relaxation Times. Journal of The Electrochemical Society, 167(2), 026506 https://iopscience.iop.org/article/10.1149/1945-7111/ab631a/meta

The DP-DRT method is our newly developed deep-learning-based approach to obtain the DRT from EIS data. The DP-DRT is trained on a single electrochemical impedance spectrum, and a single random input is given to the neural network underlying the DP-DRT.
###Code
import numpy as np
import os
import matplotlib.pyplot as plt
import random as rnd
import math
from math import sin, cos, pi
import torch
import torch.nn.functional as F
import compute_DRT
%matplotlib inline
# check the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
if device.type == 'cuda':
print(torch.cuda.get_device_name(0))
print('Memory Usage:')
print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**2,1), 'MB')
print('Cached: ', round(torch.cuda.memory_cached(0)/1024**2,1), 'MB')
# we will assume you have a cpu
#if you want to use a GPU, you will need to use cuda
###Output
Using device: cpu
###Markdown
1) Problem setup

1.1) Generate a single stochastic experiment

Note: the exact circuit is a ZARC. The impedance of a ZARC can be written as

$$Z^{\rm exact}(f) = R_\infty + \displaystyle \frac{1}{\displaystyle \frac{1}{R_{\rm ct}}+C \left(i 2\pi f\right)^\phi}$$

where $\displaystyle C = \frac{\tau_0^\phi}{R_{\rm ct}}$.

The analytical DRT is given by

$$\gamma(\log \tau) = \displaystyle \frac{\displaystyle R_{\rm ct}}{\displaystyle 2\pi} \displaystyle \frac{\displaystyle \sin\left((1-\phi)\pi\right)}{\displaystyle \cosh(\phi \log(\tau/\tau_0))-\cos(\pi(1-\phi))}$$
###Code
# set the seed for the random number generators
rng = rnd.seed(214975)
rng_np = np.random.seed(213912)
torch.manual_seed(213912)
# define frequency range, from 1E-4 to 1E4 with 10 ppd
N_freqs = 81
freq_vec = np.logspace(-4., 4., num=N_freqs, endpoint=True)
tau_vec = 1./freq_vec
# define parameters for ZARC model and calculate the impedance and gamma following the above equations
R_inf = 10
R_ct = 50
phi = 0.8
tau_0 = 1
C = tau_0**phi/R_ct
# exact Z and gamma
Z = R_inf + 1./(1./R_ct+C*(1j*2.*pi*freq_vec)**phi)
gamma_exact = (R_ct)/(2.*pi)*sin((1.-phi)*pi)/(np.cosh(phi*np.log(tau_vec/tau_0))-cos((1.-phi)*pi))
# adding noise to the impedance data
sigma_n_exp = 0.1
Z_exp = Z + sigma_n_exp*(np.random.normal(0,1,N_freqs) + 1j*np.random.normal(0,1,N_freqs))
###Output
_____no_output_____
###Markdown
1.2) Build $\mathbf A_{\rm re}$ and $\mathbf A_{\rm im}$ matrices
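
For reference, these matrices discretize the standard DRT relation (written schematically below; the exact quadrature is handled inside `compute_DRT`):

$$Z_{\rm DRT}(f) = R_\infty + \int_{-\infty}^{+\infty} \frac{\gamma(\log\tau)}{1 + i\,2\pi f \tau}\, d\log\tau$$

After discretization on the $\tau$ grid, the real and imaginary parts of the integral are approximated by $\mathbf A_{\rm re}\,\boldsymbol\gamma$ and $\mathbf A_{\rm im}\,\boldsymbol\gamma$, with $R_\infty$ added separately to the real part.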
###Code
# define the matrices that calculate the impedance from the DRT, i.e., Z_re = A_re * gamma, Z_im = A_im * gamma
A_re = compute_DRT.A_re(freq_vec)
A_im = compute_DRT.A_im(freq_vec)
###Output
_____no_output_____
###Markdown
1.3) Take vectors and matrices from numpy to torch
###Code
# transform impedance variables to tensors
Z_exp_re_torch = torch.from_numpy(np.real(Z_exp)).type(torch.FloatTensor).reshape(1,N_freqs)
Z_exp_im_torch = torch.from_numpy(np.imag(Z_exp)).type(torch.FloatTensor).reshape(1,N_freqs)
# transform gamma
gamma_exact_torch = torch.from_numpy(gamma_exact).type(torch.FloatTensor)
# transform these matrices into tensors
A_re_torch = torch.from_numpy(A_re.T).type(torch.FloatTensor)
A_im_torch = torch.from_numpy(A_im.T).type(torch.FloatTensor)
###Output
_____no_output_____
###Markdown
2) Setup DP-DRT model

2.1) Deep network
###Code
# size of the arbitrary zeta input
N_zeta = 1
# define the neural network
# N is batch size, D_in is input dimension, H is hidden dimension, D_out is output dimension.
N = 1
D_in = N_zeta
H = max(N_freqs,10*N_zeta)
# the output also includes the R_inf, so it has dimension N_freq+1
# note that
# 1) there is no inductance (in this specific example - the DP-DRT can include inductive features, see article)
# 2) R_inf is stored as the last item in the NN output
D_out = N_freqs+1
# Construct the neural network structure
class vanilla_model(torch.nn.Module):
def __init__(self):
super(vanilla_model, self).__init__()
self.fct_1 = torch.nn.Linear(D_in, H)
self.fct_2 = torch.nn.Linear(H, H)
self.fct_3 = torch.nn.Linear(H, H)
self.fct_4 = torch.nn.Linear(H, D_out)
# initialize the weight parameters
torch.nn.init.zeros_(self.fct_1.weight)
torch.nn.init.zeros_(self.fct_2.weight)
torch.nn.init.zeros_(self.fct_3.weight)
torch.nn.init.zeros_(self.fct_4.weight)
# forward
def forward(self, zeta):
h = F.elu(self.fct_1(zeta))
h = F.elu(self.fct_2(h))
h = F.elu(self.fct_3(h))
gamma_pred = F.softplus(self.fct_4(h), beta = 5)
return gamma_pred
###Output
_____no_output_____
###Markdown
2.2) Loss function
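
The loss implemented below is the sum of squared residuals over the real and imaginary parts of the impedance; the network output stores $\boldsymbol\gamma$ in its first $N_{\rm freqs}$ entries and $R_\infty$ in the last one:

$$L(\boldsymbol\gamma, R_\infty) = \sum_{n} \left( R_\infty + (\mathbf A_{\rm re}\boldsymbol\gamma)_n - Z^{\rm exp}_{{\rm re},n} \right)^2 + \sum_{n} \left( (\mathbf A_{\rm im}\boldsymbol\gamma)_n - Z^{\rm exp}_{{\rm im},n} \right)^2$$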
###Code
def loss_fn(output, Z_exp_re_torch, Z_exp_im_torch, A_re_torch, A_im_torch):
# we assume no inductance and the R_inf is stored as the last item in the NN output
MSE_re = torch.sum((output[:, -1] + torch.mm(output[:, 0:-1], A_re_torch) - Z_exp_re_torch)**2)
MSE_im = torch.sum((torch.mm(output[:, 0:-1], A_im_torch) - Z_exp_im_torch)**2)
MSE = MSE_re + MSE_im
return MSE
###Output
_____no_output_____
###Markdown
3) Train the model
###Code
model = vanilla_model()
# initialize following variables
zeta = torch.randn(N, N_zeta)
loss_vec = np.array([])
distance_vec = np.array([])
lambda_vec = np.array([])
# optimize the neural network
learning_rate = 1e-5
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# max iterations
max_iters = 100001
gamma_NN_store = torch.zeros((max_iters, N_freqs))
R_inf_NN_store = torch.zeros((max_iters, 1))
for t in range(max_iters):
# Forward pass: compute predicted y by passing x to the model.
gamma = model(zeta)
# Compute the loss
loss = loss_fn(gamma, Z_exp_re_torch, Z_exp_im_torch, A_re_torch, A_im_torch)
# save it
loss_vec = np.append(loss_vec, loss.item())
# store gamma
gamma_NN = gamma[:, 0:-1].detach().reshape(-1)
gamma_NN_store[t, :] = gamma_NN
# store R_inf
R_inf_NN_store[t,:] = gamma[:, -1].detach().reshape(-1)
# Compute the distance
distance = math.sqrt(torch.sum((gamma_NN-gamma_exact_torch)**2).item())
# save it
distance_vec = np.append(distance_vec, distance)
# and print it
if not t%100:
print('iter=', t, '; loss=', loss.item(), '; distance=', distance)
# zero all gradients (purge any cache)
optimizer.zero_grad()
# compute the gradient of the loss with respect to model parameters
loss.backward()
# Update the optimizer
optimizer.step()
###Output
iter= 0 ; loss= 108280.3203125 ; distance= 54.83923369937234
iter= 100 ; loss= 108098.078125 ; distance= 54.82739695655129
iter= 200 ; loss= 107681.28125 ; distance= 54.80032557926778
iter= 300 ; loss= 106687.40625 ; distance= 54.73580901989574
iter= 400 ; loss= 104597.9375 ; distance= 54.6002920736808
iter= 500 ; loss= 100640.1796875 ; distance= 54.34389151433558
iter= 600 ; loss= 93995.84375 ; distance= 53.914215896934365
iter= 700 ; loss= 84615.265625 ; distance= 53.31095213169148
iter= 800 ; loss= 73558.4453125 ; distance= 52.60951653901364
iter= 900 ; loss= 62076.08203125 ; distance= 51.898792077856925
iter= 1000 ; loss= 51006.81640625 ; distance= 51.23900892051143
iter= 1100 ; loss= 40869.55078125 ; distance= 50.666478474585396
iter= 1200 ; loss= 32011.671875 ; distance= 50.201366974353896
iter= 1300 ; loss= 24650.890625 ; distance= 49.84826831009905
iter= 1400 ; loss= 18868.6640625 ; distance= 49.5953843461012
iter= 1500 ; loss= 14603.1435546875 ; distance= 49.41785373301966
iter= 1600 ; loss= 11663.845703125 ; distance= 49.28418765023677
iter= 1700 ; loss= 9773.79296875 ; distance= 49.162703704605946
iter= 1800 ; loss= 8629.65234375 ; distance= 49.02754278129667
iter= 1900 ; loss= 7959.4306640625 ; distance= 48.86271274976068
iter= 2000 ; loss= 7557.595703125 ; distance= 48.66272987920915
iter= 2100 ; loss= 7291.193359375 ; distance= 48.429865829845895
iter= 2200 ; loss= 7085.5625 ; distance= 48.17025570989062
iter= 2300 ; loss= 6903.849609375 ; distance= 47.89060460847853
iter= 2400 ; loss= 6729.85693359375 ; distance= 47.59651481627883
iter= 2500 ; loss= 6557.25244140625 ; distance= 47.291950313484904
iter= 2600 ; loss= 6383.87158203125 ; distance= 46.9794331454933
iter= 2700 ; loss= 6209.16650390625 ; distance= 46.66036276246682
iter= 2800 ; loss= 6033.1484375 ; distance= 46.335383600845475
iter= 2900 ; loss= 5856.00830078125 ; distance= 46.00471008070125
iter= 3000 ; loss= 5678.0068359375 ; distance= 45.668352156300486
iter= 3100 ; loss= 5499.4228515625 ; distance= 45.32621559663348
iter= 3200 ; loss= 5320.53515625 ; distance= 44.97822281219824
iter= 3300 ; loss= 5141.64013671875 ; distance= 44.62427646253424
iter= 3400 ; loss= 4963.0234375 ; distance= 44.26430818946781
iter= 3500 ; loss= 4784.97607421875 ; distance= 43.8982977673964
iter= 3600 ; loss= 4607.7890625 ; distance= 43.526254031780205
iter= 3700 ; loss= 4431.7412109375 ; distance= 43.148210465861155
iter= 3800 ; loss= 4257.1142578125 ; distance= 42.76423631057338
iter= 3900 ; loss= 4084.171875 ; distance= 42.3744267337698
iter= 4000 ; loss= 3913.168212890625 ; distance= 41.978933228000656
iter= 4100 ; loss= 3744.3466796875 ; distance= 41.57791067682454
iter= 4200 ; loss= 3577.936279296875 ; distance= 41.17158888657702
iter= 4300 ; loss= 3414.15576171875 ; distance= 40.760225662939476
iter= 4400 ; loss= 3253.2099609375 ; distance= 40.34410098607974
iter= 4500 ; loss= 3095.293701171875 ; distance= 39.92355671604642
iter= 4600 ; loss= 2940.58203125 ; distance= 39.49891216856484
iter= 4700 ; loss= 2789.22900390625 ; distance= 39.07056323029926
iter= 4800 ; loss= 2641.3759765625 ; distance= 38.6389269365065
iter= 4900 ; loss= 2497.164794921875 ; distance= 38.204439841892885
iter= 5000 ; loss= 2356.71923828125 ; distance= 37.76756595394966
iter= 5100 ; loss= 2220.154052734375 ; distance= 37.328765953057236
iter= 5200 ; loss= 2087.567626953125 ; distance= 36.88853201125402
iter= 5300 ; loss= 1959.0533447265625 ; distance= 36.447369817334724
iter= 5400 ; loss= 1834.6973876953125 ; distance= 36.00583177590819
iter= 5500 ; loss= 1714.5970458984375 ; distance= 35.564473665574965
iter= 5600 ; loss= 1598.8634033203125 ; distance= 35.12389135817136
iter= 5700 ; loss= 1487.6126708984375 ; distance= 34.68467225242027
iter= 5800 ; loss= 1380.97119140625 ; distance= 34.247408803669
iter= 5900 ; loss= 1279.065185546875 ; distance= 33.81271119677572
iter= 6000 ; loss= 1182.0303955078125 ; distance= 33.381169695419224
iter= 6100 ; loss= 1090.0140380859375 ; distance= 32.95336207833163
iter= 6200 ; loss= 1003.1500244140625 ; distance= 32.52977527276933
iter= 6300 ; loss= 921.5451049804688 ; distance= 32.11081391240632
iter= 6400 ; loss= 845.2596435546875 ; distance= 31.696777911650084
iter= 6500 ; loss= 774.2965087890625 ; distance= 31.28784427247665
iter= 6600 ; loss= 708.5841064453125 ; distance= 30.884052632892107
iter= 6700 ; loss= 647.9737548828125 ; distance= 30.48535104632992
iter= 6800 ; loss= 592.24658203125 ; distance= 30.091589924423857
iter= 6900 ; loss= 541.1287841796875 ; distance= 29.70262053260463
iter= 7000 ; loss= 494.3125 ; distance= 29.31828696707217
iter= 7100 ; loss= 451.4748229980469 ; distance= 28.93849552823194
iter= 7200 ; loss= 412.29791259765625 ; distance= 28.56319341506489
iter= 7300 ; loss= 376.48583984375 ; distance= 28.192381307181755
iter= 7400 ; loss= 343.75537109375 ; distance= 27.82609832597583
iter= 7500 ; loss= 313.84271240234375 ; distance= 27.464444566239298
iter= 7600 ; loss= 286.51287841796875 ; distance= 27.107531007170085
iter= 7700 ; loss= 261.5546875 ; distance= 26.755483475282105
iter= 7800 ; loss= 238.76596069335938 ; distance= 26.408441107319266
iter= 7900 ; loss= 217.961181640625 ; distance= 26.06659444943674
iter= 8000 ; loss= 198.97840881347656 ; distance= 25.73009128736721
iter= 8100 ; loss= 181.67550659179688 ; distance= 25.39903226195341
iter= 8200 ; loss= 165.91220092773438 ; distance= 25.07352347776123
iter= 8300 ; loss= 151.55076599121094 ; distance= 24.75368526264385
iter= 8400 ; loss= 138.46751403808594 ; distance= 24.43966907133577
iter= 8500 ; loss= 126.55825805664062 ; distance= 24.131566868548962
iter= 8600 ; loss= 115.73117065429688 ; distance= 23.829374967214836
iter= 8700 ; loss= 105.89653015136719 ; distance= 23.533046132065145
iter= 8800 ; loss= 96.96206665039062 ; distance= 23.2426260462828
iter= 8900 ; loss= 88.8411865234375 ; distance= 22.958203360944864
iter= 9000 ; loss= 81.4599380493164 ; distance= 22.679874536345544
iter= 9100 ; loss= 74.75885009765625 ; distance= 22.40765759503845
iter= 9200 ; loss= 68.68638610839844 ; distance= 22.141451306696155
iter= 9300 ; loss= 63.193214416503906 ; distance= 21.881082441205418
iter= 9400 ; loss= 58.228759765625 ; distance= 21.62642387156638
iter= 9500 ; loss= 53.742218017578125 ; distance= 21.377417712088366
iter= 9600 ; loss= 49.687721252441406 ; distance= 21.134078924622845
iter= 9700 ; loss= 46.025230407714844 ; distance= 20.896442661801764
iter= 9800 ; loss= 42.72130584716797 ; distance= 20.664485611216453
iter= 9900 ; loss= 39.747135162353516 ; distance= 20.438112955496663
iter= 10000 ; loss= 37.07573699951172 ; distance= 20.217150759057347
iter= 10100 ; loss= 34.681968688964844 ; distance= 20.00138087273992
iter= 10200 ; loss= 32.54048156738281 ; distance= 19.790559864885708
iter= 10300 ; loss= 30.626609802246094 ; distance= 19.584475912756606
iter= 10400 ; loss= 28.916820526123047 ; distance= 19.382937669638164
iter= 10500 ; loss= 27.389169692993164 ; distance= 19.185775033160553
iter= 10600 ; loss= 26.023544311523438 ; distance= 18.99281014379397
iter= 10700 ; loss= 24.800979614257812 ; distance= 18.803836285003204
iter= 10800 ; loss= 23.703882217407227 ; distance= 18.61860454767581
iter= 10900 ; loss= 22.7152156829834 ; distance= 18.436804806808986
iter= 11000 ; loss= 21.817922592163086 ; distance= 18.258060713171577
iter= 11100 ; loss= 20.994699478149414 ; distance= 18.081916062092603
iter= 11200 ; loss= 20.226619720458984 ; distance= 17.907831518291097
iter= 11300 ; loss= 19.492816925048828 ; distance= 17.735173440176023
iter= 11400 ; loss= 18.768024444580078 ; distance= 17.56315769036538
iter= 11500 ; loss= 18.01973533630371 ; distance= 17.39085575852068
iter= 11600 ; loss= 17.202545166015625 ; distance= 17.217092776987823
iter= 11700 ; loss= 16.243179321289062 ; distance= 17.0404191559294
iter= 11800 ; loss= 14.982314109802246 ; distance= 16.859261866800676
iter= 11900 ; loss= 13.254838943481445 ; distance= 16.676728733477677
iter= 12000 ; loss= 11.806711196899414 ; distance= 16.49618671164311
iter= 12100 ; loss= 10.755365371704102 ; distance= 16.30844919471863
iter= 12200 ; loss= 9.852766036987305 ; distance= 16.119337403568366
iter= 12300 ; loss= 9.043342590332031 ; distance= 15.933447707868345
###Markdown
4) Analyze results

4.1) Find early stopping value
###Code
index_opt = np.argmin(distance_vec)
index_early_stop = np.flatnonzero(np.abs(np.diff(loss_vec))<1E-8)
gamma_DIP_torch_opt = gamma_NN_store[index_opt, :]
R_inf_DIP_torch_opt = R_inf_NN_store[index_opt, :]
gamma_DIP_opt = gamma_DIP_torch_opt.detach().numpy()
R_DIP_opt = R_inf_DIP_torch_opt.detach().numpy()
if len(index_early_stop):
gamma_DIP_torch_early_stop = gamma_NN_store[index_early_stop[0], :]
gamma_DIP = gamma_DIP_torch_early_stop.detach().numpy()
R_DIP = R_inf_NN_store[index_early_stop[0], :]
R_DIP = R_DIP.detach().numpy()
else:
gamma_DIP = gamma_DIP_opt
R_DIP = R_DIP_opt
###Output
_____no_output_____
###Markdown
4.2) Plot the loss
###Code
plt.semilogy(loss_vec, linewidth=4, color="black")
plt.semilogy(np.array([index_early_stop[0], index_early_stop[0]]), np.array([1E-3, 1E7]),
':', linewidth=3, color="red")
plt.semilogy(np.array([index_opt, index_opt]), np.array([1E-3, 1E7]),
':', linewidth=3, color="blue")
plt.text(30000, 1E2, r'early stop',
{'color': 'red', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="red", pad=0.2)})
plt.text(0.93E5, 1E2, r'optimal',
{'color': 'blue', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="blue", pad=0.2)})
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel(r'iter', fontsize=20)
plt.ylabel(r'loss', fontsize=20)
plt.axis([0,1.01E5,0.9E-2,1.1E6])
fig = plt.gcf()
fig.set_size_inches(5, 4)
plt.show()
###Output
_____no_output_____
###Markdown
4.3) Plot the error curve vs. iteration

The error is defined as the distance between the predicted DRT and the exact DRT, i.e., $\mathrm{error} = \lVert \boldsymbol\gamma_{\rm exact} - \boldsymbol\gamma_{\rm DP\text{-}DRT} \rVert$
###Code
plt.semilogy(distance_vec, linewidth=4, color="black")
plt.semilogy(np.array([index_early_stop[0], index_early_stop[0]]), np.array([1E-3, 1E7]),
':', linewidth=4, color="red")
plt.semilogy(np.array([index_opt, index_opt]), np.array([1E-3, 1E7]),
':', linewidth=4, color="blue")
plt.text(30000, 2E1, r'early stop',
{'color': 'red', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="red", pad=0.2)})
plt.text(0.93E5, 2E1, r'optimal',
{'color': 'blue', 'fontsize': 20, 'ha': 'center', 'va': 'center',
'rotation': 90,
'bbox': dict(boxstyle="round", fc="white", ec="blue", pad=0.2)})
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.xlabel(r'iter', fontsize=20)
plt.ylabel(r'error', fontsize=20)
plt.axis([0,1.01E5,0.9E0,1.1E2])
fig=plt.gcf()
fig.set_size_inches(5, 4)
plt.show()
###Output
_____no_output_____
###Markdown
4.4) Plot the impedance

We compare the DP-DRT EIS spectrum against the one from the stochastic experiment.
###Code
Z_DIP = R_DIP + np.matmul(A_re, gamma_DIP) + 1j*np.matmul(A_im, gamma_DIP)
plt.plot(np.real(Z_exp), -np.imag(Z_exp), "o", markersize=10, color="black", label="synth exp")
plt.plot(np.real(Z_DIP), -np.imag(Z_DIP), linewidth=4, color="red", label="DP-DRT")
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=20)
plt.annotate(r'$10^{-2}$', xy=(np.real(Z_exp[20]), -np.imag(Z_exp[20])),
xytext=(np.real(Z_exp[20])-2, 10-np.imag(Z_exp[20])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$10^{-1}$', xy=(np.real(Z_exp[30]), -np.imag(Z_exp[30])),
xytext=(np.real(Z_exp[30])-2, 6-np.imag(Z_exp[30])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$1$', xy=(np.real(Z_exp[40]), -np.imag(Z_exp[40])),
xytext=(np.real(Z_exp[40]), 10-np.imag(Z_exp[40])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.annotate(r'$10$', xy=(np.real(Z_exp[50]), -np.imag(Z_exp[50])),
xytext=(np.real(Z_exp[50])-1, 10-np.imag(Z_exp[50])),
arrowprops=dict(arrowstyle="-",connectionstyle="arc"))
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.legend(frameon=False, fontsize = 15)
plt.xlim(10, 65)
plt.ylim(0, 55)
plt.xticks(range(0, 70, 10))
plt.yticks(range(0, 60, 10))
plt.gca().set_aspect('equal', adjustable='box')
plt.xlabel(r'$Z_{\rm re}/\Omega$', fontsize = 20)
plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize = 20)
fig = plt.gcf()
size = fig.get_size_inches()
plt.show()
###Output
_____no_output_____
###Markdown
4.5) Plot the DRT

We compare the $\gamma$ from the DP-DRT model against the exact one.
###Code
plt.semilogx(tau_vec, gamma_exact, linewidth=4, color="black", label="exact")
plt.semilogx(tau_vec, gamma_DIP, linewidth=4, color="red", label="early stop")
plt.semilogx(tau_vec, gamma_DIP_opt, linestyle='None', marker='o', color="blue", label="optimal")
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=15)
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
plt.axis([1E-4,1E4,-0.4,25])
plt.legend(frameon=False, fontsize = 15)
plt.xlabel(r'$\tau/{\rm s}$', fontsize = 20)
plt.ylabel(r'$\gamma/\Omega$', fontsize = 20)
fig = plt.gcf()
fig.set_size_inches(5, 4)
plt.show()
###Output
_____no_output_____
###Markdown
4.6) Ancillary data
###Code
print('total number parameters = ', compute_DRT.count_parameters(model))
print('distance_early_stop = ', distance_vec[index_early_stop[0]])
print('distance_opt= ', distance_vec[index_opt])
###Output
total number parameters = 20170
distance_early_stop = 6.249378631221442
distance_opt= 3.9961969655001686
###Markdown
Mean Shift To apply clustering to data, a cluster object has to be created, which in this case is a MeanShift instance. By invoking the object's fit method with the data (a 2D NumPy array) as parameter, the returned value will be the cluster indexes of the data points, in the same order as they were provided in the input parameter.
###Code
ms = MeanShift(kernel='gaussian', bandwidth=1)
labels = ms.fit(data)
plot(ms.history, data, labels, ms.centroids)
###Output
_____no_output_____
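###Markdown
For intuition, a single mean-shift iteration moves a point to the Gaussian-kernel-weighted mean of the data around it. The sketch below is an illustrative NumPy version of that update step (the `mean_shift_step` helper is hypothetical), independent of the MeanShift class used above.
###Code
import numpy as np

def mean_shift_step(x, data, bandwidth=1.0):
    """One mean-shift update: move x to the Gaussian-kernel-weighted mean of data."""
    # squared distances from x to every data point
    sq_dist = np.sum((data - x) ** 2, axis=1)
    # Gaussian kernel weights
    w = np.exp(-sq_dist / (2 * bandwidth ** 2))
    # weighted mean of the data points
    return (w[:, None] * data).sum(axis=0) / w.sum()
###Output
_____no_output_____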
###Markdown
To assign a cluster to new data point(s), the cluster object's predict method can be used. It will calculate the nearest centroid for each entry and return the labels, analogously to the fit method.
###Code
x = np.array([[-10, -10], [-3, -3], [2, 2]])
ms.predict(x)
###Output
_____no_output_____
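###Markdown
The prediction step described above amounts to a nearest-centroid lookup, which can be written in a few lines of NumPy. This is only an illustrative sketch (the `nearest_centroid_labels` helper is hypothetical, and `ms.centroids` is assumed to be an array-like of shape (K, d)), not the library's implementation.
###Code
import numpy as np

def nearest_centroid_labels(points, centroids):
    """Return, for each point, the index of the closest centroid."""
    # pairwise Euclidean distances, shape (N, K)
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# same query points as above
nearest_centroid_labels(x, np.asarray(ms.centroids))
###Output
_____no_output_____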
###Markdown
K-Means
###Code
ms = KMeans(n_clusters=3)
labels = ms.fit(data)
plot(ms.history, data, labels, ms.centroids)
###Output
_____no_output_____
###Markdown
Using H5Web in the notebook Display a simple HDF5 file
###Code
import numpy as np
import h5py
with h5py.File("simple.h5", "w") as h5file:
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
Xg, Yg = np.meshgrid(X, Y)
h5file['threeD'] = [np.sin(2*np.pi*f*np.sqrt(Xg**2 + Yg**2)) for f in np.arange(0.1, 1.1, 0.1)]
h5file['twoD'] = np.sin(np.sqrt(Xg**2 + Yg**2))
h5file['oneD'] = X
h5file['scalar'] = 42
from jupyterlab_h5web import H5Web
H5Web('simple.h5')
###Output
_____no_output_____
###Markdown
Display a NeXus file
###Code
import numpy as np
import h5py
with h5py.File("nexus.nx", "w") as h5file:
root_group = h5file
root_group.attrs["NX_class"] = "NXroot"
root_group.attrs["default"] = "entry"
entry = root_group.create_group("entry")
entry.attrs["NX_class"] = "NXentry"
entry.attrs["default"] = "process/spectrum"
process = entry.create_group("process")
process.attrs["NX_class"] = "NXprocess"
process.attrs["default"] = "spectrum"
spectrum = process.create_group("spectrum")
spectrum.attrs["NX_class"] = "NXdata"
spectrum.attrs["signal"] = "data"
spectrum.attrs["auxiliary_signals"] = ["aux1", "aux2"]
data = np.array([np.linspace(-x, x, 10) for x in range(1, 6)])
spectrum["data"] = data ** 2
spectrum["aux1"] = -(data ** 2)
spectrum["aux2"] = -data
spectrum["data"].attrs["interpretation"] = "spectrum"
image = process.create_group("image")
image.attrs["NX_class"] = "NXdata"
image.attrs["signal"] = "data"
x = np.linspace(-5, 5, 50)
x0 = np.linspace(10, 100, 10)
image["data"] = [a*x**2 for a in x0]
image["X"] = np.linspace(-2, 2, 50, endpoint=False)
image["X"].attrs["units"] = u"µm"
image["Y"] = np.linspace(0, 0.1, 10, endpoint=False)
image["Y"].attrs["units"] = "s"
image.attrs["axes"] = ["X"]
image.attrs["axes"] = ["Y", "X"]
from jupyterlab_h5web import H5Web
H5Web('nexus.nx')
###Output
_____no_output_____
###Markdown
Goal: The primary goal of this example script is to showcase the tools available in the bmpmod package using mock data. The mock data is produced by randomly sampling the density and temperature profile models published in Vikhlinin+06 for a sample of clusters (Vikhlinin, A., et al. 2006, ApJ, 640, 691). A secondary goal of this example is thus to also explore how the backwards mass modeling process used in the bmpmod package compares to the forward fitting results of Vikhlinin+. The mock profiles generated here allow for a flexible choice of noise and radial sampling rate, which enables an exploration of how these quantities affect the output of the backwards-fitting process. There is also some flexibility built into the bmpmod package that can additionally be tested, such as allowing the stellar mass of the central galaxy to be included (or not included) in the model of total gravitating mass. If the stellar mass profile of the BCG is toggled on, the values for the BCG effective radius Re are pulled from the 2MASS catalog values for a de Vaucouleurs fit to K-band data. After generating the mock temperature and density profiles, the script walks the user through performing the backwards-fitting mass modelling analysis, which can be summarized as fitting the below $T_{\mathrm{model}}$ expression to the observed temperature profile by constraining the parameters in the total gravitating mass model $M_{\mathrm{tot}}$.
$kT_{\mathrm{model}}(R) = \frac{kT(R_{\mathrm{ref}}) \ n_{e}(R_{\mathrm{ref}})}{n_{e}(R)} -\frac{\mu m_{p} G}{n_{e}(R)}\int_{R_{\mathrm{ref}}}^R \frac{n_{e}(r) M_{\mathrm{grav}}(r)}{r^2} dr$
The output of the bmpmod analysis includes a parametric model fit to the gas density profile, a non-parametric model fit to the temperature profile, the total mass profile and its associated parameters describing the profile (e.g., the NFW c, Rs), and the contributions of different mass components (i.e., DM, gas, stars) to the total mass profile.
This tutorial will go over:
1. Generating mock gas density and temperature data
2. Fitting the gas density profile with a parametric model
3. Maximum likelihood mass profile parameter estimation
4. MCMC mass profile parameter estimation
5. Plotting and summarizing the results
A note on usage: Any of the clusters in Vikhlinin+06 can be used to generate randomly sampled temperature and density profiles. The full list of clusters is as follows:
Vikhlinin+ clusters: [A133, A262, A383, A478, A907, A1413, A1795, A1991, A2029, A2390, RXJ1159+5531, MKW4, USGCS152]
After selecting one of these clusters, this example script will automatically generate the cluster and profile data in the proper format to be used by the bmpmod modules. If you have your own data you would like to analyze with the bmpmod package, please see the included template.py file.
###Code
#select any cluster ID from the Vikhlinin+ paper
clusterID='A1991'
###Output
_____no_output_____
###Markdown
1. Generate mock gas density and temperature profiles To generate the mock profiles, the density and temperature models defined in Tables 2 and 3 of Vikhlinin+06 are sampled. The sampling of the models occurs in equally log-spaced radial bins with the number of bins set by N_ne and N_temp in gen_mock_data(). At each radial point, the density and temperature values are randomly sampled from a Gaussian distribution centered on the model value and with standard deviation equal to noise_ne and noise_temp multiplied by the model value for density or temperature.
Args for gen_mock_data():
- N_ne: the number of gas density profile data points
- N_temp: the number of temperature profile data points
- noise_ne: the percent noise on the density values
- noise_temp: the percent noise on the temperature values
- refindex: index into profile where Tmodel = Tspec
- incl_mstar: include stellar mass of the central galaxy in the model for total gravitating mass
- incl_mgas: include gas mass of ICM in the model for total gravitating mass
###Code
clustermeta, ne_data, tspec_data, nemodel_vikhlinin, tmodel_vikhlinin \
= gen_mock_data(clusterID=clusterID,
N_ne=30,
N_temp=10,
noise_ne=0.10,
noise_temp=0.03,
refindex=-1,
incl_mstar=0,
incl_mgas=1)
###Output
_____no_output_____
###Markdown
Now let's take a look at the returns... while these are generated automatically here, if you use your own data, things should be in a similar form.
###Code
# clustermeta:
# dictionary that stores relevant properties of cluster
# (i.e., name, redshift, bcg_re: the effective radius of the central galaxy in kpc,
# bcg_sersc_n: the sersic index of the central galaxy)
# as well as selections for analysis
# (i.e., incl_mstar, incl_mgas, refindex as input previously)
clustermeta
#ne_data: dictionary that stores the mock "observed" gas density profile
ne_data[:3]
#tspec_data: dictionary that stores the mock "observed" temperature profile
tspec_data[:3]
###Output
_____no_output_____
###Markdown
Let's take a look at how our mock profiles compare to the model we're sampling from ...
###Code
fig1 = plt.figure(1, (12, 4))
ax = fig1.add_subplot(1, 2, 1)
'''
mock gas density profile
'''
# plot Vikhlinin+06 density model
xplot = np.logspace(np.log10(min(ne_data['radius'])), np.log10(max(ne_data['radius'])), 1000)
plt.loglog(xplot, vikhlinin_neprof(nemodel_vikhlinin, xplot), 'k')
plt.xlim(xmin=min(ne_data['radius']))
# plot sampled density data
plt.errorbar(ne_data['radius'], ne_data['ne'],
xerr=[ne_data['radius_lowerbound'], ne_data['radius_upperbound']],
yerr=ne_data['ne_err'], marker='o', markersize=2, linestyle='none', color='b')
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
plt.xlabel('r [kpc]')
plt.ylabel('$n_{e}$ [cm$^{-3}$]')
'''
mock temperature profile
'''
ax = fig1.add_subplot(1, 2, 2)
# plot Vikhlinin+06 temperature model
xplot = np.logspace(np.log10(min(tspec_data['radius'])), np.log10(max(tspec_data['radius'])), 1000)
plt.semilogx(xplot, vikhlinin_tprof(tmodel_vikhlinin, xplot), 'k-')
# plot sampled temperature data
plt.errorbar(tspec_data['radius'], tspec_data['tspec'],
xerr=[tspec_data['radius_lowerbound'], tspec_data['radius_upperbound']],
yerr=[tspec_data['tspec_lowerbound'], tspec_data['tspec_upperbound']],
marker='o', linestyle='none', color='b')
plt.xlabel('r [kpc]')
plt.ylabel('kT [keV]')
###Output
_____no_output_____
###Markdown
2. Fitting the gas density profile with a parametric model To determine the best-fitting gas density model, bmpmod has the option of fitting the four following $n_{e}$ models through the Levenberg-Marquardt optimization method.
"single\_beta": $n_{e} = n_{e,0} \ (1+(r/r_{c})^{2})^{-\frac{3}{2}\beta}$
"cusped\_beta": $n_{e} = n_{e,0} \ (r/r_{c})^{-\alpha} \ (1+(r/r_{c})^{2})^{-\frac{3}{2}\beta+\frac{1}{2}\alpha}$
"double\_beta\_tied": $n_{e} = n_{e,1}(n_{e,0,1}, r_{c,1}, \beta)+n_{e,2}(n_{e,0,2}, r_{c,2}, \beta)$
"double\_beta": $n_{e} = n_{e,1}(n_{e,0,1}, r_{c,1}, \beta_1)+n_{e,2}(n_{e,0,2}, r_{c,2}, \beta_2)$
All four models can be fit and compared using the find_nemodeltype() function. A selected model must then be chosen for the following mass profile analysis with the fitne() function.
###Code
#suppress verbose log info from sherpa
logger = logging.getLogger("sherpa")
logger.setLevel(logging.ERROR)
#fit all four ne models and return the model with the lowest reduced chi-squared as nemodeltype
nemodeltype, fig=find_nemodeltype(ne_data=ne_data,
tspec_data=tspec_data,
optplt=1)
print 'model with lowest reduced chi-squared:', nemodeltype
###Output
bmpmod/mod_gasdensity.py:71: RuntimeWarning: overflow encountered in power
* ((1.+((x/rc)**2.))**((-3.*beta/2.)+(alpha/2.))) # [cm^-3]
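###Markdown
As a point of reference, the simplest of the four models above, "single_beta", is easy to evaluate directly. The sketch below is a stand-alone NumPy version with illustrative parameter values; it is not part of bmpmod.
###Code
import numpy as np

def single_beta(r, ne0, rc, beta):
    """Single beta-model gas density profile [cm^-3]."""
    return ne0 * (1.0 + (r / rc) ** 2) ** (-1.5 * beta)

# evaluate at a few radii [kpc] for illustrative parameter values
r = np.array([10., 50., 100., 500.])
single_beta(r, ne0=0.01, rc=50., beta=0.6)
###Output
_____no_output_____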
###Markdown
*Note*: while the function find_nemodeltype() returns the model type producing the lowest reduced chi-squared fit, it may be better to choose a simpler model with fewer free parameters if the reduced chi-squared values are similar.
###Code
# Turn on logging for sherpa to see details of fit
import logging
logger = logging.getLogger("sherpa")
logger.setLevel(logging.INFO)
# Find the parameters and errors of the selected gas density model
nemodel=fitne(ne_data=ne_data,tspec_data=tspec_data,nemodeltype=str(nemodeltype)) #[cm^-3]
#nemodel stores all the useful information from the fit to the gas density profile
print nemodel.keys()
###Output
['parmins', 'nefit', 'dof', 'parmaxes', 'rchisq', 'chisq', 'parvals', 'parnames', 'type']
###Markdown
3. Maximum likelihood estimation of mass profile free-parameters The maximum likelihood method can be used to perform an initial estimation of the free-parameters in the cluster mass profile model. The free parameters in the mass model, which will be returned in this estimation, are:
- the mass concentration $c$ of the NFW profile used to model the DM halo,
- the scale radius $R_s$ of the NFW profile
- optionally, the log of the normalization of the Sersic model $\rho_{\star,0}$ used to model the stellar mass profile of the central galaxy
The maximum likelihood estimation is performed using a Gaussian log-likelihood function of the form:
$\ln(p) = -\frac{1}{2} \sum_{n} \left[\frac{(T_{\mathrm{spec},n} - T_{\mathrm{model},n})^{2}}{\sigma_{T_{\mathrm{spec},n}}^{2}} + \ln (2 \pi \sigma_{T_{\mathrm{spec},n}}^{2}) \right]$
###Code
ml_results = fit_ml(ne_data, tspec_data, nemodel, clustermeta)
###Output
MLE results
MLE: c= 3.9645873144
MLE: rs= 190.964014574
###Markdown
bmpmod uses these maximum likelihood results to initialize the walkers in the MCMC chain... 4. MCMC estimation of mass profile model parameters Here the emcee Python package is used to estimate the free parameters of the mass model through the MCMC algorithm. bmpmod utilizes the ensemble sampler from emcee and initializes the walkers in a narrow Gaussian distribution about the parameter values returned from the maximum likelihood analysis.
Returns of fit_mcmc():
- samples - the marginalized posterior distribution
- sampler - the sampler class output by emcee
###Code
#fit for the mass model and temperature profile model through MCMC
samples, sampler = fit_mcmc(ne_data=ne_data,
tspec_data=tspec_data,
nemodel=nemodel,
ml_results=ml_results,
clustermeta=clustermeta,
Ncores=3,
Nwalkers=100,
Nsteps=150,
Nburnin=50)
###Output
MCMC progress: 10.0%
MCMC progress: 20.0%
MCMC progress: 30.0%
MCMC progress: 40.0%
MCMC progress: 50.0%
MCMC progress: 60.0%
MCMC progress: 70.0%
MCMC progress: 80.0%
MCMC progress: 90.0%
MCMC progress: 100.0%
autocorrelation time: [ 3.27557418 1.03391502]
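###Markdown
Initializing the walkers in a narrow Gaussian ball around the maximum likelihood values is a common emcee pattern. Below is a minimal sketch of the idea (the names `ml_params` and the values are hypothetical), not the exact bmpmod code.
###Code
import numpy as np

# hypothetical MLE values, e.g. [c, rs] from the step above
ml_params = np.array([3.96, 191.0])
ndim, nwalkers = len(ml_params), 100

# each walker starts at the MLE solution plus a tiny Gaussian perturbation;
# an array like p0 is what would be handed to emcee's EnsembleSampler.run_mcmc()
p0 = ml_params * (1.0 + 1e-4 * np.random.randn(nwalkers, ndim))
###Output
_____no_output_____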
###Markdown
**Analysis of the marginalized MCMC distribution** We also want to calculate the radius of the cluster $R_{500}$ and the mass (total, DM, gas, stars) within this radius. The auxiliary calculations are taken care of in samples_aux() for each step of the MCMC chain.
###Code
# calculate R500 and M(R500) for each step of MCMC chain
samples_aux = calc_posterior_mcmc(samples=samples,
nemodel=nemodel,
clustermeta=clustermeta,
Ncores=3)
###Output
_____no_output_____
###Markdown
From the marginalized MCMC distribution, we can calculate the free-parameter and auxiliary parameter (R500, M500) values as the median of the distribution with confidence intervals defined by the 16th and 84th percentiles. With samples_results() we combine all output parameter values and their upper and lower 1$\sigma$ error bounds.
###Code
# combine all MCMC results
mcmc_results = samples_results(samples=samples,
samples_aux=samples_aux,
clustermeta=clustermeta)
for key in mcmc_results.keys():
print 'MCMC: '+str(key)+' = '+str(mcmc_results[str(key)])
#Corner plot of marginalized posterior distribution of free params from MCMC
fig1 = plt_mcmc_freeparam(mcmc_results=mcmc_results,
samples=samples,
sampler=sampler,
tspec_data=tspec_data,
clustermeta=clustermeta)
###Output
_____no_output_____
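###Markdown
The median/16th/84th-percentile summary described above can also be reproduced directly from the chain. A minimal NumPy sketch, assuming `samples` is an array of shape (nsamples, nparams):
###Code
import numpy as np

# 16th/50th/84th percentiles of each free parameter across the chain
lo, med, hi = np.percentile(samples, [16, 50, 84], axis=0)
upper_err, lower_err = hi - med, med - lo  # +/- 1-sigma bounds
###Output
_____no_output_____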
###Markdown
5. Summary plot
###Code
# Summary plot: density profile, temperature profile, mass profile
fig2, ax1, ax2 = plt_summary(ne_data=ne_data,
tspec_data=tspec_data,
nemodel=nemodel,
mcmc_results=mcmc_results,
clustermeta=clustermeta)
# add vikhlinin model to density plot
xplot = np.logspace(np.log10(min(ne_data['radius'])), np.log10(max(ne_data['radius'])), 1000)
ax1.plot(xplot, vikhlinin_neprof(nemodel_vikhlinin, xplot), 'k')
#plt.xlim(xmin=min(ne_data['radius']))
# add vikhlinin model to temperature plot
xplot = np.logspace(np.log10(min(tspec_data['radius'])), np.log10(max(tspec_data['radius'])), 1000)
ax2.plot(xplot, vikhlinin_tprof(tmodel_vikhlinin, xplot), 'k-')
###Output
_____no_output_____
###Markdown
Example Geohash Code
###Code
## Basic Stuff
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%load_ext autoreload
%autoreload 2
## Imports
import pygeohash
lat = 35.5
lng = -86.7
geo = pygeohash.encode(latitude=lat, longitude=lng, precision=8)
print("Geohash is {0}".format(geo))
###Output
Geohash is dn63gndf
###Markdown
Neighbors
###Code
neighbors = pygeohash.neighbors(geo)
print("These are geo {0}'s neighbors: {1}".format(geo, ", ".join(neighbors)))
###Output
These are geo dn63gndf's neighbors: dn63gndd, dn63gne4, dn63gnde, dn63gndg, dn63gne5, dn63gnd9, dn63gndc, dn63gne1
###Markdown
Geohash Characters
###Code
# geohash uses a 32-character alphabet (base32 without the letters a, i, l, o)
chars = "0123456789bcdefghjkmnpqrstuvwxyz"
chars
###Output
_____no_output_____
###Markdown
Read the data
###Code
path = 'data/parliament/'
A = sio.mmread(os.path.join(path, 'A.mtx')).tocsr()
X = sio.mmread(os.path.join(path, 'X.mtx')).tocsr()
z = np.load(os.path.join(path, 'z.npy'))
K = len(np.unique(z))
print(A.shape, X.shape, K)
###Output
(451, 451) (451, 108) 7
###Markdown
Preprocessing: make undirected + filter singletons + (optionally) select largest connected component
###Code
# make sure the graph is undirected
A = A.maximum(A.T)
# remove singleton nodes (without any edges)
filter_singletons = A.sum(1).A1 != 0
A = A[filter_singletons][:, filter_singletons]
X = X[filter_singletons]
z = z[filter_singletons]
# (optionally) make sure the graph has a single connected component
cc = sp.csgraph.connected_components(A)[1]
cc_filter = cc == np.bincount(cc).argmax()
A = A[cc_filter][:, cc_filter]
X = X[cc_filter]
z = z[cc_filter]
###Output
_____no_output_____
###Markdown
Fit PAICAN
###Code
paican = PAICAN(A, X, K, verbose=True)
z_pr, ca_pr, cx_pr = paican.fit_predict()
###Output
iter 0, ELBO: -1751.73962
iter 1, ELBO: -1590.77063
iter 2, ELBO: -1579.55896
iter 3, ELBO: -1578.55103
iter 4, ELBO: -1578.30579
iter 5, ELBO: -1578.20215
iter 6, ELBO: -1578.14893
iter 7, ELBO: -1578.12830
iter 8, ELBO: -1578.10156
iter 9, ELBO: -1578.05591
iter 10, ELBO: -1577.97839
iter 11, ELBO: -1577.84412
iter 12, ELBO: -1577.63074
iter 13, ELBO: -1577.29712
iter 14, ELBO: -1576.77478
iter 15, ELBO: -1576.05420
iter 16, ELBO: -1575.55151
iter 17, ELBO: -1575.44434
iter 18, ELBO: -1575.41663
iter 19, ELBO: -1575.38794
iter 20, ELBO: -1575.34827
iter 21, ELBO: -1575.30627
iter 22, ELBO: -1575.28784
iter 23, ELBO: -1575.25049
iter 24, ELBO: -1575.20581
iter 25, ELBO: -1575.17957
iter 26, ELBO: -1575.16504
iter 27, ELBO: -1575.15735
iter 28, ELBO: -1575.15710
###Markdown
Evaluate NMI
###Code
print('NMI: {:.2f}'.format(nmi(z_pr, z) * 100))
###Output
NMI: 80.30
###Markdown
Import packages
###Code
import cv2
import skimage
from matplotlib import pyplot as plt
import numpy as np
import time
import pandas as pd
plt.style.use("default")
import cvxpy as cp
from math import pi,sin,cos,sqrt
###Output
_____no_output_____
###Markdown
Helper methods
###Code
def get_warp(
img_path,
size = None, # (dd, n)
save_warp = False,
show_warp = False,
):
"""
Remaps image to polar coordinates space
"""
image = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)/255.0
h, w = image.shape
    if size is None:
dd, n = h,w
else:
dd, n = size
image_polar = cv2.warpPolar(
image,
center=(w/2, h/2),
maxRadius=min(w,h)/2,
dsize=(n,dd),
flags=cv2.INTER_LINEAR + cv2.WARP_FILL_OUTLIERS)
if save_warp:
target_path = f"{time.strftime('%Y-%m-%d-%H%M%S')}.png"
cv2.imwrite(target_path, image_polar*255.0)
if show_warp:
plt.imshow(image_polar, cmap="gray")
return image_polar
def create_dft_matrix(n):
"""
Discrete Fourier Transform Matrix
"""
W = np.zeros((2*n+1,2*n+1)).astype(complex)
for i in range(2*n+1):
for j in range(2*n+1):
ii = i - n
arg = ii * j * (2 * pi) / (2*n+1)
W[i,j] = cos(arg) + 1j * sin(arg)
W = W / sqrt(2*n+1)
return W
###Output
_____no_output_____
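###Markdown
The reason for the polar remapping is that a rotation of the original image becomes a circular shift along the angular axis of the polar image, which is the quantity the convex program below recovers. A toy NumPy illustration of that equivalence (using a random stand-in for a polar-mapped image, not the helper above):
###Code
import numpy as np

polar = np.random.rand(64, 64)             # hypothetical polar-mapped image
rotated = np.roll(polar, shift=5, axis=0)  # rotating the source ~ circular shift of one axis
# the shift (and hence the rotation angle) could be found by brute force:
shifts = [np.linalg.norm(np.roll(polar, s, axis=0) - rotated) for s in range(64)]
print(int(np.argmin(shifts)))              # -> 5
###Output
_____no_output_____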
###Markdown
Get image
- `REF_PATH`: path to image to be used as reference
- `OBS_PATH`: path to image to be transformed and aligned to the reference
- `DD`, `N`: dimensions for the convex program
  - required to be odd positive integers smaller than the input image dimensions
  - a higher value provides a better solution, but the convex program takes longer to solve
###Code
REF_PATH = 'mona_lisa_ref.png'
OBS_PATH = 'mona_lisa_obs.png'
DD, N = 101, 101
"""
View input images
"""
fig = plt.figure()
for idx, path in enumerate([REF_PATH, OBS_PATH]):
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
_ = fig.add_subplot(2,2,idx+1)
_.imshow(img, cmap="gray")
plt.grid()
"""
Map images to polar coordinates
"""
ref_mat = get_warp(REF_PATH, (DD, N))
obs_mat = get_warp(OBS_PATH, (DD, N))
fig = plt.figure()
for idx, img in enumerate([ref_mat, obs_mat]):
_ = fig.add_subplot(2,2,idx+1)
_.imshow(img, cmap="gray")
plt.grid()
###Output
_____no_output_____
###Markdown
Solve the following optimization instance
$\begin{align*}\max \quad &\sum_{j=1}^N \langle \phi_{j, ref} \circ \phi_{j,obs}^\dag, x \rangle + \langle x, \phi_{j, ref} \circ \phi_{j,obs}^\dag \rangle \\\text{s.t.} \quad & x=X[:,0] \\& X \text{ is PSD, Toeplitz} \\& X[i,i] = 1\end{align*}$
###Code
d = int((DD-1)/2)
dft = create_dft_matrix(d) # DFT matrix
inv_dft = np.linalg.inv(dft) # inverse DFT matrix
# Input data
phi_ref = (dft @ ref_mat)[d:,]
phi_obs = (dft @ obs_mat)[d:,]
# Variables
X_mat = cp.Variable((d+1,d+1), hermitian=True)
objective = cp.Maximize(cp.real(cp.sum(
cp.matmul(cp.conj(cp.diag(X_mat[:,0])), cp.multiply(phi_ref, cp.conj(phi_obs)))
+ cp.matmul(cp.diag(X_mat[:,0]), cp.conj(cp.multiply(phi_ref, cp.conj(phi_obs))))
)))
# Constraints
constraints = [X_mat >> 0] # PSD
constraints += [X_mat[i,j] == X_mat[i+1,j+1] for i in range(d) for j in range(d)] # Toeplitz <-- to vectorize?
constraints += [X_mat[0,0] == 1]
# Setup problem
prob = cp.Problem(objective, constraints)
start_time = time.time()
prob.solve()
time_elapsed = time.time() - start_time
print("[Solver: {} | Status: {}] \nOpt Val: {} [{:.3f}s]".format(prob.solver_stats.solver_name, prob.status, prob.value, time_elapsed))
###Output
[Solver: SCS | Status: optimal]
Opt Val: 3533.375341617581 [12.993s]
###Markdown
Transform observed image to align with reference image
###Code
"""
Recover top half of the matrix phi_obs which was truncated
"""
phi_opt = cp.matmul(cp.diag(X_mat[:,0]), phi_obs)
truncated_top = np.flip(phi_opt.value[1:,].conj(), axis=0)
original_phi_opt = np.concatenate([truncated_top, phi_opt.value])
recovered_phi = np.real(inv_dft @ original_phi_opt)
# plt.imshow(recovered_phi, cmap="gray")
"""
Inverse mapping back from polar coordinates
"""
recovered_img = cv2.warpPolar(
recovered_phi,
center=(DD/2, N/2),
maxRadius=min(DD,N)/2,
dsize=(DD,N),
flags=cv2.INTER_NEAREST + cv2.WARP_FILL_OUTLIERS + cv2.WARP_INVERSE_MAP)
plt.imshow(recovered_img, cmap="gray")
# skimage.io.imsave(f"images/output/{time.strftime('%Y-%m-%d-%H%M%S')}.png", skimage.util.img_as_ubyte(recovered_img))
###Output
_____no_output_____
###Markdown
1. Import pyebas
###Code
from pyebas import *
###Output
_____no_output_____
###Markdown
2. Download EBAS data (.nc files)
###Code
# set selection conditions
# if you need the whole EBAS database, set conditions as None
conditions = {
"start_year": 1990,
"end_year": 2021,
"site": ['ES0010R', 'ES0011R'],
"matrix": ['air'],
"components": ['NOx'],
}
# set local storage path
db_dir = r'ebas_db'
downloader = EbasDownloader(loc=db_dir)
# download requires multiprocessing; errors may occur because of multiprocessing
# use command line or Jupyter Notebook to prevent errors
downloader.get_raw_files(conditions=conditions, download=True)
###Output
Make data folder ebas_db\raw_data...
0 raw data (*.nc) files have been downloaded.
Requesting data from ebas sever...
13126 files found on ftp server.
0 files need to be deleted...
###Markdown
3. Export to .csv file
###Code
# export all the downloaded .nc files in the output path to .csv
# important: .csv file might be very large.
csv_exporter = csvExporter(loc=db_dir)
csv_exporter.export_csv('export.csv')
###Output
Processing files...: 100%|██████████| 5/5 [00:00<00:00, 19.52it/s]
###Markdown
4. Create local database
###Code
# set local storage path, must be the same as the previous path
db_dir = r'ebas_db'
# local database object
db = EbasDB(dir=db_dir, dump='xz', detailed=True)
# create/update database with new files
db.update_db()
###Output
Make data folder ebas_db\dumps...
Gathering site information...
Using 5 threads...
###Markdown
5. Open local database
###Code
# set local storage path
db_dir = r'ebas_db'
# local database object
db = EbasDB(dir=db_dir, dump='xz', detailed=True)
# open database if it is created
db.init_db()
###Output
0%| | 0/2 [00:00<?, ?it/s]
###Markdown
6. Query data from local database as pandas.DataFrame
###Code
condition = {
"id":["AM0001R", "EE0009R", 'ES0010R', 'ES0011R'],
"component":["NOx", "nitrate", "nitric_acid"],
"matrix":["air", "aerosol"],
"stat":['arithmetic mean',"median"],
"st":np.datetime64("1970-01-01"),
"ed":np.datetime64("2021-10-01"),
# if you want to include all, just remove the condition
#"country":["Denmark","France"],
}
df = db.query(condition, use_number_indexing=False)
df.head(20)
###Output
seraching...: 100%|██████████| 2/2 [00:00<?, ?it/s]
###Markdown
7. Access detail information
###Code
# access information for one site
db.site_index["ES0011R"]
db.site_index["ES0011R"]["components"].keys()
db.site_index["ES0011R"]["files"].keys()
###Output
_____no_output_____
###Markdown
8. Get summary
###Code
# get summary information
db.list_sites()
# possible keys are: "id","name","country","station_setting", "lat", "lon","alt","land_use", "file_num","components"
db.list_sites(keys=["name","lat","lon"])
# if components are selected, set list_time=True to see the starting and ending time
db.list_sites(keys=["name", "components"], list_time=True)
###Output
_____no_output_____
###Markdown
In this example we extract, for each reference identified by its id, the date of reference insertion, the date of id insertion, and the date of final reference deletion:
###Code
def getting_data(df):
df_upt = pd.DataFrame(df[['ref_ids','ref_ids_type', 'ref_id_ins']])
df_upt['ins_time'] = df['first_rev_time']
df_upt['del_time'] = 'None'
for i in df_upt.index:
if df['deleted'][i]:
            df_upt.loc[i, 'del_time'] = df['del_time'][i][-1]
return df_upt
df_upt = getting_data(df)
qgrid.show_grid(getting_data(df))
###Output
_____no_output_____
###Markdown
Original
###Code
hlp.plot1d(x_train[0])
###Output
_____no_output_____
###Markdown
Jittering
###Code
hlp.plot1d(x_train[0], aug.jitter(x_train)[0])
## Scaling
hlp.plot1d(x_train[0], aug.scaling(x_train)[0])
## Permutation
hlp.plot1d(x_train[0], aug.permutation(x_train)[0])
## Magnitude Warping
hlp.plot1d(x_train[0], aug.magnitude_warp(x_train)[0])
## Time Warping
hlp.plot1d(x_train[0], aug.time_warp(x_train)[0])
## Rotation
hlp.plot1d(x_train[0], aug.rotation(x_train)[0])
## Window Slicing
hlp.plot1d(x_train[0], aug.window_slice(x_train)[0])
## Window Warping
hlp.plot1d(x_train[0], aug.window_warp(x_train)[0])
## Suboptimal Warping Time Series Generator (SPAWNER)
hlp.plot1d(x_train[0], aug.spawner(x_train, y_train)[0])
## Weighted Dynamic Time Series Barycenter Averaging (wDBA)
hlp.plot1d(x_train[0], aug.wdba(x_train, y_train)[0])
## Random Guided Warping
hlp.plot1d(x_train[0], aug.random_guided_warp(x_train, y_train)[0])
## Discriminative Guided Warping
hlp.plot1d(x_train[0], aug.discriminative_guided_warp(x_train, y_train)[0])
###Output
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:02<00:00, 10.02it/s]
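###Markdown
For reference, the simplest of the augmentations shown above, jittering, just adds zero-mean Gaussian noise to the series. A minimal sketch of the idea (not the library's `aug.jitter` implementation):
###Code
import numpy as np

def jitter(x, sigma=0.03):
    """Add zero-mean Gaussian noise with standard deviation sigma to a series."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)
###Output
_____no_output_____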
###Markdown
Build a POMDP environment: Pendulum-V (only observe the velocity)
###Code
cuda_id = 0 # -1 if using cpu
ptu.set_gpu_mode(torch.cuda.is_available() and cuda_id >= 0, cuda_id)
env_name = "Pendulum-V-v0"
env = gym.make(env_name)
max_trajectory_len = env._max_episode_steps
act_dim = env.action_space.shape[0]
obs_dim = env.observation_space.shape[0]
print(env, obs_dim, act_dim, max_trajectory_len)
###Output
<TimeLimit<POMDPWrapper<TimeLimit<PendulumEnv<Pendulum-V-v0>>>>> 1 1 200
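###Markdown
The `POMDPWrapper` shown in the printout hides part of the underlying state; for `Pendulum-V` only the angular velocity is observed. A minimal sketch of how such a masking wrapper could look with a standard `gym.ObservationWrapper` (an illustration only, not the actual wrapper used by this codebase):
###Code
import gym
import numpy as np

class VelocityOnlyWrapper(gym.ObservationWrapper):
    """Expose only the angular velocity of Pendulum's [cos(theta), sin(theta), theta_dot] observation."""
    def __init__(self, env):
        super().__init__(env)
        high = np.array([8.0], dtype=np.float32)  # |theta_dot| <= 8 in Pendulum
        self.observation_space = gym.spaces.Box(-high, high, dtype=np.float32)

    def observation(self, obs):
        return obs[2:]  # keep only theta_dot

# usage (hypothetical): env = VelocityOnlyWrapper(gym.make("Pendulum-v1"))
###Output
_____no_output_____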
###Markdown
Build a recurrent model-free RL agent: separate architecture, `lstm` encoder, `oar` policy input space, `td3` RL algorithm (context length set later)
###Code
agent = Policy_RNN(
obs_dim=obs_dim,
action_dim=act_dim,
encoder="lstm",
algo="td3",
action_embedding_size=8,
state_embedding_size=32,
reward_embedding_size=8,
rnn_hidden_size=128,
dqn_layers=[128, 128],
policy_layers=[128, 128],
lr=0.0003,
gamma=0.9,
tau=0.005,
).to(ptu.device)
###Output
Critic_RNN(
(state_encoder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=32, bias=True)
)
(action_encoder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(reward_encoder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(rnn): LSTM(48, 128)
(current_state_action_encoder): FeatureExtractor(
(fc): Linear(in_features=2, out_features=48, bias=True)
)
(qf1): FlattenMlp(
(fc0): Linear(in_features=176, out_features=128, bias=True)
(fc1): Linear(in_features=128, out_features=128, bias=True)
(last_fc): Linear(in_features=128, out_features=1, bias=True)
)
(qf2): FlattenMlp(
(fc0): Linear(in_features=176, out_features=128, bias=True)
(fc1): Linear(in_features=128, out_features=128, bias=True)
(last_fc): Linear(in_features=128, out_features=1, bias=True)
)
)
Actor_RNN(
(state_encoder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=32, bias=True)
)
(action_encoder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(reward_encoder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=8, bias=True)
)
(rnn): LSTM(48, 128)
(current_state_encoder): FeatureExtractor(
(fc): Linear(in_features=1, out_features=32, bias=True)
)
(policy): DeterministicPolicy(
(fc0): Linear(in_features=160, out_features=128, bias=True)
(fc1): Linear(in_features=128, out_features=128, bias=True)
(last_fc): Linear(in_features=128, out_features=1, bias=True)
)
)
###Markdown
Define other training parameters such as context length and training frequency
###Code
num_updates_per_iter = 1.0 # training frequency
sampled_seq_len = 64 # context length
buffer_size = 1e6
batch_size = 32
num_iters = 150
num_init_rollouts_pool = 5
num_rollouts_per_iter = 1
total_rollouts = num_init_rollouts_pool + num_iters * num_rollouts_per_iter
n_env_steps_total = max_trajectory_len * total_rollouts
_n_env_steps_total = 0
print("total env episodes", total_rollouts, "total env steps", n_env_steps_total)
###Output
total env episodes 155 total env steps 31000
###Markdown
Define key functions: collect rollouts and policy update
###Code
@torch.no_grad()
def collect_rollouts(
num_rollouts,
random_actions=False,
deterministic=False,
train_mode=True
):
"""collect num_rollouts of trajectories in task and save into policy buffer
:param
random_actions: whether to use policy to sample actions, or randomly sample action space
deterministic: deterministic action selection?
train_mode: whether to train (stored to buffer) or test
"""
if not train_mode:
assert random_actions == False and deterministic == True
total_steps = 0
total_rewards = 0.0
for idx in range(num_rollouts):
steps = 0
rewards = 0.0
obs = ptu.from_numpy(env.reset())
obs = obs.reshape(1, obs.shape[-1])
done_rollout = False
# get hidden state at timestep=0, None for mlp
action, reward, internal_state = agent.get_initial_info()
if train_mode:
# temporary storage
obs_list, act_list, rew_list, next_obs_list, term_list = (
[],
[],
[],
[],
[],
)
while not done_rollout:
if random_actions:
action = ptu.FloatTensor(
[env.action_space.sample()]
) # (1, A)
else:
# policy takes hidden state as input for rnn, while takes obs for mlp
(action, _, _, _), internal_state = agent.act(
prev_internal_state=internal_state,
prev_action=action,
reward=reward,
obs=obs,
deterministic=deterministic,
)
# observe reward and next obs (B=1, dim)
next_obs, reward, done, info = utl.env_step(
env, action.squeeze(dim=0)
)
done_rollout = False if ptu.get_numpy(done[0][0]) == 0.0 else True
# update statistics
steps += 1
rewards += reward.item()
# early stopping env: such as rmdp, pomdp, generalize tasks. term ignores timeout
term = (
False
if "TimeLimit.truncated" in info
or steps >= max_trajectory_len
else done_rollout
)
if train_mode:
# append tensors to temporary storage
obs_list.append(obs) # (1, dim)
act_list.append(action) # (1, dim)
rew_list.append(reward) # (1, dim)
term_list.append(term) # bool
next_obs_list.append(next_obs) # (1, dim)
# set: obs <- next_obs
obs = next_obs.clone()
if train_mode:
# add collected sequence to buffer
policy_storage.add_episode(
observations=ptu.get_numpy(torch.cat(obs_list, dim=0)), # (L, dim)
actions=ptu.get_numpy(torch.cat(act_list, dim=0)), # (L, dim)
rewards=ptu.get_numpy(torch.cat(rew_list, dim=0)), # (L, dim)
terminals=np.array(term_list).reshape(-1, 1), # (L, 1)
next_observations=ptu.get_numpy(
torch.cat(next_obs_list, dim=0)
), # (L, dim)
)
print("Mode:", "Train" if train_mode else "Test",
"env_steps", steps,
"total rewards", rewards)
total_steps += steps
total_rewards += rewards
if train_mode:
return total_steps
else:
return total_rewards / num_rollouts
def update(num_updates):
rl_losses_agg = {}
# print(num_updates)
for update in range(num_updates):
# sample random RL batch: in transitions
batch = ptu.np_to_pytorch_batch(
policy_storage.random_episodes(batch_size)
)
# RL update
rl_losses = agent.update(batch)
for k, v in rl_losses.items():
if update == 0: # first iterate - create list
rl_losses_agg[k] = [v]
else: # append values
rl_losses_agg[k].append(v)
# statistics
for k in rl_losses_agg:
rl_losses_agg[k] = np.mean(rl_losses_agg[k])
return rl_losses_agg
###Output
_____no_output_____
###Markdown
Train and Evaluate the agent: this only takes < 20 min
###Code
policy_storage = SeqReplayBuffer(
max_replay_buffer_size=int(buffer_size),
observation_dim=obs_dim,
action_dim=act_dim,
sampled_seq_len=sampled_seq_len,
sample_weight_baseline=0.0,
)
env_steps = collect_rollouts(num_rollouts=num_init_rollouts_pool,
random_actions=True,
train_mode=True
)
_n_env_steps_total += env_steps
# evaluation parameters
last_eval_num_iters = 0
log_interval = 5
eval_num_rollouts = 10
learning_curve = {
'x': [],
'y': [],
}
while _n_env_steps_total < n_env_steps_total:
env_steps = collect_rollouts(num_rollouts=num_rollouts_per_iter,
train_mode=True
)
_n_env_steps_total += env_steps
train_stats = update(int(num_updates_per_iter * env_steps))
current_num_iters = _n_env_steps_total // (
num_rollouts_per_iter * max_trajectory_len)
if (current_num_iters != last_eval_num_iters
and current_num_iters % log_interval == 0):
last_eval_num_iters = current_num_iters
average_returns = collect_rollouts(
num_rollouts=eval_num_rollouts,
train_mode=False,
random_actions=False,
deterministic=True
)
learning_curve['x'].append(_n_env_steps_total)
learning_curve['y'].append(average_returns)
print(_n_env_steps_total, average_returns)
###Output
Mode: Train env_steps 200 total rewards -1215.5405168533325
Mode: Train env_steps 200 total rewards -1309.3240714073181
Mode: Train env_steps 200 total rewards -1070.255422860384
Mode: Train env_steps 200 total rewards -1716.9817371368408
Mode: Train env_steps 200 total rewards -1348.119238615036
Mode: Train env_steps 200 total rewards -1794.5983276367188
Mode: Train env_steps 200 total rewards -1641.6694905161858
Mode: Train env_steps 200 total rewards -1590.8518767878413
Mode: Train env_steps 200 total rewards -1717.778513431549
Mode: Train env_steps 200 total rewards -1716.919951915741
Mode: Test env_steps 200 total rewards -1690.6299517154694
Mode: Test env_steps 200 total rewards -1667.401160120964
Mode: Test env_steps 200 total rewards -1683.2179251909256
Mode: Test env_steps 200 total rewards -1629.752505838871
Mode: Test env_steps 200 total rewards -1730.7712788581848
Mode: Test env_steps 200 total rewards -1709.7121629714966
Mode: Test env_steps 200 total rewards -1737.636411190033
Mode: Test env_steps 200 total rewards -1724.8275074958801
Mode: Test env_steps 200 total rewards -1644.5090357661247
Mode: Test env_steps 200 total rewards -1670.3785852193832
2000 -1688.8836524367332
Mode: Train env_steps 200 total rewards -1675.8528361320496
Mode: Train env_steps 200 total rewards -1658.8392679691315
Mode: Train env_steps 200 total rewards -1519.6182126998901
Mode: Train env_steps 200 total rewards -1543.8249187469482
Mode: Train env_steps 200 total rewards -1378.7394891306758
Mode: Test env_steps 200 total rewards -1243.581422328949
Mode: Test env_steps 200 total rewards -1279.0839395523071
Mode: Test env_steps 200 total rewards -1115.5180749297142
Mode: Test env_steps 200 total rewards -1240.0015530586243
Mode: Test env_steps 200 total rewards -1131.4246773123741
Mode: Test env_steps 200 total rewards -1271.0484585762024
Mode: Test env_steps 200 total rewards -1296.8658256530762
Mode: Test env_steps 200 total rewards -1268.0181958675385
Mode: Test env_steps 200 total rewards -1105.4287464022636
Mode: Test env_steps 200 total rewards -1221.9913232326508
3000 -1217.29622169137
Mode: Train env_steps 200 total rewards -1086.907365836203
Mode: Train env_steps 200 total rewards -809.5890567302704
Mode: Train env_steps 200 total rewards -1509.1656613349915
Mode: Train env_steps 200 total rewards -875.1950886547565
Mode: Train env_steps 200 total rewards -883.6977178305387
Mode: Test env_steps 200 total rewards -932.8838503956795
Mode: Test env_steps 200 total rewards -916.5262511968613
Mode: Test env_steps 200 total rewards -853.4724770113826
Mode: Test env_steps 200 total rewards -972.6363238096237
Mode: Test env_steps 200 total rewards -916.7851620316505
Mode: Test env_steps 200 total rewards -892.7446937561035
Mode: Test env_steps 200 total rewards -911.9960522651672
Mode: Test env_steps 200 total rewards -862.5102658420801
Mode: Test env_steps 200 total rewards -909.3836004137993
Mode: Test env_steps 200 total rewards -902.3712181299925
4000 -907.1309894852341
Mode: Train env_steps 200 total rewards -896.5191862247884
Mode: Train env_steps 200 total rewards -1148.8554611206055
Mode: Train env_steps 200 total rewards -919.8976370096207
Mode: Train env_steps 200 total rewards -894.6185926496983
Mode: Train env_steps 200 total rewards -777.0896812826395
Mode: Test env_steps 200 total rewards -800.0095049291849
Mode: Test env_steps 200 total rewards -729.1357635855675
Mode: Test env_steps 200 total rewards -790.4656649529934
Mode: Test env_steps 200 total rewards -658.2100356258452
Mode: Test env_steps 200 total rewards -678.3389454782009
Mode: Test env_steps 200 total rewards -764.867270976305
Mode: Test env_steps 200 total rewards -711.1784103494138
Mode: Test env_steps 200 total rewards -704.299937158823
Mode: Test env_steps 200 total rewards -703.3847205489874
Mode: Test env_steps 200 total rewards -769.4560797959566
5000 -730.9346333401278
Mode: Train env_steps 200 total rewards -774.3973034918308
Mode: Train env_steps 200 total rewards -863.303290605545
Mode: Train env_steps 200 total rewards -754.3786760801449
Mode: Train env_steps 200 total rewards -787.7701032310724
Mode: Train env_steps 200 total rewards -814.8449696339667
Mode: Test env_steps 200 total rewards -641.1826608031988
Mode: Test env_steps 200 total rewards -673.1848703697324
Mode: Test env_steps 200 total rewards -636.2317231073976
Mode: Test env_steps 200 total rewards -636.3841380421072
Mode: Test env_steps 200 total rewards -634.7440396994352
Mode: Test env_steps 200 total rewards -1434.365993976593
Mode: Test env_steps 200 total rewards -639.5609966111369
Mode: Test env_steps 200 total rewards -638.4026339892298
Mode: Test env_steps 200 total rewards -629.0861927568913
Mode: Test env_steps 200 total rewards -635.3440890386701
6000 -719.8487338394392
Mode: Train env_steps 200 total rewards -624.8576611503959
Mode: Train env_steps 200 total rewards -731.2055732905865
Mode: Train env_steps 200 total rewards -643.7517330273986
Mode: Train env_steps 200 total rewards -512.888639099896
Mode: Train env_steps 200 total rewards -678.9873680695891
Mode: Test env_steps 200 total rewards -649.3965282291174
Mode: Test env_steps 200 total rewards -541.0664244294167
Mode: Test env_steps 200 total rewards -656.5433887466788
Mode: Test env_steps 200 total rewards -701.5938144102693
Mode: Test env_steps 200 total rewards -570.9794048666954
Mode: Test env_steps 200 total rewards -526.0970221487805
Mode: Test env_steps 200 total rewards -528.7169065512717
Mode: Test env_steps 200 total rewards -791.1858232319355
Mode: Test env_steps 200 total rewards -760.1559834107757
Mode: Test env_steps 200 total rewards -796.3674455285072
7000 -652.2102741553448
Mode: Train env_steps 200 total rewards -575.0728849545121
Mode: Train env_steps 200 total rewards -538.9270869866014
Mode: Train env_steps 200 total rewards -703.1943583320826
Mode: Train env_steps 200 total rewards -522.5574248465709
Mode: Train env_steps 200 total rewards -526.6231522634625
Mode: Test env_steps 200 total rewards -471.21681063994765
Mode: Test env_steps 200 total rewards -407.10355828516185
Mode: Test env_steps 200 total rewards -429.82667701132596
Mode: Test env_steps 200 total rewards -396.4019733443856
Mode: Test env_steps 200 total rewards -1491.0763459205627
Mode: Test env_steps 200 total rewards -326.2651424361393
Mode: Test env_steps 200 total rewards -464.98171285912395
Mode: Test env_steps 200 total rewards -392.0769012141973
Mode: Test env_steps 200 total rewards -269.7005622461438
Mode: Test env_steps 200 total rewards -509.407666021958
8000 -515.8057349978947
Mode: Train env_steps 200 total rewards -639.5204429877922
Mode: Train env_steps 200 total rewards -396.447283314541
Mode: Train env_steps 200 total rewards -519.2145761235151
Mode: Train env_steps 200 total rewards -386.9386151973158
Mode: Train env_steps 200 total rewards -393.6131444051862
Mode: Test env_steps 200 total rewards -136.34055368886766
Mode: Test env_steps 200 total rewards -130.04246410355336
Mode: Test env_steps 200 total rewards -137.05444939476
Mode: Test env_steps 200 total rewards -134.1194399067317
Mode: Test env_steps 200 total rewards -131.07375583963585
Mode: Test env_steps 200 total rewards -130.39294535505906
Mode: Test env_steps 200 total rewards -256.4807607967232
Mode: Test env_steps 200 total rewards -133.45546923366783
Mode: Test env_steps 200 total rewards -137.30824294477497
Mode: Test env_steps 200 total rewards -397.2588393399783
9000 -172.3526920603752
Mode: Train env_steps 200 total rewards -260.3047589848429
Mode: Train env_steps 200 total rewards -260.44967386405915
Mode: Train env_steps 200 total rewards -9.588460055063479
Mode: Train env_steps 200 total rewards -503.4001742233813
Mode: Train env_steps 200 total rewards -132.90466969866975
Mode: Test env_steps 200 total rewards -245.46063787024468
Mode: Test env_steps 200 total rewards -258.87249805172905
Mode: Test env_steps 200 total rewards -253.1965181294363
Mode: Test env_steps 200 total rewards -256.33532144408673
Mode: Test env_steps 200 total rewards -122.02367229596712
Mode: Test env_steps 200 total rewards -378.40153571846895
Mode: Test env_steps 200 total rewards -129.97556851245463
Mode: Test env_steps 200 total rewards -256.6560115632601
Mode: Test env_steps 200 total rewards -128.58447807095945
Mode: Test env_steps 200 total rewards -468.4694554193411
10000 -249.79756970759482
Mode: Train env_steps 200 total rewards -253.84205745416693
Mode: Train env_steps 200 total rewards -258.597339340964
Mode: Train env_steps 200 total rewards -249.67442950383338
Mode: Train env_steps 200 total rewards -264.99233946722234
Mode: Train env_steps 200 total rewards -123.49480776841665
Mode: Test env_steps 200 total rewards -386.33284205210657
Mode: Test env_steps 200 total rewards -374.89824844955365
Mode: Test env_steps 200 total rewards -127.82263034246353
Mode: Test env_steps 200 total rewards -3.396543635226408
Mode: Test env_steps 200 total rewards -0.3892205822030519
Mode: Test env_steps 200 total rewards -127.58443048472691
Mode: Test env_steps 200 total rewards -123.29965032166001
Mode: Test env_steps 200 total rewards -405.617472100781
Mode: Test env_steps 200 total rewards -131.20015325089298
Mode: Test env_steps 200 total rewards -270.9554879873649
11000 -195.1496679206979
Mode: Train env_steps 200 total rewards -128.46735045554306
Mode: Train env_steps 200 total rewards -385.3559364905559
Mode: Train env_steps 200 total rewards -133.3203926575943
Mode: Train env_steps 200 total rewards -130.180486971527
Mode: Train env_steps 200 total rewards -129.11331324546154
Mode: Test env_steps 200 total rewards -259.27573602375924
Mode: Test env_steps 200 total rewards -127.15911891811993
Mode: Test env_steps 200 total rewards -131.78587026067544
Mode: Test env_steps 200 total rewards -124.41451870201854
Mode: Test env_steps 200 total rewards -120.47274359833682
Mode: Test env_steps 200 total rewards -124.89280595941818
Mode: Test env_steps 200 total rewards -121.65913894737605
Mode: Test env_steps 200 total rewards -249.62018572923262
Mode: Test env_steps 200 total rewards -1.0191547659342177
Mode: Test env_steps 200 total rewards -130.19940298219444
12000 -139.04986758870655
Mode: Train env_steps 200 total rewards -130.7861404924015
Mode: Train env_steps 200 total rewards -128.20895186233065
Mode: Train env_steps 200 total rewards -240.80124919944137
Mode: Train env_steps 200 total rewards -127.05305419189972
Mode: Train env_steps 200 total rewards -389.74735507116566
Mode: Test env_steps 200 total rewards -125.799274083809
Mode: Test env_steps 200 total rewards -126.80654663550376
Mode: Test env_steps 200 total rewards -128.47082148335176
Mode: Test env_steps 200 total rewards -125.38395279903489
Mode: Test env_steps 200 total rewards -265.4943495452462
Mode: Test env_steps 200 total rewards -391.3820340028615
Mode: Test env_steps 200 total rewards -124.5938728672918
Mode: Test env_steps 200 total rewards -115.8693172446583
Mode: Test env_steps 200 total rewards -121.6324416497664
Mode: Test env_steps 200 total rewards -403.91459427748487
13000 -192.93472045890084
Mode: Train env_steps 200 total rewards -120.75656462824372
Mode: Train env_steps 200 total rewards -244.2110134603572
Mode: Train env_steps 200 total rewards -271.4861283576247
Mode: Train env_steps 200 total rewards -299.46712611912517
Mode: Train env_steps 200 total rewards -276.9068454174121
Mode: Test env_steps 200 total rewards -130.26577123824973
Mode: Test env_steps 200 total rewards -122.85300587835081
Mode: Test env_steps 200 total rewards -125.84164321703429
Mode: Test env_steps 200 total rewards -127.25999846162449
Mode: Test env_steps 200 total rewards -245.0846909333195
Mode: Test env_steps 200 total rewards -251.7522211139776
Mode: Test env_steps 200 total rewards -117.7094244834152
Mode: Test env_steps 200 total rewards -249.07677362083632
Mode: Test env_steps 200 total rewards -259.21219713821483
Mode: Test env_steps 200 total rewards -118.03599187266809
14000 -174.7091717957691
Mode: Train env_steps 200 total rewards -242.31402633567632
Mode: Train env_steps 200 total rewards -127.27280326851178
Mode: Train env_steps 200 total rewards -243.62500214390457
Mode: Train env_steps 200 total rewards -126.50611761247274
Mode: Train env_steps 200 total rewards -123.3945286332164
Mode: Test env_steps 200 total rewards -257.4191315458156
Mode: Test env_steps 200 total rewards -119.91926783090457
Mode: Test env_steps 200 total rewards -4.727449198719114
Mode: Test env_steps 200 total rewards -378.35922101838514
Mode: Test env_steps 200 total rewards -123.7072509995196
Mode: Test env_steps 200 total rewards -280.62047006061766
Mode: Test env_steps 200 total rewards -248.55686107743531
Mode: Test env_steps 200 total rewards -125.25552876619622
Mode: Test env_steps 200 total rewards -245.17300941608846
Mode: Test env_steps 200 total rewards -263.7774709605146
15000 -204.75156608741963
Mode: Train env_steps 200 total rewards -369.5970004310366
Mode: Train env_steps 200 total rewards -117.8776598579716
Mode: Train env_steps 200 total rewards -266.6137974287849
Mode: Train env_steps 200 total rewards -247.84643931523897
Mode: Train env_steps 200 total rewards -133.65093973837793
Mode: Test env_steps 200 total rewards -132.58213516324759
Mode: Test env_steps 200 total rewards -317.6314685828984
Mode: Test env_steps 200 total rewards -120.63207617402077
Mode: Test env_steps 200 total rewards -134.50522946193814
Mode: Test env_steps 200 total rewards -249.93733799178153
Mode: Test env_steps 200 total rewards -126.03254494443536
Mode: Test env_steps 200 total rewards -127.51484705973417
Mode: Test env_steps 200 total rewards -133.02907354477793
Mode: Test env_steps 200 total rewards -131.04472528398037
Mode: Test env_steps 200 total rewards -133.04624734260142
16000 -160.59556855494156
Mode: Train env_steps 200 total rewards -131.4692294076085
Mode: Train env_steps 200 total rewards -257.0220946841873
Mode: Train env_steps 200 total rewards -132.60133136808872
Mode: Train env_steps 200 total rewards -252.69747569982428
Mode: Train env_steps 200 total rewards -122.5156181063503
Mode: Test env_steps 200 total rewards -120.0488967075944
Mode: Test env_steps 200 total rewards -125.59240189334378
Mode: Test env_steps 200 total rewards -122.92463257256895
Mode: Test env_steps 200 total rewards -266.6653274325654
Mode: Test env_steps 200 total rewards -129.52725801430643
Mode: Test env_steps 200 total rewards -386.4986750278622
Mode: Test env_steps 200 total rewards -127.47746223770082
Mode: Test env_steps 200 total rewards -131.84532477753237
Mode: Test env_steps 200 total rewards -123.68566208239645
Mode: Test env_steps 200 total rewards -133.80112480558455
17000 -166.80667655514554
Mode: Train env_steps 200 total rewards -130.1032104054466
Mode: Train env_steps 200 total rewards -5.792526931327302
Mode: Train env_steps 200 total rewards -129.94445695829927
Mode: Train env_steps 200 total rewards -1.8074299860745668
Mode: Train env_steps 200 total rewards -371.67741363390815
Mode: Test env_steps 200 total rewards -129.01796465553343
Mode: Test env_steps 200 total rewards -255.2657772154198
Mode: Test env_steps 200 total rewards -124.8317355401814
Mode: Test env_steps 200 total rewards -127.61366206099046
Mode: Test env_steps 200 total rewards -130.1721339863725
Mode: Test env_steps 200 total rewards -128.43343426752836
Mode: Test env_steps 200 total rewards -264.26960422779666
Mode: Test env_steps 200 total rewards -3.667812744155526
Mode: Test env_steps 200 total rewards -251.8668613290938
Mode: Test env_steps 200 total rewards -251.72904552519321
18000 -166.68680315522653
Mode: Train env_steps 200 total rewards -129.41188386362046
Mode: Train env_steps 200 total rewards -122.25436197966337
Mode: Train env_steps 200 total rewards -132.0075741810724
Mode: Train env_steps 200 total rewards -125.08316496918269
Mode: Train env_steps 200 total rewards -120.87805001712695
Mode: Test env_steps 200 total rewards -130.77035507211986
Mode: Test env_steps 200 total rewards -130.97795120121737
Mode: Test env_steps 200 total rewards -285.9067427550326
Mode: Test env_steps 200 total rewards -130.19821366295218
Mode: Test env_steps 200 total rewards -248.72471698420122
Mode: Test env_steps 200 total rewards -131.5111675742737
Mode: Test env_steps 200 total rewards -252.134106502519
Mode: Test env_steps 200 total rewards -249.68509305920452
Mode: Test env_steps 200 total rewards -259.2564549049275
Mode: Test env_steps 200 total rewards -131.86590750053256
19000 -195.10307092169805
Mode: Train env_steps 200 total rewards -336.72006702711224
Mode: Train env_steps 200 total rewards -3.6598976548411883
Mode: Train env_steps 200 total rewards -128.5459162555635
Mode: Train env_steps 200 total rewards -389.0736679392867
Mode: Train env_steps 200 total rewards -132.46394797693938
Mode: Test env_steps 200 total rewards -127.63480124925263
Mode: Test env_steps 200 total rewards -132.9844055683352
Mode: Test env_steps 200 total rewards -350.4678683485836
Mode: Test env_steps 200 total rewards -1491.0205211639404
Mode: Test env_steps 200 total rewards -123.56267284578644
Mode: Test env_steps 200 total rewards -253.39906679093838
Mode: Test env_steps 200 total rewards -131.26202398515306
Mode: Test env_steps 200 total rewards -375.1163965202868
Mode: Test env_steps 200 total rewards -132.37188876396976
Mode: Test env_steps 200 total rewards -254.79661067272536
20000 -337.26162559089715
Mode: Train env_steps 200 total rewards -127.21233860775828
Mode: Train env_steps 200 total rewards -397.9239173475653
Mode: Train env_steps 200 total rewards -261.70106873475015
Mode: Train env_steps 200 total rewards -136.95836029946804
Mode: Train env_steps 200 total rewards -130.52756336517632
Mode: Test env_steps 200 total rewards -127.33369559422135
Mode: Test env_steps 200 total rewards -283.45684512890875
Mode: Test env_steps 200 total rewards -136.14634452015162
Mode: Test env_steps 200 total rewards -137.2795043103397
Mode: Test env_steps 200 total rewards -248.97463169554248
Mode: Test env_steps 200 total rewards -8.958229891955853
Mode: Test env_steps 200 total rewards -10.105981927365065
Mode: Test env_steps 200 total rewards -132.38649014476687
Mode: Test env_steps 200 total rewards -133.52735120104626
Mode: Test env_steps 200 total rewards -132.87370552495122
21000 -135.1042779939249
Mode: Train env_steps 200 total rewards -135.44952426105738
Mode: Train env_steps 200 total rewards -136.6360167451203
Mode: Train env_steps 200 total rewards -126.07958034798503
Mode: Train env_steps 200 total rewards -129.10063152387738
Mode: Train env_steps 200 total rewards -254.23420189972967
Mode: Test env_steps 200 total rewards -9.132988084107637
Mode: Test env_steps 200 total rewards -122.19331623334438
Mode: Test env_steps 200 total rewards -253.2292528897524
Mode: Test env_steps 200 total rewards -291.03938596788794
Mode: Test env_steps 200 total rewards -127.90111041348428
Mode: Test env_steps 200 total rewards -7.189530588919297
Mode: Test env_steps 200 total rewards -122.86703424248844
Mode: Test env_steps 200 total rewards -252.5274507328868
Mode: Test env_steps 200 total rewards -126.35793518205173
Mode: Test env_steps 200 total rewards -252.72059313277714
22000 -156.51585974677
Mode: Train env_steps 200 total rewards -132.3777971100062
Mode: Train env_steps 200 total rewards -263.93837735801935
Mode: Train env_steps 200 total rewards -380.18561655655503
Mode: Train env_steps 200 total rewards -408.3316973443143
Mode: Train env_steps 200 total rewards -134.41268048726488
Mode: Test env_steps 200 total rewards -252.1836907789111
Mode: Test env_steps 200 total rewards -136.87916581658646
Mode: Test env_steps 200 total rewards -130.30568698607385
Mode: Test env_steps 200 total rewards -295.1264161616564
Mode: Test env_steps 200 total rewards -285.27469485998154
Mode: Test env_steps 200 total rewards -257.36417460720986
Mode: Test env_steps 200 total rewards -122.39938643248752
Mode: Test env_steps 200 total rewards -136.13417248800397
Mode: Test env_steps 200 total rewards -251.1970808338374
Mode: Test env_steps 200 total rewards -135.31905758287758
23000 -200.21835265476255
Mode: Train env_steps 200 total rewards -265.19849015702493
Mode: Train env_steps 200 total rewards -268.84571858868003
Mode: Train env_steps 200 total rewards -137.15437516197562
Mode: Train env_steps 200 total rewards -131.01147694559768
Mode: Train env_steps 200 total rewards -389.00455401837826
Mode: Test env_steps 200 total rewards -123.15574537939392
Mode: Test env_steps 200 total rewards -264.8135799880838
Mode: Test env_steps 200 total rewards -359.71586162620224
Mode: Test env_steps 200 total rewards -121.86481238342822
Mode: Test env_steps 200 total rewards -134.40076231583953
Mode: Test env_steps 200 total rewards -127.8359218480764
Mode: Test env_steps 200 total rewards -252.95195665210485
Mode: Test env_steps 200 total rewards -133.68351730890572
Mode: Test env_steps 200 total rewards -249.9511700947769
Mode: Test env_steps 200 total rewards -416.6168870218098
24000 -218.49902146186213
Mode: Train env_steps 200 total rewards -133.75552151724696
Mode: Train env_steps 200 total rewards -249.84270376106724
Mode: Train env_steps 200 total rewards -119.0928434144007
Mode: Train env_steps 200 total rewards -252.1334647499025
Mode: Train env_steps 200 total rewards -4.308382875751704
Mode: Test env_steps 200 total rewards -250.32012339681387
Mode: Test env_steps 200 total rewards -130.86303978820797
Mode: Test env_steps 200 total rewards -268.61977915861644
Mode: Test env_steps 200 total rewards -256.51407427561935
Mode: Test env_steps 200 total rewards -268.53248357982375
Mode: Test env_steps 200 total rewards -131.89295327838045
Mode: Test env_steps 200 total rewards -247.8418615491828
Mode: Test env_steps 200 total rewards -132.06573122669943
Mode: Test env_steps 200 total rewards -246.07906676083803
Mode: Test env_steps 200 total rewards -128.755500536412
25000 -206.1484613550594
Mode: Train env_steps 200 total rewards -268.73735208273865
Mode: Train env_steps 200 total rewards -249.699738193769
Mode: Train env_steps 200 total rewards -257.7146478953655
Mode: Train env_steps 200 total rewards -132.48573947069235
Mode: Train env_steps 200 total rewards -117.73745695047546
Mode: Test env_steps 200 total rewards -117.13273281010333
Mode: Test env_steps 200 total rewards -125.37805172341177
Mode: Test env_steps 200 total rewards -246.70760537590832
Mode: Test env_steps 200 total rewards -126.25057095201919
Mode: Test env_steps 200 total rewards -356.92420602519996
Mode: Test env_steps 200 total rewards -247.3438758761622
Mode: Test env_steps 200 total rewards -123.14953158609569
Mode: Test env_steps 200 total rewards -127.49349682836328
Mode: Test env_steps 200 total rewards -130.86493495781906
Mode: Test env_steps 200 total rewards -131.28574351139832
26000 -173.2530749646481
Mode: Train env_steps 200 total rewards -129.1364300606656
Mode: Train env_steps 200 total rewards -131.16975290200207
Mode: Train env_steps 200 total rewards -121.95525176647061
Mode: Train env_steps 200 total rewards -347.63898885797244
Mode: Train env_steps 200 total rewards -1516.262550830841
Mode: Test env_steps 200 total rewards -125.29759021170321
Mode: Test env_steps 200 total rewards -116.29971585396561
Mode: Test env_steps 200 total rewards -132.65588944178307
Mode: Test env_steps 200 total rewards -242.80255369469523
Mode: Test env_steps 200 total rewards -120.76851275190711
Mode: Test env_steps 200 total rewards -129.98449951899238
Mode: Test env_steps 200 total rewards -263.6801114343107
Mode: Test env_steps 200 total rewards -133.65415045432746
Mode: Test env_steps 200 total rewards -247.21006692014635
Mode: Test env_steps 200 total rewards -117.64420653533307
27000 -162.99972968171642
Mode: Train env_steps 200 total rewards -130.20218588324497
Mode: Train env_steps 200 total rewards -118.29003828013083
Mode: Train env_steps 200 total rewards -247.1906664679991
Mode: Train env_steps 200 total rewards -251.76994302743697
Mode: Train env_steps 200 total rewards -380.8231740617193
Mode: Test env_steps 200 total rewards -128.14449329604395
Mode: Test env_steps 200 total rewards -133.00257929693907
Mode: Test env_steps 200 total rewards -121.33280960656703
Mode: Test env_steps 200 total rewards -117.21745651622768
Mode: Test env_steps 200 total rewards -260.304541438818
Mode: Test env_steps 200 total rewards -129.4903052574955
Mode: Test env_steps 200 total rewards -123.66184103582054
Mode: Test env_steps 200 total rewards -4.47467941895593
Mode: Test env_steps 200 total rewards -136.82730377465487
Mode: Test env_steps 200 total rewards -128.40459588193335
28000 -128.2860605523456
Mode: Train env_steps 200 total rewards -359.02930258901324
Mode: Train env_steps 200 total rewards -126.99004180729389
Mode: Train env_steps 200 total rewards -130.01239318959415
Mode: Train env_steps 200 total rewards -132.86401597573422
Mode: Train env_steps 200 total rewards -131.5378251487855
Mode: Test env_steps 200 total rewards -377.7228271923959
Mode: Test env_steps 200 total rewards -388.79292901046574
Mode: Test env_steps 200 total rewards -134.40097275190055
Mode: Test env_steps 200 total rewards -121.09551488608122
Mode: Test env_steps 200 total rewards -238.15228960616514
Mode: Test env_steps 200 total rewards -131.88327238895
Mode: Test env_steps 200 total rewards -246.09436088893563
Mode: Test env_steps 200 total rewards -5.141647985205054
Mode: Test env_steps 200 total rewards -130.17426304146647
Mode: Test env_steps 200 total rewards -125.60784388473257
29000 -189.90659216362982
Mode: Train env_steps 200 total rewards -376.1674876296893
Mode: Train env_steps 200 total rewards -375.34097828599624
Mode: Train env_steps 200 total rewards -127.59093644656241
Mode: Train env_steps 200 total rewards -136.18268738826737
Mode: Train env_steps 200 total rewards -129.42341559915803
Mode: Test env_steps 200 total rewards -391.77343064476736
Mode: Test env_steps 200 total rewards -254.3057643007487
Mode: Test env_steps 200 total rewards -134.01842796755955
Mode: Test env_steps 200 total rewards -391.50856303423643
Mode: Test env_steps 200 total rewards -265.35276218969375
Mode: Test env_steps 200 total rewards -136.64729456044734
Mode: Test env_steps 200 total rewards -133.1267894115299
Mode: Test env_steps 200 total rewards -5.491715028416365
Mode: Test env_steps 200 total rewards -133.11291719414294
Mode: Test env_steps 200 total rewards -127.73738552071154
30000 -197.3075049852254
Mode: Train env_steps 200 total rewards -121.78846242744476
Mode: Train env_steps 200 total rewards -131.7180840705987
Mode: Train env_steps 200 total rewards -3.245894107458298
Mode: Train env_steps 200 total rewards -129.29797964007594
Mode: Train env_steps 200 total rewards -379.41606050374685
Mode: Test env_steps 200 total rewards -121.7213050108403
Mode: Test env_steps 200 total rewards -131.86788710579276
Mode: Test env_steps 200 total rewards -264.3296286612749
Mode: Test env_steps 200 total rewards -126.13307171873748
Mode: Test env_steps 200 total rewards -269.3273641727865
Mode: Test env_steps 200 total rewards -126.06584425829351
Mode: Test env_steps 200 total rewards -138.2838618159294
Mode: Test env_steps 200 total rewards -128.50390940532088
Mode: Test env_steps 200 total rewards -255.43328048475087
Mode: Test env_steps 200 total rewards -273.4956193007529
31000 -183.51617719344796
###Markdown
Draw the learning curve
###Code
import matplotlib.pyplot as plt
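# `learning_curve` is assumed to have been populated earlier during training; it is not defined
# in this excerpt. A minimal sketch of how it could be rebuilt from the aggregate log lines above
# ("<env_steps> <mean_return>"), assuming they are available as a list of strings `log_lines`
# (a hypothetical name), would be:
# learning_curve = {"x": [], "y": []}
# for line in log_lines:
#     parts = line.split()
#     if len(parts) == 2:  # only the aggregate lines, e.g. "31000 -183.516..."
#         learning_curve["x"].append(int(parts[0]))
#         learning_curve["y"].append(float(parts[1]))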
plt.plot(learning_curve["x"], learning_curve["y"])
plt.xlabel("env steps")
plt.ylabel("return")
plt.show()
###Output
_____no_output_____
###Markdown
gridfinder: Run through the full gridfinder model from data input to final guess for Burundi. Note that the 'truth' data used for the grid here is very bad, so the accuracy results don't mean much.
###Code
import os
from pathlib import Path
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.animation as animation
import seaborn as sns
from IPython.display import display, Markdown
import numpy as np
import rasterio
import geopandas as gpd
import folium
import gridfinder as gf
from gridfinder import save_raster
###Output
_____no_output_____
###Markdown
Set folders and parameters
###Code
folder_inputs = Path('test_data')
folder_ntl_in = folder_inputs / 'ntl'
aoi_in = folder_inputs / 'gadm.gpkg'
roads_in = folder_inputs / 'roads.gpkg'
pop_in = folder_inputs / 'pop.tif'
grid_truth = folder_inputs / 'grid.gpkg'
folder_out = Path('test_output')
folder_ntl_out = folder_out / 'ntl_clipped'
raster_merged_out = folder_out / 'ntl_merged.tif'
targets_out = folder_out / 'targets.tif'
targets_clean_out = folder_out / 'targets_clean.tif'
roads_out = folder_out / 'roads.tif'
dist_out = folder_out / 'dist.tif'
guess_out = folder_out / 'guess.tif'
guess_skeletonized_out = folder_out / 'guess_skel.tif'
guess_nulled = folder_out / 'guess_nulled.tif'
guess_vec_out = folder_out / 'guess.gpkg'
animate_out = folder_out / 'animated'
percentile = 70 # percentile value to use when merging monthly NTL rasters
ntl_threshold = 0.1 # threshold when converting filtered NTL to binary (probably shouldn't change)
upsample_by = 2 # factor by which to upsample before processing roads (both dimensions are scaled by this)
cutoff = 0.0 # cutoff to apply to output dist raster, values below this are considered grid
###Output
_____no_output_____
###Markdown
Clip and merge monthly rasters
###Code
gf.clip_rasters(folder_ntl_in, folder_ntl_out, aoi_in)
raster_merged, affine = gf.merge_rasters(folder_ntl_out, percentile=percentile)
save_raster(raster_merged_out, raster_merged, affine)
print('Merged')
plt.imshow(raster_merged, vmin=0, vmax=1)
###Output
_____no_output_____
###Markdown
Create filter
###Code
ntl_filter = gf.create_filter()
X = np.fromfunction(lambda i, j: i, ntl_filter.shape)
Y = np.fromfunction(lambda i, j: j, ntl_filter.shape)
fig = plt.figure()
sns.set()
ax = fig.gca(projection='3d')
ax.plot_surface(X, Y, ntl_filter, cmap=cm.coolwarm, linewidth=0, antialiased=False)
###Output
_____no_output_____
###Markdown
Clip, filter and resample NTL
###Code
ntl_thresh, affine = gf.prepare_ntl(raster_merged_out,
aoi_in,
ntl_filter=ntl_filter,
threshold=ntl_threshold,
upsample_by=upsample_by)
save_raster(targets_out, ntl_thresh, affine)
print('Targets prepared')
plt.imshow(ntl_thresh, cmap='viridis')
###Output
_____no_output_____
###Markdown
Remove target areas with no underlying population
###Code
targets_clean = gf.drop_zero_pop(targets_out, pop_in, aoi_in)
save_raster(targets_clean_out, targets_clean, affine)
print('Removed zero pop')
plt.imshow(targets_clean, cmap='viridis')
###Output
_____no_output_____
###Markdown
Roads: assign values, clip and rasterize
###Code
roads_raster, affine = gf.prepare_roads(roads_in,
aoi_in,
targets_out)
save_raster(roads_out, roads_raster, affine, nodata=-1)
print('Costs prepared')
plt.imshow(roads_raster, cmap='viridis', vmin=0, vmax=1)
###Output
_____no_output_____
###Markdown
Get targets and costs and run algorithm
###Code
targets, costs, start, affine = gf.get_targets_costs(targets_clean_out, roads_out)
est_mem = gf.estimate_mem_use(targets, costs)
print(f'Estimated memory usage: {est_mem:.2f} GB')
dist = gf.optimise(targets, costs, start,
jupyter=True,
animate=True,
affine=affine,
animate_path=animate_out)
save_raster(dist_out, dist, affine)
plt.imshow(dist)
###Output
_____no_output_____
###Markdown
Filter dist results to grid guess
###Code
guess, affine = gf.threshold(dist_out, cutoff=cutoff)
save_raster(guess_out, guess, affine)
print('Got guess')
plt.imshow(guess, cmap='viridis')
###Output
_____no_output_____
###Markdown
Check results
###Code
true_pos, false_neg = gf.accuracy(grid_truth, guess_out, aoi_in)
print(f'Points identified as grid that are grid: {100*true_pos:.0f}%')
print(f'Actual grid that was missed: {100*false_neg:.0f}%')
###Output
_____no_output_____
###Markdown
Skeletonize
###Code
guess_skel, affine = gf.thin(guess_out)
save_raster(guess_skeletonized_out, guess_skel, affine)
print('Skeletonized')
plt.imshow(guess_skel)
###Output
_____no_output_____
###Markdown
Convert to geometry
###Code
guess_gdf = gf.raster_to_lines(guess_skeletonized_out)
guess_gdf.to_file(guess_vec_out, driver='GPKG')
print('Converted to geom')
minx, miny, maxx, maxy = list(guess_gdf.bounds.iloc[0])
bounds = ((miny, minx), (maxy, maxx))
m = folium.Map(control_scale=True)
m.fit_bounds(bounds)
folium.GeoJson(guess_gdf).add_to(m)
m
###Output
_____no_output_____
###Markdown
Now let's calculate the inverted SIFT. Image from https://eprints.soton.ac.uk/272237/1/Paper_17.pdf ![image.png](attachment:image.png)
###Code
#Patch is rotated 180 deg, because orientation detection on the inverted patch would be +180 deg.
inv_rot_patch = 255-patch[::-1,::-1]
plt.imshow(inv_rot_patch, cmap="gray")
sift_patch_inverted_and_rot = SD.describe(inv_rot_patch)
print (sift_patch_inverted_and_rot)
from copy import deepcopy
import numpy as np
#Finally, let's calculate inverted SIFT
def invert_sift_desc(sift_desc):
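# Assumption about this SD implementation: the 128-d descriptor is stored orientation-major
# (8 orientation bins x 4x4 spatial cells). Under that layout, flipping both spatial axes
# mirrors the spatial grid while leaving each orientation bin in place, which is exactly what
# inverting the patch and rotating it by 180 deg does to the descriptor (the two 180-deg
# orientation shifts cancel).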
return sift_desc.reshape(8,4,4)[:,::-1,::-1].flatten()
print (sift_patch_inverted_and_rot - invert_sift_desc(sift))
###Output
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 -1 -1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0]
###Markdown
kicht'ai: Example for rap corpus creation, model training and text generation.
###Code
import numpy as np
from sklearn.utils import shuffle
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow import TensorShape
from kichtai.genius import GeniusParser
from kichtai.corpus import RapCorpus
from kichtai.nn import rnn_seq_loss, get_rnn_seq_model, plot_history, talk_from_text
###Output
_____no_output_____
###Markdown
1. Rap corpus creation using Genius API Reference: https://dev.to/willamesoares/how-to-integrate-spotify-and-genius-api-to-easily-crawl-song-lyrics-with-python-4o62
###Code
# Read your Genius token, stored in a 'token.txt' file, and test its validity
token = open('token.txt', 'r').read()
rap_parser = GeniusParser(token)
rap_parser.test_token()
# Initialize artists dict.
list_artists = ['Gazo']
rap_parser.create_dict_artists(list_artists=list_artists)
# Search for songs of artists in 'list_artists'
rap_parser.search_for_songs(nb_page=1, per_page=1)
rap_parser.dict_artists
# Search for raw lyrics
rap_parser.search_for_lyrics()
rap_parser.dict_artists
# Create final corpus by concatenation and cleaning of lyrics
corpus = RapCorpus(rap_parser.dict_artists)
corpus.info()
# Consolidate and clean corpus
corpus.create_corpus()
corpus.clean_text()
corpus.print_text(limit=500, random_select=False)
# Plot top words in corpus
corpus.plot_dictionary(top=15)
# Plot vocabulary of the corpus
corpus.plot_vocabulary()
###Output
_____no_output_____
###Markdown
2. Train a text generation model using RNN Reference: https://www.tensorflow.org/tutorials/text/text_generation
###Code
# Random seed
random_state=0
# Parameters
len_seq = 64
embedding_dim = 8
rnn_units = 8
batch_size = 64
epochs = 1000
patience = 10
lr=1e-3
# Get text
text = corpus.corpus
# Vocab
vocab = sorted(set(text))
vocab_size = len(vocab)
# Mapping
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
# Data
X = []
Y = []
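# Build fixed-length character windows: each target sequence is the input sequence shifted by
# one character, so the network learns next-character prediction.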
for i in range(len(text)-len_seq-1):
X.append(text[i:i+len_seq])
Y.append(text[i+1:i+len_seq+1])
data = np.array([[char2idx[i] for i in x] for x in X])
targets = np.array([[char2idx[i] for i in y] for y in Y])
data, targets = shuffle(data, targets, random_state=random_state)
print(f"Data shape: {data.shape}")
# Split train/test
TRAIN_BUF = int(data.shape[0]*0.8) - (int(data.shape[0]*0.8) % batch_size)
TEST_BUF = int(data.shape[0]*0.2) - (int(data.shape[0]*0.2) % batch_size)
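# Truncate both splits to a whole number of batches; presumably needed because the model is
# built with a fixed batch size (see get_rnn_seq_model(..., batch_size)).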
data_train = data[:TRAIN_BUF]
data_validation = data[TRAIN_BUF:TRAIN_BUF+TEST_BUF]
targets_train = targets[:TRAIN_BUF]
targets_validation = targets[TRAIN_BUF:TRAIN_BUF+TEST_BUF]
# Create tf model
model = get_rnn_seq_model(vocab_size, embedding_dim, rnn_units, batch_size)
name=f'sequence_model_{len_seq}_{embedding_dim}_{rnn_units}_{batch_size}'
# Callbacks and compil
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=patience)
mc = ModelCheckpoint(f'outputs/{name}.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
optimizer = Adam(learning_rate=lr)
model.compile(optimizer=optimizer, loss=rnn_seq_loss)
# Train
history = model.fit(data_train, targets_train,
validation_data = (data_validation, targets_validation),
epochs=epochs,
batch_size=batch_size,
verbose=0,
callbacks=[es, mc])
# Plot history
plot_history(history)
###Output
_____no_output_____
###Markdown
3. Generate lyrics from initial text
###Code
# Load final model for generation
model = get_rnn_seq_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
name=f'sequence_model_{len_seq}_{embedding_dim}_{rnn_units}_{batch_size}'
model.load_weights(f'outputs/{name}.h5')
model.build(TensorShape([1, None]))
text_input = "ekip ekip"
nb_steps = 500
temperature = 1.0
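# Softmax sampling temperature: values below 1.0 make the generated text more conservative and
# repetitive, values above 1.0 make it more random.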
text_predict = talk_from_text(text_input, model, char2idx, idx2char, len_seq, nb_steps=nb_steps, temperature=temperature)
print(f"{text_input}...\n...{text_predict[len(text_input):]}")
###Output
_____no_output_____
###Markdown
Load data
###Code
import numpy as np
import time
import tensorflow as tf
from tqdm import tqdm
from tensorflow.examples.tutorials.mnist import input_data
from sklearn.datasets import fetch_mldata
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score
mnist = input_data.read_data_sets("MNIST_data/")
mnist_images = mnist.train.images
mnist_labels = mnist.train.labels
n_three, n_five = sum(mnist_labels==3), sum(mnist_labels==5)
X_all = np.vstack([
mnist_images[mnist_labels==3,:],
mnist_images[mnist_labels==5,:]
])
y_all = np.array([1]*n_three + [0]*n_five)
# make it more sparse: randomly zero out roughly 80% of the entries
X_all = X_all * (np.random.uniform(0, 1, X_all.shape) > 0.8)
print('Dataset shape: {}'.format(X_all.shape))
print('Non-zeros rate: {:.05f}'.format(np.mean(X_all != 0)))
print('Classes balance: {:.03f} / {:.03f}'.format(np.mean(y_all==0), np.mean(y_all==1)))
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, random_state=42, test_size=0.3)
###Output
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
Dataset shape: (10625, 784)
Non-zeros rate: 0.04036
Classes balance: 0.469 / 0.531
###Markdown
Baselines
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
for model in [
LogisticRegression(),
RandomForestClassifier(n_jobs=-1, n_estimators=200)
]:
model.fit(X_tr, y_tr)
predictions = model.predict(X_te)
acc = accuracy_score(y_te, predictions)
print('model: {}'.format(model.__str__()))
print('accuracy: {}'.format(acc))
print()
###Output
model: LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
accuracy: 0.8930363864491845
model: RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=-1,
oob_score=False, random_state=None, verbose=0,
warm_start=False)
accuracy: 0.8880175658720201
###Markdown
Dense example
###Code
from tffm import TFFMClassifier
for order in [2, 3]:
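# `order` is the maximum degree of feature interactions modelled by the factorization machine
# (2 = pairwise interactions, 3 = up to third-order interactions).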
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
# this will close tf.Session and free resources
model.destroy()
###Output
100%|██████████| 50/50 [00:03<00:00, 13.62epoch/s]
###Markdown
Sparse example
###Code
import scipy.sparse as sp
# only CSR format supported
X_tr_sparse = sp.csr_matrix(X_tr)
X_te_sparse = sp.csr_matrix(X_te)
order = 3
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='sparse',
seed=42
)
model.fit(X_tr_sparse, y_tr, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
model.destroy()
###Output
100%|██████████| 50/50 [00:03<00:00, 17.12epoch/s]
###Markdown
Regression example
###Code
from tffm import TFFMRegressor
from sklearn.metrics import mean_squared_error
model = TFFMRegressor(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='sparse'
)
# translate Y from {0,1} to {-10, 10}
model.fit(X_tr_sparse, y_tr*20-10, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions > 0)))
print('MSE: {}'.format(mean_squared_error(y_te*20-10, predictions)))
model.destroy()
###Output
100%|██████████| 50/50 [00:02<00:00, 19.15epoch/s]
###Markdown
n_features/time complexity
###Code
n_features = X_all.shape[1]
used_features = range(100, 1000, 100)
n_repeats = 5
elapsed_mean = []
elapsed_std = []
model_title = ''
for cur_n_feats in tqdm(used_features):
time_observation = []
for _ in range(n_repeats):
active_features = np.random.choice(range(n_features), size=cur_n_feats)
model = TFFMClassifier(
order=5,
rank=50,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=1,
batch_size=-1,
init_std=0.01,
input_type='dense'
)
model_title = model.__str__()
# manually initialize model without calling .fit()
model.core.set_num_features(cur_n_feats)
model.core.build_graph()
model.initialize_session()
start_time = time.time()
predictions = model.decision_function(X_all[:, active_features])
end_time = time.time()
model.destroy()
time_observation.append(end_time - start_time)
elapsed_mean.append(np.mean(time_observation))
elapsed_std.append(np.std(time_observation))
%pylab inline
errorbar(used_features, elapsed_mean, yerr=elapsed_std)
xlim(0, 1000)
title(model_title)
xlabel('n_features')
ylabel('test time')
grid()
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Logging example
###Code
order = 3
model = TFFMClassifier(
order=order,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=10,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse',
log_dir='./tmp/logs',
verbose=1
)
model.fit(X_tr_sparse, y_tr, show_progress=True)
predictions = model.predict(X_te_sparse)
print('[order={}] accuracy: {}'.format(order, accuracy_score(y_te, predictions)))
###Output
Initialize logs, use:
tensorboard --logdir=/Users/mikhail/std/repos/tffm/tmp/logs
###Markdown
Save/load example
###Code
model.save_state('./tmp/state.tf')
model.destroy()
model = TFFMClassifier(
order=3,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
n_epochs=10,
batch_size=-1,
init_std=0.001,
reg=0.001,
input_type='sparse',
log_dir='./tmp/logs',
verbose=1
)
# internally the model needs to allocate memory before loading previous weights,
# so num_features must be set explicitly
model.core.set_num_features(X_tr.shape[1])
model.load_state('./tmp/state.tf')
###Output
Initialize logs, use:
tensorboard --logdir=/Users/mikhail/std/repos/tffm/tmp/logs
INFO:tensorflow:Restoring parameters from ./tmp/state.tf
###Markdown
Different optimizers
###Code
for optim, title in [(tf.train.AdamOptimizer(learning_rate=0.001), 'Adam'),
(tf.train.FtrlOptimizer(0.01, l1_regularization_strength=0.01), 'FTRL')]:
acc = []
model = TFFMClassifier(
order=3,
rank=10,
optimizer=optim,
batch_size=1024,
init_std=0.001,
reg=0.1,
input_type='sparse',
)
n_epochs = 5
anchor_epochs = range(0, 200+1, n_epochs)
for _ in anchor_epochs:
# score result every 5 epochs
model.fit(X_tr_sparse, y_tr, n_epochs=n_epochs)
predictions = model.predict(X_te_sparse)
acc.append(accuracy_score(y_te, predictions))
plot(anchor_epochs, acc, label=title)
model.destroy()
xlabel('n_epochs')
ylabel('accuracy')
legend()
grid()
###Output
_____no_output_____
###Markdown
Different regularization strategies
###Code
X_all = np.vstack([
mnist_images[mnist_labels==3,:],
mnist_images[mnist_labels==5,:]
])
y_all = np.array([1]*n_three + [0]*n_five)
# make it more sparse (sparseness is about 97%)
X_all = X_all * (np.random.uniform(0, 1, X_all.shape) > 0.97)
print('Dataset shape: {}'.format(X_all.shape))
print('Non-zeros rate: {}'.format(np.mean(X_all != 0)))
print('Classes balance: {} / {}'.format(np.mean(y_all==0), np.mean(y_all==1)))
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, random_state=42, test_size=0.3)
# rebuild the sparse matrices from the new, sparser split so the loop below actually uses it
X_tr_sparse = sp.csr_matrix(X_tr)
X_te_sparse = sp.csr_matrix(X_te)
for use_reweight, title in [(False, 'no reweight reg'), (True, 'reweight reg')]:
acc = []
model = TFFMClassifier(
order=3,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
batch_size=1024,
init_std=0.001,
reg=1.0,
input_type='sparse',
reweight_reg = use_reweight
)
n_epochs = 2
anchor_epochs = range(0, 20+1, n_epochs)
for _ in anchor_epochs:
# score result every 2 epochs
model.fit(X_tr_sparse, y_tr, n_epochs=n_epochs)
predictions = model.predict(X_te_sparse)
acc.append(accuracy_score(y_te, predictions))
plot(anchor_epochs, acc, label=title)
model.destroy()
xlabel('n_epochs')
ylabel('accuracy')
legend(loc=4)
grid()
###Output
_____no_output_____
###Markdown
Weighted Loss Function. When using `TFFMClassifier`, one can set the parameter `sample_weight` in order to: 1. use a "balanced" weighting scheme, in which the weight applied to the positive class is $w_+ = n_- / n_+$; 2. provide a custom weight that is applied to every sample from the positive class; 3. provide arbitrary weights to be applied to each sample. We will demonstrate the first two approaches.
###Code
from sklearn.metrics import confusion_matrix
# generate imbalanced data:
X_imbalanced = X_all[4000:,:]
y_imbalanced = y_all[4000:]
print('Classes balance: {:.03f} / {:.03f}'.format(np.mean(y_imbalanced==0),
np.mean(y_imbalanced==1)))
print('Balanced positive weight is {:.03f}.'.format(np.mean(y_imbalanced==0)/np.mean(y_imbalanced==1)))
X_tr, X_te, y_tr, y_te = train_test_split(X_imbalanced, y_imbalanced, random_state=42, test_size=0.3)
# use default weighting
model = TFFMClassifier(
order=2,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('accuracy: {}'.format(accuracy_score(y_te, predictions)))
model.destroy()
confusion_matrix(y_te,predictions)
###Output
_____no_output_____
###Markdown
Unweighted loss shows good performance on prevalent class, but poor performance on class with smaller representation
###Code
# use balanced weighting
model = TFFMClassifier(
order=2,
sample_weight='balanced',
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('accuracy: {}'.format(accuracy_score(y_te, predictions)))
model.destroy()
confusion_matrix(y_te,predictions)
###Output
_____no_output_____
###Markdown
Performance in underrepresented class improved, at the cost of performance in prevalent class.
###Code
# use manual weighting for the positive class
model = TFFMClassifier(
order=2,
pos_class_weight=6.0,
rank=10,
optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
n_epochs=50,
batch_size=1024,
init_std=0.001,
reg=0.01,
input_type='dense',
seed=42
)
model.fit(X_tr, y_tr, show_progress=True)
predictions = model.predict(X_te)
print('accuracy: {}'.format(accuracy_score(y_te, predictions)))
model.destroy()
confusion_matrix(y_te,predictions)
###Output
_____no_output_____
###Markdown
Setup and login - Define the database backend; local JSON files are the default. - Initialize FetchJson. - Log in; make sure to have a config.ini configured.
###Code
from fetch import FetchJson
from ZFileDb import ZFileDb
# Define the database
db = ZFileDb(db_path="database/ZFileDb")
z = FetchJson(db=db)
z.login()
###Output
_____no_output_____
###Markdown
Get event result and plot - All API requests are cached into the local TinyDB database by default. - The API data is not processed. - If multiple APIs are present they are combined.
###Code
result, status = z.fetch_result(zid=2552316)
print(f"Cache or refresh: {status}")
print(f"Event ID, 'zid' is: {result['zid']}")
print(f"Top level data in JSON: {result.keys()}")
print("Top five")
for racer in result['zwift_data'][:5]:
if int(racer['pos']) <=5:
print(f"{racer['pos']}: {racer['name']} with a time of {racer['race_time'][0]}")
###Output
Cache or refresh: cache
Event ID, 'zid' is: 2552316
Top level data in JSON: dict_keys(['zid', 'timestamp', 'view_data', 'zwift_data'])
Top five
1: Seigo. Ito[TKB] with a time of 4894.206
2: Alexander Bojsen [ACR] with a time of 4896.972
3: Nicolas Rou with a time of 4897.824
4: Oscar Feldfos with a time of 4898.2
5: Anders Broberg[SZ](UMARA) with a time of 4898.256
###Markdown
Getting started with analysis - Tools for this will be added in the future. - Many columns are a list of two values with the second = 0; use splitlist(). - Many integer columns have blank "" values that may be better set to 0.
###Code
import pandas as pd
df = pd.DataFrame(result['zwift_data'])
def splitlist(df, col, drop2=True):
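# Each cell in `col` holds a two-element list (the second element is always 0, per the notes
# above); split it into two columns, keep the first, and optionally drop the '_2' helper column.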
df[[f'{col}', f'{col}_2' ]] = df[col].tolist()
if drop2:
df.drop(f'{col}_2', axis=1, inplace=True)
splitlist(df, 'watts')
splitlist(df, 'wkg')
splitlist(df, 'wkg_ftp')
df.head()
df[['watts', 'wkg', 'wkg_ftp']] = df[['watts', 'wkg', 'wkg_ftp']].astype(float)
df[['wkg', 'wkg_ftp']].plot()
###Output
_____no_output_____
###Markdown
###Code
main()
###Output
_____no_output_____
###Markdown
First, we specify several configuration hyperparameters and store them in a dictionary. Not all of them are used at the same time. For example, if we decide to use an LSTM model, the parameters that specify the TCN and RF models are not used.
###Code
target_idx = [0] # target variables to predict
B = 3 # number of ensembles
alpha = 0.1 # confidence level
quantiles = [alpha/2, # quantiles to predict
0.5,
1-(alpha/2)]
# rf only
n_trees = 20 # number of trees in each rf model
# lstm and tcn only
regression = 'quantile' # options: {'quantile', 'linear'}. If 'linear', just set one quantile
l2_lambda = 1e-4 # weight of l2 regularization in the lstm and tcn models
batch_size = 16 # size of batches using to train the lstm and tcn models
# lstm only
units = 128 # number of units in each lstm layer
n_layers = 3 # number of lstm layers in the model
# tcn only
dilations = [1,2,4,8] # dilation rate of the Conv1D layers
n_filters = 128 # filters in each Conv1D layer
kernel_size = 7 # kernel size in each ConvID layer
# Store the configuration in a dictionary
P = {'B':B, 'alpha':alpha, 'quantiles':quantiles,
'n_trees':n_trees,
'regression':regression,'l2':l2_lambda, 'batch_size':batch_size,
'units':units,'n_layers':n_layers,
'dilations':dilations, 'n_filters':n_filters, 'kernel_size':kernel_size}
###Output
_____no_output_____
###Markdown
Data loading. For this example, we will use 3 years of data on solar power production. We will use the first year for training, the second for validation, and the last year as the test set. *Note:* To use your own data, you must write a data loader which returns a single DataFrame or, as in this case, 3 DataFrames (one for training, one for validation, and one for test). You can find two examples of data loaders in [data_loaders.py](https://github.com/FilippoMB/Ensemble-Conformalized-Quantile-Regression/blob/main/data_loaders.py).
###Code
train_df, val_df, test_df = data_loaders.get_solar_data()
train_df.head()
###Output
_____no_output_____
###Markdown
Data preprocessing. The ``data_windowing()`` function transforms each DataFrame into 3-dimensional arrays of shape \[*number of samples*, *time steps*, *number of variables*\]. The input data, X, might have a different number of time steps and a different number of variables than the output data, Y. In this case, we want to predict the energy production for the next day given the measurements of the past week. Therefore, the second dimension of X is ``time_steps_in=168`` (hours in the past week) and the second dimension of Y is ``time_steps_out=24`` (hours of the next day). The input variables are the historical energy production plus 5 exogenous variables, so the last dimension of X is ``n_vars=6``. Since we want to predict the future energy production, we specify the target variable to predict: ``label_columns=['MWH']``. Note that in Y ``n_vars=1``. ``data_windowing()`` also rescales each variable to \[0,1\] and returns the scaler, which is used to invert the transformation. In addition, it splits the training data into *B* disjoint sets, used to train the ensemble model. In this case, ``B=3``. ![data_shape.drawio.png](attachment:data_shape.drawio.png)
###Code
train_data, val_x, val_y, test_x, test_y, Scaler = data_preprocessing.data_windowing(df=train_df,
val_data=val_df,
test_data=test_df,
B=3,
time_steps_in=168,
time_steps_out=24,
label_columns=['MWH'])
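# Conceptually (a simplified sketch, not the library implementation): each sample is a sliding
# window over the hourly series. For a start index t:
#   x_window = series[t : t+168, :]            # past week, all 6 input variables
#   y_window = series[t+168 : t+168+24, MWH]   # next day, target variable only
# after min-max scaling each variable to [0, 1].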
print("-- Training data --")
for i in range(len(train_data)):
print(f"Set {i} - x: {train_data[i][0].shape}, y: {train_data[i][1].shape}")
print("-- Validation data --")
print(f"x: {val_x.shape}, y: {val_y.shape}")
print("-- Test data --")
print(f"x: {test_x.shape}, y: {test_y.shape}")
# Update configuration dict
P['time_steps_in'] = test_x.shape[1]
P['n_vars'] = test_x.shape[2]
P['time_steps_out'] = test_y.shape[1]
###Output
-- Training data --
Set 0 - x: (119, 168, 6), y: (119, 24)
Set 1 - x: (119, 168, 6), y: (119, 24)
Set 2 - x: (119, 168, 6), y: (119, 24)
-- Validation data --
x: (357, 168, 6), y: (357, 24)
-- Test data --
x: (357, 168, 6), y: (357, 24)
###Markdown
Training the quantile regression models. Before looking into the conformalization of the PI, let's see how we can train different models that perform quantile regression. In the paper we considered three models: a random forest (rf), a recurrent neural network with LSTM cells, and a feedforward neural network with 1-dimensional convolutional cells (TCN). In principle, any other model performing quantile regression can be used. Each model must implement a ``fit()`` function, which is used to train the model parameters, and a ``transform()`` function used to predict new data (see the interface sketch in the next cell). The ``fit()`` function uses ``val_x`` and ``val_y`` to perform early stopping. Let's start with the **TCN** model.
###Code
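# Any quantile-regression model can be plugged into EnCQR as long as it exposes the same
# interface. A sketch with hypothetical names (not part of the library):
#   class MyQuantileModel:
#       def fit(self, train_x, train_y, val_x=None, val_y=None):
#           ...  # train the model, optionally early-stopping on the validation data
#       def transform(self, x):
#           ...  # return predicted quantiles, shape [samples, time_steps_out, n_quantiles]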
P['model_type'] = 'tcn'
# Train
model = regression_model(P)
hist = model.fit(train_data[0][0], train_data[0][1], val_x, val_y)
utils.plot_history(hist)
# Test
PI = model.transform(test_x)
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
x_lims=[0,168], scaler=Scaler, title='TCN model')
###Output
_____no_output_____
###Markdown
The function ``plot_history()`` plots how the loss, coverage, and PI length evolve during training on the train and validation sets. Note that here we trained the model only on the first subset of the training set. Next we train the **LSTM** model. To do that, we just change ``model_type`` in the hyperparameters dictionary.
###Code
P['model_type'] = 'lstm'
# Train
model = regression_model(P)
hist = model.fit(train_data[0][0], train_data[0][1], val_x, val_y)
utils.plot_history(hist)
# Test
PI = model.transform(test_x)
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
x_lims=[0,168], scaler=Scaler, title='LSTM model')
###Output
_____no_output_____
###Markdown
Finally, we train the **RF** model. As before, we change ``model_type`` in the hyperparameters dictionary. Unlike the previous two neural network models, the ``fit()`` function does not use ``val_x`` and ``val_y`` since there is no early stopping.
###Code
# Train
P['model_type'] = 'rf'
model = regression_model(P)
model.fit(train_data[0][0], train_data[0][1])
# Test
PI = model.transform(test_x)
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
x_lims=[0,168], scaler=Scaler, title='RF model')
###Output
_____no_output_____
###Markdown
EnCQR. Finally, we compute the intervals with the EnCQR method. This is done by calling the ``EnCQR()`` function, which returns two intervals: the PI computed by the ensemble of QR models, and the conformalized PI. In this example, we consider an ensemble of TCN models and show that after conformalization the coverage of the PI gets much closer to the desired confidence level.
###Code
P['model_type'] = 'tcn'
# compute the conformalized PI with EnCQR
PI, conf_PI = EnCQR(train_data, val_x, val_y, test_x, test_y, P)
# Plot original and conformalized PI
utils.plot_PIs(test_y, PI[:,:,1],
PI[:,:,0], PI[:,:,2],
conf_PI[:,:,0], conf_PI[:,:,2],
x_lims=[0,168], scaler=Scaler)
# Compute PI coverage and length before and after conformalization
print("Before conformalization:")
utils.compute_coverage_len(test_y.flatten(), PI[:,:,0].flatten(), PI[:,:,2].flatten(), verbose=True)
print("After conformalization:")
utils.compute_coverage_len(test_y.flatten(), conf_PI[:,:,0].flatten(), conf_PI[:,:,2].flatten(), verbose=True)
###Output
_____no_output_____
###Markdown
localtileserver. Learn more: https://localtileserver.banesullivan.com/
###Code
from localtileserver import examples, get_leaflet_tile_layer, TileClient
from ipyleaflet import Map
# First, create a tile server from local raster file
bahamas = TileClient('bahamas_rgb.tif')
# Create ipyleaflet tile layer from that server
bahamas_layer = get_leaflet_tile_layer(bahamas)
# Create ipyleaflet map, add layers, add controls, and display
m = Map(center=bahamas.center(), zoom=8)
m.add_layer(bahamas_layer)
m
# Create a tile server from a raster URL
oam = TileClient('https://oin-hotosm.s3.amazonaws.com/59c66c5223c8440011d7b1e4/0/7ad397c0-bba2-4f98-a08a-931ec3a6e943.tif')
# Create ipyleaflet tile layer from that server
oam_layer = get_leaflet_tile_layer(oam)
# Create ipyleaflet map, add layers, add controls, and display
m = Map(center=oam.center(), zoom=16)
m.add_layer(oam_layer)
m
###Output
_____no_output_____
###Markdown
Generating a sample spectrum (1D gaussian mixture model) with noise and outliers
###Code
xmin = 400
xmax = 500
dx = 0.1
x = np.arange(xmin,xmax,dx)
print(f"Data size: {len(x)}")
pi = np.array([0.3,0.2,0.5])
mu = np.array([430,460,490])
v = np.array([10,40,10])
print(f"Ratio : pi = {pi}")
print(f"Position : mu = {mu}")
print(f"Variance : v = {v}")
y = 0
for i in range(len(mu)):
y += pi[i] / np.sqrt(2*np.pi*v[i]) * np.exp(-0.5/v[i]*(x-mu[i])**2)
plt.plot(x,y)
plt.show()
# --- Scaling, shifting, and adding noise ---
np.random.seed(seed=100)
y *= 50
y += - 100
y += 0.5 * np.random.randn(len(x))
plt.plot(x,y)
plt.show()
# --- Adding outliers ---
y_outlier = np.zeros(len(y)-4)
y_outlier = np.insert(y_outlier,100,5)
y_outlier = np.insert(y_outlier,200,5)
y_outlier = np.insert(y_outlier,800,5)
y_outlier = np.insert(y_outlier,800,5)
y += y_outlier
plt.plot(x,y)
plt.show()
###Output
Data size: 1000
Ratio : pi = [0.3 0.2 0.5]
Position : mu = [430 460 490]
Variance : v = [10 40 10]
###Markdown
Peak fitting by Gaussians. Comparison of each method (KM, EM and VB)
###Code
#gmm = GMM(k=4,itr=50,algo='em',seed=None,fig=False,nd=1e6)
# k : # of Gaussians. It is always better to take one or two more peaks than you can see.
# itr : # of iterations
# algo="km": k-means method
# "em": EM algorithm
# "vb": variational Bayes
# seed : random seed in numpy
# fig : plot figure in each itration or not (for progress checking)
# nd : # of dummy data (used in variational Bayes algorithm)
seed = 101
gmm = GMM(k=4,itr=5,algo="km",seed=seed,fig=True).fit(y)
gmm = GMM(k=4,itr=5,algo="em",seed=seed,fig=True).fit(y)
gmm = GMM(k=4,itr=5,algo="vb",seed=seed,fig=True).fit(y)
###Output
=== KM (k-means method)===
###Markdown
Switching options along the way and displaying the final result.
###Code
gmm = GMM(k=4,itr=10,seed=101,algo="km").fit(y)
gmm.plot(x,y)
gmm.set_options(itr=100,algo="em").fit(y)
gmm.plot(x,y)
print("\n=== Final result ===\n")
yp = gmm.curve(y)
gmm.plot(x,y,yp)
###Output
=== KM (k-means method)===
###Markdown
Preprocessing. Preprocessing steps to remove outliers and reduce noise
###Code
print("Original data")
print(f"Noise: {prep.noise(y)}")
plt.plot(x,y)
plt.show()
y_prep = y.copy()
print("Pooling data (midpoint pooling)")
# The values of p-th neighbors data (2*p+1 candidates) are compared,
# and only the midpoint value is employed.
# This removes up to p-consecutive outliers.
y_prep = prep.mid_pooling(y_prep,p=3)
print(f"Noise: {prep.noise(y_prep)}")
plt.plot(x,y_prep)
plt.show()
print("Smoothing data")
# Simply the average value of data up to p-th neighbors is taken.
y_prep = prep.smoothing(y_prep,p=3)
print(f"Noise: {prep.noise(y_prep)}")
plt.plot(x,y_prep)
#plt.hlines(prep.base(y_prep),min(x),max(x))
plt.show()
print("Cutting data")
# Data below a baseline (automatically given) are trimmed to the values of the baseline.
y_prep = prep.above(y_prep)
print(f"Noise: {prep.noise(y_prep)}")
plt.plot(x,y_prep)
plt.show()
###Output
Original data
Noise: 0.6315549569925408
###Markdown
Improvement of fitting accuracy from the preprocessing steps
###Code
gmm = GMM(k=4,itr=10,seed=101,algo="km").fit(y_prep)
gmm.set_options(itr=50,algo="em").fit(y_prep)
print("Final result")
yp = gmm.curve(y_prep)
gmm.plot(x,y_prep,yp)
score = gmm.score(y,yp)
print(f"R2 score = {score:.5f}")
plt.plot(x,y,color="k",lw=0.5)
plt.plot(x,yp,color="r",lw=2)
plt.show()
###Output
=== KM (k-means method)===
=== EM (EM algorithm)===
Final result
Peak ID Position(mu) Height Ratio(pi) Variance(v)
1 418.83663 0.09196 0.04028 93.93105
2 429.93652 2.05462 0.28154 9.19276
3 458.66438 0.61437 0.21242 58.52839
4 489.78933 2.94377 0.46575 12.25517
###Markdown
Peak extraction. Take only the peak positions of Gaussians whose height is large relative to the noise (the height is proportional to ratio/sqrt(variance)).
###Code
# get parameters for original data scale.
pi,mu,v,h = gmm.params(x,y_prep)
print(f"Ratio : pi = {pi}")
print(f"Position : mu = {mu}")
print(f"Variance : v = {v}")
print(f"Height : h = {h}")
print()
# Peaks are extracted based on the height of the peak
# relative to the volume of noise in the origiral data.
print(f"Noise = {prep.noise(y)}")
peaks = prep.peak_extraction(y,mu,h)
print(f"The positions of peaks with a significant height:")
print(f"{peaks}")
###Output
Ratio : pi = [0.040281 0.28154353 0.21242479 0.46575068]
Position : mu = [418.8366326 429.9365155 458.66438235 489.78933056]
Variance : v = [93.9310549 9.19276395 58.52838639 12.25516897]
Height : h = [0.09196127 2.05462134 0.61437171 2.94376505]
Noise = 0.6315549569925408
The positions of peaks with a significant height:
[429.9365155 458.66438235 489.78933056]
###Markdown
gdf2bokeh. Import all required libraries
###Code
import geopandas as gpd
from bokeh.plotting import output_notebook
from bokeh.plotting import show
from gdf2bokeh import Gdf2Bokeh
output_notebook()
###Output
_____no_output_____
###Markdown
How to define style? Check the bokeh documentation: * [bokeh marker style options](https://docs.bokeh.org/en/latest/docs/reference/models/markers.html) to style Point features * [bokeh multi_line style options](https://docs.bokeh.org/en/latest/docs/reference/plotting.html?highlight=multi_polygonsbokeh.plotting.figure.Figure.multi_line) to style LineString and MultiLineString features * [bokeh multi_polygon style options](https://docs.bokeh.org/en/latest/docs/reference/plotting.html?highlight=multi_polygonsbokeh.plotting.figure.Figure.multi_polygons) to style Polygon and MultiPolygon features. First way: prepare input data from GeoJSON and map them
###Code
layers_to_add = [
{
# contains both Polygon and MultiPolygon features (Ugly but only for testing)
"input_gdf": gpd.GeoDataFrame.from_file("tests/fixtures/multipolygons.geojson"),
"legend": "MultiPolygons layer", # required
"fill_color": "orange", # bokeh multi_polygon style option
},
{
"input_gdf": gpd.GeoDataFrame.from_file("tests/fixtures/polygons.geojson"),
"legend": "Polygons layer", # required
"fill_color": "red", # bokeh multi_polygon style option
"line_color": "black", # bokeh multi_polygon style option
},
{
"input_gdf": gpd.GeoDataFrame.from_file("tests/fixtures/linestrings.geojson"),
"legend": "name", # we can use the attribute called 'name' containing name value (as usual on bokeh)
"color": "color", # we can use the attribute called 'color' containing name color (as usual on bokeh)
"line_width": 4 # bokeh multi_line style option
},
{
# contains both LineString and MultiLineString features (Ugly but only for testing)
"input_gdf": gpd.GeoDataFrame.from_file("tests/fixtures/multilinestrings.geojson"),
"legend": "multilinestrings layer", # required
"color": "blue", # bokeh multi_line style option
"line_width": 6 # bokeh multi_line style option
},
{
"input_gdf": gpd.GeoDataFrame.from_file("tests/fixtures/points.geojson"),
"legend": "points layer", # required
"style": "square", # required
"size": 6, # bokeh marker style option
"fill_color": "red", # bokeh marker style option
"line_color": "blue", # bokeh marker style option
},
]
###Output
_____no_output_____
###Markdown
Let's map our data
###Code
%%time
my_map = Gdf2Bokeh(
"My beautiful map", # required: map title
width=800, # optional: figure width, default 800
height=600, # optional: figure width, default 600
x_range=None, # optional: x_range, default None
y_range=None, # optional: y_range, default None
background_map_name="CARTODBPOSITRON", # optional: background map name, default: CARTODBPOSITRON
layers=layers_to_add # optional: bokeh layer to add from a list of dict contains geodataframe settings, see dict above
)
show(my_map.figure)
###Output
_____no_output_____
###Markdown
Second way
###Code
%%time
my_map = Gdf2Bokeh(
"My beautiful map v2", # required: map title
width=700, # optional: figure width, default 800
height=800, # optional: figure width, default 600
x_range=None, # optional: x_range, default None
y_range=None, # optional: y_range, default None
background_map_name="STAMEN_TERRAIN", # optional: background map name, default: CARTODBPOSITRON
)
my_map.add_points(
gpd.GeoDataFrame.from_file("tests/fixtures/points.geojson"),
legend="points layer", # required
style="cross", # optional, check list : https://docs.bokeh.org/en/latest/docs/reference/models/markers.html
size=10, # bokeh marker style option
fill_color="red", # bokeh marker style option
)
my_map.add_lines(
gpd.GeoDataFrame.from_file("tests/fixtures/multilinestrings.geojson"),
legend="multilinestrings layer", # required
color="green", # bokeh multi_line style option
line_width=6 # bokeh multi_line style option
)
my_map.add_lines(
gpd.GeoDataFrame.from_file("tests/fixtures/linestrings.geojson"),
legend="linestrings layer", # required
color="orange", # bokeh multi_line style option
line_width=4 # bokeh multi_line style option
)
my_map.add_polygons(
gpd.GeoDataFrame.from_file("tests/fixtures/polygons.geojson"),
legend="Polygons layer", # required
fill_color="red", # bokeh multi_polygon style option
line_width=5, # bokeh multi_polygon style option
line_color="yellow" # bokeh multi_polygon style option
)
my_map.add_polygons(
gpd.GeoDataFrame.from_file("tests/fixtures/multipolygons.geojson"),
legend="MultiPolygons layer", # required
fill_color="blue", # bokeh multi_polygon style option
line_color="black", # bokeh multi_polygon style option
)
show(my_map.figure)
###Output
_____no_output_____
###Markdown
Example notebook > Example notebook showing how to load and predict with the models used in the AnDi Challenge. The models are named following the convention `name_dim{dimension}_t{task}_{id}_custom.pth`. We've only had time to train the models for dimension 1 and tasks 1 and 2. The following function will load the ensemble, assuming that the pre-trained models are in a `models/` directory; change the path at your convenience.
###Code
def load_task_model(task, dim=1, model_path=Path("models/")):
"Loads a pre-trained model given a task and a dimension."
if task == 1: n_mod, act = 7, False
elif task == 2: n_mod, act = 10, True
names = [f"hydra_dim{dim}_t{task}_{i}_custom.pth" for i in range(n_mod)]
models = [load_model(name, path=model_path).cuda() for name in names]
for model in models: model.eval()
return Ensemble(models, add_act=act)
###Output
_____no_output_____
###Markdown
Our models work with dataloaders that take the raw dataset in `.txt` format and transform it into a dataframe of PyTorch tensors (this may take a while). Provide a path to the directory where the `task{task}.txt` and `ref{task}.txt` files are. I am assuming you won't be trying to train a model, so the dataloader is set up for validation, preserving the order of the data.
###Code
def get_dataloader(task, path, dim=1, bs=128):
"Provides dataloader from .txt files."
if not isinstance(path, Path): path = Path(path)
df = pd.DataFrame(columns=['dim', 'y', 'x', 'len'], dtype=object)
with open(path/f"task{task}.txt", "r") as D, open(path/f"ref{task}.txt") as Y:
trajs = csv.reader(D, delimiter=";", lineterminator="\n", quoting=csv.QUOTE_NONNUMERIC)
labels = csv.reader(Y, delimiter=";", lineterminator="\n", quoting=csv.QUOTE_NONNUMERIC)
for t, y in zip(trajs, labels):
d, x = int(t[0]), t[1:]
x = tensor(x).view(d, -1).T
label = tensor(y[1:]) if task == 3 else y[1]
df = df.append({'dim': d, 'y': label, 'x': x, 'len': len(x)}, ignore_index=True)
df = df[df['dim'] == dim]
ds = L(zip(df['x'], df['y'])) if task == 1 else L(zip(df['x'], df['y'].astype(int)))
return DataLoader(ds, bs=bs, before_batch=pad_trajectories, device=default_device())
###Output
_____no_output_____
###Markdown
In order to get the predictions, the next functions can be called.
###Code
def get_preds_truth(model, dl): return get_preds(model, dl), get_truth(dl)
def get_preds(model, dl):
"Validates model on specific task and dimension."
return torch.cat([to_detach(model(xb)) for xb, _ in dl])
def get_truth(dl):
"Retrieves labels from dataloader"
return torch.cat([to_detach(yb) for _, yb in dl])
###Output
_____no_output_____
###Markdown
Task 1 example. Here we assume that there's a directory `data/train` containing the validation data. Change the `data_path` at your convenience.
###Code
task = 1
data_path = Path("data/train")
model = load_task_model(task)
dl = get_dataloader(task, data_path)
###Output
_____no_output_____
###Markdown
The predictions are the exponents so we can compute the mean absolute error straight away.
###Code
preds, true = get_preds_truth(model, dl)
score = mae(preds, true)
print(f"MAE: {score:.4f}")
###Output
_____no_output_____
###Markdown
Task 2 example. Same as in the previous example; change the `data_path` at your convenience.
###Code
task = 2
data_path = Path("data/train")
model = load_task_model(task)
dl = get_dataloader(task, data_path)
###Output
_____no_output_____
###Markdown
In this case, the predictions are in the format required for the submission. Hence, if we want to get the actual labels we need to call `.argmax(1)` over the output.
###Code
preds, true = get_preds_truth(model, dl)
labels = preds.argmax(1).squeeze()
score = f1_score(true, labels, average='micro')
print(f"F1: {score:.4f}")
###Output
_____no_output_____
###Markdown
Setup environment
###Code
import os
import numpy as np
import pandas as pd
import json
from skimage.io import imread
from psf import compute, plotPSF
###Output
_____no_output_____
###Markdown
Setup plotting
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_context('paper', font_scale=2.0)
sns.set_style('ticks')
from ipywidgets import interactive
from ipywidgets import IntSlider
from IPython.display import display
###Output
_____no_output_____
###Markdown
Define parameters
###Code
FOVumLat = 61.0
FOVpxLat = 512.0 # 512
pxPerUmLat = FOVpxLat/FOVumLat
pxPerUmAx = 2.0 # 2.0
wavelength = 970.0
NA = 0.6
windowUm = [12, 2, 2]
options = {'FOVumLat':FOVumLat, 'FOVpxLat':FOVpxLat, 'pxPerUmLat':FOVpxLat/FOVumLat, 'pxPerUmAx':pxPerUmAx, 'wavelength':970.0, 'NA':0.6, 'windowUm':windowUm}
options['thresh'] = .05
options
###Output
_____no_output_____
###Markdown
Get PSF
###Code
im = imread('./data/images.tif', plugin='tifffile')
im
data, beads, maxima, centers, smoothed = compute(im, options)
PSF = pd.concat([x[0] for x in data])
PSF['Max'] = maxima
PSF = PSF.reset_index().drop(['index'],axis=1)
latProfile = [x[1] for x in data]
axProfile = [x[2] for x in data]
PSF
print(len(PSF))
print(PSF.mean())
print(PSF.std())
###Output
14
FWHMlat 0.951830
FWHMax 4.772319
Max 286.214286
dtype: float64
FWHMlat 0.061514
FWHMax 0.425010
Max 212.956904
dtype: float64
###Markdown
Plot max projection
###Code
plt.figure(figsize=(5,5));
plt.imshow(smoothed);
plt.plot(centers[:, 2], centers[:, 1], 'r.', ms=10);
plt.xlim([0, smoothed.shape[0]])
plt.ylim([smoothed.shape[1], 0])
plt.axis('off');
###Output
_____no_output_____
###Markdown
Inspect an individual bead
###Code
beadInd = 1
average = beads[beadInd]
simplest = lambda arg: arg
simplest(1)
plane = IntSlider(min=0, max=average.shape[0]-1, step=1, value=average.shape[0]/2)
interactive(lambda i: plt.imshow(average[i]), i=plane)
###Output
_____no_output_____
###Markdown
Plot 2D slices
###Code
plt.imshow(average.mean(axis=0));
plt.axis('off');
plt.imshow(average.mean(axis=1), aspect = pxPerUmLat/pxPerUmAx);
plt.axis('off');
plt.imshow(average.mean(axis=2), aspect = pxPerUmLat/pxPerUmAx);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Plotting
###Code
plotPSF(latProfile[beadInd][0],latProfile[beadInd][1],latProfile[beadInd][2],latProfile[beadInd][3],pxPerUmLat,PSF.Max.iloc[beadInd])
plotPSF(axProfile[beadInd][0],axProfile[beadInd][1],axProfile[beadInd][2],axProfile[beadInd][3],pxPerUmAx,PSF.Max.iloc[beadInd])
###Output
_____no_output_____
###Markdown
Tree species classification example. This notebook gives an example of using a convolutional neural network to classify tree species in the Sierra Nevada forest. First we download the NEON data and label files from our dataset stored on Zenodo.
###Code
import os
import sys
import tqdm
import argparse
from wget import download
from experiment.paths import *
# make output directory if necessary
if not os.path.exists('data'):
os.makedirs('data')
files = [ 'Labels_Trimmed_Selective.CPG',
'Labels_Trimmed_Selective.dbf',
'Labels_Trimmed_Selective.prj',
'Labels_Trimmed_Selective.sbn',
'Labels_Trimmed_Selective.sbx',
'Labels_Trimmed_Selective.shp',
'Labels_Trimmed_Selective.shp.xml',
'Labels_Trimmed_Selective.shx',
'NEON_D17_TEAK_DP1_20170627_181333_reflectance.tif',
'NEON_D17_TEAK_DP1_20170627_181333_reflectance.tif.aux.xml',
'NEON_D17_TEAK_DP1_20170627_181333_reflectance.tif.enp',
'NEON_D17_TEAK_DP1_20170627_181333_reflectance.tif.ovr',
'D17_CHM_all.tfw',
'D17_CHM_all.tif',
'D17_CHM_all.tif.aux.xml',
'D17_CHM_all.tif.ovr',
]
for f in files:
if not os.path.exists('data/%s'%f):
print('downloading %s'%f)
download('https://zenodo.org/record/3468720/files/%s?download=1'%f,'data/%s'%f)
print('')
###Output
_____no_output_____
###Markdown
Next we load and co-register our data sources, including the hyperspectral image, the canopy height model, and the tree labels. Then we build a dataset of patches and their corresponding labels and store it in an HDF5 file for easy use in Keras.
###Code
import numpy as np
import tqdm
from experiment.paths import *
import os
from canopy.vector_utils import *
from canopy.extract import *
import h5py as h5
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
# Load the metadata from the image.
with rasterio.open(image_uri) as src:
image_meta = src.meta.copy()
os.makedirs('example',exist_ok=True)
seed = 0
# Load the shapefile and transform it to the hypersectral image's CRS.
polygons, labels = load_and_transform_shapefile(labels_shp_uri,'SP',image_meta['crs'])
# Cluster polygons for use in stratified sampling
centroids = np.stack([np.mean(np.array(poly['coordinates'][0]),axis=0) for poly in polygons])
cluster_ids = KMeans(10).fit_predict(centroids)
rasterize_shapefile(polygons, cluster_ids, image_meta, 'example/clusters.tiff')
stratify = cluster_ids
# alternative: stratify by species label
# stratify = labels
# Split up polygons into train, val, test here
train_inds, test_inds = train_test_split(range(len(polygons)),test_size=0.1,random_state=seed,stratify=stratify)
# Save ids of train,val,test polygons
with open('example/' + train_ids_uri,'w') as f:
f.writelines(["%d\n"%ind for ind in train_inds])
with open('example/' + test_ids_uri,'w') as f:
f.writelines(["%d\n"%ind for ind in test_inds])
# Separate out polygons
train_polygons = [polygons[ind] for ind in train_inds]
train_labels = [labels[ind] for ind in train_inds]
test_polygons = [polygons[ind] for ind in test_inds]
test_labels = [labels[ind] for ind in test_inds]
# Rasterize the shapefile to a TIFF. Using LZW compression, the resulting file is pretty small.
train_labels_raster = rasterize_shapefile(train_polygons, train_labels, image_meta, 'example/' + train_labels_uri)
test_labels_raster = rasterize_shapefile(test_polygons, test_labels, image_meta, 'example/' + test_labels_uri)
# Extract patches and labels
patch_radius = 7
height_threshold = 5
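# patch_radius=7 gives 15x15-pixel patches around each labelled pixel; height_threshold
# presumably excludes pixels whose canopy height (from the CHM) is below 5 m (an assumption
# based on the argument names passed to extract_patches below).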
train_image_patches, train_patch_labels = extract_patches(image_uri,patch_radius,chm_uri,height_threshold,'example/' + train_labels_uri)
test_image_patches, test_patch_labels = extract_patches(image_uri,patch_radius,chm_uri,height_threshold,'example/' + test_labels_uri)
###Output
100%|██████████| 15668/15668 [05:17<00:00, 49.38it/s]
100%|██████████| 1909/1909 [00:39<00:00, 48.41it/s]
###Markdown
Now we set up and train the convolutional neural network model.
###Code
import numpy as np
import h5py as h5
from tqdm import tqdm, trange
import os
import sys
import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.optimizers import SGD, Adam
from sklearn.decomposition import PCA
from joblib import dump, load
from sklearn.utils.class_weight import compute_class_weight
from sklearn.model_selection import train_test_split
from canopy.model import PatchClassifier
from experiment.paths import *
from tensorflow.keras import backend as K
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
K.set_session(sess)
np.random.seed(0)
tf.set_random_seed(0)
out = 'example'
lr = 0.0001
epochs = 20
x_all = train_image_patches
y_all = train_patch_labels
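# 'balanced' weights each of the 8 classes inversely to its frequency:
# weight_c = n_samples / (n_classes * count(c))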
class_weights = compute_class_weight('balanced',range(8),y_all)
print('class weights: ',class_weights)
class_weight_dict = {}
for i in range(8):
class_weight_dict[i] = class_weights[i]
def estimate_pca():
x_samples = x_all[:,7,7]
pca = PCA(32,whiten=True)
pca.fit(x_samples)
return pca
"""Normalize training data"""
pca = estimate_pca()
dump(pca,out + '/pca.joblib')
x_shape = x_all.shape[1:]
x_dtype = x_all.dtype
y_shape = y_all.shape[1:]
y_dtype = y_all.dtype
x_shape = x_shape[:-1] + (pca.n_components_,)
print(x_shape, x_dtype)
print(y_shape, y_dtype)
classifier = PatchClassifier(num_classes=8)
model = classifier.get_patch_model(x_shape)
print(model.summary())
model.compile(optimizer=SGD(lr,momentum=0.9), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
def apply_pca(x):
N,H,W,C = x.shape
x = np.reshape(x,(-1,C))
x = pca.transform(x)
x = np.reshape(x,(-1,H,W,x.shape[-1]))
return x
checkpoint = ModelCheckpoint(filepath=out + '/' + weights_uri, monitor='val_acc', verbose=True, save_best_only=True, save_weights_only=True)
reducelr = ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=10, verbose=1, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0)
x_all = apply_pca(x_all)
def augment_images(x,y):
x_aug = []
y_aug = []
with tqdm(total=len(x)*8,desc='augmenting images') as pbar:
for rot in range(4):
for flip in range(2):
for patch,label in zip(x,y):
patch = np.rot90(patch,rot)
if flip:
patch = np.flip(patch,axis=0)
patch = np.flip(patch,axis=1)
x_aug.append(patch)
y_aug.append(label)
pbar.update(1)
return np.stack(x_aug,axis=0), np.stack(y_aug,axis=0)
x_all, y_all = augment_images(x_all,y_all)
train_inds, val_inds = train_test_split(range(len(x_all)),test_size=0.1,random_state=0)
x_train = np.stack([x_all[i] for i in train_inds],axis=0)
y_train = np.stack([y_all[i] for i in train_inds],axis=0)
x_val = np.stack([x_all[i] for i in val_inds],axis=0)
y_val = np.stack([y_all[i] for i in val_inds],axis=0)
batch_size = 32
model.fit( x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_val,y_val),
verbose=1,
callbacks=[checkpoint,reducelr],
class_weight=class_weight_dict)
###Output
class weights: [ 0.74829501 2.29405615 1.21758085 0.48317187 0.7970631 24.93668831
2.45540281 0.61169959]
(15, 15, 32) int16
() uint8
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_7 (InputLayer) (None, 15, 15, 32) 0
_________________________________________________________________
conv2d_16 (Conv2D) (None, 13, 13, 32) 9248
_________________________________________________________________
conv2d_17 (Conv2D) (None, 11, 11, 64) 18496
_________________________________________________________________
conv2d_18 (Conv2D) (None, 9, 9, 128) 73856
_________________________________________________________________
conv2d_19 (Conv2D) (None, 7, 7, 128) 147584
_________________________________________________________________
conv2d_20 (Conv2D) (None, 5, 5, 128) 147584
_________________________________________________________________
conv2d_21 (Conv2D) (None, 3, 3, 128) 147584
_________________________________________________________________
conv2d_22 (Conv2D) (None, 1, 1, 128) 147584
_________________________________________________________________
conv2d_23 (Conv2D) (None, 1, 1, 8) 1032
_________________________________________________________________
flatten_2 (Flatten) (None, 8) 0
=================================================================
Total params: 692,968
Trainable params: 692,968
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Now we run the trained model on the full image in tiles.
###Code
import numpy as np
import cv2
from math import floor, ceil
import tqdm
from joblib import dump, load
import rasterio
from rasterio.windows import Window
from rasterio.enums import Resampling
from rasterio.vrt import WarpedVRT
from canopy.model import PatchClassifier
from experiment.paths import *
from tensorflow.keras import backend as K
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
K.set_session(sess)
pca = load(out + '/pca.joblib')
# "no data value" for labels
label_ndv = 255
# radius of square patch (side of patch = 2*radius+1)
patch_radius = 7
# height threshold for CHM -- pixels at or below this height will be discarded
height_threshold = 5
# tile size for processing
tile_size = 128
# tile size with padding
padded_tile_size = tile_size + 2*patch_radius
# open the hyperspectral or RGB image
image = rasterio.open(image_uri)
image_meta = image.meta.copy()
image_ndv = image.meta['nodata']
image_width = image.meta['width']
image_height = image.meta['height']
image_channels = image.meta['count']
# load model
input_shape = (padded_tile_size,padded_tile_size,pca.n_components_)
tree_classifier = PatchClassifier(num_classes=8)
training_model = tree_classifier.get_patch_model(input_shape)
training_model.load_weights(out + '/' + weights_uri)
model = tree_classifier.get_convolutional_model(input_shape)
# calculate number of tiles
num_tiles_y = ceil(image_height / float(tile_size))
num_tiles_x = ceil(image_width / float(tile_size))
print('Metadata for image')
for key in image_meta.keys():
print('%s:'%key)
print(image_meta[key])
print()
# create predicted label raster
predict_meta = image_meta.copy()
predict_meta['dtype'] = 'uint8'
predict_meta['nodata'] = label_ndv
predict_meta['count'] = 1
predict = rasterio.open(out + '/' + predict_uri, 'w', compress='lzw', **predict_meta)
# open the CHM
chm = rasterio.open(chm_uri)
chm_vrt = WarpedVRT(chm, crs=image.meta['crs'], transform=image.meta['transform'], width=image.meta['width'], height=image.meta['height'],
resampling=Resampling.bilinear)
# dilation kernel
kernel = np.ones((patch_radius*2+1,patch_radius*2+1),dtype=np.uint8)
def apply_pca(x):
N,H,W,C = x.shape
x = np.reshape(x,(-1,C))
x = pca.transform(x)
x = np.reshape(x,(-1,H,W,x.shape[-1]))
return x
# go through all tiles of input image
# run convolutional model on tile
# write labels to output label raster
with tqdm.tqdm(total=num_tiles_y*num_tiles_x) as pbar:
for y in range(patch_radius,image_height-patch_radius,tile_size):
for x in range(patch_radius,image_width-patch_radius,tile_size):
pbar.update(1)
window = Window(x-patch_radius,y-patch_radius,padded_tile_size,padded_tile_size)
# get tile from chm
chm_tile = chm_vrt.read(1,window=window)
if chm_tile.shape[0] != padded_tile_size or chm_tile.shape[1] != padded_tile_size:
pad = ((0,padded_tile_size-chm_tile.shape[0]),(0,padded_tile_size-chm_tile.shape[1]))
chm_tile = np.pad(chm_tile,pad,mode='constant',constant_values=0)
chm_tile = np.expand_dims(chm_tile,axis=0)
chm_bad = chm_tile <= height_threshold
# get tile from image
image_tile = image.read(window=window)
image_pad_y = padded_tile_size-image_tile.shape[1]
image_pad_x = padded_tile_size-image_tile.shape[2]
output_window = Window(x,y,tile_size-image_pad_x,tile_size-image_pad_y)
if image_tile.shape[1] != padded_tile_size or image_tile.shape[2] != padded_tile_size:
pad = ((0,0),(0,image_pad_y),(0,image_pad_x))
image_tile = np.pad(image_tile,pad,mode='constant',constant_values=-1)
# re-order image tile to have height,width,channels
image_tile = np.transpose(image_tile,axes=[1,2,0])
# add batch axis
image_tile = np.expand_dims(image_tile,axis=0)
image_bad = np.any(image_tile < 0,axis=-1)
image_tile = image_tile.astype('float32')
image_tile = apply_pca(image_tile)
# run tile through network
predict_tile = np.argmax(model.predict(image_tile),axis=-1).astype('uint8')
# dilate mask
image_bad = cv2.dilate(image_bad.astype('uint8'),kernel).astype('bool')
# set bad pixels to NDV
predict_tile[chm_bad[:,patch_radius:-patch_radius,patch_radius:-patch_radius]] = label_ndv
predict_tile[image_bad[:,patch_radius:-patch_radius,patch_radius:-patch_radius]] = label_ndv
# undo padding
if image_pad_y > 0:
predict_tile = predict_tile[:,:-image_pad_y,:]
if image_pad_x > 0:
predict_tile = predict_tile[:,:,:-image_pad_x]
# write to file
predict.write(predict_tile,window=output_window)
image.close()
chm.close()
predict.close()
###Output
0%| | 0/774 [00:00<?, ?it/s]
###Markdown
Finally we run an analysis of the classification performance on the test set.
###Code
import numpy as np
import rasterio
from rasterio.windows import Window
from rasterio.enums import Resampling
from rasterio.vrt import WarpedVRT
from rasterio.mask import mask
from shapely.geometry import Polygon
from shapely.geometry import Point
from shapely.geometry import mapping
import tqdm
from math import floor, ceil
from experiment.paths import *
from canopy.vector_utils import *
from canopy.extract import *
import sklearn.metrics
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report, cohen_kappa_score
train_inds = np.loadtxt(out + '/' + train_ids_uri,dtype='int32')
test_inds = np.loadtxt(out + '/' + test_ids_uri,dtype='int32')
# Load the metadata from the image.
with rasterio.open(image_uri) as src:
image_meta = src.meta.copy()
# Load the shapefile and transform it to the hyperspectral image's CRS.
polygons, labels = load_and_transform_shapefile(labels_shp_uri,'SP',image_meta['crs'])
train_labels = [labels[ind] for ind in train_inds]
test_labels = [labels[ind] for ind in test_inds]
# open predicted label raster
predict = rasterio.open(out + '/' + predict_uri)
predict_raster = predict.read(1)
ndv = predict.meta['nodata']
def get_predictions(inds):
preds = []
for ind in inds:
poly = [mapping(Polygon(polygons[ind]['coordinates'][0]))]
out_image, out_transform = mask(predict, poly, crop=False)
out_image = out_image[0]
label = labels[ind]
rows, cols = np.where(out_image != ndv)
predict_labels = []
for row, col in zip(rows,cols):
predict_labels.append(predict_raster[row,col])
predict_labels = np.array(predict_labels)
hist = [np.count_nonzero(predict_labels==i) for i in range(8)]
majority_label = np.argmax(hist)
preds.append(majority_label)
return preds
def calculate_confusion_matrix(labels,preds):
mat = np.zeros((8,8),dtype='int32')
for label,pred in zip(labels,preds):
mat[label,pred] += 1
return mat
def calculate_fscore(labels,preds):
return sklearn.metrics.f1_score(labels,preds,average='micro')
test_preds = get_predictions(test_inds)
report = classification_report(test_labels, test_preds)
mat = confusion_matrix(test_labels,test_preds)
print('classification report:')
print(report)
print('confusion matrix:')
print(mat)
###Output
classification report:
precision recall f1-score support
0 0.62 0.89 0.73 9
1 0.00 0.00 0.00 1
2 0.82 1.00 0.90 9
3 1.00 0.88 0.93 16
4 0.88 1.00 0.93 7
5 0.00 0.00 0.00 2
6 0.56 0.71 0.63 7
7 1.00 0.67 0.80 21
avg / total 0.83 0.79 0.80 72
confusion matrix:
[[ 8 1 0 0 0 0 0 0]
[ 1 0 0 0 0 0 0 0]
[ 0 0 9 0 0 0 0 0]
[ 1 0 0 14 1 0 0 0]
[ 0 0 0 0 7 0 0 0]
[ 0 0 0 0 0 0 2 0]
[ 0 0 0 0 0 2 5 0]
[ 3 0 2 0 0 0 2 14]]
###Markdown
Denton proportional procedure
The Denton procedure interpolates a low-frequency series to a higher frequency using a high-frequency indicator. Let $I$ be an indicator vector with high-frequency data from t=1 to T. Let $A$ be a vector of low-frequency data of length N, where $A_n$ represents period n. The objective is to construct a high-frequency series $X$ that follows the movement of the indicator $I$ while aggregating to $A$. Assume that the length of $I$ divided by the length of $A$ equals q (q=4 corresponds to annual data with a quarterly indicator); in what follows q=4. Original source: https://www.imf.org/external/pubs/ft/qna/pdf/2017/chapter6.pdf
The minimization problem is defined as:
$$\min_{X_t} \sum_{t=2}^T (\frac{X_t}{I_t} - \frac{X_{t-1}}{I_{t-1}})^2$$
Subject to:
$$\sum_{t=4n-3}^{4n} X_t = A_n \text{ for n = 1,...,N}$$
That is, we want the quarterly vector $X$ that aggregates to the annual data while keeping the period-to-period movement of $X_t/I_t$ as smooth as possible.
It is convenient to express this problem as a quadratic minimization problem in matrix form. Let us define:
\begin{equation}D = \begin{pmatrix}-1 & 1 & 0 &\cdots & 0\\0 & -1 & 1 & \cdots & 0\\\vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & \cdots & 0\end{pmatrix}\end{equation}
D is a square (T x T) matrix with -1 on the diagonal, 1 on the superdiagonal, and a last row of zeros.
\begin{equation}J = \begin{pmatrix}1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 &\cdots & 0\\0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 &\cdots & 0\\\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 1\end{pmatrix}\end{equation}
J is an (N x T) matrix of 0s and 1s used to aggregate the data of X (in the representation above, annual data with a quarterly indicator).
Let $\tilde{I}$ be the (T x T) diagonal matrix whose diagonal holds the inverses $1/I_t$, and define $\tilde{X}= \tilde{I}X$. The problem can then be represented as:
$$\min_{X} (D\tilde{X})^T D\tilde{X}$$
Subject to:
$$JX = A \text{ and } \tilde{X}= \tilde{I}X$$
Defining $M=\tilde{I}^T D^TD\tilde{I}$, the problem can be rewritten as:
$$\min_{X} X^T M X$$
Subject to:
$$JX = A$$
The Lagrangian is given by:
$$L = X^T M X - \lambda^{T} (JX - A )$$
The first-order conditions are $(M + M^T)X - J^T \lambda = 0$ and $JX =A$. In matrix form:
\begin{equation}\begin{pmatrix}(M+M^T) & -J^T \\J & 0\end{pmatrix}\begin{pmatrix}X \\\lambda\end{pmatrix}=\begin{pmatrix}0 \\A\end{pmatrix}\end{equation}
The solution is given by:
\begin{equation}\begin{pmatrix}X \\\lambda\end{pmatrix}=\begin{pmatrix}(M+M^T) & -J^T \\J & 0\end{pmatrix}^{-1}\begin{pmatrix}0 \\A\end{pmatrix}\end{equation}
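To make the closed-form solution concrete, here is a minimal NumPy sketch that builds $D$, $J$ and $\tilde{I}$ and solves the bordered system directly. It is only an illustration of the derivation (assuming q = 4 and the sum constraint $JX = A$ exactly as written above); it is not the implementation used by the `denton` package in the next cell, whose conventions may differ (its example divides the result by 4 to replicate the IMF table).

```python
import numpy as np

def denton_proportional_sketch(I, A, q=4):
    """Illustrative solver: min (D X~)' (D X~) s.t. J X = A, with X~ = I_tilde X."""
    I = np.asarray(I, dtype=float)
    A = np.asarray(A, dtype=float)
    T, N = len(I), len(A)
    # D: -1 on the diagonal, +1 on the superdiagonal, last row all zeros
    D = np.zeros((T, T))
    for t in range(T - 1):
        D[t, t], D[t, t + 1] = -1.0, 1.0
    # J: sums each block of q high-frequency periods into one low-frequency period
    J = np.zeros((N, T))
    for n in range(N):
        J[n, n * q:(n + 1) * q] = 1.0
    I_tilde = np.diag(1.0 / I)
    M = I_tilde.T @ D.T @ D @ I_tilde
    # bordered system from the first-order conditions above
    KKT = np.block([[M + M.T, -J.T],
                    [J, np.zeros((N, N))]])
    rhs = np.concatenate([np.zeros(T), A])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:T]  # X; sol[T:] are the Lagrange multipliers
```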
###Code
import denton
import numpy as np
help(denton.proportional_method)
I = np.array([99.4,99.6,100.1,100.9,101.7,102.2,102.9,
103.8,104.9,106.3,107.3,107.8,107.9,
107.5,107.2,107.5])
A = np.array([1000, 1040, 1060.8, 1064.9])
# the annual data: each annual value is the average of its 4 quarters
B = denton.proportional_method(I, A)
#to replicate the table then divide by 4
B_imf = denton.proportional_method(I, A)/4
print(B_imf)
###Output
[[247.47624703]
[248.38181462]
[250.44888312]
[253.69305523]
[257.37943434]
[259.40742807]
[261.02059637]
[262.19254122]
[262.88387148]
[264.79745537]
[266.21069991]
[266.90797325]
[267.15445131]
[266.16323935]
[265.41990401]
[266.16240533]]
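###Markdown
As a quick sanity check on the benchmark constraint, the quarters of `B_imf` printed above sum back to the annual values in `A` (continuing from the cell above, where `B_imf` is the (16, 1) array just printed):

```python
print(B_imf.reshape(4, 4).sum(axis=1))  # approximately [1000., 1040., 1060.8, 1064.9]
```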
###Markdown
Some explanations What is the pair correlation function? In very short: the pair correlation function (aka radial distribution function) represents "the average density of points found at a distance r from a given point in a sample". See the [Wikipedia page](https://en.wikipedia.org/wiki/Radial_distribution_function) for more information (but I assume that you are already familiar with this notion if you are looking for a script that computes it). What do you mean by "corrected to take account of boundary effects"? When the set of points is of finite size (which often occurs), points near the boundaries have fewer neighbours than they would have in an infinite sample. This effect has to be corrected to properly compute the pair correlation function. This script has two methods to deal with boundaries:
- the "normalization factors method": for points that are too close to a boundary, the number of other points found at a given distance from them is corrected to account for the fact that in the bulk those points would have more neighbours (a sketch of this idea is shown further below). This is the default behavior: all the points are considered (i.e. no data is lost), at the price of a time-consuming computation.
- the "exclusion method": all the points that are too close to a boundary are simply excluded from the computation. In this case, the computation is faster, at the price of dropping some points (i.e. losing data).

A simple example:
###Code
import numpy as np
import matplotlib.pyplot as plt
from paircorrelation2d import pcf2d
###Output
_____no_output_____
###Markdown
Create a hexagonal-like array of points: This will be the set of points for which we want to compute the pair correlation function in this example
###Code
l_size=100 #the points will be placed in a square of size l_size*l_size
noise_amp=0.25 #we add some noise to mimic "real" data
col=np.arange(l_size)
points=np.zeros((l_size*l_size,2))
noise=np.random.rand(l_size*l_size,2)*noise_amp
for ii in range(l_size):
points[ii*l_size:(ii+1)*l_size,0]=col+np.ones(l_size)*(1+(-1)**ii)/4+noise[ii*l_size:(ii+1)*l_size,0]
points[ii*l_size:(ii+1)*l_size,1]=np.ones(l_size)*ii+noise[ii*l_size:(ii+1)*l_size,1]
###Output
_____no_output_____
###Markdown
Let's look at the set of points:
###Code
plt.scatter(points[:,0],points[:,1])
###Output
_____no_output_____
###Markdown
Let's look at a subset of points (to see the general pattern):
###Code
plt.scatter(points[:,0],points[:,1])
plt.axis([10,20,10,20])
###Output
_____no_output_____
###Markdown
Compute the pair correlation function (pcf) taking account of all points:
###Code
bins=np.linspace(0,5,100) #Since the distance between two particles is about 1 here,
#we choose to compute the pcf only for distances up to 5.
[g_of_r_all,r] = pcf2d(points,bins,show_timing=True)
#the "show_timing" argument let you knows how long the script takes to run
###Output
Creating boundary polygon and array of points inside took 0.063812 s
Creating all ring polygons took 0.015636 s
Computing normalization factors took 34.886039 s
Computing g(r) took 8.330064 s
Total time: 43.295551 s for 10000 points
###Markdown
Here you can see that the total computation time is about 43 s (and that the computation of the normalization factors is the most time-consuming operation).
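For intuition about what these normalization factors are: for a point close to the boundary, the ring of radius r around it lies only partly inside the sample area, so its raw neighbour count at distance r is divided by the fraction of that ring that is inside the boundary. Below is a rough sketch of that idea using shapely (an illustration only, not necessarily how `pcf2d` implements it; the helper name `ring_inside_fraction` is made up for this example):

```python
from shapely.geometry import Point, Polygon

def ring_inside_fraction(center, r_inner, r_outer, boundary):
    """Fraction of the annulus [r_inner, r_outer] around `center` lying inside `boundary`."""
    ring = Point(center).buffer(r_outer).difference(Point(center).buffer(r_inner))
    return ring.intersection(boundary).area / ring.area

square = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
print(ring_inside_fraction((0.5, 50.0), 2.0, 3.0, square))   # well below 1 near the edge
print(ring_inside_fraction((50.0, 50.0), 2.0, 3.0, square))  # ~1.0 in the bulk
```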
###Code
plt.plot(r,g_of_r_all)
plt.xlabel('r')
plt.ylabel('g(r)')
plt.title('Taking account of all points')
###Output
_____no_output_____
###Markdown
Compute the pair correlation function (pcf) excluding points too close to the boundary:
###Code
[g_of_r_exclude,r] = pcf2d(points,bins,fast_method=True,show_timing=True)
#the "show_timing" argument let you knows how long the script takes to run
###Output
Creating boundary polygon and array of points inside took 0.063288 s
Creating all ring polygons took 0.021129 s
Computing normalization factors took 0.131086 s
Computing g(r) took 5.544017 s
Total time: 5.759521 s for 8090 points
###Markdown
Here you can see that the total computation time is about 6 s (and that the computation of g(r) is faster than in the previous case because fewer points are considered).
###Code
plt.plot(r,g_of_r_exclude)
plt.xlabel('r')
plt.ylabel('g(r)')
plt.title('Excluding points too close to the boundary')
###Output
_____no_output_____
###Markdown
Wait, they look exactly the same, don't they?
###Code
plt.plot(r,g_of_r_all,label='all points')
plt.plot(r,g_of_r_exclude,label='excluding points')
plt.xlabel('r')
plt.ylabel('g(r)')
plt.legend()
###Output
_____no_output_____
###Markdown
This is because we have a set of points that is "bulky" (enough points are at a distance > 5 from the boundaries), but this is not always the case. Let's take another set of points:
###Code
subpoints=points[np.where(points[:,1]<15)[0],:] #we keep only the points with y<15
plt.scatter(subpoints[:,0],subpoints[:,1])
[g_of_r_all,r] = pcf2d(subpoints,bins)
[g_of_r_exclude,r] = pcf2d(subpoints,bins,fast_method=True)
plt.plot(r,g_of_r_all,label='all points')
plt.plot(r,g_of_r_exclude,label='excluding points')
plt.xlabel('r')
plt.ylabel('g(r)')
plt.legend()
###Output
_____no_output_____
###Markdown
Here the difference is more noticeable, because more than half of the points are close enough to a boundary (i.e. their distance to the boundary is less than 5, which is the maximal distance we want for computing g(r)). More information about boundaries: You can define the boundary you want for your set of points: The script is based on the Polygon objects from [shapely](https://shapely.readthedocs.io/en/latest/manual.htmlpolygons) so any list of coordinates that creates a valid Polygon for shapely will work here. The script then automatically excludes the points that are not inside the area of interest you have defined. For example, if one wants to use an L-shape boundary:
###Code
lshape_coord=np.array([[0,0],[100,0],[100,25],[50,25],[50,100],[0,100]])
#below is just for illustration purpose (it's not needed otherwise)
plt.plot(np.append(lshape_coord[:,0],0),np.append(lshape_coord[:,1],0))
###Output
_____no_output_____
###Markdown
You can now compute the g(r) for the points of your set that are inside this L-shape polygon (and you can verify which points are kept by using the "plot=True" option):
###Code
[g_of_r_lshape,r]=pcf2d(points,bins,coord_border=lshape_coord,plot=True)
###Output
_____no_output_____
###Markdown
You can add holes to your area of interest: This might be useful, for example, if you are looking at a set of particle coordinates in a geometry with obstacles (the positions of the obstacles are exclusion zones where no particles can ever be found, so you have to remove them from the area of interest).
###Code
square_coord=np.array([[0,0],[0,100],[100,100],[100,0]])
holes_coord=[np.array([[10,10],[10,30],[30,10]]),np.array([[60,60],[60,80],[80,80],[80,60]])] #the coordinates of the hole polygons must be gathered in a list (even when there is only one)
#below is just for illustration purpose (it's not needed otherwise)
plt.plot(np.append(square_coord[:,0],0),np.append(square_coord[:,1],0))
plt.plot(np.append(holes_coord[0][:,0],10),np.append(holes_coord[0][:,1],10),'r')
plt.plot(np.append(holes_coord[1][:,0],60),np.append(holes_coord[1][:,1],60),'r')
###Output
_____no_output_____
###Markdown
You can now compute the g(r) for the points of your set that are inside this area of interest with holes (and you can verify which points are kept by using the "plot=True" option):
###Code
[g_of_r_holes,r]=pcf2d(points,bins,coord_border=square_coord,coord_holes=holes_coord,plot=True)
###Output
_____no_output_____
###Markdown
Two things to keep in mind about boundaries:
- When no boundary is provided, the script computes the minimal convex polygon containing all the points in array_positions (the convex hull). If the set of points has a non-convex boundary, the computed g(r) will be wrong. For example, the convex hull of an L-shape set of points looks like this (polygon in blue, convex hull in red):
###Code
plt.plot(np.append(lshape_coord[:,0],0),np.append(lshape_coord[:,1],0))
plt.plot([0,100,100,50,0,0],[0,0,25,100,100,0],'r')
###Output
_____no_output_____
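###Markdown
If you want to inspect the hull that would be used for your own data, you can compute it yourself; here is a small sketch (assuming scipy is available, and reusing the `points` array created earlier in this notebook):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull

hull = ConvexHull(points)           # `points` is the (N, 2) array of coordinates
hull_xy = points[hull.vertices]     # hull vertices, in counter-clockwise order
closed = np.append(hull_xy, hull_xy[:1], axis=0)
plt.plot(closed[:, 0], closed[:, 1], 'r')
```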
###Markdown
- The list of coordinates you provide for the boundary of the area of interest has to be "valid" in the sense used by the shapely library: linking all the points in order should result in a simple polygon with no edges intersecting each other. For example:
###Code
valid_square = np.array([[0,0],[0,100],[100,100],[100,0]])
plt.plot(np.append(valid_square[:,0],0),np.append(valid_square[:,1],0))
###Output
_____no_output_____
###Markdown
This is a valid polygon.
###Code
invalid_square = np.array([[0,0],[0,100],[100,0],[100,100]])
plt.plot(np.append(invalid_square[:,0],0),np.append(invalid_square[:,1],0),'r')
###Output
_____no_output_____
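###Markdown
A quick way to check this before calling `pcf2d` is to build the shapely polygon yourself (shapely is already used by the script):

```python
from shapely.geometry import Polygon

print(Polygon(valid_square).is_valid)    # True
print(Polygon(invalid_square).is_valid)  # False: with this ordering the edges cross
```

The second list of coordinates is therefore not a valid boundary as given; reordering its points (as in `valid_square`) fixes it.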
###Markdown
Import modules
###Code
from pyvad import vad, trim, split
from librosa import load
import matplotlib.pyplot as plt
import numpy as np
import IPython.display
###Output
_____no_output_____
###Markdown
Load speech data
###Code
name = "test/voice/arctic_a0007.wav"
data, fs = load(name)
data = np.hstack((data, -data))
data *=0.95 / np.abs(data).max()
time = np.linspace(0, len(data)/fs, len(data)) # time axis
plt.plot(time, data)
plt.show()
###Output
_____no_output_____
###Markdown
Do VAD (int)
###Code
%time vact = vad(data, fs, fs_vad = 16000, hop_length = 30, vad_mode=3)
###Output
CPU times: user 166 ms, sys: 3.9 ms, total: 169 ms
Wall time: 176 ms
###Markdown
Plot result
###Code
fig, ax1 = plt.subplots()
ax1.plot(time, data, label='speech waveform')
ax1.set_xlabel("TIME [s]")
ax2=ax1.twinx()
ax2.plot(time, vact, color="r", label = 'vad')
plt.yticks([1] ,['voice'])
ax2.set_ylim([-0.01, 1.01])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
trim
###Code
%time edges = trim(data, fs, fs_vad = 16000, hop_length = 30, vad_mode=3)
###Output
CPU times: user 173 ms, sys: 6.07 ms, total: 179 ms
Wall time: 194 ms
###Markdown
Plot result
###Code
trimed = data[edges[0]:edges[1]]
time = np.linspace(0, len(trimed)/fs, len(trimed)) # time axis
fig, ax1 = plt.subplots()
ax1.plot(time, trimed, label='speech waveform')
ax1.set_xlabel("TIME [s]")
plt.show()
###Output
_____no_output_____
###Markdown
split
###Code
%time edges = split(data, fs, fs_vad = 8000, hop_length = 10, vad_mode=3)
###Output
CPU times: user 171 ms, sys: 5.65 ms, total: 177 ms
Wall time: 208 ms
###Markdown
Plot result
###Code
for i, edge in enumerate(edges):
seg = data[edge[0]:edge[1]]
time = np.linspace(0, len(seg)/fs, len(seg)) # time axis
fig, ax1 = plt.subplots()
ax1.plot(time, seg, label='speech waveform')
ax1.set_xlabel("TIME [s]")
plt.show()
###Output
_____no_output_____
###Markdown
AsyncLogDispatcher (use thread)
###Code
async_logger = logging.getLogger('Async Logger')
async_logger.setLevel(logging.INFO)
async_handler = AsyncLogDispatcher(write_record)
async_handler.setLevel(logging.INFO)
async_logger.addHandler(async_handler)
async_logger.info('Test log')
%timeit async_logger.info('Test log')
###Output
40.5 µs ± 386 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
###Markdown
SyncLogHandler
###Code
sync_logger = logging.getLogger('Sync Logger')
sync_logger.setLevel(logging.INFO)
sync_handler = SyncLogHandler()
sync_handler.setLevel(logging.INFO)
sync_logger.addHandler(sync_handler)
sync_logger.info('Test log')
%timeit sync_logger.info('Test log')
###Output
1 s ± 1.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
AsyncLogDispatcher (use celery)
###Code
from unittest import mock
from asynclog.tests.test_handler import app, write_task, has_celery
celery_logger = logging.getLogger('Celery logger')
celery_logger.setLevel(logging.INFO)
if not has_celery:
write_task.delay = mock.MagicMock()
celery_handler = AsyncLogDispatcher(write_task, use_thread=False, use_celery=True)
celery_handler.setLevel(logging.INFO)
celery_logger.addHandler(celery_handler)
celery_logger.info('Test log')
%timeit celery_logger.info('Test log')
###Output
857 µs ± 71.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
PARASCHUT notebook: we'll go through a small-scale example of the `paraschut` tools.
###Code
import os
import paraschut as psu
print(psu.config.QFile)
print(psu.config.JobDir)
###Output
example/job_queue.db
example/
###Markdown
generate a job: we'll start from the default template and update it with data relevant to our example. note that these functions may be run offline.
###Code
jobinfo = psu.get_job_template(SetID=True)
jobinfo['name'] = 'example'
jobinfo['CodeDir'] = os.path.abspath('.')
jobinfo['JobIndex'] = 0
jobinfo['script'] = 'python example/job.py {BatchID} {JobIndex}'
# jobinfo['script'] = 'example/template.sh'
# jobinfo['pyfile'] = 'example/job.py'
jobinfo
###Output
_____no_output_____
###Markdown
now let's add some random data for the job to operate on. this job will just output its mean.
###Code
from numpy.random import randint
data = randint(1, 100, (1, 10**4))
psu.generate_data(jobinfo, data)
jobinfo['data']
psu.generate_script(jobinfo)
jobinfo['script']
###Output
_____no_output_____
###Markdown
you may also try setting the 'script' field to 'example/template.sh' and generating a script; watch the script file that is written in this case. finally, let's add the job we built to the queue.
###Code
psu.add_job_to_queue(jobinfo)
###Output
_____no_output_____
###Markdown
now let's check that a new job (with JobIndex=0) was added to our queue:
###Code
psu.get_queue()
###Output
20210225224111: example
{'init': [0]}
missing jobs: {}
total jobs on server queue: 0
running/complete/total: 0/0/1
###Markdown
NOTE that the server queue job counter (appearing in the last line of the `get_queue` output) counts all currently online jobs associated with one's user (including those that are not part of the projects currently managed using `paraschut`). next, let's verify that the metadata has been properly stored:
###Code
psu.get_job_info(20210225224111, 0)
###Output
_____no_output_____
###Markdown
multiple jobs and collection: first, we'll add 3 more jobs similar to our first one.
###Code
def duplicate_job(jobinfo, i):
newjob = jobinfo.copy() # duplicating to keep BatchID and similar fields identical
newjob['script'] = 'python example/job.py {BatchID} {JobIndex}'
# newjob['script'] = 'example/template.sh'
newjob['JobIndex'] = i
data = randint(1, 100, (1, 10**4))
psu.generate_data(newjob, data)
psu.add_job_to_queue(newjob, build_script=True)
# this will also generate the script
for i in range(3):
duplicate_job(jobinfo, i+1)
###Output
_____no_output_____
###Markdown
let's verify that we indeed generated additional jobs.
###Code
psu.get_queue()
psu.get_job_info(20210225224111, 3)
###Output
20210225224111: example
{'init': [0, 1, 2, 3]}
missing jobs: {}
total jobs on server queue: 0
running/complete/total: 0/0/4
###Markdown
finally, let's add a collect job that will compute the mean of means. this job will execute only once the first 4 jobs have completed successfully.
###Code
newjob = jobinfo.copy()
newjob['priority'] = 0.5 # lower priority gets executed after higher priority jobs are done
newjob['script'] = 'python example/collect_job.py {BatchID} {JobIndex}'
# newjob['script'] = 'example/template.sh'
# newjob['pyfile'] = 'example/collect_job.py'
newjob['JobIndex'] = 4
newjob['data'] = range(4) # pointing to previous JobIndices to compute the mean of their results
psu.add_job_to_queue(newjob, build_script=True)
###Output
_____no_output_____
###Markdown
submit jobs: this is the only job control function that must run on a server. in our case LocalJobExecutor is configured to run on the local machine.
###Code
psu.submit_jobs()
###Output
submiting: python example/job.py 20210225224111 0
submiting: python example/job.py 20210225224111 1
submiting: python example/job.py 20210225224111 2
submiting: python example/job.py 20210225224111 3
max jobs: 1000
in queue: 0
submitted: 4
###Markdown
note that only the first 4 jobs were submitted and are currently running; the collect job is waiting for them to complete. monitor jobs: let's check if the jobs are indeed online and running (note the * next to jobs 0-3 in the batch, which indicates that they are running)
###Code
psu.get_queue()
###Output
20210225224111: example
{'run': ['0*', '1*', '2*', '3*'], 'init': [4]}
missing jobs: {}
total jobs on server queue: 4
running/complete/total: 4/0/5
###Markdown
this is how the output looks once the jobs have finished:
###Code
psu.get_queue()
###Output
20210225224111: example
{'complete': [0, 1, 2, 3], 'init': [4]}
missing jobs: {}
total jobs on server queue: 0
running/complete/total: 0/4/5
###Markdown
it's time to run the collect job.
###Code
psu.submit_jobs()
###Output
submiting: python example/collect_job.py 20210225224111 4
max jobs: 1000
in queue: 0
submitted: 1
###Markdown
after a short while all jobs should be in 'complete' state.
###Code
psu.get_queue()
###Output
20210225224111: example
{'complete': [0, 1, 2, 3, 4]}
missing jobs: {}
total jobs on server queue: 0
running/complete/total: 0/5/5
###Markdown
we can now check the logs created by the jobs (stdout and stderr), and their post-run metadata (which may include a PBS report summary, for example). in this case, the result was printed to stdout (see the log file) as well as stored in the 'result' field of the job metadata.
###Code
psu.print_log(20210225224111, 4, 'stdout')
psu.get_job_info(20210225224111, 4)
###Output
[[[stdout log for 20210225224111/example/job_4:]]]
50.183325
max jobs: 1000
in queue: 0
submitted: 0
###Markdown
finally, we may clear all batches that have completed all their jobs using the following functions:
###Code
psu.remove_batch_by_state('complete')
psu.get_queue()
###Output
missing jobs: {}
total jobs on server queue: 0
running/complete/total: 0/0/0
###Markdown
Example of simple `sparkhpc` usage in the Jupyter notebook Configure python for using the `spark` python libraries with `findspark`
###Code
import findspark; findspark.init()
###Output
_____no_output_____
###Markdown
Launch the standalone spark clusters using `sparkhpc`
###Code
import sparkhpc
sj = sparkhpc.sparkjob.LSFSparkJob(ncores=4)
sj.wait_to_start()
sj
sj2 = sparkhpc.sparkjob.LSFSparkJob(ncores=10)
sj2.submit()
sj.show_clusters()
###Output
_____no_output_____
###Markdown
Create a `SparkContext` and start computing
###Code
from pyspark import SparkContext
sc = SparkContext(master=sj.master_url)
sc.parallelize(range(100)).count()
###Output
_____no_output_____
###Markdown
Teardown
###Code
sj.stop()
sj2.stop()
sj.show_clusters()
###Output
_____no_output_____
###Markdown
Make a model in the form of TorchScript
###Code
model = torchvision.models.resnet18(pretrained=True)
model.eval()
example = torch.zeros(1, 3, 224, 224)
script_module = torch.jit.trace(model, example)
script_module_optimized = optimize_for_mobile(script_module)
augment_model_with_bundled_inputs(script_module_optimized, [(example,)])
torch.jit.save(script_module_optimized, "./resnet18.pt")
###Output
_____no_output_____
###Markdown
Set up profiling config and run profiler
###Code
profiling_config=DEFAULT_PROF_CONFIG
profiling_config['vulkan'] = False
profiling_config['caffe2_threadpool_android_cap'] = num_threads
profiling_config['caffe2_threadpool_force_inline'] = True
profiling_config['iter'] = 100
profiling_config['warmup'] = 30
profiling_config['use_bundled_input'] = 0
model_filename = './resnet18.pt'
raw_out = run_on_device(
model_filename,
prof_config=profiling_config,
verbose=True)
res = parse_profiler_output(raw_out, is_file=False)
# inspect result
print(json.dumps(res, indent=2))
###Output
_____no_output_____
###Markdown
CASIMAC Demo
###Code
from casimac import CASIMAClassifier, __version__
print(__version__)
###Output
1.0.0
###Markdown
Binary classification
###Code
# Create data
import numpy as np
N = 10
seed = 42
X = np.random.RandomState(seed).uniform(-10,10,N).reshape(-1,1)
y = np.zeros(X.size)
y[X[:,0]>0] = 1
# Classify
from sklearn.gaussian_process import GaussianProcessRegressor
clf = CASIMAClassifier(GaussianProcessRegressor)
clf = clf.fit(X, y)
# Predict
X_sample = np.linspace(-10,10,100).reshape(-1,1)
y_sample = clf.predict(X_sample)
p_sample = clf.predict_proba(X_sample)
d_sample = clf.decision_function(X_sample)
# Plot results
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10,5))
plt.plot(X_sample,y_sample,label="class prediction")
plt.plot(X_sample,p_sample[:,1],label="class probability prediction")
plt.scatter(X,y,c='r',label="train data")
plt.xlabel("X")
plt.ylabel("label / probability")
plt.legend()
plt.show()
plt.figure(figsize=(10,5))
plt.hlines(0,-10,10)
plt.plot(X_sample,d_sample,label="decision function")
plt.scatter(X,0*y,c=y,label="train data", cmap="cool")
plt.xlabel("X")
plt.ylabel("distance to class border")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Multi-class classification
###Code
# Create data
import numpy as np
N = 10
seed = 42
X = np.random.RandomState(seed).uniform(-10,10,N).reshape(-1,1)
y = np.zeros(X.size)
y[X[:,0]>5] = 1
y[X[:,0]<-5] = 2
# Classify
from sklearn.gaussian_process import GaussianProcessRegressor
clf = CASIMAClassifier(GaussianProcessRegressor, proba_calc_method="MC")
clf = clf.fit(X, y)
# Predict
X_sample = np.linspace(-10,10,100).reshape(-1,1)
y_sample = clf.predict(X_sample)
p_sample = clf.predict_proba(X_sample)
d_sample, idx_col_map = clf.decision_function(X_sample, return_idx_col_map=True)
# Plot results
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10,5))
plt.plot(X_sample,y_sample,label="class prediction")
plt.plot(X_sample,p_sample[:,0],label="class probability prediction: 0")
plt.plot(X_sample,p_sample[:,1],label="class probability prediction: 1")
plt.plot(X_sample,p_sample[:,2],label="class probability prediction: 2")
plt.scatter(X,y,c='r',label="train data")
plt.xlabel("X")
plt.ylabel("label / probability")
plt.legend()
plt.show()
plt.figure(figsize=(10,5))
plt.hlines(0,-10,10)
plt.plot(X_sample,d_sample[:,0],label="decision function: {}".format(idx_col_map[0]))
plt.plot(X_sample,d_sample[:,1],label="decision function: {}".format(idx_col_map[1]))
plt.plot(X_sample,d_sample[:,2],label="decision function: {}".format(idx_col_map[2]))
plt.scatter(X,0*y,c=y,label="train data", cmap="cool")
plt.xlabel("X")
plt.ylabel("distance to class border")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Gradients
###Code
# Create data
import numpy as np
N = 10
seed = 42
X = np.random.RandomState(seed).uniform(-10,10,N).reshape(-1,1)
y = np.zeros(X.size)
y[X[:,0]>0] = 1
# Classify
import GPy
class GPRegressor:
def __init__(self, kernel):
self.kernel = kernel
def fit(self, X, y):
self._models = []
for i in range(y.shape[1]):
model = GPy.models.GPRegression(X, y[:,i].reshape(-1,1), self.kernel)
model.optimize_restarts(verbose=False)
self._models.append(model)
def predict(self, X, return_std=False):
mean, var = np.empty((X.shape[0],0)), np.empty((X.shape[0],0))
for model in self._models:
mean_part, var_part = model.predict(X, full_cov=False) # var_part: only the diagonal
mean = np.append(mean,mean_part,axis=1)
var = np.append(var,var_part,axis=1)
mean, var = np.array(mean), np.array(var) # mean: [n_samples, n_outputs], var: [n_samples, n_outputs]
var = np.clip(var, 0, np.inf)
if return_std:
return mean, np.sqrt(var)
else:
return mean
def predict_grad(self, X, return_std=False):
dmean, dvar = np.empty((X.shape[0],X.shape[1],0)), np.empty((X.shape[0],X.shape[1],0))
for model in self._models:
dmean_part, dvar_part = model.predictive_gradients(X)
dmean = np.append(dmean,dmean_part,axis=2)
dvar = np.append(dvar,dvar_part[:,:,np.newaxis],axis=2)
dmean, dvar = np.array(dmean), np.array(dvar) # dmean: [n_sample, n_vars, n_output], dvar: [n_sample, n_vars, n_output]
if return_std:
_, std = self.predict(X, return_std=True)
std[std==0] = np.nan
dstd = dvar/(2*std[:,np.newaxis,:])
dstd[np.isnan(dstd)] = np.inf
return dmean, dstd
else:
return dmean
clf = CASIMAClassifier(lambda:GPRegressor(GPy.kern.RBF(input_dim=X.shape[1], variance=1, lengthscale=1)))
clf = clf.fit(X, y)
# Predict
X_sample = np.linspace(-10,10,250).reshape(-1,1)
y_sample = clf.predict(X_sample)
p_sample = clf.predict_proba(X_sample)
d_sample = clf.decision_function(X_sample)
# Prediction gradients
dp_sample = clf.predict_proba_grad(X_sample)
dd_sample = clf.decision_function_grad(X_sample)
# Gradient errors
from scipy.optimize import check_grad
dp_errors = []
for x0 in X_sample:
dp_errors.append(check_grad(lambda X:clf.predict_proba(X.reshape(1,-1))[:,1], lambda X:clf.predict_proba_grad(X.reshape(1,-1))[:,0,1], x0))
dp_errors = np.array(dp_errors)
dd_errors = []
for x0 in X_sample:
dd_errors.append(check_grad(lambda X:clf.decision_function(X.reshape(1,-1)), lambda X:clf.decision_function_grad(X.reshape(1,-1)).ravel(), x0))
dd_errors = np.array(dd_errors)
# Plot results
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10,5))
plt.plot(X_sample,y_sample,label="class prediction")
plt.plot(X_sample,p_sample[:,1],label="class probability prediction")
plt.plot(X_sample,dp_sample[:,0,1],":",label="gradient of class probability prediction")
plt.fill_between(X_sample.ravel(),dp_sample[:,0,1]-100*dp_errors,dp_sample[:,0,1]+100*dp_errors,alpha=.25,label="100 x gradient error")
plt.scatter(X,y,c='r',label="train data")
plt.xlabel("X")
plt.ylabel("label / probability")
plt.legend()
plt.show()
plt.figure(figsize=(10,5))
plt.hlines(0,-10,10)
plt.plot(X_sample,d_sample,label="decision function")
plt.plot(X_sample,dd_sample,":",label="gradient of decision function")
plt.fill_between(X_sample.ravel(),dd_sample.ravel()-100*dd_errors,dd_sample.ravel()+100*dd_errors,alpha=.25,label="100 x gradient error")
plt.scatter(X,0*y,c=y,label="train data", cmap="cool")
plt.xlabel("X")
plt.ylabel("distance to class border")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
pyiron example notebook: This is a placeholder example notebook running an atomistic LAMMPS job.
###Code
from pyiron_feal import Project
import numpy as np
pr = Project("projects/example")
pr.zerok.plot_phases_0K()
rep = 8
solid_solution = pr.create.structure.FeAl.bcc(c_Al=0.18, repeat=rep)
b2 = pr.create.structure.FeAl.b2(repeat=rep)
neighbors = 14
topology = solid_solution.get_neighbors(num_neighbors=neighbors).indices
pr.mcmd_sro.define_clustering(
reference_environments={'b2': b2.get_chemical_symbols()},
topology=topology,
threshold=neighbors-3
)
cluster = pr.mcmd_sro.cluster(env=solid_solution.get_chemical_symbols())
solid_solution[[id_ for clust in cluster.data['b2'] for id_ in clust]].plot3d()
###Output
_____no_output_____
###Markdown
Simple example with intervals
###Code
points_to_explain = pd.DataFrame({'x':[1.0, 2.0], 'y':[1.0, 2.0]})
explainer = ImpreciseShap(model=model.predict_proba, masker=X_train, eps=0.15)
result_dataframe = explainer.calculate_shapley_values(points_to_explain)
result_dataframe
###Output
_____no_output_____
###Markdown
Example with different epsilon values
###Code
from impreciseshap.visualization import get_df_for_eps
eps_arr = [1e-3, 1e-2, 5e-2, 0.1, 0.15]
example_with_eps = get_df_for_eps(model, X_train, points_to_explain, eps_arr)
display(example_with_eps)
###Output
100%|██████████| 2/2 [00:01<00:00, 1.66it/s]
100%|██████████| 2/2 [00:01<00:00, 1.58it/s]
100%|██████████| 2/2 [00:01<00:00, 1.29it/s]
100%|██████████| 2/2 [00:01<00:00, 1.30it/s]
100%|██████████| 2/2 [00:01<00:00, 1.17it/s]
###Markdown
`Proposition`: A `Proposition` is a kind of node used to build a logic computing graph; its role is to provide the value of a propositional variable (i.e. a **placeholder** or **data provider**)
###Code
from proposition import Proposition
###Output
_____no_output_____
###Markdown
After creating a `Proposition` object, you can evaluate its value by calling it directly (appending parentheses)
###Code
a = Proposition('a')
a.val = True
print(a())
###Output
True
###Markdown
`Proposition` objects can also be combined with the propositional operations
###Code
b = a.negation()
a.val = False
print(a(), b())
a = Proposition('a')
b = Proposition('b')
conj = a.conjunction(b)
disj = a.disjunction(b)
impl = a.implication(b)
twoImpl = a.twoWayImplication(b)
for i in [False, True]:
for j in [False, True]:
a.val = i
b.val = j
print('='*10)
print('Input:',a(), b())
print('-'*10)
print('conjunction:',conj())
print('disjunction:',disj())
print('implication:',impl())
print('twoWayImplication:',twoImpl())
###Output
==========
Input: False False
----------
conjunction: False
disjunction: False
implication: True
twoWayImplication: True
==========
Input: False True
----------
conjunction: False
disjunction: True
implication: True
twoWayImplication: False
==========
Input: True False
----------
conjunction: False
disjunction: True
implication: False
twoWayImplication: False
==========
Input: True True
----------
conjunction: True
disjunction: True
implication: True
twoWayImplication: True
###Markdown
`PropositionLogic`: As shown above, a computing graph can be built by combining `Proposition` objects with these operations; `PropositionLogic` simplifies this step. `PropositionLogic` accepts a propositional formula as a `String` and returns a `PropositionLogic` object whose computing graph has already been built. The formula must satisfy the following requirements:
- propositional variables must consist of lowercase letters and underscores
- whitespace between propositional variables and operators is ignored
- the operators are:
  - `!` negation
  - `&` conjunction
  - `|` disjunction
  - `->` implication
  - `` two-way implication
- parentheses are supported for changing precedence

Example:
![example_from_slides](tf_example.png)
###Code
from proposition import PropositionLogic
logic = PropositionLogic('!(p->(q&r))')
###Output
_____no_output_____
###Markdown
A `PropositionLogic` object can be called directly; its arguments are the values of all the propositional variables
###Code
logic(p=True,q=False,r=False)
logic(p=False,q=True,r=True)
###Output
_____no_output_____
###Markdown
You can call the `PropositionLogic.getTruethFunction` method to display its truth function
###Code
logic.getTruethFunction(pandas=True)
###Output
_____no_output_____
###Markdown
Generating code for a model:
###Code
import pytorch_composer
from pytorch_composer.datasets import CIFAR10
from pytorch_composer.loops import Loop
# A random sequence of neural network layers. Any positive integer should be a valid dimension argument:
sequence = [
["Conv2d", 6],
["MaxPool2d", 2],
["Linear", 16],
["Relu"],
["MaxPool2d", 2],
["Linear",43],
["RNN",12],
["MaxPool2d", 2],
["Relu"],
["Flat"],
["Linear",38],
]
dataset = pytorch_composer.datasets.CIFAR10()
model = pytorch_composer.Model(sequence, dataset)
loop = Loop(model)
training_code = pytorch_composer.Code([dataset,model,loop])
# The code can be saved in a text file with:
# training_code.save()
training_code
###Output
_____no_output_____
###Markdown
Using the generated code:
###Code
training_code()
###Output
Files already downloaded and verified
Files already downloaded and verified
[1, 2000] loss: 2.302
[1, 4000] loss: 2.174
[1, 6000] loss: 1.989
[1, 8000] loss: 1.930
[1, 10000] loss: 1.881
[1, 12000] loss: 1.843
[2, 2000] loss: 1.817
[2, 4000] loss: 1.768
[2, 6000] loss: 1.717
[2, 8000] loss: 1.690
[2, 10000] loss: 1.647
[2, 12000] loss: 1.639
Finished Training
###Markdown
The settings can be adjusted before or after the code is created.
###Code
# Reviewing the settings:
training_code.settings
# Changing a single setting:
training_code["batch_size"] = 16
# Changing multiple settings at once:
training_code.update({"lr":0.0009, "print_every":3000, 'model_name': 'Net2'})
training_code
# Using the new model:
training_code()
###Output
Files already downloaded and verified
Files already downloaded and verified
[1, 3000] loss: 2.275
[2, 3000] loss: 2.000
Finished Training
###Markdown
Step 1: Call ProphetNewsvendor.fit() in order to get the necessary newsvendor statistics from Prophet's cross-validation
###Code
tsprophet_fit = ProphetNewsvendor.fit(model=m, initial='365 days', period='365 days', horizon = '180 days')
###Output
INFO:prophet:Making 23 forecasts with cutoffs between 1993-11-08 00:00:00 and 2015-11-03 00:00:00
WARNING:prophet:Seasonality has period of 365.25 days which is larger than initial window. Consider increasing initial.
INFO:prophet:n_changepoints greater than number of observations. Using 17.
100%|██████████| 23/23 [02:05<00:00, 5.44s/it]
###Markdown
Step 2: Plot Residuals
###Code
ProphetNewsvendor.plot_residuals(tsprophet_fit[2])
###Output
_____no_output_____
###Markdown
Step 3: Make final Forecast & apply Newsvendor model
###Code
future = m.make_future_dataframe(periods=180)
forecast = m.predict(future)
forecast['newsvendor_result'] = forecast.apply(lambda row: ProphetNewsvendor.applynewsvendor(
row['yhat'],
tsprophet_fit[0],
tsprophet_fit[1],
0.75,
0.2
), axis = 1)
forecast[['newsvendor_result', 'yhat', 'yhat_upper', 'yhat_lower']].head()
###Output
_____no_output_____
###Markdown
###Code
%%capture
!git clone https://github.com/plant-ai-biophysics-lab/DeformableCNN-PlantTraits.git
import os
os.chdir('/content/DeformableCNN-PlantTraits')
%%capture
!pip install albumentations==1.1.0
!pip install agml
###Output
_____no_output_____
###Markdown
Training and Evaluation Pipeline Data and config setup Import libraries
###Code
import os
import time
import torch, torchvision
import numpy as np
import torch.nn as nn
from torch.functional import split
from torch.utils.data import DataLoader
from torch.optim import lr_scheduler
from sklearn.model_selection import train_test_split, StratifiedKFold
from torch.utils.tensorboard import SummaryWriter
from datatools import *
from engine import train_single_epoch, validate
from loss import NMSELoss
from architecture import GreenhouseMidFusionRegressor
###Output
_____no_output_____
###Markdown
Download 2021 Autonomous Greenhouse Challenge dataset
###Code
import agml
loader = agml.data.AgMLDataLoader('autonomous_greenhouse_regression', dataset_path = './')
###Output
Downloading autonomous_greenhouse_regression (size = 887.2 MB): 887226368it [00:33, 26634550.95it/s]
###Markdown
Define data and output directories
###Code
sav_dir='model_weights/'
if not os.path.exists(sav_dir):
os.mkdir(sav_dir)
# Comment these two lines and uncomment the next two if you've already cropped the images to another directory
RGB_Data_Dir = './autonomous_greenhouse_regression/images/'
Depth_Data_Dir = './autonomous_greenhouse_regression/depth_images/'
# RGB_Data_Dir='./autonomous_greenhouse_regression/cropped_images/'
# Depth_Data_Dir='./autonomous_greenhouse_regression/cropped_depth_images/'
JSON_Files_Dir = './autonomous_greenhouse_regression/annotations.json'
###Output
_____no_output_____
###Markdown
Crop the data if necessary (skip this cell if you have already cropped the images or don't need to crop)
###Code
# import matplotlib.pyplot as plt
import cv2
min_x=650
max_x=1450
min_y=200
max_y=900
cropped_img_dir='./autonomous_greenhouse_regression/cropped_images/'
cropped_depth_img_dir='./autonomous_greenhouse_regression/cropped_depth_images/'
if not os.path.exists(cropped_img_dir):
os.mkdir(cropped_img_dir)
if not os.path.exists(cropped_depth_img_dir):
os.mkdir(cropped_depth_img_dir)
for im in os.listdir(RGB_Data_Dir):
img = cv2.imread(RGB_Data_Dir+im)
crop_img = img[min_y:max_y,min_x:max_x]
cv2.imwrite(cropped_img_dir+im, crop_img)
for depth_im in os.listdir(Depth_Data_Dir):
depth_img = cv2.imread(Depth_Data_Dir+depth_im, 0)
crop_depth_img = depth_img[min_y:max_y,min_x:max_x]
cv2.imwrite(cropped_depth_img_dir+depth_im, crop_depth_img)
RGB_Data_Dir = cropped_img_dir
Depth_Data_Dir = cropped_depth_img_dir
###Output
_____no_output_____
###Markdown
Set model architecture options:
- single vs. multi input (SI- or MI-)
- single vs. multi output (-SO or -MO)
- deformable vs. standard convolutions
###Code
ConvType = 'deformable' # 'standard'
training_category = 'MIMO' #'MIMO', 'MISO', 'SIMO', 'SISO'
# Multi-input, multi-output model
if training_category == 'MIMO':
inputs = ['RGB-D']
outputs = ['ALL']
NumOutputs = None
# Multi-input, single-output model
elif training_category == 'MISO':
inputs = ['RGB-D']
outputs = ['FreshWeightShoot','DryWeightShoot','Height','Diameter','LeafArea']
NumOutputs = 1
# Single-input, multi-output model
elif training_category == 'SIMO':
inputs = ['RGB','D']
outputs = ['ALL']
NumOutputs = None
# Single-input, single-output model
elif training_category == 'SISO':
inputs = ['RGB','D']
outputs = ['FreshWeightShoot','DryWeightShoot','Height','Diameter','LeafArea']
NumOutputs = 1
###Output
_____no_output_____
###Markdown
Set other model config parameters
###Code
split_seed = 12
num_epochs = 400
###Output
_____no_output_____
###Markdown
Create PyTorch dataset, create PyTorch dataloader, and split train/val/test
###Code
# Instantiate the PyTorch dataset for the autonomous greenhouse data.
dataset = GreenhouseDataset(rgb_dir = RGB_Data_Dir,
d_dir = Depth_Data_Dir,
jsonfile_dir = JSON_Files_Dir,
transforms = get_transforms(train=False, means=[0,0,0,0],stds=[1,1,1,1]))
if NumOutputs !=1:
NumOutputs=dataset.num_outputs
# Remove last 50 images from training/validation set. These are the test set.
dataset.df= dataset.df.iloc[:-50]
# Split train and validation set. Stratify based on variety.
train_split, val_split = train_test_split(dataset.df,
test_size = 0.2,
random_state = split_seed,
stratify = dataset.df['outputs'].str['classification']) #change to None if you don't have class info
train = torch.utils.data.Subset(dataset, train_split.index.tolist())
val = torch.utils.data.Subset(dataset, val_split.index.tolist())
# Create train and validation dataloaders
train_loader = torch.utils.data.DataLoader(train, batch_size=6, num_workers=6, shuffle=True)
val_loader = torch.utils.data.DataLoader(val, batch_size=6, shuffle=False, num_workers=6)
###Output
_____no_output_____
###Markdown
Determine the mean and standard deviation of the images for normalization (only needs to be done once for a new dataset)
###Code
# this part is just to check the MEAN and STD of the dataset (don't run unless you need mu and sigma)
nimages = 0
mean = 0.
std = 0.
dataloader = torch.utils.data.DataLoader(dataset, batch_size=5, shuffle=False, num_workers=12)
dataset.input = 'RGB-D'
dataset.out = 'ALL'
for batch, _ in dataloader:
# Rearrange batch to be the shape of [B, C, W * H]
batch = batch.view(batch.size(0), batch.size(1), -1)
# Update total number of images
nimages += batch.size(0)
# Compute mean and std here
mean += batch.mean(2).sum(0)
std += batch.std(2).sum(0)
# Final step
mean /= nimages
std /= nimages
print('Mean: '+ str(mean))
print('Standard Deviation', str(std))
###Output
Mean: tensor([0.5482, 0.4620, 0.3602, 0.0127])
Standard Deviation tensor([0.1639, 0.1761, 0.2659, 0.0035])
###Markdown
Copy the output of the previous cell here to avoid having to recompute the mean and std every time
###Code
dataset.means=[0.5482, 0.4620, 0.3602, 0.0127] #these values were copied from the previous cell
dataset.stds=[0.1639, 0.1761, 0.2659, 0.0035] #copy and paste the values to avoid having
# to rerun the previous cell for every iteration
###Output
_____no_output_____
###Markdown
Define the loss function as Normalized Mean Squared Error, as required for the 2021 Autonomous Greenhouse Challenge
###Code
criterion = NMSELoss()
###Output
_____no_output_____
###Markdown
Training Define the training loop and fit the model.
###Code
# Training loop
device = torch.device('cuda')
for input in inputs:
for output in outputs:
dataset.input = input
dataset.out = output
model = GreenhouseMidFusionRegressor(input_data_type = input, num_outputs = NumOutputs, conv_type = ConvType)
model.to(device)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(params,
lr=0.0005,
betas=(0.9, 0.999),
eps=1e-08,
weight_decay = 0,
amsgrad = False) # select an optimizer for each run
best_val_loss = 9999999 # initial dummy value
current_val_loss = 0
# training_val_loss=0
writer = SummaryWriter()
start = time.time()
for epoch in range(num_epochs):
with open('run.txt', 'a') as f:
f.write('\n')
f.write('Epoch: '+ str(epoch + 1) + ', Time Elapsed: '+ str((time.time()-start)/60) + ' mins')
print('Epoch: ', str(epoch + 1), ', Time Elapsed: ', str((time.time()-start)/60), ' mins')
train_single_epoch(model, dataset, device, criterion, optimizer, writer, epoch, train_loader)
best_val_loss = validate(model, dataset, device, training_category, sav_dir, criterion, writer, epoch, val_loader, best_val_loss)
###Output
Epoch: 1 , Time Elapsed: 2.1139780680338543e-06 mins
###Markdown
Evaluation Define the test dataset
###Code
# Instantiate the PyTorch test dataset for the autonomous greenhouse data.
testset = GreenhouseDataset(rgb_dir = RGB_Data_Dir,
d_dir = Depth_Data_Dir,
jsonfile_dir = JSON_Files_Dir,
transforms = get_transforms(train=False, means=dataset.means, stds=dataset.stds))
# Grab last 50 images as test dataset
testset.df = testset.df[-50:]
# Get testset_size
testset_size = testset.df.shape[0]
# Create test dataloader
test_loader = torch.utils.data.DataLoader(testset,
batch_size = 50,
num_workers = 0,
shuffle = False)
###Output
_____no_output_____
###Markdown
Define loss functions for model evaluation
###Code
cri = NMSELoss()
mse = nn.MSELoss()
###Output
_____no_output_____
###Markdown
Run the evaluation Loop
###Code
# Evaluation loop
device=torch.device('cuda')
with torch.no_grad():
for input in inputs:
final = torch.zeros((testset_size,0))
all_targets = torch.zeros((testset_size,0))
for output in outputs:
print('Input is ', input)
testset.input = input
testset.out = output
device=torch.device('cuda')
model= GreenhouseMidFusionRegressor(input_data_type = input,
num_outputs = NumOutputs,
conv_type = ConvType)
model.to(device)
model.load_state_dict(torch.load(sav_dir + 'bestmodel' + training_category + '_' + input + '_' + output + '.pth'))
model.eval()
if output=='ALL':
ap=torch.zeros((0,5))
at=torch.zeros((0,5))
else:
ap=torch.zeros((0,1))
at=torch.zeros((0,1))
for rgbd, targets in test_loader:
rgbd = rgbd.to(device)
targets = targets.to(device)
preds = model(rgbd)
# mse_loss=mse(preds, targets)
# nmse=criterion(preds, targets)
# nmse, pred=cri(preds, targets)
ap=torch.cat((ap, preds.detach().cpu()), 0)
at=torch.cat((at, targets.detach().cpu()), 0)
if output=='ALL':
print('FW MSE: ', str(mse(ap[:,0],at[:,0]).tolist()))
print('DW MSE: ', str(mse(ap[:,1],at[:,1]).tolist()))
print('H MSE: ', str(mse(ap[:,2],at[:,2]).tolist()))
print('D MSE: ', str(mse(ap[:,3],at[:,3]).tolist()))
print('LA MSE: ', str(mse(ap[:,4],at[:,4]).tolist()))
else:
final=torch.cat((final, ap.detach().cpu()),1)
all_targets=torch.cat((all_targets, at.detach().cpu()),1)
print(output,' MSE: ', str(mse(ap,at).tolist()))
if output == 'ALL':
print('Overall NMSE: ', str(cri(ap,at).tolist()))
else:
print('Overall NMSE: ', str(cri(final,all_targets).tolist()))
###Output
Input is RGB-D
FW MSE: 16857.876953125
DW MSE: 4.854626655578613
H MSE: 3.97654390335083
D MSE: 22.738414764404297
LA MSE: 5795591.0
Overall NMSE: 1.632205843925476
###Markdown
convert 2d trajectory (ndarray) to image (ndarray)
```
input shape: batch, 2, sequence_len
output shape: batch, channel, w, h
```
###Code
import numpy as np
import functools
import holoviews as hv
hv.extension('matplotlib')
%load_ext tensorboard
from trj2img import trj2img
def vis_trj(trajectories):
plt_lst = []
for trajectory in trajectories:
plt_lst.append(hv.Curve((trajectory[0,:], trajectory[1,:])))
curve= functools.reduce(lambda x,y: x+y, plt_lst)
return hv.render(curve)
# create data
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)
trajectory = np.array([x,y])
trajectories = np.array([trajectory, trajectory, trajectory])
print('input: ', trajectories.shape) # batch, 2, seq_len
vis_trj(trajectories)
# main part
img = trj2img(trajectories, x_range=[-np.pi, np.pi], y_range=[-1, 1])
print('output: ', img.shape) # batch, c, h, w
# visualize by using tensorboard
import torch
import torchvision.utils as vutils
from torch.utils.tensorboard import SummaryWriter
img = torch.from_numpy(img)
grid_img = vutils.make_grid(img, nrow=2, normalize=True, scale_each=True, pad_value=1)
print(grid_img.shape)
writer = SummaryWriter('./log')
epoch = 0
writer.add_image('output_img', grid_img, epoch)
writer.close()
%tensorboard --logdir log
###Output
_____no_output_____
###Markdown
Single gene name
###Code
geneinfo('USP4')
###Output
_____no_output_____
###Markdown
List of names
###Code
geneinfo(['LARS2', 'XCR1'])
###Output
_____no_output_____
###Markdown
Get all protein coding genes in a (hg38) region
###Code
for gene in mg.query('q=chr2:49500000-50000000 AND type_of_gene:protein-coding', species='human', fetch_all=True):
geneinfo(gene['symbol'])
###Output
Fetching 4 gene(s) . . .
###Markdown
Plot data over gene annotation
###Code
chrom, start, end = 'chr3', 49500000, 50600000
ax = geneplot(chrom, start, end, figsize=(10, 5))
ax.plot(np.linspace(start, end, 1000), np.random.random(1000), 'o') ;
mpld3.display()
geneinfo(['HYAL3', 'IFRD2'])
###Output
_____no_output_____
###Markdown
Keyboard shortcuts:* Prettify Query: Shift-Ctrl-P (or press the prettify button above)* Run Query: Ctrl-Enter (or press the play button above)* Auto Complete: Ctrl-Space (or just start typing)
###Code
graphiql_2 = graphql.GraphiQL(
handler=graphql.FrontendHttpHandler(url='https://swapi.graph.cool'),
query=query,
variables=variables)
graphiql_2
#!pip install vaex-graphql vaex-hdf5
import vaex
df = vaex.example()
class VaexHandler(graphql.BackendHandler):
def handle(self, request):
result = df.graphql.execute(request['query'])
response = {
'data': result.data
}
if result.errors:
response['errors'] = [{'message': e.message} for e in result.errors]
return response
vaex_handler = VaexHandler(timeout=10000)
vaex_query = '''
query {
df {
min
max
count
}
}
'''
graphiql_vaex = graphql.GraphiQL(
handler=vaex_handler,
query=vaex_query,
variables=None)
graphiql_vaex
###Output
_____no_output_____
###Markdown
Bahamas RGB
###Code
# First, create a tile server from raster file
b_client = examples.get_bahamas()
# Create ipyleaflet tile layer from that server
t = get_leaflet_tile_layer(b_client)
# Create ipyleaflet map, add tile layer, and display
m = Map(center=b_client.center(), zoom=8)
m.add_layer(t)
m
###Output
_____no_output_____
###Markdown
Multiband Landsat Compare
###Code
# First, create a tile server from raster file
landsat_client = examples.get_landsat()
# Create 2 tile layers from same raster viewing different bands
l = get_leaflet_tile_layer(landsat_client, band=[7, 5, 4])
r = get_leaflet_tile_layer(landsat_client, band=[5, 3, 2])
# Make the ipyleaflet map
m = Map(center=landsat_client.center(), zoom=11)
control = SplitMapControl(left_layer=l, right_layer=r)
m.add_control(control)
m.add_control(ScaleControl(position='bottomleft'))
m.add_control(FullScreenControl())
m
###Output
_____no_output_____
###Markdown
Vertica ML Python ExampleThis notebook is an example of how to use the Vertica ML Python Library. It uses the Titanic dataset to introduce the library. The purpose is to predict the passengers' survival. InitializationLet's create a connection and load the dataset.
###Code
from vertica_ml_python.utilities import vertica_cursor
from vertica_ml_python.learn.datasets import load_titanic
cur = vertica_cursor("VerticaDSN")
titanic = load_titanic(cur)
print(titanic)
###Output
_____no_output_____
###Markdown
Data Exploration and PreparationLet's explore the data by displaying descriptive statistics of all the columns.
###Code
titanic.describe(method = "categorical")
###Output
_____no_output_____
###Markdown
The column "body" is useless as it is only the ID of the passengers. Besides, it has too much missing values. The column "home.dest" will not influence the survival as it is from where the passengers embarked and where they are going to. We can have the same conclusion with "embarked" which is the port of embarkation. The column 'ticket' which is the ticket ID will also not give us information on the survival. Let's analyze the columns "name" and "cabin to see if we can extract some information. Let's first look at the passengers 'name'.
###Code
from vertica_ml_python.learn.preprocessing import CountVectorizer
CountVectorizer("name_voc", cur).fit("titanic", ["Name"]).to_vdf()
###Output
_____no_output_____
###Markdown
It is possible to extract the passengers' title from the 'name'. Let's now look at the 'cabins'.
###Code
from vertica_ml_python.learn.preprocessing import CountVectorizer
CountVectorizer("cabin_voc", cur).fit("titanic", ["cabin"]).to_vdf()
###Output
_____no_output_____
###Markdown
We can extract the cabin position (the letter which represents the position in the boat) and look at the number of occurrences.
###Code
CountVectorizer("cabin_voc", cur).fit("titanic", ["cabin"]).to_vdf()["token"].str_slice(1, 1).groupby(
columns = ["token"], expr = ["SUM(cnt)"]).head(30)
###Output
_____no_output_____
###Markdown
The NULL values possibly represent passengers having no cabin (MNAR = missing values not at random). The same applies to the NULL values of the column "boat", which represent passengers who did not get a lifeboat. We can drop the useless columns and encode the others.
###Code
titanic.drop(["body", "home.dest", "embarked", "ticket"])
titanic["cabin"].str_slice(1, 1)["name"].str_extract(' ([A-Za-z]+)\.')["boat"].fillna(
method = "0ifnull")["cabin"].fillna("No Cabin")
###Output
795 elements were filled
948 elements were filled
###Markdown
We can notice that our assumption about the cabin is wrong, as passengers in first class must have a cabin. This column has missing values at random (MAR), and too many of them. We can drop it.
###Code
titanic["cabin"].drop()
###Output
vColumn '"cabin"' deleted from the vDataframe.
###Markdown
Let's look at descriptive statistics of the entire Virtual Dataframe.
###Code
titanic.statistics()
###Output
_____no_output_____
###Markdown
This method gives us a lot of relevant information. We can notice, for example, that the 'age' of the passengers follows more or less a normal distribution (kurtosis and skewness around 0).
###Code
x = titanic["age"].hist()
###Output
_____no_output_____
###Markdown
The column 'fare' has many outliers (the maximum, 512.33, is much greater than the 9th decile, 79.13). Most of the passengers traveled in 3rd class (median of pclass = 3), and much more... Since 'sibsp' represents the number of siblings and 'parch' the number of parents and children, it can be relevant to build a new feature 'family_size'.
###Code
titanic.eval("family_size", "parch + sibsp + 1")
###Output
The new vColumn "family_size" was added to the vDataframe.
###Markdown
Let's deal with the outliers. There are many methods to find them (Local Outlier Factor, DBSCAN, KMeans...), but we will just winsorize the 'fare' distribution, which is the column most affected by this anomaly (some passengers could legitimately have paid a very expensive fare, but outliers could destroy our model's predictions).
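For intuition, winsorizing with alpha = 0.03 clips values below the 3rd percentile and above the 97th percentile back to those percentiles. A plain pandas sketch of the idea (not the Vertica implementation, and with made-up fare values) looks like this:

```python
import pandas as pd

# Hypothetical fares, only to illustrate what winsorizing at alpha = 0.03 does.
fare = pd.Series([7.25, 8.05, 13.0, 26.55, 79.13, 151.55, 512.33])
low, high = fare.quantile(0.03), fare.quantile(0.97)
winsorized = fare.clip(lower=low, upper=high)
print(winsorized.max())  # the extreme 512.33 is pulled down to the 97th percentile
```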
###Code
titanic["fare"].fill_outliers(method = "winsorize", alpha = 0.03)
###Output
_____no_output_____
###Markdown
Let's encode the column 'sex' to be able to use it with numerical methods.
###Code
titanic["sex"].label_encode()
###Output
_____no_output_____
###Markdown
The column 'age' has too many missing values and we need to impute them. Let's impute them by the average of passengers having the same 'pclass' and the same 'sex'.
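As a point of comparison only (the next cell does this in-database through the vDataframe API), the equivalent group-wise mean imputation in plain pandas would look like the sketch below; the mini DataFrame is made up:

```python
import numpy as np
import pandas as pd

# Hypothetical mini-frame, only to illustrate group-wise mean imputation of 'age'.
df = pd.DataFrame({'pclass': [1, 1, 3, 3],
                   'sex':    [0, 0, 1, 1],
                   'age':    [38.0, np.nan, 22.0, np.nan]})
df['age'] = df['age'].fillna(df.groupby(['pclass', 'sex'])['age'].transform('mean'))
print(df)  # each NaN becomes the mean age of its (pclass, sex) group
```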
###Code
titanic["age"].fillna(method = "mean", by = ["pclass", "sex"])
###Output
237 elements were filled
###Markdown
We can draw the correlation matrix to see different information we could get.
###Code
titanic.corr(method = "spearman")
###Output
_____no_output_____
###Markdown
The fare is highly correlated to the family size. This is expected: the bigger the family, the more tickets they have to buy (and so the higher the total fare). The survival is highly correlated to the 'boat'. With a linear model we would never be able to predict the survival of a passenger having no lifeboat. To build a real predictive model, we must split the study into 2 use cases: passengers having no lifeboat, and passengers having a lifeboat. We did a lot of operations to clean this table and nothing was saved in the DB! We can look at the Virtual Dataframe relation to be sure.
###Code
titanic.current_relation()
###Output
_____no_output_____
###Markdown
Let's see what happens when we aggregate and turn on the SQL generation.
###Code
titanic.sql_on_off().avg()
###Output
_____no_output_____
###Markdown
VERTICA ML Python will do SQL generation during the entire process and keep in mind all the users modifications.
###Code
titanic.sql_on_off().info()
###Output
The vDataframe was modified many times:
* {Thu Nov 28 15:42:44 2019} [Drop]: vColumn '"body"' was deleted from the vDataframe.
* {Thu Nov 28 15:42:44 2019} [Drop]: vColumn '"home.dest"' was deleted from the vDataframe.
* {Thu Nov 28 15:42:44 2019} [Drop]: vColumn '"embarked"' was deleted from the vDataframe.
* {Thu Nov 28 15:42:44 2019} [Drop]: vColumn '"ticket"' was deleted from the vDataframe.
* {Thu Nov 28 15:42:47 2019} [SUBSTR(, 1, 1)]: The vColumn 'cabin' was transformed with the func 'x -> SUBSTR(x, 1, 1)'.
* {Thu Nov 28 15:42:47 2019} [REGEXP_SUBSTR(, ' ([A-Za-z]+)\.')]: The vColumn 'name' was transformed with the func 'x -> REGEXP_SUBSTR(x, ' ([A-Za-z]+)\.')'.
* {Thu Nov 28 15:42:47 2019} [Fillna]: 795 missing values of the vColumn '"boat"' were filled.
* {Thu Nov 28 15:42:47 2019} [Fillna]: 948 missing values of the vColumn '"cabin"' were filled.
* {Thu Nov 28 15:42:48 2019} [Drop]: vColumn '"cabin"' was deleted from the vDataframe.
* {Thu Nov 28 15:42:58 2019} [Eval]: A new vColumn '"family_size"' was added to the vDataframe.
* {Thu Nov 28 15:43:00 2019} [(CASE WHEN < 7.05 THEN 7.05 WHEN > 166.725531999998 THEN 166.725531999998 ELSE END)]: The vColumn 'fare' was transformed with the func 'x -> (CASE WHEN x < 7.05 THEN 7.05 WHEN x > 166.725531999998 THEN 166.725531999998 ELSE x END)'.
* {Thu Nov 28 15:43:02 2019} [Label Encoding]: Label Encoding was applied to the vColumn '"sex"' using the following mapping:
female => 0 male => 1
* {Thu Nov 28 15:43:04 2019} [Fillna]: 237 missing values of the vColumn '"age"' were filled.
###Markdown
You already love the Virtual Dataframe, don't you? If you want to share the object with a member of the team, you can use the following method.
###Code
x = titanic.to_vdf("titanic")
###Output
_____no_output_____
###Markdown
We created a .vdf file which can be read with the 'read_vdf' function:
###Code
from vertica_ml_python.utilities import read_vdf
titanic2 = read_vdf("titanic.vdf", cur)
print(titanic2)
###Output
_____no_output_____
###Markdown
Let's now save the vDataframe in the Database to fulfill the next step: Data Modelling.
###Code
from vertica_ml_python.utilities import drop_view
drop_view("titanic_boat", cur)
drop_view("titanic_not_boat", cur)
x = titanic.save().filter("boat = 1").to_db("titanic_boat").load().filter("boat = 0").to_db("titanic_not_boat")
###Output
The view titanic_boat was successfully dropped.
The view titanic_not_boat was successfully dropped.
795 elements were filtered
439 elements were filtered
###Markdown
Machine Learning Passengers with a lifeboat First let's look at the number of survivors in this dataset.
###Code
from vertica_ml_python import vDataframe
titanic_boat = vDataframe("titanic_boat", cur)
titanic_boat["survived"].describe()
###Output
_____no_output_____
###Markdown
We only have 9 deaths. Let's try to understand why these passengers died.
###Code
titanic_boat.filter("survived = 0").head(10)
###Output
430 elements were filtered
###Markdown
These passengers had no apparent reason to die, except the ones in third class. Building a model for this part of the data is useless. Passengers without a lifeboat Let's now look at passengers without a lifeboat.
###Code
from vertica_ml_python import vDataframe
titanic_boat = vDataframe("titanic_not_boat", cur)
titanic_boat["survived"].describe()
###Output
_____no_output_____
###Markdown
Only 20 survived. Let's see why.
###Code
titanic_boat.filter("survived = 1").head(20)
###Output
775 elements were filtered
###Markdown
They are mostly women. The famous quotation "Women and children first" thus holds. Let's build a model to get more insights. As predictors, we have one categorical column, and several of the predictors are correlated. It is preferable to work with a non-linear classifier which can handle that; Random Forest seems perfect for this study. Let's evaluate it with a cross-validation.
###Code
from vertica_ml_python.learn.ensemble import RandomForestClassifier
from vertica_ml_python.learn.model_selection import cross_validate
from vertica_ml_python.utilities import drop_model
predictors = titanic.get_columns()
predictors.remove('"survived"')
response = "survived"
relation = "titanic_not_boat"
drop_model("rf_titanic", cur)
model = RandomForestClassifier("rf_titanic", cur, n_estimators = 40, max_depth = 4)
cross_validate(model, relation, predictors, response)
###Output
The model rf_titanic was successfully dropped.
###Markdown
As the dataset is unbalanced, the AUC is a good way to evaluate it. The model is very good, with an average AUC greater than 0.9! We can now build a model with the entire dataset.
###Code
model.fit(relation, predictors, response)
###Output
_____no_output_____
###Markdown
Let's look at the features importance.
###Code
model.features_importance()
###Output
_____no_output_____
###Markdown
Setup environment
###Code
import os
import numpy as np
import pandas as pd
import json
from skimage.io import imread
# Notebook auto reloads code. (Ref: http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython)
%load_ext autoreload
%autoreload 2
from psf import compute, plotPSF
###Output
_____no_output_____
###Markdown
Setup plotting
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_context('paper', font_scale=2.0)
sns.set_style('ticks')
###Output
_____no_output_____
###Markdown
Define parameters
###Code
pxPerUmLat = 1.0/0.1383 # Inverse of pixel size, assumed to be the same between x and y
pxPerUmAx = 1.0/0.1028
wavelength = 570
NA = 0.7
windowUm = [4,2,2]
options = {'pxPerUmLat':pxPerUmLat, 'pxPerUmAx':pxPerUmAx, 'wavelength':wavelength, 'NA':NA, 'windowUm':windowUm}
options['thresh'] = .01
options
###Output
_____no_output_____
###Markdown
Get PSF
###Code
im = imread('E:\\Richard Already Backed up\\coverslip_align_20210712\\tiff_stacks\\without5meter_run1_HR\\21-07-12 193351_skewed-48_dsf1_allsecs\\crop500.tif', plugin='tifffile')
data, beads, maxima, centers, smoothed = compute(im, options)
PSF = pd.concat([x[0] for x in data])
PSF['Max'] = maxima
PSF = PSF.reset_index().drop(['index'],axis=1)
X_Profile = [x[1] for x in data]
Y_Profile = [x[2] for x in data]
Z_Profile = [x[3] for x in data]
PSF
print('Detected beads:', len(PSF))
print('\nMean values:')
print(PSF.mean())
print('\nStandard deviation:')
print(PSF.std())
###Output
Detected beads: 26
Mean values:
FWHM_X 0.551321
FWHM_Y 0.467328
FWHMax 1.436298
Max 11008.846154
dtype: float64
Standard deviation:
FWHM_X 0.020654
FWHM_Y 0.013894
FWHMax 0.052651
Max 3682.393653
dtype: float64
###Markdown
Plot max projection
###Code
plt.figure(figsize=(5,5));
plt.imshow(smoothed);
plt.plot(centers[:, 2], centers[:, 1], 'r.', ms=10);
plt.xlim([0, smoothed.shape[0]])
plt.ylim([smoothed.shape[1], 0])
plt.axis('off');
###Output
_____no_output_____
###Markdown
Plot 2D slices
###Code
beadInd = 2
average = beads[beadInd]
plt.imshow(average.mean(axis=0));
plt.axis('off');
plt.imshow(average.mean(axis=1), aspect = pxPerUmLat/pxPerUmAx);
plt.axis('off');
plt.imshow(average.mean(axis=2), aspect = pxPerUmLat/pxPerUmAx);
plt.axis('off');
###Output
_____no_output_____
###Markdown
Plotting
###Code
plotPSF(X_Profile[beadInd][0],X_Profile[beadInd][1],X_Profile[beadInd][2],X_Profile[beadInd][3],pxPerUmLat,PSF.Max.iloc[beadInd])
plt.savefig('E:\\Richard_GoogleDrive\\Richard_Yan_Beth\\2021\\coverslip_align_20210712\\x_profile.eps', format='eps')
plotPSF(Y_Profile[beadInd][0],Y_Profile[beadInd][1],Y_Profile[beadInd][2],Y_Profile[beadInd][3],pxPerUmLat,PSF.Max.iloc[beadInd])
plt.savefig('E:\\Richard_GoogleDrive\\Richard_Yan_Beth\\2021\\coverslip_align_20210712\\y_profile.eps', format='eps')
plotPSF(Z_Profile[beadInd][0],Z_Profile[beadInd][1],Z_Profile[beadInd][2],Z_Profile[beadInd][3],pxPerUmAx,PSF.Max.iloc[beadInd])
plt.savefig('E:\\Richard_GoogleDrive\\Richard_Yan_Beth\\2021\\coverslip_align_20210712\\z_profile.eps', format='eps')
###Output
_____no_output_____
###Markdown
Requirement
Python 3.7, numpy>=1.17.4, scipy>=1.3.2
cython>=0.29.13 (Not required but highly recommended)
Run the following code in bash/terminal to compile (Not required but highly recommended).
```bash
# The command below is not required but strongly recommended, as it will compile the cython code to run faster
python setup.py build_ext --inplace
```
Spectral entropy
To calculate spectral entropy, the spectrum needs to be centroided first. When you are focusing on the fragment ions'
information, the precursor ion may need to be removed from the spectrum before calculating spectral entropy.
Calculating spectral entropy for a **centroided** spectrum with Python is very simple (just one line with the scipy package).
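For reference, `scipy.stats.entropy` applied to the intensity column computes the Shannon entropy of the normalized intensities (natural logarithm, with the normalization done automatically):

$$S = -\sum_i p_i \ln p_i, \qquad p_i = \frac{I_i}{\sum_j I_j},$$

where $I_i$ is the intensity of peak $i$.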
###Code
import numpy as np
import scipy.stats
spectrum = np.array([[41.04, 37.16], [69.07, 66.83], [86.1, 999.0]], dtype=np.float32)
entropy = scipy.stats.entropy(spectrum[:, 1])
print("Spectral entropy is {}.".format(entropy))
###Output
Spectral entropy is 0.3737888038158417.
###Markdown
For a **profile** spectrum which hasn't been centroided, you can use ```clean_spectrum``` to centroid the spectrum.
For example:
###Code
import numpy as np
import scipy.stats
import spectral_entropy
spectrum = np.array([[69.071, 7.917962], [86.066, 1.021589], [86.0969, 100.0]], dtype=np.float32)
spectrum = spectral_entropy.clean_spectrum(spectrum)
entropy = scipy.stats.entropy(spectrum[:, 1])
print("Spectral entropy is {}.".format(entropy))
###Output
Spectral entropy is 0.2605222463607788.
###Markdown
We provide a function ```clean_spectrum``` to help you remove the precursor ion, centroid the spectrum and remove noise ions.
For example:
###Code
import numpy as np
import spectral_entropy
spectrum = np.array([[41.04, 0.3716], [69.071, 7.917962], [69.071, 100.], [86.0969, 66.83]], dtype=np.float32)
clean_spectrum = spectral_entropy.clean_spectrum(spectrum,
max_mz=85,
noise_removal=0.01,
ms2_da=0.05)
print("Clean spectrum will be:{}".format(clean_spectrum))
###Output
Clean spectrum will be:[[69.071 1. ]]
###Markdown
Entropy similarity
Before calculating entropy similarity, the spectrum needs to be centroided first. Removing the noise ions is highly recommended.
Also, based on our tests on the NIST20 and MassBank.us databases, removing ions with m/z higher than the precursor ion's m/z - 1.6
will greatly improve the spectral identification performance.
We provide the ```calculate_entropy_similarity``` function to calculate the entropy similarity of two spectra.
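As a sketch of the precursor-removal advice above (the precursor m/z used here is made up), the ```max_mz``` argument of ```clean_spectrum``` can be set to the precursor m/z minus 1.6 before comparing spectra:

```python
import numpy as np
import spectral_entropy

precursor_mz = 87.1  # hypothetical precursor m/z, for illustration only
spec_query = np.array([[69.071, 7.917962], [86.066, 1.021589], [86.0969, 100.0]], dtype=np.float32)
# Remove ions above precursor m/z - 1.6, centroid, and drop low-intensity noise.
spec_query = spectral_entropy.clean_spectrum(spec_query,
                                             max_mz=precursor_mz - 1.6,
                                             noise_removal=0.01,
                                             ms2_da=0.05)
print("Cleaned query spectrum:\n{}".format(spec_query))
```

The cleaned spectra can then be passed to ```calculate_entropy_similarity``` as in the next cell.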
###Code
import numpy as np
import spectral_entropy
spec_query = np.array([[69.071, 7.917962], [86.066, 1.021589], [86.0969, 100.0]], dtype=np.float32)
spec_reference = np.array([[41.04, 37.16], [69.07, 66.83], [86.1, 999.0]], dtype=np.float32)
# Calculate entropy similarity.
similarity = spectral_entropy.calculate_entropy_similarity(spec_query, spec_reference, ms2_da=0.05)
print("Entropy similarity:{}.".format(similarity))
###Output
Entropy similarity:0.8984398591079145.
###Markdown
Spectral similarity
We also provide 44 different spectral similarity algorithms for MS/MS spectral comparison.
You can find the detailed reference
here: [https://SpectralEntropy.readthedocs.io/en/master/](https://SpectralEntropy.readthedocs.io/en/master/)
Example code
Before calculating spectral similarity, it's highly recommended to remove spectral noise. For example, peaks with
intensity less than 1% of the maximum intensity can be removed to improve identification performance.
###Code
import numpy as np
import spectral_entropy
spec_query = np.array([[69.071, 7.917962], [86.066, 1.021589], [86.0969, 100.0]], dtype=np.float32)
spec_reference = np.array([[41.04, 37.16], [69.07, 66.83], [86.1, 999.0]], dtype=np.float32)
# Calculate entropy similarity.
similarity = spectral_entropy.similarity(spec_query, spec_reference, method="entropy", ms2_da=0.05)
print("Entropy similarity:{}.".format(similarity))
similarity = spectral_entropy.similarity(spec_query, spec_reference, method="unweighted_entropy", ms2_da=0.05)
print("Unweighted entropy similarity:{}.".format(similarity))
all_dist = spectral_entropy.all_similarity(spec_query, spec_reference, ms2_da=0.05)
for dist_name in all_dist:
method_name = spectral_entropy.methods_name[dist_name]
print("Method name: {}, similarity score:{}.".format(method_name, all_dist[dist_name]))
###Output
Method name: Entropy distance, similarity score:0.8984398591079145.
Method name: Unweighted entropy distance, similarity score:0.9826668790176113.
Method name: Euclidean distance, similarity score:0.9704388194862964.
Method name: Manhattan distance, similarity score:0.9663097634911537.
Method name: Chebyshev distance, similarity score:0.9663097560405731.
Method name: Squared Euclidean distance, similarity score:0.9991261365939863.
Method name: Fidelity distance, similarity score:0.9828163981437683.
Method name: Matusita distance, similarity score:0.8689137443332198.
Method name: Squared-chord distance, similarity score:0.982816394418478.
Method name: Bhattacharya 1 distance, similarity score:0.9860314218260302.
Method name: Bhattacharya 2 distance, similarity score:0.9829623601324312.
Method name: Harmonic mean distance, similarity score:0.9824790358543396.
Method name: Probabilistic symmetric χ2 distance, similarity score:0.9824790470302105.
Method name: Ruzicka distance, similarity score:0.9348156005144119.
Method name: Roberts distance, similarity score:0.9507221579551697.
Method name: Intersection distance, similarity score:0.9663097858428955.
Method name: Motyka distance, similarity score:0.9663097858428955.
Method name: Canberra distance, similarity score:0.475620517035965.
Method name: Baroni-Urbani-Buser distance, similarity score:0.9711240530014038.
Method name: Penrose size distance, similarity score:0.9129998942501335.
Method name: Mean character distance, similarity score:0.9831548817455769.
Method name: Lorentzian distance, similarity score:0.9376263842666843.
Method name: Penrose shape distance, similarity score:0.9704388379255426.
Method name: Clark distance, similarity score:0.5847746606268357.
Method name: Hellinger distance, similarity score:0.6877124408992461.
Method name: Whittaker index of association distance, similarity score:0.9082068549409137.
Method name: Symmetric χ2 distance, similarity score:0.9235780252817392.
Method name: Pearson/Spearman Correlation Coefficient, similarity score:0.9995291233062744.
Method name: Improved Similarity, similarity score:0.5847746606268357.
Method name: Absolute Value Distance, similarity score:0.9663097634911537.
Method name: Dot product distance, similarity score:0.9992468165696725.
Method name: Cosine distance, similarity score:0.9992468165696725.
Method name: Reverse dot product distance, similarity score:0.9992468165696725.
Method name: Spectral Contrast Angle, similarity score:0.9992467761039734.
Method name: Wave Hedges distance, similarity score:0.4566912449792375.
Method name: Jaccard distance, similarity score:0.997934231068939.
Method name: Dice distance, similarity score:0.9989660476567224.
Method name: Inner product distance, similarity score:0.8442940711975098.
Method name: Divergence distance, similarity score:0.331483304773883.
Method name: Avg (L1, L∞) distance, similarity score:0.9326195220152537.
Method name: Vicis-Symmetric χ2 3 distance, similarity score:0.981897447258234.
Method name: MSforID distance version 1, similarity score:0.8395898139303545.
Method name: MSforID distance, similarity score:0.6301550967406659.
Method name: Weighted dot product distance, similarity score:0.9998376420729537.
###Markdown
Parallelizing operations on SAM/BAM filesSAM/BAM files are typically large, thus operations on these files are time intensive. This project provides tools to parallelize operations on SAM/BAM files. The workflow follows:1. Split BAM/SAM file in _n_ chunks2. Perform the operation on each chunk in a dedicated process and save the resulting SAM/BAM chunk 3. Merge results back into a single SAM/BAM fileDepends on:1. Samtools Installation1. Git clone project2. cd to cloned project directory3. ```sudo python3 setup.py install``` UsageThere is one main function named ```parallelizeBAMoperation```. This function takes as mandatory arguments:1. path to original bam file (should be ordered)2. a callable function to perform the operation on each bam file chunkThe callable function must accept the following two arguments first: (i) path to the input bam file and (ii) path to the resulting output bam file, in this order. NotePreparing a bam file to run an operation in parallel takes a while, thus it is not worth it when the operation itself takes a short time. For example, preparing a typical bam file for parallelization (in 8 processes) can take almost a minute.
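As an illustrative example of such a callable beyond the trivial copy used in the cell below, one could wrap a samtools filtering step. This sketch assumes samtools is available on the PATH, and the mapping-quality threshold is just a made-up parameter; it is not part of this package:

```python
import subprocess

def filter_by_mapq(input_bam, output_bam, min_mapq=30):
    """Hypothetical per-chunk operation: keep reads with mapping quality >= min_mapq.

    It matches the required signature: input path first, output path second.
    """
    subprocess.run(
        ['samtools', 'view', '-b', '-q', str(min_mapq), '-o', output_bam, input_bam],
        check=True)
```

Such a function could then be passed to ```parallelizeBAMoperation``` exactly like ```foo``` in the cell below.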
###Code
from parallelbam.parallelbam import parallelizeBAMoperation, getNumberOfReads
import shutil
def foo(input_bam, output_bam):
shutil.copyfile(input_bam, output_bam)
parallelizeBAMoperation('../sample.bam',
foo, output_path=None,
n_processes=4)
getNumberOfReads('../sample.bam')
getNumberOfReads('../processed.bam')
###Output
_____no_output_____
###Markdown
Demonstration of loading subset from large dat file with neo RawIOhttps://neo.readthedocs.io/en/stable/rawio.htmlneo.rawio is a low-level layer for reading data only. Reading consists of getting NumPy buffers (often int16/int64) of signals/spikes/events. Import neuroscoperawio https://github.com/NeuralEnsemble/python-neo/blob/master/neo/rawio/neuroscoperawio.pyand other helpful packages
###Code
from neo.rawio import neuroscoperawio
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
helper function to read xml channels and groups. We need this because neuroscoperawio does not preserve or store the channel order, as far as I know.
###Code
from xml.etree import ElementTree
def parse_xml_channel_groups(filename):
filename = filename.replace('.xml', '').replace('.dat', '')
tree = ElementTree.parse(filename + '.xml')
root = tree.getroot()
# find channels
channel_group = []
for grp_index, xml_chx in enumerate(
root.find('anatomicalDescription').find('channelGroups').findall('group')):
for xml_rc in xml_chx:
channel_group.append([int(xml_rc.text),grp_index])
return np.array(channel_group)
###Output
_____no_output_____
###Markdown
First create a reader from class neuroscoperawio
###Code
reader = neuroscoperawio.NeuroScopeRawIO('Z:/Data/HMC1/day8/day8')
reader
###Output
_____no_output_____
###Markdown
Then browse the internal header and display information:
###Code
reader.parse_header()
print(reader)
###Output
NeuroScopeRawIO: Z:/Data/HMC1/day8/day8
nb_block: 1
nb_segment: [1]
signal_streams: [Signals (chans: 512)]
signal_channels: [ch0grp0, ch1grp0, ch2grp0, ch3grp0 ... ch508grp15 , ch509grp15 , ch510grp15 , ch511grp15]
spike_channels: []
event_channels: []
###Markdown
You get the number of blocks and segments per block. You have information about channels: signal_channels, spike_channels, event_channels. All this information is internally available in the header dict:
###Code
reader.header.keys()
###Output
_____no_output_____
###Markdown
You can convert signal channel info to pandas data frame for ease
###Code
df = pd.DataFrame.from_dict(reader.header['signal_channels'])
df.head()
###Output
_____no_output_____
###Markdown
Finally, let's load and plot some data. Find channels from shank 9 using the helper 'parse_xml_channel_groups', as we are unsure whether neo stores the channel order.
###Code
channel_group = parse_xml_channel_groups(reader.filename)
shank = 9
channel_indexes = channel_group[channel_group[:,1] == shank,0]
###Output
_____no_output_____
###Markdown
Get signal from shank 9 channels around sharp wave ripple
###Code
# epoch of time around ripple, which was previously found
seconds_idx = np.array([7.320,7.620])
# convert to index
to_idx = (seconds_idx*reader.get_signal_sampling_rate()).astype(int)
# get chunk of data
raw_sigs = reader.get_analogsignal_chunk(i_start=to_idx[0],
i_stop=to_idx[1],
channel_indexes=channel_indexes)
###Output
_____no_output_____
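The chunk above is a raw integer buffer. If values in physical units are needed, neo's rawio layer provides ```rescale_signal_raw_to_float``` to apply the per-channel gain and offset; a minimal sketch reusing the objects defined above (depending on the neo version, a stream_index argument may also be required):

```python
# Convert the raw int buffer to float values in the recording's physical units.
float_sigs = reader.rescale_signal_raw_to_float(raw_sigs,
                                                dtype='float64',
                                                channel_indexes=channel_indexes)
```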
###Markdown
finally plot data
###Code
plt.figure(figsize=(4,12))
channel_offset = -np.arange(raw_sigs.shape[1])*4500
x = np.arange(raw_sigs.shape[0]) / reader.get_signal_sampling_rate()
plt.plot(x,raw_sigs + channel_offset,color='k',linewidth=1)
ax = plt.gca()
ax.set_yticks(channel_offset)
ax.set_yticklabels(channel_indexes)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.xlabel('time (sec)')
plt.ylabel('channel id')
plt.show()
###Output
_____no_output_____
###Markdown
Starting InformationThere are four object classes that can be created: course, section, student, and assignment. To create a course object, call the course class with the path of the data. The data should be stored in a folder as separate csv files for each section and should be labeled as COURSE_SECT.csv, like 'CEM153_01H.csv'. Course ClassCreating a course object will create all section objects for that course
###Code
cem = course('./exampledata/CEM153/')
###Output
_____no_output_____
###Markdown
The course class contains course-wide summaries of the overall scores, assignment scores, students in each section, students with missing or ungraded assignments, etc. Use the ```print``` command to see a summary of useful information about the course
###Code
print(cem)
###Output
_____no_output_____
###Markdown
Use ```.__doc__``` to see the full list of attributes for the course
###Code
print(cem.__doc__)
print(cem.assnsectnumD)
###Output
_____no_output_____
###Markdown
Section ClassOnce the course object has been created, it creates a dictionary with keys that correspond to the section number and entries that are the section objects. The attribute name for this dictionary in the course object is ```sectD```. To call a specific section object, use the following syntax:
###Code
sect = cem.sectD['03']
###Output
_____no_output_____
###Markdown
When the section object is created, it creates all the student and assignment objects for that section. The section class contains section-wide summaries about student scores, assignment scores, students with missing or ungraded assignments, etc. Use the ```print``` command to see a summary of useful information about each section
###Code
print(sect)
###Output
_____no_output_____
###Markdown
Use ```.__doc__``` to see the full list of attributes for the section
###Code
print(sect.__doc__)
print(sect.allGrade)
###Output
_____no_output_____
###Markdown
Student ClassOnce the section object has been created, it creates a dictionary with keys that correspond to the student username and entries that are the student objects. The attribute name for this dictionary in the section object is ```stuDict```. To call a specific student object, use the following syntax:
###Code
stud = sect.stuDict['cdooku']
###Output
_____no_output_____
###Markdown
The student class contains information about the individual student like scores, grades, missing or ungraded assignments, etc. Use the ```print``` command to see a summary of useful information about the individual student
###Code
print(stud)
###Output
_____no_output_____
###Markdown
Use ```.__doc__``` to see the full list of attributes for the student
###Code
print(stud.__doc__)
###Output
_____no_output_____
###Markdown
Assignment ClassOnce the section object has been created, it creates a dictionary with keys that correspond to the assignment name and entries that are the assignment objects. The attribute name for this dictionary in the section object is ```assnDict```. To call a specific assignment object, use the following syntax:
###Code
assn = sect.assnDict['L2 - Worksheet']
###Output
_____no_output_____
###Markdown
The assignment class contains information about the individual assignment within the section like average score, grade, missing or ungraded work, etc. Use the ```print``` command to see a summary of useful information about the individual assignment
###Code
print(assn)
###Output
_____no_output_____
###Markdown
Use ```.__doc__``` to see the full list of attributes for the assignment
###Code
print(assn.__doc__)
###Output
_____no_output_____
###Markdown
Dependencies: graph_stuff.py, numpy, networkx, pygraphviz
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
import pygraphviz
import graph_stuff
import numpy as np
#token = None #put ADS token here
token = '46fJL9NWv4smnN52bx9RTW1OEXCJIvOelKccLVmI'
def get_cmst(start_bibcode):
cgraph = graph_stuff.build_citation_graph(start_bibcode, token, depth=3)
cmst = nx.algorithms.maximum_spanning_tree(cgraph)
return cmst
def make_plot(cmst, savename=None):
pubdates = np.asarray(list(nx.get_node_attributes(cmst, 'pubdate').values()))
minpubdate = pubdates.min()
maxpubdate = pubdates.max()
def normalize(x):
return (x - maxpubdate) / (maxpubdate - minpubdate)
plt.figure(figsize=[16, 16])
pos = nx.nx_agraph.graphviz_layout(cmst, prog='twopi', args='')
plt.figure(figsize=(8, 8))
nx.draw(cmst, pos, node_size=20, node_color=pubdates,
alpha=0.5, with_labels=False, cmap='rainbow')
plt.axis('equal')
if savename is not None:
plt.savefig(savename)
cmst = get_cmst('2016ApJ...817..91B')
make_plot(cmst, savename='eg.png')
###Output
_____no_output_____
###Markdown
Demonstration
###Code
# instantiating a solver using the default values, except for t_max which is set up for a 96 year long run
solver = adsolver.ADSolver(t_max=360*96*86400)
###Output
_____no_output_____
###Markdown
The following cell runs the solver; it takes about 20 seconds for 96 years.
###Code
u = solver.solve()
# estimating QBO amplitudes and period
spinup_time = 12*360*86400
amp25 = utils.estimate_amplitude(solver.time, solver.z, u, height=25e3, spinup=spinup_time)
amp20 = utils.estimate_amplitude(solver.time, solver.z, u, height=20e3, spinup=spinup_time)
tau25 = utils.estimate_period(solver.time, solver.z, u, height=25e3, spinup=spinup_time)
###Output
_____no_output_____
###Markdown
Plotting the solution
###Code
fig_size = (06.90, 02.20+01.50)
fig = plt.figure(figsize=fig_size)
ax = []
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 01.25, 06.00, 02.00])))
cmin = -np.max(np.abs(u.numpy()))
cmax = np.max(np.abs(u.numpy()))
xmin = 84.
xmax = 96.
ymin = 17.
ymax = 35.
ax[0].set_xlim(left=84.)
ax[0].set_xlim(right=96.)
ax[0].set_ylim(bottom=17.)
ax[0].set_ylim(top=35.)
h = []
h.append(ax[0].contourf(solver.time[::30]/86400/360, solver.z[:]/1000, u.numpy()[::30, :].T,
21, cmap="RdYlBu_r", vmin=cmin, vmax=cmax))
ax[0].axhline(25., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.)
ax[0].axhline(20., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.)
ax[0].set_ylabel('Km', fontsize=10)
ax[0].set_xlabel('model year', fontsize=10)
xticks_list = np.arange(xmin, xmax+1, 1)
ax[0].set_xticks(xticks_list)
yticks_list = np.arange(ymin, ymax+2, 2)
ax[0].set_yticks(yticks_list)
xticklabels_list = list(xticks_list)
xticklabels_list = [ '%.0f' % elem for elem in xticklabels_list ]
ax[0].set_xticklabels(xticklabels_list, fontsize=10)
ax[0].xaxis.set_minor_locator(MultipleLocator(1.))
ax[0].yaxis.set_minor_locator(MultipleLocator(1.))
ax[0].tick_params(which='both', left=True, right=True, bottom=True, top=True)
ax[0].tick_params(which='both', labelbottom=True)
ax[0].text(95.50, 25, r'$\sigma_{25}$ = ' '%.1f' %amp25 + r' $\mathrm{m s^{-1}}$',
horizontalalignment='right', verticalalignment='bottom', color='black')
ax[0].text(95.50, 20, r'$\sigma_{20}$ = ' '%.1f' %amp20 + r' $\mathrm{m s^{-1}}$',
horizontalalignment='right', verticalalignment='bottom', color='black')
ax[0].text(84.50, 25, r'$\tau_{25}$ = ' '%.0f' %tau25 + ' months',
horizontalalignment='left', verticalalignment='bottom', color='black')
# # colorbars
cbar_ax0 = fig.add_axes(ax_pos_inch_to_absolute(fig_size, [01.00, 00.50, 05.50, 00.10]))
ax[0].figure.colorbar(plt.cm.ScalarMappable(cmap="RdYlBu_r"), cax=cbar_ax0, format='% 2.0f',
boundaries=np.linspace(cmin, cmax, 21), orientation='horizontal',
label=r'$\mathrm{m s^{-1}}$')
###Output
_____no_output_____
###Markdown
Individual terms plot
###Code
# sfunc, gfunc, Ffunc = utils.make_source_func(solver)
model = utils.load_model(solver)
g0 = torch.zeros_like(u)
g1 = torch.zeros_like(u)
F0 = torch.zeros_like(u)
F1 = torch.zeros_like(u)
dF0 = torch.zeros_like(u)
dF1 = torch.zeros_like(u)
S = torch.zeros_like(u)
rhs = torch.zeros_like(u)
diffu = torch.zeros_like(u)
for i in range(solver.time.shape[0]):
g0[i, :] = model.g_func(32, 1 * 2 * np.pi / 4e7, u[i, :])
g1[i, :] = model.g_func(-32, 1 * 2 * np.pi / 4e7, u[i, :])
F0[i, :] = model.F_func(6e-4 / 0.1006, g0[i, :]) * 0.1006
F1[i, :] = model.F_func(-6e-4 / 0.1006, g1[i, :]) * 0.1006
dF0[i, :] = torch.matmul(solver.D1, F0[i, :]) / utils.get_rho(solver.z)
dF1[i, :] = torch.matmul(solver.D1, F1[i, :]) / utils.get_rho(solver.z)
S[i, :] = model.forward(u[i, :])
rhs[i, :] = (-solver.w * torch.matmul(solver.D1, u[i, :]) + solver.kappa * torch.matmul(solver.D2, u[i, :]) - S[i, :])
diffu[i, :] = solver.kappa * torch.matmul(solver.D2, u[i, :])
fig_size = (08.00, 11.25) #02.20+01.50)
fig = plt.figure(figsize=fig_size)
ax = []
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 09.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 07.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 05.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 03.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 01.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [04.50, 09.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [04.50, 07.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [04.50, 05.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [04.50, 03.25, 03.00, 01.50])))
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [04.50, 01.25, 03.00, 01.50])))
times = [ 360*3 + 90*i for i in range(4) ]
x_ticks_list = np.arange(17e3, 37.5e3, 2500)
ax[0].plot(solver.z[:], u[times, :].T, marker='.')
ax[0].axhline(0., xmin=0, xmax=1, color='black', linestyle='dashed')
ax[0].axhline(32., xmin=0, xmax=1, color='black', linestyle='dashed')
ax[0].axhline(-33., xmin=0, xmax=1, color='black', linestyle='dashed')
ax[0].set_title(r'$\bar{u}$')
ax[0].xaxis.set_ticklabels([])
ax[1].plot(solver.z[:], -S[times, :].T, marker='.')
ax[1].set_title(r'$- 1 / \rho \partial F / \partial z$')
ax[1].xaxis.set_ticklabels([])
ax[2].plot(solver.z[:], g0[times, :].T, marker='.')
ax[2].set_title(r'$g_0$')
ax[2].xaxis.set_ticklabels([])
ax[3].plot(solver.z[:], F0[times, :].T, marker='.')
ax[3].set_title(r'$F_0$')
ax[3].xaxis.set_ticklabels([])
ax[4].plot(solver.z[:], -dF0[times, :].T, marker='.')
ax[4].set_title(r'$- \partial F_0 / \partial z$')
ax[4].xaxis.set_ticklabels([])
ax[4].set_xlabel('z [m]', fontsize=10)
ax[5].plot(solver.z[:], rhs[times, :].T, marker='.')
ax[5].set_title('RHS')
ax[5].xaxis.set_ticklabels([])
ax[6].plot(solver.z[:], diffu[times, :].T, marker='.')
ax[6].set_title(r'$K \partial^2 \bar{u} / \partial z^2$')
ax[6].xaxis.set_ticklabels([])
ax[7].plot(solver.z[:], g1[times, :].T, marker='.')
ax[7].set_title(r'$g_1$')
ax[7].xaxis.set_ticklabels([])
ax[8].plot(solver.z[:], F1[times, :].T, marker='.')
ax[8].set_title(r'$F_1$')
ax[8].xaxis.set_ticklabels([])
ax[9].plot(solver.z[:], -dF1[times, :].T, marker='.', label=['t=' + str(step) + ' days' for step in times])
ax[9].set_title(r'$- \partial F_1 / \partial z$')
ax[9].xaxis.set_ticklabels([])
ax[9].set_xlabel('z [m]', fontsize=10)
ax[9].legend()
for i in range(10):
ax[i].set_xticks(x_ticks_list)
ax[i].grid()
ax[i].ticklabel_format(axis="y", style="sci", scilimits=(0,0))
###Output
_____no_output_____
###Markdown
"Calibration" We now use backpropagation to demonstrate a calibration problem where we seek to tune both the wave amplitudes and phase speeds to produce an oscillation with high-level amplitude of $23.5$ m s$^{-1}$, low-level amplitude of $20$ m s$^{-1}$, and a period of 28 months (the amplitudes are rather arbitrary, but the period corresponds to observations and you could imagine using observations for the amplitudes as well). Direct integrations suggest that this problem has a well-defined minimum in the QBO-relevant region of the (amplitude, phase speed) plane, so we should be able to converge. To reduce the computation time we will use the 2-wave spectrum for this demonstration. Before optimization
###Code
solver = adsolver.ADSolver(t_max=360*96*86400)
As = (torch.tensor([4.5e-4, -4.5e-4]) / 0.1006)
cs = torch.tensor([40, -40])
u = solver.solve(
source_func=WaveSpectrum(solver, As=As, cs=cs),
nsteps=360*96+1
)
# estimating QBO amplitudes and period
spinup_time = 12*360*86400
amp25 = utils.estimate_amplitude(solver.time, solver.z, u, height=25e3, spinup=spinup_time)
amp20 = utils.estimate_amplitude(solver.time, solver.z, u, height=20e3, spinup=spinup_time)
tau25 = utils.estimate_period(solver.time, solver.z, u, height=25e3, spinup=spinup_time)
fig_size = (06.90, 02.20+01.50)
fig = plt.figure(figsize=fig_size)
ax = []
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 01.25, 06.00, 02.00])))
cmin = -np.max(np.abs(u.detach().numpy()))
cmax = np.max(np.abs(u.detach().numpy()))
xmin = 84.
xmax = 96.
ymin = 17.
ymax = 35.
ax[0].set_xlim(left=84.)
ax[0].set_xlim(right=96.)
ax[0].set_ylim(bottom=17.)
ax[0].set_ylim(top=35.)
h = []
h.append(ax[0].contourf(solver.time[::30]/86400/360, solver.z[:]/1000, u.detach().numpy()[::30, :].T,
21, cmap="RdYlBu_r", vmin=cmin, vmax=cmax))
ax[0].axhline(25., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.)
ax[0].axhline(20., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.)
ax[0].set_ylabel('Km', fontsize=10)
ax[0].set_xlabel('model year', fontsize=10)
xticks_list = np.arange(xmin, xmax+1, 1)
ax[0].set_xticks(xticks_list)
yticks_list = np.arange(ymin, ymax+2, 2)
ax[0].set_yticks(yticks_list)
xticklabels_list = list(xticks_list)
xticklabels_list = [ '%.0f' % elem for elem in xticklabels_list ]
ax[0].set_xticklabels(xticklabels_list, fontsize=10)
ax[0].xaxis.set_minor_locator(MultipleLocator(1.))
ax[0].yaxis.set_minor_locator(MultipleLocator(1.))
ax[0].tick_params(which='both', left=True, right=True, bottom=True, top=True)
ax[0].tick_params(which='both', labelbottom=True)
ax[0].text(95.50, 25, r'$\sigma_{25}$ = ' '%.1f' %amp25 + r' $\mathrm{m s^{-1}}$',
horizontalalignment='right', verticalalignment='bottom', color='black')
ax[0].text(95.50, 20, r'$\sigma_{20}$ = ' '%.1f' %amp20 + r' $\mathrm{m s^{-1}}$',
horizontalalignment='right', verticalalignment='bottom', color='black')
ax[0].text(84.50, 25, r'$\tau_{25}$ = ' '%.0f' %tau25 + ' months',
horizontalalignment='left', verticalalignment='bottom', color='black')
# # colorbars
cbar_ax0 = fig.add_axes(ax_pos_inch_to_absolute(fig_size, [01.00, 00.50, 05.50, 00.10]))
ax[0].figure.colorbar(plt.cm.ScalarMappable(cmap="RdYlBu_r"), cax=cbar_ax0, format='% 2.0f',
boundaries=np.linspace(cmin, cmax, 21), orientation='horizontal',
label=r'$\mathrm{m s^{-1}}$')
###Output
_____no_output_____
###Markdown
Optimizing (takes about 30 min)
###Code
solver = adsolver.ADSolver(t_max=(360 * 24 * 86400))
As = (torch.tensor([4.5e-4, -4.5e-4]) / 0.1006).requires_grad_()
cs = torch.tensor([40., -40.]).requires_grad_()
# u_spunup = solver.solve(source_func=utils.make_source_func(solver, As=As, cs=cs), nsteps=360*12+1)[-1]
# initial_condition = lambda _: u_spunup
optimizer = torch.optim.Adam([As, cs])
max_iters = 5
i_25km = abs(solver.z - 25e3).argmin()
i_20km = abs(solver.z - 20e3).argmin()
target_sigma25 = 23.5
target_sigma20 = 20
target_tau25 = 28
# def get_loss(u):
# return ((utils.estimate_amplitude(solver.time, solver.z, u, height=25e3) - target_sigma25) ** 2 / target_sigma25 ** 2 +
# (utils.estimate_amplitude(solver.time, solver.z, u, height=20e3) - target_sigma20) ** 2 / target_sigma20 ** 2 +
# (utils.estimate_period(solver.time, solver.z, u_padded, height=25e3) - target_tau25) ** 2 / target_tau25 ** 2 )
def get_loss(u):
# This is a silly loss function but it runs much faster than the commented one above.
return (u[:, i_25km].std() - target_sigma25) ** 2
for n_iter in range(1, max_iters + 1):
optimizer.zero_grad()
u = solver.solve(source_func=WaveSpectrum(solver, As=As, cs=cs))
loss = get_loss(u)
loss.backward()
optimizer.step()
if n_iter % 1 == 0:
print(f'Iteration {n_iter}: loss is {loss:.4f}')
###Output
Iteration 1: loss is 25.3371
Iteration 2: loss is 5.0228
Iteration 3: loss is 6.3079
Iteration 4: loss is 1.9744
Iteration 5: loss is 0.2247
###Markdown
After optimization
###Code
# estimating QBO amplitudes and period
spinup_time = 12*360*86400
amp25 = utils.estimate_amplitude(solver.time, solver.z, u.detach(), height=25e3, spinup=spinup_time)
amp20 = utils.estimate_amplitude(solver.time, solver.z, u.detach(), height=20e3, spinup=spinup_time)
tau25 = utils.estimate_period(solver.time, solver.z, u.detach(), height=25e3, spinup=spinup_time)
fig_size = (06.90, 02.20+01.50)
fig = plt.figure(figsize=fig_size)
ax = []
ax.append(fig.add_axes(ax_pos_inch_to_absolute(fig_size, [00.75, 01.25, 06.00, 02.00])))
cmin = -np.max(np.abs(u.detach().numpy()))
cmax = np.max(np.abs(u.detach().numpy()))
# xmin = 84.
# xmax = 96.
# ymin = 17.
# ymax = 35.
# ax[0].set_xlim(left=84.)
# ax[0].set_xlim(right=96.)
# ax[0].set_ylim(bottom=17.)
# ax[0].set_ylim(top=35.)
xmin = 0.
xmax = 12.
ymin = 17.
ymax = 35.
ax[0].set_xlim(left=0.)
ax[0].set_xlim(right=12.)
ax[0].set_ylim(bottom=17.)
ax[0].set_ylim(top=35.)
h = []
h.append(ax[0].contourf(solver.time[::30]/86400/360, solver.z[:]/1000, u.detach().numpy()[::30, :].T,
21, cmap="RdYlBu_r", vmin=cmin, vmax=cmax))
ax[0].axhline(25., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.)
ax[0].axhline(20., xmin=0, xmax=1, color='black', linestyle='dashed', linewidth=1.)
ax[0].set_ylabel('Km', fontsize=10)
ax[0].set_xlabel('model year', fontsize=10)
xticks_list = np.arange(xmin, xmax+1, 1)
ax[0].set_xticks(xticks_list)
yticks_list = np.arange(ymin, ymax+2, 2)
ax[0].set_yticks(yticks_list)
xticklabels_list = list(xticks_list)
xticklabels_list = [ '%.0f' % elem for elem in xticklabels_list ]
ax[0].set_xticklabels(xticklabels_list, fontsize=10)
ax[0].xaxis.set_minor_locator(MultipleLocator(1.))
ax[0].yaxis.set_minor_locator(MultipleLocator(1.))
ax[0].tick_params(which='both', left=True, right=True, bottom=True, top=True)
ax[0].tick_params(which='both', labelbottom=True)
# ax[0].text(95.50, 25, r'$\sigma_{25}$ = ' '%.1f' %amp25 + r' $\mathrm{m s^{-1}}$',
# horizontalalignment='right', verticalalignment='bottom', color='black')
# ax[0].text(95.50, 20, r'$\sigma_{20}$ = ' '%.1f' %amp20 + r' $\mathrm{m s^{-1}}$',
# horizontalalignment='right', verticalalignment='bottom', color='black')
# ax[0].text(84.50, 25, r'$\tau_{25}$ = ' '%.0f' %tau25 + ' months',
# horizontalalignment='left', verticalalignment='bottom', color='black')
ax[0].text(11.50, 25, r'$\sigma_{25}$ = ' '%.1f' %amp25 + r' $\mathrm{m s^{-1}}$',
horizontalalignment='right', verticalalignment='bottom', color='black')
ax[0].text(11.50, 20, r'$\sigma_{20}$ = ' '%.1f' %amp20 + r' $\mathrm{m s^{-1}}$',
horizontalalignment='right', verticalalignment='bottom', color='black')
ax[0].text(00.50, 25, r'$\tau_{25}$ = ' '%.0f' %tau25 + ' months',
horizontalalignment='left', verticalalignment='bottom', color='black')
# # colorbars
cbar_ax0 = fig.add_axes(ax_pos_inch_to_absolute(fig_size, [01.00, 00.50, 05.50, 00.10]))
ax[0].figure.colorbar(plt.cm.ScalarMappable(cmap="RdYlBu_r"), cax=cbar_ax0, format='% 2.0f',
boundaries=np.linspace(cmin, cmax, 21), orientation='horizontal',
label=r'$\mathrm{m s^{-1}}$')
###Output
_____no_output_____
###Markdown
Part 1: `tweetharvest` Example Analysis This is an example notebook demonstrating how to establish a connection to a database of tweets collected using [`tweetharvest`](https://github.com/ggData/tweetharvest). It presupposes that all [the setup instructions](https://github.com/ggData/tweetharvest/blob/master/README.md) have been completed (see README file for that repository) and that the MongoDB server is running as described there. We start by importing the [PyMongo package](http://api.mongodb.org/python/current/index.html), the official package for accessing MongoDB databases.
###Code
import pymongo
###Output
_____no_output_____
###Markdown
Next we establish a link with the database. We know that the database created by `tweetharvester` is called `tweets_db` and within it is a collection of tweets that goes by the name of the project, in this example: `emotweets`.
###Code
db = pymongo.MongoClient().tweets_db
coll = db.emotweets
coll
###Output
_____no_output_____
###Markdown
We now have an object, `coll`, that offers full access to the MongoDB API where we can analyse the data in the collected tweets. For instance, in our small example collection, we can count the number of tweets:
###Code
coll.count()
###Output
_____no_output_____
###Markdown
Or we can count the number of tweets that are geolocated with a field containing the latitude and longitude of the user when they sent the tweet. We construct a MongoDB query that looks for a non-empty field called `coordinates`.
###Code
query = {'coordinates': {'$ne': None}}
coll.find(query).count()
###Output
_____no_output_____
###Markdown
Or how many tweets had the hashtag `happy` in them?
###Code
query = {'hashtags': {'$in': ['happy']}}
coll.find(query).count()
###Output
_____no_output_____
###Markdown
Pre-requisites for Analysis In order to perform these analyses there are a few things one needs to know:1. At the risk of stating the obvious: how to code in [Python](http://www.python.org) (there is also [an excellent tutorial](https://docs.python.org/2/tutorial/)). Please note that the current version of `tweetharvest` uses Python 2.7, and not Python 3. 2. How to perform MongoDB queries, including aggregation, counting, grouping of subsets of data. There is a most effective short introduction ([The Little Book on MongoDB](http://openmymind.net/mongodb.pdf) by Karl Seguin), as well as [extremely rich documentation](http://docs.mongodb.org/manual/reference/) on the parent website.3. [How to use PyMongo](http://api.mongodb.org/python/current/) to interface with the MongoDB API.Apart from these skills, one needs to know how each status is stored in the database. Here is an easy way to look at the data structure of one tweet.
###Code
coll.find_one()
###Output
_____no_output_____
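Since the pre-requisites above mention aggregation and grouping, here is a hedged sketch of counting the ten most frequent hashtags with MongoDB's aggregation pipeline (assuming PyMongo 3, where ```aggregate``` returns a cursor):

```python
# Count the ten most frequent hashtags across the collection.
pipeline = [
    {'$unwind': '$hashtags'},
    {'$group': {'_id': '$hashtags', 'n_tweets': {'$sum': 1}}},
    {'$sort': {'n_tweets': -1}},
    {'$limit': 10},
]
for row in coll.aggregate(pipeline):
    print row['_id'], row['n_tweets']
```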
###Markdown
This JSON data structure is [documented on the Twitter API website](https://dev.twitter.com/overview/api/tweets) where each field is described in detail. It is recommended that this description is studied in order to understand how to construct valid queries.`tweetharvest` is faithful to the core structure of the tweets as described in that documentation, but with minor differences created for convenience:1. All date fields are stored as MongoDB `Date` objects and returned as Python `datetime` objects. This makes it easier to work on date ranges, sort by date, and do other date and time related manipulation.2. A `hashtags` field is created for convenience. This contains a simple array of all the hashtags contained in a particular tweet and can be queried directly instead of looking for tags inside a dictionary, inside a list of other entities. It is included for ease of querying but may be ignored if one prefers. Next Steps This notebook establishes how you can connect to the database of tweets that you have harvested and how you can use the power of Python and MongoDB to access and analyse your collections. Good luck! Part 2: `tweetharvest` Further Analysis Assuming we need some more advanced work to be done on the dataset we have collected, below are some sample analyses to dip our toes in the water.The examples below are further illustrations of using our dataset with standard Python modules used in data science. The typical idiom is that of querying MongoDB to get a cursor on our dataset, importing that into an analytic tool such as Pandas, and then producing the analysis. The analyses below require that a few packages are installed on our system:- matplotlib: a python 2D plotting library ([documentation](http://matplotlib.org/contents.html))- pandas: "an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools" ([documentation](http://pandas.pydata.org/)) Important Note **The dataset used in this notebook is not published on the Github repository. If you want to experiment with your own data, you need to install the `tweetharvest` package, harvest some tweets to replicate the `emotweets` project embedded there, and then run the notebook. The intended use of this example notebook is simply as an illustration of the type of analysis one might want to do using your own tools.**
###Code
%matplotlib inline
import pymongo # in case we have run Part 1 above
import pandas as pd # for data manipulation and analysis
import matplotlib.pyplot as plt
###Output
/Users/gauden/anaconda/lib/python2.7/site-packages/pytz/__init__.py:29: UserWarning: Module argparse was already imported from /Users/gauden/anaconda/lib/python2.7/argparse.pyc, but /Users/gauden/anaconda/lib/python2.7/site-packages is being added to sys.path
from pkg_resources import resource_stream
###Markdown
Establish a Link to the Dataset as a MongoDB Collection
###Code
db = pymongo.MongoClient().tweets_db
COLL = db.emotweets
COLL
###Output
_____no_output_____
###Markdown
Descriptive Statistics Number of Tweets in Dataset
###Code
COLL.count()
def count_by_tag(coll, hashtag):
query = {'hashtags': {'$in': [hashtag]}}
count = coll.find(query).count()
return count
print 'Number of #happy tweets: {}'.format(count_by_tag(COLL, 'happy'))
print 'Number of #sad tweets: {}'.format(count_by_tag(COLL, 'sad'))
###Output
Number of #happy tweets: 8258
Number of #sad tweets: 2403
###Markdown
Number of Geolocated Tweets
###Code
query = {'coordinates': {'$ne': None}}
COLL.find(query).count()
###Output
_____no_output_____
###Markdown
Range of Creation Times for Tweets
###Code
# return a cursor that iterates over all documents and returns the creation date
cursor = COLL.find({}, {'created_at': 1, '_id': 0})
# list all the creation times and convert to Pandas DataFrame
times = pd.DataFrame(list(cursor))
times = pd.to_datetime(times.created_at)
earliest_timestamp = min(times)
latest_timestamp = max(times)
print 'Creation time for EARLIEST tweet in dataset: {}'.format(earliest_timestamp)
print 'Creation time for LATEST tweet in dataset: {}'.format(latest_timestamp)
###Output
Creation time for EARLIEST tweet in dataset: 2015-06-13 07:24:40
Creation time for LATEST tweet in dataset: 2015-06-14 09:29:21
###Markdown
Plot Tweets per Hour
###Code
query = {} # empty query means find all documents
# return just two columns, the date of creation and the id of each document
projection = {'created_at': 1}
df = pd.DataFrame(list(COLL.find(query, projection)))
times = pd.to_datetime(df.created_at)
df.set_index(times, inplace=True)
df.drop('created_at', axis=1, inplace=True)
tweets_all = df.resample('60Min', how='count')
tweets_all.plot(figsize=[12, 7], title='Number of Tweets per Hour', legend=None);
###Output
_____no_output_____
###Markdown
More Complex Query As an example of a more complex query, the following demonstrates how to extract all tweets that are not retweets, contain the hashtag `happy` as well at least one other hashtag, and that are written in English. These attributes are passed to the `.find` method as a dictionary, and the hashtags are then extracted.The hashtags of the first ten tweets meeting this specification are then printed out.
###Code
query = { # find all documents that:
'hashtags': {'$in': ['happy']}, # contain #happy hashtag
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = COLL.find(query, projection)
for tags in cursor[:10]:
print tags['hashtags']
###Output
[u'rains', u'drenched', u'happy', u'kids', u'birds', u'animals', u'tatasky', u'home', u'sad', u'life']
[u'quote', u'wisdom', u'sad', u'happy']
[u'truro', u'nightout', u'drunk', u'nationalginday', u'happy', u'fun', u'cornwall', u'girlsnight', u'zafiros']
[u'happy', u'positivity']
[u'vaghar', u'cook', u'ghee', u'colzaoil', u'spices', u'love', u'happy', u'digestion', u'ayurveda', u'intuitive']
[u'happy', u'yay']
[u'kinderscout', u'peakdistrict', u'darkpeaks', u'happy']
[u'ichoisehappy', u'life', u'happy', u'quote', u'instaphoto']
[u'streetartthrowdown', u'me', u'myself', u'wacky', u'pretty', u'cute', u'nice', u'awesome', u'cool', u'smile', u'happy', u'selfie', u'selca']
[u'brothers', u'love', u'forever', u'heart', u'bless', u'live', u'family', u'happy', u'proud']
###Markdown
Build a Network of Hashtags We could use this method to produce a network of hashtags. The following illustrates this by:- creating a generator function that yields every possible combination of two hashtags from each tweet- adding these pairs of tags as edges in a NetworkX graph- deleting the node `happy` (since it is connected to all the others by definition)- deleting those edges that are below a threshold weight- plotting the resultIn order to run this, we need to install the NetworkX package (`pip install networkx`, [documentation](https://networkx.github.io/documentation.html)) and import it as well as the `combinations` function from Python's standard library [`itertools` module](https://docs.python.org/2/library/itertools.html).
###Code
from itertools import combinations
import networkx as nx
###Output
_____no_output_____
###Markdown
Generate list of all pairs of hashtags
###Code
def gen_edges(coll, hashtag):
query = { # find all documents that:
'hashtags': {'$in': [hashtag]}, # contain hashtag of interest
'retweeted_status': None, # are not retweets
'hashtags.1': {'$exists': True}, # and have more than 1 hashtag
'lang': 'en' # written in English
}
projection = {'hashtags': 1, '_id': 0}
cursor = coll.find(query, projection)
for tags in cursor:
hashtags = tags['hashtags']
for edge in combinations(hashtags, 2):
yield edge
###Output
_____no_output_____
###Markdown
Build graph with weighted edges between hashtags
###Code
def build_graph(coll, hashtag, remove_node=True):
g = nx.Graph()
for u,v in gen_edges(coll, hashtag):
if g.has_edge(u,v):
# add 1 to weight attribute of this edge
g[u][v]['weight'] = g[u][v]['weight'] + 1
else:
# create new edge of weight 1
g.add_edge(u, v, weight=1)
if remove_node:
# since hashtag is connected to every other node,
# it adds no information to this graph; remove it.
g.remove_node(hashtag)
return g
G = build_graph(COLL, 'happy')
###Output
_____no_output_____
###Markdown
Remove rarer edges Finally we remove rare edges (defined here arbitrarily as edges with a weighting of 25 or less), then print a table of the remaining edges sorted in descending order by weight.
###Code
def trim_edges(g, weight=1):
# function from http://shop.oreilly.com/product/0636920020424.do
g2 = nx.Graph()
for u, v, edata in g.edges(data=True):
if edata['weight'] > weight:
g2.add_edge(u, v, edata)
return g2
###Output
_____no_output_____
###Markdown
View as Table
###Code
G2 = trim_edges(G, weight=25)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort(['weight'], ascending=False, inplace=True)
df
###Output
_____no_output_____
###Markdown
Plot the Network
###Code
G3 = trim_edges(G, weight=35)
pos=nx.circular_layout(G3) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3.edges()]
weight_list = [edata['weight']/5.0 for u, v, edata in G3.edges(data=True)]
# edges
nx.draw_networkx_edges(G3, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3, pos, font_size=20,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(10, 10)
plt.axis('off');
###Output
_____no_output_____
###Markdown
Repeat for `sad`
###Code
G_SAD = build_graph(COLL, 'sad')
G2S = trim_edges(G_SAD, weight=5)
df = pd.DataFrame([(u, v, edata['weight'])
for u, v, edata in G2S.edges(data=True)],
columns = ['from', 'to', 'weight'])
df.sort(['weight'], ascending=False, inplace=True)
df
###Output
_____no_output_____
###Markdown
Graph is drawn with a spring layout to bring out more clearly the disconnected sub-graphs.
###Code
G3S = trim_edges(G_SAD, weight=5)
pos=nx.spring_layout(G3S) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G3S, pos, node_size=700,
linewidths=0, node_color='#cccccc')
edge_list = [(u, v) for u, v in G3S.edges()]
weight_list = [edata['weight'] for u, v, edata in G3S.edges(data=True)]
# edges
nx.draw_networkx_edges(G3S, pos,
edgelist=edge_list,
width=weight_list,
alpha=0.4,edge_color='b')
# labels
nx.draw_networkx_labels(G3S, pos, font_size=12,
font_family='sans-serif', font_weight='bold')
fig = plt.gcf()
fig.set_size_inches(13, 13)
plt.axis('off');
###Output
_____no_output_____
###Markdown
Training the model
###Code
# Imports this cell relies on (they may already be loaded in earlier cells of the notebook)
import os
import numpy as np
import wfdb
from scipy.signal import butter, lfilter, medfilt
from sklearn.preprocessing import RobustScaler
# NOTE: CXDetector (used below) is assumed to come from this project's own code base.

def butter_bandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
b, a = butter(order, [low, high], btype='band')
return b, a
def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
b, a = butter_bandpass(lowcut, highcut, fs, order=order)
y = lfilter(b, a, data)
return y
def _read_signal(file, low_freq, high_freq, sample_freq):
record = wfdb.rdrecord(file)
annotation = wfdb.rdann(file, 'atr')
annotated_intervals = list(zip(annotation.sample, annotation.aux_note))
signal_ch1 = record.p_signal[:, 0][1500:-1500]
signal_ch2 = record.p_signal[:, 2][1500:-1500]
signal_ch3 = record.p_signal[:, 4][1500:-1500]
signal_ch1 = butter_bandpass_filter(signal_ch1, low_freq,
high_freq, sample_freq, order=4)
signal_ch2 = butter_bandpass_filter(signal_ch2, low_freq,
high_freq, sample_freq, order=4)
signal_ch3 = butter_bandpass_filter(signal_ch3, low_freq,
high_freq, sample_freq, order=4)
for i, ann in enumerate(annotated_intervals):
annotated_intervals[i] = (ann[0] - 1500, ann[1])
signal_ch1 = medfilt(signal_ch1)
signal_ch2 = medfilt(signal_ch2)
signal_ch3 = medfilt(signal_ch3)
ch1_scaler = RobustScaler()
ch2_scaler = RobustScaler()
ch3_scaler = RobustScaler()
signal_ch1 = ch1_scaler.fit_transform(signal_ch1.reshape(-1, 1)).reshape(-1, )
signal_ch2 = ch2_scaler.fit_transform(signal_ch2.reshape(-1, 1)).reshape(-1, )
signal_ch3 = ch3_scaler.fit_transform(signal_ch3.reshape(-1, 1)).reshape(-1, )
return signal_ch1, signal_ch2, signal_ch3, annotated_intervals
def _read_clinical(file):
start_idx = 0
with open(file+'.hea', 'r') as ifp:
lines = ifp.readlines()
for line_idx, line in enumerate(lines):
if line.startswith('#'):
start_idx = line_idx
break
names = []
values = []
for line in lines[start_idx+1:]:
_, name, value = line.split()
names.append(name)
values.append(value)
return names, values
def _process_clinical_df(clin_df):
clin_df = clin_df.drop(['Gestation'], axis=1)
clin_df = clin_df.replace('None', np.NaN)
clin_df = clin_df.replace('N/A', np.NaN)
clin_df['ID'] = clin_df['RecID']
for col in ['Rectime', 'Age', 'Abortions', 'Weight']:
clin_df[col] = clin_df[col].astype(float)
clin_df = clin_df.drop_duplicates()
clin_df = clin_df[['file', 'Rectime', 'Age', 'Parity', 'Abortions']]
return clin_df
def partition_data(directory, n_splits=5):
files = np.unique([x.split('.')[0] for x in os.listdir(directory)])
p_files, t_files, n_files = [], [], []
for file in files:
if file[-4] == 'n':
n_files.append(file)
elif file[-4] == 'p':
p_files.append(file)
else:
t_files.append(file)
np.random.shuffle(p_files)
np.random.shuffle(t_files)
folds = []
for split in range(n_splits):
start = lambda x: int(x * (split / n_splits))
end = lambda x: int(x * ((split + 1) / n_splits))
if split == n_splits - 1:
test_p_files = p_files[start(len(p_files)):]
test_t_files = t_files[start(len(t_files)):]
else:
test_p_files = p_files[start(len(p_files)):end(len(p_files))]
test_t_files = t_files[start(len(t_files)):end(len(t_files))]
train_p_files = sorted(list(set(p_files) - set(test_p_files)))
train_t_files = sorted(list(set(t_files) - set(test_t_files)))
test_files = test_t_files + test_p_files
train_files = train_t_files + train_p_files
folds.append((['{}{}{}'.format(directory, os.sep, x) for x in train_files],
['{}{}{}'.format(directory, os.sep, x) for x in test_files]))
return folds
folds = partition_data('tpehgts')
train_files, test_files = folds[0]
detector = CXDetector(20, 0.05, 4.0, 750, 125, 100, 100, _read_signal, _read_clinical, _process_clinical_df)
features = detector.fit(train_files)
print(list(features.columns))
###Output
_____no_output_____
###Markdown
Evaluating the model
###Code
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
import matplotlib.patches as patches  # used by create_plots below
def get_labels_preds(intervals, predictions):
preds = []
labels = []
for (start_idx, start_type), (end_idx, end_type) in zip(intervals[::2], intervals[1::2]):
if start_idx < 0 or end_idx >= len(predictions):
continue
if start_type[-1] == 'C':
labels.extend([1]*(end_idx - start_idx))
preds.extend(predictions.loc[list(range(start_idx, end_idx)), 'pred'].values)
else:
labels.extend([0]*(end_idx - start_idx))
preds.extend(predictions.loc[list(range(start_idx, end_idx)), 'pred'].values)
return labels, preds
def _load_pred_labels_intervals(predictions):
_, _, _, intervals = _read_signal(predictions['file'].values[0], 0.05, 4.0, 20.0)
labels, preds = get_labels_preds(intervals, predictions)
return labels, preds, intervals
def unweighted_auc(predictions):
all_labels, all_preds = [], []
for file in np.unique(predictions['file']):
preds = predictions[predictions['file'] == file].set_index('index', drop=True)
labels, preds, intervals = _load_pred_labels_intervals(preds)
all_labels.extend(labels)
all_preds.extend(preds)
mask = ~np.isnan(all_preds)
return roc_auc_score(np.array(all_labels)[mask], np.array(all_preds)[mask])
def create_plots(predictions):
def create_plot(signal_ch1, signal_ch2, signal_ch3, predictions, intervals):
f, ax = plt.subplots(4, 1, sharex=True, figsize=(15,3))
ax[0].plot(signal_ch1)
ax[1].plot(signal_ch2)
ax[2].plot(signal_ch3)
_max = np.max([np.max(signal_ch1), np.max(signal_ch2), np.max(signal_ch3)])
_min = np.min([np.min(signal_ch1), np.min(signal_ch2), np.min(signal_ch3)])
for (start_idx, start_type), (end_idx, end_type) in zip(intervals[::2], intervals[1::2]):
if start_type[-1] == 'C':
color = 'g'
elif start_type == '(c)':
color = 'y'
else:
color = 'r'
for k in range(3):
rect = patches.Rectangle((start_idx, _min), end_idx - start_idx, _max - _min, facecolor=color, alpha=0.5)
ax[k].add_patch(rect)
ax[3].plot(predictions)
plt.show()
plt.close()
for file in np.unique(predictions['file']):
sign_ch1, sign_ch2, sign_ch3, intervals = _read_signal(file, 0.05, 4.0, 20.0)
create_plot(sign_ch1, sign_ch2, sign_ch3, predictions[predictions['file'] == file]['pred'].values, intervals)
preds = detector.predict(test_files)
print(unweighted_auc(preds))
create_plots(preds) #0.8077784799594918
"""
def generate_predictions(file, X, idx, model, WINDOW_SIZE, DATA_DIR, OUTPUT_DIR):
for col in ['ID', 'file']:
if col in X.columns:
X = X.drop(col, axis=1)
signal_ch1, signal_ch2, signal_ch3, annotated_intervals = read_signal(DATA_DIR + '/' + file)
ts_predictions = np.empty((len(signal_ch1),), dtype=object)
predictions = model.predict_proba(X)[:, 1]
for pred, x in zip(predictions, idx):
for i in range(x, x+WINDOW_SIZE):
if ts_predictions[i] is None:
ts_predictions[i] = [pred]
else:
ts_predictions[i].append(pred)
for i in range(len(signal_ch1)):
if ts_predictions[i] is None:
ts_predictions[i] = last_value
else:
avg = np.mean(ts_predictions[i])
ts_predictions[i] = avg
last_value = avg
pd.Series(ts_predictions).to_csv('{}/{}.csv'.format(OUTPUT_DIR, file))
create_plot(signal_ch1, signal_ch2, signal_ch3, ts_predictions, annotated_intervals, '{}/{}.png'.format(OUTPUT_DIR, file))
"""
###Output
_____no_output_____
###Markdown
Example Notebook for the Psypypeline Import the module and setup a pipeline
###Code
from psypypeline.psypypeline import Pipeline
pipeline = Pipeline(name="TestPipeline", root="example")
###Output
C:\Users\hulin\anaconda3\lib\site-packages\bids\layout\models.py:148: FutureWarning: The 'extension' entity currently excludes the leading dot ('.'). As of version 0.14.0, it will include the leading dot. To suppress this warning and include the leading dot, use `bids.config.set_option('extension_initial_dot', True)`.
warnings.warn("The 'extension' entity currently excludes the leading dot ('.'). "
###Markdown
This loads everything as specified in pipeline.json, which must lie in */derivatives/*. In pipeline.json, one can specify processes (their name, the script in which they are stored, and a version of their name which will be used for filenames) and masks (their name and the nii(.gz) file in which they are stored). The above code loads all of this into memory. You can call `pipeline.processes` or `pipeline.masks` and compare the output to the content of the *pipeline.json*, the folder (*masks*), or the Python script (*processes.py*) where they are stored.
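Purely as an illustration of that structure, a pipeline.json could look roughly like the sketch below; the field names are guesses based on the description above, not the package's exact schema.
###Code
# Hypothetical sketch of the pipeline.json described above
# (field names are illustrative assumptions, not the real schema)
pipeline_json_sketch = {
    "processes": {
        "smooth":  {"script": "processes.py", "key": "smoothed"},
        "denoise": {"script": "processes.py", "key": "denoised"},
    },
    "masks": {
        "example_mask": "masks/example_mask.nii.gz",
    },
}
###Output
_____no_output_____
###Markdown
The objects actually loaded from the real file can be inspected directly: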
###Code
pipeline.masks
pipeline.processes
###Output
_____no_output_____
###Markdown
Here we see a new class: `Processes`. Calling the `__dict__` of one of the processes tells us more about its content:
###Code
pipeline.processes["denoise"].__dict__
###Output
_____no_output_____
###Markdown
Load data using the pipeline Now, loading the data is easy:
###Code
pipeline.load_data(sub="S01")
pipeline.load_data(sub="S01", smooth={})
###Output
...found sub-S01_smoothed_bold.nii.gz
###Markdown
As we can see here, just supplying the name of a subject loads the unprocessed file. Additionally supplying keywords like `smooth` applies processes from `pipeline.processes` to them, in the order of appearance. As you can see, an empty dictionary is supplied to the keyword, which means that the process will run with its default parameters. We can specify the parameters by supplying *key*-*value* pairs in the dictionary. But how do we know which parameters are allowed (apart from checking our own code again)?
###Code
pipeline.processes["smooth"].process
###Output
_____no_output_____
###Markdown
So smoothing data with a different kernel looks like this:
###Code
pipeline.load_data("S01", smooth={"fwhm": 3})
###Output
...found sub-S01_smoothed-{fwhm-3}_bold.nii.gz
###Markdown
Note that, if you run one of the `load_data` cells multiple times, it speeds up considerably and there is less output. That is because, if not specified otherwise, the loading process first checks whether this process has already been applied and stored. You can change that and other behavior of the function. Look at the docstring to find out more.
###Code
?pipeline.load_data
###Output
[1;31mSignature:[0m
[0mpipeline[0m[1;33m.[0m[0mload_data[0m[1;33m([0m[1;33m
[0m [0msub[0m[1;33m,[0m[1;33m
[0m [0mreturn_type[0m[1;33m=[0m[1;34m'Brain_Data'[0m[1;33m,[0m[1;33m
[0m [0mwrite[0m[1;33m=[0m[1;34m'all'[0m[1;33m,[0m[1;33m
[0m [0mforce[0m[1;33m=[0m[1;34m'none'[0m[1;33m,[0m[1;33m
[0m [0mverbose[0m[1;33m=[0m[1;32mTrue[0m[1;33m,[0m[1;33m
[0m [0mreload[0m[1;33m=[0m[1;32mTrue[0m[1;33m,[0m[1;33m
[0m [1;33m**[0m[0mprocesses[0m[1;33m,[0m[1;33m
[0m[1;33m)[0m [1;33m->[0m [0mnltools[0m[1;33m.[0m[0mdata[0m[1;33m.[0m[0mbrain_data[0m[1;33m.[0m[0mBrain_Data[0m[1;33m[0m[1;33m[0m[0m
[1;31mDocstring:[0m
Load data from pipeline.root/derivatives/pipeline.name and/or applies processes from
pipeline.processes to it.
By default, first checks wether the processes have been applied and saved before and
then loads them. By default, saves all the intermediate steps
Parameters
----------
sub : str
Name of the subject to load the process from.
return_type : str, optional
Type the return value. Must be one of "path", "Brain_Data". If "path" and write="none" and file does not exist,
throws an Error, as path does not exist. By default "Brain_Data"
write : str, optional
Wether to save the intermediate and the last step when applying processes. Must be one of "none" (no step is saved),
"main" (only endresult is saved) or "all" (all intermediate steps are saved). By default "all"
force : str, optional
Wether to apply processes even though a file of this already exists. Must be one of "none", "main", "all" (see above).
By default "none"
verbose : bool, optional
Wether to be verbose, by default True
reload : bool, optional
Wether to reload the pipeline.layout after writing a file. Only recommended if computing multiple independend processes.
Then, afterwards, should be reloaded by hand (call `pipeline.layout = BIDSLayout(pipeline.root)`
, by default True
Returns
-------
Brain_Data, str
(Un)Processed data or path to where the data is stored
Raises
------
TypeError
If wrong return_type is supplied
FileNotFoundError
If subject is not found
KeyError
If an unknown process is supplied
[1;31mFile:[0m c:\users\hulin\documents\uni\20wise\masterarbeit\psypypeline\psypypeline\psypypeline.py
[1;31mType:[0m method
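###Markdown
For example, based on the options documented above, forcing the smoothing step to be recomputed while only writing the end result could look like this (parameter values chosen purely for illustration):
###Code
pipeline.load_data("S01", write="main", force="all", smooth={"fwhm": 3})
###Output
_____no_output_____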
###Markdown
get_unitids by Adam Hearn A Python Module to Impute IPEDS UnitID Numbers from Non-Matching Institution Names Have you ever worked with institutional data from multiple sources? If so, one of them is likely IPEDS, which of course involves the infamous `unitid` variable. The secondary source, on the other hand, may only have the institution's name and no `unitid`. In this case, to join the datasets, you would need to merge on institution name and fill in the rest of the unitids manually to retrieve the IPEDS data. Anyone who has worked with IPEDS data would know that not all institution names perfectly line up across multiple sources. For example, Tulane University is named as Tulane University of Louisiana in the IPEDS universe. In this case of conflicting names, there would be an imperfect merge requiring you to manually enter Tulane's unitid number. In my background, I've run into this issue several times and it has gotten to the point where it would be a better use of time to create a module to automate this step rather than filling out unitids manually. That said, I've developed the Python module `unitids`. I'm making this process open-source so other higher-ed researchers can benefit, too. This module, available in the `pip` library, uses a cosine similarity text-analysis metric to merge partial or "non-matching" institution names with an IPEDS master file including all institutions in the IPEDS universe since 2004 and their unitid numbers. The process works by passing a DataFrame of institutions of which you want to get their unitids into the `get_unitids` function. From there, the function populates a sparse matrix and generates a cosine-similarity metric for each institution you passed and each institution in the IPEDS universe. It will then return two DataFrame objects: the first DataFrame will include your original data and the unitid numbers of the institutions in the dataset, along with a "match score" (displayed on a scale of 0-100). The second DataFrame includes information on the institutions that were not a perfect match, alongside their top-5 closest matches so you can make adjustments as necessary. Example Take, for example, the data present in [this article](https://www.forbes.com/sites/schifrin/2019/11/27/dawn-of-the-dead-for-hundreds-of-the-nations-private-colleges-its-merge-or-perish/77a18358770d). The cleaned data is available on my Github [here](https://raw.githubusercontent.com/ahearn15/get_unitids/master/example_dta.csv). Suppose we want to see the relationship between Forbes' Financial GPA and endowment, as reported to IPEDS. I've included a sample dataset of FY2018 endowment for all institutions in the IPEDS universe [here](https://raw.githubusercontent.com/ahearn15/get_unitids/master/ipeds_example.csv).
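To make the matching step concrete, here is a minimal, self-contained sketch of cosine similarity between character n-gram vectors of two institution names. It only illustrates the general technique; it is not the actual implementation inside `unitids`.
###Code
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

names = ["Tulane University", "Tulane University of Louisiana"]

# character 3-grams give a simple vector representation of each name
vec = CountVectorizer(analyzer='char_wb', ngram_range=(3, 3))
X = vec.fit_transform(names)

score = cosine_similarity(X[0], X[1])[0, 0]
print("similarity between the two spellings: {:.3f}".format(score))
###Output
_____no_output_____
###Markdown
With that intuition in place, we can load the example data.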
###Code
# Import necessary modules (no unitid, yet)
import pandas as pd
import numpy as np
# First we read in the Forbes data:
url = 'https://raw.githubusercontent.com/ahearn15/get_unitids/master/example_dta.csv'
forbes = pd.read_csv(url)
forbes.head(5)
# Now we read in the IPEDS data
url = 'https://raw.githubusercontent.com/ahearn15/get_unitids/master/ipeds_example.csv'
ipeds = pd.read_csv(url).drop(columns = 'Unnamed: 0')
ipeds.head(5)
# Now we merge together
forbes = forbes.rename(columns = {'College' : 'institution'}) # need to rename for merge
merged = pd.merge(forbes, ipeds, on = 'institution', how = "left")
merged.head(5)
# How many did not merge?
merged['unitid'].isna().sum()
###Output
_____no_output_____
###Markdown
If we merge our Forbes data with this 2018 list of IPEDS institutions, we get a successful merge rate of 95.7% (892 of 932 institutions). However, we still have 40 unitids we need to manually encode, taking up 40 lines of code and tedious trips to the IPEDS Data Center. What if we used the new `get_unitids` function though? get_unitids
###Code
# Installing the module
!pip install unitids==0.0.92
# Import required functions
import numpy as np
import pandas as pd
from unitids import unitids
#For viewing nonmatches
pd.set_option('display.max_rows', 100)
###Output
Requirement already satisfied: unitids==0.0.92 in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (0.0.92)
Requirement already satisfied: numpy in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (from unitids==0.0.92) (1.18.1)
Requirement already satisfied: pandas in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (from unitids==0.0.92) (1.0.1)
Requirement already satisfied: nltk in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (from unitids==0.0.92) (3.4.5)
Requirement already satisfied: textblob in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (from unitids==0.0.92) (0.15.3)
Requirement already satisfied: pytz>=2017.2 in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (from pandas->unitids==0.0.92) (2019.3)
Requirement already satisfied: python-dateutil>=2.6.1 in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (from pandas->unitids==0.0.92) (2.8.1)
Requirement already satisfied: six in /Users/adamhearn/anaconda3/lib/python3.7/site-packages (from nltk->unitids==0.0.92) (1.14.0)
###Markdown
Running the algorithm The first argument we pass to the function, `forbes`, is our original dataset with the institutions of which we want to retrieve unitids. The second argument, `stateFlag`, is an indicator of whether or not we have state abbreviations in our data. This makes the merge much faster and much cleaner, which I'll get to shortly. The function returns two DataFrames: `merged` is our original dataset with the fancy new unitids. The second dataframe returned, `nonmatches`, allows us to investigate the institutions that did not perfectly merge and make adjustments as necessary. Now we're ready to call the function! Sidenote: For the algorithm to run error-free, the institution name variable must be listed as the first column and the state variable (if available) must be in the second column.
###Code
# Calling the function
forbes_unitids, nonmatches = unitids.get_unitids(forbes, stateFlag = True)
#viewing the new data
forbes_unitids.head(5)
# How many did not merge?
forbes_unitids['unitid'].isna().sum()
###Output
_____no_output_____
###Markdown
100% of the institutions merged! We have significantly fewer unitids we need to fill in ourselves. We can investigate the institutions that were not perfect merges by viewing the second DataFrame, `nonmatches`:
###Code
nonmatches
###Output
_____no_output_____
###Markdown
For example, we can see that Franklin & Marshall College merged successfully with its official name in the IPEDS universe, "Franklin and Marshall College". Also, Tulane University merged successfully with "Tulane University of Louisiana", Hobart and William Smith Colleges merged with "Hobart William Smith Colleges", etc. Further, the algorithm accounts for institutions with changed names. For example, Dordt did not merge in the original dataset (since Forbes called it "Dordt College" and its official IPEDS name is "Dordt University"). The same issue is raised with Calvin College (now Calvin University). The IPEDS dictionary nested in the algorithm accounts for these historical name-changes. That is why there are only 15 non-perfect matches with this algorithm as opposed to 40 merging the "old fashioned way". However, this wasn't perfect: Saint John's University (MN) merged with Saint Mary's University of Minnesota instead, so we will need to correct that ourselves. We can find the correct unitid in the `nonmatches` dataframe above. Still, one line of code compared to 40 is a big time-saver!
###Code
# Replaces unitid for Saint John's University (MN) only
# For Stata folks, same as replace unitid = 174792 if institution == "Saint John's University (MN)"
forbes_unitids['unitid'] = np.where(forbes_unitids['institution'] == "Saint John's University (MN)", 174792, forbes_unitids['unitid'])
###Output
_____no_output_____
###Markdown
Running this algorithm on this dataset as opposed to merging on institution-name gives us an accuracy of 99.9% (931 of 932 institutions), up from 95.7% earlier (892 of 932 institutions). It's a marginal improvement, but a big time-saver. To answer our original research question of how endowment impacts Forbes' "Financial GPA" measure, we can merge in our IPEDS data cleanly here.
###Code
dta = pd.merge(forbes_unitids, ipeds, on = 'unitid', how = 'left')
dta = dta.drop(columns = "institution_y") # no need for duplicate column
dta = dta.rename(columns = {"institution_x" : "institution"}) # renaming
dta.head(5)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
#using logged endowment, as any sane econometrician would
sns.scatterplot(x=np.log(dta["endowment"]), y=dta["Financial GPA"])
###Output
_____no_output_____
###Markdown
Seems like endowment plays a pretty significant role in Forbes' grading of Financial GPA! What if we have no state data? Note that the original merge went very well mostly due to us having access to state codes in our secondary dataset. Suppose our Forbes data did not have state codes, in which case the merge would have gone like this:
###Code
forbes_unitids, nonmatches = unitids.get_unitids(forbes, stateFlag = False)
###Output
_____no_output_____
###Markdown
Running the algorithm with `stateFlag = False` takes significantly longer, considering we can no longer "throw out" institutions that do not match the same state. Instead, the algorithm must cross-check institutions across all states, not just the ones within states like it did when `stateFlag` was set to `True`.
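The state information is essentially a blocking step; a rough sketch of the idea is below (illustrative only -- the `state` column name is an assumption, not necessarily what the packaged IPEDS file uses).
###Code
# Illustrative blocking step (not the package's actual code): with a state code
# we only score same-state candidates; without one, every institution in the
# IPEDS universe has to be scored.
def candidate_pool(ipeds_df, state=None, state_col='state'):
    if state is not None and state_col in ipeds_df.columns:
        return ipeds_df[ipeds_df[state_col] == state]
    return ipeds_df
###Output
_____no_output_____
###Markdown
The resulting non-perfect matches can again be inspected: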
###Code
nonmatches
###Output
_____no_output_____
###Markdown
Step 1Simply define your PyTorch model like usual, and create an instance of it.
###Code
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
out = F.relu(self.conv1(x))
out = F.max_pool2d(out, 2)
out = F.relu(self.conv2(out))
out = F.max_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = F.relu(self.fc1(out))
out = F.relu(self.fc2(out))
out = self.fc3(out)
return out
pytorch_network = LeNet()
###Output
_____no_output_____
###Markdown
Step 2Determine the names of the layers.For the above model example it is very straightforward, but if you use param groups it may be a little more involved. To determine the names of the layers the next commands are useful:
###Code
# The most useful, just print the network
print(pytorch_network)
# Also useful: will only print those layers with params
state_dict = pytorch_network.state_dict()
print(util.state_dict_layer_names(state_dict))
for k,v in state_dict.items():
print(k)
print(state_dict['conv1.weight'])
print(state_dict['conv1.weight'].shape)
###Output
LeNet(
(conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
['conv1', 'conv2', 'fc1', 'fc2', 'fc3']
conv1.weight
conv1.bias
conv2.weight
conv2.bias
fc1.weight
fc1.bias
fc2.weight
fc2.bias
fc3.weight
fc3.bias
tensor([[[[ 0.1539, 0.0896, 0.0490, -0.1535, 0.0020],
[ 0.0309, -0.1750, 0.0868, -0.1527, -0.0524],
[-0.1320, -0.1024, 0.0271, -0.1880, -0.1861],
[-0.1242, 0.0839, -0.1913, 0.1370, 0.1899],
[ 0.0739, -0.1770, -0.1475, 0.0048, -0.1703]]],
[[[ 0.1676, 0.1166, 0.1534, -0.1749, 0.0259],
[-0.0847, 0.0412, -0.1052, 0.0255, -0.0765],
[-0.0072, -0.0668, -0.0168, -0.0974, 0.0535],
[ 0.0466, 0.1062, 0.0685, 0.0344, 0.0292],
[-0.0633, -0.1403, 0.1636, -0.1849, -0.1479]]],
[[[ 0.0845, 0.0621, 0.1356, -0.1767, -0.1707],
[-0.1499, 0.0968, -0.0890, 0.0065, 0.0015],
[ 0.1017, -0.0010, -0.0732, -0.1482, 0.1475],
[-0.1158, 0.1767, 0.0100, 0.1703, -0.0020],
[-0.1782, -0.0926, -0.0813, 0.0400, 0.0070]]],
[[[ 0.1217, -0.1983, 0.1840, 0.0219, 0.1203],
[-0.1175, 0.1850, 0.0765, -0.1082, -0.1909],
[ 0.1705, 0.0483, -0.1880, 0.1324, 0.0672],
[-0.1472, 0.0643, 0.0551, 0.1725, -0.0772],
[ 0.0642, -0.1890, 0.0074, 0.0906, -0.1870]]],
[[[-0.0144, -0.1863, 0.0041, -0.0734, -0.0113],
[ 0.1117, 0.0724, -0.0280, -0.1593, -0.1956],
[-0.0239, 0.1933, -0.1398, 0.1708, 0.1146],
[-0.0702, -0.1581, 0.0839, 0.1700, -0.1461],
[-0.0449, -0.1765, -0.1755, 0.1146, -0.0205]]],
[[[-0.0257, -0.0173, 0.1636, 0.1686, 0.1353],
[ 0.0735, -0.0166, 0.0518, 0.0487, -0.1305],
[ 0.0805, -0.1451, -0.1688, 0.1553, 0.0832],
[-0.1906, -0.0629, -0.0977, -0.0849, 0.0804],
[ 0.1186, 0.0771, 0.0833, -0.1357, -0.0735]]]])
torch.Size([6, 1, 5, 5])
###Markdown
Step 3Define an equivalent Paddle network, keeping the attribute names of the layers with parameters (conv1, conv2, fc1, fc2, fc3) aligned with the PyTorch model so their weights can be matched up.
###Code
import paddle
import paddle.fluid as fluid
import numpy as np
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, Linear, Conv2DTranspose
from paddle.fluid.dygraph.base import to_variable
# K.set_image_data_format('channels_first')
# Define the LeNet network structure
class LeNet(fluid.dygraph.Layer):
    def __init__(self, num_classes=10):  # 10 outputs to match the PyTorch LeNet above
        super(LeNet, self).__init__()
        # Convolution and pooling blocks: each conv layer uses a ReLU activation
        # followed by 2x2 max pooling
        self.conv1 = Conv2D(num_channels=1, num_filters=6, filter_size=5, act='relu')
        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv2 = Conv2D(num_channels=6, num_filters=16, filter_size=5, act='relu')
        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        # Fully connected layers, mirroring fc1/fc2/fc3 of the PyTorch model
        self.fc1 = Linear(input_dim=16*5*5, output_dim=120, act='relu')
        self.fc2 = Linear(input_dim=120, output_dim=84, act='relu')
        self.fc3 = Linear(input_dim=84, output_dim=num_classes)
    # Forward computation of the network
    def forward(self, x):
        x = self.conv1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        x = fluid.layers.reshape(x, [x.shape[0], -1])
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x
with fluid.dygraph.guard():
paddle_network = LeNet()
print(paddle_network)
state_dict = paddle_network.state_dict()
# print(util.state_dict_layer_names(state_dict))
for k, v in state_dict.items():
print(k)
print(state_dict['conv1.weight'])
# state_dict.numpy
# test_qat()
###Output
_____no_output_____
###Markdown
Step 4Now simply convert!
###Code
# transfer.pytorch_to_paddle(keras_network, pytorch_network)
p2f_trans.pytorch_to_paddle(pytorch_network, paddle_network)
###Output
Layer names in PyTorch state_dict ['conv1', 'conv2', 'fc1', 'fc2', 'fc3']
Layer names in paddle state_dict ['conv1', 'conv2', 'fc1', 'fc2', 'fc3']
niubi <_io.TextIOWrapper name='save_temp.pdparams' mode='a' encoding='UTF-8'>
niubi <_io.TextIOWrapper name='save_temp.pdparams' mode='a' encoding='UTF-8'>
niubi <_io.TextIOWrapper name='save_temp.pdparams' mode='a' encoding='UTF-8'>
niubi <_io.TextIOWrapper name='save_temp.pdparams' mode='a' encoding='UTF-8'>
niubi <_io.TextIOWrapper name='save_temp.pdparams' mode='a' encoding='UTF-8'>
###Markdown
Done!Now let's check whether it was successful. If it was, both networks should have the same output.
###Code
# Create dummy data
# data = torch.rand(6,1,32,32)
# data_keras = data.numpy()
# data_pytorch = Variable(data, requires_grad=False)
# # Do a forward pass in both frameworks
# keras_pred = keras_network.predict(data_keras)
# pytorch_pred = pytorch_network(data_pytorch).data.numpy()
# Create dummy data
data = torch.rand(6,1,32,32)
data_paddle = data.numpy()
data_pytorch = Variable(data, requires_grad=False)
# Do a forward pass in both frameworks
# (the Paddle dygraph model needs a guard context and a Variable input)
with fluid.dygraph.guard():
    paddle_pred = paddle_network(to_variable(data_paddle)).numpy()
pytorch_pred = pytorch_network(data_pytorch).data.numpy()
# assert keras_pred.shape == pytorch_pred.shape
# plt.axis('Off')
# plt.imshow(keras_pred)
# plt.show()
# plt.axis('Off')
# plt.imshow(pytorch_pred)
# plt.show()
###Output
_____no_output_____
###Markdown
Figure A
###Code
from matplotlib import gridspec
import seaborn as sns
label = "A"
fname, w, h = svgfig.get_figinfo(label)
fig = plt.figure(figsize=cm2inch(w, h))
# gridspec inside gridspec
# outer_grid = gridspec.GridSpec(4, 4, wspace=0.0, hspace=0.0)
gs0 = gridspec.GridSpec(nrows=1, ncols=1)
gs0.update(left=0.1, right=0.55, top=0.9, bottom=0.1, wspace=0.5)
ax = plt.Subplot(fig, gs0[0])
ax.set_title("test")
ax.set_ylim(0, 1); ax.set_xlim(-5, 5)
x = np.random.normal(size=1000)
ax.hist(x, bins=np.linspace(-5, 5, 100), normed=True)
fig.add_subplot(ax)
gs1 = gridspec.GridSpec(nrows=1, ncols=1)
gs1.update(left=0.6, right=0.9, top=0.9, bottom=0.5, wspace=0.5)
ax = plt.Subplot(fig, gs1[0])
tips = sns.load_dataset("tips")
sns.regplot(x="total_bill", y="tip", data=tips, ax=ax)
ax.set_ylabel(""); ax.set_xlabel("")
fig.add_subplot(ax)
gs2 = gridspec.GridSpec(nrows=1, ncols=2)
gs2.update(left=0.6, right=0.9, top=0.4, bottom=0.1, wspace=0.2)
for i in range(2):
ax = plt.Subplot(fig, gs2[i])
ax.set_xlim(0, 10.5); ax.set_xticks([0, 5, 10])
ax.set_ylim(0, 10); ax.set_yticks([0, 5, 10]); ax.set_yticklabels([0, 5, 10])
if i == 1:
ax.set_yticks([0, 5, 10]); ax.set_yticklabels(["", "", ""])
ax.plot(np.arange(10), np.arange(10))
fig.add_subplot(ax)
plt.savefig(fname, format="svg")
add_label(fname, w, h, label)
#plt.savefig(fname, bbox_inches="tight", pad_inches=0.0, format="svg")
label = "B"
fname, w, h = svgfig.get_figinfo(label)
plt.figure(figsize=cm2inch(w, h))
mat = np.random.random(200).reshape(20, 10)
plt.imshow(mat)
plt.colorbar()
plt.tight_layout()
#plt.savefig(fname, bbox_inches="tight", pad_inches=0.0, format="svg")
plt.savefig(fname, format="svg")
add_label(fname, w, h, label)
label = "C"
fname, w, h = svgfig.get_figinfo(label)
# Load the example tips dataset
tips = sns.load_dataset("tips")
plt.figure(figsize=cm2inch(w, h))
# Draw a nested violinplot and split the violins for easier comparison
sns.violinplot(x="day", y="total_bill", hue="sex", data=tips, split=True,
inner="quart", palette={"Male": "b", "Female": "y"})
sns.despine(left=True)
plt.savefig(fname, format="svg")
add_label(fname, w, h, label)
svgfig.assemble()
!open .
###Output
_____no_output_____
###Markdown
Offline notebook exampleYou should see three new buttons:![Offline notebook buttons](./offline-notebook-buttons.png) 1. Make some changes to this notebook (or run it to update the output).2. Do not save the notebook. You can even disconnect from the Jupyter server or your network.3. Click the first button (`Download`). This should prompt you to download the notebook.4. Click the second button (`cloud download`). This should save the current notebook into your browser's [local-storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage).5. Start a new instance of Jupyter, and open the original version of this notebook.6. Click the third button (`cloud upload`). This should restore the copy of the notebook from your browser's local-storage.
###Code
from datetime import datetime
print(datetime.now())
import os
for (k, v) in sorted(os.environ.items()):
print(f'{k}\t{v}')
###Output
_____no_output_____
###Markdown
Example UsageThis is a basic example using the torchvision COCO dataset from coco.py, it assumes that you've already downloaded the COCO images and annotations JSON. You'll notice that the scale augmentations are quite extreme.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import cv2
import numpy as np
from copy_paste import CopyPaste
from coco import CocoDetectionCP
from visualize import display_instances
import albumentations as A
import random
from matplotlib import pyplot as plt
transform = A.Compose([
A.RandomScale(scale_limit=(-0.9, 1), p=1), #LargeScaleJitter from scale of 0.1 to 2
A.PadIfNeeded(256, 256, border_mode=0), #pads with image in the center, not the top left like the paper
A.RandomCrop(256, 256),
CopyPaste(blend=True, sigma=1, pct_objects_paste=0.8, p=1.) #pct_objects_paste is a guess
], bbox_params=A.BboxParams(format="coco", min_visibility=0.05)
)
data = CocoDetectionCP(
'../../datasets/coco/train2014/',
'../../datasets/coco/annotations/instances_train2014.json',
transform
)
f, ax = plt.subplots(1, 2, figsize=(16, 16))
index = random.randint(0, len(data))
img_data = data[index]
image = img_data['image']
masks = img_data['masks']
bboxes = img_data['bboxes']
empty = np.array([])
display_instances(image, empty, empty, empty, empty, show_mask=False, show_bbox=False, ax=ax[0])
if len(bboxes) > 0:
boxes = np.stack([b[:4] for b in bboxes], axis=0)
box_classes = np.array([b[-2] for b in bboxes])
mask_indices = np.array([b[-1] for b in bboxes])
show_masks = np.stack(masks, axis=-1)[..., mask_indices]
class_names = {k: data.coco.cats[k]['name'] for k in data.coco.cats.keys()}
display_instances(image, boxes, show_masks, box_classes, class_names, show_bbox=True, ax=ax[1])
else:
display_instances(image, empty, empty, empty, empty, show_mask=False, show_bbox=False, ax=ax[1])
###Output
_____no_output_____
###Markdown
Example Drawing
###Code
from drawing import *
image = Drawing(400, 400)
image.set_coords(0, 0, 1, 1)
image.add( Polygon([Point(0.1,.3), Point(0.5,0.9), Point(0.9,0.5)]) )
image.add( Text( Point(0.4, 0.2), "It's a triangle!") )
image.draw()
###Output
_____no_output_____
###Markdown
Initialize tracer profile: Plummer sphere. Params: - a: Plummer radius; mass OR density: - mass: total stellar mass $\mathrm{M}_{\odot}$ - density: scale density $\mathrm{M}_{\odot}\,\mathrm{kpc}^{-3}$
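For reference (a standard result, not taken from the package documentation), the Plummer profile implied by these parameters is $\rho(r) = \frac{3M}{4\pi a^{3}}\left(1 + \frac{r^{2}}{a^{2}}\right)^{-5/2}$, so specifying `a` together with either the total mass or the scale density fixes the profile completely.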
###Code
# Plummer parameters:
a = .25 * u.kpc # Plummer radius
mass = 1e5 * u.solMass # Mass of stellar systems #! Doesn't matter - set it to number of tracers you want to draw for easy comparison
#* Initialize stellar distribution
tracer = stellar.plummer(a=a, mass=mass)
# for viewing convenience
tracer
###Output
_____no_output_____
###Markdown
Initialize dark matter profile Hernquist-Zhao: $\rho(r) = \frac{\rho_s}{\left(\frac{r}{r_s}\right)^{a} \left(1+\left(\frac{r}{r_s}\right)^{b}\right)^{\frac{c-a}{b}}}$
###Code
thetaNFW = {'rho_s':2e7 *u.solMass/u.kpc**3, # scale radius
'r_s' : 2*u.kpc, # scale density
'a' : 1, # inner-slope
'b' : 1, # "width" of transition
'c' : 3 # outer-slope
}
dm = stellar.HerquistZhao(**thetaNFW)
dm
dracoLike = stellar.System(dark_matter=dm,tracer=tracer,beta=0,pm=False)
priors={'a' : 1 ,
'lnrho' : 1 ,
'lnr' : 1 ,
'beta' : 1 ,
'b' : 1 ,
'c' : 1 }
R_observed = np.logspace(-2,0,20)*u.kpc
sigma,cov = dracoLike.Covariance(R_observed,dv=2*u.km/u.s,priors=priors)
# print(cov)
sigma
cov
###Output
_____no_output_____
###Markdown
Example UsageThis is a basic example using the torchvision COCO dataset from coco.py, it assumes that you've already downloaded the COCO images and annotations JSON. You'll notice that the scale augmentations are quite extreme.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import cv2
import numpy as np
from copy_paste import CopyPaste
from coco import CocoDetectionCP
from visualize import display_instances
import albumentations as A
import random
from matplotlib import pyplot as plt
from copy_paste import copy_paste_class
from torch.utils.data import Dataset
from PIL import Image  # used below to read the .pbm masks
import glob
@copy_paste_class
class FigaroDataset(Dataset):
def __init__(self, transforms=None):
# super(FigaroDataset, self).__init__(*args)
self.impath = glob.glob('/home/ubuntu/workspace/U-2-Net_portrait_sketch/Figaro1k/train/src/*.jpg')
self.maskpath = glob.glob('/home/ubuntu/workspace/U-2-Net_portrait_sketch/Figaro1k/train/mask/*.pbm')
self.transforms = transforms
def __len__(self):
return len(self.impath)
def load_example(self, idx):
path = self.impath[idx]
mask_path = self.maskpath[idx]
image = cv2.imread(path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
mask = Image.open(mask_path).convert("L")
mask = np.array(mask)
obj_ids = np.unique(mask)
obj_ids = obj_ids[1:]
masks = mask == obj_ids[:, None, None]
num_objs = len(obj_ids)
class_id = 1
boxes = []
for i in range(num_objs):
pos = np.where(masks[i])
xmin = np.min(pos[1])
xmax = np.max(pos[1])
ymin = np.min(pos[0])
ymax = np.max(pos[0])
boxes.append([xmin,ymin,xmax,ymax,class_id])
boxes.append([xmin,ymin,xmax,ymax,class_id])
# print(masks.shape)
# masks = [masks]
masks = [masks.squeeze(0).astype(np.uint8),masks.squeeze(0).astype(np.uint8)]
output = {
'image': image,
'masks': masks,
'bboxes': boxes,
}
return self.transforms(**output)
from copy import deepcopy
from skimage.filters import gaussian
def image_copy_paste(img, paste_img, alpha, blend=True, sigma=1):
if alpha is not None:
if blend:
alpha = gaussian(alpha, sigma=sigma, preserve_range=True)
img_dtype = img.dtype
alpha = alpha[..., None]
img = paste_img * alpha + img * (1 - alpha)
img = img.astype(img_dtype)
return img
def mask_copy_paste(mask, paste_mask, alpha):
raise NotImplementedError
def masks_copy_paste(masks, paste_masks, alpha):
if alpha is not None:
#eliminate pixels that will be pasted over
masks = [
np.logical_and(mask, np.logical_xor(mask, alpha)).astype(np.uint8) for mask in masks
]
masks.extend(paste_masks)
return masks
def extract_bboxes(masks):
bboxes = []
# allow for case of no masks
if len(masks) == 0:
return bboxes
h, w = masks[0].shape
for mask in masks:
yindices = np.where(np.any(mask, axis=0))[0]
xindices = np.where(np.any(mask, axis=1))[0]
if yindices.shape[0]:
y1, y2 = yindices[[0, -1]]
x1, x2 = xindices[[0, -1]]
y2 += 1
x2 += 1
y1 /= w
y2 /= w
x1 /= h
x2 /= h
else:
y1, x1, y2, x2 = 0, 0, 0, 0
bboxes.append((y1, x1, y2, x2))
return bboxes
def bboxes_copy_paste(bboxes, paste_bboxes, masks, paste_masks, alpha, key):
if key == 'paste_bboxes':
return bboxes
elif paste_bboxes is not None:
masks = masks_copy_paste(masks, paste_masks=[], alpha=alpha)
adjusted_bboxes = extract_bboxes(masks)
#only keep the bounding boxes for objects listed in bboxes
mask_indices = [box[-1] for box in bboxes]
adjusted_bboxes = [adjusted_bboxes[idx] for idx in mask_indices]
#append bbox tails (classes, etc.)
adjusted_bboxes = [bbox + tail[4:] for bbox, tail in zip(adjusted_bboxes, bboxes)]
#adjust paste_bboxes mask indices to avoid overlap
if len(masks) > 0:
max_mask_index = len(masks)
else:
max_mask_index = 0
paste_mask_indices = [max_mask_index + ix for ix in range(len(paste_bboxes))]
paste_bboxes = [pbox[:-1] + (pmi,) for pbox, pmi in zip(paste_bboxes, paste_mask_indices)]
adjusted_paste_bboxes = extract_bboxes(paste_masks)
adjusted_paste_bboxes = [apbox + tail[4:] for apbox, tail in zip(adjusted_paste_bboxes, paste_bboxes)]
bboxes = adjusted_bboxes + adjusted_paste_bboxes
return bboxes
bgpath = glob.glob('/mnt/vitasoft/kobaco/dataset/data_processing/kobaco_data/scene/**/*')
from glob import glob  # the calls below use glob() directly as a function
self_impath = glob('/home/ubuntu/workspace/U-2-Net_portrait_sketch/Figaro1k/train/src/*.jpg')
self_maskpath = glob('/home/ubuntu/workspace/U-2-Net_portrait_sketch/Figaro1k/train/mask/*pbm')
self_humanmaskpath = glob('/home/ubuntu/workspace/U-2-Net_portrait_sketch/Figaro1k/train/humanmask/*.png')
# impath = glob.glob('/home/ubuntu/workspace/CelebAMask-HQ/CelebA-HQ-img/*')
# impath = glob.glob('/home/ubuntu/workspace/CelebAMask-HQ/CelebAMask-HQ-mask-anno/*')
self_impath += glob('/home/ubuntu/workspace/CelebAMask-HQ/CelebA-HQ-img/*.jpg')
self_maskpath += glob('/home/ubuntu/workspace/CelebAMask-HQ/mask/*.png')
self_humanmaskpath += glob('/home/ubuntu/workspace/CelebAMask-HQ/humanmask/*.png')
len(self_humanmaskpath)
a = [1,2,3,4,5,6]
a[::2]
import torch
from rvm_model.model import MattingNetwork
from torchvision.transforms import Compose, ToTensor, Resize
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3") # or "resnet50"
from PIL import Image
import cv2
bgr = torch.tensor([.47, 1, .6]).view(3, 1, 1).cuda()
downsample_ratio = 0.8 # Adjust based on your video.
device = 'cuda'
def cv2_frame_to_cuda(frame):
"""
convert cv2 frame to tensor.
"""
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
loader = ToTensor()
return loader(Image.fromarray(frame)).to(device,torch.float32,non_blocking=True).unsqueeze(0)
# path = '/mnt/vitasoft/kobaco_batch/Video.frame/20211112223104/763848F02_1-18-TH-07-531/*'
path = '/mnt/vitasoft/kobaco/sketchy/RobustVideoMatting/src/*.jpg'
from glob import glob
cnt = 0
with torch.no_grad():
for item in glob(path):
cnt+=1
# img = cv2.imread('../U-2-Net/test_img.jpg')
img = cv2.imread(item)
src = cv2_frame_to_cuda(img).cuda()
model.cuda()
model.eval()
rec = [None] * 4
fgr, pha, *rec = model(src.cuda(), *rec, 0.4)
com = fgr * pha + 0 * (1 - pha)
com = com.mul(255).byte().cpu().permute(0, 2, 3, 1).numpy()[0]
com = cv2.cvtColor(com, cv2.COLOR_RGB2BGR)
im = Image.fromarray(com)
a = cv2.hconcat([img,com])
# Image.fromarray(a).show()
display(Image.fromarray(a))
# cv2.imwrite('seg/'+str(cnt)+'.jpg',a)
# display(im)
import os
for path in self_impath:
at_img = cv2.imread(path)
with torch.no_grad():
src = cv2_frame_to_cuda(at_img).cuda()
rec = [None] * 4
fgr, pha, *rec = model(src.cuda(), *rec, 0.4)
fgr[:,:,:,:] = 1
pha[pha>=0.5] = 1
pha[pha<0.5] = 0
com = fgr * pha + 0 * (1 - pha)
com = com.mul(255).byte().cpu().permute(0, 2, 3, 1).numpy()[0]
mask_ = cv2.cvtColor(com, cv2.COLOR_RGB2BGR)
display(Image.fromarray(mask_))
# cv2.imwrite('humanmask/'+os.path.basename(path)[:-4]+'.png', mask_)
self_impath =sorted(self_impath)
# at_mask = cv2.imread(self_maskpath[at_idx])
# at_humanmask = cv2.imread(self_humanmaskpath[at_idx])
self_maskpath = sorted(self_maskpath)
self_humanmaskpath = sorted(self_humanmaskpath)
import random
path = bgpath[200]
cnt = 1
for path in bgpath:
image = cv2.imread(path)
bg_h, bg_w = image.shape[:2]
mask = np.zeros((bg_h, bg_w))
## random choice num of attach images
num_of_at = random.randrange(1, 5)
print(num_of_at)
model.cuda()
model.eval()
while(1):
if num_of_at == 0:
break
at_idx = random.randrange(0,len(self_impath))
print(self_impath[at_idx])
print(self_maskpath[at_idx])
at_img = cv2.imread(self_impath[at_idx])
at_mask = cv2.imread(self_maskpath[at_idx])
at_humanmask = cv2.imread(self_humanmaskpath[at_idx])
at_mask = cv2.cvtColor(at_mask, cv2.COLOR_BGR2GRAY)
## random resize
h, w = at_img.shape[:2]
at_resize_factor = random.uniform(0.1, 1.0)
dsize = (int(w*at_resize_factor),int(h*at_resize_factor))
at_img = cv2.resize(at_img,dsize=dsize)
at_mask = cv2.resize(at_mask,dsize=dsize)
at_humanmask = cv2.resize(at_humanmask,dsize=dsize)
## random locate selection
max_locate_h = bg_h-dsize[1]
max_locate_w = bg_w-dsize[0]
if max_locate_h <= 0 or max_locate_w <= 0:
continue
locate_h = random.randrange(0, max_locate_h)
locate_w = random.randrange(0, max_locate_w)
mask_ = cv2.cvtColor(at_humanmask, cv2.COLOR_BGR2GRAY)
mask_inv = 255 - mask_
fg = cv2.bitwise_and(at_img, at_img, mask=mask_)
crop_image = image[locate_h:locate_h+dsize[1], locate_w:locate_w+dsize[0],:]
bg = cv2.bitwise_and(crop_image, crop_image, mask=mask_inv)
image[locate_h:locate_h+dsize[1], locate_w:locate_w+dsize[0],:] = fg+bg
crop_mask = mask[locate_h:locate_h+dsize[1], locate_w:locate_w+dsize[0]]
crop_mask[at_mask==255] = 255
mask[locate_h:locate_h+dsize[1], locate_w:locate_w+dsize[0]] = crop_mask
num_of_at -= 1
mask = np.expand_dims(mask, axis=2)
mask_sq = np.squeeze(mask, axis=2).astype(np.uint8)
mask_sq = cv2.cvtColor(mask_sq, cv2.COLOR_GRAY2BGR)
# plt.imshow(mask_,)
# plt.show()
# display(Image.fromarray(mask_))
# display(Image.fromarray(image))
# cv2.imwrite('result/'+str(cnt)+'.png',cv2.hconcat([image,mask_sq]))
display(Image.fromarray(cv2.hconcat([image,mask_sq])))
cnt+=1
at_humanmask.shape
# plt.imshow(at_img)
# plt.show()
# plt.imshow(at_mask, cmap='gray')
# plt.show()
#
print(mask.shape)
mask_sq = np.squeeze(mask, axis=2).astype(np.uint8)
print(mask_sq.shape)
mask_sq = cv2.cvtColor(mask_sq, cv2.COLOR_GRAY2BGR)
plt.imshow(cv2.hconcat([image,mask_sq]))
plt.show()
# print(mask.shape)
# plt.imshow(image)
# plt.show()
# plt.imshow(mask, cmap='gray')
# plt.show()
mask = np.zeros(at_img.shape)
mask[at_img==0] = 255
mask = mask.astype(np.uint8)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
plt.imshow(mask,cmap='gray')
plt.show()
print(mask.shape)
print(at_img.shape)
# at_img_gray = cv2.cvtColor(at_img, cv2.COLOR_BGR2GRAY)
print(at_img_gray.shape)
mask_inv = 255 - mask
fg = cv2.bitwise_and(at_img, at_img, mask=mask_inv)
crop_image = image[locate_h:locate_h+dsize[1], locate_w:locate_w+dsize[0],:]
bg = cv2.bitwise_and(crop_image, crop_image, mask=mask)
image[locate_h:locate_h+dsize[1], locate_w:locate_w+dsize[0],:] = fg+bg
plt.imshow(fg)
plt.show()
plt.imshow(bg)
plt.show()
transform = A.Compose([
A.RandomScale(scale_limit=(-0.6, -0.6), p=1), #LargeScaleJitter from scale of 0.1 to 2
A.PadIfNeeded(256, 256, border_mode=0), #pads with image in the center, not the top left like the paper
A.RandomCrop(256, 256),
CopyPaste(blend=True, sigma=1, pct_objects_paste=0.8, p=1., always_apply=True) #pct_objects_paste is a guess
], bbox_params=A.BboxParams(format="pascal_voc", min_visibility=0.05)
)
# blend=True,
# sigma=3,
# pct_objects_paste=0.1,
# max_paste_objects=None,
# p=0.5,
# always_apply=False
transform2 = A.Compose([
# A.RandomScale(scale_limit=(-0.9, 1), p=1), #LargeScaleJitter from scale of 0.1 to 2
# A.PadIfNeeded(256, 256, border_mode=0), #pads with image in the center, not the top left like the paper
# A.RandomCrop(256, 256),
], bbox_params=A.BboxParams(format="coco", min_visibility=0.05)
)
data = CocoDetectionCP(
'./coco/train2014/',
'./coco/annotations/instances_train2014.json',
transform
)
data2 = FigaroDataset(transform2)
import glob
import numpy as np
from PIL import Image
impath = glob.glob('/home/ubuntu/workspace/U-2-Net_portrait_sketch/Figaro1k/train/src/*.jpg')
maskpath = glob.glob('/home/ubuntu/workspace/U-2-Net_portrait_sketch/Figaro1k/train/mask/*.pbm')
path = impath[1]
mask_path = maskpath[1]
image = cv2.imread(path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
mask = Image.open(mask_path).convert("L")
mask = np.array(mask)
obj_ids = np.unique(mask)
obj_ids = obj_ids[1:]
masks = mask == obj_ids[:, None, None]
num_objs = len(obj_ids)
class_id = 1
boxes = []
for i in range(num_objs):
pos = np.where(masks[i])
xmin = np.min(pos[1])
xmax = np.max(pos[1])
ymin = np.min(pos[0])
ymax = np.max(pos[0])
boxes.append([xmin,ymin,xmax,ymax,class_id])
# print(masks.shape)
masks = [masks.squeeze(0)]
output = {
'image': image,
'masks': masks,
'bboxes': boxes,
}
data2 = FigaroDataset(transform)
img_data = data2[16]
image = img_data['image']
masks = img_data['masks']
bboxes = img_data['bboxes']
plt.imshow(image)
plt.show()
print(len(masks))
plt.imshow(masks[0])
plt.show()
data3 = CocoDetectionCP(
'./coco/train2014/',
'./coco/annotations/instances_train2014.json',
transform
)
img_data = data3[3]
image = img_data['image']
masks = img_data['masks']
bboxes = img_data['bboxes']
plt.imshow(image)
plt.show()
masks[0].shape
bgpath = glob.glob('/mnt/vitasoft/kobaco/dataset/data_processing/kobaco_data/scene/**/*')
import shutil
import os
cnt = 0
for path in bgpath:
# print(cv2.imread(path).shape[0])
# print('/mnt/vitasoft/kobaco/dataset/scene1280/' + os.path.basename(path))
# print(path)
if cv2.imread(path).shape[0] == 720:
os.makedirs('/mnt/vitasoft/kobaco/dataset/scene1280/', exist_ok=True)
shutil.copy(path, '/mnt/vitasoft/kobaco/dataset/scene1280/' + str(cnt).zfill(4) + '.jpg')
cnt+=1
else:
print("1")
if cnt >= 4999:
break
bgpath2 = glob.glob('/mnt/vitasoft/kobaco/dataset/scene1280/*.jpg')
len(bgpath2)
path
###Output
_____no_output_____ |
Exploratory Data Analytics - DataFrame Data.ipynb | ###Markdown
Analyzing relationships between variables 1. Correlation Matrix
###Code
corr = data1.corr()  # plot the heatmap
sns.heatmap(corr, annot=True);
###Output
_____no_output_____
###Markdown
2. Scatterplot
###Code
sns.scatterplot('open', 'close', data=data1);
sns.scatterplot('high', 'low', data=data1);
sns.pairplot(data1)
###Output
_____no_output_____
###Markdown
Distribution of the data
###Code
sns.distplot(data1.open);
sns.distplot(data1.close);
sns.distplot(data1.high);
sns.distplot(data1.low);
###Output
_____no_output_____ |
docs/notebooks/Simple_simulations_and_plotting_with_basico.ipynb | ###Markdown
Simple simulations and plottingIn this file, we load a model from the BioModels database, and simulate it for varying durations. We start as usual:
###Code
import sys
sys.path.append('../..')
%matplotlib inline
from basico import *
###Output
_____no_output_____
###Markdown
Load a modelto load the model, we use the `load_biomodel` function, it takes in either an integer, which will be transformed into a valid biomodels id, or you can pass in a valid biomodels id to begin with:
###Code
biomod = load_biomodel(10)
###Output
_____no_output_____
###Markdown
Run time courseAfter the model is loaded, it is ready to be simulated, here we try it for varying durations: Time course duration 100
###Code
tc = run_time_course(duration = 100)
tc.plot();
###Output
_____no_output_____
###Markdown
Time course duration 3000
###Code
tc = run_time_course(duration = 3000)
tc.plot();
###Output
_____no_output_____
###Markdown
Get compartmentsTo get an overview of what elements the model entails, we can query the individual elements, yielding each time a pandas dataframe with the information:
###Code
get_compartments()
###Output
_____no_output_____
###Markdown
Get parameters ("global quantities")
###Code
get_parameters() # no global quantities
###Output
_____no_output_____
###Markdown
Show experimental data from the model
###Code
get_experiment_data_from_model() # no experimental data in this file either
###Output
_____no_output_____
###Markdown
Run steady statewe can also run the model to steady state, in order to see the steady state concentrations:
###Code
run_steadystate()
# now call get_species, to get the steady state concentration and particle numbers
get_species()
###Output
_____no_output_____
###Markdown
Use pandas syntax for indexing and plotting
###Code
tc = run_time_course(model = biomod, duration = 4000)
tc.plot();
tc.loc[:, ['Mek1', 'Mek1-P', 'Mos']].plot()
###Output
_____no_output_____
###Markdown
Get parameter sets
###Code
model = biomod.getModel()
sets = model.getModelParameterSets()
sets.size()
###Output
_____no_output_____ |
IBM_TorontoClustering.ipynb | ###Markdown
Clustering Neighborhoods of TorontoIn this notebook I will run an unsupervised machine learning model known as clustering in order to segment neighborhoods in Toronto based on similarities in venue profiles. I will initially fetch relevant data by web scraping a Wikipedia page using BeautifulSoup and additionally get latitude and longitude of each neighborhood by making requests to geocoder.google. Alternatively I got the coordinates by parsing a .csv file. This will be the working dataframe. After that, I will interact with the Foursquare Restful API in order to get the names of all the venues in each neighborhood I have collected. I will process the venues by categorizing them and encoding them and finally reassigning the most common 10 venues to each respective neighborhood. Finally, I will run the K-means algorithm on this dataframe to cluster the neighborhoods using all existing venue types as dimensions, therefore it is clustering on 268 dimensions. The number of clusters here is chosen arbitrarily to be 5 but in the final project I will illustrate methodologies that will help understand which number of clusters is appropriate to choose based on inertia assessment and silhouette score.
###Code
from bs4 import BeautifulSoup
from time import sleep
import random
from tqdm.notebook import tqdm
import requests
import pandas as pd
from datetime import datetime
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 300)
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
table_contents=[]
table=soup.find('table')
for row in table.findAll('td'):
cell = {}
if row.span.text=='Not assigned':
pass
else:
cell['PostalCode'] = row.p.text[:3]
cell['Borough'] = (row.span.text).split('(')[0]
cell['Neighborhood'] = (((((row.span.text).split('(')[1]).strip(')')).replace(' /',',')).replace(')',' ')).strip(' ')
table_contents.append(cell)
# print(table_contents)
df=pd.DataFrame(table_contents)
df['Borough']=df['Borough'].replace({'Downtown TorontoStn A PO Boxes25 The Esplanade':'Downtown Toronto Stn A',
'East TorontoBusiness reply mail Processing Centre969 Eastern':'East Toronto Business',
'EtobicokeNorthwest':'Etobicoke Northwest','East YorkEast Toronto':'East York/East Toronto',
'MississaugaCanada Post Gateway Processing Centre':'Mississauga'})
df.head(20)
df.shape
from tqdm import tqdm
# import geocoder # import geocoder
# # initialize your variable to None
# lat_lng_coords = None
# coords=[]
# # loop until you get the coordinates
# for i in tqdm(df['PostalCode']):
# while(lat_lng_coords is None):
# g = geocoder.google('{}, Toronto, Ontario'.format(i))
# lat_lng_coords = g.latlng
# if not lat_lng_coords is None:
# coords=coords.append(lat_lng_coords)
# latitude = lat_lng_coords[0]
# longitude = lat_lng_coords[1]
coor_df=pd.read_csv('Geospatial_Coordinates.csv')
coor_df.head()
data = pd.merge(df, coor_df, how='left', left_on=['PostalCode'], right_on = ['Postal Code'])
data.head()
data=data.drop(['Postal Code'], axis=1)
data.head()
# @hidden_cell
# Foursquare API Info
CLIENT_ID = 'ICFBT5OVJYI4T5MMRIB4ZTRROAG0TENZUKD0FDSK5QY2SS55' # your Foursquare ID
CLIENT_SECRET = 'HYP1N21AMM2SL0QATQU2OYAKXAHDALIEVV1PH0F5Y3AZNCSB' # your Foursquare Secret
VERSION = '20210325' # Foursquare API version
LIMIT = 100 # A default Foursquare API limit value
# Function to call the Foursquare API taken from the Lab for NYC venues
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
toronto_venues = getNearbyVenues(data['Neighborhood'], data['Latitude'], data['Longitude'])
toronto_venues.head()
len(toronto_venues['Venue Category'].unique())
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
###Output
_____no_output_____
###Markdown
find 10 top venues for each neighborhood
###Code
# Function taken from the lab
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
import numpy as np
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
from sklearn.cluster import KMeans
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = data
# merge manhattan_grouped with manhattan_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
toronto_merged.dropna(inplace=True)
toronto_merged['Cluster Labels'] = toronto_merged['Cluster Labels'].astype(int)
toronto_merged # check the last columns!
!pip install folium
import folium
import matplotlib.cm as cm
import matplotlib.colors as colors
# create map
map_clusters = folium.Map(location=[43.65, -79.38], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
Requirement already satisfied: folium in /usr/local/lib/python3.7/dist-packages (0.8.3)
Requirement already satisfied: branca>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from folium) (0.4.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from folium) (1.19.5)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from folium) (2.11.3)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from folium) (1.15.0)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from folium) (2.23.0)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->folium) (1.1.1)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->folium) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->folium) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->folium) (2020.12.5)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->folium) (1.24.3)
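###Markdown
As a sketch of the cluster-count assessment mentioned in the introduction (assuming `toronto_grouped_clustering` from above is still in memory), the loop below records the K-means inertia and silhouette score for a range of k values; an elbow in the inertia curve and a peak in the silhouette score suggest a reasonable number of clusters.
###Code
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

k_values = range(2, 11)
inertias, silhouettes = [], []
for k in k_values:
    km = KMeans(n_clusters=k, random_state=0).fit(toronto_grouped_clustering)
    inertias.append(km.inertia_)  # within-cluster sum of squares
    silhouettes.append(silhouette_score(toronto_grouped_clustering, km.labels_))

for k, inertia, sil in zip(k_values, inertias, silhouettes):
    print(f'k={k}: inertia={inertia:.2f}, silhouette={sil:.3f}')
###Output
_____no_output_____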
|
Example/Hooks.ipynb | ###Markdown
Save button calls your supplied Python function
###Code
foodfns = sorted(os.listdir('./foods/'))
targets = np.zeros((len(foodfns), 4), dtype='int') # (x,y,w,h) for each data row
def my_save_hook(uindexes):
np.savetxt("foodboxes.csv", targets, delimiter=",", fmt="%d")
return True # Tell Innotater the save was successful (we just assume so here...)
Innotater( ImageInnotation(foodfns, path='./foods'), BoundingBoxInnotation(targets), save_hook=my_save_hook )
###Output
_____no_output_____
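###Markdown
A hedged variant of the same idea (reusing the `targets` array defined above): the hook below uses the `uindexes` argument, described in the note that follows, to report which rows changed before persisting the full array.
###Code
def incremental_save_hook(uindexes):
    # uindexes lists the data rows edited since the last successful save
    print('Rows changed since last save:', list(uindexes))
    np.savetxt("foodboxes.csv", targets, delimiter=",", fmt="%d")  # still persist everything
    return True  # report success back to Innotater
###Output
_____no_output_____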
###Markdown
Click the Save button above after making changes, and a csv file will be saved containing your latest data.Your function should return True if the save was successful, or False if it failed so that the data is still treated as needing to be saved.The uindexes parameter is a list of integers telling you which indexes have been changed through the Innotater. Custom Buttons calling your own Python functionThe ButtonInnotation allows you to provide custom button functionality.In this example, there is a button to reset everything in the current sample, and buttons to reset each bounding box.
###Code
animalfns = sorted(os.listdir('./animals/'))
repeats = 8
# Per-photo data
classes = ['cat', 'dog']
targets_type = np.zeros((len(animalfns), len(classes)), dtype='int') # One-hot encoding
# Repeats within each photo
targets_bboxes = np.zeros((len(animalfns), repeats, 4), dtype='int') # (x,y,w,h) for each animal
def reset_click(uindex, repeat_index, **kwargs):
# uindex is the (underlying) index of the data sample where the button was clicked
# repeat_index will be the sub-index of the row in a RepeatInnotation, or -1 if at the top level
# kwargs will contain name and desc fields
if repeat_index == -1: # This was a top-level button (no sub-index within the RepeatInnotation)
# So reset everything
targets_type[uindex] = [1,0]
for i in range(repeats):
targets_bboxes[uindex, i, :] = 0
else:
# Only reset the row with repeat_index
targets_bboxes[uindex, repeat_index, :] = 0
return True # Tell Innotater the data at uindex was changed
Innotater(
ImageInnotation(animalfns, path='./animals', width=400, height=300),
[
MultiClassInnotation(targets_type, name='Animal Type', classes=classes, dropdown=False),
RepeatInnotation(
(ButtonInnotation, None, {'desc': 'X', 'on_click': reset_click, 'layout': {'width': '40px'}}),
(BoundingBoxInnotation, targets_bboxes),
max_repeats=repeats, min_repeats=1
),
ButtonInnotation(None, name='Reset All', on_click=reset_click)
]
)
###Output
_____no_output_____ |
docs/user-guide/mini-batching.ipynb | ###Markdown
Mini-batching In its purest form, online machine learning encompasses models which learn with one sample at a time. This is the design which is used in `river`.The main downside of single-instance processing is that it doesn't scale to big data, at least not in the sense of traditional batch learning. Indeed, processing one sample at a time means that we are unable to fully take advantage of [vectorisation](https://www.wikiwand.com/en/Vectorization) and other computational tools that are taken for granted in batch learning. On top of this, processing a large dataset in `river` essentially involves a Python `for` loop, which might be too slow for some usecases. However, this doesn't mean that `river` is slow. In fact, for processing a single instance, `river` is actually a couple of orders of magnitude faster than libraries such as scikit-learn, PyTorch, and Tensorflow. The reason why is because `river` is designed from the ground up to process a single instance, whereas the majority of other libraries choose to care about batches of data. Both approaches offer different compromises, and the best choice depends on your usecase.In order to propose the best of both worlds, `river` offers some limited support for mini-batch learning. Some of `river`'s estimators implement `*_many` methods on top of their `*_one` counterparts. For instance, `preprocessing.StandardScaler` has a `learn_many` method as well as a `transform_many` method, in addition to `learn_one` and `transform_one`. Each mini-batch method takes as input a `pandas.DataFrame`. Supervised estimators also take as input a `pandas.Series` of target values. We choose to use `pandas.DataFrames` over `numpy.ndarrays` because of the simple fact that the former allows us to name each feature. This in turn allows us to offer a uniform interface for both single instance and mini-batch learning.As an example, we will build a simple pipeline that scales the data and trains a logistic regression. Indeed, the `compose.Pipeline` class can be applied to mini-batches, as long as each step is able to do so.
###Code
from river import compose
from river import linear_model
from river import preprocessing
model = compose.Pipeline(
preprocessing.StandardScaler(),
linear_model.LogisticRegression()
)
###Output
_____no_output_____
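###Markdown
Before moving to a real dataset, here is a minimal sketch of the `*_many` interface on its own; the toy dataframe and column names are made up for illustration:
###Code
import pandas as pd
from river import preprocessing

scaler = preprocessing.StandardScaler()
X_batch = pd.DataFrame({'x1': [1.0, 2.0, 3.0], 'x2': [10.0, 20.0, 30.0]})

scaler.learn_many(X_batch)      # update the running statistics with a mini-batch
scaler.transform_many(X_batch)  # returns a scaled pandas.DataFrame
###Output
_____no_output_____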
###Markdown
For this example, we will use `datasets.Higgs`.
###Code
from river import datasets
dataset = datasets.Higgs()
if not dataset.is_downloaded:
dataset.download()
dataset
###Output
_____no_output_____
###Markdown
The easiest way to read the data in a mini-batch fashion is to use the `read_csv` from `pandas`.
###Code
import pandas as pd
names = [
'target', 'lepton pT', 'lepton eta', 'lepton phi',
'missing energy magnitude', 'missing energy phi',
'jet 1 pt', 'jet 1 eta', 'jet 1 phi', 'jet 1 b-tag',
'jet 2 pt', 'jet 2 eta', 'jet 2 phi', 'jet 2 b-tag',
'jet 3 pt', 'jet 3 eta', 'jet 3 phi', 'jet 3 b-tag',
'jet 4 pt', 'jet 4 eta', 'jet 4 phi', 'jet 4 b-tag',
'm_jj', 'm_jjj', 'm_lv', 'm_jlv', 'm_bb', 'm_wbb', 'm_wwbb'
]
for x in pd.read_csv(dataset.path, names=names, chunksize=8096, nrows=3e5):
y = x.pop('target')
y_pred = model.predict_proba_many(x)
model.learn_many(x, y)
###Output
_____no_output_____
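###Markdown
The pipeline trained above with mini-batches can still score a single observation at a time, as discussed below; a minimal sketch, reusing one row of the last chunk:
###Code
# take a single observation from the last mini-batch and score it as a dict of features
single_x = x.iloc[0].to_dict()
model.predict_one(single_x)
###Output
_____no_output_____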
###Markdown
If you are familiar with scikit-learn, you might be aware that [some](https://scikit-learn.org/dev/computing/scaling_strategies.htmlincremental-learning) of their estimators have a `partial_fit` method, which is similar to river's `learn_many` method. Here are some advantages that river has over scikit-learn:- We guarantee that river is just as fast as, if not faster than, scikit-learn. The differences are negligible, but are slightly in favor of river.- We take as input dataframes, which allows us to name each feature. The benefit is that you can add/remove/permute features between batches and everything will keep working.- Estimators that support mini-batches also support single instance learning. This means that you can enjoy the best of both worlds. For instance, you can train with mini-batches and use `predict_one` to make predictions. Note that you can check which estimators can process mini-batches programmatically:
###Code
import importlib
import inspect
def can_mini_batch(obj):
return hasattr(obj, 'learn_many')
for module in importlib.import_module('river').__all__:
if module in ['datasets', 'synth']:
continue
for name, obj in inspect.getmembers(importlib.import_module(f'river.{module}'), can_mini_batch):
print(name)
###Output
MiniBatchClassifier
MiniBatchRegressor
SKL2RiverClassifier
SKL2RiverRegressor
Pipeline
BagOfWords
TFIDF
LinearRegression
LogisticRegression
Perceptron
OneVsRestClassifier
BernoulliNB
ComplementNB
MultinomialNB
MLPRegressor
StandardScaler
###Markdown
Mini-batching In its purest form, online machine learning encompasses models which learn with one sample at a time. This is the design which is used in `river`.The main downside of single-instance processing is that it doesn't scale to big data, at least not in the sense of traditional batch learning. Indeed, processing one sample at a time means that we are unable to fully take advantage of [vectorisation](https://www.wikiwand.com/en/Vectorization) and other computational tools that are taken for granted in batch learning. On top of this, processing a large dataset in `river` essentially involves a Python `for` loop, which might be too slow for some usecases. However, this doesn't mean that `river` is slow. In fact, for processing a single instance, `river` is actually a couple of orders of magnitude faster than libraries such as scikit-learn, PyTorch, and Tensorflow. The reason why is because `river` is designed from the ground up to process a single instance, whereas the majority of other libraries choose to care about batches of data. Both approaches offer different compromises, and the best choice depends on your usecase.In order to propose the best of both worlds, `river` offers some limited support for mini-batch learning. Some of `river`'s estimators implement `*_many` methods on top of their `*_one` counterparts. For instance, `preprocessing.StandardScaler` has a `learn_many` method as well as a `transform_many` method, in addition to `learn_one` and `transform_one`. Each mini-batch method takes as input a `pandas.DataFrame`. Supervised estimators also take as input a `pandas.Series` of target values. We choose to use `pandas.DataFrames` over `numpy.ndarrays` because of the simple fact that the former allows us to name each feature. This in turn allows us to offer a uniform interface for both single instance and mini-batch learning.As an example, we will build a simple pipeline that scales the data and trains a logistic regression. Indeed, the `compose.Pipeline` class can be applied to mini-batches, as long as each step is able to do so.
###Code
from river import compose
from river import linear_model
from river import preprocessing
model = compose.Pipeline(
preprocessing.StandardScaler(),
linear_model.LogisticRegression()
)
###Output
_____no_output_____
###Markdown
For this example, we will use `datasets.Higgs`.
###Code
from river import datasets
dataset = datasets.Higgs()
if not dataset.is_downloaded:
dataset.download()
dataset
###Output
Downloading https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz (2.62 GB)
###Markdown
The easiest way to read the data in a mini-batch fashion is to use the `read_csv` from `pandas`.
###Code
import pandas as pd
names = [
'target', 'lepton pT', 'lepton eta', 'lepton phi',
'missing energy magnitude', 'missing energy phi',
'jet 1 pt', 'jet 1 eta', 'jet 1 phi', 'jet 1 b-tag',
'jet 2 pt', 'jet 2 eta', 'jet 2 phi', 'jet 2 b-tag',
'jet 3 pt', 'jet 3 eta', 'jet 3 phi', 'jet 3 b-tag',
'jet 4 pt', 'jet 4 eta', 'jet 4 phi', 'jet 4 b-tag',
'm_jj', 'm_jjj', 'm_lv', 'm_jlv', 'm_bb', 'm_wbb', 'm_wwbb'
]
for x in pd.read_csv(dataset.path, names=names, chunksize=8096, nrows=3e5):
y = x.pop('target')
y_pred = model.predict_proba_many(x)
model.learn_many(x, y)
###Output
_____no_output_____
###Markdown
If you are familiar with scikit-learn, you might be aware that [some](https://scikit-learn.org/dev/computing/scaling_strategies.htmlincremental-learning) of their estimators have a `partial_fit` method, which is similar to river's `learn_many` method. Here are some advantages that river has over scikit-learn:- We guarantee that river is just as fast as, if not faster than, scikit-learn. The differences are negligible, but are slightly in favor of river.- We take as input dataframes, which allows us to name each feature. The benefit is that you can add/remove/permute features between batches and everything will keep working.- Estimators that support mini-batches also support single instance learning. This means that you can enjoy the best of both worlds. For instance, you can train with mini-batches and use `predict_one` to make predictions. Note that you can check which estimators can process mini-batches programmatically:
###Code
import importlib
import inspect
def can_mini_batch(obj):
return hasattr(obj, 'learn_many')
for module in importlib.import_module('river').__all__:
if module in ['datasets', 'synth']:
continue
for name, obj in inspect.getmembers(importlib.import_module(f'river.{module}'), can_mini_batch):
print(name)
###Output
MiniBatchClassifier
MiniBatchRegressor
SKL2RiverClassifier
SKL2RiverRegressor
Pipeline
LinearRegression
LogisticRegression
Perceptron
OneVsRestClassifier
StandardScaler
###Markdown
Mini-batching In its purest form, online machine learning encompasses models which learn with one sample at a time. This is the design which is used in `river`.The main downside of single-instance processing is that it doesn't scale to big data, at least not in the sense of traditional batch learning. Indeed, processing one sample at a time means that we are unable to fully take advantage of [vectorisation](https://www.wikiwand.com/en/Vectorization) and other computational tools that are taken for granted in batch learning. On top of this, processing a large dataset in `river` essentially involves a Python `for` loop, which might be too slow for some usecases. However, this doesn't mean that `river` is slow. In fact, for processing a single instance, `river` is actually a couple of orders of magnitude faster than libraries such as scikit-learn, PyTorch, and Tensorflow. The reason why is because `river` is designed from the ground up to process a single instance, whereas the majority of other libraries choose to care about batches of data. Both approaches offer different compromises, and the best choice depends on your usecase.In order to propose the best of both worlds, `river` offers some limited support for mini-batch learning. Some of `river`'s estimators implement `*_many` methods on top of their `*_one` counterparts. For instance, `preprocessing.StandardScaler` has a `learn_many` method as well as a `transform_many` method, in addition to `learn_one` and `transform_one`. Each mini-batch method takes as input a `pandas.DataFrame`. Supervised estimators also take as input a `pandas.Series` of target values. We choose to use `pandas.DataFrames` over `numpy.ndarrays` because of the simple fact that the former allows us to name each feature. This in turn allows us to offer a uniform interface for both single instance and mini-batch learning.As an example, we will build a simple pipeline that scales the data and trains a logistic regression. Indeed, the `compose.Pipeline` class can be applied to mini-batches, as long as each step is able to do so.
###Code
from river import compose
from river import linear_model
from river import preprocessing
model = compose.Pipeline(
preprocessing.StandardScaler(),
linear_model.LogisticRegression()
)
###Output
_____no_output_____
###Markdown
For this example, we will use `datasets.Higgs`.
###Code
from river import datasets
dataset = datasets.Higgs()
if not dataset.is_downloaded:
dataset.download()
dataset
###Output
Downloading https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz (2.62 GB)
###Markdown
The easiest way to read the data in a mini-batch fashion is to use the `read_csv` from `pandas`.
###Code
import pandas as pd
names = [
'target', 'lepton pT', 'lepton eta', 'lepton phi',
'missing energy magnitude', 'missing energy phi',
'jet 1 pt', 'jet 1 eta', 'jet 1 phi', 'jet 1 b-tag',
'jet 2 pt', 'jet 2 eta', 'jet 2 phi', 'jet 2 b-tag',
'jet 3 pt', 'jet 3 eta', 'jet 3 phi', 'jet 3 b-tag',
'jet 4 pt', 'jet 4 eta', 'jet 4 phi', 'jet 4 b-tag',
'm_jj', 'm_jjj', 'm_lv', 'm_jlv', 'm_bb', 'm_wbb', 'm_wwbb'
]
for x in pd.read_csv(dataset.path, names=names, chunksize=8096, nrows=3e5):
y = x.pop('target')
y_pred = model.predict_proba_many(x)
model.learn_many(x, y)
###Output
_____no_output_____
###Markdown
If you are familiar with scikit-learn, you might be aware that [some](https://scikit-learn.org/stable/modules/computing.htmlincremental-learning) of their estimators have a `partial_fit` method, which is similar to river's `learn_many` method. Here are some advantages that river has over scikit-learn:- We guarantee that river is just as fast as, if not faster than, scikit-learn. The differences are negligible, but are slightly in favor of river.- We take as input dataframes, which allows us to name each feature. The benefit is that you can add/remove/permute features between batches and everything will keep working.- Estimators that support mini-batches also support single instance learning. This means that you can enjoy the best of both worlds. For instance, you can train with mini-batches and use `predict_one` to make predictions. Note that you can check which estimators can process mini-batches programmatically:
###Code
import importlib
import inspect
def can_mini_batch(obj):
return hasattr(obj, 'learn_many')
for module in importlib.import_module('river').__all__:
if module in ['datasets', 'synth']:
continue
for name, obj in inspect.getmembers(importlib.import_module(f'river.{module}'), can_mini_batch):
print(name)
###Output
MiniBatchClassifier
MiniBatchRegressor
SKL2RiverClassifier
SKL2RiverRegressor
Pipeline
LinearRegression
LogisticRegression
Perceptron
OneVsRestClassifier
StandardScaler
###Markdown
Mini-batching In its purest form, online machine learning encompasses models which learn with one sample at a time. This is the design which is used in `river`.The main downside of single-instance processing is that it doesn't scale to big data, at least not in the sense of traditional batch learning. Indeed, processing one sample at a time means that we are unable to fully take advantage of [vectorisation](https://www.wikiwand.com/en/Vectorization) and other computational tools that are taken for granted in batch learning. On top of this, processing a large dataset in `river` essentially involves a Python `for` loop, which might be too slow for some usecases. However, this doesn't mean that `river` is slow. In fact, for processing a single instance, `river` is actually a couple of orders of magnitude faster than libraries such as scikit-learn, PyTorch, and Tensorflow. The reason why is because `river` is designed from the ground up to process a single instance, whereas the majority of other libraries choose to care about batches of data. Both approaches offer different compromises, and the best choice depends on your usecase.In order to propose the best of both worlds, `river` offers some limited support for mini-batch learning. Some of `river`'s estimators implement `*_many` methods on top of their `*_one` counterparts. For instance, `preprocessing.StandardScaler` has a `learn_many` method as well as a `transform_many` method, in addition to `learn_one` and `transform_one`. Each mini-batch method takes as input a `pandas.DataFrame`. Supervised estimators also take as input a `pandas.Series` of target values. We choose to use `pandas.DataFrames` over `numpy.ndarrays` because of the simple fact that the former allows us to name each feature. This in turn allows us to offer a uniform interface for both single instance and mini-batch learning.As an example, we will build a simple pipeline that scales the data and trains a logistic regression. Indeed, the `compose.Pipeline` class can be applied to mini-batches, as long as each step is able to do so.
###Code
from river import compose
from river import linear_model
from river import preprocessing
model = compose.Pipeline(
preprocessing.StandardScaler(),
linear_model.LogisticRegression()
)
###Output
_____no_output_____
###Markdown
For this example, we will use `datasets.Higgs`.
###Code
from river import datasets
dataset = datasets.Higgs()
if not dataset.is_downloaded:
dataset.download()
dataset
###Output
_____no_output_____
###Markdown
The easiest way to read the data in a mini-batch fashion is to use the `read_csv` from `pandas`.
###Code
import pandas as pd
names = [
'target', 'lepton pT', 'lepton eta', 'lepton phi',
'missing energy magnitude', 'missing energy phi',
'jet 1 pt', 'jet 1 eta', 'jet 1 phi', 'jet 1 b-tag',
'jet 2 pt', 'jet 2 eta', 'jet 2 phi', 'jet 2 b-tag',
'jet 3 pt', 'jet 3 eta', 'jet 3 phi', 'jet 3 b-tag',
'jet 4 pt', 'jet 4 eta', 'jet 4 phi', 'jet 4 b-tag',
'm_jj', 'm_jjj', 'm_lv', 'm_jlv', 'm_bb', 'm_wbb', 'm_wwbb'
]
for x in pd.read_csv(dataset.path, names=names, chunksize=8096, nrows=3e5):
y = x.pop('target')
y_pred = model.predict_proba_many(x)
model.learn_many(x, y)
###Output
_____no_output_____
###Markdown
If you are familiar with scikit-learn, you might be aware that [some](https://scikit-learn.org/stable/modules/computing.htmlincremental-learning) of their estimators have a `partial_fit` method, which is similar to river's `learn_many` method. Here are some advantages that river has over scikit-learn:- We guarantee that river is just as fast as, if not faster than, scikit-learn. The differences are negligible, but are slightly in favor of river.- We take as input dataframes, which allows us to name each feature. The benefit is that you can add/remove/permute features between batches and everything will keep working.- Estimators that support mini-batches also support single instance learning. This means that you can enjoy the best of both worlds. For instance, you can train with mini-batches and use `predict_one` to make predictions. Note that you can check which estimators can process mini-batches programmatically:
###Code
import importlib
import inspect
def can_mini_batch(obj):
return hasattr(obj, 'learn_many')
for module in importlib.import_module('river').__all__:
if module in ['datasets', 'synth']:
continue
for name, obj in inspect.getmembers(importlib.import_module(f'river.{module}'), can_mini_batch):
print(name)
###Output
MiniBatchClassifier
MiniBatchRegressor
SKL2RiverClassifier
SKL2RiverRegressor
Pipeline
LinearRegression
LogisticRegression
Perceptron
OneVsRestClassifier
StandardScaler
###Markdown
Mini-batching In its purest form, online machine learning encompasses models which learn with one sample at a time. This is the design which is used in `creme`.The main downside of single-instance processing is that it doesn't scale to big data. Indeed, processing one sample at a time means that we are unable to fully take advantage of [vectorisation](https://www.wikiwand.com/en/Vectorization) and other computational tools that are taken for granted in batch learning. On top of this, processing a large dataset in `creme` essentially involves a Python `for` loop, which might be too slow for some usecases. However, this doesn't mean that `creme` is slow. In fact, for processing a single instance, `creme` is actually a couple of orders of magnitude faster than libraries such as scikit-learn, PyTorch, and Tensorflow. The reason why is because `creme` is designed from the ground up to process a single instance, whereas the majority of other libraries choose to care about batches of data. Both approaches offer different compromises, and the best choice depends on your usecase.In order to propose the best of both worlds, `creme` offers some limited support for mini-batch learning. Some of `creme`'s estimators implement `*_many` methods on top of their `*_one` counterparts. For instance, `preprocessing.StandardScaler` has a `fit_many` method as well as a `transform_many` method, in addition to `fit_one` and `transform_one`. Each mini-batch method takes as input a `pandas.DataFrame`. Supervised estimators also take as input a `pandas.Series` of target values. We choose to use `pandas.DataFrames` over `numpy.ndarrays` because of the simple fact that the former allows us to name each feature. This in turn allows us to offer a uniform interface for both single instance and mini-batch learning.As an example, we will build a simple pipeline that scales the data and trains a logistic regression. Indeed, the `compose.Pipeline` class can be applied to mini-batches, as long as each step is able to do so.
###Code
from creme import compose
from creme import linear_model
from creme import preprocessing
model = compose.Pipeline(
preprocessing.StandardScaler(),
linear_model.LogisticRegression()
)
###Output
_____no_output_____
###Markdown
For this example, we will use `datasets.Higgs`.
###Code
from creme import datasets
dataset = datasets.Higgs()
if not dataset.is_downloaded:
dataset.download()
dataset
###Output
Downloading https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz (2.62 GB)
###Markdown
The easiest way to read the data in a mini-batch fashion is to use the `read_csv` from `pandas`.
###Code
import pandas as pd
names = [
'target', 'lepton pT', 'lepton eta', 'lepton phi',
'missing energy magnitude', 'missing energy phi',
'jet 1 pt', 'jet 1 eta', 'jet 1 phi', 'jet 1 b-tag',
'jet 2 pt', 'jet 2 eta', 'jet 2 phi', 'jet 2 b-tag',
'jet 3 pt', 'jet 3 eta', 'jet 3 phi', 'jet 3 b-tag',
'jet 4 pt', 'jet 4 eta', 'jet 4 phi', 'jet 4 b-tag',
'm_jj', 'm_jjj', 'm_lv', 'm_jlv', 'm_bb', 'm_wbb', 'm_wwbb'
]
for x in pd.read_csv(dataset.path, names=names, chunksize=8096, nrows=3e5):
y = x.pop('target')
y_pred = model.predict_proba_many(x)
model.fit_many(x, y)
###Output
_____no_output_____
###Markdown
If you are familiar with scikit-learn, you might be aware that [some](https://scikit-learn.org/stable/modules/computing.htmlincremental-learning) of their estimators have a `partial_fit` method, which is similar to creme's `fit_many` method. Here are some advantages that creme has over scikit-learn:- We guarantee that creme is just as fast as, if not faster than, scikit-learn. The differences are negligible, but are slightly in favor of creme.- We take as input dataframes, which allows us to name each feature. The benefit is that you can add/remove/permute features between batches and everything will keep working.- Estimators that support mini-batches also support single instance learning. This means that you can enjoy the best of both worlds. For instance, you can train with mini-batches and use `predict_one` to make predictions. Note that you can check which estimators can process mini-batches programmatically:
###Code
import importlib
import inspect
def can_mini_batch(obj):
return hasattr(obj, 'fit_many')
for module in importlib.import_module('creme').__all__:
for name, obj in inspect.getmembers(importlib.import_module(f'creme.{module}'), can_mini_batch):
print(name)
###Output
Pipeline
LinearRegression
LogisticRegression
StandardScaler
|
SMS Classifier - NLP.ipynb | ###Markdown
Text Preprocessing
###Code
import string
mess = 'Sample msg! Notice it has punctuation.'
###Output
_____no_output_____
###Markdown
**Removing punctuation using a list comprehension**
###Code
nopunc = [c for c in mess if c not in string.punctuation]
nopunc
from nltk.corpus import stopwords
stopwords.words('english')
nopunc = ''.join(nopunc)
nopunc.split()
clean_msg = [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
clean_msg
###Output
_____no_output_____
###Markdown
**Converting above cells into a function**
###Code
def process_text(mess):
nopunc = [c for c in mess if c not in string.punctuation]
nopunc = ''.join(nopunc)
return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
messages.head()
messages['message'].apply(process_text)
from sklearn.feature_extraction.text import CountVectorizer
bow_transformer = CountVectorizer(analyzer=process_text).fit(messages['message'])
print(len(bow_transformer.vocabulary_))
mess4 = messages['message'][3]
mess4
bow4 = bow_transformer.transform([mess4])
print(bow4)
print(bow4.shape)
bow_transformer.get_feature_names()[9554]
messages_bow = bow_transformer.transform(messages['message'])
messages_bow.shape
messages_bow.nnz
sparsity = (100.0 * messages_bow.nnz / (messages_bow.shape[0] * messages_bow.shape[1]))
print('sparsity: {}'.format(sparsity))
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer().fit(messages_bow)
tfidf4 = tfidf_transformer.transform(bow4)
print(tfidf4)
tfidf_transformer.idf_[bow_transformer.vocabulary_['university']]
messages_tfidf = tfidf_transformer.transform(messages_bow)
from sklearn.naive_bayes import MultinomialNB
spam_detect_model = MultinomialNB().fit(messages_tfidf,messages['label'])
spam_detect_model.predict(tfidf4)[0]
messages['label'][3]
all_pred = spam_detect_model.predict(messages_tfidf)
all_pred
###Output
_____no_output_____
###Markdown
Data pipeline
###Code
from sklearn.model_selection import train_test_split
msg_train, msg_test, label_train, label_test = train_test_split(messages['message'], messages['label'], test_size=0.3)
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('bow',CountVectorizer(analyzer=process_text)),
('tfidf',TfidfTransformer()),
('classifier',MultinomialNB())
])
pipeline.fit(msg_train,label_train)
predictions = pipeline.predict(msg_test)
from sklearn.metrics import classification_report
print(classification_report(label_test,predictions))
###Output
precision recall f1-score support
ham 0.96 1.00 0.98 1442
spam 1.00 0.73 0.84 230
accuracy 0.96 1672
macro avg 0.98 0.87 0.91 1672
weighted avg 0.96 0.96 0.96 1672
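###Markdown
**Scoring new messages** A minimal usage sketch, assuming the fitted `pipeline` from above is still in memory; the two example texts are made up:
###Code
new_messages = [
    'Congratulations! You have won a free prize, call now to claim.',
    'Are we still meeting for lunch tomorrow?'
]
pipeline.predict(new_messages)  # returns an array of 'spam'/'ham' labels
###Output
_____no_output_____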
|
06 - exercise.ipynb | ###Markdown
Exercise Calculate 7 to the power of 4
###Code
7**4
# split this string into a list
s = 'Have a great day'
s.split()
# split this string on the ";" symbol
s = 'Item1; item2; 12345; ???'
s.split(';')
name = 'Avi'
topic = 'Python'
number = 99
# print in a formatted way print(f'...)
# Avi is teaching Python and winning number is 99
# using the variables. change the variables to see different results
print(f'{name} is teaching {topic} and the winning number is {number}')
# try printing the number 20 from this crazy list
crazy_list = [7,100, [1,2,3], {'key1': [3,4,5], 'key2': "savta Haya", 'key3': ['a','b', [30,20,10]]}]
crazy_list[3]['key3'][2][1]
# add a new item to the list
l = [10,9,8]
l.append('kuku')
l
#pop out 'Avi' from the list and insert it to a new variable
l = [1,2,3,4,'Avi',5,6,7]
a = l.pop(4)
print(f'a is {a}')
print(f'and the updated list is \n\t{l}')
# print this sentence in all upper case
s = 'This is fun!'
print(s.upper())
###Output
THIS IS FUN!
|
notebooks/3.1_Model_Age_Ratings_with_Audio_Features.ipynb | ###Markdown
Summary: Use audio features to predict age ratings.This notebook uses a `DecisionTreeRegressor` model with 13 audio features (key, acousticness, tempo, duration, etc.) and popularity to predict age ratings. The model achieves an $R^2$ score of 0.50, with popularity and duration being the two most important features.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import spotipy
import os
%matplotlib inline
###Output
_____no_output_____
###Markdown
Track data: Features and Age Ratings. Audio feature definitions: https://developer.spotify.com/documentation/web-api/reference//operations/get-audio-features. valence: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
###Code
data = pd.read_csv('../data/all.csv')
print (data.columns)
print (data.shape)
song_features = ['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', \
'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', \
'type', 'id', 'uri', 'track_href', 'analysis_url', 'duration_ms', 'time_signature']
columns = ['key','mode', 'time_signature', 'duration_min','popularity', 'danceability', 'energy','loudness', 'speechiness',
'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo']
data['duration_min'] = data['duration_ms']/10**3/60
data = data.dropna(subset=columns)
data = data.astype({'key': 'Int64', 'mode':'Int64', 'time_signature':'Int64'})
X = data[columns]
y = list(data['Age'])
display(X.sample(5))
print ("Number of Tracks:" , X.shape[0])
print ("Number of Freatures:", X.shape[1])
print ("Ages: ", set(y))
###Output
_____no_output_____
###Markdown
Decision Tree Model
###Code
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.tree import DecisionTreeRegressor
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
categorical_columns = ['key','mode', 'time_signature']
numeric_columns = ['duration_min','popularity', 'danceability', 'energy','loudness', 'speechiness',
'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo']
features = ColumnTransformer([
('categorical', OneHotEncoder(), categorical_columns),
('numeric', 'passthrough', numeric_columns)
])
est = Pipeline([
('features', features),
('regressor', DecisionTreeRegressor(max_depth=5) )
])
est.fit(X_train, y_train)
print ("R^2 Score: ", est.score(X_test,y_test))
# The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse).
# A constant model that always predicts the expected value of y, disregarding the input features,
# would get a score of 0.0."
from sklearn.metrics import mean_squared_error
import math
test_error = math.sqrt(mean_squared_error(y_test, est.predict(X_test)))
mean = np.mean(y_train)
baseline_error = math.sqrt(mean_squared_error(y_test, [mean for _ in range(len(y_test))]))
print ("Base Line Model Test Error: ", baseline_error)
print ("Current Model Test Error: ", test_error)
###Output
Base Line Model Test Error: 4.825135424997403
Current Model Test Error: 3.4046751211576454
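###Markdown
As a usage sketch, the fitted pipeline can rate a single track passed as a one-row dataframe with the same columns; the feature values below are invented for illustration and are not taken from the dataset:
###Code
# hypothetical track: categorical values (key, mode, time_signature) are ones that occur in the training data
hypothetical_track = pd.DataFrame([{
    'key': 0, 'mode': 1, 'time_signature': 4,
    'duration_min': 3.5, 'popularity': 60, 'danceability': 0.7, 'energy': 0.8,
    'loudness': -5.0, 'speechiness': 0.05, 'acousticness': 0.1,
    'instrumentalness': 0.0, 'liveness': 0.15, 'valence': 0.6, 'tempo': 120.0
}]).astype({'key': 'Int64', 'mode': 'Int64', 'time_signature': 'Int64'})  # match the training dtypes
est.predict(hypothetical_track)
###Output
_____no_output_____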
###Markdown
Cross validation on max_depth
###Code
from sklearn.model_selection import GridSearchCV
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
categorical_columns = ['key','mode', 'time_signature']
numeric_columns = ['duration_min','popularity', 'danceability', 'energy','loudness', 'speechiness',
'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo']
features = ColumnTransformer([
('categorical', OneHotEncoder(), categorical_columns),
('numeric', 'passthrough', numeric_columns)
])
pipeline = Pipeline([
('features', features),
('regressor', DecisionTreeRegressor())
])
param_grid = {'regressor__max_depth': range(2,10)}
est = GridSearchCV(pipeline, param_grid, return_train_score = True)
est.fit(X_train, y_train);
depth = est.param_grid['regressor__max_depth']
plt.plot(depth, est.cv_results_['mean_test_score'], c='r', label = 'validation score')
plt.plot(depth, est.cv_results_['mean_train_score'], c='b', label = 'train score')
plt.xlabel('max_depth')
plt.ylabel('score')
plt.legend(loc='upper right');
from sklearn import metrics
test_errors = []
in_sample_errors = []
max_depths= [2,3,4,5,6,7,8,9,10]
for max_depth in max_depths:
model = DecisionTreeRegressor(max_depth=max_depth).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_train_pred = model.predict(X_train)
in_sample_errors.append(metrics.mean_squared_error(y_train, y_train_pred))
test_errors.append(metrics.mean_squared_error(y_test, y_pred))
plt.plot(max_depths, in_sample_errors, 'b-', label='in-sample error')
plt.plot(max_depths, test_errors, c='r', label='out-of-sample error')
plt.xlabel('max_depth')
plt.ylabel('MSE')
plt.legend(loc='upper right');
###Output
_____no_output_____
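###Markdown
The depth selected by the grid search can be read off the fitted search object directly, using the standard GridSearchCV attributes:
###Code
print('Best parameters:', est.best_params_)
print('Best CV score:  ', est.best_score_)
###Output
_____no_output_____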
###Markdown
Feature importance
###Code
model = est.best_estimator_['regressor']
model.feature_importances_
est.best_estimator_['features'].get_feature_names_out()[0:10]
feature_names = est.best_estimator_['features'].get_feature_names()
features = [(feature_names[i], model.feature_importances_[i]) for i in range(len(feature_names))]
for feature in sorted(features, key=lambda x: -x[1]):
print (feature)
###Output
('popularity', 0.4863635259767914)
('duration_min', 0.22151226619356706)
('speechiness', 0.10110635971046776)
('acousticness', 0.08886440846186068)
('valence', 0.03304241685961839)
('loudness', 0.02577808910180296)
('danceability', 0.010227075620263912)
('instrumentalness', 0.008166894620060506)
('liveness', 0.0074611980592236594)
('energy', 0.006854735706522058)
('tempo', 0.0046767884781645785)
('categorical__x1_1', 0.0022343325148414094)
('categorical__x2_4', 0.0012484473189348464)
('categorical__x1_0', 0.0008698390969922705)
('categorical__x2_5', 0.0008338774197275184)
('categorical__x0_0', 0.00039336579538753563)
('categorical__x0_8', 0.00035245316297959615)
('categorical__x0_11', 1.3925902793811138e-05)
('categorical__x0_1', 0.0)
('categorical__x0_2', 0.0)
('categorical__x0_3', 0.0)
('categorical__x0_4', 0.0)
('categorical__x0_5', 0.0)
('categorical__x0_6', 0.0)
('categorical__x0_7', 0.0)
('categorical__x0_9', 0.0)
('categorical__x0_10', 0.0)
('categorical__x2_0', 0.0)
('categorical__x2_1', 0.0)
('categorical__x2_3', 0.0)
###Markdown
Visualize the decision tree. Problem with Graphviz on Win10: https://stackoverflow.com/questions/35064304/runtimeerror-make-sure-the-graphviz-executables-are-on-your-systems-path-aft
###Code
!conda install graphviz
import os
os.environ["PATH"] += os.pathsep + 'C:/Program Files/Graphviz/bin/'
import graphviz
from sklearn.tree import export_graphviz
g = graphviz.Source(export_graphviz(model, feature_names=feature_names, max_depth=3))
g
g.render('../figures/rating_decision_tree', format='png', quiet=True)
###Output
_____no_output_____ |
Compare Agents.ipynb | ###Markdown
Comparing Agent PerformanceThis notebook compares the performance of a selection of our included agents. The results presented are the median CTR that one would achieve if the agent were used to recommend products to 100 test users after being trained.
###Code
import gym, recogym
from recogym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env_1_args['num_products'] = 100
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
from recogym.agents import BanditMFSquare, bandit_mf_square_args
from recogym.agents import BanditCount, bandit_count_args
from recogym.agents import RandomAgent, random_args
from recogym import Configuration
agent_banditmfsquare = BanditMFSquare(Configuration({
**bandit_mf_square_args,
**env_1_args,
}))
agent_banditcount = BanditCount(Configuration({
**bandit_count_args,
**env_1_args,
}))
agent_rand = RandomAgent(Configuration({
**random_args,
**env_1_args,
}))
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_banditcount), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_banditmfsquare), 1000, 1000)
###Output
Start: Agent Training #0
Start: Agent Testing #0
End: Agent Testing #0 (804.8363463878632s)
###Markdown
Comparing Agent PerformanceThis notebook compares the performance of a selection of our included agents. The results presented are the median CTR that one would achieve if the agent were used to recommend products to 100 test users after being trained.
###Code
import gym, reco_gym
from reco_gym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env_1_args['num_products'] = 100
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
from agents import BanditMFSquare, bandit_mf_square_args
from agents import BanditCount, bandit_count_args
from agents import RandomAgent, random_args
from reco_gym import Configuration
agent_banditmfsquare = BanditMFSquare(Configuration({
**bandit_mf_square_args,
**env_1_args,
}))
agent_banditcount = BanditCount(Configuration({
**bandit_count_args,
**env_1_args,
}))
agent_rand = RandomAgent(Configuration({
**random_args,
**env_1_args,
}))
# Credible interval of the CTR median and 0.025 0.975 quantile.
reco_gym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
reco_gym.test_agent(deepcopy(env), deepcopy(agent_banditcount), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
reco_gym.test_agent(deepcopy(env), deepcopy(agent_banditmfsquare), 1000, 1000)
###Output
Start: Agent Training #0
Start: Agent Testing #0
End: Agent Testing #0 (804.8363463878632s)
###Markdown
Comparing Agent PerformanceThis notebook compares the performance of a selection of our included agents. The results presented are the median CTR that one would achieve if the agent were used to recommend products to 100 test users after being trained.
###Code
import gym, recogym
from recogym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env_1_args['num_products'] = 100
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)
from recogym.agents import BanditMFSquare, bandit_mf_square_args
from recogym.agents import BanditCount, bandit_count_args
from recogym.agents import RandomAgent, random_args
from recogym import Configuration
agent_banditmfsquare = BanditMFSquare(Configuration({
**bandit_mf_square_args,
**env_1_args,
}))
agent_banditcount = BanditCount(Configuration({
**bandit_count_args,
**env_1_args,
}))
agent_rand = RandomAgent(Configuration({
**random_args,
**env_1_args,
}))
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_rand), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_banditcount), 1000, 1000)
# Credible interval of the CTR median and 0.025 0.975 quantile.
recogym.test_agent(deepcopy(env), deepcopy(agent_banditmfsquare), 1000, 1000)
###Output
Organic Users: 0it [00:00, ?it/s]
Users: 0%| | 4/1000 [00:00<00:31, 31.88it/s]
###Markdown
Comparing Agent PerformanceThis notebook compares the performance of a selection of our included agents. The results presented are the median CTR that one would achieve if the agent were used to recommend products to 100 test users after being trained.
###Code
import gym, reco_gym
from reco_gym import env_1_args
from copy import deepcopy
env_1_args['random_seed'] = 42
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args);
from agents import BanditMFSquare, bandit_mf_square_args
from agents import BanditCount, bandit_count_args
from agents import RandomAgent, random_args
bandit_mf_square_args['num_products'] = env_1_args['num_products']
bandit_count_args['num_products'] = env_1_args['num_products']
random_args['num_products'] = env_1_args['num_products']
agent_banditmfsquare = BanditMFSquare(bandit_mf_square_args)
agent_banditcount = BanditCount(bandit_count_args)
agent_rand = RandomAgent(random_args)
# credible interval of the ctr median and 0.025 0.975 quantile
reco_gym.test_agent(deepcopy(env), deepcopy(agent_rand), 100, 100)
# credible interval of the ctr median and 0.025 0.975 quantile
reco_gym.test_agent(deepcopy(env), deepcopy(agent_banditcount), 100, 100)
# credible interval of the ctr median and 0.025 0.975 quantile
reco_gym.test_agent(deepcopy(env), deepcopy(agent_banditmfsquare), 100, 100)
###Output
Starting Agent Training
Starting Agent Testing
|
colabs/dbm.ipynb | ###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
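###Markdown
Purely as an illustration of what the `report` field might contain: the body structure and enum values below are assumptions based on the DBM/DV360 reporting API and should be checked against the current API reference before use.
###Code
import json

# Hypothetical example only: verify field names and enum values against the DV360/DBM reporting API.
FIELDS['report'] = json.dumps({
  'body': {
    'metadata': {'title': 'StarThinker Example Report', 'dataRange': 'LAST_7_DAYS', 'format': 'CSV'},
    'params': {
      'type': 'TYPE_GENERAL',
      'groupBys': ['FILTER_ADVERTISER'],
      'metrics': ['METRIC_IMPRESSIONS', 'METRIC_CLICKS']
    },
    'schedule': {'frequency': 'ONE_TIME'}
  }
})
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____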
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report','kind': 'json','order': 1,'default': '{}','description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 2,'default': False,'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report','kind': 'json','order': 1,'default': '{}','description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 2,'default': False,'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report','kind': 'json','order': 1,'default': '{}','description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 2,'default': False,'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DBM Report ParametersCreate a DBM report. 1. Reference field values from the DBM API to build a report. 1. Copy and paste the JSON definition of a report. 1. The report is only created, use a move script to move it. 1. To reset a report, delete it from DBM reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'body': '{}',
'delete': False,
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DBM ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {
'body': {'field': {'name': 'body','kind': 'json','order': 1,'default': '{}'}}
},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 3,'default': False}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report','kind': 'json','order': 1,'default': '{}','description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 2,'default': False,'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report','kind': 'json','order': 1,'default': '{}','description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 2,'default': False,'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'report': '{}', # Report body and filters.
'auth_read': 'user', # Credentials used for reading data.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'description': 'Report body and filters.','name': 'report','order': 1,'default': '{}','kind': 'json'}},
'delete': {'field': {'description': 'If report exists, delete it before creating a new one.','name': 'delete','order': 2,'default': False,'kind': 'boolean'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report','kind': 'json','order': 1,'default': '{}','description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 2,'default': False,'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report. 1. The report is only created, use a move script to move it. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'body': '{}',
'delete': False,
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {
'body': {'field': {'name': 'body','kind': 'json','order': 1,'default': '{}'}}
},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 3,'default': False}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter DV360 Report ParametersCreate a DV360 report. 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report','kind': 'json','order': 1,'default': '{}','description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete','kind': 'boolean','order': 2,'default': False,'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
DV360 ReportCreate a DV360 report. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Set ConfigurationThis code is required to initialize the project. Fill in required fields and press play.1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).1. If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
###Code
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
###Output
_____no_output_____
###Markdown
3. Enter DV360 Report Recipe Parameters 1. Reference field values from the DV360 API to build a report. 1. Copy and paste the JSON definition of a report, sample for reference. 1. The report is only created, a separate script is required to move the data. 1. To reset a report, delete it from DV360 reporting.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'report': '{}', # Report body and filters.
'delete': False, # If report exists, delete it before creating a new one.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
4. Execute DV360 ReportThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dbm': {
'auth': 'user',
'report': {'field': {'name': 'report', 'kind': 'json', 'order': 1, 'default': '{}', 'description': 'Report body and filters.'}},
'delete': {'field': {'name': 'delete', 'kind': 'boolean', 'order': 2, 'default': False, 'description': 'If report exists, delete it before creating a new one.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
###Output
_____no_output_____ |
test-lab/06 - appium.ipynb | ###Markdown
Chapter 6 - Using Appium to automate actions on devices___ Connecting a device___ Common stepsTo connect an Android device, follow these steps:1. Download and install Java JDK 1.8: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html2. Add the environment variable JAVA_HOME = "C:\Program Files\Java\jdk {version} "3. Download and install Android Studio: https://developer.android.com/studio4. Add the environment variable ANDROID_HOME = "C:\Users\\ {user} \AppData\Local\Android\Sdk\"5. Add the directory "C:\Users\\ {user} \AppData\Local\Android\Sdk\platform-tools\" to the Windows Path EmulatorTo create an emulator, follow these steps:1. Launch Android Studio; if it asks to create a project, create an empty one (we will not use it for anything)2. Let it apply the default updates (this may vary depending on the version)3. Go to "Tools" > "AVD Manager"4. Click "Create Virtual Device".5. Select "Phone" > "Nexus 5X", "Next"6. Select "Oreo" (API Level 27, Android 8.1); if it is not available, click download, "Next"7. Name it and "Finish" RealTo connect a real device, follow these steps (not all devices are compatible):1. On the device: go to "Settings" > "About phone" > "Software information" and tap "Build number" 7 times; this enables "developer" mode (may vary depending on the device model)2. On the device: go to "Settings" > "Developer options" and enable "Stay awake" and "USB debugging" (may vary depending on the device model)3. Connect via USB and accept the permissions Check the connectionTo check that everything works correctly, run:
###Code
! adb devices
###Output
_____no_output_____
###Markdown
the device name should appear followed by "device":```List of devices attachedLRINFIZPPN7TYHUC device``` Starting a local Appium server___1. Download and install Appium-Desktop: https://github.com/appium/appium-desktop/releases/2. Start Appium (it takes a while)3. Set Host: 0.0.0.0 and Port: 4723, click "Start Server" Creating a script with the Appium Python client___Install the Appium SDK for Python:
###Code
! pip install Appium-Python-Client
###Output
_____no_output_____
###Markdown
Import the library:
###Code
from appium import webdriver
import os
desired_caps = {}
desired_caps['platformName'] = 'Android'
desired_caps['deviceName'] = 'Android Emulator'
desired_caps['app'] = os.path.join(os.getcwd(), 'example.apk') # path to a sample apk
driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
from appium.webdriver.common.mobileby import MobileBy
driver.find_element(MobileBy.ACCESSIBILITY_ID, "Add Contact").click()
import time
time.sleep(1)
driver.find_element(MobileBy.ID, "com.example.android.contactmanager:id/contactNameEditText").send_keys('Alejandro')
driver.find_element(MobileBy.ID, "com.example.android.contactmanager:id/contactPhoneEditText").send_keys('987654321')
driver.quit()
###Output
_____no_output_____ |
Week3/Week 3 - Quiz Assignment.ipynb | ###Markdown
Week 3 - Quiz Assignment 1) Assume that the chain rule is used to compute the joint probability of the sentence $P('\text{I got this one}') $. The products of probabilities are represented by $P(got|I) \times P(this|I,got) \times P(one|I,got,this)$ - True - False __Answer__: False It should be $P(I) \times P(got|I) \times P(this|I,got) \times P(one|I,got,this)$Probability of the sentence $W$:> $P(W) = P(w_1, w_2, ..., w_n)$Chain Rule:> $P(w_1, w_2, ..., w_n) = P(w_1)P(w_2|w_1)...P(w_n|w_1,...w_{n-1})$> $P('\text{I got this one}') = P('\text{I}', '\text{got}', '\text{this}', '\text{one}')$> $P('\text{I got this one}') = P('\text{I}') \times P('\text{got}' | '\text{I}') \times P('\text{this}' | '\text{I got}') \times P('\text{one}' | '\text{I got this}')$Markov Assumption:> $P('\text{I got this one}') = P('\text{I}') \times P('\text{got}' | '\text{I}') \times P('\text{this}' | '\text{got}') \times P('\text{one}' | '\text{this}')$ *** 2) Assume that the language model is evaluated as given below$\phi(W) = \sqrt[n]{\frac{1}{{P(w_1,w_2,\ldots, w_n)}}}$*__Note:__* $n$ is the number of words in the sentence.Smoothing will be used if the denominator →0. Is the statement $"\text{Minimizing}$ $ϕ(W)$ $\text{is same as maximizing the probability}$ $P(w_1,w_2,…,w_n)$ $\text{of the sentence"}$ true?- True- False __Answer__: TrueRefer [9], about Perplexity
###Code
import math
def get_nth_root(num,root):
'''
Computes the Nth root of the given num and returns it
'''
answer = num ** (1/root)
return answer
def getPerplexityMetric(n, p):
'''
Input:
n: number of words in the sentence
p: probability of the sentence
Output:
Returns perplexity
'''
return get_nth_root(1/p, n)
print(getPerplexityMetric(5, 0.333))
print(getPerplexityMetric(5, 0.666))
print(getPerplexityMetric(5, 0.782))
print()
print(getPerplexityMetric(11, 0.333))
print(getPerplexityMetric(11, 0.666))
print(getPerplexityMetric(11, 0.782))
# From the output below, we can see that minimizing perplexity corresponds to maximizing the sentence probability
###Output
1.2459802354008653
1.0846887957840605
1.0504095205335795
1.105132015639037
1.037642609845368
1.0226063306698965
###Markdown
*** 3) Select one of the following bigram probabilities that represents the sentence I love dogs (i) __<\S> I love dogs</S>__ P (I)· P (love | I) · P (dogs | I love)(ii) P () · P (I | __<S>__) · P (love | __<S>__ I) · P (dogs | I love) · P (__</S>__ | love dogs) (iii) P (I | __<S>__) · P (love | I) · P (dogs | love) · P (__</S>__ | dogs) - a - b - c - d __Answer__: c *** 4) The table given below contains some of the bigram frequencies of $(determine,w_i)$ where $w_i$ represents every word in the column| first word | the | how | this | a | his ||------------|:-----:|:---:|--------|-------|--------|| determine | 0.115 | 0 | 0.0125 | 0.006 | 0.0013 |What is the conditional probability of $P(his|determine)$ if the probability of $determine$ as the starting word is 0.6?- 0.0031- 0.0022- 0.0122- 0.0128 __Answer__: 0.0022*Derivation:*Given $E_1 = determine$, $E_2 = his$, $P(E_1, E_2) = 0.0013$, $P(E1) = 0.6$Conditional Probability Formula: $P(E_2 | E_1) = \frac{P(E_1,E_2)} {P(E_1)}$ if $P(E_1) > 0$$P(his | determine) = \frac{P(determine,his)} {P(determine)}$$P(his | determine) = \frac{0.0013} {0.6}$0.00216666666666666666666666666667 *** 5) Assuming that a language model assigns the following conditional probabilities to a 4-word sentence (S)=0.01212. What is the perplexity? Note: Perplexity is defined in question 2. - 2.41- 3.14- 4.35- 3.014 __Answer__: d
###Code
print(getPerplexityMetric(4, 0.01212))
###Output
3.0138688166390875
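###Markdown
A quick sanity check of the question 4 arithmetic (a minimal sketch: 0.0013 is the bigram value taken from the table and 0.6 is the given probability of 'determine' as the starting word):
###Code
p_joint = 0.0013      # P(determine, his) from the table
p_determine = 0.6     # P(determine) as the starting word (given)
print(round(p_joint / p_determine, 4))  # -> 0.0022
###Output
_____no_output_____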
###Markdown
*** 6) Consider the following three sentences Ram read a novel Raj read a journal Rai read a bookWhat is the bigram probability of the sentence Ram read a book?Include start and end symbols in your calculations- 0.06- 0.2222- 0.1111- 0.0556 __Answer__: 3 *** 7) Consider the following three sentences Ram read a novel Raj read a journal Rai read a bookWhat is the trigram probability of the sentence Ram read a book?Include start and end symbols in your calculations- 0.06- 0.2222- 0.1111- 0.0556- None of the above __Answer__: 3
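Before the NGramLM-based cells below, here is a minimal hand computation of the bigram case using only collections.Counter (a sketch; NGramLM is the course-provided helper module):
###Code
from collections import Counter

corpus = ['Ram read a novel', 'Raj read a journal', 'Rai read a book']
sents = [['<S>'] + s.lower().split() + ['</S>'] for s in corpus]

unigrams = Counter(w for s in sents for w in s)
bigrams = Counter((s[i], s[i + 1]) for s in sents for i in range(len(s) - 1))

def bigram_prob(sentence):
    # MLE bigram probability with start/end symbols included
    words = ['<S>'] + sentence.lower().split() + ['</S>']
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigrams[(w1, w2)] / unigrams[w1]
    return p

print(bigram_prob('Ram read a book'))  # expected ~0.1111
###Output
_____no_output_____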
###Code
from ngram_lm import NGramLM
corpus = [
'Ram read a novel', \
'Raj read a journal', \
'Rai read a book'
]
query = 'Ram read a book'
bi_lm = NGramLM(corpus,True,True)
bi_lm.buildBiGramModel()
print(bi_lm.getBiGramProbability(query))
# Add additional start/stop
tri_lm = NGramLM(corpus,True,True,2)
tri_lm.buildTriGramModel()
print(tri_lm.getTriGramProbability(query))
###Output
Ram read a novel
Raj read a journal
Rai read a book
0 ['<S>', 'ram', 'read', 'a', 'novel', '<\\S>']
1 ['<S>', 'raj', 'read', 'a', 'journal', '<\\S>']
2 ['<S>', 'rai', 'read', 'a', 'book', '<\\S>']
Building Bigram Model ...
For <S>:
<S> ram 1
<S> raj 1
<S> rai 1
For ram:
ram read 1
For read:
read a 3
For a:
a novel 1
a journal 1
a book 1
For novel:
novel <\S> 1
For raj:
raj read 1
For journal:
journal <\S> 1
For rai:
rai read 1
For book:
book <\S> 1
Count: 3.0
Count: 1.0
Count: 3.0
Count: 3.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Query Bi <S> ram 0.3333333333333333
Query Bi ram read 1.0
Query Bi read a 1.0
Query Bi a book 0.3333333333333333
Query Bi book <\S> 1.0
0.1111111111111111
Ram read a novel
Raj read a journal
Rai read a book
0 ['<S>', '<S>', 'ram', 'read', 'a', 'novel', '<\\S>', '<\\S>']
1 ['<S>', '<S>', 'raj', 'read', 'a', 'journal', '<\\S>', '<\\S>']
2 ['<S>', '<S>', 'rai', 'read', 'a', 'book', '<\\S>', '<\\S>']
Building Trigram Model ...
['<S>', '<S>', 'ram', 'read', 'a', 'novel', '<\\S>', '<\\S>']
['<S>', '<S>', 'raj', 'read', 'a', 'journal', '<\\S>', '<\\S>']
['<S>', '<S>', 'rai', 'read', 'a', 'book', '<\\S>', '<\\S>']
Count: 3.0
Count: 1.0
Count: 1.0
Count: 3.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
Count: 1.0
For <S> <S>:
<S> <S> ram 0.3333333333333333
<S> <S> raj 0.3333333333333333
<S> <S> rai 0.3333333333333333
For <S> ram:
<S> ram read 1.0
For ram read:
ram read a 1.0
For read a:
read a novel 0.3333333333333333
read a journal 0.3333333333333333
read a book 0.3333333333333333
For a novel:
a novel <\S> 1.0
For novel <\S>:
novel <\S> <\S> 1.0
For <S> raj:
<S> raj read 1.0
For raj read:
raj read a 1.0
For a journal:
a journal <\S> 1.0
For journal <\S>:
journal <\S> <\S> 1.0
For <S> rai:
<S> rai read 1.0
For rai read:
rai read a 1.0
For a book:
a book <\S> 1.0
For book <\S>:
book <\S> <\S> 1.0
Query Tri <S> <S> ram 0.3333333333333333
Query Tri <S> ram read 1.0
Query Tri ram read a 1.0
Query Tri read a book 0.3333333333333333
Query Tri a book <\S> 1.0
Query Tri book <\S> <\S> 1.0
0.1111111111111111
|
python_bootcamp/notebooks/00-Python Object and Data Structure Basics/08-Files.ipynb | ###Markdown
FilesPython uses file objects to interact with external files on your computer. These file objects can be any sort of file you have on your computer, whether it be an audio file, a text file, emails, Excel documents, etc. Note: You will probably need to install certain libraries or modules to interact with those various file types, but they are easily available. (We will cover downloading modules later on in the course).Python has a built-in open function that allows us to open and play with basic file types. First we will need a file though. We're going to use some IPython magic to create a text file! IPython Writing a File This function is specific to jupyter notebooks! Alternatively, quickly create a simple .txt file with sublime text editor.
###Code
%%writefile test.txt
Hello, this is a quick test file.
###Output
Overwriting test.txt
###Markdown
Python Opening a fileLet's begin by opening the file test.txt that is located in the same directory as this notebook. For now we will work with files located in the same directory as the notebook or .py script you are using.It is very easy to get an error on this step:
###Code
myfile = open('whoops.txt')
###Output
_____no_output_____
###Markdown
To avoid this error, make sure your .txt file is saved in the same location as your notebook. To check your notebook location, use **pwd**:
###Code
pwd
###Output
_____no_output_____
###Markdown
**Alternatively, to grab files from any location on your computer, simply pass in the entire file path.** For Windows you need to use double \ so Python doesn't treat the second \ as an escape character; a file path is in the form: myfile = open("C:\\Users\\YourUserName\\Home\\Folder\\myfile.txt")For MacOS and Linux you use slashes in the opposite direction: myfile = open("/Users/YourUserName/Folder/myfile.txt")
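As a side note, pathlib (Python 3.5+) sidesteps the backslash-escaping issue entirely; a minimal sketch with a hypothetical path:
###Code
from pathlib import Path

# Hypothetical path; Path joins parts with the right separator for the current OS
file_path = Path.home() / 'Folder' / 'myfile.txt'
print(file_path)
###Output
_____no_output_____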
###Code
# Open the text.txt we made earlier
my_file = open('test.txt')
# We can now read the file
my_file.read()
# But what happens if we try to read it again?
my_file.read()
###Output
_____no_output_____
###Markdown
This happens because you can imagine the reading "cursor" is at the end of the file after having read it. So there is nothing left to read. We can reset the "cursor" like this:
###Code
# Seek to the start of file (index 0)
my_file.seek(0)
# Now read again
my_file.read()
###Output
_____no_output_____
###Markdown
You can read a file line by line using the readlines method. Use caution with large files, since everything will be held in memory. We will learn how to iterate over large files later in the course.
###Code
# Readlines returns a list of the lines in the file
my_file.seek(0)
my_file.readlines()
###Output
_____no_output_____
###Markdown
When you have finished using a file, it is always good practice to close it.
###Code
my_file.close()
###Output
_____no_output_____
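###Markdown
A common alternative is the `with` statement, which closes the file automatically even if an error occurs inside the block (a minimal sketch reusing the same test.txt):
###Code
with open('test.txt') as my_file:
    contents = my_file.read()
# The file is already closed here; no explicit close() needed
print(contents)
###Output
_____no_output_____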
###Markdown
Writing to a FileBy default, the `open()` function will only allow us to read the file. We need to pass the argument `'w'` to write over the file. For example:
###Code
# Add a second argument to the function, 'w' which stands for write.
# Passing 'w+' lets us read and write to the file
my_file = open('test.txt','w+')
###Output
_____no_output_____
###Markdown
Use caution! Opening a file with `'w'` or `'w+'` truncates the original, meaning that anything that was in the original file **is deleted**!
###Code
# Write to the file
my_file.write('This is a new line')
# Read the file
my_file.seek(0)
my_file.read()
my_file.close() # always do this when you're done with a file
###Output
_____no_output_____
###Markdown
Appending to a FilePassing the argument `'a'` opens the file and puts the pointer at the end, so anything written is appended. Like `'w+'`, `'a+'` lets us read and write to a file. If the file does not exist, one will be created.
###Code
my_file = open('test.txt','a+')
my_file.write('\nThis is text being appended to test.txt')
my_file.write('\nAnd another line here.')
my_file.seek(0)
print(my_file.read())
my_file.close()
###Output
_____no_output_____
###Markdown
Appending with `%%writefile`We can do the same thing using IPython cell magic:
###Code
%%writefile -a test.txt
This is text being appended to test.txt
And another line here.
###Output
Appending to test.txt
###Markdown
Add a blank space if you want the first line to begin on its own line, as Jupyter won't recognize escape sequences like `\n` Iterating through a FileLet's get a quick preview of a for loop by iterating over a text file. First let's make a new text file with some IPython Magic:
###Code
%%writefile test.txt
First Line
Second Line
###Output
Overwriting test.txt
###Markdown
Now we can use a little bit of flow to tell the program to loop through every line of the file and do something:
###Code
for line in open('test.txt'):
print(line)
###Output
First Line
Second Line
###Markdown
Don't worry about fully understanding this yet, for loops are coming up soon. But we'll break down what we did above. We said that for every line in this text file, go ahead and print that line. It's important to note a few things here:1. We could have called the "line" object anything (see example below).2. By not calling `.read()` on the file, the whole text file was not stored in memory.3. Notice the indent on the second line for print. This whitespace is required in Python.
###Code
# Pertaining to the first point above
for asdf in open('test.txt'):
print(asdf)
###Output
First Line
Second Line
###Markdown
We'll learn a lot more about this later, but up next: Sets and Booleans!
###Code
"Section 4: Python Comparison Operators".upper()
not(1)
###Output
_____no_output_____ |
Kaggle Health Insurance.ipynb | ###Markdown
Health Insurance cross sell prediction Cross-sell PredictionPredict Health Insurance Owners' who will be interested in Vehicle Insurance.Here is the link to the dataset. Dataset:- **`id`** --> Unique ID for the customer**`Gender`** --> Gender of the customer**`Age`** --> Age of the customer**`Driving_License`** --> 0 : Customer does not have DL, 1 : Customer already has DL**`Region_Code`** --> Unique code for the region of the customer**`Previously_Insured`** -->1 : Customer already has Vehicle Insurance, 0 : Customer doesn't have Vehicle Insurance**`Vehicle_Age`** --> Age of the Vehicle**`Vehicle_Damage`** --> 1 : Customer got his/her vehicle damaged in the past. 0 : Customer didn't get his/her vehicle damaged in the past.**`Annual_Premium`** --> The amount customer needs to pay as premium in the year **`PolicySalesChannel`** --> Anonymized Code for the channel of outreaching to the customer ie. Different Agents, Over Mail, Over Phone, In Person, etc.**`Vintage`** --> Number of Days, Customer has been associated with the company**`Response`** --> 1 : Customer is interested, 0 : Customer is not interested Let's import some libraries
###Code
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style("darkgrid")
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
train = pd.read_csv(r"C:\Users\sarthak\Downloads\Kaggle Health Insurance\train.csv")
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train["Vehicle_Damage"] = le.fit_transform(train.Vehicle_Damage)
###Output
_____no_output_____
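###Markdown
A quick sanity check of the encoding (a small sketch, assuming the raw Vehicle_Damage column holds 'Yes'/'No' strings): LabelEncoder sorts classes alphabetically, so 'No' should map to 0 and 'Yes' to 1, which is the convention assumed in the analysis below.
###Code
# Confirm the label mapping produced by LabelEncoder
print(dict(zip(le.classes_, le.transform(le.classes_))))
###Output
_____no_output_____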
###Markdown
Understanding the dataset
###Code
print("Summary: ")
print(train.describe())
print('\n')
print("Missing Values: ")
print(train.isna().sum())
print('\n')
print("Overall Response Rate: ")
print(train.Response.mean())
###Output
Summary:
id Age Driving_License Region_Code \
count 381109.000000 381109.000000 381109.000000 381109.000000
mean 190555.000000 38.822584 0.997869 26.388807
std 110016.836208 15.511611 0.046110 13.229888
min 1.000000 20.000000 0.000000 0.000000
25% 95278.000000 25.000000 1.000000 15.000000
50% 190555.000000 36.000000 1.000000 28.000000
75% 285832.000000 49.000000 1.000000 35.000000
max 381109.000000 85.000000 1.000000 52.000000
Previously_Insured Vehicle_Damage Annual_Premium \
count 381109.000000 381109.000000 381109.000000
mean 0.458210 0.504877 30564.389581
std 0.498251 0.499977 17213.155057
min 0.000000 0.000000 2630.000000
25% 0.000000 0.000000 24405.000000
50% 0.000000 1.000000 31669.000000
75% 1.000000 1.000000 39400.000000
max 1.000000 1.000000 540165.000000
Policy_Sales_Channel Vintage Response
count 381109.000000 381109.000000 381109.000000
mean 112.034295 154.347397 0.122563
std 54.203995 83.671304 0.327936
min 1.000000 10.000000 0.000000
25% 29.000000 82.000000 0.000000
50% 133.000000 154.000000 0.000000
75% 152.000000 227.000000 0.000000
max 163.000000 299.000000 1.000000
Missing Values:
id 0
Gender 0
Age 0
Driving_License 0
Region_Code 0
Previously_Insured 0
Vehicle_Age 0
Vehicle_Damage 0
Annual_Premium 0
Policy_Sales_Channel 0
Vintage 0
Response 0
Age_Bucket 0
dtype: int64
Overall Response Rate:
0.12256336113815208
###Markdown
Highlights:1) There are no Missing values in the data. Oh Great!!! God is with us.2) The overall response rate is near to 12.25 percent.
###Code
train.head()
###Output
_____no_output_____
###Markdown
Hypothesis from the dataset Questions which can be answered with the help of this dataset:* Which feature has the highest significance for the response?* Are they targeting a particular gender? If yes, is it intentional or not? If not intentional, is there any way it can be corrected?* Is previously insuring a vehicle a gender thing, i.e. are females more likely to insure than males?* Does gender play a role in vehicle damage?* Which gender is the more loyal customer?* Is there any significant gap between the annual premium paid by males and females?* Which policy channel has the highest conversion rate for each gender? Which channel is most used for contacting each gender?* Do age and gender together play any role in response, e.g. are males aged 40-50 more likely to insure?* Age and previously insured: does age play a role in insurance?* Does age play a role in vehicle damage?* Which policy channel has influence over which age group? Function to get details of a particular feature-
###Code
def all_about_feature(feature):
print("Unique values and their count: ")
print(feature.value_counts())
print("\n")
print("Response Rate: ")
print(train.groupby(feature)["Response"].mean().sort_values(ascending=False))
print("\n")
print("Total Response received: ")
print(train.groupby(feature)["Response"].count())
###Output
_____no_output_____
###Markdown
Let's start with analyzing Gender feature
###Code
all_about_feature(train.Gender)
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.pie(train.Gender.value_counts(),labels=["Male",'Female'],shadow=True,autopct='%1.1f%%')
plt.title("Gender distribution")
plt.tight_layout()
plt.subplots_adjust(wspace=1)
plt.subplot(1,2,2)
train.groupby("Gender")["Response"].mean().plot(kind='bar',cmap='summer')
plt.title("Response Rate")
plt.ylabel("Rate")
plt.xticks(rotation="horizontal")
plt.tight_layout()
###Output
Unique values and their count:
Male 206089
Female 175020
Name: Gender, dtype: int64
Response Rate:
Gender
Male 0.138411
Female 0.103902
Name: Response, dtype: float64
Total Response received:
Gender
Female 175020
Male 206089
Name: Response, dtype: int64
###Markdown
1) There are 54 percent Male and 46 percent Female in the dataset.2) Response rate of Male is 13 percent while response rate of female is 10 percent. The response rate of male is nearly **30** percent more than that of female.3) `There are more male customers, which may be an intentional thing because response rate is higher for male. So, it is reasonable to target more male customers.` Is vehicle getting previously insured a "Gender" thing?
###Code
train.groupby("Gender")["Previously_Insured"].mean()
#Can be represented as key points after a topic ends.
###Output
_____no_output_____
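###Markdown
The same comparison can also be viewed as a row-normalized cross-tabulation (a sketch; each gender's row sums to 1):
###Code
import pandas as pd  # already imported above; repeated so the cell is self-contained
print(pd.crosstab(train.Gender, train.Previously_Insured, normalize='index'))
###Output
_____no_output_____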
###Markdown
From this data, it seems more female get their vehicle insured. **19** percent more female get vehicle insured as compared to male. Let's check, what happens to response rate of particular gender when they previously insured.
###Code
print("When it is previously not insured:")
print(train[train.Previously_Insured==0].groupby("Gender")["Response"].mean())
print("\n")
print("When it is previously insured: ")
print(train[train.Previously_Insured==1].groupby("Gender")["Response"].mean())
train.groupby(["Gender","Previously_Insured"]).Response.mean()
###Output
When it is previously not insured:
Gender
Female 0.208140
Male 0.238079
Name: Response, dtype: float64
When it is previously insured:
Gender
Female 0.000705
Male 0.001108
Name: Response, dtype: float64
###Markdown
There is a huge difference between the response rates within each gender depending on whether the customer was previously insured, which implies that Previously_Insured is a strong signal for the response. Let's check: is vehicle damage a gender thing?
###Code
print(train.groupby("Gender")["Vehicle_Damage"].mean())
train.groupby("Gender")["Vehicle_Damage"].mean().plot(kind='bar',cmap='summer')
plt.xlabel("Gender")
plt.xticks(rotation='horizontal')
plt.ylabel("Vehicle Damage mean")
plt.title("Vehicle Damaged by a particular Gender")
plt.tight_layout()
###Output
Gender
Female 0.455177
Male 0.547084
Name: Vehicle_Damage, dtype: float64
###Markdown
From the data, it seems more Male are involved in vehicle damage than Female. Vehicle Damage - Feature analysis
###Code
print("When vehicle is not damaged, the conversion rate is: ")
print(train[train.Vehicle_Damage==0].groupby("Gender")["Response"].mean())
print("\n")
print("When vehicle is damaged, the conversion rate is: ")
print(train[train.Vehicle_Damage==1].groupby("Gender")["Response"].mean())
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
train[train.Vehicle_Damage==0].groupby("Gender")["Response"].mean().plot(kind='bar',cmap='summer')
plt.subplots_adjust(wspace=0.5)
plt.xticks(rotation='horizontal')
plt.title("Response rate when vehicle is not damaged")
plt.subplot(1,2,2)
train[train.Vehicle_Damage==1].groupby("Gender")["Response"].mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.title("Response rate when vehicle is damaged")
plt.tight_layout()
###Output
When vehicle is not damaged, the conversion rate is:
Gender
Female 0.004384
Male 0.006042
Name: Response, dtype: float64
When vehicle is damaged, the conversion rate is:
Gender
Female 0.223021
Male 0.247996
Name: Response, dtype: float64
###Markdown
1) The response rate is much higher when the vehicle is damaged than when it is not.2) The response rate when the vehicle is damaged is nearly 40 times the rate when it is not damaged. This suggests that having a damaged vehicle creates a strong incentive to get insurance.
###Code
train.groupby("Gender")["Vintage"].mean()
###Output
_____no_output_____
###Markdown
Both genders have a similar vintage (number of days since becoming a customer).
###Code
train.groupby("Gender")["Annual_Premium"].mean()
###Output
_____no_output_____
###Markdown
There is not much difference in the annual premium paid by either gender. Are different genders contacted via different channels?
###Code
print(train.groupby("Gender")["Policy_Sales_Channel"].agg(Series.mode))
print('\n')
print("Response Rate:")
print("Male:")
print(train[((train.Policy_Sales_Channel==152) & (train.Gender=="Male"))].Response.mean())
print("Female:")
print(train[((train.Policy_Sales_Channel==152) & (train.Gender=="Female"))].Response.mean())
###Output
Gender
Female 152.0
Male 152.0
Name: Policy_Sales_Channel, dtype: float64
Response Rate:
Male:
0.03495188774376592
Female:
0.023769255981645362
###Markdown
* This implies that a similar mode of communication is used for both genders.* The most frequently used mode of communication has a response rate as low as 2-3 percent for both males and females.* **What is the idea behind still using it so frequently? Is it cheaper or more easily accessible?**
###Code
train[train.Gender=="Female"].groupby(["Policy_Sales_Channel"])["Response"].mean().sort_values(ascending=False)
train[train.Gender=="Male"].groupby(["Policy_Sales_Channel"])["Response"].mean().sort_values(ascending=False)
train.groupby("Policy_Sales_Channel")["Response"].mean().sort_values().tail(10)
###Output
_____no_output_____
###Markdown
No conclusions can be drawn, as the counts for some policy sales channels are very low (fewer than 10-20). Age Analysis
###Code
all_about_feature(train.Age)
train[train.Age>80].Response.mean()
###Output
_____no_output_____
###Markdown
The response rate of people older than 80 years is close to 4 percent. Analysis for creating an age bucket: It is tough to draw any conclusion using raw age alone. We need to regularise the age feature so that we can draw conclusions and insights from the data. We can plot a histogram and decide the bucket sizes from it. Once we create the buckets, we will analyse them against the other features in the train dataset. Does bucketing help in drawing conclusions? We will get the answer to that question.
###Code
sns.distplot(train.Age)
#suggested age bucket = 20 - 35,35-55 , 55-80.
plt.xlabel("Age")
plt.title("Age histogram")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
We can see three different distributions in the histogram. The first, from age 20 to 35, looks like a tall, skinny normal distribution. The second, from 35 to 55, looks like a short, fat normal distribution. The third, from 55 onwards, decays roughly linearly, with the count falling as age increases. Three buckets will be created: (a) 20-35 (b) 35-55 (c) 55 and above. Let's create the Age bucket:
###Code
def bucket(ser):
if ser>55:
return "C"
elif ser>35:
return "B"
else:
return "A"
train["Age_Bucket"] = train.Age.apply(bucket)
###Output
_____no_output_____
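###Markdown
An equivalent, vectorized way to build the same buckets (a sketch using pd.cut; bin edges chosen to reproduce the apply-based bucket() function above):
###Code
import numpy as np
import pandas as pd  # both already imported above; repeated for completeness
# (0, 35] -> A, (35, 55] -> B, (55, inf) -> C, matching bucket() above
age_bucket_cut = pd.cut(train.Age, bins=[0, 35, 55, np.inf], labels=['A', 'B', 'C'])
print((age_bucket_cut.astype(str) == train['Age_Bucket']).all())
###Output
_____no_output_____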
###Markdown
Analyzing the age bucket :
###Code
all_about_feature(train.Age_Bucket)
plt.figure(figsize=(8,5))
plt.subplot(1,2,1)
plt.pie(train["Age_Bucket"].value_counts(),labels=["20-35","35-55","55 and above"],shadow=True,autopct='%1.1f%%',radius=1.5)
plt.tight_layout()
plt.subplots_adjust(wspace=0.5)
plt.subplot(1,2,2)
train.groupby("Age_Bucket")["Response"].mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.title("Response Rate")
plt.ylabel("Rate")
###Output
Unique values and their count:
A 186812
B 132081
C 62216
Name: Age_Bucket, dtype: int64
Response Rate:
Age_Bucket
B 0.206805
C 0.114922
A 0.065547
Name: Response, dtype: float64
Total Response received:
Age_Bucket
A 186812
B 132081
C 62216
Name: Response, dtype: int64
###Markdown
1) Around 50 percent of the people are younger than 35.2) The response rate is highest in the "B" bucket, followed by "C", and lowest in the "A" bucket.3) The response rate for bucket "B" is roughly 3 times that of "A" and nearly double that of "C".4) People aged 35-55 seem to be a much better group to approach for insurance.5) One reason for the low response rate in the "A" bucket could be that these people are really young and cannot afford a vehicle in the first place. **The age group 20-35 seems to be the most targeted despite having a much lower response rate than the other age groups. Instead, we could target more people in the 35-55 age group, which has the highest response rate.**
###Code
print(train.groupby(["Age_Bucket","Gender"]).Response.mean())
train.groupby(["Age_Bucket","Gender"]).Response.mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.ylabel("Response rate")
plt.tight_layout()
###Output
Age_Bucket Gender
A Female 0.053691
Male 0.079481
B Female 0.203914
Male 0.208596
C Female 0.104504
Male 0.121272
Name: Response, dtype: float64
###Markdown
There is not much difference in response rate between females and males within each age bucket. The age buckets behave similarly for either gender.
###Code
train.groupby(["Age_Bucket","Previously_Insured"]).Response.mean()
###Output
_____no_output_____
###Markdown
Customers who are previously insured have a much lower response rate, so combining this feature with others will not tell us anything insightful. Is there any relation between vehicle damage and age group?
###Code
print(train.groupby("Age_Bucket").Vehicle_Damage.mean())
train.groupby("Age_Bucket").Vehicle_Damage.mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.tight_layout()
###Output
Age_Bucket
A 0.338506
B 0.682172
C 0.628038
Name: Vehicle_Damage, dtype: float64
###Markdown
* People in age bucket "A" (20-35) have far fewer damaged vehicles. This implies that young people tend to have less vehicle damage than any other age group.* Different age groups have different chances of vehicle damage.
###Code
sns.lmplot("Vehicle_Damage","Response",data=train)
###Output
_____no_output_____
###Markdown
There is a slight positive correlation between Vehicle Damage and Response, which means that when a vehicle is damaged there is a higher probability of a positive response.
###Code
print(train.groupby("Age_Bucket").Annual_Premium.mean())
plt.figure(figsize=(6,4))
train.groupby("Age_Bucket").Annual_Premium.mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.ylabel("Annual Premium")
plt.title("Annual Premium paid by different age groups")
plt.tight_layout()
###Output
Age_Bucket
A 29670.942696
B 30581.101922
C 33211.606002
Name: Annual_Premium, dtype: float64
###Markdown
There is slight variation in the annual premium paid by different age groups.**As age increases, the chance of paying a higher annual premium increases.**
###Code
print(train.groupby("Age_Bucket").Policy_Sales_Channel.agg(Series.mode))
print("\n")
print("Response rate for most used communication:")
print("A:- ",train[((train.Age_Bucket=="A")&(train.Policy_Sales_Channel==152))].Response.mean())
print("B:- ",train[((train.Age_Bucket=="B")&(train.Policy_Sales_Channel==124))].Response.mean())
print("C:- ",train[((train.Age_Bucket=="C")&(train.Policy_Sales_Channel==26))].Response.mean())
###Output
Age_Bucket
A 152.0
B 124.0
C 26.0
Name: Policy_Sales_Channel, dtype: float64
Response rate for most used communication:
A:- 0.02849493510356813
B:- 0.19860127895288407
C:- 0.133086876155268
###Markdown
1) Different age groups are targeted through different policy sales channels.2) Judging by the response rates, the channel used for "A" is not effective, since "A" has a very poor response rate. 3) The response rate for "B" is very good, which implies the mode of communication used there is working. Is there any relation between vehicle damage and previously insured?
###Code
sns.lmplot("Previously_Insured","Vehicle_Damage",data=train)
###Output
_____no_output_____
###Markdown
**`Previously Insured and Vehicle damage have very strong negative correlation.`** Is there any relation between previously insured and response?
###Code
sns.lmplot("Previously_Insured","Response",data=train)
###Output
_____no_output_____
###Markdown
**Previously Insured and response have very slight negative correlation.**
###Code
print(train.groupby("Vehicle_Age").Annual_Premium.mean())
train.groupby("Vehicle_Age").Annual_Premium.mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.ylabel("Annual Premium")
plt.tight_layout()
###Output
Vehicle_Age
1-2 Year 30523.582120
< 1 Year 30119.552025
> 2 Years 35654.499469
Name: Annual_Premium, dtype: float64
###Markdown
Annual premium has some correlation with vehicle age: the premium goes up as vehicle age increases, and once the vehicle is more than 2 years old the premium jumps sharply. Is there any kind of relation between Vehicle Age and Vehicle Damage? Ideally, there should be a direct relationship between vehicle age and vehicle damage: as the age of the vehicle increases, the chance of vehicle damage should be higher. Let's see what the data says.
###Code
print(train.groupby("Vehicle_Age").Vehicle_Damage.mean())
train.groupby("Vehicle_Age").Vehicle_Damage.mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.tight_layout()
###Output
Vehicle_Age
1-2 Year 0.640114
< 1 Year 0.292476
> 2 Years 0.999063
Name: Vehicle_Damage, dtype: float64
###Markdown
1) Vehicle damage for vehicles older than 2 years is close to 1. This implies that almost all vehicles more than 2 years old have undergone damage.2) Vehicle damage rises steeply as vehicle age increases.3) There is a more-than-100-percent increase in vehicle damage when vehicle age goes from less than 1 year to 1-2 years.4) There is a further increase of about 36 percentage points when vehicle age goes from 1-2 years to more than 2 years. Does the age of the vehicle affect response?
###Code
print(train.groupby("Vehicle_Age")["Response"].mean())
train.groupby("Vehicle_Age")["Response"].mean().plot(kind='bar',cmap='summer')
plt.xticks(rotation='horizontal')
plt.tight_layout()
###Output
Vehicle_Age
1-2 Year 0.173755
< 1 Year 0.043705
> 2 Years 0.293746
Name: Response, dtype: float64
|
assets/all_html/2019_11_04_HP_book_to_script.ipynb | ###Markdown
Harry Potter: Book to Script Experiment STEP 1: Load the data
###Code
def get_text_from_file(file):
file = open(file).readlines()
all_text = ""
for line in file:
all_text += line
all_text = all_text.replace("\n", " ")
all_text = all_text.replace("\'", "")
return all_text
all_text = get_text_from_file('HP1.txt')
###Output
_____no_output_____
###Markdown
Get only CH1 for experimenting
###Code
import re
def get_ch1(all_text):
all_text = re.sub(r'[0-9]', '', all_text)
chapters = all_text.split('CHAPTER ')
ch1 = chapters[1]
return ch1
ch1 = get_ch1(all_text)
###Output
_____no_output_____
###Markdown
STEP 2: Find everything between quotes
###Code
def get_dialogue(text):
return re.findall(r'"([^"]*)"', text)
ch1_dialogue = get_dialogue(ch1)
ch1_dialogue
###Output
_____no_output_____
###Markdown
But oh no!! All of a sudden it started getting everything AFTER the quote!! Let's investigate. Ah ha! After doing a regex search in sublime, it's clear there is no closing quote here_Professor McGonagall's voice trembled as she went on. "That's not all.They're saying he tried to kill the Potter's son, Harry. But -- hecouldn't. He couldn't kill that little boy. No one knows why, or how,but they're saying that when he couldn't kill Harry Potter, Voldemort'spower somehow broke -- and that's why he's gone.__Dumbledore nodded glumly._ *(OK so for this first step, I just went back in and added it manually and saved a new file as HP1_clean.txt)*---- ANNNND we found another few instances (line 345 in HP1_clean)
###Code
all_text = get_text_from_file('HP1_clean.txt')
ch1 = get_ch1(all_text)
ch1_dialogue_clean = get_dialogue(ch1)
ch1_dialogue_clean
###Output
_____no_output_____
###Markdown
STEP 3: Get the person who said the quote
###Code
def get_capitalized_words(text):
cap_words = re.findall(r'[A-Z]\w*\s', text)
return [word for word in cap_words if len(word) > 4]
ch1_dialogue = get_capitalized_words(ch1)
ch1_dialogue
def get_capitalized_words(text):
return re.findall(r'\,\"(.*)', text)
ch1_dialogue = get_capitalized_words(ch1)
ch1_dialogue
ch1 = ch1.replace('Mr. ', 'Mr')
ch1 = ch1.replace('Mrs. ', 'Mrs')
def get_capitalized_words(text):
return re.findall(r'(?<=\,\").*?(?=\.)', text)
ch1_dialogue = get_capitalized_words(ch1)
ch1_dialogue
###Output
_____no_output_____
###Markdown
WOW!! That actually looks great!! But... it looks a little short...
###Code
len(ch1_dialogue)
len(ch1_dialogue_clean)
###Output
_____no_output_____
###Markdown
Oh dear! That's a pretty big discrepancy.
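One rough way to see where the gap comes from (a sketch reusing `re` and the `ch1` text from above): the speaker-extraction regex only fires on quotes that end in a comma before the closing quote, so quotes ending in a full stop, question mark or exclamation mark never get an attribution.

```python
# Sketch: how many quotes end with a comma right before the closing quote?
quotes = re.findall(r'"[^"]*"', ch1)
with_comma = sum(1 for q in quotes if q.endswith(',"'))
print(len(quotes), 'quotes in total;', with_comma, 'end with a comma before the closing quote')
```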
###Code
for line in ch1_dialogue_clean:
print('ACTOR:')
print(line)
print('--')
###Output
ACTOR:
Little tyke,
--
ACTOR:
The Potters, thats right, thats what I heard yes, their son, Harry
--
ACTOR:
Sorry,
--
ACTOR:
Dont be sorry, my dear sir, for nothing could upset me today! Rejoice, for You-Know-Who has gone at last! Even Muggles like yourself should be celebrating, this happy, happy day!
--
ACTOR:
Shoo!
--
ACTOR:
Wont!
--
ACTOR:
And finally, bird-watchers everywhere have reported that the nations owls have been behaving very unusually today. Although owls normally hunt at night and are hardly ever seen in daylight, there have been hundreds of sightings of these birds flying in every direction since sunrise. Experts are unable to explain why the owls have suddenly changed their sleeping pattern.
--
ACTOR:
Most mysterious. And now, over to Jim McGuffin with the weather. Going to be any more showers of owls tonight, Jim?
--
ACTOR:
Well, Ted,
--
ACTOR:
I dont know about that, but its not only the owls that have been acting oddly today. Viewers as far apart as Kent, Yorkshire, and Dundee have been phoning in to tell me that instead of the rain I promised yesterday, theyve had a downpour of shooting stars! Perhaps people have been celebrating Bonfire Night early -- its not until next week, folks! But I can promise a wet night tonight.
--
ACTOR:
Er -- Petunia, dear -- you havent heard from your sister lately, have you?
--
ACTOR:
No,
--
ACTOR:
Why?
--
ACTOR:
Funny stuff on the news,
--
ACTOR:
Owls... shooting stars... and there were a lot of funny-looking people in town today...
--
ACTOR:
So?
--
ACTOR:
Well, I just thought... maybe... it was something to do with... you know... her crowd.
--
ACTOR:
Potter.
--
ACTOR:
Their son -- hed be about Dudleys age now, wouldnt he?
--
ACTOR:
I suppose so,
--
ACTOR:
Whats his name again? Howard, isnt it?
--
ACTOR:
Harry. Nasty, common name, if you ask me.
--
ACTOR:
Oh, yes,
--
ACTOR:
Yes, I quite agree.
--
ACTOR:
I should have known.
--
ACTOR:
Fancy seeing you here, Professor McGonagall.
--
ACTOR:
How did you know it was me?
--
ACTOR:
My dear Professor, I ve never seen a cat sit so stiffly.
--
ACTOR:
Youd be stiff if youd been sitting on a brick wall all day,
--
ACTOR:
All day? When you could have been celebrating? I must have passed a dozen feasts and parties on my way here.
--
ACTOR:
Oh yes, everyones celebrating, all right,
--
ACTOR:
Youd think theyd be a bit more careful, but no -- even the Muggles have noticed somethings going on. It was on their news.
--
ACTOR:
I heard it. Flocks of owls... shooting stars.... Well, theyre not completely stupid. They were bound to notice something. Shooting stars down in Kent -- Ill bet that was Dedalus Diggle. He never had much sense.
--
ACTOR:
You cant blame them,
--
ACTOR:
Weve had precious little to celebrate for eleven years.
--
ACTOR:
I know that,
--
ACTOR:
But thats no reason to lose our heads. People are being downright careless, out on the streets in broad daylight, not even dressed in Muggle clothes, swapping rumors.
--
ACTOR:
A fine thing it would be if, on the very day YouKnow-Who seems to have disappeared at last, the Muggles found out about us all. I suppose he really has gone, Dumbledore?
--
ACTOR:
It certainly seems so,
--
ACTOR:
We have much to be thankful for. Would you care for a lemon drop?
--
ACTOR:
A what?
--
ACTOR:
A lemon drop. Theyre a kind of Muggle sweet Im rather fond of
--
ACTOR:
No, thank you,
--
ACTOR:
As I say, even if You-Know-Who has gone -
--
ACTOR:
My dear Professor, surely a sensible person like yourself can call him by his name? All this You- Know-Who nonsense -- for eleven years I have been trying to persuade people to call him by his proper name: Voldemort.
--
ACTOR:
It all gets so confusing if we keep saying You-Know-Who. I have never seen any reason to be frightened of saying Voldemorts name.
--
ACTOR:
But youre different. Everyone knows youre the only one You-Know- oh, all right, Voldemort, was frightened of.
--
ACTOR:
You flatter me,
--
ACTOR:
Voldemort had powers I will never have.
--
ACTOR:
Only because youre too -- well -- noble to use them.
--
ACTOR:
Its lucky its dark. I havent blushed so much since Madam Pomfrey told me she liked my new earmuffs.
--
ACTOR:
The owls are nothing next to the rumors that are flying around. You know what everyones saying? About why hes disappeared? About what finally stopped him?
--
ACTOR:
everyone
--
ACTOR:
What theyre saying,
--
ACTOR:
is that last night Voldemort turned up in Godrics Hollow. He went to find the Potters. The rumor is that Lily and James Potter are -- are -- that theyre -- dead.
--
ACTOR:
Lily and James... I cant believe it... I didnt want to believe it... Oh, Albus...
--
ACTOR:
I know... I know...
--
ACTOR:
Thats not all. Theyre saying he tried to kill the Potters son, Harry. But -- he couldnt. He couldnt kill that little boy. No one knows why, or how, but theyre saying that when he couldnt kill Harry Potter, Voldemorts power somehow broke -- and thats why hes gone.
--
ACTOR:
Its -- its true?
--
ACTOR:
After all hes done... all the people hes killed... he couldnt kill a little boy? Its just astounding... of all the things to stop him... but how in the name of heaven did Harry survive?
--
ACTOR:
We can only guess,
--
ACTOR:
We may never know.
--
ACTOR:
Hagrids late. I suppose it was he who told you Id be here, by the way?
--
ACTOR:
Yes,
--
ACTOR:
And I dont suppose youre going to tell me why youre here, of all places?
--
ACTOR:
Ive come to bring Harry to his aunt and uncle. Theyre the only family he has left now.
--
ACTOR:
You dont mean -- you cant mean the people who live here?
--
ACTOR:
Dumbledore -- you cant. Ive been watching them all day. You couldnt find two people who are less like us. And theyve got this son -- I saw him kicking his mother all the way up the street, screaming for sweets. Harry Potter come and live here!
--
ACTOR:
Its the best place for him,
--
ACTOR:
His aunt and uncle will be able to explain everything to him when hes older. Ive written them a letter.
--
ACTOR:
A letter?
--
ACTOR:
Really, Dumbledore, you think you can explain all this in a letter? These people will never understand him! Hell be famous -- a legend -- I wouldnt be surprised if today was known as Harry Potter day in the future -- there will be books written about Harry -- every child in our world will know his name!
--
ACTOR:
Exactly,
--
ACTOR:
It would be enough to turn any boys head. Famous before he can walk and talk! Famous for something he wont even remember! CarA you see how much better off hell be, growing up away from all that until hes ready to take it?
--
ACTOR:
Yes -- yes, youre right, of course. But how is the boy getting here, Dumbledore?
--
ACTOR:
Hagrids bringing him.
--
ACTOR:
You think it -- wise -- to trust Hagrid with something as important as this?
--
ACTOR:
I would trust Hagrid with my life,
--
ACTOR:
Im not saying his heart isnt in the right place,
--
ACTOR:
but you cant pretend hes not careless. He does tend to -- what was that?
--
ACTOR:
Hagrid,
--
ACTOR:
At last. And where did you get that motorcycle?
--
ACTOR:
Borrowed it, Professor Dumbledore, sit,
--
ACTOR:
Young Sirius Black lent it to me. Ive got him, sir.
--
ACTOR:
No problems, were there?
--
ACTOR:
No, sir -- house was almost destroyed, but I got him out all right before the Muggles started swarmin around. He fell asleep as we was flyin over Bristol.
--
ACTOR:
Is that where -?
--
ACTOR:
Yes,
--
ACTOR:
Hell have that scar forever.
--
ACTOR:
Couldnt you do something about it, Dumbledore?
--
ACTOR:
Even if I could, I wouldnt. Scars can come in handy. I have one myself above my left knee that is a perfect map of the London Underground. Well -- give him here, Hagrid -- wed better get this over with.
--
ACTOR:
Could I -- could I say good-bye to him, sir?
--
ACTOR:
Shhh!
--
ACTOR:
youll wake the Muggles!
--
ACTOR:
S-s-sorry,
--
ACTOR:
But I c-c-cant stand it -- Lily an James dead -- an poor little Harry off ter live with Muggles -
--
ACTOR:
Yes, yes, its all very sad, but get a grip on yourself, Hagrid, or well be found,
--
ACTOR:
Well,
--
ACTOR:
thats that. Weve no business staying here. We may as well go and join the celebrations.
--
ACTOR:
Yeah,
--
ACTOR:
Ill be takin Sirius his bike back. Gnight, Professor McGonagall -- Professor Dumbledore, sir.
--
ACTOR:
I shall see you soon, I expect, Professor McGonagall,
--
ACTOR:
Good luck, Harry,
--
ACTOR:
To Harry Potter -- the boy who lived!
--
|
notebooks/AxisScaling_Part7.ipynb | ###Markdown
Part 7 of Axis Scaling: Combining Scales

This page is primarily based on the following page at the Circos documentation site:

- [7. Combining Scales](????????????)

That page is found as part number 7 of the ??? part ['Axis Scaling' section](http://circos.ca/documentation/tutorials/quick_start/) of [the larger set of Circos tutorials](http://circos.ca/documentation/tutorials/).

Go back to Part 6 by clicking [here ←](AxisScaling_Part6.ipynb).

----

7 --- Axis Scaling
==================

7. Combining Scales
-------------------

In this final example, I combine all the scale adjustments discussed in this tutorial section into one image. I've added a histogram plot, in addition to the heat map, that shows the scale across the ideograms. The histogram y-axis is graduated every 0.5 from 0x to 10x. The y-axis labels were added in post-processing (Circos does not know how to do this - yet).

Chromosomes 1, 2 and 3 are displayed, with chromosome 2 split into three ideograms. Two ranges on chromosome 2 are defined, 0-100 and 150-), as well as an axis break 40-60Mb. Chromosomes 1 and 3 have a baseline scale of 1x and chromosome 2 has a baseline scale of 2x (first two ideograms) and 0.5x (third ideogram).

Notice how the three ideograms on chromosome two are defined. First, Circos is told to draw two regions 0-100 (ideogram id "a") and 150-) (ideogram id "b"). Scale factors of 2x and 0.5x are then assigned to these ideograms. Finally, an axis break is introduced at 40-60Mb, which breaks the "a" ideogram into two. However, the global scale is still 2x across the two new pieces, which are still internally labeled as "a", and therefore the ideogram label is 2a for both pieces.

At the moment, you cannot define a different global scale for ideograms that contain an axis break. If you needed the same regions drawn, each with a different global scale, you would define them as follows, for example

```ini
chromosomes: hs2[a]:0-40;hs2[b]:60-100;hs2[c]:150-)
chromosomes_scale: a:2;b:3;c:0.5
```

There are several local scale adjustments in this example. On chromosome 1, there are two zoom regions that are smoothed and their smoothing regions overlap. Recall the rule that for a given region the scale factor is taken to be the largest absolute zoom level for any zoom region that overlaps with it. As the smoothing of the 10x and 0.2x regions run into each other, there is a sudden scale shift (marked by a red arrow in the vicinity of chr1:140Mb). Currently, scale smoothing is not iterative - there is no additional smoothing applied between adjacent smoothing regions.

On chromosome 2 there is a 5x zoom region at the start of the first ideogram. Its smoothing region runs off the edge of the ideogram.

On chromosome 3 there are two nested zoom regions. The outer region is 0.5x 25-150Mb and the inner region is 10x 72-73Mb. Both are smoothed, but their smoothing does not overlap.

Currently, a region is smoothed by gradually adjusting its scale from that defined in the configuration file to the ideogram's base scale. In the last case, where one region (10x) was nested inside another (0.5x), the inner region is still smoothed between 10x and 1x, the latter being the ideograms' base scale. Keep this in mind when nesting regions.

Finally, a word about tick marks and scale adjustment. Experiment with the tick mark parameters `label_separation`, `tick_separation` and `min_label_distance_to_edge`. If you are going to significantly expand the scale in some regions, define tick marks with sufficiently small spacing to cover the expanded region in ticks. Set the `tick_separation` to avoid tick mark crowding in regions with a lower scale.

If you are going to change scale, make sure that this fact is made clear in your figure. You can mark this fact by using highlights or a heat map. Circos will produce a report of zoom regions as it's drawing the image, and you can use this output to create data files to annotate the figure (you need to run Circos twice --- once to generate the file and again to plot it).

```ini
...
zoomregion ideogram 1 chr hs2 17000001 18000001 scale 2.82 absolutescale 2.82
zoomregion ideogram 1 chr hs2 18000001 19000001 scale 2.55 absolutescale 2.55
zoomregion ideogram 1 chr hs2 19000001 20000001 scale 2.27 absolutescale 2.27
zoomregion ideogram 1 chr hs2 20000001 40000000 scale 2.00 absolutescale 2.00
zoomregion ideogram 2 chr hs2 59999999 60000001 scale 1.83 absolutescale 1.83
zoomregion ideogram 2 chr hs2 60000001 61000001 scale 1.75 absolutescale 1.75
zoomregion ideogram 2 chr hs2 61000001 62000001 scale 1.67 absolutescale 1.67
zoomregion ideogram 2 chr hs2 62000001 63000001 scale 1.58 absolutescale 1.58
zoomregion ideogram 2 chr hs2 63000001 64000001 scale 1.50 absolutescale 1.50
zoomregion ideogram 2 chr hs2 64000001 65000001 scale 1.42 absolutescale 1.42
zoomregion ideogram 2 chr hs2 65000001 66000001 scale 1.33 absolutescale 1.33
zoomregion ideogram 2 chr hs2 66000001 67000001 scale 1.25 absolutescale 1.25
...
```

The absolute scale is `max(scale,1/scale)`.

----

Generating the plot produced by this example code

The following two cells will generate the plot. The first cell adjusts the current working directory.
###Code
%cd ../circos-tutorials-0.67/tutorials/7/7/
%%bash
../../../../circos-0.69-6/bin/circos -conf circos.conf
###Output
debuggroup summary 0.38s welcome to circos v0.69-6 31 July 2017 on Perl 5.022000
debuggroup summary 0.38s current working directory /home/jovyan/circos-tutorials-0.67/tutorials/7/7
debuggroup summary 0.38s command ../../../../circos-0.69-6/bin/circos -conf circos.conf
debuggroup summary 0.38s loading configuration from file circos.conf
debuggroup summary 0.38s found conf file circos.conf
debuggroup summary 0.59s debug will appear for these features: output,summary
debuggroup summary 0.59s bitmap output image ./circos.png
debuggroup summary 0.59s SVG output image ./circos.svg
debuggroup summary 0.59s parsing karyotype and organizing ideograms
debuggroup summary 0.71s karyotype has 24 chromosomes of total size 3,095,677,436
debuggroup summary 0.71s applying global and local scaling
debuggroup summary 0.87s allocating image, colors and brushes
debuggroup summary 2.96s drawing 5 ideograms of total size 620,472,429
debuggroup summary 2.96s drawing highlights and ideograms
debuggroup summary 4.20s found conf file /home/jovyan/circos-0.69-6/bin/../etc/tracks/heatmap.conf
debuggroup summary 4.20s found conf file /home/jovyan/circos-0.69-6/bin/../etc/tracks/histogram.conf
debuggroup summary 4.20s processing track_0 heatmap /home/jovyan/circos-tutorials-0.67/tutorials/7/7/../../../data/7/heatmap.zoom-05.txt
debuggroup summary 4.24s processing track_1 histogram /home/jovyan/circos-tutorials-0.67/tutorials/7/7/../../../data/7/heatmap.zoom-05.txt
debuggroup summary 4.27s drawing track_0 heatmap z 0 heatmap.zoom-05.txt
debuggroup summary 4.28s found conf file /home/jovyan/circos-0.69-6/bin/../etc/tracks/axis.conf
debuggroup summary 4.41s drawing track_1 histogram z 0 heatmap.zoom-05.txt orient out
debuggroup summary 4.42s found conf file /home/jovyan/circos-0.69-6/bin/../etc/tracks/axis.conf
debuggroup output 4.60s generating output
debuggroup output 5.48s created PNG image ./circos.png (533 kb)
debuggroup output 5.48s created SVG image ./circos.svg (232 kb)
###Markdown
View the plot in this page using the following cell.
###Code
from IPython.display import Image
Image("circos.png")
###Output
_____no_output_____ |
Projects/6 Spam Detection Text Classification/spam_detection_with_cnn.ipynb | ###Markdown
1) Data Preprocessing---
###Code
import tensorflow as tf
print(tf.__version__)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Dense, Input, GlobalAveragePooling1D ,GlobalMaxPooling1D
from tensorflow.keras.layers import Conv1D, MaxPool1D, Embedding
from tensorflow.keras.models import Model
df = pd.read_csv('/content/spam.csv', encoding= 'ISO-8859-1')
df.head()
# delete garbage columns
df = df.drop(["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis = 1)
df.head()
# rename columns
df.columns = ['labels', 'data']
df.head()
# create binary labels (0 and 1)
df['b_labels'] = df['labels'].map({'ham': 0, 'spam': 1})
y = df['b_labels'].values
# split the dataset
X_train, X_test, y_train, y_test = train_test_split(df['data'], y, test_size = 0.33)
# Convert sentences to sequences
max_vocab_size = 20000
tokenizer = Tokenizer(num_words = max_vocab_size)
tokenizer.fit_on_texts(X_train)
sequences_train = tokenizer.texts_to_sequences(X_train)
sequences_test = tokenizer.texts_to_sequences(X_test)
len(sequences_train[0])
# Check word index mapping (to check the number of words in the vocabulary)
word2idx = tokenizer.word_index
V = len(word2idx)
print("Total number of unique tokens are %s" %V)
# pad sequences to get an N x T matrix
data_train = pad_sequences(sequences_train)
print("Shape of data train tensor:", data_train.shape)
# Set the value of t to get sequence length
T = data_train.shape[1]
print(T)
# Pad the test set
data_test = pad_sequences(sequences_test, maxlen = T)
# maxlen = T, to truncate longer sentences in test set
print('Shape of data test tensor:', data_test.shape)
data_train[0]
len(data_train[0])
###Output
_____no_output_____
###Markdown
2) Building The Model---
###Code
# Create the model
# Choose embedding dimensionality
D = 20 # this is a hyper parameter, we can choose any word vector size that we want
# Input layer
i = Input(shape = (T,)) # input layer takes in sequences of integers, so shape is T
# Embedding layer
x = Embedding(V+1, D)(i) #This takes in sequences of integers and returns sequences
# of word vectors
# this will be an N * T * D array
# we want size of embedding to (V + 1) x D, because first word index starts from 1 and not 0
# first cnn layer
x = Conv1D(32, 3, activation = 'relu')(x)
x = MaxPool1D(3)(x)
# second cnn layer
x = Conv1D(64, 3, activation='relu')(x)
x = MaxPool1D()(x)
# third cnn layer
x = Conv1D(128, 3, activation= 'relu')(x)
x = GlobalMaxPooling1D()(x)
# dense layer
x = Dense(1, activation= 'sigmoid')(x)
model = Model(i, x)
# compile the model
model.compile(optimizer='adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Train the model
r = model.fit(x=data_train, y = y_train, epochs = 5, validation_data=(data_test, y_test))
# Loss per iteration
import matplotlib.pyplot as plt
plt.plot(r.history['loss'], label = 'Loss')
plt.plot(r.history['val_loss'], label = 'Validation Loss')
plt.legend()
plt.show()
# accuracy per iteration
plt.plot(r.history['accuracy'], label = 'Accuracy')
plt.plot(r.history['val_accuracy'], label = 'Validation accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____ |
DAP_Activities.ipynb | ###Markdown
Failte Ireland API
###Code
pip install requests
import requests, json
# Connect to Data.Gov Activities API and print response
response = requests.get("https://failteireland.portal.azure-api.net/docs/services/opendata-api-v1/operations/activities-get")
print(response.status_code)
print(response)
# Review if data we received back from the API is JSON:
url = "https://failteireland.portal.azure-api.net/docs/services/opendata-api-v1/operations/activities-get"
r = requests.get(url)
if 'json' in r.headers.get('Content-Type'):
js = r.json()
else:
print('Response content is not in JSON format.')
js = 'spam'
#Read non JSON API data using URL Parse
import http.client, urllib.request, urllib.parse, urllib.error, base64
import pandas as pd
headers = {}
params = urllib.parse.urlencode({})
jsondata = []
try:
conn = http.client.HTTPSConnection('failteireland.azure-api.net')
conn.request("GET", "/opendata-api/v1/activities?%s" % params, "{body}", headers)
response = conn.getresponse()
data = response.read()
# for r in data:
# print(r)
jsond = json.loads(data)
jsondata.append(jsond)
print(jsondata)
conn.close()
except Exception as e:
print("[Errno {0}] {1}".format(e.errno, e.strerror))
# Reattempt to read the API data into a pandas dataframe
import urllib.request as request
import pandas as pd
import json
Name, Telephone = [], []
with request.urlopen('https://failteireland.azure-api.net/opendata-api/v1/activities') as response:
    source = response.read()
try:
    # Parse the payload once and walk through the 'results' list
    jsond = json.loads(source)
    for i in jsond['results']:
        Name.append(i['name'])
        Telephone.append(i['telephone'])
    df = pd.DataFrame({'Name': Name, 'Telephone': Telephone})
except Exception as e:
    print(e)
# Panda Flatten Script
from itertools import chain, starmap
def flatten_json_iterative_solution(dictionary):
"""Flatten a nested json file"""
def unpack(parent_key, parent_value):
"""Unpack one level of nesting in json file"""
# Unpack one level only!!!
if isinstance(parent_value, dict):
for key, value in parent_value.items():
temp1 = parent_key + '_' + key
yield temp1, value
elif isinstance(parent_value, list):
i = 0
for value in parent_value:
temp2 = parent_key + '_'+str(i)
i += 1
yield temp2, value
else:
yield parent_key, parent_value
# Keep iterating until the termination condition is satisfied
while True:
# Keep unpacking the json file until all values are atomic elements (not dictionary or list)
dictionary = dict(chain.from_iterable(starmap(unpack, dictionary.items())))
# Terminate condition: not any value in the json file is dictionary or list
if not any(isinstance(value, dict) for value in dictionary.values()) and \
not any(isinstance(value, list) for value in dictionary.values()):
break
return dictionary
### Did not need to use this in the end
# Dataframe for API
activities = pd.DataFrame()
activities['name'] = list(map(lambda jsond: jsond['name'], jsondata))
activities['url'] = list(map(lambda jsond: jsond['url'], jsondata))
activities['AddressRegion'] = list(map(lambda jsond: jsond['AddressRegion'], jsondata))
activities['AddressLocality'] = list(map(lambda jsond: jsond['AddressLocality'], jsondata))
activities.head()
### Did not need to use this in the end as I exported to CSV as shown below
###Output
_____no_output_____
###Markdown
Exporting to CSV
###Code
# Python program to convert JSON file to CSV
import json
import csv
# Opening JSON file and loading the data
# into the variable data
with open('activities.json') as json_file:
data = json.load(json_file)
activities_data = data['results']
# now we will open a file for writing
data_file = open('activities_file.csv', 'w')
# create the csv writer object
csv_writer = csv.writer(data_file)
# Counter variable used for writing
# headers to the CSV file
count = 0
for results in activities_data:
if count == 0:
# Writing headers of CSV file
header = results.keys()
csv_writer.writerow(header)
count += 1
# Writing data of CSV file
csv_writer.writerow(results.values())
data_file.close()
# Reading activities_file dataset
import pandas as pd
import numpy as np
act_data = pd.read_csv(r"C:\Users\korpe\Desktop\Work\Activities\Activities_Failte.csv")
#print(act_data)
#Dealing with headers
print(act_data.head(5))
# Saving the csv into a dataFrame
activitiesfile = "C://Users/korpe/Desktop/Work/Activities/Activities_Failte.csv"
activities_df = pd.read_csv(activitiesfile)
print(activities_df)
# View the head of the DataFrame
print(activities_df.head())
# Using Pandas, create an array from the dataframe
activities_df_array = activities_df.values
print(activities_df.head())
# Check the datatype of activities_df_array
print(type(activities_df_array))
# Print information of 'activities_df'
activities_df.info()
# Before handling missing values in dataframe, will import to Mongodb to save data
###Output
_____no_output_____
###Markdown
Import into MongoDB
###Code
# Saving pandas dataframe into json before sending to MongoDB
import numpy as np
import pandas as pd
dataFrame = activities_df
dataFrame.to_json(r'C:\Users\korpe\Desktop\Work\Activities\Activities_Failte.json')
# Read json into mongdb
import json
import pymongo
import pandas as pd
from pymongo import MongoClient
# Making Connection
myclient = pymongo.MongoClient('192.168.56.30', 27017)
# database
db = myclient.activitiesdb
# Created or Switched to collection names
Collection = db.activitiescollection
# Loading or Opening the dataframe
with open('Activities_Failte.json') as file:
file_data = json.load(file)
# Inserting the loaded data in the Collection if JSON contains data more than one entry insert_many is used else inser_one is used
if isinstance(file_data, list):
Collection.insert_many(file_data)
else:
Collection.insert_one(file_data)
# Print Output of activities db to confirm Mongodb load
for m in Collection.find({}):
print(m)
# Same procedure used to upload into a new Collection after data wrangling step below.
###Output
_____no_output_____
###Markdown
Data Wrangling
###Code
# Using Numpy, create an array from the dataframe
activities_df_array = activities_df.values
print(activities_df.head())
# Check the datatype of activities_df_array
print(type(activities_df_array))
# Print information of 'activities_df'
activities_df.info()
# There are a total of 5413 rows and 9 columns in the dataframe.
# Before handling missing data, create copy of activities_df
activities_df_missing = activities_df.copy()
df = activities_df_missing
#activities_df.head()
# For the merge-join of weather data for the project, the critical data is the Address region.
# We will need to replace NaN in the AddressRegion column with correct region data.
# Identify missing values in the AddressRegion column
print (df[df['AddressRegion'].isna()])
# Select all rows with NaN under AddressRegion DataFrame column
nan_values = df[df['AddressRegion'].isna()]
print (nan_values)
# Confirm all empty fields using iloc function beginning with the First
df.iloc[379][5]
# Replace NaN values with a Region label based on the locality and URL
df.iat[379,5] = 'Cork'
df.iat[2186,5] = 'Cork'
df.iat[2191,5] = 'Donegal'
df.iat[2298,5] = 'Cork'
df.iat[2358,5] = 'Offaly'
df.iat[2908,5] = 'Wicklow'
df.iat[3167,5] = 'Mayo'
df.iat[3682,5] = 'Cork'
df.iat[3959,5] = 'Galway'
df.iat[3993,5] = 'Cork'
# Confirm No NaN are present in the AddressRegion Column
nan_values = df[df['AddressRegion'].isna()]
print (nan_values)
# The remaning NaN are acceptable as is some cases there are no fields present for these.
###Output
Empty DataFrame
Columns: [Name, Url, Telephone, Longitude, Latitude, AddressRegion, AddressLocality, AddressCountry, Tags]
Index: []
###Markdown
Miscellaneous
###Code
#Instructions for Pandas Dataframe
def flatten_dict(d):
""" Returns list of lists from given dictionary """
l = []
for k, v in sorted(d.items()):
if isinstance(v, dict):
flatten_v = flatten_dict(v)
for my_l in reversed(flatten_v):
my_l.insert(0, k)
l.extend(flatten_v)
elif isinstance(v, list):
for l_val in v:
l.append([k, l_val])
else:
l.append([k, v])
return l
df = pd.DataFrame(flatten_dict(jsond))
print(df)
#type(jsondata)
json_string = json.dumps(jsondata)
print(json_string)
###Output
_____no_output_____ |
notebooks/05_random_baseline.ipynb | ###Markdown
Random Guess using one label per movie Test datasets
###Code
# Random guess using multi-classes (one label per movie)
l_y_pred = []
l_gen_art = []
for i in range(0, len(df_test)):
gen = np.zeros(len(GENRE_COLS))
pos = np.random.randint(0, len(GENRE_COLS))
gen[pos] = 1
l_gen_art.append(gen)
y_pred_test = np.array(l_gen_art)
accuracy_score(y_true_test, y_pred_test, False)
###Output
0.05357628501495785
###Markdown
Training datasets
###Code
# Random guess using multi-classes (one label per movie)
l_y_pred = []
l_gen_art = []
for i in range(0, len(df_train)):
gen = np.zeros(len(GENRE_COLS))
pos = np.random.randint(0, len(GENRE_COLS))
gen[pos] = 1
l_gen_art.append(gen)
y_pred_train = np.array(l_gen_art)
accuracy_score(y_true_train, y_pred_train, False)
###Output
0.05210812648758926
###Markdown
Random Guess using up to 3 labels per movie Test datasets
###Code
# Random guess using multi-labels following the distribution (up to 3 labels per movie)
l_count_gen = hlp.get_dist_of_simple_genre_combis(df_test, const.GENRE_OHE_COLS, True)
l_gen_art = []
np.random.seed(const.SEED)
for num_gen, count_gen in enumerate(l_count_gen):
for count in range(0, count_gen):
gen = np.zeros(len(GENRE_COLS))
for i in range(0, num_gen):
pos = np.random.randint(0, len(GENRE_COLS))
gen[pos] = 1
l_gen_art.append(gen)
df_tmp = pd.DataFrame(l_gen_art)
y_pred_test = df_tmp.to_numpy()
accuracy_score(y_true_test, y_pred_test, False)
###Output
Number of movies holding 0 genres: 0
Number of movies holding 1 genres: 3553
Number of movies holding 2 genres: 124
Number of movies holding 3 genres: 0
Number of movies holding 4 genres: 0
Number of movies holding 5 genres: 0
Number of movies holding 6 genres: 0
Number of movies holding 7 genres: 0
Number of movies holding 8 genres: 0
Number of movies holding 9 genres: 0
Number of movies holding 10 genres: 0
Number of movies holding 11 genres: 0
Number of movies holding 12 genres: 0
Number of movies holding 13 genres: 0
Number of movies holding 14 genres: 0
Number of movies holding 15 genres: 0
Number of movies holding 16 genres: 0
Number of movies holding 17 genres: 0
Number of movies holding 18 genres: 0
0.04813706826217025
###Markdown
Training datasets
###Code
# Random guess using multi-labels following the distribution (up to 3 labels per movie)
l_count_gen = hlp.get_dist_of_simple_genre_combis(df_train, const.GENRE_OHE_COLS, True)
l_gen_art = []
np.random.seed(const.SEED)
for num_gen, count_gen in enumerate(l_count_gen):
for count in range(0, count_gen):
gen = np.zeros(len(GENRE_COLS))
for i in range(0, num_gen):
pos = np.random.randint(0, len(GENRE_COLS))
gen[pos] = 1
l_gen_art.append(gen)
df_tmp = pd.DataFrame(l_gen_art)
y_pred_train = df_tmp.to_numpy()
accuracy_score(y_true_train, y_pred_train, False)
###Output
Number of movies holding 0 genres: 0
Number of movies holding 1 genres: 11382
Number of movies holding 2 genres: 375
Number of movies holding 3 genres: 7
Number of movies holding 4 genres: 0
Number of movies holding 5 genres: 0
Number of movies holding 6 genres: 0
Number of movies holding 7 genres: 0
Number of movies holding 8 genres: 0
Number of movies holding 9 genres: 0
Number of movies holding 10 genres: 0
Number of movies holding 11 genres: 0
Number of movies holding 12 genres: 0
Number of movies holding 13 genres: 0
Number of movies holding 14 genres: 0
Number of movies holding 15 genres: 0
Number of movies holding 16 genres: 0
Number of movies holding 17 genres: 0
Number of movies holding 18 genres: 0
0.04981298877932676
|
CGATPipelines/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_report_contents.ipynb | ###Markdown
Peakcalling Peak Stats
================================================================

This notebook is for the analysis of outputs from the peakcalling pipeline relating to the quality of the peakcalling steps.

There are several stats that you want collected and graphed - you can click on the links below to find the jupyter **notebooks** where you can directly interact with the code or the **html** files that can be opened in your web browser.

Stats you should be interested in are:

Quality of Bam files for Peakcalling
------------------------------------
- how many reads input: [notebook](./1_peakcalling_filtering_Report.ipynb) [html](./1_peakcalling_filtering_Report.html)
- how many reads removed at each step (numbers and percentages): [notebook](./1_peakcalling_filtering_Report.ipynb) [html](./1_peakcalling_filtering_Report.html)
- how many reads left after filtering: [notebook](./1_peakcalling_filtering_Report.ipynb) [html](./1_peakcalling_filtering_Report.html)
- how many reads mapping to each chromosome before filtering?: [notebook](./2_peakcalling_filtering_Report_reads_per_chr.ipynb) [html](./2_peakcalling_filtering_Report_reads_per_chr.html)
- how many reads mapping to each chromosome after filtering?: [notebook](./2_peakcalling_filtering_Report_reads_per_chr.ipynb) [html](./2_peakcalling_filtering_Report_reads_per_chr.html)
- X:Y reads ratio: [notebook](./2_peakcalling_filtering_Report_reads_per_chr.ipynb) [html](./2_peakcalling_filtering_Report_reads_per_chr.html)
- insert size distribution after filtering for PE reads: [notebook](./3_peakcalling_filtering_Report_insert_sizes.ipynb) [html](./3_peakcalling_filtering_Report_insert_sizes.html)
- samtools flags - check how many reads are in categories they shouldn't be: [notebook](./1_peakcalling_filtering_Report.ipynb) [html](./1_peakcalling_filtering_Report.html)
- picard stats - check how many reads are in categories they shouldn't be

Peakcalling stats
-----------------
- Number of peaks called in each sample: [notebook](./4_peakcalling_peakstats.ipynb) [html](./4_peakcalling_peakstats.html)
- Number of reads in peaks: [notebook](./4_peakcalling_peakstats.ipynb) [html](./4_peakcalling_peakstats.html)
- Size distribution of the peaks
- Location of peaks
- correlation of peaks between samples
- other things?
- IDR stats
- What peak lists are the best

This notebook takes the sqlite3 database created by CGAT peakcalling_pipeline.py and uses it for plotting the above statistics.

It assumes a file directory of:

    location of database = project_folder/csvdb
    location of this notebook = project_folder/notebooks.dir/

Firstly lets load all the things that might be needed.

This is where we are and when the notebook was run
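A minimal way to open that database (a sketch assuming the directory layout above; the tables present will depend on your pipeline run) is:

```python
# Sketch: connect to the pipeline csvdb (one level up from notebooks.dir/)
# and list the tables available for the plots below.
import sqlite3
import pandas as pd

db = sqlite3.connect("../csvdb")
print(pd.read_sql("SELECT name FROM sqlite_master WHERE type='table';", db))
```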
###Code
!pwd
!date
###Output
/Users/charlotteg/Documents/7_BassonProj/Mar17
Sat 11 Mar 2017 19:50:55 GMT
|
train_dsb2018.ipynb | ###Markdown
Mask R-CNN - Train on the DSB2018 Nuclei Dataset

This notebook shows how to train Mask R-CNN on your own dataset - here, the 2018 Data Science Bowl (DSB2018) nuclei images - by adapting the original "train on shapes" example. You'd still need a GPU, because the network backbone is a Resnet101, which would be too slow to train on a CPU.

The code of the DSB2018 dataset class is included below. It loads each sample's image and its per-nucleus instance masks from disk.
###Code
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
from config import Config
import utils
import model as modellib
import visualize
from model import log
%matplotlib inline
# Root directory of the project
ROOT_DIR = os.getcwd()
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
###Output
_____no_output_____
###Markdown
Configurations
###Code
class DSB2018Config(Config):
"""Configuration for training on the toy shapes dataset.
Derives from the base Config class and overrides values specific
to the toy shapes dataset.
"""
# Give the configuration a recognizable name
NAME = "DSB2018"
    # Train on 1 GPU and 1 image per GPU.
    # Batch size is 1 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 1
# Number of classes (including background)
    NUM_CLASSES = 1 + 1  # background + nucleus
# Use small images for faster training. Set the limits of the small side
# the large side, and that determines the image shape.
IMAGE_MIN_DIM = 1024
IMAGE_MAX_DIM = 1024
# Use smaller anchors because our image and objects are small
RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
# Reduce training ROIs per image because the images are small and have
# few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
TRAIN_ROIS_PER_IMAGE = 32
# Use a small epoch since the data is simple
STEPS_PER_EPOCH = 100
# use small validation steps since the epoch is small
VALIDATION_STEPS = 5
config = DSB2018Config()
config.display()
###Output
Configurations:
BACKBONE_SHAPES [[256 256]
[128 128]
[ 64 64]
[ 32 32]
[ 16 16]]
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 1
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.7
DETECTION_NMS_THRESHOLD 0.3
GPU_COUNT 1
IMAGES_PER_GPU 1
IMAGE_MAX_DIM 1024
IMAGE_MIN_DIM 1024
IMAGE_PADDING True
IMAGE_SHAPE [1024 1024 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME DSB2018
NUM_CLASSES 2
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (8, 16, 32, 64, 128)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 100
TRAIN_ROIS_PER_IMAGE 32
USE_MINI_MASK False
USE_RPN_ROIS True
VALIDATION_STEPS 5
WEIGHT_DECAY 0.0001
###Markdown
Notebook Preferences
###Code
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
Dataset

Extend the Dataset class, add a method to load the DSB2018 nuclei dataset, `load_DSB2018()`, and override the following methods:

* load_image()
* load_mask()
* image_reference()
###Code
class DSB2018Dataset(utils.Dataset):
"""Generates the shapes synthetic dataset. The dataset consists of simple
shapes (triangles, squares, circles) placed randomly on a blank surface.
The images are generated on the fly. No file access required.
"""
def load_DSB2018(self, data_dir, set_name, config=None):
"""Generate the requested number of synthetic images.
count: number of images to generate.
height, width: the size of the generated images.
"""
# Add classes
self.add_class("DSB2018", 1, "nuclie")
# Add images
# Generate random specifications of images (i.e. color and
# list of shapes sizes and locations). This is more compact than
# actual images. Images are generated on the fly in load_image().
filenames = np.genfromtxt(set_name, dtype=str)
for i in range(len(filenames)):
self.add_image("DSB2018", image_id=i, path=None, channels=3, data_dir=data_dir, name=filenames[i])
def load_image(self, image_id):
"""Generate an image from the specs of the given image ID.
Typically this function loads the image from a file, but
in this case it generates the image on the fly from the
specs in image_info.
"""
info = self.image_info[image_id]
data_dir = info['data_dir']
name = info['name']
channels = info['channels']
imgs = os.listdir(os.path.join(data_dir, name, 'images'))
img = plt.imread(os.path.join(data_dir, name, 'images', '%s' % (imgs[0])))
img = img[:,:,:channels] * 256
return img
def image_reference(self, image_id):
"""Return the shapes data of the image."""
info = self.image_info[image_id]
if info["source"] == "shapes":
return info["shapes"]
else:
super(self.__class__).image_reference(self, image_id)
def load_mask(self, image_id):
"""Generate instance masks for shapes of the given image ID.
"""
info = self.image_info[image_id]
data_dir = info['data_dir']
name = info['name']
mask_files = os.listdir(os.path.join(data_dir, name, 'masks'))
masks = []
for i in range(len(mask_files)):
mask = plt.imread(os.path.join(data_dir, name, 'masks', '%s' % (mask_files[i])))
masks.append(mask)
class_ids = np.ones(len(masks))
masks = np.array(masks)
masks = np.moveaxis(masks, 0, -1)
return masks, class_ids.astype(np.int32)
data_dir = '/home/htang6/data/dsb2018/stage1_train/'
# Training dataset
set_name = '/home/htang6/workspace/Mask_RCNN/filenames/filenames_train.csv'
dataset_train = DSB2018Dataset()
dataset_train.load_DSB2018(data_dir, set_name)
dataset_train.prepare()
# Validation dataset
set_name = '/home/htang6/workspace/Mask_RCNN/filenames/filenames_val.csv'
dataset_val = DSB2018Dataset()
dataset_val.load_DSB2018(data_dir, set_name)
dataset_val.prepare()
print(dataset_train.load_image(0).shape)
print(dataset_train.load_mask(0)[0].shape)
print(dataset_train.load_mask(0)[1])
plt.imshow(dataset_train.load_image(0))
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
image_ids = [16, 31, 2, 3]
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
###Output
_____no_output_____
###Markdown
Create Model
###Code
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last()[1], by_name=True)
###Output
_____no_output_____
###Markdown
Training

Train in two stages:

1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones for which we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For this simple example it's not strictly necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
###Code
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=100,
layers="all")
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
# model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5")
# model.keras_model.save_weights(model_path)
###Output
_____no_output_____
###Markdown
Detection
###Code
class InferenceConfig(DSB2018Config):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()[1]
# Load trained weights (fill in path to trained weights here)
assert model_path != "", "Provide path to trained weights"
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# Test on a random image
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
# image_id = 16
# original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
# modellib.load_image_gt(dataset_train, inference_config,
# image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
img = dataset_val.load_image(image_id)
results = model.detect([img], verbose=1)
r = results[0]
visualize.display_instances(img, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], ax=get_ax())
def rle_encoding(x):
    """Run-length encode a binary mask (column-major order, 1-based indices)."""
    dots = np.where(x.T.flatten() == 1)[0]  # indices of foreground pixels, column-major
    run_lengths = []
    prev = -2
    for b in dots:
        if (b > prev + 1):
            run_lengths.extend((b + 1, 0))  # start a new run at the 1-based position
        run_lengths[-1] += 1                # extend the current run
        prev = b
    return run_lengths
print(r['masks'].shape)
print(r['scores'])
masks = r['masks']
# whole = np.zeros(shape=masks.shape[:2])
# for i in range(masks.shape[-1]):
# whole = np.logical_or(whole, masks[:,:,i])
# plt.imshow(whole)
reduced = []
for i in range(masks.shape[-1]):
mask = np.copy(masks[:,:,i])
for j in range(len(reduced)):
intersection = mask & reduced[j]
if np.any(intersection):
# print('Overlap!!')
# print(np.where(intersection))
mask -= intersection
# plt.imshow(intersection)
# plt.show()
# plt.imshow(mask)
# plt.show()
# plt.imshow(reduced[j])
# plt.show()
if np.any(mask):
reduced.append(mask)
rles = []
test_ids = []
for m in reduced:
rles.append(rle_encoding(m))
test_ids.extend(['test'] * len(reduced))
import pandas as pd
sub = pd.DataFrame()
sub['ImageId'] = test_ids
sub['EncodedPixels'] = pd.Series(rles).apply(lambda x: ' '.join(str(y) for y in x))
sub.to_csv('sub-dsbowl2018-1.csv', index=False)
print(sub)
###Output
(256, 320, 32)
[0.9971697 0.99582964 0.9945339 0.99331784 0.99328107 0.9927954
0.9921508 0.9920859 0.9911851 0.99046665 0.9874351 0.9834874
0.9815144 0.9805147 0.9783029 0.978157 0.9683178 0.95240587
0.9431202 0.94237137 0.9397261 0.9273334 0.8978374 0.86357665
0.858545 0.831884 0.82994 0.82985514 0.7778135 0.77081865
0.7599422 0.710297 ]
ImageId EncodedPixels
0 test 71141 7 71394 18 71649 20 71904 22 72159 24 72...
1 test 67337 10 67592 16 67847 18 68102 20 68357 22 6...
2 test 52479 1 52734 2 52989 3 53245 3 53500 4 53755 ...
3 test 48568 8 48822 12 49076 15 49330 18 49585 21 49...
4 test 44976 8 45231 10 45485 12 45740 13 45995 14 46...
5 test 24424 9 24675 19 24928 23 25182 25 25436 27 25...
6 test 52347 15 52600 19 52855 21 53111 22 53367 23 5...
7 test 68294 11 68548 14 68803 15 69057 17 69312 18 6...
8 test 36086 8 36340 11 36594 14 36847 17 37097 23 37...
9 test 58836 5 59090 9 59345 11 59601 12 59856 13 601...
10 test 76313 3 76567 6 76822 8 77077 10 77332 11 7758...
11 test 60094 6 60348 10 60602 12 60856 15 61110 17 61...
12 test 35812 4 36066 7 36320 9 36573 12 36826 15 3708...
13 test 72294 4 72548 7 72803 8 73058 9 73313 10 73568...
14 test 51669 3 51923 8 52178 11 52433 14 52688 18 529...
15 test 29860 10 29874 7 30114 24 30368 27 30621 30 30...
16 test 69771 2 70022 7 70275 11 70529 15 70784 16 710...
17 test 60207 5 60460 16 60716 17 60971 19 61227 19 61...
18 test 49023 7 49278 8 49532 11 49787 12 50043 12 502...
19 test 49472 7 49726 10 49981 11 50236 12 50491 13 50...
20 test 73217 2 73473 3 73729 4 73985 5 74241 6 74497 ...
21 test 71445 2 71700 4 71953 8 72205 13 72459 15 7271...
22 test 61585 6 61839 9 62094 10 62350 10 62605 10 628...
23 test 57190 6 57444 9 57700 9 57956 10 58212 10 5846...
24 test 48377 3 48631 6 48887 7 49143 8 49399 9 49655 ...
25 test 41468 3 41723 5 41979 5 42234 6 42489 7 42744 ...
26 test 60753 6 61006 10 61260 13 61515 14 61770 15 62...
27 test 47340 4 47595 7 47850 10 48106 11 48363 10 486...
28 test 58992 5 59247 7 59503 8 59758 10 60015 10 6027...
29 test 15871 1 16127 1 16382 2 16638 2 16893 3 17149 ...
30 test 81069 11 81322 16 81577 17 81832 18
31 test 68494 6 68750 8 69005 10 69261 10 69516 12 697...
###Markdown
Evaluation
###Code
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
# Load image and ground truth data
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
# Run object detection
results = model.detect([image], verbose=0)
r = results[0]
# Compute AP
AP, precisions, recalls, overlaps =\
utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
r["rois"], r["class_ids"], r["scores"], r['masks'])
APs.append(AP)
print("mAP: ", np.mean(APs))
###Output
_____no_output_____ |
deep-learning/Tensorflow-2.x/Examples/TensorFlow_2_0_+_Keras_Crash_Course.ipynb | ###Markdown
###Code
!pip install tensorflow==2.0.0
import tensorflow as tf
print(tf.__version__)
###Output
2.0.0
###Markdown
TensorFlow 2.0 + Keras Overview for Deep Learning Researchers

*@fchollet, October 2019*

---

**This document serves as an introduction, crash course, and quick API reference for TensorFlow 2.0.**

---

TensorFlow and Keras were both released over four years ago (March 2015 for Keras and November 2015 for TensorFlow). That's a long time in deep learning years!

In the old days, TensorFlow 1.x + Keras had a number of known issues:

- Using TensorFlow meant manipulating static computation graphs, which would feel awkward and difficult to programmers used to imperative styles of coding.
- While the TensorFlow API was very powerful and flexible, it lacked polish and was often confusing or difficult to use.
- While Keras was very productive and easy to use, it would often lack flexibility for research use cases.

---

TensorFlow 2.0 is an extensive redesign of TensorFlow and Keras that takes into account over four years of user feedback and technical progress. It fixes the issues above in a big way. It's a machine learning platform from the future.

---

TensorFlow 2.0 is built on the following key ideas:

- Let users run their computation eagerly, like they would in Numpy. This makes TensorFlow 2.0 programming intuitive and Pythonic.
- Preserve the considerable advantages of compiled graphs (for performance, distribution, and deployment). This makes TensorFlow fast, scalable, and production-ready.
- Leverage Keras as its high-level deep learning API, making TensorFlow approachable and highly productive.
- Extend Keras into a spectrum of workflows ranging from the very high-level (easier to use, less flexible) to the very low-level (requires more expertise, but provides great flexibility).

Part 1: TensorFlow basics

Tensors

This is a [constant](https://www.tensorflow.org/api_docs/python/tf/constant) tensor:
###Code
x = tf.constant([[5, 2], [1, 3]])
print(x)
###Output
tf.Tensor(
[[5 2]
[1 3]], shape=(2, 2), dtype=int32)
###Markdown
You can get its value as a Numpy array by calling `.numpy()`:
###Code
x.numpy()
###Output
_____no_output_____
###Markdown
Much like a Numpy array, it features the attributes `dtype` and `shape`:
###Code
print('dtype:', x.dtype)
print('shape:', x.shape)
###Output
dtype: <dtype: 'int32'>
shape: (2, 2)
###Markdown
A common way to create constant tensors is via `tf.ones` and `tf.zeros` (just like `np.ones` and `np.zeros`):
###Code
print(tf.ones(shape=(2, 1)))
print(tf.zeros(shape=(2, 1)))
###Output
tf.Tensor(
[[1.]
[1.]], shape=(2, 1), dtype=float32)
tf.Tensor(
[[0.]
[0.]], shape=(2, 1), dtype=float32)
###Markdown
Random constant tensors This is all pretty [normal](https://www.tensorflow.org/api_docs/python/tf/random/normal):
###Code
tf.random.normal(shape=(2, 2), mean=0., stddev=1.)
###Output
_____no_output_____
###Markdown
And here's an integer tensor with values drawn from a random [uniform](https://www.tensorflow.org/api_docs/python/tf/random/uniform) distribution:
###Code
tf.random.uniform(shape=(2, 2), minval=0, maxval=10, dtype='int32')
###Output
_____no_output_____
###Markdown
Variables [Variables](https://www.tensorflow.org/guide/variable) are special tensors used to store mutable state (like the weights of a neural network). You create a Variable using some initial value.
###Code
initial_value = tf.random.normal(shape=(2, 2))
a = tf.Variable(initial_value)
print(a)
###Output
<tf.Variable 'Variable:0' shape=(2, 2) dtype=float32, numpy=
array([[-0.16258094, -0.52607477],
[ 0.69424504, 2.1672049 ]], dtype=float32)>
###Markdown
You update the value of a Variable by using the methods `.assign(value)`, or `.assign_add(increment)` or `.assign_sub(decrement)`:
###Code
new_value = tf.random.normal(shape=(2, 2))
a.assign(new_value)
for i in range(2):
for j in range(2):
assert a[i, j] == new_value[i, j]
added_value = tf.random.normal(shape=(2, 2))
a.assign_add(added_value)
for i in range(2):
for j in range(2):
assert a[i, j] == new_value[i, j] + added_value[i, j]
###Output
_____no_output_____
###Markdown
Doing math in TensorFlow You can use TensorFlow exactly like you would use Numpy. The main difference is that your TensorFlow code can run on GPU and TPU.
###Code
a = tf.random.normal(shape=(2, 2))
b = tf.random.normal(shape=(2, 2))
c = a + b
d = tf.square(c)
e = tf.exp(d)
###Output
_____no_output_____
###Markdown
Computing gradients with `GradientTape` Oh, and there's another big difference with Numpy: you can automatically retrieve the gradient of any differentiable expression.Just open a [`GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape), start "watching" a tensor via `tape.watch()`, and compose a differentiable expression using this tensor as input:
###Code
a = tf.random.normal(shape=(2, 2))
b = tf.random.normal(shape=(2, 2))
with tf.GradientTape() as tape:
tape.watch(a) # Start recording the history of operations applied to `a`
c = tf.sqrt(tf.square(a) + tf.square(b)) # Do some math using `a`
# What's the gradient of `c` with respect to `a`?
dc_da = tape.gradient(c, a)
print(dc_da)
###Output
tf.Tensor(
[[0.60742545 0.39205843]
[0.73241967 0.11925844]], shape=(2, 2), dtype=float32)
###Markdown
By default, variables are watched automatically, so you don't need to manually `watch` them:
###Code
a = tf.Variable(a)
with tf.GradientTape() as tape:
c = tf.sqrt(tf.square(a) + tf.square(b))
dc_da = tape.gradient(c, a)
print(dc_da)
###Output
tf.Tensor(
[[0.60742545 0.39205843]
[0.73241967 0.11925844]], shape=(2, 2), dtype=float32)
###Markdown
Note that you can compute higher-order derivatives by nesting tapes:
###Code
with tf.GradientTape() as outer_tape:
with tf.GradientTape() as tape:
c = tf.sqrt(tf.square(a) + tf.square(b))
dc_da = tape.gradient(c, a)
d2c_da2 = outer_tape.gradient(dc_da, a)
print(d2c_da2)
###Output
tf.Tensor(
[[0.5418279 0.68627995]
[0.3451818 0.3664849 ]], shape=(2, 2), dtype=float32)
###Markdown
An end-to-end example: linear regression So far you've learned that TensorFlow is a Numpy-like library that is GPU or TPU accelerated, with automatic differentiation. Time for an end-to-end example: let's implement a linear regression, the FizzBuzz of Machine Learning. For the sake of demonstration, we won't use any of the higher-level Keras components like `Layer` or `MeanSquaredError`. Just basic ops.
###Code
input_dim = 2
output_dim = 1
learning_rate = 0.01
# This is our weight matrix
w = tf.Variable(tf.random.uniform(shape=(input_dim, output_dim)))
# This is our bias vector
b = tf.Variable(tf.zeros(shape=(output_dim,)))
def compute_predictions(features):
return tf.matmul(features, w) + b
def compute_loss(labels, predictions):
return tf.reduce_mean(tf.square(labels - predictions))
def train_on_batch(x, y):
with tf.GradientTape() as tape:
predictions = compute_predictions(x)
loss = compute_loss(y, predictions)
# Note that `tape.gradient` works with a list as well (w, b).
dloss_dw, dloss_db = tape.gradient(loss, [w, b])
w.assign_sub(learning_rate * dloss_dw)
b.assign_sub(learning_rate * dloss_db)
return loss
###Output
_____no_output_____
###Markdown
Let's generate some artificial data to demonstrate our model:
###Code
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
# Prepare a dataset.
num_samples = 10000
negative_samples = np.random.multivariate_normal(
mean=[0, 3], cov=[[1, 0.5],[0.5, 1]], size=num_samples)
positive_samples = np.random.multivariate_normal(
mean=[3, 0], cov=[[1, 0.5],[0.5, 1]], size=num_samples)
features = np.vstack((negative_samples, positive_samples)).astype(np.float32)
labels = np.vstack((np.zeros((num_samples, 1), dtype='float32'),
np.ones((num_samples, 1), dtype='float32')))
plt.scatter(features[:, 0], features[:, 1], c=labels[:, 0])
###Output
_____no_output_____
###Markdown
Now let's train our linear regression by iterating batch by batch over the data and repeatedly calling `train_on_batch`:
###Code
# Shuffle the data.
indices = np.random.permutation(len(features))
features = features[indices]
labels = labels[indices]
# Create a tf.data.Dataset object for easy batched iteration
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=1024).batch(256)
for epoch in range(10):
for step, (x, y) in enumerate(dataset):
loss = train_on_batch(x, y)
print('Epoch %d: last batch loss = %.4f' % (epoch, float(loss)))
###Output
Epoch 0: last batch loss = 0.0777
Epoch 1: last batch loss = 0.0337
Epoch 2: last batch loss = 0.0326
Epoch 3: last batch loss = 0.0287
Epoch 4: last batch loss = 0.0334
Epoch 5: last batch loss = 0.0261
Epoch 6: last batch loss = 0.0307
Epoch 7: last batch loss = 0.0155
Epoch 8: last batch loss = 0.0230
Epoch 9: last batch loss = 0.0205
###Markdown
Here's how our model performs:
###Code
predictions = compute_predictions(features)
plt.scatter(features[:, 0], features[:, 1], c=predictions[:, 0] > 0.5)
###Output
_____no_output_____
###Markdown
Making it fast with `tf.function` But how fast is our current code running?
###Code
import time
t0 = time.time()
for epoch in range(20):
for step, (x, y) in enumerate(dataset):
loss = train_on_batch(x, y)
t_end = time.time() - t0
print('Time per epoch: %.3f s' % (t_end / 20,))
###Output
Time per epoch: 0.140 s
###Markdown
Let's compile the training function into a static graph. Literally all we need to do is add the `tf.function` decorator on it:
###Code
@tf.function
def train_on_batch(x, y):
with tf.GradientTape() as tape:
predictions = compute_predictions(x)
loss = compute_loss(y, predictions)
dloss_dw, dloss_db = tape.gradient(loss, [w, b])
w.assign_sub(learning_rate * dloss_dw)
b.assign_sub(learning_rate * dloss_db)
return loss
###Output
_____no_output_____
###Markdown
Let's try this again:
###Code
t0 = time.time()
for epoch in range(20):
for step, (x, y) in enumerate(dataset):
loss = train_on_batch(x, y)
t_end = time.time() - t0
print('Time per epoch: %.3f s' % (t_end / 20,))
###Output
Time per epoch: 0.085 s
###Markdown
40% reduction, neat. In this case we used a trivially simple model; in general the bigger the model the greater the speedup you can get by leveraging static graphs.Remember: eager execution is great for debugging and printing results line-by-line, but when it's time to scale, static graphs are a researcher's best friends. Part 2: The Keras API Keras is a Python API for deep learning. It has something for everyone:- If you're an engineer, Keras provides you with reusable blocks such as layers, metrics, training loops, to support common use cases. It provides a high-level user experience that's accessible and productive.- If you're a researcher, you may prefer not to use these built-in blocks such as layers and training loops, and instead create your own. Of course, Keras allows you to do this. In this case, Keras provides you with templates for the blocks you write, it provides you with structure, with an API standard for things like Layers and Metrics. This structure makes your code easy to share with others and easy to integrate in production workflows.- The same is true for library developers: TensorFlow is a large ecosystem. It has many different libraries. In order for different libraries to be able to talk to each other and share components, they need to follow an API standard. That's what Keras provides.Crucially, Keras brings high-level UX and low-level flexibility together fluently: you no longer have on one hand, a high-level API that's easy to use but inflexible, and on the other hand a low-level API that's flexible but only approachable by experts. Instead, you have a spectrum of workflows, from the very high-level to the very low-level. Workflows that are all compatible because they're built on top of the same concepts and objects.![Spectrum of Keras workflows](https://keras-dev.s3.amazonaws.com/tutorials-img/spectrum-of-workflows.png) The base `Layer` classThe first class you need to know is [`Layer`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer). Pretty much everything in Keras derives from it.A Layer encapsulates a state (weights) and some computation (defined in the `call` method).
###Code
from tensorflow.keras.layers import Layer
class Linear(Layer):
"""y = w.x + b"""
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype='float32'),
trainable=True)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype='float32'),
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
# Instantiate our layer.
linear_layer = Linear(4, 2)
###Output
_____no_output_____
###Markdown
A layer instance works like a function. Let's call it on some data:
###Code
y = linear_layer(tf.ones((2, 2)))
assert y.shape == (2, 4)
###Output
_____no_output_____
###Markdown
The `Layer` class takes care of tracking the weights assigned to it as attributes:
###Code
# Weights are automatically tracked under the `weights` property.
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
###Output
_____no_output_____
###Markdown
Note that there's also a shortcut method for creating weights: `add_weight`. Instead of doing```pythonw_init = tf.random_normal_initializer()self.w = tf.Variable(initial_value=w_init(shape=shape, dtype='float32'))```You would typically do:```pythonself.w = self.add_weight(shape=shape, initializer='random_normal')``` It’s good practice to create weights in a separate `build` method, called lazily with the shape of the first inputs seen by your layer. Here, this pattern prevents us from having to specify `input_dim` in the constructor:
###Code
class Linear(Layer):
"""y = w.x + b"""
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
# Instantiate our lazy layer.
linear_layer = Linear(4)
# This will also call `build(input_shape)` and create the weights.
y = linear_layer(tf.ones((2, 2)))
assert len(linear_layer.weights) == 2
###Output
_____no_output_____
###Markdown
Trainable and non-trainable weights Weights created by layers can be either trainable or non-trainable. They're exposed in `trainable_weights` and `non_trainable_weights`. Here's a layer with a non-trainable weight:
###Code
from tensorflow.keras.layers import Layer
class ComputeSum(Layer):
"""Returns the sum of the inputs."""
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
# Create a non-trainable weight.
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
my_sum = ComputeSum(2)
x = tf.ones((2, 2))
y = my_sum(x)
print(y.numpy()) # [2. 2.]
y = my_sum(x)
print(y.numpy()) # [4. 4.]
assert my_sum.weights == [my_sum.total]
assert my_sum.non_trainable_weights == [my_sum.total]
assert my_sum.trainable_weights == []
###Output
[2. 2.]
[4. 4.]
###Markdown
Recursively composing layers Layers can be recursively nested to create bigger computation blocks. Each layer will track the weights of its sublayers (both trainable and non-trainable).
###Code
# Let's reuse the Linear class
# with a `build` method that we defined above.
class MLP(Layer):
"""Simple stack of Linear layers."""
def __init__(self):
super(MLP, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(10)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLP()
# The first call to the `mlp` object will create the weights.
y = mlp(tf.ones(shape=(3, 64)))
# Weights are recursively tracked.
assert len(mlp.weights) == 6
###Output
_____no_output_____
###Markdown
Built-in layersKeras provides you with a [wide range of built-in layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/), so that you don't have to implement your own layers all the time.- Convolution layers- Transposed convolutions- Separable convolutions- Average and max pooling- Global average and max pooling- LSTM, GRU (with built-in cuDNN acceleration)- BatchNormalization- Dropout- Attention- ConvLSTM2D- etc. Keras follows the principle of exposing good default configurations, so that layers will work fine out of the box for most use cases if you leave keyword arguments at their default values. For instance, the `LSTM` layer uses an orthogonal recurrent matrix initializer by default, and initializes the forget gate bias to one by default. The `training` argument in `call` Some layers, in particular the `BatchNormalization` layer and the `Dropout` layer, have different behaviors during training and inference. For such layers, it is standard practice to expose a `training` (boolean) argument in the `call` method.By exposing this argument in `call`, you enable the built-in training and evaluation loops (e.g. `fit`) to correctly use the layer in training and inference.
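As a quick, purely illustrative sketch of both points -- sensible defaults and the `training` argument -- here is a snippet that uses only built-in layers; the shapes and layer sizes are arbitrary choices for demonstration:

```python
from tensorflow.keras import layers

# Defaults are enough for most cases: this LSTM gets an orthogonal recurrent
# initializer and a unit forget-gate bias without any extra arguments.
lstm = layers.LSTM(32)
y = lstm(tf.random.normal(shape=(4, 10, 8)))  # (batch, timesteps, features)
assert y.shape == (4, 32)

# Built-in layers with train/inference differences expose `training` in `call`:
drop = layers.Dropout(0.5)
x = tf.ones((2, 4))
assert tf.reduce_all(drop(x, training=False) == x)  # identity at inference time
```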
###Code
from tensorflow.keras.layers import Layer
class Dropout(Layer):
def __init__(self, rate):
super(Dropout, self).__init__()
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
class MLPWithDropout(Layer):
def __init__(self):
super(MLPWithDropout, self).__init__()
self.linear_1 = Linear(32)
self.dropout = Dropout(0.5)
self.linear_3 = Linear(10)
def call(self, inputs, training=None):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.dropout(x, training=training)
return self.linear_3(x)
mlp = MLPWithDropout()
y_train = mlp(tf.ones((2, 2)), training=True)
y_test = mlp(tf.ones((2, 2)), training=False)
###Output
_____no_output_____
###Markdown
A more Functional way of defining models To build deep learning models, you don't have to use object-oriented programming all the time. Layers can also be composed functionally, like this (we call it the "Functional API"):
###Code
# We use an `Input` object to describe the shape and dtype of the inputs.
# This is the deep learning equivalent of *declaring a type*.
# The shape argument is per-sample; it does not include the batch size.
# The functional API focused on defining per-sample transformations.
# The model we create will automatically batch the per-sample transformations,
# so that it can be called on batches of data.
inputs = tf.keras.Input(shape=(16,))
# We call layers on these "type" objects
# and they return updated types (new shapes/dtypes).
x = Linear(32)(inputs) # We are reusing the Linear layer we defined earlier.
x = Dropout(0.5)(x) # We are reusing the Dropout layer we defined earlier.
outputs = Linear(10)(x)
# A functional `Model` can be defined by specifying inputs and outputs.
# A model is itself a layer like any other.
model = tf.keras.Model(inputs, outputs)
# A functional model already has weights, before being called on any data.
# That's because we defined its input shape in advance (in `Input`).
assert len(model.weights) == 4
# Let's call our model on some data.
y = model(tf.ones((2, 16)))
assert y.shape == (2, 10)
###Output
_____no_output_____
###Markdown
The Functional API tends to be more concise than subclassing, and provides a few other advantages (generally the same advantages that functional, typed languages provide over untyped OO development). However, it can only be used to define DAGs of layers -- recursive networks should be defined as `Layer` subclasses instead.Key differences between models defined via subclassing and Functional models are explained in [this blog post](https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021).Learn more about the Functional API [here](https://www.tensorflow.org/alpha/guide/keras/functional).In your research workflows, you may often find yourself mix-and-matching OO models and Functional models. For models that are simple stacks of layers with a single input and a single output, you can also use the `Sequential` class which turns a list of layers into a `Model`:
###Code
from tensorflow.keras import Sequential
model = Sequential([Linear(32), Dropout(0.5), Linear(10)])
y = model(tf.ones((2, 16)))
assert y.shape == (2, 10)
###Output
_____no_output_____
###Markdown
Loss classesKeras features a wide range of built-in loss classes, like `BinaryCrossentropy`, `CategoricalCrossentropy`, `KLDivergence`, etc. They work like this:
###Code
bce = tf.keras.losses.BinaryCrossentropy()
y_true = [0., 0., 1., 1.] # Targets
y_pred = [1., 1., 1., 0.] # Predictions
loss = bce(y_true, y_pred)
print('Loss:', loss.numpy())
###Output
Loss: 11.522857
###Markdown
Note that loss classes are stateless: the output of `__call__` is only a function of the input. Metric classesKeras also features a wide range of built-in metric classes, such as `BinaryAccuracy`, `AUC`, `FalsePositives`, etc.Unlike losses, metrics are stateful. You update their state using the `update_state` method, and you query the scalar metric result using `result`:
###Code
m = tf.keras.metrics.AUC()
m.update_state([0, 1, 1, 1], [0, 1, 0, 0])
print('Intermediate result:', m.result().numpy())
m.update_state([1, 1, 1, 1], [0, 1, 1, 0])
print('Final result:', m.result().numpy())
###Output
Intermediate result: 0.6666667
Final result: 0.71428573
###Markdown
The internal state can be cleared with `metric.reset_states`. You can easily roll your own metrics by subclassing the `Metric` class:- Create the state variables in `__init__`- Update the variables given `y_true` and `y_pred` in `update_state`- Return the metric result in `result`- Clear the state in `reset_states`Here's a quick implementation of a `BinaryTruePositives` metric as a demonstration:
###Code
class BinaryTruePositives(tf.keras.metrics.Metric):
def __init__(self, name='binary_true_positives', **kwargs):
super(BinaryTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name='tp', initializer='zeros')
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = tf.cast(y_true, tf.bool)
y_pred = tf.cast(y_pred, tf.bool)
values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True))
values = tf.cast(values, self.dtype)
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, self.dtype)
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
    self.true_positives.assign(0.)
m = BinaryTruePositives()
m.update_state([0, 1, 1, 1], [0, 1, 0, 0])
print('Intermediate result:', m.result().numpy())
m.update_state([1, 1, 1, 1], [0, 1, 1, 0])
print('Final result:', m.result().numpy())
###Output
Intermediate result: 1.0
Final result: 3.0
###Markdown
Optimizer classes & a quick end-to-end training loopYou don't normally have to define by hand how to update your variables during gradient descent, like we did in our initial linear regression example. You would usually use one of the built-in Keras optimizers, like `SGD`, `RMSprop`, or `Adam`.Here's a simple MNIST example that brings together loss classes, metric classes, and optimizers.
###Code
from tensorflow.keras import layers
# Prepare a dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[:].reshape(60000, 784).astype('float32') / 255
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(buffer_size=1024).batch(64)
# Instantiate a simple classification model
model = tf.keras.Sequential([
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(10)
])
# Instantiate a logistic loss function that expects integer targets.
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Instantiate an accuracy metric.
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.Adam()
# Iterate over the batches of the dataset.
for step, (x, y) in enumerate(dataset):
# Open a GradientTape.
with tf.GradientTape() as tape:
# Forward pass.
logits = model(x)
# Loss value for this batch.
loss_value = loss(y, logits)
# Get gradients of loss wrt the weights.
gradients = tape.gradient(loss_value, model.trainable_weights)
# Update the weights of our linear layer.
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
# Update the running accuracy.
accuracy.update_state(y, logits)
# Logging.
if step % 100 == 0:
print('Step:', step)
print('Loss from last step: %.3f' % loss_value)
print('Total running accuracy so far: %.3f' % accuracy.result())
###Output
Step: 0
Loss from last step: 2.330
Total running accuracy so far: 0.047
Step: 100
Loss from last step: 0.183
Total running accuracy so far: 0.828
Step: 200
Loss from last step: 0.228
Total running accuracy so far: 0.873
Step: 300
Loss from last step: 0.175
Total running accuracy so far: 0.893
Step: 400
Loss from last step: 0.164
Total running accuracy so far: 0.905
Step: 500
Loss from last step: 0.234
Total running accuracy so far: 0.914
Step: 600
Loss from last step: 0.231
Total running accuracy so far: 0.921
Step: 700
Loss from last step: 0.149
Total running accuracy so far: 0.926
Step: 800
Loss from last step: 0.268
Total running accuracy so far: 0.930
Step: 900
Loss from last step: 0.061
Total running accuracy so far: 0.933
###Markdown
We can reuse our `SparseCategoricalAccuracy` metric instance to implement a testing loop:
###Code
x_test = x_test[:].reshape(10000, 784).astype('float32') / 255
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(128)
accuracy.reset_states() # This clears the internal state of the metric
for step, (x, y) in enumerate(test_dataset):
logits = model(x)
accuracy.update_state(y, logits)
print('Final test accuracy: %.3f' % accuracy.result())
###Output
Final test accuracy: 0.963
###Markdown
The `add_loss` methodSometimes you need to compute loss values on the fly during a forward pass (especially regularization losses). Keras allows you to compute loss values at any time, and to recursively keep track of them via the `add_loss` method.Here's an example of a layer that adds a sparsity regularization loss based on the L2 norm of the inputs:
###Code
from tensorflow.keras.layers import Layer
class ActivityRegularization(Layer):
"""Layer that creates an activity sparsity regularization loss."""
def __init__(self, rate=1e-2):
super(ActivityRegularization, self).__init__()
self.rate = rate
def call(self, inputs):
# We use `add_loss` to create a regularization loss
# that depends on the inputs.
self.add_loss(self.rate * tf.reduce_sum(tf.square(inputs)))
return inputs
###Output
_____no_output_____
###Markdown
Loss values added via `add_loss` can be retrieved in the `.losses` list property of any `Layer` or `Model`:
###Code
from tensorflow.keras import layers
class SparseMLP(Layer):
"""Stack of Linear layers with a sparsity regularization loss."""
def __init__(self, output_dim):
super(SparseMLP, self).__init__()
self.dense_1 = layers.Dense(32, activation=tf.nn.relu)
self.regularization = ActivityRegularization(1e-2)
self.dense_2 = layers.Dense(output_dim)
def call(self, inputs):
x = self.dense_1(inputs)
x = self.regularization(x)
return self.dense_2(x)
mlp = SparseMLP(1)
y = mlp(tf.ones((10, 10)))
print(mlp.losses) # List containing one float32 scalar
###Output
[<tf.Tensor: id=201583, shape=(), dtype=float32, numpy=1.0274899>]
###Markdown
These losses are cleared by the top-level layer at the start of each forward pass -- they don't accumulate. So `layer.losses` always contains only the losses created during the last forward pass. You would typically use these losses by summing them before computing your gradients when writing a training loop.
###Code
# Losses correspond to the *last* forward pass.
mlp = SparseMLP(1)
mlp(tf.ones((10, 10)))
assert len(mlp.losses) == 1
mlp(tf.ones((10, 10)))
assert len(mlp.losses) == 1 # No accumulation.
# Let's demonstrate how to use these losses in a training loop.
# Prepare a dataset.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(x_train.reshape(60000, 784).astype('float32') / 255, y_train))
dataset = dataset.shuffle(buffer_size=1024).batch(64)
# A new MLP.
mlp = SparseMLP(10)
# Loss and optimizer.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
for step, (x, y) in enumerate(dataset):
with tf.GradientTape() as tape:
# Forward pass.
logits = mlp(x)
# External loss value for this batch.
loss = loss_fn(y, logits)
# Add the losses created during the forward pass.
loss += sum(mlp.losses)
# Get gradients of loss wrt the weights.
gradients = tape.gradient(loss, mlp.trainable_weights)
# Update the weights of our linear layer.
optimizer.apply_gradients(zip(gradients, mlp.trainable_weights))
# Logging.
if step % 100 == 0:
print('Loss at step %d: %.3f' % (step, loss))
###Output
Loss at step 0: 4.389
Loss at step 100: 2.301
Loss at step 200: 2.278
Loss at step 300: 2.210
Loss at step 400: 2.157
Loss at step 500: 2.041
Loss at step 600: 1.945
Loss at step 700: 1.932
Loss at step 800: 1.818
Loss at step 900: 2.024
###Markdown
A detailed end-to-end example: a Variational AutoEncoder (VAE)If you want to take a break from the basics and look at a slightly more advanced example, check out this [Variational AutoEncoder](https://www.tensorflow.org/guide/keras/custom_layers_and_models#putting_it_all_together_an_end-to-end_example) implementation that demonstrates everything you've learned so far:- Subclassing `Layer`- Recursive layer composition- Loss classes and metric classes- `add_loss`- `GradientTape` Using built-in training loops It would be a bit silly if you had to write your own low-level training loops every time for simple use cases. Keras provides you with a built-in training loop on the `Model` class. If you want to use it, either subclass from the `Model` class, or create a `Functional` or `Sequential` model.To demonstrate it, let's reuse the MNIST setup from above:
###Code
# Prepare a dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.shuffle(buffer_size=1024).batch(64)
# Instantiate a simple classification model
model = tf.keras.Sequential([
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(10)
])
# Instantiate a logistic loss function that expects integer targets.
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Instantiate an accuracy metric.
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.Adam()
###Output
_____no_output_____
###Markdown
First, call `compile` to configure the optimizer, loss, and metrics to monitor.
###Code
model.compile(optimizer=optimizer, loss=loss, metrics=[accuracy])
###Output
_____no_output_____
###Markdown
Then we call `fit` on our model to pass it the data:
###Code
model.fit(dataset, epochs=3)
###Output
Epoch 1/3
938/938 [==============================] - 9s 10ms/step - loss: 0.2160 - sparse_categorical_accuracy: 0.9370
Epoch 2/3
938/938 [==============================] - 6s 6ms/step - loss: 0.0831 - sparse_categorical_accuracy: 0.9745
Epoch 3/3
938/938 [==============================] - 6s 6ms/step - loss: 0.0571 - sparse_categorical_accuracy: 0.9817
###Markdown
Done!**Note:** When you use `fit`, by default execution uses static graphs, so you don't need to add any `tf.function` decorators to your model or your layers.Now let's test it:
###Code
x_test = x_test[:].reshape(10000, 784).astype('float32') / 255
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(128)
loss, acc = model.evaluate(test_dataset)
print('loss: %.3f - acc: %.3f' % (loss, acc))
###Output
79/79 [==============================] - 0s 5ms/step - loss: 0.0853 - sparse_categorical_accuracy: 0.9756
loss: 0.085 - acc: 0.976
###Markdown
Note that you can also monitor your loss and metrics on some validation data during `fit`.Also, you can call `fit` directly on Numpy arrays, so no need for the dataset conversion:
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
num_val_samples = 10000
x_val = x_train[-num_val_samples:]
y_val = y_train[-num_val_samples:]
x_train = x_train[:-num_val_samples]
y_train = y_train[:-num_val_samples]
# Instantiate a simple classification model
model = tf.keras.Sequential([
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(10)
])
# Instantiate a logistic loss function that expects integer targets.
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Instantiate an accuracy metric.
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer,
loss=loss,
metrics=[accuracy])
model.fit(x_train, y_train,
validation_data=(x_val, y_val),
epochs=3,
batch_size=64)
###Output
Train on 50000 samples, validate on 10000 samples
Epoch 1/3
50000/50000 [==============================] - 5s 104us/sample - loss: 0.2447 - sparse_categorical_accuracy: 0.9283 - val_loss: 0.1155 - val_sparse_categorical_accuracy: 0.9663
Epoch 2/3
50000/50000 [==============================] - 5s 94us/sample - loss: 0.0947 - sparse_categorical_accuracy: 0.9713 - val_loss: 0.1011 - val_sparse_categorical_accuracy: 0.9703
Epoch 3/3
50000/50000 [==============================] - 5s 91us/sample - loss: 0.0620 - sparse_categorical_accuracy: 0.9803 - val_loss: 0.0803 - val_sparse_categorical_accuracy: 0.9773
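The models above were built with `Sequential`; as noted earlier, a subclassed `Model` plugs into the exact same `compile`/`fit`/`evaluate` workflow. A minimal sketch (the class below is illustrative and not part of the original notebook):

```python
from tensorflow.keras import layers

class MLPClassifier(tf.keras.Model):
    """Same stack of Dense layers as the Sequential model above."""

    def __init__(self):
        super(MLPClassifier, self).__init__()
        self.dense_1 = layers.Dense(256, activation=tf.nn.relu)
        self.dense_2 = layers.Dense(256, activation=tf.nn.relu)
        self.out = layers.Dense(10)

    def call(self, inputs):
        x = self.dense_1(inputs)
        x = self.dense_2(x)
        return self.out(x)

# Usage mirrors the Sequential workflow:
# model = MLPClassifier()
# model.compile(optimizer=tf.keras.optimizers.Adam(),
#               loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
#               metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=3, batch_size=64)
```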
###Markdown
CallbacksOne of the neat features of `fit` (besides built-in support for sample weighting and class weighting) is that you can easily customize what happens during training and evaluation by using [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/).A callback is an object that is called at different points during training (e.g. at the end of every batch or at the end of every epoch) and takes actions, such as saving a model, mutating variables on the model, loading a checkpoint, stopping training, etc.There are a number of built-in callbacks available, like `ModelCheckpoint` to save your models after each epoch during training, or `EarlyStopping`, which interrupts training when your validation metrics start stalling.And you can easily [write your own callbacks](https://www.tensorflow.org/guide/keras/custom_callback).
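As a rough sketch of what a custom callback can look like (the class below is illustrative only; it relies on the standard `Callback` hooks such as `on_epoch_end`):

```python
class LossHistory(tf.keras.callbacks.Callback):
    """Records the training loss reported at the end of every epoch."""

    def on_train_begin(self, logs=None):
        self.per_epoch_losses = []

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.per_epoch_losses.append(logs.get('loss'))
```

An instance of it could simply be appended to the `callbacks` list passed to `fit` below.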
###Code
# Instantiate a simple classification model
model = tf.keras.Sequential([
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(256, activation=tf.nn.relu),
layers.Dense(10)
])
# Instantiate a logistic loss function that expects integer targets.
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Instantiate an accuracy metric.
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer,
loss=loss,
metrics=[accuracy])
# Instantiate some callbacks
callbacks = [tf.keras.callbacks.EarlyStopping(),
tf.keras.callbacks.ModelCheckpoint(filepath='my_model.keras',
save_best_only=True)]
model.fit(x_train, y_train,
validation_data=(x_val, y_val),
epochs=30,
batch_size=64,
callbacks=callbacks)
###Output
Train on 50000 samples, validate on 10000 samples
Epoch 1/30
50000/50000 [==============================] - 6s 113us/sample - loss: 0.2405 - sparse_categorical_accuracy: 0.9280 - val_loss: 0.1103 - val_sparse_categorical_accuracy: 0.9669
Epoch 2/30
50000/50000 [==============================] - 5s 91us/sample - loss: 0.0916 - sparse_categorical_accuracy: 0.9716 - val_loss: 0.0835 - val_sparse_categorical_accuracy: 0.9745
Epoch 3/30
50000/50000 [==============================] - 5s 107us/sample - loss: 0.0617 - sparse_categorical_accuracy: 0.9804 - val_loss: 0.0896 - val_sparse_categorical_accuracy: 0.9738
|
neural_network/old_jupyter_notebooks/jupyterNeuralNetworkWignerDistributions.ipynb | ###Markdown
Deep Learning study on the results of the 1D Pseudo-Wigner Distribution using Neural Networks**Why?**Check whether the Wigner distribution of a hologram provides enough information to predict how many point sources generated the hologram (1 to 5 sources).**How?**Using a convolutional neural network (CNN) to solve this classification problem.**What?**Using the Keras library (Python).**Some examples:*** https://towardsdatascience.com/building-a-convolutional-neural-network-cnn-in-keras-329fbbadc5f5 Load dataset
###Code
%%time
path = 'output/wigner_distribution/'
file_name = 'wd_results.npy'
dataset = np.load(path + file_name)
print(dataset.shape)
print('Total number of holograms: ' + str(dataset.shape[0]))
print('Number of holograms per class: ' + str(int(dataset.shape[0]/ 5)))
###Output
(125, 8, 200, 200)
Total number of holograms: 125
Number of holograms per class: 25
Wall time: 169 ms
###Markdown
CNN (Convolutional Neural Networks) Data pre-processing
###Code
def compute_targets_array(nb_class, X_train):
"""
    Compute an array with the targets of the dataset. Note that the number in the array corresponds to the number of
    point sources minus one, e.g. Y_array = 1 means the number of point sources is 2.
"""
# Number of the examples
nb_holograms = X_train.shape[0]
# Number of examples per class
nb_holograms_class = int(nb_holograms / nb_class)
# Y vector
Y_array = np.zeros((nb_holograms,))
counter = 1
target = 0
for i in range(nb_holograms):
if counter == (nb_holograms_class + 1):
target = target + 1
counter = 1
Y_array[i,] = target
counter = counter + 1
return Y_array
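# Equivalent, loop-free construction (shown for reference; it assumes the
# dataset is split evenly across classes, as it is here):
# Y_array = np.repeat(np.arange(nb_class), nb_holograms_class).astype(float)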
# Select one of the 8 frequencies ! BUG !!!!!!!!!!!!
X_train = dataset[:,0,:,:]
# The 1 signifies that the images are greyscale
X_train = X_train.reshape(X_train.shape[0], 200, 200,1)
print(X_train.shape)
# Compute array of targets
nb_class = 5
Y_array = compute_targets_array(nb_class, X_train)
print(Y_array.shape)
print(Y_array)
# One-hot encode target column
Y_train = to_categorical(Y_array)
print(Y_train.shape)
###Output
(125,)
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.
2. 2. 2. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3.
3. 3. 3. 3. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4. 4.
4. 4. 4. 4. 4.]
###Markdown
Building the model
###Code
# Create model
model = Sequential() # allows build a model layer by layer
# Add model layers
# Conv2D layer:
# 64 nodes, 3x3 filter matrix, Rectified Linear Activation as activation function,
# shape of each input (200, 200, 1,) with 1 signifying images are greyscale
model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=(200,200,1)))
# 32 nodes
model.add(Conv2D(32, kernel_size=3, activation='relu'))
# Flatten layer: connection between the convolution and dense layers
model.add(Flatten())
# Dense layer: used for the output layer
# 5 nodes for the output layer, one for each possible outcome (1-5)
# 'softmax' as activation function; it makes the outputs sum up to 1 so they
# can be interpreted as probabilities
model.add(Dense(5, activation='softmax'))
###Output
_____no_output_____
###Markdown
Compiling the model
###Code
# Three parameters:
# optimizer: 'adam'
# loss function: 'categorical_crossentropy', the most common choice for classification
# metrics: 'accuracy', to see the accuracy score
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Training the model
###Code
%%time
# Number of epochs: number of times the model will cycle through the data
model.fit(X_train, Y_train, validation_data=(X_train, Y_train), epochs=30)
###Output
Train on 125 samples, validate on 125 samples
Epoch 1/30
125/125 [==============================] - 24s 189ms/step - loss: 1969604.1170 - accuracy: 0.2160 - val_loss: 1934728.2350 - val_accuracy: 0.2080
Epoch 2/30
125/125 [==============================] - 23s 183ms/step - loss: 1007733.9370 - accuracy: 0.1920 - val_loss: 162199.3170 - val_accuracy: 0.2000
Epoch 3/30
125/125 [==============================] - 24s 193ms/step - loss: 62347.1701 - accuracy: 0.2640 - val_loss: 1971.5147 - val_accuracy: 0.6880
Epoch 4/30
125/125 [==============================] - 23s 188ms/step - loss: 3407.0259 - accuracy: 0.6880 - val_loss: 299.7151 - val_accuracy: 0.8080
Epoch 5/30
125/125 [==============================] - 23s 188ms/step - loss: 153.1435 - accuracy: 0.8320 - val_loss: 1.8502 - val_accuracy: 0.9280
Epoch 6/30
125/125 [==============================] - 24s 191ms/step - loss: 0.8037 - accuracy: 0.9440 - val_loss: 0.4717 - val_accuracy: 0.9440
Epoch 7/30
125/125 [==============================] - 23s 186ms/step - loss: 0.3462 - accuracy: 0.9520 - val_loss: 0.2682 - val_accuracy: 0.9440
Epoch 8/30
125/125 [==============================] - 23s 187ms/step - loss: 0.2470 - accuracy: 0.9440 - val_loss: 0.2148 - val_accuracy: 0.9520
Epoch 9/30
125/125 [==============================] - 24s 188ms/step - loss: 0.2090 - accuracy: 0.9440 - val_loss: 0.2364 - val_accuracy: 0.9440
Epoch 10/30
125/125 [==============================] - 23s 185ms/step - loss: 0.2712 - accuracy: 0.9280 - val_loss: 0.2194 - val_accuracy: 0.9440
Epoch 11/30
125/125 [==============================] - 23s 188ms/step - loss: 0.2246 - accuracy: 0.9360 - val_loss: 0.2305 - val_accuracy: 0.9360
Epoch 12/30
125/125 [==============================] - 24s 195ms/step - loss: 0.2343 - accuracy: 0.9360 - val_loss: 0.2324 - val_accuracy: 0.9360
Epoch 13/30
125/125 [==============================] - 24s 195ms/step - loss: 0.2335 - accuracy: 0.9360 - val_loss: 0.2276 - val_accuracy: 0.9360
Epoch 14/30
125/125 [==============================] - 24s 191ms/step - loss: 0.2265 - accuracy: 0.9360 - val_loss: 0.2226 - val_accuracy: 0.9360
Epoch 15/30
125/125 [==============================] - 23s 184ms/step - loss: 0.2200 - accuracy: 0.9360 - val_loss: 0.2176 - val_accuracy: 0.9440
Epoch 16/30
125/125 [==============================] - 23s 187ms/step - loss: 0.2156 - accuracy: 0.9440 - val_loss: 0.2132 - val_accuracy: 0.9440
Epoch 17/30
125/125 [==============================] - 24s 192ms/step - loss: 0.2125 - accuracy: 0.9440 - val_loss: 0.2087 - val_accuracy: 0.9440
Epoch 18/30
125/125 [==============================] - 23s 188ms/step - loss: 0.2070 - accuracy: 0.9440 - val_loss: 0.2047 - val_accuracy: 0.9440
Epoch 19/30
125/125 [==============================] - 23s 187ms/step - loss: 0.2035 - accuracy: 0.9440 - val_loss: 0.2007 - val_accuracy: 0.9440
Epoch 20/30
125/125 [==============================] - 23s 185ms/step - loss: 0.1993 - accuracy: 0.9440 - val_loss: 0.1970 - val_accuracy: 0.9440
Epoch 21/30
125/125 [==============================] - 25s 197ms/step - loss: 0.1956 - accuracy: 0.9440 - val_loss: 0.1932 - val_accuracy: 0.9440
Epoch 22/30
125/125 [==============================] - 23s 183ms/step - loss: 0.1923 - accuracy: 0.9440 - val_loss: 0.1893 - val_accuracy: 0.9440
Epoch 23/30
125/125 [==============================] - 23s 185ms/step - loss: 0.1880 - accuracy: 0.9440 - val_loss: 0.1857 - val_accuracy: 0.9440
Epoch 24/30
125/125 [==============================] - 23s 186ms/step - loss: 0.1839 - accuracy: 0.9440 - val_loss: 0.1823 - val_accuracy: 0.9440
Epoch 25/30
125/125 [==============================] - 23s 184ms/step - loss: 0.1809 - accuracy: 0.9440 - val_loss: 0.1786 - val_accuracy: 0.9440
Epoch 26/30
125/125 [==============================] - 23s 185ms/step - loss: 0.1774 - accuracy: 0.9520 - val_loss: 0.1750 - val_accuracy: 0.9520
Epoch 27/30
125/125 [==============================] - 24s 188ms/step - loss: 0.1742 - accuracy: 0.9520 - val_loss: 0.1713 - val_accuracy: 0.9520
Epoch 28/30
125/125 [==============================] - 23s 185ms/step - loss: 0.1702 - accuracy: 0.9520 - val_loss: 0.1679 - val_accuracy: 0.9520
Epoch 29/30
125/125 [==============================] - 24s 190ms/step - loss: 0.1670 - accuracy: 0.9520 - val_loss: 0.1645 - val_accuracy: 0.9520
Epoch 30/30
125/125 [==============================] - 23s 184ms/step - loss: 0.1639 - accuracy: 0.9520 - val_loss: 0.1610 - val_accuracy: 0.9600
Wall time: 11min 45s
###Markdown
Evalutation
###Code
# Evaluate the keras model
_, accuracy = model.evaluate(X_train, Y_train, verbose=0)
print('Accuracy: %.2f%%' % (accuracy*100))
###Output
Accuracy: 96.00%
###Markdown
Make predictions
###Code
# Make probability predictions with the model
predictions = model.predict(X_train)
# Round predictions
rounded = [round(x[0]) for x in predictions]
# Make class predictions with the model
predictions = model.predict_classes(X_train)
# Summarize the first 5 cases
for i in range(5):
print('Predicted: %d (expected: %d)' % (predictions[i], Y_array[i]))
###Output
Predicted: 0 (expected: 0)
Predicted: 0 (expected: 0)
Predicted: 0 (expected: 0)
Predicted: 3 (expected: 0)
Predicted: 0 (expected: 0)
###Markdown
Save weights and model
###Code
%%time
# Serialize model to JSON
model_json = model.to_json()
with open("output/neural_networks/model.json", "w") as json_file:
json_file.write(model_json)
# Serialize weights to HDF5
model.save_weights("output/neural_networks/model.h5")
print("Saved model structure and weights")
###Output
Saved model structure and weights
Wall time: 241 ms
###Markdown
Load model
###Code
# The model weights and architecture were saved separately, so the model must be re-compiled
# Load json and create model
json_file = open('output/neural_networks/model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# Load weights into new model
loaded_model.load_weights("output/neural_networks/model.h5")
print("Loaded model from disk")
# Evaluate loaded model on test data
loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
score = loaded_model.evaluate(X_train, Y_train, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
###Output
Loaded model from disk
accuracy: 96.00%
###Markdown
Summary
###Code
# Summarize model.
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 198, 198, 64) 640
_________________________________________________________________
conv2d_2 (Conv2D) (None, 196, 196, 32) 18464
_________________________________________________________________
flatten_1 (Flatten) (None, 1229312) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 6146565
=================================================================
Total params: 6,165,669
Trainable params: 6,165,669
Non-trainable params: 0
_________________________________________________________________
###Markdown
Plot model
###Code
# Error, BUG, MUST FIX
# plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
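# Note: `plot_model` generally needs the optional `pydot` package plus a
# system Graphviz install. If the error above was a missing-dependency error
# (an assumption -- the traceback was not kept in this notebook), installing
# those and re-enabling the call above is the usual fix.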
###Output
_____no_output_____ |
Standard_calibration.ipynb | ###Markdown
0. Importing Necessary Packages
###Code
# Printing the information of Python, IPython, OS, and the generation date.
%load_ext version_information
%version_information
# Printing the versions of packages
from importlib_metadata import version
for pkg in ['numpy', 'scipy', 'matplotlib', 'astropy', 'pandas', 'statsmodels']:
print(pkg+": ver "+version(pkg))
# matplotlib backend
%matplotlib notebook
# importing necessary modules
import numpy as np
import glob, os
import pandas as pd
from sklearn import linear_model
import statsmodels.api as sm
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
1. Reading the Data
###Code
# Observation data: r-band magnitude, r-band magnitude error, and airmass
obs = pd.read_csv("Calibration/Observation.csv")
obs.head(10)
# Standard star data: V-band magnitude, and color indices of B-V, U-B, V-R, R-I, and V-I
lan = pd.read_csv("Calibration/Landolt_catalog.csv")
lan.head(10)
# Merging all the data in one data frame (for convenience)
df = pd.merge(lan, obs, how="left", on="Star")
df['R'] = -(df['V-R']-df['V'])
df.head(10)
# Defining functions (for convenience)
# Plot - Observed values vs. Fitted values
def plot_comparison(input_data, fitted_data):
arr0 = np.linspace(-5.0, 0.0, 1000)
min_limit = np.minimum(input_data.min(), fitted_data.min()) - 0.2
max_limit = np.maximum(input_data.max(), fitted_data.max()) + 0.2
fig, ax = plt.subplots(figsize=(5,5))
ax.plot(arr0, arr0, 'r--', linewidth=1.5, alpha=0.6)
ax.plot(input_data, fitted_data, 'o', color='blue', ms=4.0)
ax.tick_params(axis='both', labelsize=12.0)
ax.set_xlabel(r"Observed $r-R$", fontsize=12.0)
ax.set_ylabel(r"Fitted $r-R$", fontsize=12.0)
ax.set_xlim([min_limit, max_limit])
ax.set_ylim([min_limit, max_limit])
plt.tight_layout()
# Plot - Observed values vs. Residuals
def plot_residuals(input_data, residuals):
arr0 = np.linspace(-5.0, 0.0, 1000)
min_limit = input_data.min() - 0.2
max_limit = input_data.max() + 0.2
RMSE = np.sqrt(np.sum(residuals**2) / len(input_data))
fig, ax = plt.subplots(figsize=(5,5))
ax.plot(arr0, np.zeros_like(arr0), 'r--', linewidth=1.5, alpha=0.6)
ax.plot(input_data, residuals, 'o', color='blue', ms=4.0)
ax.tick_params(axis='both', labelsize=12.0)
ax.set_xlabel(r"Observed $r-R$", fontsize=12.0)
ax.set_ylabel("Residuals", fontsize=12.0)
ax.set_xlim([min_limit, max_limit])
ax.set_ylim([-1.5, 1.5])
ax.text(0.05, 0.95, f"RMS Error = {RMSE:.2f}", fontsize=13.0, fontweight='bold',
transform=ax.transAxes, ha='left', va='top')
plt.tight_layout()
# Printing the summary of model
def summary_model(x, y, e_y):
Xm = sm.add_constant(x)
model = sm.WLS(y.astype('float'), Xm.astype('float'), weights=1/e_y**2).fit()
print_model = model.summary()
print(print_model)
###Output
_____no_output_____
###Markdown
2. Linear Regression 1) Multiple linear regression with all the data $\large r-R = Zero(R) + k(R) \times airmass + c(R) \times (V-R)$ **We have to guess the three parameters: $Zero(R)$, $k(R)$, and $c(R)$.*** $Zero(R)$: Zeropoint (different from 25.0!)* $k(R)$: Extinction coefficient* $c(R)$: Color coefficient
###Code
# Setting X and Y for multiple linear regression
X = df[['airmass', 'V-R']]
Y = df['r_obs'] - df['R']
e_Y = df['e_r_obs']
# Running the multiple linear regression
regr = linear_model.LinearRegression()
regr.fit(X, Y) # Without considering magnitude error
regr.fit(X, Y, 1/e_Y**2.) # With considering magnitude error
print(f"Zeropoint: Zero(R) = {regr.intercept_:.3f}")
print("\nCoeffients")
print(f"Extinction coefficient: k(R) = {regr.coef_[0]:.3f}")
print(f"Color coefficient: c(V-R) = {regr.coef_[1]:.3f}")
print("\n")
summary_model(X, Y, e_Y)
fitted_Y = regr.predict(X)
resi = Y - regr.predict(X)
###Output
Zeropoint: Zero(R) = -2.159
Coeffients
Extinction coefficient: k(R) = 0.215
Color coefficient: c(V-R) = -0.009
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.052
Model: WLS Adj. R-squared: -0.094
Method: Least Squares F-statistic: 0.3566
Date: Sat, 02 Apr 2022 Prob (F-statistic): 0.707
Time: 20:23:20 Log-Likelihood: 14.482
No. Observations: 16 AIC: -22.96
Df Residuals: 13 BIC: -20.65
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -2.1585 0.397 -5.435 0.000 -3.016 -1.301
airmass 0.2151 0.319 0.674 0.512 -0.474 0.904
V-R -0.0087 0.036 -0.245 0.810 -0.085 0.068
==============================================================================
Omnibus: 40.113 Durbin-Watson: 2.128
Prob(Omnibus): 0.000 Jarque-Bera (JB): 97.793
Skew: -3.381 Prob(JB): 5.81e-22
Kurtosis: 13.048 Cond. No. 50.8
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
**Fitting results*** $\large r-R = (-2.159 \pm 0.397) + (0.215 \pm 0.319) \times airmass + (-0.009 \pm 0.036) \times (V-R)$ **How reliable are these results?**
###Code
plot_comparison(Y, fitted_Y) # Comparison plot (observed Y vs. fitted Y)
plot_residuals(Y, resi)
# Printing residuals
resi
###Output
_____no_output_____
###Markdown
**We should remove the data of Star 6 (index 5) for better results!** 2) Multiple linear regression with clipped data
###Code
df2 = df.drop(index = 5) # Dropping the 5th index data (Star 6)
df2
# Setting X and Y for multiple linear regression
X = df2[['airmass', 'V-R']] # Multiple linear regression with clipped data
Y = df2['r_obs'] - df2['R']
e_Y = df2['e_r_obs']
# Running the multiple linear regression
regr = linear_model.LinearRegression()
regr.fit(X, Y, 1/e_Y**2.)
print(f"Zeropoint: Zero(R) = {regr.intercept_:.3f}")
print("\nCoeffients")
print(f"Extinction coefficient: k(R) = {regr.coef_[0]:.3f}")
print(f"Color term: c(V-R) = {regr.coef_[1]:.3f}")
print("\n")
summary_model(X, Y, e_Y)
fitted_Y = regr.predict(X)
resi = Y - regr.predict(X)
###Output
Zeropoint: Zero(R) = -2.141
Coeffients
Extinction coefficient: k(R) = 0.202
Color term: c(V-R) = -0.007
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.580
Model: WLS Adj. R-squared: 0.510
Method: Least Squares F-statistic: 8.275
Date: Sat, 02 Apr 2022 Prob (F-statistic): 0.00551
Time: 20:23:20 Log-Likelihood: 39.961
No. Observations: 15 AIC: -73.92
Df Residuals: 12 BIC: -71.80
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -2.1415 0.075 -28.653 0.000 -2.304 -1.979
airmass 0.2024 0.060 3.372 0.006 0.072 0.333
V-R -0.0065 0.007 -0.975 0.349 -0.021 0.008
==============================================================================
Omnibus: 1.806 Durbin-Watson: 1.634
Prob(Omnibus): 0.405 Jarque-Bera (JB): 0.917
Skew: 0.604 Prob(JB): 0.632
Kurtosis: 2.929 Cond. No. 50.7
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
**Fitting results*** $\large r-R = (-2.141 \pm 0.075) + (0.202 \pm 0.060) \times airmass + (-0.007 \pm 0.007) \times (V-R)$
###Code
plot_comparison(Y, fitted_Y)
plot_residuals(Y, resi)
###Output
_____no_output_____
###Markdown
3) Adding the second-order term **Now we are going to add the second-order term as below.**$\large r-R = Zero(R) + k(R) \times airmass + c(R) \times (V-R) + k_{2}(R) \times (V-R) \times airmass$ **We have to guess the four parameters: $Zero(R)$, $k(R)$, $c(R)$, and $k_{2}(R)$.**
###Code
# Setting X and Y for multiple linear regression
df2['(V-R)X'] = df2['airmass']*df2['V-R']
X = df2[['airmass', 'V-R', '(V-R)X']] # Multiple linear regression with the second-order term
Y = df2['r_obs'] - df2['R']
e_Y = df2['e_r_obs']
# Running the multiple linear regression
regr = linear_model.LinearRegression()
regr.fit(X, Y, 1/e_Y**2.)
print(f"Zeropoint: Zero(R) = {regr.intercept_:.3f}")
print("\nCoeffients")
print(f"Extinction coefficient: k(R) = {regr.coef_[0]:.3f}")
print(f"Color term: c(V-R) = {regr.coef_[1]:.3f}")
print(f"2nd-order term: k2(R) = {regr.coef_[2]:.3f}")
print("\n")
summary_model(X, Y, e_Y)
fitted_Y = regr.predict(X)
resi = Y - regr.predict(X)
###Output
Zeropoint: Zero(R) = -2.104
Coeffients
Extinction coefficient: k(R) = 0.172
Color term: c(V-R) = -0.574
2nd-order term: k2(R) = 0.472
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.697
Model: WLS Adj. R-squared: 0.615
Method: Least Squares F-statistic: 8.442
Date: Sat, 02 Apr 2022 Prob (F-statistic): 0.00341
Time: 20:23:21 Log-Likelihood: 42.420
No. Observations: 15 AIC: -76.84
Df Residuals: 11 BIC: -74.01
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -2.1045 0.069 -30.663 0.000 -2.256 -1.953
airmass 0.1717 0.055 3.107 0.010 0.050 0.293
V-R -0.5738 0.275 -2.089 0.061 -1.178 0.031
(V-R)X 0.4721 0.229 2.066 0.063 -0.031 0.975
==============================================================================
Omnibus: 0.362 Durbin-Watson: 1.425
Prob(Omnibus): 0.834 Jarque-Bera (JB): 0.491
Skew: -0.146 Prob(JB): 0.782
Kurtosis: 2.163 Cond. No. 234.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
**Fitting results*** $\large r-R = (-2.104 \pm 0.069) + (0.172 \pm 0.055) \times airmass + (-0.574 \pm 0.275) \times (V-R) + (0.472 \pm 0.229) \times (V-R) \times airmass$
###Code
plot_comparison(Y, fitted_Y)
plot_residuals(Y, resi)
###Output
_____no_output_____
###Markdown
* We obtained the three linear regression (LR) models in this step. Theoretically, the LR model with the second-order term seems to be more reasonable than other models. However, **the second-order LR model is generally not used unless the observational data have very large sample and very precise measurement of magnitudes.** For the data of only 15 stars in this example, we had better select **the LR model without the second-order term**.* $\large r-R = (-2.141 \pm 0.075) + (0.202 \pm 0.060) \times airmass + (-0.007 \pm 0.007) \times (V-R)$ 3. Estimation of Standard Magnitudes Applying the above model coefficients, we can estimate the standard magnitudes of an observed star.$\large r-R = (-2.141 \pm 0.075) + (0.202 \pm 0.060) \times airmass + (-0.007 \pm 0.007) \times (V-R)$ Assuming that you obtained the following model for V-band magniude,$\large v-V = (-2.504 \pm 0.085) + (0.237 \pm 0.153) \times airmass + (-0.010 \pm 0.005) \times (V-R)$ **Q) When a star is observed $v=10.5\pm0.2~{\rm mag}$ and $r=10.0\pm0.1~{\rm mag}$ with airmass of $X=1.15~(v),~1.10~(r)$, what are the standard $V$ and $R$ magnitudes?** In this case, we have to solve the above model equations for $V$ and $R$ with the obtained model coefficients and airmass.
###Code
from scipy.optimize import fsolve
###Output
_____no_output_____
###Markdown
We can numerically solve the following equations using ``scipy.optimize.fsolve`` for $V$ and $R$.* $\large f_{1}(V,~R)=Zero(V) + k(V)\times airmass + c(V)\times(V-R)-(v-V) = 0$* $\large f_{2}(V,~R)=Zero(R) + k(R)\times airmass + c(R)\times(V-R)-(r-R) = 0$
###Code
airmass = [1.15, 1.10] # airmass
v_obs, r_obs = 10.5, 10.0 # observed magnitude v and r
e_v_obs, e_r_obs = 0.2, 0.1 # observed magnitude error of v and r
Zero_V, e_Zero_V, Zero_R, e_Zero_R = -2.504, 0.085, -2.141, 0.075 # zeropoints
k_V, e_k_V, k_R, e_k_R = 0.237, 0.153, 0.202, 0.060 # extinction coefficients
c_V, e_c_V, c_R, e_c_R = -0.010, 0.005, -0.007, 0.007 # color coefficients
def equations(var):
V, R = var
f1 = Zero_V + k_V*airmass[0] + c_V*(V-R) - (v_obs-V)
f2 = Zero_R + k_R*airmass[1] + c_R*(V-R) - (r_obs-R)
return [f1, f2]
# initial guess (V0, R0) = (10.5, 10.0)
solution, infodict, ier, mesg = fsolve(equations, (9.5, 9.0), full_output=True)
print(f"V standard magnitude: {solution[0]:.3f}")
print(f"R standard magnitude: {solution[1]:.3f}")
VR_color = solution[0] - solution[1]
print(f"V-R standard color: {VR_color:.3f}")
###Output
V standard magnitude: 12.740
R standard magnitude: 11.925
V-R standard color: 0.815
###Markdown
Now we obtained the standard magnitudes of $V=12.740~{\rm mag}$ and $R=11.925~{\rm mag}$, then how can we compute their uncertainties? For example, the uncertainty of $V$ magnitude $(\sigma_{V})$ can be propagated by the known uncertainties of $\sigma_{Zero(V)}$, $\sigma_{k(V)}$, $\sigma_{c(V)}$, and $\sigma_{v}$.$\large \sigma_{V}^{2}=\left(\frac{\partial V}{\partial Zero(V)}\right)^{2}\times\sigma_{Zero(V)}^{2}+\left(\frac{\partial V}{\partial k(V)}\right)^{2}\times\sigma_{k(V)}^{2}+\left(\frac{\partial V}{\partial c(V)}\right)^{2}\times\sigma_{c(V)}^{2}+\left(\frac{\partial V}{\partial v}\right)^{2}\times\sigma_{v}^{2}$ Reference: [Uncertainty propagation](https://en.wikipedia.org/wiki/Propagation_of_uncertainty) and [Using autograd for error propagation](https://kitchingroup.cheme.cmu.edu/blog/category/uncertainty/) Taking derivatives of $f_{1}(V,~R)= 0$ (given above), we can get the derivative of each variable.$\large \frac{\partial V}{\partial Zero(V)}=-\frac{1}{c(V)+1},$$\large \frac{\partial V}{\partial k(V)}=-\frac{X}{c(V)+1},$$\large \frac{\partial V}{\partial c(V)}=-(V-R),$$\large \frac{\partial V}{\partial v}=\frac{1}{c(V)+1}$
###Code
VR_color = solution[0] - solution[1]
dVdZero = -1./(c_V+1)
dVdk = -airmass[0]/(c_V+1)
dVdc = -VR_color
dVdv = 1./(c_V+1)
V_err = np.sqrt((dVdZero * e_Zero_V)**2. + \
(dVdk * e_k_V)**2. + \
(dVdc * e_c_V)**2. + \
(dVdv * e_v_obs)**2.)
print(f"V standard magnitude: {solution[0]:.3f} +/- {V_err:.3f}")
###Output
V standard magnitude: 12.740 +/- 0.282
###Markdown
Similarly, we can also compute the uncertainty of the standard $R$ magnitude.
###Code
dRdZero = -1./(c_R+1)
dRdk = -airmass[1]/(c_R+1)
dRdc = -VR_color
dRdv = 1./(c_R+1)
R_err = np.sqrt((dRdZero * e_Zero_R)**2. + \
(dRdk * e_k_R)**2. + \
(dRdc * e_c_R)**2. + \
(dRdv * e_r_obs)**2.)
print(f"R standard magnitude: {solution[1]:.3f} +/- {R_err:.3f}")
###Output
R standard magnitude: 11.925 +/- 0.142
###Markdown
If the above method is too tricky, then you can simply do the Gaussian random resampling as below.
###Code
# Gaussian random resampling
np.random.seed(123)
niter = 10000 # Iterations
sol = []
for i in np.arange(niter):
v_obs2, r_obs2 = np.random.normal(v_obs, e_v_obs), np.random.normal(r_obs, e_r_obs)
Zero_V2, Zero_R2 = np.random.normal(Zero_V, e_Zero_V), np.random.normal(Zero_R, e_Zero_R)
k_V2, k_R2 = np.random.normal(k_V, e_k_V), np.random.normal(k_R, e_k_R)
c_V2, c_R2 = np.random.normal(c_V, e_c_V), np.random.normal(c_R, e_c_R)
def f(var):
V, R = var
f1 = Zero_V2 + k_V2*airmass[0] + c_V2*(V-R) - (v_obs2-V)
f2 = Zero_R2 + k_R2*airmass[1] + c_R2*(V-R) - (r_obs2-R)
return [f1, f2]
sol_i, _, _, _ = fsolve(f, (10.5, 10.0), full_output=True)
sol.append(sol_i)
V2, R2 = np.mean(np.array(sol), axis=0)
V2_err, R2_err = np.std(np.array(sol), axis=0)
print(f"V standard magnitude: {V2:.3f} +/- {V2_err:.3f}")
print(f"R standard magnitude: {R2:.3f} +/- {R2_err:.3f}")
###Output
V standard magnitude: 12.743 +/- 0.282
R standard magnitude: 11.924 +/- 0.140
|
C6_ssp_ps-houly-2070-2100-withQuestions.ipynb | ###Markdown
Search using ESGF API
###Code
#!/usr/bin/env python
# Code from Robinson
from __future__ import print_function
import requests
import xml.etree.ElementTree as ET
import numpy
# Author: Unknown
# I got the original version from a word document published by ESGF
# https://docs.google.com/document/d/1pxz1Kd3JHfFp8vR2JCVBfApbsHmbUQQstifhGNdc6U0/edit?usp=sharing
# API AT: https://github.com/ESGF/esgf.github.io/wiki/ESGF_Search_REST_API#results-pagination
def esgf_search(server="https://esgf-node.llnl.gov/esg-search/search",
files_type="OPENDAP", local_node=True, project="CMIP6",
verbose=False, format="application%2Fsolr%2Bjson",
use_csrf=False, **search):
client = requests.session()
payload = search
payload["project"] = project
payload["type"]= "File"
if local_node:
payload["distrib"] = "false"
if use_csrf:
client.get(server)
if 'csrftoken' in client.cookies:
# Django 1.6 and up
csrftoken = client.cookies['csrftoken']
else:
# older versions
csrftoken = client.cookies['csrf']
payload["csrfmiddlewaretoken"] = csrftoken
payload["format"] = format
offset = 0
numFound = 10000
all_files = []
files_type = files_type.upper()
while offset < numFound:
payload["offset"] = offset
url_keys = []
for k in payload:
url_keys += ["{}={}".format(k, payload[k])]
url = "{}/?{}".format(server, "&".join(url_keys))
print(url)
r = client.get(url)
r.raise_for_status()
resp = r.json()["response"]
numFound = int(resp["numFound"])
resp = resp["docs"]
offset += len(resp)
for d in resp:
if verbose:
for k in d:
print("{}: {}".format(k,d[k]))
url = d["url"]
for f in d["url"]:
sp = f.split("|")
if sp[-1] == files_type:
all_files.append(sp[0].split(".html")[0])
return sorted(all_files)
###Output
_____no_output_____
###Markdown
Load data with xarray
###Code
# Code from Robinson, modified by Yanlei
def CMIP_processing_plot(my_result, my_experiment_id, my_varname, my_model, my_source, index_initial, index_end):
    # there are multiple sources of the same data -- need to pick one
    index_ini = index_initial
    index_fin = index_end
files_to_open = my_result[index_ini:index_fin+1]
n_files= len(files_to_open)
print ('number of files to open:', n_files)
myvarvals=[]
myvarvals_values = []
time_appropriate = []
#define a boundary
ds=xr.open_dataset(files_to_open[0])
lat = ds.lat
lon = ds.lon
lat_range = lat[(lat>=-20)&(lat<=10)]
lon_range = lon[(lon>=270)&(lon<=330)]
for ifls in range(n_files):
ds=xr.open_dataset(files_to_open[ifls])
# fix CF non standard time issues
if ds.time.dtype == 'O' and int(ds.indexes['time'][-1].strftime("%Y")) < 2262:
datetimeindex = ds.indexes['time'].to_datetimeindex()
ds['time'] = datetimeindex
# filter some files have different time range
if ds['time.year'][0].values >= 2065 and ds['time.year'][0].values <= 2099:
ds_am= ds.sel(lon = lon_range, lat = lat_range)
myvarvals.append(ds_am)
# myvarvals_values.append(ds_am.values)
# time_appropriate.append(ifls)
# print(myvarvals)
# Yanlei
# concatenate all the data together by dimension "time.year"
ds_concat = xr.concat(myvarvals, dim="time")
ds = ds_concat.sel(time=slice('2070-01','2099-12'))
print(ds.time)
# #===================================================
# #plotting
# #====================================================
# DIRin='/Users/yanlei/Documents/PhD/4B/Deep convections in Amazon/future_CAPE_rho_p/2070_2100/ps/'
# filename=DIRin+my_experiment_id+'_'+my_varname+'_avg_2070_2100_'+my_model+'_'+my_source+'.nc'
# # save xarrays to netcdf files
# avgda_ds.to_netcdf(filename)
# # stdda.to_netcdf(filename)
# # create map
# fig, ax = map()
# title= my_experiment_id+'_'+my_varname+'_avg_2070_2100_'+my_model+'_'+my_source
# avgda_ds.ps.plot.contourf(ax=ax, cmap='Spectral_r', extend='both',
# transform=ccrs.PlateCarree())
# plt.title(title)
# fig.show()
###Output
_____no_output_____
###Markdown
Search all models that have 6hrLev ps data available
###Code
result_0 = esgf_search(activity_id='ScenarioMIP', table_id='6hrLev', variable_id='ps', experiment_id='ssp585',
latest=True)
#result
for ires in range(len(result_0)):
print(ires,':', result_0[ires])
###Output
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=0
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=10
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=20
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=30
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=40
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=50
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=60
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=70
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=80
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=90
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=100
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=110
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=120
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=130
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=140
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=150
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=160
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=170
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=180
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=190
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=200
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=210
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=220
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=230
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=240
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=250
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=260
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=270
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=280
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=290
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=300
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=310
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=320
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=330
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=340
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=350
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=360
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=370
https://esgf-node.llnl.gov/esg-search/search/?activity_id=ScenarioMIP&table_id=6hrLev&variable_id=ps&experiment_id=ssp585&latest=True&project=CMIP6&type=File&distrib=false&format=application%2Fsolr%2Bjson&offset=380
###Markdown
1. BCC, BCC-CSM2-MR
###Code
result_BCC = esgf_search(activity_id='ScenarioMIP', table_id='6hrLev', variable_id='ps', experiment_id='ssp585',
institution_id="BCC", source_id= "BCC-CSM2-MR",latest=True)
#result
for ires in range(len(result_BCC)):
print(ires,':', result_BCC[ires])
fig = CMIP_processing_plot(result_BCC, 'ssp585', 'ps', "BCC", "BCC-CSM2-MR", 10, len(result_BCC))
###Output
number of files to open: 8
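###Markdown
 As an aside (a sketch, not part of the original workflow): for models whose time axis decodes cleanly, the per-file loop inside `CMIP_processing_plot` could in principle be replaced by a single `xarray.open_mfdataset` call over the same OPeNDAP URLs. The region and period below simply mirror the hard-coded choices above.
###Code
# Hypothetical compact alternative to the manual open/concatenate loop.
# Note: this skips the non-standard-calendar fix applied inside the loop,
# so it only works for files whose times decode to a standard calendar.
import xarray as xr
files_to_open = result_BCC[10:]  # same slice of the search results as the call above
ds_all = xr.open_mfdataset(files_to_open, combine='by_coords')
ds_region = ds_all.sel(lat=slice(-20, 10), lon=slice(270, 330),
                       time=slice('2070-01', '2099-12'))
print(ds_region)
###Output
_____no_output_____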
|
note_books/.ipynb_checkpoints/Basic-checkpoint.ipynb | ###Markdown
Gradient descent: fitting a one-variable linear function with SGD
###Code
# Target function: y = 9*x (a_true=9, b_true=0); build a batch of training samples
import random
from tqdm.notebook import tqdm_notebook  # progress bar used in the training loops below
(a_true,b_true) = (9,0)
def y_true(x):
    return a_true*x+b_true
samples = [[i,y_true(i)] for i in range(100)]
samples[:2]
(a,b) = (0.5,0)  # initialize to 0.5 (or initialize a, b randomly)
n = 0.001  # learning rate
# Loss: mean squared error
# Update rule: param_new = param - learning_rate * d(loss)/d(param), evaluated at the current sample
print(f"[true]: y={a_true}x+{b_true}")
print(f"[initial]: y={a}x+{b}")
for _ in range(2):
    print(f"Epoch {_}")
    cnt = 0
    for (x,y_true) in tqdm_notebook(samples):
        y = a*x+b
        grad_a = (y-y_true)*x
        grad_b = (y-y_true)*1
        a = a - n*grad_a
        # b = b - n*grad_b  # b is kept frozen here since b_true = 0
        mse_list = [pow((a*x+b-y_true),2) for (x,y_true) in samples]
        new_mse = sum(mse_list)/len(mse_list)
        if cnt%5==0:
            print(f"x:{x}, y={a:.4f}x+{b:.4f}, new_mse:{new_mse}, grad_a:{grad_a},grad_b:{grad_b:.7f}")
        cnt += 1
        if new_mse<=0.001:
            print(f"y={a:.4f}x+{b:.4f}, new_mse:{new_mse:.2f}")
            assert False  # crude way to stop both loops once converged
# assert False
# mse_list = [pow((a*x+b-y_true),2) for (x,y_true) in samples]
# new_mse = sum(mse_list)/len(mse_list)
# print(f"y={a:.4f}x+{b:.4f}, new_mse:{new_mse}")
###Output
[true]: y=9x+0
[initial]: y=8.999875112523538x+0
Epoch 0
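###Markdown
 For reference (an added note), the per-sample gradients used in the update above follow from the squared-error loss for a single sample, $\ell(a,b) = \tfrac{1}{2}(ax+b-y^{\ast})^2$: $\frac{\partial \ell}{\partial a} = (ax+b-y^{\ast})\,x, \qquad \frac{\partial \ell}{\partial b} = (ax+b-y^{\ast}).$ The monitored MSE omits the factor $\tfrac{1}{2}$; the resulting constant factor in the gradient is simply absorbed into the learning rate.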
###Markdown
SGD (mini-batch)
###Code
# Loss: mean squared error
# Update rule: param_new = param - learning_rate * (mini-batch average of d(loss)/d(param))
mean_list = lambda lst: sum(lst) / len(lst)  # helper: mean of a plain Python list
print(f"[true]: y={a_true}x+{b_true}")
print(f"[initial]: y={a}x+{b}")
batch_size = 5
print(f"[batch_size]: {batch_size}")
for _ in range(2):
    print(f"Epoch {_}")
    cnt = 0
    for i in range(0,len(samples),batch_size):
        batch_data = samples[i:i+batch_size]
        # average the per-sample gradients over the mini-batch
        grad_a = mean_list([(a*x+b-y_true)*x for (x,y_true) in batch_data])
        grad_b = mean_list([(a*x+b-y_true)*1 for (x,y_true) in batch_data])
        a = a - n*grad_a
        # b = b - n*grad_b  # b is kept frozen here since b_true = 0
        mse_list = [pow((a*x+b-y_true),2) for (x,y_true) in samples]
        new_mse = sum(mse_list)/len(mse_list)
        if cnt%5==0:
            print(f"batch {cnt}: y={a:.4f}x+{b:.4f}, new_mse:{new_mse}, grad_a:{grad_a:.4f}, grad_b:{grad_b:.4f}")
        cnt += 1
        if new_mse<=0.001:
            print(f"y={a:.4f}x+{b:.4f}, new_mse:{new_mse:.2f}")
            assert False  # crude way to stop both loops once converged
    # the per-sample (batch_size = 1) version of this loop is in the previous cell
###Output
[true]: y=9x+0
[initial]: y=-4.2759682064638344e+35x+0
[batch_size]: 5
Epoch 0
###Markdown
GD: a two-variable linear function
###Code
# Target function: y = 2*x1 + 3*x2 (a_true=2, b_true=3, c_true=0); build a batch of samples
import random
(a_true,b_true,c_true) = (2,3,0)
def y_true(x1,x2):
return a_true*x1+b_true*x2+c_true
samples = [[i,i+1,y_true(i,i+1)] for i in range(100)]
samples[:2]
(a,b,c) = (0.5,0.5,0)  # initialize to 0.5 (or initialize a, b, c randomly)
n = 0.001  # learning rate
# Loss: mean squared error
# Update rule: param_new = param - learning_rate * d(loss)/d(param), evaluated at the current sample
print(f"[true]: y={a_true}x1+{b_true}x2+{c_true}")
print(f"[initial]: y={a}x1+{b}x2+{c}")
for _ in range(2):
print(f"第 {_} 次迭代")
for (x1,x2,y_true) in tqdm_notebook(samples):
y = a*x1+b*x2+c
grad_a = (y-y_true)*x1
grad_b = (y-y_true)*x2
grad_c = (y-y_true)*1
        a = a - n*grad_a
        b = b - n*grad_b
        # c = c - n*grad_c  # c is kept frozen here since c_true = 0
mse_list = [pow((a*x1+b*x2+c-y_true),2) for (x1,x2,y_true) in samples]
new_mse = sum(mse_list)/len(mse_list)
print(f"y={a:.4f}x1+{b:.4f}x2+{c:.4f}, new_mse:{new_mse:.2f}, x1:{x1},x2:{x2},grad_a:{grad_a:.4f},grad_b:{grad_b:.4f}")
if new_mse<=0.01:
break
print(f"y={a:.4f}x1+{b:.4f}x2+{c:.4f}, new_mse:{new_mse:.2f}")
# assert False
# mse_list = [pow((a*x+b-y_true),2) for (x,y_true) in samples]
# new_mse = sum(mse_list)/len(mse_list)
# print(f"y={a:.4f}x+{b:.4f}, new_mse:{new_mse}")
###Output
_____no_output_____
###Markdown
A one-variable quadratic function
###Code
# Target function: y = (9*x)**2 + 2 (a_true=9, b_true=2); build a batch of samples (below we still fit a linear model y = a*x + b)
import random
(a_true,b_true) = (9,2)
def y_true(x):
return pow(a_true*x,2)+b_true
samples = [[i,y_true(i)] for i in range(100)]
samples[:2]
(a,b) = (0.5,0.5)  # initialize to 0.5 (or initialize a, b randomly)
n = 0.001  # learning rate
# Loss: mean squared error
# Update rule: param_new = param - learning_rate * d(loss)/d(param), evaluated at the current sample
print(f"[true]: y={a_true}x+{b_true}")
print(f"[initial]: y={a}x+{b}")
for _ in range(10):
print(f"第 {_} 次迭代")
for (x,y_true) in tqdm_notebook(samples):
y = a*x+b
grad_a = (y-y_true)*x
grad_b = (y-y_true)*1
a = a - n*grad_a #
b = b - n*grad_b #
mse_list = [pow((a*x+b-y_true),2) for (x,y_true) in samples]
new_mse = sum(mse_list)/len(mse_list)
print(f"y={a:.4f}x+{b:.4f}, new_mse:{new_mse:.2f}, grad_a:{grad_a:.4f},grad_b:{grad_b:.4f}")
if new_mse<=0.01:
break
print(f"y={a:.4f}x+{b:.4f}, new_mse:{new_mse:.2f}")
# assert False
# mse_list = [pow((a*x+b-y_true),2) for (x,y_true) in samples]
# new_mse = sum(mse_list)/len(mse_list)
# print(f"y={a:.4f}x+{b:.4f}, new_mse:{new_mse}")
###Output
_____no_output_____ |
notebooks/es157_notebookY_solutions.ipynb | ###Markdown
`ES 157` Notebook Y: Maximum Likelihood, MAP, and PCA
We spent the last few weeks talking a lot about _maximum likelihood_ and the _maximum a posteriori_ estimators, as well as _principal component analysis_. We went through a lot of math during class and sections, so in this notebook we will spend some time visualizing concepts that we saw in class. At the end of this notebook you will
1. have a better understanding of the maximum likelihood and MAP estimators,
2. have seen how the MAP and maximum likelihood estimators are related, and
3. have a better understanding of how PCA works and its properties.
As always, let us import some needed libraries.
###Code
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Maximum Likelihood vs MAP 📙 During class and previous sections, we analytically derived both the maximum likelihood and the MAP estimators for various settings, and we saw how they are related. However, here we would like to emphasize that the _likelihood function_ and the _a posteriori_ (posterior) function are, well, _functions_ of $\theta$. For our setting, we will again consider the case of *Gaussian* i.i.d. random variables $X_1, \ldots, X_n \sim \mathcal{N}(\theta^{\ast}, \sigma^2)$, each generating a _single_ sample $x_1, \ldots, x_n$. In what follows, we will try to estimate the _unknown_ mean of the random variables, $\theta^{\ast}$.
Maximum Likelihood estimation
When computing the maximum likelihood estimate, we make no assumption about the distribution of $\theta^{\ast}$. This basically means we have _no information_ about what $\theta^{\ast}$ "looks like", so we rely solely on the data to find the best estimate. The likelihood function in this case, as we saw in class, is given by
$L(\theta \mid \mathbf{x}) = \frac{1}{(2\pi\sigma^2)^{\frac{n}{2}}} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \theta)^2}$.
Then, the maximum likelihood estimator is given by
$\hat{\theta}_{\textrm{ML}} = \frac{1}{n}\sum_{i=1}^{n}x_i$.
Below, choose specific values for $\theta^{\ast}$ and $\sigma^2$ and generate `n = 10` samples from that distribution. Plot the likelihood and log-likelihood functions over a range of values for $\theta$ near the value that you chose. Overlay on your plots the true value $\theta^{\ast}$, along with the maximum likelihood estimate $\hat{\theta}_{\textrm{ML}}$.
###Code
# set your parameters
n = 10
theta_star = 5
sigma = 2
# generate data
x = np.random.normal(theta_star, sigma, n)
# compute the likelihood and log-likelihood
thetas = np.linspace(2.5, 7.5, 1000)
likelihoods = []
loglikelihoods = []
for theta in thetas:
likelihood = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2))
loglikelihood = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2)
likelihoods.append(likelihood)
loglikelihoods.append(loglikelihood)
# compute the ML estimate
theta_ML = np.mean(x)
likelihood_ML = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2))
loglikelihood_ML = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2)
# plot the likelihood and loglikelihood functions
fig = plt.figure(figsize=(10, 5))
plt.subplot(211)
plt.plot(thetas, likelihoods, 'k')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$L(\theta \mid \mathbf{x})$")
plt.title("The likelihood function")
plt.xlim([2.5, 7.5])
# add the true theta and the ML estimate
plt.plot([theta_star, theta_star], [np.min(likelihoods), 1.5 * np.max(likelihoods)], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, 1.35 * np.max(likelihoods), r'$\theta^{\ast}$', size=10)
plt.plot(theta_ML, likelihood_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(likelihoods), likelihood_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, 1.1 * likelihood_ML, r'$\hat{\theta}_{ML}$', size=10, color='r')
plt.subplot(212)
plt.plot(thetas, loglikelihoods, 'b')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$\log L(\theta \mid \mathbf{x})$")
plt.title("The log-likelihood function")
plt.xlim([2.5, 7.5])
# add the true theta and the ML estimate
plt.plot([theta_star, theta_star], [np.min(loglikelihoods), np.max(loglikelihoods) + 0.5 * np.abs(np.min(loglikelihoods))], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, np.max(loglikelihoods) + 0.4 * np.abs(np.min(loglikelihoods)), r'$\theta^{\ast}$', size=10)
plt.plot(theta_ML, loglikelihood_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(loglikelihoods), loglikelihood_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, loglikelihood_ML + 0.075 * np.abs(np.min(loglikelihoods)), r'$\hat{\theta}_{ML}$', size=10, color='r')
fig.tight_layout()
###Output
_____no_output_____
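###Markdown
 A quick numerical sanity check (added; not required by the exercise): the maximizer of the log-likelihood over the grid should land on (approximately) the sample mean, up to the grid resolution.
###Code
# The grid argmax of the log-likelihood vs. the closed-form ML estimate (the sample mean).
print(thetas[np.argmax(loglikelihoods)], np.mean(x))
###Output
_____no_output_____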
###Markdown
We see that the maximum likelihood ends up being, as expected, the maximum of the likelihood and/or the log-likelihood functions. Note that, as we stressed in class, the _point_ where the maximum is attained is the same for both the likelihood and the log-likelihood; as we discussed, _increasing_ functions might change the _value_ of the maximum, but not the point that attains it! MAP estimationIn MAP estimation, conversely, we make explicit assumptions about the distribution of $\theta^{\ast}$. Let us specifically assume that $\theta^{\ast} \sim \mathcal{N}(\bar{\theta}, \tau^2)$. In layterms, this basically means that we know the mean is close to a number, say $5$, but we don't know it's exact value; it could be $5.12$ or $4.86$. Then, MAP estimation tries to strike a balance between "trusting" the data and using the prior information that we have. The posterior function in this case, as we saw in class, is given by$p_{\mathbf{X}}(\theta \mid \mathbf{x}) = \frac{p_{\mathbf{X}}(\mathbf{x} \mid \theta) \cdot p_{\theta}(\theta)}{p_{\mathbf{X}}(\mathbf{x})}$,where $p_{\mathbf{X}}(\mathbf{x} \mid \theta)$ is equal to the likelihood function $L(\theta \mid \mathbf{x})$. In this case, the MAP estimator is given by$\hat{\theta}_{\textrm{MAP}} = \frac{\tau^2}{n \tau^2 + \sigma^2}\sum_{i=1}^{n}x_i + \frac{\sigma^2}{n \tau^2 + \sigma^2}\bar{\theta}$.Below, let $\bar{\theta}$ be equal to the value you chose for $\theta^{\ast}$ before, and choose a value for $\tau^2$. Then, generate $\theta^{\ast}$ and sample `n = 10` samples from the data distribution. Plot the posterior and the log-posterior functions over a range of values for $\theta$. Overlay on your plots the true value $\theta^{\ast}$, along with the MAP estimate $\hat{\theta}_{\textrm{MAP}}$ and the maximum likelihood estimate $\hat{\theta}_{\textrm{ML}}$. In your computations of the posterior, feel free to ignore the term $p_{\mathbf{X}}(\mathbf{x})$, i.e.$p_{\mathbf{X}}(\theta \mid \mathbf{x}) = \frac{1}{(2\pi\sigma^2)^{\frac{n}{2}}} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \theta)^2} \cdot \frac{1}{\sqrt{2 \pi} \tau} e^{-\frac{1}{2\tau^2}(\theta-\bar{\theta})^2}$.
###Code
# set your parameters
theta_bar = 5
tau = 1
theta_star = np.random.normal(theta_bar, tau)
# generate data
x = np.random.normal(theta_star, sigma, n)
# compute the likelihood and log-likelihood
thetas = np.linspace(2.5, 7.5, 1000)
posteriors = []
logposteriors = []
for theta in thetas:
posterior = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta - theta_bar) ** 2)
logposterior = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta - theta_bar) ** 2
posteriors.append(posterior)
logposteriors.append(logposterior)
# compute the ML and MAP estimates
theta_MAP = tau ** 2 / (n * tau ** 2 + sigma ** 2) * np.sum(x) + sigma ** 2 / (n * tau ** 2 + sigma ** 2) * theta_bar
posterior_MAP = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta_MAP) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta_MAP - theta_bar) ** 2)
logposterior_MAP = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta_MAP) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta_MAP - theta_bar) ** 2
theta_ML = np.mean(x)
posterior_ML = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta_ML - theta_bar) ** 2)
logposterior_ML = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta_ML - theta_bar) ** 2
# plot the posterior and logposterior functions
fig = plt.figure(figsize=(10, 5))
plt.subplot(211)
plt.plot(thetas, posteriors, 'k')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta \mid \mathbf{x})$")
plt.title("The posterior function")
plt.xlim([2.5, 7.5])
# add the true theta, theta_bar, the ML, and the MAP estimates
plt.plot([theta_star, theta_star], [np.min(posteriors), 1.85 * np.max(posteriors)], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, 1.35 * np.max(posteriors), r'$\theta^{\ast}$', size=10)
plt.plot([theta_bar, theta_bar], [np.min(posteriors), 1.85 * np.max(posteriors)], color='k', linestyle='--', linewidth=1, alpha=0.5)
plt.text(theta_bar - 0.15, 1.7 * np.max(posteriors), r'$\bar{\theta}$', size=10, alpha=0.5)
plt.plot(theta_ML, posterior_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(posteriors), posterior_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, 1.1 * posterior_ML, r'$\hat{\theta}_{ML}$', size=10, color='r')
plt.plot(theta_MAP, posterior_MAP, 'og')
plt.plot([theta_MAP, theta_MAP], [np.min(posteriors), posterior_MAP], color='g', linestyle='--', linewidth=1)
plt.text(0.99 * theta_MAP, 1.1 * posterior_MAP, r'$\hat{\theta}_{MAP}$', size=10, color='g')
plt.subplot(212)
plt.plot(thetas, logposteriors, 'b')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$\log p(\theta \mid \mathbf{x})$")
plt.title("The log-posterior function")
plt.xlim([2.5, 7.5])
# add the true theta, the ML, and the MAP estimates
plt.plot([theta_star, theta_star], [np.min(logposteriors), np.max(logposteriors) + 0.7 * np.abs(np.min(logposteriors))], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, np.max(logposteriors) + 0.4 * np.abs(np.min(logposteriors)), r'$\theta^{\ast}$', size=10)
plt.plot([theta_bar, theta_bar], [np.min(logposteriors), np.max(logposteriors) + 0.7 * np.abs(np.min(logposteriors))], color='k', linestyle='--', linewidth=1, alpha=0.5)
plt.text(theta_bar - 0.15, np.max(logposteriors) + 0.6 * np.abs(np.min(logposteriors)), r'$\bar{\theta}$', size=10, alpha=0.5)
plt.plot(theta_ML, logposterior_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(logposteriors), logposterior_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, logposterior_ML + 0.075 * np.abs(np.min(logposteriors)), r'$\hat{\theta}_{ML}$', size=10, color='r')
plt.plot(theta_MAP, logposterior_MAP, 'og')
plt.plot([theta_MAP, theta_MAP], [np.min(logposteriors), logposterior_MAP], color='g', linestyle='--', linewidth=1)
plt.text(0.99 * theta_MAP, logposterior_MAP + 0.075 * np.abs(np.min(logposteriors)), r'$\hat{\theta}_{MAP}$', size=10, color='g')
fig.tight_layout()
###Output
_____no_output_____
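###Markdown
 A useful sanity check on the closed form above (an added note): $\hat{\theta}_{\textrm{MAP}} = \frac{\tau^2}{n \tau^2 + \sigma^2}\sum_{i=1}^{n}x_i + \frac{\sigma^2}{n \tau^2 + \sigma^2}\bar{\theta}$ tends to $\hat{\theta}_{\textrm{ML}} = \frac{1}{n}\sum_{i=1}^{n}x_i$ as $\tau \to \infty$ (an uninformative prior) or as $n \to \infty$ (plenty of data), and tends to $\bar{\theta}$ as $\tau \to 0$ (a very confident prior) or as $\sigma \to \infty$ (very noisy data). The experiments below illustrate exactly these regimes.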
###Markdown
Effect of $\sigma^2$ and $\tau^2$
Having computed the MAP and ML estimates, let us examine now how they are affected by different choices of $\sigma^2$ and $\tau^2$. As a first exercise, plot the distribution of $\theta^{\ast} \sim \mathcal{N}(\bar{\theta}, \tau^2)$ for different values of $\tau$.
###Code
# set your parameters
theta_bar = 5
taus = [0.25, 1, 5]
thetas = np.linspace(2.5, 7.5, 1000)
# plot the densities in the same plot
density_1 = 1 / (np.sqrt(2 * np.pi) * taus[0]) * np.e ** (- 1 / (2 * taus[0] ** 2) * (thetas - theta_bar) ** 2)
plt.plot(thetas, density_1, 'purple', label=r"$\tau = 0.25$")
density_2 = 1 / (np.sqrt(2 * np.pi) * taus[1]) * np.e ** (- 1 / (2 * taus[1] ** 2) * (thetas - theta_bar) ** 2)
plt.plot(thetas, density_2, 'orange', label=r"$\tau = 1$")
density_3 = 1 / (np.sqrt(2 * np.pi) * taus[2]) * np.e ** (- 1 / (2 * taus[2] ** 2) * (thetas - theta_bar) ** 2)
plt.plot(thetas, density_3, 'green', label=r"$\tau = 5$")
plt.xlim([2.5, 7.5])
plt.title(r"The pdf of $\theta^{\ast}$ for different choices of $\tau$")
plt.ylabel(r"$p(\theta)$")
plt.xlabel(r"$\theta$")
plt.legend()
###Output
_____no_output_____
###Markdown
Note that the above describes the _prior distribution_ of $\theta^{\ast}$. In other words, it encodes the prior information we may have about $\theta^{\ast}$; when $\tau$ is small, we are pretty confident that we have a good initial "guess" for $\theta^{\ast}$. Conversely, when $\tau$ is large, virtually any $\theta$ is equally likely to be the true value of $\theta^{\ast}$. Next, repeat the MAP and ML estimations for the two different values of $\tau$ that are indicated. Also, plot the estimators again for a much higher value of $\sigma$.
###Code
# vary tau
theta_bar = 5
sigma = 2
taus = [0.01, 10]
for idx in range(len(taus)):
tau = taus[idx]
theta_star = np.random.normal(theta_bar, tau)
# generate data
x = np.random.normal(theta_star, sigma, n)
# compute the likelihood and log-likelihood
thetas = np.linspace(theta_star - 2.5, theta_star + 2.5, 1000)
posteriors = []
logposteriors = []
for theta in thetas:
posterior = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta - theta_bar) ** 2)
logposterior = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta - theta_bar) ** 2
posteriors.append(posterior)
logposteriors.append(logposterior)
# compute the ML and MAP estimates
theta_MAP = tau ** 2 / (n * tau ** 2 + sigma ** 2) * np.sum(x) + sigma ** 2 / (n * tau ** 2 + sigma ** 2) * theta_bar
posterior_MAP = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta_MAP) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta_MAP - theta_bar) ** 2)
logposterior_MAP = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta_MAP) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta_MAP - theta_bar) ** 2
theta_ML = np.mean(x)
posterior_ML = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta_ML - theta_bar) ** 2)
logposterior_ML = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta_ML - theta_bar) ** 2
# plot the posterior and logposterior functions
fig = plt.figure(figsize=(10, 15))
plt.subplot(611 + idx)
plt.plot(thetas, posteriors, 'k')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta \mid \mathbf{x})$")
plt.title("The posterior function")
plt.xlim([theta_star - 2.5, theta_star + 2.5])
# add tau, sigma
plt.legend(loc='upper left', title="$\sigma = {}$\n$\\tau = {}$".format(sigma, tau))
# add the true theta, the ML, and the MAP estimates
plt.plot([theta_star, theta_star], [np.min(posteriors), 1.85 * np.max(posteriors)], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, 1.35 * np.max(posteriors), r'$\theta^{\ast}$', size=10)
plt.plot([theta_bar, theta_bar], [np.min(posteriors), 1.85 * np.max(posteriors)], color='k', linestyle='--', linewidth=1, alpha=0.5)
plt.text(theta_bar - 0.15, 1.7 * np.max(posteriors), r'$\bar{\theta}$', size=10, alpha=0.5)
plt.plot(theta_ML, posterior_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(posteriors), posterior_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, 1.1 * posterior_ML, r'$\hat{\theta}_{ML}$', size=10, color='r')
plt.plot(theta_MAP, posterior_MAP, 'og')
plt.plot([theta_MAP, theta_MAP], [np.min(posteriors), posterior_MAP], color='g', linestyle='--', linewidth=1)
plt.text(0.99 * theta_MAP, 1.1 * posterior_MAP, r'$\hat{\theta}_{MAP}$', size=10, color='g')
plt.subplot(611 + idx + 1)
plt.plot(thetas, logposteriors, 'b')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$\log p(\theta \mid \mathbf{x})$")
plt.title("The log-posterior function")
plt.xlim([theta_star - 2.5, theta_star + 2.5])
# add tau, sigma
plt.legend(loc='upper left', title="$\sigma = {}$\n$\\tau = {}$".format(sigma, tau))
# add the true theta, the ML, and the MAP estimates
plt.plot([theta_star, theta_star], [np.min(logposteriors), np.max(logposteriors) + 0.7 * np.abs(np.min(logposteriors))], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, np.max(logposteriors) + 0.4 * np.abs(np.min(logposteriors)), r'$\theta^{\ast}$', size=10)
plt.plot([theta_bar, theta_bar], [np.min(logposteriors), np.max(logposteriors) + 0.7 * np.abs(np.min(logposteriors))], color='k', linestyle='--', linewidth=1, alpha=0.5)
plt.text(theta_bar - 0.15, np.max(logposteriors) + 0.6 * np.abs(np.min(logposteriors)), r'$\bar{\theta}$', size=10, alpha=0.5)
plt.plot(theta_ML, logposterior_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(logposteriors), logposterior_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, logposterior_ML + 0.075 * np.abs(np.min(logposteriors)), r'$\hat{\theta}_{ML}$', size=10, color='r')
plt.plot(theta_MAP, logposterior_MAP, 'og')
plt.plot([theta_MAP, theta_MAP], [np.min(logposteriors), logposterior_MAP], color='g', linestyle='--', linewidth=1)
plt.text(0.99 * theta_MAP, logposterior_MAP + 0.075 * np.abs(np.min(logposteriors)), r'$\hat{\theta}_{MAP}$', size=10, color='g')
fig.tight_layout()
# high sigma
sigma = 100
tau = 1
theta_star = np.random.normal(theta_bar, tau)
# generate data
x = np.random.normal(theta_star, sigma, n)
# compute the likelihood and log-likelihood
thetas = np.linspace(-50, 50, 1000)
posteriors = []
logposteriors = []
for theta in thetas:
posterior = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta - theta_bar) ** 2)
logposterior = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta - theta_bar) ** 2
posteriors.append(posterior)
logposteriors.append(logposterior)
# compute the ML and MAP estimates
theta_MAP = tau ** 2 / (n * tau ** 2 + sigma ** 2) * np.sum(x) + sigma ** 2 / (n * tau ** 2 + sigma ** 2) * theta_bar
posterior_MAP = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta_MAP) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta_MAP - theta_bar) ** 2)
logposterior_MAP = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta_MAP) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta_MAP - theta_bar) ** 2
theta_ML = np.mean(x)
posterior_ML = 1 / (2 * np.pi * sigma ** 2) ** (n / 2) * np.e ** (- 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2)) * 1 / (np.sqrt(2 * np.pi) * tau) * np.e ** (- 1 / (2 * tau ** 2) * (theta_ML - theta_bar) ** 2)
logposterior_ML = np.log(1 / (2 * np.pi * sigma ** 2) ** (n / 2)) - 1 / (2 * sigma ** 2) * np.sum((x - theta_ML) ** 2) + np.log(1 / (np.sqrt(2 * np.pi) * tau)) - 1 / (2 * tau ** 2) * (theta_ML - theta_bar) ** 2
# plot the posterior and logposterior functions
fig = plt.figure(figsize=(10, 15))
plt.subplot(615)
plt.plot(thetas, posteriors, 'k')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta \mid \mathbf{x})$")
plt.title("The posterior function")
plt.xlim([-50, 50])
# add tau, sigma
plt.legend(loc='upper left', title="$\sigma = {}$\n$\\tau = {}$".format(sigma, tau))
# add the true theta, the ML, and the MAP estimates
plt.plot([theta_star, theta_star], [np.min(posteriors), 1.85 * np.max(posteriors)], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, 1.35 * np.max(posteriors), r'$\theta^{\ast}$', size=10)
plt.plot([theta_bar, theta_bar], [np.min(posteriors), 1.85 * np.max(posteriors)], color='k', linestyle='--', linewidth=1, alpha=0.5)
plt.text(theta_bar - 0.15, 1.7 * np.max(posteriors), r'$\bar{\theta}$', size=10, alpha=0.5)
plt.plot(theta_ML, posterior_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(posteriors), posterior_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, 1.1 * posterior_ML, r'$\hat{\theta}_{ML}$', size=10, color='r')
plt.plot(theta_MAP, posterior_MAP, 'og')
plt.plot([theta_MAP, theta_MAP], [np.min(posteriors), posterior_MAP], color='g', linestyle='--', linewidth=1)
plt.text(0.99 * theta_MAP, 1.1 * posterior_MAP, r'$\hat{\theta}_{MAP}$', size=10, color='g')
plt.subplot(616)
plt.plot(thetas, logposteriors, 'b')
plt.xlabel(r"$\theta$")
plt.ylabel(r"$\log p(\theta \mid \mathbf{x})$")
plt.title("The log-posterior function")
plt.xlim([-50, 50])
# add tau, sigma
plt.legend(loc='upper left', title="$\sigma = {}$\n$\\tau = {}$".format(sigma, tau))
# add the true theta, the ML, and the MAP estimates
plt.plot([theta_star, theta_star], [np.min(logposteriors), np.max(logposteriors) + 0.7 * np.abs(np.min(logposteriors))], color='k', linestyle='--', linewidth=1)
plt.text(theta_star - 0.15, np.max(logposteriors) + 0.4 * np.abs(np.min(logposteriors)), r'$\theta^{\ast}$', size=10)
plt.plot([theta_bar, theta_bar], [np.min(logposteriors), np.max(logposteriors) + 0.7 * np.abs(np.min(logposteriors))], color='k', linestyle='--', linewidth=1, alpha=0.5)
plt.text(theta_bar - 0.15, np.max(logposteriors) + 0.6 * np.abs(np.min(logposteriors)), r'$\bar{\theta}$', size=10, alpha=0.5)
plt.plot(theta_ML, logposterior_ML, 'or')
plt.plot([theta_ML, theta_ML], [np.min(logposteriors), logposterior_ML], color='r', linestyle='--', linewidth=1)
plt.text(0.99 * theta_ML, logposterior_ML + 0.075 * np.abs(np.min(logposteriors)), r'$\hat{\theta}_{ML}$', size=10, color='r')
plt.plot(theta_MAP, logposterior_MAP, 'og')
plt.plot([theta_MAP, theta_MAP], [np.min(logposteriors), logposterior_MAP], color='g', linestyle='--', linewidth=1)
plt.text(0.99 * theta_MAP, logposterior_MAP + 0.075 * np.abs(np.min(logposteriors)), r'$\hat{\theta}_{MAP}$', size=10, color='g')
###Output
No handles with labels found to put in legend.
No handles with labels found to put in legend.
No handles with labels found to put in legend.
No handles with labels found to put in legend.
No handles with labels found to put in legend.
No handles with labels found to put in legend.
###Markdown
Principal Component Analysis
We spent quite a few lectures and sections talking about PCA, and you implemented versions of it for `PSet 3` and `Lab 3`. In this notebook, we want to emphasize the steps of PCA, in a visual manner. Moreover, we will point out some of the common _pitfalls_ of PCA; namely, where and when it doesn't work as well as we'd hope. We will spare the excruciating details that we went over during class, as well as the exposition of the properties of the covariance matrix. We will only, for completeness, formally state the setting. Consider some unlabelled data $\mathbf{X} \in \mathbb{R}^{n \times m}$. Center the data and let $\mathbf{X}_c = \mathbf{X} - \mathbb{E}(\mathbf{X})$. Then, $\boldsymbol{\Sigma} = \operatorname{Cov}(\mathbf{X}_c)$ is a symmetric matrix, and has an eigenvalue decomposition, say $\boldsymbol{\Sigma} = \mathbf{W} \boldsymbol{\Lambda} \mathbf{W}^{-1}$, with $\mathbf{W}\mathbf{W}^T = \mathbf{I}$. Then the columns of $\mathbf{W}$ are called _principal directions_, and PCA is defined as the projection of the data on the principal components
$\mathbf{Y} = \mathbf{W}^T\mathbf{X}_c.$
PCA is an extremely powerful tool, most frequently used for _dimensionality reduction_. A few things to keep in mind about PCA:
- PCA simply finds another basis to represent the data; namely, it finds the vectors along the directions with _maximal variance_. However, this is done in a _greedy_ manner, and not in a _joint_ maximization of the variance.
- PCA creates an _orthonormal basis_ (which, in many cases, is a _pitfall_).
As a final comment before we begin, we can think of PCA as changing the point from which we're looking at an object (we will come back to that perspective later on during the notebook).
Elementary PCA
To set the scene, let's generate some data from an ellipse. In the following `code` cell, generate data that abides by the equation of an ellipse of your choice. To make things more interesting, rotate your data, and make sure that they are centered somewhere away from zero.
###Code
# set the number of data points and parameters
n = 5000
x_denom = 9
y_denom = 1
theta = -np.pi / 4
# generate random points
random_points = np.random.uniform(-5, 5, (n, 2))
# keep only the ones that satisfy the ellipse equation
ellipse = [[x, y] for x, y in random_points if x ** 2 / x_denom + y ** 2 / y_denom <= 1]
ellipse = np.array(ellipse)
# rotate it and make sure it is away from zero
x = ellipse[:, 0] * np.cos(theta) + ellipse[:, 1] * np.sin(theta)
y = -ellipse[:, 0] * np.sin(theta) + ellipse[:, 1] * np.cos(theta)
x += 5
y += 7
# plot the data
plt.scatter(x, y, s=3, marker='o', c='k')
# axes for visualization purposes
plt.plot([-5, 15], [0, 0], color='k', linewidth=1)
plt.plot([0, 0], [-5, 15], color='k', linewidth=1)
plt.xlim([-5, 15])
plt.ylim([-5, 15])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("Data sampled on an ellipse")
###Output
_____no_output_____
###Markdown
Let us now perform PCA. Our goal here is to plot the data and the principal directions after every step, to try and get a better understanding of exactly what PCA does. Let us again restate the steps of PCA, so we are all on the same page regarding what we need to do in what follows:
1. The first step towards PCA is centering our data around zero.
2. Then, we compute the covariance matrix of the data.
3. The _principal directions_ are defined as the eigenvectors of the covariance matrix.
4. (Optional) We only keep a few coefficients.
5. We project the data on the principal dimensions.
6. To reconstruct, we "invert" the transformation to go back to the original domain.
7. We re-add the mean to get the same representation.
So, let's begin. 🤓 As the first step dictates, recenter your data and plot the zero-meaned and the original data on the same plot.
###Code
# center the data
mean_x = np.mean(x)
mean_y = np.mean(y)
x_zm = x - mean_x
y_zm = y - mean_y
# plot the data
plt.scatter(x, y, s=3, marker='o', c='k', alpha=0.125)
plt.scatter(x_zm, y_zm, s=3, marker='o', c='k')
# axes for visualization purposes
plt.plot([-5, 15], [0, 0], color='k', linewidth=1)
plt.plot([0, 0], [-5, 15], color='k', linewidth=1)
plt.xlim([-5, 15])
plt.ylim([-5, 15])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("Data sampled on an ellipse")
###Output
_____no_output_____
###Markdown
We then need to compute the _principal directions_. Again, these are simply the eigenvectors of the covariance matrix of the centered data. Compute the principal directions, and plot them overlaid on both the original and the zero-meaned data.
###Code
N = len(x)
data = np.zeros((2, N))
data[0, :] = x_zm
data[1, :] = y_zm
# compute the covariance matrix
cov_x = np.cov(data)
# compute the eigenvectors
vals, V = np.linalg.eig(cov_x)
# plot the data
plt.scatter(x, y, s=3, marker='o', c='k', alpha=0.125)
plt.scatter(x_zm, y_zm, s=3, marker='o', c='k')
# axes for visualization purposes
plt.plot([-5, 15], [0, 0], color='k', linewidth=1)
plt.plot([0, 0], [-5, 15], color='k', linewidth=1)
# plot the principal directions
plt.plot([0, V[0][0]], [0, V[0][1]], color='r', linewidth=3)
plt.plot([0, V[1][0]], [0, V[1][1]], color='r', linewidth=3)
# also overlayed on the original data
plt.plot([mean_x, V[0][0] + mean_x], [mean_y, V[0][1] + mean_y], color='r', linewidth=3, alpha=0.5)
plt.plot([mean_x, V[1][0] + mean_x], [mean_y, V[1][1] + mean_y], color='r', linewidth=3, alpha=0.5)
plt.xlim([-5, 15])
plt.ylim([-5, 15])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("Data sampled on an ellipse")
###Output
_____no_output_____
###Markdown
We see that the first principal direction is chosen along the axis with the _greatest_ variance. Then, the next direction of maximum variance is chosen, **but** under the constraint that it is _orthogonal_ to the first one. In the following `code` block, project both the original and the centered data on the principal components and plot everything on the same plot.
###Code
data_orig = np.zeros((2, N))
data_orig[0, :] = x
data_orig[1, :] = y
# project the data on the principal components
pc_data = np.dot(V.T, data)
pc_data_orig = np.dot(V.T, data_orig)
# plot the data
plt.scatter(pc_data[0, :], pc_data[1, :], s=3, marker='o', c='k')
plt.scatter(pc_data_orig[0, :], pc_data_orig[1, :], s=3, marker='o', c='k', alpha=0.125)
# plot the new axes
plt.plot([-5, 15], [0, 0], color='r', linewidth=1)
plt.plot([0, 0], [-5, 15], color='r', linewidth=1)
plt.xlim([-5, 15])
plt.ylim([-5, 15])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("Data sampled on an ellipse")
###Output
_____no_output_____
###Markdown
We see that PCA aligned the new axes with the principal directions, i.e., with the directions of maximal variance. Remember what we said; we can think of PCA as simply moving around the space and changing our point of view. Our original data was like a _frisbee_; all we did was change the viewpoint of the frisbee to see it in a clearer light. **Note**: at the risk of going on a tangent, this interpretation of PCA that we're introducing is not entirely arbitrary. On the contrary, by construction $\mathbf{W}$ is an _orthogonal_ matrix. These matrices have a special place in algebra; those with determinant $+1$ comprise the _special orthogonal group_ $SO(n)$. This group is also, aptly, called the _rotation group_; these matrices are transformations that generalize the notion of rotation to any dimension. Next, we will apply the "inverse" transformation to go back to the original domain. Note that in our simple example, we didn't really prune any of the dimensions, so the plot we expect to see is _identical_ to the one where we simply centered the data. In an actual application with _high-dimensional data_, we would only keep a few of the principal components before projecting back, resulting in a _low-dimensional_ approximation of the original data. Apply the "inverse" transformation and plot again the centered and the original data.
###Code
# "invert" the projection
data_recon = np.dot(V, pc_data)
data_orig_recon = np.dot(V, pc_data_orig)
# plot the data
plt.scatter(data_recon[0, :], data_recon[1, :], s=3, marker='o', c='k')
plt.scatter(data_orig_recon[0, :], data_orig_recon[1, :], s=3, marker='o', c='k', alpha=0.125)
# add the axes
plt.plot([-5, 10], [0, 0], color='k', linewidth=1)
plt.plot([0, 0], [-5, 10], color='k', linewidth=1)
plt.xlim([-5, 10])
plt.ylim([-5, 10])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("Data sampled on an ellipse")
###Output
_____no_output_____
###Markdown
Finally, adding back the means of each dimension will get us, faithfully, back to our original data magnitudes.
###Code
data_recon_m = data_recon
# add back the means
data_recon_m[0, :] += mean_x
data_recon_m[1, :] += mean_y
# plot the data
plt.scatter(data_recon_m[0, :], data_recon_m[1, :], s=3, marker='o', c='k')
# add the axes
plt.plot([-5, 10], [0, 0], color='k', linewidth=1)
plt.plot([0, 0], [-5, 10], color='k', linewidth=1)
plt.xlim([-5, 10])
plt.ylim([-5, 10])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("Data sampled on an ellipse")
###Output
_____no_output_____
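###Markdown
 For comparison (a sketch, assuming scikit-learn is available; it is not used elsewhere in this notebook), the manual steps above can be reproduced in a few lines with `sklearn.decomposition.PCA`, which centers the data, finds the principal directions, and inverts the projection for us.
###Code
# Cross-check of the manual PCA steps with scikit-learn.
from sklearn.decomposition import PCA
X = np.column_stack((x, y))          # the original (uncentered) ellipse data, shape (N, 2)
pca = PCA(n_components=2)
Y_pc = pca.fit_transform(X)          # steps 1-5: center and project onto the principal directions
X_rec = pca.inverse_transform(Y_pc)  # steps 6-7: back-project and re-add the mean
print(np.allclose(X, X_rec))         # keeping all components, the reconstruction is exact
print(pca.components_)               # rows span the same directions as the eigenvectors V above (up to sign/order)
###Output
_____no_output_____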
###Markdown
PCA pitfalls 👎
So far PCA seems pretty awesome! It is an extremely powerful tool that is based on a very simple and intuitive idea, and is very easy to implement. There has to be a catch, _right_? As we mentioned before, the principal directions _have_ to be orthogonal to each other. Therefore, PCA will have rather poor performance when the data dimensions aren't orthogonal to each other. Another pitfall, in conjunction with the orthogonality constraint, is that PCA maximizes variance in a _greedy_ manner. What that means is that it chooses the direction of maximum variance, and _then_ chooses another direction orthogonal to that. However, if the directions (again, barring the constraint of orthogonality) were chosen _jointly_, rather than _sequentially_, we could capture the structure of the data better. Let us try to illustrate these two pitfalls in conjunction, by showing an example where the principal directions chosen by PCA seem like a rather poor choice. To that end, generate again data from _two_ ellipses this time, so that they create an overall "X"-shaped pattern.
###Code
# set the number of data points and parameters
n = 5000
x_denom = 25
y_denom = 0.25
theta_1 = -np.pi / 4
theta_2 = -np.pi / 8
# generate random points
random_points = np.random.uniform(-5, 5, (n, 2))
# keep only the ones that satisfy the equation
ellipse = [[x, y] for x, y in random_points if x ** 2 / x_denom + y ** 2 / y_denom <= 1]
ellipse = np.array(ellipse)
# create the first part of the X
x = ellipse[:, 0] * np.cos(theta_1) + ellipse[:, 1] * np.sin(theta_1)
y = -ellipse[:, 0] * np.sin(theta_1) + ellipse[:, 1] * np.cos(theta_1)
x += 5
y += 7
# create the second part of the X
x_e = ellipse[:, 0] * np.cos(theta_2) + ellipse[:, 1] * np.sin(theta_2)
y_e = -ellipse[:, 0] * np.sin(theta_2) + ellipse[:, 1] * np.cos(theta_2)
x_e += 5
y_e += 7
x_n = np.concatenate((x, x_e))
y_n = np.concatenate((y, y_e))
# plot the data
plt.scatter(x_n, y_n, s=3, marker='o', c='k')
# axes for visualization purposes
plt.plot([-5, 15], [0, 0], color='k', linewidth=1)
plt.plot([0, 0], [-5, 15], color='k', linewidth=1)
plt.xlim([-5, 15])
plt.ylim([-5, 15])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("X-shaped data sampled on two ellipses")
###Output
_____no_output_____
###Markdown
So why is this particular dataset interesting? Well, we see that the data lie along directions that are not orthogonal, and still have a decent amount of variance along those directions. What do you expect the principal directions will look like for the above dataset? Compute the principal directions below, and overlay them in both the centered and the original dataset.
###Code
# zero mean the data
mean_xn = np.mean(x_n)
mean_yn = np.mean(y_n)
xn_zm = x_n - mean_xn
yn_zm = y_n - mean_yn
N = len(x_n)
data_n = np.zeros((2, N))
data_n[0, :] = xn_zm
data_n[1, :] = yn_zm
# compute the covariance matrix
cov_xn = np.cov(data_n)
# compute the eigenvectors
vals_n, V_n = np.linalg.eig(cov_xn)
# plot the data
plt.scatter(x_n, y_n, s=3, marker='o', c='k', alpha=0.125)
plt.scatter(xn_zm, yn_zm, s=3, marker='o', c='k')
# axes for visualization purposes
plt.plot([-5, 15], [0, 0], color='k', linewidth=1)
plt.plot([0, 0], [-5, 15], color='k', linewidth=1)
# plot the principal directions
plt.plot([0, V_n[0][0]], [0, V_n[0][1]], color='r', linewidth=3)
plt.plot([0, V_n[1][0]], [0, V_n[1][1]], color='r', linewidth=3)
# also overlayed on the original data
plt.plot([mean_xn, V_n[0][0] + mean_xn], [mean_yn, V_n[0][1] + mean_yn], color='r', linewidth=3, alpha=0.5)
plt.plot([mean_xn, V_n[1][0] + mean_xn], [mean_yn, V_n[1][1] + mean_yn], color='r', linewidth=3, alpha=0.5)
plt.xlim([-5, 15])
plt.ylim([-5, 15])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("X-shaped data sampled on two ellipses")
###Output
_____no_output_____
###Markdown
Hmmm, were you expecting these principal directions? 🤔 We see that, even though these directions are orthogonal to each other and are along the directions of maximal variance, they are not very good for modeling the data. In the next `code` block, project the data onto the principal dimensions so we can have a closer look.
###Code
data_orig_n = np.zeros((2, N))
data_orig_n[0, :] = x_n
data_orig_n[1, :] = y_n
# project the data on the principal components
pc_data_n = np.dot(V_n.T, data_n)
pc_data_orig_n = np.dot(V_n.T, data_orig_n)
# plot the data
plt.scatter(pc_data_n[0, :], pc_data_n[1, :], s=3, marker='o', c='k')
plt.scatter(pc_data_orig_n[0, :], pc_data_orig_n[1, :], s=3, marker='o', c='k', alpha=0.125)
# plot the new axes
plt.plot([-5, 15], [0, 0], color='r', linewidth=1)
plt.plot([0, 0], [-5, 15], color='r', linewidth=1)
plt.xlim([-5, 15])
plt.ylim([-5, 15])
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.title("X-shaped data sampled on two ellipses")
###Output
_____no_output_____ |
viral_events/viral_event_summaries.ipynb | ###Markdown
Notebook description
This notebook takes as inputs two files, called 'counts_ginis.csv' and 'queries_origin_matched.csv', and produces as output a file called 'event_summary.csv', which is used in the actual analysis. First, the script reads data about the query-events from counts_ginis.csv. Then, it uses data from queries_origin_matched.csv to filter the data so that only a certain number of days before and after the event's time of origin are considered. Note that there is no 1-to-1 mapping between Futusome events and queries, so a query-event may have multiple origin dates. Due to this, the script checks if two or more origin times that correspond to a query are included within the time window mentioned above. If so, the queries are considered to form a single event. Otherwise, they are treated as different events.
###Code
import csv
import pandas as pd
import datetime
import collections
import sys
csv.field_size_limit(sys.maxsize)
###Output
_____no_output_____
###Markdown
Essentially, we're interested in looking at some number of days before and after an event's time of origin. This part can be used to set up these parameters; the paper used days_before = 0 and days_after = 30. Also set up the final date to be used, which here is 2017-05-17.
###Code
days_before = 0
days_after = 30
interval = datetime.timedelta(days = days_before + days_after)
#interval = datetime.timedelta(days = 30)
final_date = datetime.date(2017, 5, 17)
###Output
_____no_output_____
###Markdown
The event queries contain some incorrect characters; this cell is used to correct them. Note that the characters here are not the regular 'ö', 'ä', 'Ö' and 'Ä', although they look like them.
###Code
wrong_ae = 'ä'
wrong_oe = 'ö'
wrong_OE = 'Ö'
wrong_AE = '̈A'
###Output
_____no_output_____
###Markdown
Read data and combine events Read queries and corresponding data. This script also replaces the faulty characters mentioned above with '*'s.
###Code
def read_numbers(path):
ndict = collections.defaultdict(dict)
with open(path, 'r') as f:
reader = csv.DictReader(f, delimiter = ',')
for row in reader:
## The one row for each query and each value type
## Let's split the index and fix broken letters
#query_count = row['query / count'].replace(wrong_ae, 'ä').replace(wrong_AE, 'Ä').replace(wrong_oe, 'ö').replace(wrong_OE, 'Ö')
## This is 'query / type' for 15062017 onwards,
## 'query / count' before that
query_count = row['query / type'].replace(wrong_ae, '*').replace(wrong_AE, '*').replace(wrong_oe, '*').replace(wrong_OE, '*')
query_count = query_count.split(' / ')
query = query_count[0]
value_type = query_count[1]
row.pop('query / type')
ndict[query][value_type] = row
return ndict
###Output
_____no_output_____
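###Markdown
A minimal sketch of the structure read_numbers() builds, using a small throwaway CSV written to the working directory. The query and the counts are hypothetical; judging from the code above, the real counts_ginis file has one row per query and value type and one column per day.
###Code
toy_path = 'toy_counts_ginis.csv'
toy_file = open(toy_path, 'w')
toy_file.write('query / type,2017-01-01,2017-01-02\n')
toy_file.write('text.hashtag:foo / post counts,3,0\n')
toy_file.write('text.hashtag:foo / author counts,2,0\n')
toy_file.close()
toy_dict = read_numbers(toy_path)
## ndict[query][value_type] is a date -> count mapping (values are strings)
assert toy_dict['text.hashtag:foo']['post counts']['2017-01-01'] == '3'
###Output
_____no_output_____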
###Markdown
This function checks whether two or more origin times that map to a query fall within the same time window and, if so, combines them, keeping the earliest origin time and its event id. It is used as a helper function by select_days().
###Code
def combine_events(orig_ats, event_ids, interval):
i = 0
j = 1
clean_origins = set()
while True:
## Here we'll also add an id to each event
if j >= len(orig_ats):
if len(orig_ats) == 1:
clean_origins.add((orig_ats[0], event_ids[0]))
break
first = orig_ats[i]
first_id = event_ids[i]
second = orig_ats[j]
second_id = event_ids[j]
if second - first <= interval:
clean_origins.add((first, first_id))
j += 1
else:
clean_origins.add((first, first_id))
clean_origins.add((second, second_id))
i += 1
j += 1
return clean_origins
###Output
_____no_output_____
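###Markdown
A minimal sketch of combine_events() with hypothetical dates, using the 30-day window from the paper:
###Code
toy_origins = [datetime.date(2017, 1, 1), datetime.date(2017, 1, 10), datetime.date(2017, 3, 1)]
toy_ids = ['a', 'b', 'c']
toy_combined = combine_events(toy_origins, toy_ids, datetime.timedelta(days = 30))
## the first two dates are 9 days apart, so they collapse into the earlier one;
## the third is more than 30 days later and stays a separate event
assert toy_combined == {(datetime.date(2017, 1, 1), 'a'), (datetime.date(2017, 3, 1), 'c')}
###Output
_____no_output_____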
###Markdown
Loops through the data associated with a query. Discards the query if its data looks broken (a zero count on the origin date). Otherwise takes each part of the event data (i.e. information about posts, authors, domains and sources per day) as a dict, appends the event's origin time, event id, corresponding query and data type, and returns a list containing these dicts.
###Code
def loop_query_data(query_data, orig_at, event_id, _query):
origs_to_add = []
for k, v in query_data.items():
## If every type of count (posts count, domains count)
## has something other than zero in the first slot,
## the event will be added to the list. If it has a zero,
## something's broken and the event will be discarded
## and its query printed.
## The current data set has three broken events, I believe.
v = v.copy()
if v[str(orig_at)] == str(0):
print('Error at: ' + _query)
return False, origs_to_add
v['orig_at'] = orig_at
v['event_id'] = event_id
v['query'] = _query
v['value_type'] = k
origs_to_add.append(v)
return True, origs_to_add
###Output
_____no_output_____
###Markdown
This function selects, for each event, the data from the days that fall within the time window around its origin, as specified above. It also replaces the faulty characters mentioned earlier with '*'s.
###Code
def select_days(path, numdict):
events = []
## How many days before and after origin at are looked at
with open(path, 'r') as f:
reader = csv.DictReader(f, delimiter = ',')
for row in reader:
query = row['query']
query = row['query'].replace('ö', '*').replace('ä', '*').replace('Ö', '*').replace('Ä', '*')
## Here, there may be multiple origin dates and event ids
## for a given query, so let's separate them
event_ids = row['id'].split(';')
orig_ats = row['orig_at'].split(';')
orig_dates = []
## Consider each origin_at date. If two origin_ats related to
## a query are far enough apart (their gap exceeds the window),
## treat them as independent events; otherwise combine them.
for i in range(0, len(orig_ats)):
## Clean out milliseconds
orig_at = orig_ats[i].split('.')[0]
orig_at = datetime.datetime.strptime(orig_at, '%Y-%m-%d %H:%M:%S').date()
orig_ats[i] = orig_at
clean_origins = combine_events(orig_ats, event_ids, interval)
## Loop through each event 'version'
version = 0
for origin in clean_origins:
## This part makes sure that only events that
## do not have incomplete data associated with them
## are considered
## Consider each event and each id
orig_at = origin[0]
event_id = origin[1]
## Fetch related data
query_data = numdict[query]
## In case multiple events correspond to a single query,
## append 'version number' to the query name.
_query = query + '_' + str(version)
## Go through the data associated with the query.
## If loop_query_data() returns True, there were no
## problems with the data so it's added to events.
success, origs_to_add = loop_query_data(query_data, orig_at, event_id, _query)
if success:
for orig in origs_to_add:
events.append(orig)
version += 1
return events
e = read_numbers('data/csv/counts_ginis_2017-08-27.csv')
e = select_days('data/csv/queries_orig_matched_2017-08-24.csv', e)
###Output
_____no_output_____
###Markdown
Turn data into a data frame
###Code
df = pd.DataFrame(e)
df = df.set_index(['query', 'value_type'])
df.drop('all documents', axis = 1, inplace = True)
df.drop('event_id', axis = 1, inplace = True)
def selected_days(columns):
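## for one (query, value_type) row: keep the count values for the days that
## fall inside the window around the event's origin; rows whose window would
## run past final_date are skipped (nothing is returned for them)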
orig_at = columns.loc['orig_at']
dates = columns.index
dates = dates.drop('orig_at')
selected = []
if orig_at + datetime.timedelta(days = days_after) > final_date:
return
for date in dates:
column_date = datetime.datetime.strptime(date, '%Y-%m-%d').date()
if column_date >= orig_at - datetime.timedelta(days = days_before) \
and column_date < orig_at + datetime.timedelta(days = days_after):
if 'counts' in columns.name[1]:
selected.append(columns.loc[date])
return pd.Series(selected)
df = df.iloc[df.index.get_level_values('value_type').str.contains('counts')]
df = df.apply(selected_days, axis = 1)
###Output
_____no_output_____
###Markdown
Form event summary file Find out how many days an event lasted, i.e. how many days it took for the post count to drop to zero (capped at 30 days if it never did).
###Code
def get_event_duration(columns):
if not (columns == '0').any():
return 30
return int((columns == '0').argmax())
###Output
_____no_output_____
###Markdown
Compute the total and the average of a row's values over the days the event lasted.
###Code
def total_during_duration(columns):
duration = columns['duration']
active_days = columns[0:duration]
return sum(active_days.astype(float))
def average_during_duration(columns):
duration = columns['duration']
active_days = columns[0:duration]
return sum(active_days.astype(float)) / duration
###Output
_____no_output_____
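###Markdown
A minimal sketch of the duration and total/average helpers on hypothetical rows (the counts are strings because they come straight from the CSV):
###Code
## duration: position of the first day with zero posts
toy_counts = pd.Series(['4', '2', '0', '0'])
assert get_event_duration(toy_counts) == 2
## totals and averages only cover the days the event was active
toy_row = pd.Series({'0 posts': '4', '1 posts': '2', '2 posts': '0', 'duration': 2})
assert total_during_duration(toy_row) == 6.0
assert average_during_duration(toy_row) == 3.0
###Output
_____no_output_____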
###Markdown
Manually split the data frame into one frame per value type (posts, authors, domains, sources).
###Code
posts_df = df.loc[(df.index.get_level_values('value_type') == 'post counts')]
posts_df = posts_df.reset_index().drop('value_type', axis = 1).set_index('query')
authors_df = df.loc[(df.index.get_level_values('value_type') == 'author counts')]
authors_df = authors_df.reset_index().drop('value_type', axis = 1).set_index('query')
domains_df = df.loc[(df.index.get_level_values('value_type') == 'domain counts')]
domains_df = domains_df.reset_index().drop('value_type', axis = 1).set_index('query')
sources_df = df.loc[(df.index.get_level_values('value_type') == 'source counts')]
sources_df = sources_df.reset_index().drop('value_type', axis = 1).set_index('query')
###Output
_____no_output_____
###Markdown
Apply the dataframe functions defined above.
###Code
duration_df = posts_df.apply(get_event_duration, axis = 1)
## Rename some columns
posts_df.columns = [str(i) + ' posts' for i in range(0,30)]
authors_df.columns = [str(i) + ' authors' for i in range(0,30)]
domains_df.columns = [str(i) + ' domains' for i in range(0,30)]
sources_df.columns = [str(i) + ' sources' for i in range(0,30)]
## Add duration info to each data frame
posts_df['duration'] = duration_df
authors_df['duration'] = duration_df
domains_df['duration'] = duration_df
sources_df['duration'] = duration_df
## Compute the total and average values of the variables
## during the time the event was 'active'
posts_df['total posts'] = posts_df.apply(total_during_duration, axis = 1)
posts_df['average posts'] = posts_df.apply(average_during_duration, axis = 1)
authors_df['total authors'] = authors_df.apply(total_during_duration, axis = 1)
authors_df['average authors'] = authors_df.apply(average_during_duration, axis = 1)
domains_df['total domains'] = domains_df.apply(total_during_duration, axis = 1)
domains_df['average domains'] = domains_df.apply(average_during_duration, axis = 1)
sources_df['total sources'] = sources_df.apply(total_during_duration, axis = 1)
sources_df['average sources'] = sources_df.apply(average_during_duration, axis = 1)
###Output
_____no_output_____
###Markdown
Recombine data frame.
###Code
## Combine the data into a single data frame
combine_df = posts_df[['0 posts', '1 posts', '2 posts', 'total posts', 'average posts']]
combine_df = pd.concat([combine_df, authors_df[['0 authors', '1 authors', '2 authors', 'total authors', 'average authors']]], axis = 1)
combine_df = pd.concat([combine_df, domains_df[['0 domains', '1 domains', '2 domains', 'total domains', 'average domains']]], axis = 1)
combine_df = pd.concat([combine_df, sources_df[['0 sources', '1 sources', '2 sources', 'total sources', 'average sources']]], axis = 1)
combine_df = pd.concat([combine_df, posts_df[['duration']]], axis = 1)
###Output
_____no_output_____
###Markdown
Turn query name into event name and event type. Also removes things like 'AND NOT text.exact' as well as the marker for different 'iterations' of the same event, e.g. 'ykk*saamu_0' and 'ykk*saamu_1' both become 'ykk*saamu'. Possible event types are hashtag and substantive (i.e. keyword).
###Code
def split_event_name_and_type(columns):
query = columns['query']
split = query.split(':')
event_type = split[0].split('.')[1]
event_name = split[1]
event_name = event_name.split(' ')[0]
event_name = event_name.rsplit('_')[0]
return pd.Series({'event name': event_name, 'event type': event_type})
combine_df = combine_df.reset_index()
combine_df[['event name', 'event type']] = combine_df.reset_index().apply(split_event_name_and_type, axis = 1)
###Output
_____no_output_____
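###Markdown
A quick check of the parsing with a hypothetical query string (the exact query syntax is an assumption here, modelled on the examples above):
###Code
toy_query = pd.Series({'query': 'text.hashtag:ykk*saamu_1 AND NOT text.exact:spam'})
toy_parsed = split_event_name_and_type(toy_query)
assert toy_parsed['event name'] == 'ykk*saamu'
assert toy_parsed['event type'] == 'hashtag'
###Output
_____no_output_____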
###Markdown
The next three blocks wrangle the data frame, show it for inspection, and write it to a file.
###Code
combine_df = combine_df.drop('query', axis = 1)
event_names = combine_df['event name']
event_types = combine_df['event type']
combine_df = combine_df.drop(['event name', 'event type'], axis = 1)
combine_df.insert(0, 'event type', event_types)
combine_df.insert(0, 'event name', event_names)
combine_df = combine_df.set_index('event name')
combine_df
## And output it. Remember to set the proper file name!
combine_df.to_csv('data/csv/event_summary.csv')
###Output
_____no_output_____ |
drawing_conclusions_solutions.ipynb | ###Markdown
Q1: Are more unique models using alternative sources of fuel? By how much?
###Code
df_08.fuel.value_counts()
df_18.fuel.value_counts()
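# note: the two data sets label alternative fuels differently
# (2008: "CNG" / "ethanol"; 2018: "Ethanol" / "Electricity"),
# so the queries below use each year's own labels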
# how many unique models used alternative sources of fuel in 2008
alt_08 = df_08.query('fuel in ["CNG", "ethanol"]').model.nunique()
alt_08
# how many unique models used alternative sources of fuel in 2018
alt_18 = df_18.query('fuel in ["Ethanol","Electricity"]').model.nunique()
alt_18
plt.bar(["2008","2018"],[alt_08,alt_18])
plt.title("Number of Unique Models Using Alternative Fuels")
plt.xlabel("Year")
plt.ylabel("Number of Unique Models");
# total unique models each year
total_08 = df_08.model.nunique()
total_18 = df_18.model.nunique()
total_08, total_18
prop_08 = alt_08/total_08
prop_18 = alt_18/total_18
prop_08, prop_18
plt.bar(["2008", "2018"], [prop_08, prop_18])
plt.title("Proportion of Unique Models Using Alternative Fuels")
plt.xlabel("Year")
plt.ylabel("Proportion of Unique Models");
###Output
_____no_output_____
###Markdown
Q2: How much have vehicle classes improved in fuel economy?
###Code
veh_08 = df_08.groupby('veh_class').cmb_mpg.mean()
veh_08
veh_18 = df_18.groupby('veh_class').cmb_mpg.mean()
veh_18
# how much they've increased by for each vehicle class
inc = veh_18 - veh_08
inc
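# vehicle classes that appear in only one of the two years give NaN differences,
# so drop them before plotting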
inc.dropna(inplace=True)
plt.subplots(figsize=(8, 5))
plt.bar(inc.index, inc)
plt.title('Improvements in Fuel Economy from 2008 to 2018 by Vehicle Class')
plt.xlabel('Vehicle Class')
plt.ylabel('Increase in Average Combined MPG');
###Output
_____no_output_____ |
table_of_contents.ipynb | ###Markdown
Kalman and Bayesian Filters in Python Table of Contents[**Preface**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/00-Preface.ipynb) Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.[**Chapter 1: The g-h Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/01-g-h-filter.ipynb)Intuitive introduction to the g-h filter, also known as the $\alpha$-$\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. [**Chapter 2: The Discrete Bayes Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/02-Discrete-Bayes.ipynb)Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.[**Chapter 3: Gaussian Probabilities**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/03-Gaussians.ipynb)Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.[**Chapter 4: One Dimensional Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/04-One-Dimensional-Kalman-Filters.ipynb)Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. [**Chapter 5: Multivariate Gaussians**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/05-Multivariate-Gaussians.ipynb)Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates.[**Chapter 6: Multivariate Kalman Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/06-Multivariate-Kalman-Filters.ipynb)We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.[**Chapter 7: Kalman Filter Math**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/07-Kalman-Filter-Math.ipynb)We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. [**Chapter 8: Designing Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/08-Designing-Kalman-Filters.ipynb)Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. 
Discusses, but does not solve issues like numerical stability.[**Chapter 9: Nonlinear Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/09-Nonlinear-Filtering.ipynb)Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.[**Chapter 10: Unscented Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/10-Unscented-Kalman-Filter.ipynb)Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.[**Chapter 11: Extended Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/11-Extended-Kalman-Filters.ipynb)Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. [**Chapter 12: Particle Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/12-Particle-Filters.ipynb)Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.[**Chapter 13: Smoothing**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/13-Smoothing.ipynb)Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.[**Chapter 14: Adaptive Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/14-Adaptive-Filtering.ipynb) Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.[**Appendix A: Installation, Python, NumPy, and FilterPy**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-A-Installation.ipynb)Brief introduction of Python and how it is used in this book. Description of the companionlibrary FilterPy. [**Appendix B: Symbols and Notations**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-B-Symbols-and-Notations.ipynb)Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. 
I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.*Still just a collection of notes at this point.*[**Appendix D: H-Infinity Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-D-HInfinity-Filters.ipynb) Describes the $H_\infty$ filter. *I have code that implements the filter, but no supporting text yet.*[**Appendix E: Ensemble Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-E-Ensemble-Kalman-Filters.ipynb)Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.[**Appendix F: FilterPy Source Code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-F-Filterpy-Code.ipynb)Listings of important classes from FilterPy that are used in this book. Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that might be interested to a subest of readers.[**Computing and plotting PDFs**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)Describes how I implemented the plotting of various pdfs in the book.[**Interactions**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Interactions.ipynb)Interactive simulations of various algorithms. Use sliders to change the output in real time.[**Converting the Multivariate Equations to the Univariate Case**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.ipynb)Demonstrates that the Multivariate equations are identical to the univariate Kalman filter equations by setting the dimension of all vectors and matrices to one.[**Iterative Least Squares for Sensor Fusion**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)Deep dive into using an iterative least squares technique to solve the nonlinear problem of finding position from multiple GPS pseudorange measurements.[**Taylor Series**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Taylor-Series.ipynb)A very brief introduction to Taylor series. Github repositoryhttp://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Kalman and Bayesian Filters in Python Table of Contents[**Preface**](./00-Preface.ipynb) Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.[**Chapter 1: The g-h Filter**](./01-g-h-filter.ipynb)Intuitive introduction to the g-h filter, also known as the $\alpha$-$\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. [**Chapter 2: The Discrete Bayes Filter**](./02-Discrete-Bayes.ipynb)Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.[**Chapter 3: Probabilities, Gaussians, and Bayes' Theorem**](./03-Gaussians.ipynb)Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.[**Chapter 4: One Dimensional Kalman Filters**](./04-One-Dimensional-Kalman-Filters.ipynb)Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. [**Chapter 5: Multivariate Gaussians**](./05-Multivariate-Gaussians.ipynb)Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates.[**Chapter 6: Multivariate Kalman Filter**](./06-Multivariate-Kalman-Filters.ipynb)We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.[**Chapter 7: Kalman Filter Math**](./07-Kalman-Filter-Math.ipynb)We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. [**Chapter 8: Designing Kalman Filters**](./08-Designing-Kalman-Filters.ipynb)Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. Discusses, but does not solve issues like numerical stability.[**Chapter 9: Nonlinear Filtering**](./09-Nonlinear-Filtering.ipynb)Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.[**Chapter 10: Unscented Kalman Filters**](./10-Unscented-Kalman-Filter.ipynb)Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. 
I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.[**Chapter 11: Extended Kalman Filters**](./11-Extended-Kalman-Filters.ipynb)Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. [**Chapter 12: Particle Filters**](./12-Particle-Filters.ipynb)Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.[**Chapter 13: Smoothing**](./13-Smoothing.ipynb)Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.[**Chapter 14: Adaptive Filtering**](./14-Adaptive-Filtering.ipynb) Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.[**Appendix A: Installation, Python, NumPy, and FilterPy**](./Appendix-A-Installation.ipynb)Brief introduction of Python and how it is used in this book. Description of the companionlibrary FilterPy. [**Appendix B: Symbols and Notations**](./Appendix-B-Symbols-and-Notations.ipynb)Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.*Still just a collection of notes at this point.*[**Appendix D: H-Infinity Filters**](./Appendix-D-HInfinity-Filters.ipynb) Describes the $H_\infty$ filter. *I have code that implements the filter, but no supporting text yet.*[**Appendix E: Ensemble Kalman Filters**](./Appendix-E-Ensemble-Kalman-Filters.ipynb)Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.[**Appendix F: FilterPy Source Code**](./Appendix-F-Filterpy-Code.ipynb)Listings of important classes from FilterPy that are used in this book. Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that might be interested to a subest of readers.[**Computing and plotting PDFs**](./Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)Describes how I implemented the plotting of various pdfs in the book.[**Interactions**](./Supporting_Notebooks/Interactions.ipynb)Interactive simulations of various algorithms. 
Use sliders to change the output in real time.[**Converting the Multivariate Equations to the Univariate Case**](./Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.ipynb)Demonstrates that the Multivariate equations are identical to the univariate Kalman filter equations by setting the dimension of all vectors and matrices to one.[**Iterative Least Squares for Sensor Fusion**](./Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)Deep dive into using an iterative least squares technique to solve the nonlinear problem of finding position from multiple GPS pseudorange measurements.[**Taylor Series**](./Supporting_Notebooks/Taylor-Series.ipynb)A very brief introduction to Taylor series. Github repositoryhttp://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Python实现卡尔曼和贝叶斯滤波器 目录[**前言**](./00-Preface.ipynb) 写书的动机。如何下载和阅读这本书。对IPython笔记本和Python的要求。github链接。[**第一章 : g-h Filter**](./01-g-h-filter.ipynb) 直观介绍g-h滤波器,也称为$\alpha$-$\beta$ 滤波器,这是一个滤波器家族,包括卡尔曼滤波器。一旦你理解了这一章,你就会理解卡尔曼滤波器背后的概念。[**第二章: 离散贝叶斯滤波器**](./02-Discrete-Bayes.ipynb)介绍离散贝叶斯滤波器。从这里,您将学习以一种容易理解的形式支持卡尔曼滤波器的概率(贝叶斯)推理。[**第三章: 概率,高斯和贝叶斯定理**](./03-Gaussians.ipynb)介绍在贝叶斯意义上使用高斯来表示信念。高斯函数允许我们实现在连续域中使用的离散贝叶斯滤波器的算法。[**第四章: 一维卡尔曼滤波器**](./04-One-Dimensional-Kalman-Filters.ipynb)通过将离散贝叶斯滤波器修改为高斯滤波器来实现卡尔曼滤波器。这是一个功能齐全的卡尔曼滤波器,尽管只对一维问题有用。 [**第五章: 多元高斯模型**](./05-Multivariate-Gaussians.ipynb)将高斯函数扩展到多个维度,并演示了“三角剖分”和隐藏变量如何极大地改进估计[**第六章:多元卡尔曼滤波**](./06-Multivariate-Kalman-Filters.ipynb)我们将在单变量一章中发展的卡尔曼滤波器推广到线性问题的全广义滤波器。读完这篇文章后,你将了解卡尔曼滤波器如何工作,以及如何设计和实现一个(线性)问题的选择。[**第七章:卡尔曼滤波数学**](./07-Kalman-Filter-Math.ipynb)在没有形成坚实的数学基础的情况下,我们已经做了很多了。这一章是可选的,特别是第一次,但如果你打算写稳健的,数值稳定的过滤器,或阅读文献,你将需要了解这一章的材料。为了理解后面关于非线性滤波的章节,将需要一些章节。 [**第八章:卡尔曼滤波器的设计**](./08-Designing-Kalman-Filters.ipynb)在第5章和第6章的基础上,介绍了几个卡尔曼滤波器的设计。只有通过看几个不同的例子,你才能真正掌握所有的理论。选择的例子是现实的,而不是“玩具”问题,让你开始实现自己的过滤器。讨论,但不解决数值稳定性等问题。[**第九章:非线性滤波**](./09-Nonlinear-Filtering.ipynb)卡尔曼滤波器仅适用于线性问题。然而,世界是非线性的。在这里,我介绍非线性系统对滤波器提出的问题,并简要讨论我们将在后续章节中学习的各种算法。[**第十章:无迹卡尔曼滤波器**](./10-Unscented-Kalman-Filter.ipynb)无迹卡尔曼滤波器(UKF)是卡尔曼滤波理论的最新发展。它们允许你过滤非线性问题,而不需要像扩展卡尔曼滤波器那样需要封闭形式的解决方案。这个话题通常不是没有提到,就是在现有的文本中被掩盖了,扩展卡尔曼滤波器接受了大量的讨论。我把它放在第一位是因为UKF更容易理解和实现,而且滤波性能通常与扩展卡尔曼滤波器一样好,甚至更好。我总是尝试先实现UKF来解决实际问题,你也应该这样做[**第十一章:扩展卡尔曼滤波器**](./11-Extended-Kalman-Filters.ipynb)扩展卡尔曼滤波器(EKF)是线性化非线性问题最常用的方法。现实世界中的大多数卡尔曼滤波器都是ekf,因此需要理解这些材料来理解现有的代码、论文、演讲等[**第十二章:粒子滤波器**](./12-Particle-Filters.ipynb)粒子滤波器使用蒙特卡罗技术来过滤数据。它们很容易处理高度非线性和非高斯系统,以及多模态分布(同时跟踪多个目标),但代价是高计算要求[**第十三章:平滑**](./13-Smoothing.ipynb)卡尔曼滤波器是递归的,因此非常适合于实时滤波。然而,它们在后期处理数据时工作得非常好。毕竟,卡尔曼滤波器是预测-校正器,预测过去比预测未来更容易!我们讨论一些常见的方法。[**第十四章:自适应滤波**](./14-Adaptive-Filtering.ipynb) 卡尔曼滤波器假定一个单一的过程模型,但是操纵目标通常需要由几个不同的过程模型来描述。自适应滤波使用几种技术,使卡尔曼滤波适应目标的变化行为。[**附录A:安装、Python、NumPy和FilterPy**](./Appendix-A-Installation.ipynb)本书简要介绍了Python及其使用方法。配套库FilterPy的描述。 [**附录B:符号和符号**](./Appendix-B-Symbols-and-Notations.ipynb)大多数书选择使用不同的符号和变量名来表示相同的概念。当你刚开始学习时,这是一个很大的障碍。我收集了本书中使用的符号和符号,并建立了表格,显示该领域的主要书籍使用的符号和名称。*在这一点上仍然只是一个笔记的收集.*[**附录D: h -无限滤波器**](./Appendix-D-HInfinity-Filters.ipynb) 介绍了 $H_\infty$ 滤波器. *我有实现过滤器的代码,但还没有支持文本*[**附录E:集合卡尔曼滤波器**](./Appendix-E-Ensemble-Kalman-Filters.ipynb)讨论了集合卡尔曼滤波器,它使用蒙特卡罗方法来处理非线性系统中非常大的卡尔曼滤波器状态。[**附录F: FilterPy源代码**](./Appendix-F-Filterpy-Code.ipynb)本书中使用的FilterPy中重要类的清单。 支持笔记这些笔记不是书的主要部分,但包含了可能会对一部分读者感兴趣的信息。[**计算和绘制pdfs**](./Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)描述我如何在书中实现各种pdf的绘图。[**交互**](./Supporting_Notebooks/Interactions.ipynb)各种算法的交互式仿真。使用滑块实时更改输出。[**将多元方程转换为单变量情况**](./Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.ipynb)通过将所有向量和矩阵的维数设为1,证明多元方程与一元卡尔曼滤波方程是相同的。[**传感器融合的迭代最小二乘**](./Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)深入研究利用迭代最小二乘技术解决从多个GPS伪距测量中寻找位置的非线性问题。[**泰勒系列,泰勒级数**](./Supporting_Notebooks/Taylor-Series.ipynb)简单介绍一下泰勒级数。 Github 仓库http://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Table of contents **Course instructor: **Ivan Oseledets** TAs:**Maxim Rakhuba,Marina Munkhoeva,Alexandr Katrutsa,Artem Nikitin,Valentin Khrulkov | Week | Classes | Homework | Tests ||------|----------|----------|-------||1| [General info](lectures/lecture-0.ipynb), [Python Crash Course](lectures/Python_Intro.ipynb), [Lecture 1 (Floating point, vector norms)](lectures/lecture-1.ipynb), [Lecture 2 (Memory hierarchy, matrix multiplication, Strassen algorithm)](lectures/lecture-2.ipynb), [Lecture 3 (Matrix norms, Unitary matrices, QR via Housholder and Givens)](lectures/lecture-3.ipynb) | [Requirements](psets/rules_hw.pdf), [Problem set 1](psets/pset1.ipynb) | |2| [Lecture 4 (Matrix rank, skeleton decomposition, SVD)](lectures/lecture-4.ipynb) [Lecture 5 (LU decomposition, least squares problem)](lectures/lecture-5.ipynb) ||3| [Lecture 6 (Eigendecomposition, Power method, Schur decomposition)](lectures/lecture-6.ipynb) [Lecture 7 (More about the QR decomp, QR algorithm)](lectures/lecture-7.ipynb) [Lecture 8 (More about QR algorithm, eigendecomposition algorithms, SVD algorithms)](lectures/lecture-8.ipynb) | [Problem set 2](psets/pset2.ipynb) ||4| [Lecture 9 (Sparse linear algebra part 1)](lectures/lecture-9.ipynb) [Lecture 10 (Sparse linear algebra part 2, Iterative methods part 1)](lectures/lecture-10.ipynb) [Lecture 11 (Iterative methods part 2 (CG, GMRES))](lectures/lecture-11.ipynb) ||5| [Lecture 12 (More Krylov methods + preconditioning)](lectures/lecture-12.ipynb) [Lecture 13 (Iterative methods for eigenvalues)](lectures/lecture-13.ipynb) Midterm test | [Problem set 3](psets/pset3.ipynb) |[Midterm test rules](midterm.pdf) ||6| [Lecture 14 (Structured matrices: circulants, Toeplitz matrices, Fourier transform)](lectures/lecture-14.ipynb) [Lecture 15 (Matrix functions, matrix equations)](lectures/lecture-15.ipynb) [Lecture 16 (Tensor decompositions)](lectures/lecture-16.ipynb) ||7| Q&A before the exam Oral exam: [Thursday list of students](https://d1b10bmlvqabco.cloudfront.net/attach/iuiaquv3t0y6vy/i0i5wwfaund4jr/iw3je3a4sxcn/Exam_list_Thursday.pdf) Oral exam: [Friday list of students](https://d1b10bmlvqabco.cloudfront.net/attach/iuiaquv3t0y6vy/i0i5wwfaund4jr/iw4xxllv549h/Exam_list_Friday.pdf) || [List of exam questions](exam/exam_questions.pdf),[Basics](exam/program_min.pdf)|8| Friday: application period presentations||9| Reexamination
###Code
from IPython.core.display import HTML
def css_styling():
styles = open("lectures/styles/custom.css", "r").read()
return HTML(styles)
css_styling()
###Output
_____no_output_____
###Markdown
Kalman and Bayesian Filters in Python Table of Contents[**Preface**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/00-Preface.ipynb) Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.[**Chapter 1: The g-h Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/01-g-h-filter.ipynb)Intuitive introduction to the g-h filter, also known as the $\alpha$-$\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. [**Chapter 2: The Discrete Bayes Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/02-Discrete-Bayes.ipynb)Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.[**Chapter 3: Probabilities, Gaussians, and Bayes' Theorem**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/03-Gaussians.ipynb)Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.[**Chapter 4: One Dimensional Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/04-One-Dimensional-Kalman-Filters.ipynb)Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. [**Chapter 5: Multivariate Gaussians**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/05-Multivariate-Gaussians.ipynb)Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates.[**Chapter 6: Multivariate Kalman Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/06-Multivariate-Kalman-Filters.ipynb)We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.[**Chapter 7: Kalman Filter Math**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/07-Kalman-Filter-Math.ipynb)We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. [**Chapter 8: Designing Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/08-Designing-Kalman-Filters.ipynb)Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. 
Discusses, but does not solve issues like numerical stability.[**Chapter 9: Nonlinear Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/09-Nonlinear-Filtering.ipynb)Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.[**Chapter 10: Unscented Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/10-Unscented-Kalman-Filter.ipynb)Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.[**Chapter 11: Extended Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/11-Extended-Kalman-Filters.ipynb)Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. [**Chapter 12: Particle Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/12-Particle-Filters.ipynb)Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.[**Chapter 13: Smoothing**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/13-Smoothing.ipynb)Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.[**Chapter 14: Adaptive Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/14-Adaptive-Filtering.ipynb) Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.[**Appendix A: Installation, Python, NumPy, and FilterPy**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-A-Installation.ipynb)Brief introduction of Python and how it is used in this book. Description of the companionlibrary FilterPy. [**Appendix B: Symbols and Notations**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-B-Symbols-and-Notations.ipynb)Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. 
I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.*Still just a collection of notes at this point.*[**Appendix D: H-Infinity Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-D-HInfinity-Filters.ipynb) Describes the $H_\infty$ filter. *I have code that implements the filter, but no supporting text yet.*[**Appendix E: Ensemble Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-E-Ensemble-Kalman-Filters.ipynb)Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.[**Appendix F: FilterPy Source Code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-F-Filterpy-Code.ipynb)Listings of important classes from FilterPy that are used in this book. Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that might be interested to a subest of readers.[**Computing and plotting PDFs**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)Describes how I implemented the plotting of various pdfs in the book.[**Interactions**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Interactions.ipynb)Interactive simulations of various algorithms. Use sliders to change the output in real time.[**Converting the Multivariate Equations to the Univariate Case**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.ipynb)Demonstrates that the Multivariate equations are identical to the univariate Kalman filter equations by setting the dimension of all vectors and matrices to one.[**Iterative Least Squares for Sensor Fusion**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)Deep dive into using an iterative least squares technique to solve the nonlinear problem of finding position from multiple GPS pseudorange measurements.[**Taylor Series**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Taylor-Series.ipynb)A very brief introduction to Taylor series. Github repositoryhttp://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Kalman and Bayesian Filters in Python Table of Contents[**Preface**](./00-Preface.ipynb) Explains why this book was written, how to download and use it, the installation requirements for Python and related tools, and the GitHub links.[**Chapter 1: The g-h Filter**](./01-g-h-filter.ipynb)An intuitive introduction to the g-h filter. Also known as the $\alpha$-$\beta$ filter, it is not one specific filter but a family of mathematical filters that includes the Kalman filter. Once you understand this chapter you will understand the basic concepts and ideas behind the Kalman filter.[**Chapter 2: The Discrete Bayes Filter**](./02-Discrete-Bayes.ipynb)The discrete Bayes filter offers an easy way to understand the probabilistic reasoning that underpins the Kalman filter. [**Chapter 3: Probabilities, Gaussians, and Bayes' Theorem**](./03-Gaussians.ipynb)Introduces how to express beliefs (degrees of confidence) in the Bayesian sense using Gaussians. Gaussians let us adapt the discrete Bayes filter for use in continuous state spaces.[**Chapter 4: One Dimensional Kalman Filters**](./04-One-Dimensional-Kalman-Filters.ipynb) We modify the discrete Bayes filter to build a simple one-dimensional Kalman filter. It only works in one dimension, but it is nonetheless a proper Kalman filter.[**Chapter 5: Multivariate Gaussians**](./05-Multivariate-Gaussians.ipynb)Extends the Gaussian distribution to multiple dimensions and shows how "hidden variables" can help with state estimation. [**Chapter 6: Multivariate Kalman Filter**](./06-Multivariate-Kalman-Filters.ipynb)Extends the one-dimensional Kalman filter into the generalized filter for linear problems. After reading this chapter you will understand how a Kalman filter works and be able to design one for a linear problem. [**Chapter 7: Kalman Filter Math**](./07-Kalman-Filter-Math.ipynb)Up to Chapter 6 we got by without rigorous mathematics. This chapter is optional, but if you want to use Kalman filters properly in engineering practice or read the literature, you will need the mathematics presented here. Some of it is also needed for the nonlinear filtering chapters.[**Chapter 8: Designing Kalman Filters**](./08-Designing-Kalman-Filters.ipynb)Assuming you have understood Chapters 5 and 6, this chapter walks through the design of several Kalman filters. Only by seeing several examples can you really grasp the theory. These are not unrealistic toy problems but genuinely useful ones, and you can adapt them to build the filters you need. Numerical stability is also touched on briefly.[**Chapter 9: Nonlinear Filtering**](./09-Nonlinear-Filtering.ipynb)The Kalman filters so far only work for linear problems. Unfortunately, the world is nonlinear. This chapter looks at the problems nonlinearity causes for the filter and briefly introduces the algorithms covered in the following chapters.[**Chapter 10: Unscented Kalman Filters**](./10-Unscented-Kalman-Filter.ipynb)The Unscented Kalman Filter is a comparatively recent development that enables nonlinear filtering without computing a closed form solution as the Extended Kalman filter does.The UKF is rarely covered in textbooks, but because it is easier to understand and implement it is placed before the Extended Kalman filter in this book (its performance is often even better!). Whenever I need a Kalman filter for a serious problem I always try the UKF first, and I recommend you do the same.[**Chapter 11: Extended Kalman Filters**](./11-Extended-Kalman-Filters.ipynb)The Extended Kalman Filter is the most widespread approach to nonlinear problems. Many algorithms and programs run on top of the EKF, so you will need this chapter to understand them.[**Chapter 12: Particle Filters**](./12-Particle-Filters.ipynb)Particle filters use Monte Carlo techniques to filter data. They work very well for nonlinear problems and for problems that are hard to model as Gaussian, and they can handle a wide range of distributions, but at the cost of high computational complexity. [**Chapter 13: Smoothing**](./13-Smoothing.ipynb)Kalman filters are recursive and therefore well suited to real-time filtering, but they also work extremely well for post-processing data. After all, a Kalman filter is a predictor-corrector, and the past is easier to predict than the future.[**Chapter 14: Adaptive Filtering**](./14-Adaptive-Filtering.ipynb) A Kalman filter normally uses a single process model (a model describing how the system behaves), but tasks such as tracking a maneuvering object require several models to describe the system. Adaptive filtering uses a few techniques to modify the Kalman filter so that it can adapt to the changing behavior of the system.[**Appendix A: Installation, Python, NumPy, and FilterPy**](./Appendix-A-Installation.ipynb) A brief description of Python and the companion library FilterPy. [**Appendix B: Symbols and Notations**](./Appendix-B-Symbols-and-Notations.ipynb)Every book uses different symbols and notation, which can be a real headache. 
For beginners in particular it is practically a barrier.The symbols and notation used in this book are collected here, along with how the other major books in the field express them, so you can see what corresponds to what.*Work In Progress.*[**Appendix D: H-Infinity Filters**](./Appendix-D-HInfinity-Filters.ipynb) A brief description of the $H_\infty$ filter.*The implementation exists, but there is no explanatory text yet.*[**Appendix E: Ensemble Kalman Filters**](./Appendix-E-Ensemble-Kalman-Filters.ipynb)A Monte Carlo approach for using Kalman filters in nonlinear systems with very large state spaces.[**Appendix F: FilterPy Source Code**](./Appendix-F-Filterpy-Code.ipynb)A collection of the important FilterPy classes. Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that may be useful to interested readers.[**Computing and plotting PDFs**](./Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)How the probability density functions in the book are computed and plotted.[**Interactions**](./Supporting_Notebooks/Interactions.ipynb)A collection of simple interactive simulations of the algorithms.[**Converting the Multivariate Equations to the Univariate Case**](./Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.ipynb)Shows that the multivariate and univariate Kalman filter equations are identical by reducing the dimensions of the vectors and matrices to one.[**Iterative Least Squares for Sensor Fusion**](./Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)(Advanced) An example of estimating the true position from multiple GPS measurements using iterative least squares.[**Taylor Series**](./Supporting_Notebooks/Taylor-Series.ipynb)A brief introduction to Taylor series. Github repositoryhttp://github.com/wjdghksdl26/Kalman-and-Bayesian-Filters-in-Python/tree/translation_KR
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Kalman and Bayesian Filters in Python 卡爾曼與貝葉斯濾波器的Python實現 Table of Contents 目錄 [**Preface**](./00-Preface.html) Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links. [**前言(未翻譯)**](./00-Preface.html) 介紹了寫作本書的動機,下載和閱讀本書的方法,以及對IPython Notebook和Python的要求。給出了本書的Github鏈接。 [**Chapter 1: The g-h Filter**](./01-g-h-filter.html)Intuitive introduction to the g-h filter, also known as the $\alpha$-$\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. [**第一章:g-h濾波器(未翻譯)**](./01-g-h-filter.html)對g-h濾波器,又名$\alpha$-$\beta$濾波器的直觀介紹。g-h濾波器是包含卡爾曼濾波器在內的一個大家族。只要讀懂這一章,你就能領會卡爾曼濾波器背後的設計思想。 [**Chapter 2: The Discrete Bayes Filter**](./02-Discrete-Bayes.html)Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form. [**第二章:離散貝葉斯濾波器**](./02-Discrete-Bayes.html)介紹了離散貝葉斯濾波器。這一章,你能通過一種易於消化的方式學會概率(貝葉斯)推理方法。貝葉斯推理是卡爾曼濾波器的理論基礎。 [**Chapter 3: Probabilities, Gaussians, and Bayes' Theorem**](./03-Gaussians.html)Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains. [**第三章:概率分佈、高斯分佈、以及貝葉斯定理**](./03-Gaussians.html)介紹如何用高斯分佈表示貝葉斯推理中的信念。高斯分佈允許我們將離散貝葉斯濾波器中所用的那套方法移到連續分佈的領域中來。 [**Chapter 4: One Dimensional Kalman Filters**](./04-One-Dimensional-Kalman-Filters.html)Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. [**第四章:一維卡爾曼濾波器**](./04-One-Dimensional-Kalman-Filters.html)通過在離散貝葉斯濾波器中應用高斯分佈,實現了卡爾曼濾波器。這是一個完整的卡爾曼濾波器,不過它是1維的。 以下文章均未翻譯完全。 [**Chapter 5: Multivariate Gaussians**](./05-Multivariate-Gaussians.html)Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates. [**第五章:多變量高斯分佈**](./05-Multivariate-Gaussians.html)將一維高斯分佈推廣到多維,展示了“三角測量法”以及隱變量對於準確估計的幫助。 [**Chapter 6: Multivariate Kalman Filter**](./06-Multivariate-Kalman-Filters.html)We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.[**Chapter 7: Kalman Filter Math**](./07-Kalman-Filter-Math.html)We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. [**Chapter 8: Designing Kalman Filters**](./08-Designing-Kalman-Filters.html)Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. Discusses, but does not solve issues like numerical stability.[**Chapter 9: Nonlinear Filtering**](./09-Nonlinear-Filtering.html)Kalman filters as covered only work for linear problems. Yet the world is nonlinear. 
Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.[**Chapter 10: Unscented Kalman Filters**](./10-Unscented-Kalman-Filter.html)Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.[**Chapter 11: Extended Kalman Filters**](./11-Extended-Kalman-Filters.html)Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. [**Chapter 12: Particle Filters**](./12-Particle-Filters.html)Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.[**Chapter 13: Smoothing**](./13-Smoothing.html)Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.[**Chapter 14: Adaptive Filtering**](./14-Adaptive-Filtering.html) Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.[**Appendix A: Installation, Python, NumPy, and FilterPy**](./Appendix-A-Installation.html)Brief introduction of Python and how it is used in this book. Description of the companionlibrary FilterPy. [**Appendix B: Symbols and Notations**](./Appendix-B-Symbols-and-Notations.html)Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.*Still just a collection of notes at this point.*[**Appendix D: H-Infinity Filters**](./Appendix-D-HInfinity-Filters.html) Describes the $H_\infty$ filter. *I have code that implements the filter, but no supporting text yet.*[**Appendix E: Ensemble Kalman Filters**](./Appendix-E-Ensemble-Kalman-Filters.html)Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.[**Appendix F: FilterPy Source Code**](./Appendix-F-Filterpy-Code.html)Listings of important classes from FilterPy that are used in this book. 
Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that might be interested to a subest of readers.[**Computing and plotting PDFs**](./Supporting_Notebooks/Computing_and_plotting_PDFs.html)Describes how I implemented the plotting of various pdfs in the book.[**Interactions**](./Supporting_Notebooks/Interactions.html)Interactive simulations of various algorithms. Use sliders to change the output in real time.[**Converting the Multivariate Equations to the Univariate Case**](./Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.html)Demonstrates that the Multivariate equations are identical to the univariate Kalman filter equations by setting the dimension of all vectors and matrices to one.[**Iterative Least Squares for Sensor Fusion**](./Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.html)Deep dive into using an iterative least squares technique to solve the nonlinear problem of finding position from multiple GPS pseudorange measurements.[**Taylor Series**](./Supporting_Notebooks/Taylor-Series.html)A very brief introduction to Taylor series. Github repositoryhttp://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Fast methods for partial differential and integral equations **Course instructor: **Ivan Oseledets** TAs:**Maxim Rakhuba, Alexander Katrutsa, Alexey Boyko | Week | Classes | Homework | Tests ||:------:|:---------|:----------|-------||1| Day 1: [Lecture 1 (Intro)](lectures/Lecture-1.ipynb), [Course rules and syllabus](lectures/PDE_start.ipynb) Day 2: [Lecture 2 (Discretization of IE)](lectures/Lecture-2.ipynb) Day 3: [Lecture 3 (Fast matvec: FFT, pFFT)](lectures/Lecture-3.ipynb) | Read [rules!](psets/hw_rules.pdf) [Problem set 1](psets/PS1.ipynb) (Deadline 18/09/17) Project proposal (Deadline 28/09/17) [Proposal form](fastpdes_projects.pdf)| |2 | Day 1: Homework 1 Q & A Day 2: [BEM++ overview](lectures/bempp_overview.ipynb) and [example](lectures/laplace_interior_dirichlet_original.ipynb) Day 3: [FEniCS overview](lectures/fenics_overview.ipynb) | | | |3| Day 1: [Lecture 4 ($N$-body problem, Barnes-Hut)](lectures/Lecture-4.ipynb) Day 2: [Lecture 5 (Fast Multipole Method (FMM))](lectures/Lecture-5.ipynb) | [Problem set 2](psets/PS2.ipynb) (Deadline 02/10/17)|4| Day 1: [Lecture 6 (Discretization of PDEs and sparse matrices)](./lectures/Lecture-6.ipynb) Day 2: [Lecture 7 (Sparse solvers)](./lectures/Lecture-7.ipynb) | | | |5| Day 1: [Lecture 8 (The Multigrid)](lectures/Lecture-8.ipynb) Day 2: [Lecture 9 (Domain decomposition)](lectures/Lecture-9.ipynb) | [Problem set 3](./psets/PS3.ipynb) (Deadline 16/10/17) ||6| Day 1: [Lecture 10 (Intro to isogeometric analysis)](lectures/Lecture-10.ipynb) | | |
###Code
from IPython.core.display import HTML
def css_styling():
styles = open("lectures/styles/custom.css", "r").read()
return HTML(styles)
css_styling()
###Output
_____no_output_____
###Markdown
Kalman and Bayesian Filters in Python Table of Contents[**Preface**](./00-Preface.ipynb) Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.[**Chapter 1: The g-h Filter**](./01-g-h-filter.ipynb)Intuitive introduction to the g-h filter, also known as the $\alpha$-$\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. [**Chapter 2: The Discrete Bayes Filter**](./02-Discrete-Bayes.ipynb)Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.[**Chapter 3: Probabilities, Gaussians, and Bayes' Theorem**](./03-Gaussians.ipynb)Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.[**Chapter 4: One Dimensional Kalman Filters**](./04-One-Dimensional-Kalman-Filters.ipynb)Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. [**Chapter 5: Multivariate Gaussians**](./05-Multivariate-Gaussians.ipynb)Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates.[**Chapter 6: Multivariate Kalman Filter**](./06-Multivariate-Kalman-Filters.ipynb)We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.[**Chapter 7: Kalman Filter Math**](./07-Kalman-Filter-Math.ipynb)We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. [**Chapter 8: Designing Kalman Filters**](./08-Designing-Kalman-Filters.ipynb)Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. Discusses, but does not solve issues like numerical stability.[**Chapter 9: Nonlinear Filtering**](./09-Nonlinear-Filtering.ipynb)Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.[**Chapter 10: Unscented Kalman Filters**](./10-Unscented-Kalman-Filter.ipynb)Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. 
I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.[**Chapter 11: Extended Kalman Filters**](./11-Extended-Kalman-Filters.ipynb)Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. [**Chapter 12: Particle Filters**](./12-Particle-Filters.ipynb)Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.[**Chapter 13: Smoothing**](./13-Smoothing.ipynb)Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.[**Chapter 14: Adaptive Filtering**](./14-Adaptive-Filtering.ipynb) Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.[**Appendix A: Installation, Python, NumPy, and FilterPy**](./Appendix-A-Installation.ipynb)Brief introduction of Python and how it is used in this book. Description of the companionlibrary FilterPy. [**Appendix B: Symbols and Notations**](./Appendix-B-Symbols-and-Notations.ipynb)Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.*Still just a collection of notes at this point.*[**Appendix D: H-Infinity Filters**](./Appendix-D-HInfinity-Filters.ipynb) Describes the $H_\infty$ filter. *I have code that implements the filter, but no supporting text yet.*[**Appendix E: Ensemble Kalman Filters**](./Appendix-E-Ensemble-Kalman-Filters.ipynb)Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.[**Appendix F: FilterPy Source Code**](./Appendix-F-Filterpy-Code.ipynb)Listings of important classes from FilterPy that are used in this book. Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that might be interested to a subest of readers.[**Computing and plotting PDFs**](./Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)Describes how I implemented the plotting of various pdfs in the book.[**Interactions**](./Supporting_Notebooks/Interactions.ipynb)Interactive simulations of various algorithms. 
Use sliders to change the output in real time.[**Converting the Multivariate Equations to the Univariate Case**](./Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.ipynb)Demonstrates that the Multivariate equations are identical to the univariate Kalman filter equations by setting the dimension of all vectors and matrices to one.[**Iterative Least Squares for Sensor Fusion**](./Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)Deep dive into using an iterative least squares technique to solve the nonlinear problem of finding position from multiple GPS pseudorange measurements.[**Taylor Series**](./Supporting_Notebooks/Taylor-Series.ipynb)A very brief introduction to Taylor series. Github repositoryhttp://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Kalman and Bayesian Filters in Python Table of Contents[**Preface**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/00-Preface.ipynb) Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.[**Chapter 1: The g-h Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/01-g-h-filter.ipynb)Intuitive introduction to the g-h filter, also known as the $\alpha$-$\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. [**Chapter 2: The Discrete Bayes Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/02-Discrete-Bayes.ipynb)Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.[**Chapter 3: Gaussian Probabilities**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/03-Gaussians.ipynb)Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.[**Chapter 4: One Dimensional Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/04-One-Dimensional-Kalman-Filters.ipynb)Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. [**Chapter 5: Multivariate Gaussians**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/05-Multivariate-Gaussians.ipynb)Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates.[**Chapter 6: Multivariate Kalman Filter**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/06-Multivariate-Kalman-Filters.ipynb)We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.[**Chapter 7: Kalman Filter Math**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/07-Kalman-Filter-Math.ipynb)We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. [**Chapter 8: Designing Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/08-Designing-Kalman-Filters.ipynb)Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. 
Discusses, but does not solve issues like numerical stability.[**Chapter 9: Nonlinear Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/09-Nonlinear-Filtering.ipynb)Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.[**Chapter 10: Unscented Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/10-Unscented-Kalman-Filter.ipynb)Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.[**Chapter 11: Extended Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/11-Extended-Kalman-Filters.ipynb)Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. [**Chapter 12: Particle Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/12-Particle-Filters.ipynb)Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.[**Chapter 13: Smoothing**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/13-Smoothing.ipynb)Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.[**Chapter 14: Adaptive Filtering**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/14-Adaptive-Filtering.ipynb) Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.[**Appendix A: Installation, Python, NumPy, and FilterPy**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-A-Installation.ipynb)Brief introduction of Python and how it is used in this book. Description of the companionlibrary FilterPy. [**Appendix B: Symbols and Notations**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-B-Symbols-and-Notations.ipynb)Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. 
I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.*Still just a collection of notes at this point.*[**Appendix D: H-Infinity Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-D-HInfinity-Filters.ipynb) Describes the $H_\infty$ filter. *I have code that implements the filter, but no supporting text yet.*[**Appendix E: Ensemble Kalman Filters**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-E-Ensemble-Kalman-Filters.ipynb)Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.[**Appendix F: FilterPy Source Code**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Appendix-F-Filterpy-Code.ipynb)Listings of important classes from FilterPy that are used in this book. Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that might be interested to a subest of readers.[**Computing and plotting PDFs**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)Describes how I implemented the plotting of various pdfs in the book.[**Interactions**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Interactions.ipynb)Interactive simulations of various algorithms. Use sliders to change the output in real time.[**Iterative Least Squares for Sensor Fusion**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)Deep dive into using an iterative least squares technique to solve the nonlinear problem of finding position from multiple GPS pseudorange measurements.[**Taylor Series**](http://nbviewer.ipython.org/urls/raw.github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/master/Supporting_Notebooks/Taylor-Series.ipynb)A very brief introduction to Taylor series. Github repositoryhttp://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____
###Markdown
Kalman and Bayesian Filters in Python Table of Contents[**Preface**](./00-Preface.ipynb) Motivation behind writing the book. How to download and read the book. Requirements for IPython Notebook and Python. github links.[**Chapter 1: The g-h Filter**](./01-g-h-filter.ipynb)Intuitive introduction to the g-h filter, also known as the $\alpha$-$\beta$ Filter, which is a family of filters that includes the Kalman filter. Once you understand this chapter you will understand the concepts behind the Kalman filter. [**Chapter 2: The Discrete Bayes Filter**](./02-Discrete-Bayes.ipynb)Introduces the discrete Bayes filter. From this you will learn the probabilistic (Bayesian) reasoning that underpins the Kalman filter in an easy to digest form.[**Chapter 3: Probabilities, Gaussians, and Bayes' Theorem**](./03-Gaussians.ipynb)Introduces using Gaussians to represent beliefs in the Bayesian sense. Gaussians allow us to implement the algorithms used in the discrete Bayes filter to work in continuous domains.[**Chapter 4: One Dimensional Kalman Filters**](./04-One-Dimensional-Kalman-Filters.ipynb)Implements a Kalman filter by modifying the discrete Bayes filter to use Gaussians. This is a full featured Kalman filter, albeit only useful for 1D problems. [**Chapter 5: Multivariate Gaussians**](./05-Multivariate-Gaussians.ipynb)Extends Gaussians to multiple dimensions, and demonstrates how 'triangulation' and hidden variables can vastly improve estimates.[**Chapter 6: Multivariate Kalman Filter**](./06-Multivariate-Kalman-Filters.ipynb)We extend the Kalman filter developed in the univariate chapter to the full, generalized filter for linear problems. After reading this you will understand how a Kalman filter works and how to design and implement one for a (linear) problem of your choice.[**Chapter 7: Kalman Filter Math**](./07-Kalman-Filter-Math.ipynb)We gotten about as far as we can without forming a strong mathematical foundation. This chapter is optional, especially the first time, but if you intend to write robust, numerically stable filters, or to read the literature, you will need to know the material in this chapter. Some sections will be required to understand the later chapters on nonlinear filtering. [**Chapter 8: Designing Kalman Filters**](./08-Designing-Kalman-Filters.ipynb)Building on material in Chapters 5 and 6, walks you through the design of several Kalman filters. Only by seeing several different examples can you really grasp all of the theory. Examples are chosen to be realistic, not 'toy' problems to give you a start towards implementing your own filters. Discusses, but does not solve issues like numerical stability.[**Chapter 9: Nonlinear Filtering**](./09-Nonlinear-Filtering.ipynb)Kalman filters as covered only work for linear problems. Yet the world is nonlinear. Here I introduce the problems that nonlinear systems pose to the filter, and briefly discuss the various algorithms that we will be learning in subsequent chapters.[**Chapter 10: Unscented Kalman Filters**](./10-Unscented-Kalman-Filter.ipynb)Unscented Kalman filters (UKF) are a recent development in Kalman filter theory. They allow you to filter nonlinear problems without requiring a closed form solution like the Extended Kalman filter requires.This topic is typically either not mentioned, or glossed over in existing texts, with Extended Kalman filters receiving the bulk of discussion. 
I put it first because the UKF is much simpler to understand, implement, and the filtering performance is usually as good as or better then the Extended Kalman filter. I always try to implement the UKF first for real world problems, and you should also.[**Chapter 11: Extended Kalman Filters**](./11-Extended-Kalman-Filters.ipynb)Extended Kalman filters (EKF) are the most common approach to linearizing non-linear problems. A majority of real world Kalman filters are EKFs, so will need to understand this material to understand existing code, papers, talks, etc. [**Chapter 12: Particle Filters**](./12-Particle-Filters.ipynb)Particle filters uses Monte Carlo techniques to filter data. They easily handle highly nonlinear and non-Gaussian systems, as well as multimodal distributions (tracking multiple objects simultaneously) at the cost of high computational requirements.[**Chapter 13: Smoothing**](./13-Smoothing.ipynb)Kalman filters are recursive, and thus very suitable for real time filtering. However, they work extremely well for post-processing data. After all, Kalman filters are predictor-correctors, and it is easier to predict the past than the future! We discuss some common approaches.[**Chapter 14: Adaptive Filtering**](./14-Adaptive-Filtering.ipynb) Kalman filters assume a single process model, but manuevering targets typically need to be described by several different process models. Adaptive filtering uses several techniques to allow the Kalman filter to adapt to the changing behavior of the target.[**Appendix A: Installation, Python, NumPy, and FilterPy**](./Appendix-A-Installation.ipynb)Brief introduction of Python and how it is used in this book. Description of the companionlibrary FilterPy. [**Appendix B: Symbols and Notations**](./Appendix-B-Symbols-and-Notations.ipynb)Most books opt to use different notations and variable names for identical concepts. This is a large barrier to understanding when you are starting out. I have collected the symbols and notations used in this book, and built tables showing what notation and names are used by the major books in the field.*Still just a collection of notes at this point.*[**Appendix D: H-Infinity Filters**](./Appendix-D-HInfinity-Filters.ipynb) Describes the $H_\infty$ filter. *I have code that implements the filter, but no supporting text yet.*[**Appendix E: Ensemble Kalman Filters**](./Appendix-E-Ensemble-Kalman-Filters.ipynb)Discusses the ensemble Kalman Filter, which uses a Monte Carlo approach to deal with very large Kalman filter states in nonlinear systems.[**Appendix F: FilterPy Source Code**](./Appendix-F-Filterpy-Code.ipynb)Listings of important classes from FilterPy that are used in this book. Supporting NotebooksThese notebooks are not a primary part of the book, but contain information that might be interested to a subest of readers.[**Computing and plotting PDFs**](./Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)Describes how I implemented the plotting of various pdfs in the book.[**Interactions**](./Supporting_Notebooks/Interactions.ipynb)Interactive simulations of various algorithms. 
Use sliders to change the output in real time.[**Converting the Multivariate Equations to the Univariate Case**](./Supporting_Notebooks/Converting-Multivariate-Equations-to-Univariate.ipynb)Demonstrates that the Multivariate equations are identical to the univariate Kalman filter equations by setting the dimension of all vectors and matrices to one.[**Iterative Least Squares for Sensor Fusion**](./Supporting_Notebooks/Iterative-Least-Squares-for-Sensor-Fusion.ipynb)Deep dive into using an iterative least squares technique to solve the nonlinear problem of finding position from multiple GPS pseudorange measurements.[**Taylor Series**](./Supporting_Notebooks/Taylor-Series.ipynb)A very brief introduction to Taylor series. Github repositoryhttp://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
###Code
#format the book
from book_format import load_style
load_style()
###Output
_____no_output_____ |
notebooks/20210316_MagnetCooldown.ipynb | ###Markdown
Magnet testing
###Code
import sys
import os
sys.path.append('../')
import pandas as pd
import src.io as sio
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
MAGNET_FOLDER1 = sio.get_qudi_data_path("2021\\03\\20210319\\Magnet\\")
MAGNET_FOLDER2 = sio.get_qudi_data_path("2021\\03\\20210322\\Magnet\\")
MAGNET_FOLDER3 = sio.get_qudi_data_path("2021\\03\\20210323\\Magnet\\")
MAGNET_FOLDER4 = sio.get_qudi_data_path("2021\\03\\20210324\\Magnet\\")
%matplotlib inline
files = os.listdir(MAGNET_FOLDER1)
dat_files = []
for file in files:
filename, ext = os.path.splitext(file)
if ext == ".dat":
dat_files.append(filename + ext)
dat_files = [dat_value for idx, dat_value in enumerate(dat_files) if idx in [0, 2, 4, 5]]
fig, ax = plt.subplots(nrows=len(dat_files), sharex=True, sharey=True)
for idx, file in enumerate(dat_files):
filepath = os.path.join(MAGNET_FOLDER1, file)
df = pd.read_csv(filepath, skiprows=10, delimiter="\t", usecols=[0, 1, 2, 3], names=["Time", "current_x", "current_y", "current_z"])
ax[idx].plot(df["Time"], df[f"current_z"], "-")
if idx == 2:
title = f"[Run {idx+1}] Ramping up z-coil → No quench"
color = "tab:green"
elif idx == 3:
title = f"[Run {idx+1}] Max Current → Ramping down z-coil"
color = "tab:green"
ax[idx].set_xlabel("Time (s)")
else:
title = f"[Run {idx+1}] Ramping up z-coil → Quench"
color = "tab:red"
ax[idx].set_title(title, color=color)
max_current = max(df[f"current_z"])
ax[idx].axhline(max_current, linestyle="--", color=color, label="Max $I_z$" + f" = {max_current:.2f} A")
ax[idx].legend(loc="upper right")
ax[idx].set_ylabel("$I_z$ (A)")
fig.tight_layout()
plt.savefig("1.png", dpi=300)
%matplotlib inline
files = os.listdir(MAGNET_FOLDER3)
dat_files = []
for file in files:
filename, ext = os.path.splitext(file)
if ext == ".dat":
dat_files.append(filename + ext)
dat_files = [dat_value for idx, dat_value in enumerate(dat_files) if idx in [3, 4]]
fig, ax = plt.subplots(nrows=len(dat_files), sharex=True, sharey=False)
for idx, file in enumerate(dat_files):
filepath = os.path.join(MAGNET_FOLDER3, file)
df = pd.read_csv(filepath, skiprows=10, delimiter="\t", usecols=[0, 1, 2, 3], names=["Time", "current_x", "current_y", "current_z"])
for axis in ["x", "y", "z"]:
ax[idx].plot(df["Time"], df[f"current_{axis}"], "-", label=f"{axis}-coil")
if idx == 1:
title = f"[Run {idx+3}] Ramping up all coils → No quench"
color = "tab:green"
ax[idx].set_xlabel("Time (s)")
elif idx == 3:
title = f"[Run {idx+2}] Max Current → Ramping down"
color = "tab:green"
ax[idx].set_xlabel("Time (s)")
else:
title = f"[Run {idx+1}] Ramping up all coils → Quench"
color = "tab:red"
ax[idx].set_title(title, color=color)
# max_current = max(df[f"current_z"])
# ax[idx].axhline(max_current, linestyle="--", color=color, label="Max $I_z$" + f" = {max_current:.2f} A")
ax[idx].legend(loc="lower right")
ax[idx].set_ylabel("$I$ (A)")
fig.tight_layout()
plt.savefig("1.png", dpi=300)
def draw_ellipsoid(a, b, c):
coefs = (a, b, c) # Coefficients in a0/c x**2 + a1/c y**2 + a2/c z**2 = 1
# Radii corresponding to the coefficients:
rx, ry, rz = coefs
# Set of all spherical angles:
u = np.linspace(0, 2 * np.pi, 20)
v = np.linspace(0, np.pi, 20)
# Cartesian coordinates that correspond to the spherical angles:
# (this is the equation of an ellipsoid):
x = rx * np.outer(np.cos(u), np.sin(v))
y = ry * np.outer(np.sin(u), np.sin(v))
z = rz * np.outer(np.ones_like(u), np.cos(v))
return x, y, z
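# Quick sanity check on the parametrization above: every point returned by draw_ellipsoid
# should satisfy (x/a)^2 + (y/b)^2 + (z/c)^2 = 1 up to floating point error.
_ex, _ey, _ez = draw_ellipsoid(10, 10, 20)
assert np.allclose((_ex / 10)**2 + (_ey / 10)**2 + (_ez / 20)**2, 1.0)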
x, y, z = draw_ellipsoid(10, 10, 20)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z)
ax.set_zlabel("$I_z$ (A)")
ax.set_xlim([-20, 20])
ax.set_xlabel("$I_x$ (A)")
ax.set_ylim([-20, 20])
ax.set_ylabel("$I_y$ (A)")
%matplotlib inline
x, y, z = draw_ellipsoid(10, 10, 20)
x1, y1, z1 = draw_ellipsoid(3, 3, 19)
x2, y2, z2 = draw_ellipsoid(9, 9, 5)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, label="Measured", alpha=1)
#ax.plot_wireframe(x1, y1, z1, color="tab:orange", label="NV [111] tips", alpha=1)
ax.plot_wireframe(x2, y2, z2, color="tab:green", label="Meron sample", alpha=1)
ax.set_zlabel("$I_z$ (A)")
ax.set_xlim([-20, 20])
ax.set_xlabel("$I_x$ (A)")
ax.set_ylim([-20, 20])
ax.set_ylabel("$I_y$ (A)")
ax.legend()
#plt.savefig("Measured.png", dpi=300)
###Output
_____no_output_____ |
03_create_classification/Create_training_data_from_polygons.ipynb | ###Markdown
Create training data from rasters and polygonsThis script takes input rasters and polygons, automatically calculates their overlap, and gathers sample points. The gathered data is converted into a .csv file that can easily be used in any type of ML algorithm. Predefined rasters and shapes
###Code
# load libraries
# gbdx
from gbdxtools import Interface
gbdx = Interface()
# geo data tools
from shapely.geometry import Polygon, MultiPolygon, Point
from shapely.geometry import Point
from shapely.geometry import shape
import scipy.spatial as spatial
from scipy.spatial import distance
import fiona
import rasterio
import pyproj
from shapely.ops import transform
from functools import partial
# data tools
import pandas as pd
import numpy as np
from numpy import random
import glob
import os
from pprint import pprint
import random
#visualization tools
from IPython.display import clear_output
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from rasterio.plot import show
import matplotlib as mpl
from descartes import PolygonPatch
# define functions
def wgs2epsgzone(x,y):
    # EPSG code of the WGS84/UTM zone that contains longitude x, latitude y
    # (326xx in the northern hemisphere, 327xx in the southern hemisphere)
    EPSG = 32700-round((45+y)/90,0)*100+round((183+x)/6,0)
    UTM_EPSG_code = EPSG
    return UTM_EPSG_code
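# Hedged usage example (the coordinates are illustrative, not project data):
# Amsterdam (4.9 E, 52.4 N) lies in UTM zone 31N and Cape Town (18.4 E, 33.9 S) in zone 34S.
assert wgs2epsgzone(4.9, 52.4) == 32631
assert wgs2epsgzone(18.4, -33.9) == 32734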
def random_points_within(poly, n_points_per_sqkm):
    # rejection-sample random points inside `poly`, aiming for a density of
    # n_points_per_sqkm points per square kilometre (capped at 10000; raised
    # to 50 whenever the computed count would fall below 10)
min_x, min_y, max_x, max_y = poly.bounds
epsg = wgs2epsgzone(max_x, max_y)
project = partial(
pyproj.transform,
pyproj.Proj(init='epsg:4326'),
pyproj.Proj(init='epsg:%i' % (epsg)))
poly_wgs = transform(project, poly)
    n_points = int((poly_wgs.area / 1e6) * n_points_per_sqkm)  # projected area is in m², hence the 1e6
if n_points > 10000:
n_points = 10000
if n_points < 10:
n_points = 50
# print('area ' + str(poly_wgs.area))
# print('n_points ' + str(n_points))
points = []
while len(points) < n_points:
random_point = Point([random.uniform(min_x, max_x), random.uniform(min_y, max_y)])
if (random_point.within(poly)):
points.append(random_point)
return points
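# Hedged usage sketch (an assumed ~1 km x 0.7 km square near Amsterdam, not project data);
# every sampled point must lie inside the polygon by construction.
_demo_poly = Polygon([(4.90, 52.37), (4.91, 52.37), (4.91, 52.38), (4.90, 52.38)])
_demo_points = random_points_within(_demo_poly, 100)
assert all(p.within(_demo_poly) for p in _demo_points)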
def get_values_for_points(dataset, points):
    # look up the raster band values at each shapely Point; points that fall outside
    # the raster are silently skipped, so bands_save can be shorter than x_save/y_save
bands_save = []
x_save = []
y_save = []
data_array = dataset.read()
for i in range(len(points)):
x = points[i].x
y = points[i].y
x_save.append(x)
y_save.append(y)
index = dataset.index(x, y)
try:
band_values = data_array[:,index[0],index[1]]
bands_save.append(band_values)
except:
# print('point ' + str(i) + ' out of image')
continue
# print(band_values)
return bands_save, x_save, y_save
def closest_node(node, nodes):
closest_index = distance.cdist([node], nodes).argmin()
return closest_index
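# Quick sanity check with made-up coordinates: of the three candidate nodes below,
# index 0 is the one nearest to (0.1, 0.1).
_demo_nodes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
assert closest_node((0.1, 0.1), _demo_nodes) == 0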
def get_closest_image_polygons(polygons, image_locations):
x = [Multi.centroid.x for Multi in polygons]
y = [Multi.centroid.y for Multi in polygons]
polygon_centroids = np.array([x,y])
polygon_centroids = polygon_centroids.T.reshape(len(x),2)
df_polygon_image = np.zeros([len(polygon_centroids),2])
for i in tqdm(range(len(polygon_centroids))):
x,y = polygon_centroids[i]
some_pt = (x, y);
closest_index = closest_node(some_pt, image_locations)
closest_pt = image_locations[closest_index]
df_polygon_image[i,0] = int(i)
df_polygon_image[i,1] = int(closest_index)
df_polygon_image = pd.DataFrame(df_polygon_image, columns = ['polygon_id' , 'image_path_id'], dtype = int)
return df_polygon_image
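# Hedged sanity check with two synthetic polygons and two image centroids (made-up
# coordinates): each polygon should be matched to the image centroid nearest to it.
_poly_a = Point(0.0, 0.0).buffer(0.1)
_poly_b = Point(5.0, 5.0).buffer(0.1)
_demo_image_locations = np.array([[0.0, 0.0], [5.0, 5.0]])
_demo_match = get_closest_image_polygons([_poly_a, _poly_b], _demo_image_locations)
assert list(_demo_match['image_path_id']) == [0, 1]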
def rgb_from_raster(data, brightness):
bands, x, y = data.shape
    # build the plottable RGB composite; brightness is an additive offset applied after normalization
blue = data[1].astype(np.float32)
green = data[2].astype(np.float32)
red = data[4].astype(np.float32)
rgb = np.zeros((x,y,3))
rgb[...,0] = red
rgb[...,1] = green
rgb[...,2] = blue
rgb = rgb - np.min(rgb)
rgb = rgb / np.maximum(np.max(rgb), 1) + brightness
    rgb[rgb > 255] = 255  # clip to the valid display range
return rgb
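# Hedged usage sketch: build a composite from a synthetic 8-band cube laid out as
# (bands, rows, cols), the same layout that rasterio's dataset.read() returns.
_demo_cube = np.random.randint(0, 2048, size=(8, 32, 32))
_demo_rgb = rgb_from_raster(_demo_cube, brightness=0.3)
assert _demo_rgb.shape == (32, 32, 3)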
def check_valid_geometries(shapefile_path):
shape_list = []
for pol in fiona.open(shapefile_path):
if pol['geometry'] != None:
# if pol['geometry']['type'] == 'MultiPolygon':
# for sub_pol in pol['geometry']['coordinates']:
# pol = sub_pol[0]
# shape_list.append(pol)
# else:
shape_list.append(pol)
return shape_list
def raster_path_list2image_centroid_coordinate(raster_path_list):
image_locations = np.zeros([len(raster_path_list), 2])
i = 0
for path in tqdm(raster_path_list):
dataset = rasterio.open(path)
# get info from filenames
stringlist = path.split('/')[-1].split('_')
seq_nr = stringlist[1]
image_id = stringlist[-1].split('.')[0]
city = stringlist[0]
# calculate center of image
left, bottom, right, top = dataset.bounds
x_center = (left + right) / 2
y_center = (top + bottom) / 2
image_locations[i,0] = x_center
image_locations[i,1] = y_center
i = 1 + i
return image_locations
def polygon_raster_overlap(poly, n_closest, raster_files):
    # return the index (taken from n_closest) of the first raster whose bounds fully
    # contain `poly`; if none does, the last candidate checked is returned
for n in n_closest:
# load raster data
dataset = rasterio.open(raster_files[n])
# convert bounds to polygon
left, bottom, right, top = dataset.bounds
polygon = Polygon([(left, bottom), (left, top), (right, top), (right, bottom)])
# calculate overlap fraction
overlap_area = round(poly.intersection(polygon).area / poly.area, 2)
if overlap_area == 1:
break
return n
def get_data_raster_polygons(class_MultiPolygon, variables, image_locations, raster_files, plotting_overlap = False):
nr_of_polygons = len(class_MultiPolygon)
# create dataframe with all variables + label
variables_str_list= variables + ['label']
df_subsample = pd.DataFrame(columns = variables_str_list)
# len(df_polygon_image_id)
for i in tqdm(range(nr_of_polygons)):
# select polygon
poly = class_MultiPolygon[i]
# get polygon location as shapely obj.
[x,y] = poly.centroid.xy
point = np.array([x[0],y[0]])
# get index of closest 5 images to polygon
n_closest = distance.cdist([point], image_locations).argsort()[0,0:5]
# check overlap for each image agianst the polygon
n_maxoverlap = polygon_raster_overlap(poly, n_closest, raster_files)
# get raster corresponding to polygon
path_match_image = raster_files[n_maxoverlap]
# create random samples in polygon
samples = random_points_within(poly, n_points_per_sqkm)
df = pd.DataFrame(np.zeros([len(samples),len(variables)]), columns = variables)
# get band values for random samples
dataset = rasterio.open(path_match_image)
bands_save, x_save, y_save = get_values_for_points(dataset, samples)
# visualize training data selection
try:
if plotting_overlap:
# clear_output(wait = True)
# get boundary xy coordinates for plotting
try:
if len(poly.boundary) == 1:
x,y = poly.boundary.xy
else:
n_largest = np.array([boundary.length for boundary in poly.boundary]).argmax()
x,y = poly.boundary[n_largest].xy
except:
x,y = poly.boundary.xy
data = dataset.read()
data = data - np.min(data)
data = data / np.maximum(np.max(data), 1) + 0.3
plt.figure(figsize = (10,10))
plt.plot(x,y, color = 'w')
show(data[[1,3,4],:,:], transform=dataset.transform)
except Exception as e:
print('failed')
print('poly nr: ', i, ' image: ', path_match_image)
print(e)
print('___________________________________________\n')
# plt.show()
try:
df.loc[:,0:8] = bands_save
# print('data', start_row, end_row)
except:
print('no data', path_match_image)
continue
# get image_id from filename
stringlist = path_match_image.split('/')[-1].split('_')
image_id = stringlist[-1].split('.')[0]
# get metadata
record = gbdx.catalog.get(image_id)
df.loc[:,'x'] = x_save
df.loc[:,'y'] = y_save
for property_name in property_names:
try:
property_record = record['properties'][property_name]
df[property_name] = property_record
except:
print('failed ', property_name, image_id)
df[property_name] = None
df['label'] = label
df_subsample = df_subsample.append(df, ignore_index=True)
return df_subsample
###Output
_____no_output_____
###Markdown
Select variables and input data
###Code
# select variables
property_names = ['cloudCover',
'multiResolution',
'targetAzimuth',
# 'timestamp',
'sunAzimuth',
'offNadirAngle',
# 'platformName',
'sunElevation',
# 'scanDirection',
'panResolution']
variables = [0, 1, 2, 3, 4, 5, 6, 7, 'x', 'y'] + property_names
variables
# find all files in folders for specific classes
# find current working directory
cwd = os.getcwd()
# define paths with training data polygons
class1_shapes_path = '../../TreeTect/data/shapefiles_waterbodies_osm/hand_water/*.shp'
class2_shapes_path = '../../TreeTect/data/shapefiles_waterbodies_osm/hand_nonwater/*.shp'
# define paths with raster data
rasters_file_path = '../../TreeTect/data/rasters_waterbodies_osm/**/*.tif'
# find files in shapefile folder
class1_shape_files = glob.glob(class1_shapes_path)
class2_shape_files = glob.glob(class2_shapes_path)
# find files in raster folder
raster_files = glob.glob(rasters_file_path)
class2_shape_files
class1_shape_files
from shapely.ops import cascaded_union
# convert esri shapefiles to shapely objects
# check valid geometries
class1_valid_shape_list = check_valid_geometries(class1_shape_files[0])
class2_valid_shape_list = check_valid_geometries(class2_shape_files[0])
# convert list to shapely MultiPolygons
class1_MultiPoly = cascaded_union([shape(pol['geometry']) for pol in class1_valid_shape_list])
class2_MultiPoly = cascaded_union([shape(pol['geometry']) for pol in class2_valid_shape_list])
print('number of polygons class 1: ', len(class1_MultiPoly))
print('number of polygons class 2: ', len(class2_MultiPoly))
print('----------------------')
print('number of raster files: ', len(raster_files))
# get centroids of all rasters
image_locations = raster_path_list2image_centroid_coordinate(raster_files)
# # Uncomment if you want to see some results
# plt.scatter(image_locations[0:10:,0], image_locations[0:10,1])
# calculate UTM zone (projected coordinate system) for plotting and meter calculations
min_x, min_y, max_x, max_y = class2_MultiPoly.bounds
UTM_EPSG_code = wgs2epsgzone(max_x,max_y)
n_points_per_sqkm = 500000
label = 'water'
df_class1 = get_data_raster_polygons(class1_MultiPoly,
variables,
image_locations,
raster_files,
plotting_overlap = False)
label = 'non_water'
df_class2 = get_data_raster_polygons(class2_MultiPoly, variables, image_locations, raster_files)
df_all = df_class1.append(df_class2, ignore_index=True)
df_all
df_all.sample(10)
###Output
_____no_output_____
###Markdown
Export training data
###Code
df_all.label.unique()
import datetime
file_name = 'all_polygons'
NOW = datetime.datetime.now()
csv_filename = "../../TreeTect/data/trainings_data_waterbodies/data_non_acomp_{}_{}.csv".format(file_name, NOW)
csv_filename
df_all.to_csv(csv_filename)
print('data written to file')
###Output
data written to file
###Markdown
In case of emergency
###Code
# Save dataframe as csv file and create download link
from IPython.display import HTML
import base64
import pandas as pd
import datetime
NOW = datetime.datetime.now()
def create_download_link(df, title = "Download CSV file",
filename = "data_{}_{}.csv".format(file_name, NOW)):
csv = df.to_csv(index =False)
b64 = base64.b64encode(csv.encode())
payload = b64.decode()
html = '<a download="{filename}" href="data:text/csv;base64,{payload}" target="_blank">{title}</a>'
html = html.format(payload=payload,title=title,filename=filename)
return HTML(html)
create_download_link(df_all)
###Output
_____no_output_____
###Markdown
All-pixels method: label every pixel of a single raster
###Code
import rasterio
from rasterio.mask import mask
import matplotlib.pyplot as plt
dataset = rasterio.open(path_match_image)
# read the raster once and flatten it to one row per pixel
raster_data = dataset.read()
bands, x, y = raster_data.shape
array_dataset = raster_data.reshape([x * y, bands])
rasterio.__version__
water_pixels_mask = mask(dataset, [water_Multi[0]], invert=False, crop=False)[0]
b,x,y = water_pixels_mask.shape
water_pixels = water_pixels_mask.reshape([x*y,8])[:,0].astype(int)
water_pixels = water_pixels != 0
sum(water_pixels) / len(water_pixels)
plt.imshow(water_pixels_mask[1,:,:])
# Create dataframe from clicked values
import pandas as pd
df_bands = pd.DataFrame(array_dataset)
df_bands['label'] = 'non_water'
df_bands.loc[water_pixels, ['label']] = 'water'
sum(df_bands.label == 'water')
sum(df_bands['label'] == 'water') + sum(df_bands['label'] == 'non_water')
df = df_bands
###Output
_____no_output_____
###Markdown
Visualizations
###Code
## check data!!
# !pip install folium
import folium
m = folium.Map([water_Multi_wgs.centroid.y, water_Multi_wgs.centroid.x], zoom_start = 16,
tiles = 'https://{s}.basemaps.cartocdn.com/light_nolabels/{z}/{x}/{y}{r}.png',
attr='CartoDB') #, name = 'cartocdn')
folium.TileLayer('https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',attr='esri', name = 'esri Imagery').add_to(m)
# folium.raster_layers.ImageOverlay(
# image=image.rgb(),
# name='image 2017',
# bounds=[[bbox[1], bbox[0]],[bbox[3],bbox[2]]],
# opacity=1,
# interactive=False,
# cross_origin=False,
# zindex=1,
# colormap=lambda x: (0,0,0, x)
# ).add_to(m)
# folium.raster_layers.ImageOverlay(
# image=classification_plot,
# name='Classification 2017',
# bounds=[[bbox[1], bbox[0]],[bbox[3],bbox[2]]],
# opacity=1,
# interactive=False,
# cross_origin=False,
# zindex=1,
# colormap=lambda x: (0,x,x, 1)
# ).add_to(m)
folium.Choropleth(water_Multi, name = 'Training set water').add_to(m)
folium.Choropleth(non_water_Multi_wgs, name = 'Training set non-water').add_to(m)
# folium.Choropleth(setu_smooth, name = 'Smooth setu delineation').add_to(m)
# f_smooth = [0.00001,0.00002,0.00003,0.00004,0.00006,0.00008]
# for i in f_smooth:
# setu_smooth = setu_wgs.simplify(i)
# folium.Choropleth(setu_smooth, name = 'smooth setu delineation'.format(i)).add_to(m)
# #
# I can add marker one by one on the map
# for i in range(0,len(data)):
# folium.Marker([data.iloc[i]['lon'], data.iloc[i]['lat']], popup=data.iloc[i]['name']).add_to(m)
for point in points_water:
#point_wgs = transform(project, point)
folium.Marker([point.y, point.x]).add_to(m)
for point in points_non_water:
#point_wgs = transform(project, point)
folium.Marker([point.y, point.x]).add_to(m)
folium.LayerControl().add_to(m)
# view folium map
# m
from shapely.ops import transform
from functools import partial
import pyproj
# NB: the source and target CRS are both EPSG:4326 here, so this `project` is
# effectively an identity transform; if the polygons are in a projected CRS
# (e.g. the UTM zone computed above), replace the first Proj accordingly.
project = partial(
    pyproj.transform,
    pyproj.Proj(init='epsg:4326'),
    pyproj.Proj(init='epsg:4326'))
non_water_Multi_wgs = transform(project, non_water_Multi)
water_Multi_wgs = transform(project, water_Multi)
non_water_Multi_wgs
###Output
_____no_output_____ |
02-cs231n/spring1617_assignment1/assignment1/knn.ipynb | ###Markdown
k-Nearest Neighbor (kNN) exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
The kNN classifier consists of two stages:
- During training, the classifier takes the training data and simply remembers it
- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
- The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
###Code
# Run some setup code for this notebook.
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
###Output
_____no_output_____
###Markdown
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest examples and have them vote for the label.
Let's begin by computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
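For reference, here is a minimal sketch of the quantity `compute_distances_two_loops` is expected to produce (using the Euclidean / L2 distance); the graded implementation belongs in `k_nearest_neighbor.py`, and the function name below is only illustrative:

```python
import numpy as np

def compute_distances_two_loops_sketch(X_test, X_train):
    # dists[i, j] = L2 distance between the i-th test point and the j-th training point
    num_test, num_train = X_test.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i, j] = np.sqrt(np.sum((X_test[i] - X_train[j]) ** 2))
    return dists
```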
###Code
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
###Output
_____no_output_____
###Markdown
**Inline Question 1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
- What in the data is the cause behind the distinctly bright rows?
- What causes the columns?
**Your Answer**: *fill this in.*
###Code
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
_____no_output_____
###Markdown
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
###Code
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
_____no_output_____
###Markdown
You should expect to see a slightly better performance than with `k = 1`.
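For the fully vectorized version exercised below, the usual trick is to expand the squared L2 distance as $\lVert a-b \rVert^2 = \lVert a \rVert^2 - 2\,a^\top b + \lVert b \rVert^2$, so the whole distance matrix comes from one matrix product and two squared-norm vectors. A sketch (illustrative only; the graded code belongs in `k_nearest_neighbor.py`):

```python
import numpy as np

def compute_distances_no_loops_sketch(X_test, X_train):
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, broadcast over all (test, train) pairs
    test_sq = np.sum(X_test ** 2, axis=1)[:, np.newaxis]    # shape (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)[np.newaxis, :]  # shape (1, num_train)
    cross = X_test.dot(X_train.T)                           # shape (num_test, num_train)
    # clip tiny negative values caused by floating-point error before the sqrt
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0))
```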
###Code
# Now let's speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
###Output
_____no_output_____
###Markdown
Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
###Code
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
# one possible implementation:
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
# one possible implementation:
for k in k_choices:
    k_to_accuracies[k] = []
    for fold in range(num_folds):
        # hold out one fold for validation, train on the remaining folds
        X_val, y_val = X_train_folds[fold], y_train_folds[fold]
        X_tr = np.concatenate(X_train_folds[:fold] + X_train_folds[fold + 1:])
        y_tr = np.concatenate(y_train_folds[:fold] + y_train_folds[fold + 1:])
        fold_classifier = KNearestNeighbor()
        fold_classifier.train(X_tr, y_tr)
        y_val_pred = fold_classifier.predict(X_val, k=k)
        k_to_accuracies[k].append(np.mean(y_val_pred == y_val))
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
_____no_output_____ |
nb2_clustering.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/ckbjimmy/2018_mlw/blob/master/nb2_clustering.ipynb)
Machine Learning for Clinical Predictive Analytics
We would like to introduce basic machine learning techniques and toolkits for clinical knowledge discovery in this workshop.
The material will cover common useful algorithms for clinical prediction tasks, as well as the diagnostic workflow of applying machine learning to real-world problems. We will use [Google colab](https://colab.research.google.com/) / python jupyter notebook and two datasets:
- Breast Cancer Wisconsin (Diagnostic) Database, and
- pre-extracted ICU data from the PhysioNet Database
to build predictive models.
The learning objectives of this workshop tutorial are:
- Learn how to use Google colab / jupyter notebook
- Learn how to build machine learning models for clinical classification and/or clustering tasks
To help the workshop progress without obstacles, we hope that the readers fulfill the following prerequisites:
- [Skillset] basic python syntax
- [Requirements] Google account OR [anaconda](https://anaconda.org/anaconda/python)
In part 1, we will go through the basics of machine learning for classification problems.
In part 2, we will investigate unsupervised learning methods for clustering and visualization.
In part 3, we will play with neural networks.
Part II – Unsupervised learning algorithms
In part 2, we will investigate unsupervised learning algorithms for clustering and dimensionality reduction.
In the first part of the workshop, we introduced many algorithms for classification tasks. Those tasks belong to the scenario of **supervised learning**, which means that the labels/annotations of your training dataset are given. For example, you already know which tumor samples are malignant or benign.
Now we will look at the other scenario, called **unsupervised learning**, which is about finding patterns (hidden representations) in the data.
In this scenario, the data do not need to be labelled; we just need the input variables/features without any outcome variables.
Unsupervised learning algorithms try to discover the patterns and inner structure of the data by themselves: they group **similar** data points together to form clusters, or compress high-dimensional data into a lower-dimensional representation.
The difference between supervised (classification and regression problems) and unsupervised learning is roughly shown in the following picture.
![unsup](http://oliviaklose.azurewebsites.net/content/images/2015/02/2-supervised-vs-unsupervised-1.png)
[Source] Andrew Ng's Machine Learning Coursera Course Lecture 1
After going through this tutorial, we hope that you will understand how to use scikit-learn to design and build models for clustering and dimensionality reduction, and how to evaluate them. Again, we start from the breast cancer dataset in the UCI data repository to get a quick view of how to do the analysis and build models using well-structured data. We load the breast cancer dataset from `sklearn.datasets` and preprocess it as we did in Part I.
We visualize the data in the vector space using only the first two columns, and color the points with the provided labels.
We see that simply using two features may already separate the two clusters to some degree.
###Code
from sklearn import datasets
import matplotlib.pyplot as plt
df_bc = datasets.load_breast_cancer()
print(df_bc.feature_names)
print(df_bc.target_names)
X = df_bc['data']
y = df_bc['target']
label = {0: 'malignant', 1: 'benign'}
x_axis = X[:, 0] # mean radius
y_axis = X[:, 1] # mean texture
plt.scatter(x_axis, y_axis, c=y)
plt.show()
###Output
['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']
['malignant' 'benign']
###Markdown
Clustering
We are now going to use clustering algorithms to group data points into several clusters using only the predictors/features.
K-means clustering
K-means clustering is an iterative algorithm: each iteration reassigns points to the nearest centroid and updates the centroids, converging to a local optimum of the within-cluster sum of squares. In k-means, we need to choose the number of clusters, $k$, beforehand.
There are many methods to decide the $k$ value if it is unknown. The simplest approach is the elbow (bend) method: plot the sum of squared errors against $k$ in a scree plot, and the elbow point suggests the number of clusters for k-means.
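For reference, with clusters $C_1, \dots, C_k$ and centroids $\mu_1, \dots, \mu_k$, k-means minimizes the within-cluster sum of squares (inertia), which is exactly the quantity (negated) that `KMeans.score` reports in the elbow plot below:

$$\min_{C_1,\dots,C_k} \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - \mu_j \rVert^2$$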
###Code
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import PCA
# decide k value
Nc = range(1, 5)
kmeans = [KMeans(n_clusters=i) for i in Nc]
kmeans
score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))]
score
plt.plot(Nc, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
###Output
_____no_output_____
###Markdown
Since we already know that there are two classes in our dataset, we set the parameter `n_clusters` to a $k$ value of 2 in our model.
Based on the distance to each centroid, the given inputs are assigned to their respective clusters. Each centroid of a cluster is a collection of feature values which define the resulting group. Examining the centroid feature weights can be used to qualitatively interpret what kind of group each cluster represents.
Now we use all features (`X`) for clustering (`km`).
We use a confusion matrix to demonstrate the performance of k-means clustering. The accuracy of the model can be computed as the sum of the diagonal (or anti-diagonal, depending on how the cluster labels happen to be permuted) elements divided by the sample size.
In our case, $\frac{(356+130)}{(82+356+130+1)} = 0.85$.
For visualization, we use principal component analysis (PCA), since the raw high-dimensional data cannot be drawn directly on a 2D plot. We will introduce PCA later in the section on dimensionality reduction.
We can see that the two clusters are well separated with the given features.
For the details of the k-means algorithm, please check the [wikipedia page](https://en.wikipedia.org/wiki/K-means_clustering).
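Because k-means does not know which cluster index corresponds to which class, the cluster labels may be permuted relative to `y`. A small helper like the one below (a sketch, assuming the 2x2 confusion matrix `cm` computed in the next cell) simply takes whichever of the two possible cluster-to-class assignments gives the higher accuracy:

```python
import numpy as np

def clustering_accuracy_2x2(cm):
    # best accuracy over the two possible cluster-to-class assignments
    return max(np.trace(cm), np.trace(np.fliplr(cm))) / cm.sum()
```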
###Code
# k-means
k = 2
km = KMeans(n_clusters=k)
km.fit(X)
print(km.labels_)
# performance
cm = confusion_matrix(y, km.labels_)
print(cm)
# visualization
pca = PCA(n_components=2).fit(X)
pca_2d = pca.transform(X)
for i in range(0, pca_2d.shape[0]):
if km.labels_[i] == 0:
c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+')
elif km.labels_[i] == 1:
c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o')
plt.legend([c1, c2], ['Cluster 1', 'Cluster 2'])
plt.title('K-means finds 2 clusters')
plt.show()
###Output
[0 0 0 1 0 1 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1
1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1
1 0 1 0 0 1 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1
1 1 1 1 1 1 1 0 0 1 0 0 1 1 1 1 0 1 0 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 0 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 1 1
1 0 1 1 1 1 1 1 1 1 1 1 0 0 1 1 0 0 1 1 1 1 0 1 1 0 1 0 1 1 1 1 1 0 0 1 1
1 1 1 1 1 1 1 1 0 1 1 0 1 1 0 0 1 0 1 1 1 1 0 1 1 1 1 1 0 1 0 0 0 1 0 1 0
1 0 0 0 1 0 0 1 1 1 1 1 1 0 1 0 1 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1
1 1 0 1 0 1 0 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0 0
1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1
1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 1 1
0 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 0 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0
1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 0 0 1 0 1]
[[130 82]
[ 1 356]]
###Markdown
DBSCAN clustering
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is another clustering algorithm, for which you do not need to decide the $k$ value beforehand.
The tradeoff is that you need to choose the values of two parameters instead:
- `eps` (the maximum distance between two data points for them to be considered in the same neighborhood), and
- `min_samples` (the minimum number of data points in a neighborhood for it to be considered a cluster).
We can see that some samples are not clustered into the correct groups. You may try different values of the two parameters for better clustering.
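If you are unsure which `eps` to pick, one rough, illustrative approach is to scan a few candidate values and look at how many clusters and noise points each one produces (the candidate values below are arbitrary, and `X` is assumed to be the unscaled feature matrix used above):

```python
import numpy as np
from sklearn.cluster import DBSCAN

for eps in [10, 50, 100, 200, 500]:
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise
    n_noise = np.sum(labels == -1)
    print('eps=%s: %d clusters, %d noise points' % (eps, n_clusters, n_noise))
```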
###Code
from sklearn.cluster import DBSCAN
# DBSCAN
dbscan = DBSCAN(eps=100, min_samples=10)
dbscan.fit(X)
print(dbscan.labels_)
# performance
cm = confusion_matrix(y, dbscan.labels_)
print(cm)
# visualization
pca = PCA(n_components=2).fit(X)
pca_2d = pca.transform(X)
for i in range(0, pca_2d.shape[0]):
if dbscan.labels_[i] == 0:
c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+')
elif dbscan.labels_[i] == 1:
c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o')
elif dbscan.labels_[i] == -1:
c3 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='b', marker='*')
plt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2', 'Noise'])
plt.title('DBSCAN finds 2 clusters and Noise')
plt.show()
###Output
[-1 -1 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 -1 1 1 1 1 -1
-1 -1 1 -1 1 1 0 -1 1 -1 1 1 1 1 1 1 1 1 0 1 1 0 1 1
1 1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 -1 1
-1 1 1 1 1 0 0 1 1 1 -1 -1 1 0 1 0 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1 1 1 1 1
1 0 -1 1 1 1 1 0 1 0 1 1 1 1 0 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 -1 1 -1 1 1 1
-1 1 1 1 1 1 1 1 1 1 1 1 -1 -1 1 1 1 1 0 1 1 1 1 1
1 1 1 1 1 1 0 1 1 1 -1 -1 1 1 1 1 1 1 0 1 -1 -1 1 1
1 1 -1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 -1 0 1 1
1 1 1 1 -1 1 1 1 1 1 -1 1 -1 1 -1 1 -1 1 1 1 0 1 1 1
-1 -1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 -1 1 0 1 1 1 1 1 1 1 1 1
1 1 1 1 1 0 1 1 1 0 1 -1 1 1 1 1 1 1 1 1 1 1 1 1
1 -1 1 -1 1 1 1 0 1 1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1
1 1 1 1 1 0 0 1 -1 -1 1 1 -1 -1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 -1 1 1 1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
-1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 -1 1 -1 1 1 1 1
1 1 1 1 -1 -1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 0 1 1 1 1 0 1 1 1 1 1 0 0 1 1 1 -1
1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 -1 1 1 1 1 1 1
1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 0 -1 0 1 0 1]
[[ 0 0 0]
[ 56 44 112]
[ 0 0 357]]
###Markdown
There are also other clustering algorithms provided in scikit-learn. You may check the [scikit-learn document of clustering](http://scikit-learn.org/stable/modules/clustering.html#clustering) and play with them!
Dimensionality reduction
Dimensionality reduction methods reduce the number of features and represent the data with a much smaller, compressed representation.
The technique is helpful for analyzing sparse, high-dimensional data, which suffers from the ["curse of dimensionality"](https://en.wikipedia.org/wiki/Curse_of_dimensionality). Here we will introduce two commonly used algorithms for dimensionality reduction: principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE).
Principal component analysis (PCA)
PCA finds the linear transformation that, for a given number of dimensions, retains as much of the variance in the data as possible, i.e. it reduces the number of dimensions with minimal loss of information.
![loss](https://raw.githubusercontent.com/ckbjimmy/2018_mlw/master/img/pca.png)
[Source] Courtesy by Prof. HY Lee (NTU)
Sometimes the information that was lost is regarded as noise---information that does not represent the phenomena we are trying to model, but is rather a side effect of some usually unknown processes.
In the example, we keep the first two principal components (PC1 and PC2) and visualize the data after the PCA transformation.
The figure shows that PCA compresses the data from 30 dimensions to 2 dimensions without losing too much of the information needed to separate the data points.
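You can quantify how much information the retained components keep via the fitted PCA object's `explained_variance_ratio_` attribute (here assuming the same feature matrix `X` as above):

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)        # fraction of variance per component
print(pca.explained_variance_ratio_.sum())  # total variance captured by the 2 components
```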
###Code
from sklearn.decomposition import PCA
# original feature number
print(X.shape[1])
# PCA
pca = PCA(n_components=2).fit(X)
pca_2d = pca.transform(X)
x_axis = pca_2d[:, 0]
y_axis = pca_2d[:, 1]
plt.scatter(x_axis, y_axis, c=y)
plt.show()
###Output
30
###Markdown
We can even use the result of the PCA transformation to perform the classification task (simply using logistic regression as an example) with much more compact data.
The results show that using the PCA-transformed data for classification does not decrease the classification performance too much.
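Note that the cells below fit and evaluate on the same training data. A sketch of a leakage-free comparison (fitting PCA inside a pipeline and scoring with cross-validation, assuming the same `X` and `y` as above) could look like this:

```python
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

pipe = Pipeline([('pca', PCA(n_components=2)),
                 ('clf', LogisticRegression())])
scores = cross_val_score(pipe, X, y, cv=5, scoring='roc_auc')
print('Mean cross-validated AUROC: %.3f' % scores.mean())
```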
###Code
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
# use all features
clf = LogisticRegression(fit_intercept=True)
clf.fit(X, y)
yhat = clf.predict_proba(X)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
# use PCA transformed features
clf = LogisticRegression(fit_intercept=True)
clf.fit(pca_2d, y)
yhat = clf.predict_proba(pca_2d)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
###Output
0.995 - AUROC of model (training set).
0.978 - AUROC of model (training set).
###Markdown
t-SNE (t-distributed stochastic neighbor embedding)
PCA uses a **linear transformation** of the data. However, it may be better to consider **non-linearity** for higher-dimensional data.
t-SNE is one of the unsupervised learning methods for visualizing higher-dimensional data. It adopts the idea of manifold learning: each high-dimensional data point is modeled by a lower-dimensional point in such a way that similar objects are modeled by nearby points with high probability.
Again, we use the result of the t-SNE modeling and see that it still preserves most of the information inside the data for classification (although it performs worse than simple PCA in this case).
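Two practical notes (illustrative, using the same `X` as above): the t-SNE embedding depends on the random initialization and on the `perplexity` parameter, and scikit-learn's `TSNE` only offers `fit_transform` (there is no `transform` for new, unseen samples), so it is mainly a visualization tool rather than a reusable feature extractor:

```python
from sklearn.manifold import TSNE

# fixing random_state makes the embedding reproducible between runs
tsne_embedding = TSNE(n_components=2, perplexity=30,
                      learning_rate=100, random_state=0).fit_transform(X)
print(tsne_embedding.shape)
```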
###Code
from sklearn.manifold import TSNE
# t-SNE
ts = TSNE(learning_rate=100)
tsne = ts.fit_transform(X)
x_axis = tsne[:, 0]
y_axis = tsne[:, 1]
plt.scatter(x_axis, y_axis, c=y)
plt.show()
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
# use all features
clf = LogisticRegression(fit_intercept=True)
clf.fit(X, y)
yhat = clf.predict_proba(X)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
# use tSNE transformed features
clf = LogisticRegression(fit_intercept=True)
clf.fit(tsne, y)
yhat = clf.predict_proba(tsne)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
###Output
0.995 - AUROC of model (training set).
0.954 - AUROC of model (training set).
###Markdown
Exercise
Iris dataset
Try to use the iris dataset!
We show the result of using k-means on the iris dataset. Please try to modify the code above to see what happens when you apply DBSCAN, PCA and t-SNE to this dataset (a minimal DBSCAN starting point is sketched below).
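As a starting point for the exercise, here is a minimal DBSCAN sketch, assuming `X` and `y` hold the iris features and labels loaded in the next cell; the `eps`/`min_samples` values are only a first guess and will likely need tuning:

```python
from sklearn.cluster import DBSCAN
from sklearn.metrics import confusion_matrix

db = DBSCAN(eps=0.5, min_samples=5).fit(X)
print(confusion_matrix(y, db.labels_))  # -1 in db.labels_ marks noise points
```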
###Code
df = datasets.load_iris()
print(df.feature_names)
print(df.target_names)
X = df['data']
y = df['target']
label = {0: 'setosa', 1: 'versicolor', 2: 'virginica'}
# simply visualize using two features
x_axis = X[:, 0]
y_axis = X[:, 1]
plt.scatter(x_axis, y_axis, c=y)
plt.show()
# find optimal k value
Nc = range(1, 5)
kmeans = [KMeans(n_clusters=i) for i in Nc]
kmeans
score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))]
score
plt.plot(Nc, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
# k-means
k = 3
km = KMeans(n_clusters=k)
km.fit(X)
# performance
cm = confusion_matrix(y, km.labels_)
print(cm)
# PCA
pca = PCA(n_components=2).fit(X)
pca_2d = pca.transform(X)
for i in range(0, pca_2d.shape[0]):
if km.labels_[i] == 0:
c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+')
elif km.labels_[i] == 1:
c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o')
elif km.labels_[i] == 2:
c3 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='b', marker='*')
plt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2', 'Cluster 3'])
plt.title('K-means finds 3 clusters')
plt.show()
###Output
_____no_output_____
###Markdown
PhysioNet dataset
How about the PhysioNet dataset?
It seems that the quality of the unsupervised model is not good enough here.
This may be because of the significant reduction in dimension, which yields a loss of information.
###Code
import numpy as np
import pandas as pd
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import StandardScaler
# load data
dataset = pd.read_csv('https://raw.githubusercontent.com/ckbjimmy/2018_mlw/master/data/PhysionetChallenge2012_data.csv')
X = dataset.iloc[:, 1:].values
y = dataset.iloc[:, 0].values
# imputation and normalization
X = Imputer(missing_values='NaN', strategy='mean', axis=0).fit(X).transform(X)
X = StandardScaler().fit(X).transform(X)
# find k value
Nc = range(1, 5)
kmeans = [KMeans(n_clusters=i) for i in Nc]
kmeans
score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))]
score
plt.plot(Nc, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
# k-means
k = 2
km = KMeans(n_clusters=k)
km.fit(X)
# performance
cm = confusion_matrix(y, km.labels_)
print(cm)
# visualization
pca = PCA(n_components=2).fit(X)
pca_2d = pca.transform(X)
for i in range(0, pca_2d.shape[0]):
if km.labels_[i] == 0:
c1 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='r', marker='+')
elif km.labels_[i] == 1:
c2 = plt.scatter(pca_2d[i, 0], pca_2d[i, 1], c='g', marker='o')
plt.legend([c1, c2], ['Cluster 1', 'Cluster 2'])
plt.title('K-means finds 2 clusters')
plt.show()
# use all features
clf = LogisticRegression(fit_intercept=True)
clf.fit(X, y)
yhat = clf.predict_proba(X)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
# use PCA transformed features
clf = LogisticRegression(fit_intercept=True)
clf.fit(pca_2d, y)
yhat = clf.predict_proba(pca_2d)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
ts = TSNE(learning_rate=200)
tsne = ts.fit_transform(X)
x_axis = tsne[:, 0]
y_axis = tsne[:, 1]
plt.scatter(x_axis, y_axis, c=y)
plt.show()
# use all
clf = LogisticRegression(fit_intercept=True)
clf.fit(X, y)
yhat = clf.predict_proba(X)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
# use tSNE
clf = LogisticRegression(fit_intercept=True)
clf.fit(tsne, y)
yhat = clf.predict_proba(tsne)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
pca_16 = PCA(n_components=16).fit(X).transform(X)
tsne = TSNE(learning_rate=200).fit_transform(pca_16)
x_axis = tsne[:, 0]
y_axis = tsne[:, 1]
plt.scatter(x_axis, y_axis, c=y)
plt.show()
# use all
clf = LogisticRegression(fit_intercept=True)
clf.fit(X, y)
yhat = clf.predict_proba(X)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
# use tSNE
clf = LogisticRegression(fit_intercept=True)
clf.fit(tsne, y)
yhat = clf.predict_proba(tsne)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
###Output
_____no_output_____
###Markdown
From the above cases, we may guess that 2 dimensions are enough to represent the breast cancer and Iris data, but not for PhysioNet mortality prediction.
After all, the PhysioNet data has more than 180 features.
You can try to increase the number of retained dimensions from 2 to more (16, 32, 64) and see how the performance improves.
The following code gives an example of using 16 dimensions---although they cannot be visualized in a 2D plot.
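One way to choose the number of components, rather than guessing 16/32/64, is to look at the cumulative explained variance of a full PCA fit and pick the smallest number of components that reaches, say, 90% (a sketch, assuming `X` is the imputed and standardized PhysioNet matrix from above):

```python
import numpy as np
from sklearn.decomposition import PCA

cum_var = np.cumsum(PCA().fit(X).explained_variance_ratio_)
n_components_90 = np.argmax(cum_var >= 0.9) + 1  # first index reaching 90%
print('Components needed for 90%% of the variance: %d' % n_components_90)
```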
###Code
pca_16 = PCA(n_components=16).fit(X).transform(X)
# use all
clf = LogisticRegression(fit_intercept=True)
clf.fit(X, y)
yhat = clf.predict_proba(X)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
# use PCA-transformed features (16 components)
clf = LogisticRegression(fit_intercept=True)
clf.fit(pca_16, y)
yhat = clf.predict_proba(pca_16)[:,1]
auc = metrics.roc_auc_score(y, yhat)
print('{:0.3f} - AUROC of model (training set).'.format(auc))
###Output
0.881 - AUROC of model (training set).
0.787 - AUROC of model (training set).
###Markdown
More unsupervised learning algorithms
There are still a lot of unsupervised ways to represent the data.
We won't cover the remaining algorithms, but you may check them in the future when you want to dive into this field.
- Anomaly detection
- Autoencoders
- Generative Adversarial Networks (GAN)
- ...more
###Code
###Output
_____no_output_____ |