Dataset columns: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (string, 15 classes), hash (string, 32 chars)
We can use these to make vp and rho earth models. We can use NumPy’s fancy indexing by passing our array of indices to access the rock properties (in this case acoustic impedance) for every element at once.
vp = vps[w]
rho = rhos[w]
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
07dbb0361893fbd319e38fa4bb6e8f91
Each of these new arrays is the shape of the model, but is filled with a rock property:
vp.shape

vp[:5, :5]
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
6e22f10ab04e744ccf26a30a3b4a0c0b
Now we can create the reflectivity profile:
rc = bg.reflection.acoustic_reflectivity(vp, rho)
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
6a3b4858707c7da74e5bfbd5d45ff575
Then make a wavelet and convolve it with the reflectivities:
ricker, _ = bg.filters.ricker(duration=0.064, dt=0.001, f=40)
syn = bg.filters.convolve(rc, ricker)
syn.shape
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
0a131dfca4bbb43c7568370a90a6c115
The easiest way to check everything worked is probably to plot it.
fig, axs = plt.subplots(figsize=(17, 4), ncols=5, gridspec_kw={'width_ratios': (4, 4, 4, 1, 4)})
axs[0].imshow(w)
axs[0].set_title('Wedge model')
axs[1].imshow(vp * rho)
axs[1].set_title('Impedance')
axs[2].imshow(rc)
axs[2].set_title('Reflectivity')
axs[3].plot(ricker, np.arange(ricker.size))
axs[3].axis('off')
axs[3].set_title('Wavelet')
axs[4].imshow(syn)
axs[4].set_title('Synthetic')
axs[4].plot(top, 'w', alpha=0.5)
axs[4].plot(base, 'w', alpha=0.5)
plt.show()
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
8f5736a2c28cda9e9a32ff718b320b8b
Alternative workflow In the last example, we made an array of integers, then used indexing to place rock properties in the array, using the index as a sort of look-up. But we could make the impedance model directly, passing rock properties in to the wedge() function via the strat argument. It just depends how you want to make your models. The strat argument was the default [0, 1, 2] in the last example. Let's pass in the rock properties instead.
vps = np.array([2320, 2350, 2350])
rhos = np.array([2650, 2600, 2620])

impedances = vps * rhos

w, top, base, ref = bg.models.wedge(strat=impedances)
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
daa9196ec957b4dccdd0b125dbcc00c4
Now the wedge contains rock properties, not integer labels. Offset reflectivity Let's make things more realistic by computing offset reflectivities, not just normal incidence (acoustic) reflectivity. We'll need Vs as well:
vps = np.array([2320, 2350, 2350])
vss = np.array([1150, 1250, 1200])
rhos = np.array([2650, 2600, 2620])
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
c5f0e6207885e41da13866d107571723
We need the model with integers like 0, 1, 2 again:
w, top, base, ref = bg.models.wedge()
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
1d92831eca1cf0ae1eba26844e858ae8
Index to get the property models:
vp = vps[w]
vs = vss[w]
rho = rhos[w]
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
2154bea4978915e0de5abe9058164f2a
Compute the reflectivity for angles up to 45 degrees:
rc = bg.reflection.reflectivity(vp, vs, rho, theta=range(46))
rc.shape
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
f73d26d9caa300561d749f1624c17ee8
The result is three-dimensional: the angles are in the first dimension. So the zero-offset reflectivities are in rc[0] and 30 degrees is at rc[30]. Or, you can slice this cube in another orientation and see how reflectivity varies with angle:
plt.imshow(rc.real[:, :, 50].T)
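For instance, a minimal sketch (assuming, as described above, that the angle axis comes first in `rc`) comparing the zero-offset and 30-degree panels side by side:

```python
# Compare the normal-incidence and 30-degree reflectivity panels.
fig, axs = plt.subplots(ncols=2, figsize=(10, 4))
axs[0].imshow(rc.real[0])
axs[0].set_title('0 degrees')
axs[1].imshow(rc.real[30])
axs[1].set_title('30 degrees')
plt.show()
```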
docs/_userguide/_A_quick_wedge_model.ipynb
agile-geoscience/bruges
apache-2.0
b6ec5c52e54ec7510701dbdb718d85ab
Init from dict and xrange index vs. from something else Timings
%%timeit
d = pd.DataFrame(columns=['A'], index=xrange(1000))

%%timeit
d = pd.DataFrame(columns=['A'], index=xrange(1000), dtype='float')

%%timeit
d = pd.DataFrame({'A': np.zeros(1000)})
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
f0f0c5ebaef457361ad189c3504285a1
The problem here is that the DataFrame init is being called 7000 times because of the Aw BA factor finder. Maybe it's not worth using a DataFrame here; use a list or a NumPy array and then convert to a DataFrame when the factor is found, e.g.:
%%timeit
for _ in xrange(5000):
    d = pd.DataFrame(columns=['A'], index=xrange(1000))

%%timeit
for _ in xrange(5000):
    d = np.zeros(1000)
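As a rough sketch of the suggestion above (the names here are hypothetical stand-ins, not the actual pygypsy functions): accumulate the intermediate basal-area values in a plain NumPy array inside the search loop, and only build a DataFrame once, after the factor has converged.

```python
import numpy as np
import pandas as pd

def ba_from_zero_to_data(n_steps=1000):
    """Hypothetical stand-in for BAfromZeroToDataAw: fill a plain array, no DataFrame."""
    ba = np.zeros(n_steps)
    for t in range(1, n_steps):
        ba[t] = ba[t - 1] + 0.01  # placeholder increment, not the real growth equation
    return ba

# Inside the (hypothetical) factor-finder loop we only keep the array...
ba = ba_from_zero_to_data()

# ...and convert to a DataFrame once, after convergence, for the caller that needs one.
result = pd.DataFrame({'BA_Aw': ba})
```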
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
a5b4621fbeb2d83b48d6f37fd853c0cf
Review the code to see how this can be applied

The numpy/pure-python approach has potential, but there are a couple of issues for which the code must be examined. The problem comes from the following call chain:

simulate_forwards_df (called 1x) -> get_factors_for_all_species (called 10x, 1x per plot) -> BAfactorFinder_Aw (called 2x, 1x per plot that has aw) -> BAfromZeroToDataAw (called 7191 times, most of which in this chain) -> DataFrame.__init__ (called 7932 times, most of which in this chain)

...why does BAfromZeroToDataAw create a dataframe? It's good to see the code: First, simulate_forwards_df calls get_factors_for_all_species and then BAfromZeroToDataAw with some parameters and a simulation choice of False. Note that when simulation==False, that is the only time that the list is created; otherwise the list is left empty. Note also that simulation_choice defaults to True in forward simulation, i.e. for when the BAfromZeroToData__ functions are called from forward simulation. get_factors_for_all_species calls the factor finder functions for each species, if the species is present, and returns a dict of the factors. BAfactorFinder_Aw is the main suspect: for some reason aspen has a harder time converging, so the loop in this function runs many times. It calls BAfromZeroToDataAw with simulation_choice of 'yes' and simulation=True BUT IT ONLY USES THE 1ST RETURN VALUE.

Slow lambdas

The section below is left here for the record, but the time is actually spent in getitem, not so much in the callables applied, so that is an easy fix. With the df init improved by using an np array, the next suspect is the lambdas. The method for optimizing is generally to use cython; the functions themselves can be examined for opportunities: they are pretty basic, everything is a float.

``` python
def MerchantableVolumeAw(N_bh_Aw, BA_Aw, topHeight_Aw, StumpDOB_Aw,
                         StumpHeight_Aw, TopDib_Aw, Tvol_Aw):
    # ...
    if N_bh_Aw > 0:
        k_Aw = (BA_Aw * 10000.0 / N_bh_Aw)**0.5
    else:
        k_Aw = 0

    if k_Aw > 0 and topHeight_Aw > 0:
        b0 = 0.993673
        b1 = 923.5825
        b2 = -3.96171
        b3 = 3.366144
        b4 = 0.316236
        b5 = 0.968953
        b6 = -1.61247
        k1 = Tvol_Aw * (k_Aw**b0)
        k2 = (b1 * (topHeight_Aw**b2) * (StumpDOB_Aw**b3) * (StumpHeight_Aw**b4) * (TopDib_Aw**b5) * (k_Aw**b6)) + k_Aw
        MVol_Aw = k1/k2
    else:
        MVol_Aw = 0

    return MVol_Aw
```

``` python
def GrossTotalVolume_Aw(BA_Aw, topHeight_Aw):
    # ...
    Tvol_Aw = 0
    if topHeight_Aw > 0:
        a1 = 0.248718
        a2 = 0.98568
        a3 = 0.857278
        a4 = -24.9961
        Tvol_Aw = a1 * (BA_Aw**a2) * (topHeight_Aw**a3) * numpy.exp(1 + (a4 / ((topHeight_Aw**2) + 1)))
    return Tvol_Aw
```

Timings for getitem

There are a few ways to get an item from a series:
d = pd.Series(np.random.randint(0, 100, size=(100)),
              index=['%d' % d for d in xrange(100)])

%%timeit
d['1']

%%timeit
d.at['1']

%%timeit
d.loc['1']
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
c1af0b622ee1b3525845e2ed178a5811
loc or at are faster than [] indexing. Revise the code. Go on, do it. Review code changes
%%bash
git log --since 2016-11-09 --oneline

! git diff HEAD~23 ../gypsy
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
0a0d95601570c722afd7a5cd8e677a83
Tests Do tests still pass? Run timings
%%bash
# git checkout dev
# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp
# rm -rfd tmp

# real    8m18.753s
# user    8m8.980s
# sys     0m1.620s

%%bash
# after factoring dataframe out of zerotodata functions
# git checkout -b da080a79200f50d2dda7942c838b7f3cad845280 df-factored-out-zerotodata
# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp
# rm -rfd tmp

# real    5m51.028s
# user    5m40.130s
# sys     0m1.680s
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
5a1d684cec5bcd8242fdc36ac81af86a
Removing the data frame init gets a 25% time reduction
%%bash
# after using a faster indexing method for the arguments put into the apply functions
# git checkout 6b541d5fb8534d6fb055961a9d5b09e1946f0b46 -b applys-use-faster-getitem
# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp
# rm -rfd tmp

# real    6m16.021s
# user    5m59.620s
# sys     0m2.030s
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
c31e69fdbdc12344d353e9037fbc697b
Hm, this actually got worse, although it is a small sample. If anything, I suspect it's because we're calling row.at[] inside each apply call instead of assigning the variable outside the loop. It's OK as the code has less repetition; it's a good tradeoff.
%%bash
# after fixing `.at` redundancy - calling it in each apply call
# git checkout 4c978aff110001efdc917ed60cb611139e1b54c9 -b remove-getitem-redundancy
# time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp
# rm -rfd tmp

# real    5m36.407s
# user    5m25.740s
# sys     0m2.140s
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
e3e771f971340a2f622072424cbe3521
It doesn't totally remove the redundancy, we still get an attribute/value of an object, but now it's a dict instead of a pandas Series. Hopefully it's faster. Should have tested first using an MWE. It is moderately faster. Not much. Leave cython optimization for the next iteration. Run profiling
from gypsy.forward_simulation import simulate_forwards_df

data = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)

%%prun -D forward-sim-2.prof -T forward-sim-2.txt -q
result = simulate_forwards_df(data)

!head forward-sim-2.txt
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
80f6ef54fee93973b387781a4382fb29
Compare performance visualizations

Now use either of these commands to visualize the profiling:

```
pyprof2calltree -k -i forward-sim-1.prof forward-sim-1.txt
```

or

```
dc run --service-ports snakeviz notebooks/forward-sim-1.prof
```

Old / New

Summary of performance improvements: forward_simulation is now 2x faster than the last iteration, and 8x faster in total, due to the changes outlined in the code review section above. On my hardware this takes 1000 plots to ~4 minutes; on Carol's hardware it takes 1000 plots to ~13 minutes. For 1 million plots, we're looking at 2 to 9 days on desktop hardware.

Profile with I/O
! rm -rfd gypsy-output

output_dir = 'gypsy-output'

%%prun -D forward-sim-2.prof -T forward-sim-2.txt -q
# restart the kernel first
data = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)
result = simulate_forwards_df(data)
os.makedirs(output_dir)
for plot_id, df in result.items():
    filename = '%s.csv' % plot_id
    output_path = os.path.join(output_dir, filename)
    df.to_csv(output_path)
notebooks/#32-address-testing-findings/#32-isolated-profiling-2.ipynb
tesera/pygypsy
mit
8f412bbc92cef09fcfbda7ff338958c8
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
a = 5.0
b = 1.0

v = []
x = np.linspace(-3, 3, 50)
for i in x:
    v.append(hat(i, 5.0, 1.0))

plt.figure(figsize=(7, 5))
plt.plot(x, v)
plt.tick_params(top=False, right=False, direction='out')
plt.xlabel('x')
plt.ylabel('V(x)')
plt.title('V(x) vs. x');

assert True  # leave this to grade the plot
assignments/assignment11/OptimizationEx01.ipynb
bjshaw/phys202-2015-work
mit
dd32f0d4c0c2877365b47e7a2b1286ce
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima as red circles. Customize your visualization to make it beautiful and effective.
x1 = opt.minimize(hat, -1.8, args=(5.0, 1.0))['x']
x2 = opt.minimize(hat, 1.8, args=(5.0, 1.0))['x']
print(x1, x2)

v = []
x = np.linspace(-3, 3, 50)
for i in x:
    v.append(hat(i, 5.0, 1.0))

plt.figure(figsize=(7, 5))
plt.plot(x, v)
plt.scatter(x1, hat(x1, 5.0, 1.0), color='r', label='Local Minima')
plt.scatter(x2, hat(x2, 5.0, 1.0), color='r')
plt.tick_params(top=False, right=False, direction='out')
plt.xlabel('x')
plt.ylabel('V(x)')
plt.xlim(-3, 3)
plt.ylim(-10, 35)
plt.legend()
plt.title('V(x) vs. x');

assert True  # leave this for grading the plot
assignments/assignment11/OptimizationEx01.ipynb
bjshaw/phys202-2015-work
mit
ca5f80229d5b17ba478e112d38a25406
To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters. To find the minima of the equation $V(x) = -a x^2 + b x^4$, we first have to find the $x$ values where the slope is $0$. To do this, we compute the derivative, $V'(x)=-2ax+4bx^3$. Then we set $V'(x)=0$ and solve for $x$ with our parameters $a=5.0$ and $b=1.0$: $\hspace{15 mm}$ $0=-10x+4x^3$ $\Rightarrow$ $x=0$ or $10=4x^2$ $\Rightarrow$ $x^{2}=\frac{10}{4}$ $\Rightarrow$ $x=\pm \sqrt{\frac{10}{4}}$ (the root at $x=0$ is the local maximum, so the two minima are the $\pm$ roots). Computing $x$:
x_1 = np.sqrt(10/4)
x_2 = -np.sqrt(10/4)
print(x_1, x_2)
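As a quick cross-check, here is a sketch using SymPy (assumed to be installed; it is not used elsewhere in this notebook) that recovers the same critical points symbolically:

```python
import sympy as sp

x = sp.symbols('x', real=True)
a, b = sp.symbols('a b', positive=True)

V = -a * x**2 + b * x**4
critical_points = sp.solve(sp.diff(V, x), x)   # x = 0 and x = +/- sqrt(a / (2*b))
print([cp.evalf(subs={a: 5.0, b: 1.0}) for cp in critical_points])
```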
assignments/assignment11/OptimizationEx01.ipynb
bjshaw/phys202-2015-work
mit
cd0761f70ac4de4f8f38c58eb145c1ed
Extracting data of the Southwest states of the United States from 1992 - 2016. The following query will extract data from the MongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countyCode, structure type, type of wearing surface, and substructure.
def getData(state):
    pipeline = [{"$match": {"$and": [{"year": {"$gt": 1991, "$lt": 2017}}, {"stateCode": state}]}},
                {"$project": {"_id": 0,
                              "structureNumber": 1,
                              "yearBuilt": 1,
                              "yearReconstructed": 1,
                              "deck": 1,            ## Rating of deck
                              "year": 1,
                              'owner': 1,
                              "countyCode": 1,
                              "substructure": 1,    ## Rating of substructure
                              "superstructure": 1,  ## Rating of superstructure
                              "Structure Type": "$structureTypeMain.typeOfDesignConstruction",
                              "Type of Wearing Surface": "$wearingSurface/ProtectiveSystem.typeOfWearingSurface",
                              }}]
    dec = collection.aggregate(pipeline)
    conditionRatings = pd.DataFrame(list(dec))

    ## Creating new column: Age
    conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']
    return conditionRatings
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb
kaleoyster/nbi-data-science
gpl-2.0
8351f7efb18150b720d262afa189dbfe
Particularly in the area of determining a deterioration model of bridges, there is an observed sudden increase in the condition ratings of bridges over time. This sudden increase in the condition rating is attributed to the reconstruction of the bridges. The NBI dataset contains an attribute to record this reconstruction of the bridge. An observed increase in the condition rating of a bridge over time without any recorded reconstruction of that bridge in the NBI dataset suggests that the dataset is not updated consistently. In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted for in the deterioration model of the bridges.
def findSurvivalProbablities(conditionRatings):
    i = 1
    j = 2
    probabilities = []
    while j < 121:
        v = list(conditionRatings.loc[conditionRatings['Age'] == i]['deck'])
        k = list(conditionRatings.loc[conditionRatings['Age'] == i]['structureNumber'])
        Age1 = {key: int(value) for key, value in zip(k, v)}

        #v = conditionRatings.loc[conditionRatings['Age'] == j]
        v_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['deck'])
        k_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['structureNumber'])
        Age2 = {key: int(value) for key, value in zip(k_2, v_2)}

        intersectedList = list(Age1.keys() & Age2.keys())
        reconstructed = 0
        for structureNumber in intersectedList:
            if Age1[structureNumber] < Age2[structureNumber]:
                if (Age1[structureNumber] - Age2[structureNumber]) < -1:
                    reconstructed = reconstructed + 1
        try:
            probability = reconstructed / len(intersectedList)
        except ZeroDivisionError:
            probability = 0
        probabilities.append(probability*100)
        i = i + 1
        j = j + 1
    return probabilities
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb
kaleoyster/nbi-data-science
gpl-2.0
96c2fea0c7ff0c6057a18c0115ed183a
The following script will select all the bridges in the Southwest United States and filter out missing and unneeded data. The script also reports how much of the data is being filtered.
states = ['48', '40', '35', '04']

# Mapping state code to state abbreviation
stateNameDict = {'25':'MA', '04':'AZ', '08':'CO', '38':'ND',
                 '09':'CT', '19':'IA', '26':'MI', '48':'TX',
                 '35':'NM', '17':'IL', '51':'VA', '23':'ME',
                 '16':'ID', '36':'NY', '56':'WY', '29':'MO',
                 '39':'OH', '28':'MS', '11':'DC', '21':'KY',
                 '18':'IN', '06':'CA', '47':'TN', '12':'FL',
                 '24':'MD', '34':'NJ', '46':'SD', '13':'GA',
                 '55':'WI', '30':'MT', '54':'WV', '15':'HI',
                 '32':'NV', '37':'NC', '10':'DE', '33':'NH',
                 '44':'RI', '50':'VT', '42':'PA', '05':'AR',
                 '20':'KS', '45':'SC', '22':'LA', '40':'OK',
                 '72':'PR', '41':'OR', '27':'MN', '53':'WA',
                 '01':'AL', '31':'NE', '02':'AK', '49':'UT'}

def getProbs(states, stateNameDict):
    # Initializing the dataframes for deck, superstructure and substructure
    df_prob_recon = pd.DataFrame({'Age': range(1, 61)})
    df_cumsum_prob_recon = pd.DataFrame({'Age': range(1, 61)})
    for state in states:
        conditionRatings_state = getData(state)
        stateName = stateNameDict[state]
        print("STATE - ", stateName)
        conditionRatings_state = filterConvert(conditionRatings_state)
        print("\n")
        probabilities_state = findSurvivalProbablities(conditionRatings_state)
        cumsum_probabilities_state = np.cumsum(probabilities_state)
        df_prob_recon[stateName] = probabilities_state[:60]
        df_cumsum_prob_recon[stateName] = cumsum_probabilities_state[:60]
    #df_prob_recon.set_index('Age', inplace = True)
    #df_cumsum_prob_recon.set_index('Age', inplace = True)
    return df_prob_recon, df_cumsum_prob_recon

df_prob_recon, df_cumsum_prob_recon = getProbs(states, stateNameDict)
df_prob_recon.to_csv('prsouthwest.csv')
df_cumsum_prob_recon.to_csv('cprsouthwest.csv')
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb
kaleoyster/nbi-data-science
gpl-2.0
16725accb7bacdb1a13beb4bb43e9015
The following figures show the cumulative distribution function of the probability of reconstruction over the bridges' lifespan for bridges in the Southwest United States; as the bridges grow older, the probability of reconstruction increases.
plt.figure(figsize=(12, 8))
plt.title("CDF Probability of Reconstruction vs Age")

palette = ['blue', 'green', 'magenta', 'cyan', 'brown', 'grey',
           'red', 'silver', 'purple', 'gold', 'black', 'olive']
linestyles = [':', '-.', '--', '-', ':', '-.', '--', '-', ':', '-.', '--', '-']

for num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis=1)):
    plt.plot(df_cumsum_prob_recon[state],
             color=palette[num],
             linestyle=linestyles[num],
             linewidth=4)

plt.xlabel('Age');
plt.ylabel('Probablity of Reconstruction');
plt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis=1)],
           loc='upper left', ncol=2)
plt.ylim(1, 100)
plt.show()
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb
kaleoyster/nbi-data-science
gpl-2.0
a2ff71ce3e34ac8fdeae8a8357098dd5
The figure below presents the CDF probability of reconstruction for bridges in the Southwest United States.
plt.figure(figsize=(16, 12))
plt.xlabel('Age')
plt.ylabel('Mean')

# Initialize the figure
plt.style.use('seaborn-darkgrid')

# create a color palette
palette = ['blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',
           'red', 'silver', 'purple', 'gold', 'black', 'olive']

# multiple line plot
num = 1
linestyles = [':', '-.', '--', '-', ':', '-.', '--', '-', ':', '-.', '--', '-']
for n, column in enumerate(df_cumsum_prob_recon.drop('Age', axis=1)):
    # Find the right spot on the plot
    plt.subplot(4, 3, num)

    # Plot the lineplot
    plt.plot(df_cumsum_prob_recon['Age'], df_cumsum_prob_recon[column],
             linestyle=linestyles[n], color=palette[num],
             linewidth=4, alpha=0.9, label=column)

    # Same limits for everybody!
    plt.xlim(1, 60)
    plt.ylim(1, 100)

    # Add title
    plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
    plt.text(30, -1, 'Age', ha='center', va='center')
    plt.text(1, 50, 'Probability', ha='center', va='center', rotation='vertical')
    num = num + 1

# general title
plt.suptitle("CDF Probability of Reconstruction vs Age",
             fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb
kaleoyster/nbi-data-science
gpl-2.0
2ddd62ced72516c2f2f2df14e6e73861
A key observation in this investigation of several states is that a roughly constant number of bridges is reconstructed every year; this could be an effect of the fixed budget allocated for reconstruction by each state. It also highlights the fact that not all bridges that might require reconstruction are reconstructed. To understand this phenomenon more clearly, the following figure presents the probability of reconstruction vs age for each individual state in the Southwest United States.
plt.figure(figsize=(16, 12))
plt.xlabel('Age')
plt.ylabel('Mean')

# Initialize the figure
plt.style.use('seaborn-darkgrid')

# create a color palette
palette = ['blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',
           'red', 'silver', 'purple', 'gold', 'black', 'olive']

# multiple line plot
num = 1
linestyles = [':', '-.', '--', '-', ':', '-.', '--', '-', ':', '-.', '--', '-']
for n, column in enumerate(df_prob_recon.drop('Age', axis=1)):
    # Find the right spot on the plot
    plt.subplot(4, 3, num)

    # Plot the lineplot
    plt.plot(df_prob_recon['Age'], df_prob_recon[column],
             linestyle=linestyles[n], color=palette[num],
             linewidth=4, alpha=0.9, label=column)

    # Same limits for everybody!
    plt.xlim(1, 60)
    plt.ylim(1, 25)

    # Add title
    plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
    plt.text(30, -1, 'Age', ha='center', va='center')
    plt.text(1, 12.5, 'Probability', ha='center', va='center', rotation='vertical')
    num = num + 1

# general title
plt.suptitle("Probability of Reconstruction vs Age",
             fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Southwest+United+States.ipynb
kaleoyster/nbi-data-science
gpl-2.0
9a703466e9ff063b84f18085264d433a
Set the workspace loglevel to not print anything
wrk = op.Workspace()
wrk.loglevel = 50
PaperRecreations/Wu2010_part_a.ipynb
PMEAL/OpenPNM-Examples
mit
500b31f4f0097b411429efe588f25b63
Convert a grid from one format to another We will start with a common simple task, converting a grid from one format to another. Geosoft supports many common geospatial grid formats, which can all be opened as a geosoft.gxpy.grid.Grid instance. Different formats and characteristics are specified using grid decorations, which are appended to the grid file name. See Grid File Name Decorations for all supported grid and image types and how to decorate the grid file name. Problem: You have a grid in a Geosoft-supported format, and you need the grid in some other format to use in a different application. Grid: elevation_surfer.grd, which is a Surfer v7 format grid file. Approach: Open the surfer grid with decoration (SRF;VER=V7). Use the gxgrid.Grid.copy class method to create an ER Mapper grid, which will have decoration (ERM).
# open surfer grid
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:

    # copy the grid to an ER Mapper format grid file
    with gxgrid.Grid.copy(grid_surfer, 'elevation.ers(ERM)', overwrite=True) as grid_erm:
        print('file:', grid_erm.file_name, '\ndecorated:', grid_erm.file_name_decorated)
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
1970df6d4082fb607d559aa6d47fc914
Working with Grid instances You work with a grid using a geosoft.gxpy.grid.Grid instance, which is a spatial dataset sub-class of a geosoft.gxpy.geometry.Geometry. In Geosoft, all spatial objects are sub-classed from the Geometry class, and all Geometry instances have a coordinate system and spatial extents. Other spatial datasets include Geosoft databases (gdb files), voxels (geosoft_voxel files), surfaces (geosoft_surface files), 2d views, which are contained in Geosoft map files, and 3d views, which can be contained in a Geosoft map file or a geosoft_3dv file. Dataset instances will usually be associated with a file on your computer and, like Python files, you should open and work with datasets using the Python with statement, which ensures that the instance and associated resources are freed after the with statement loses context. For example, the following shows two identical ways to work with a grid instance, though the with form is preferred:
# open surfer grid, then set to None to free resources
grid_surfer = gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)')
print(grid_surfer.name)
grid_surfer = None

# open surfer grid using with
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:
    print(grid_surfer.name)
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
b009086afa53899f4cd8b3b90b9a89fd
Displaying a grid One often needs to see what a grid looks like, and this is accomplished by displaying the grid as an image in which the colours represent data ranges. A simple way to do this is to create a grid image file as a png file using the image_file() method. In this example we create a shaded image with default colouring, and we create a 500 pixel-wide image:
image_file = gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)').image_file(shade=True, pix_width=500)
Image(image_file)
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
e966698c8de9331079db0f6a7db6e69b
A nicer image might include a neat-line outline, colour legend, scale bar and title. The gxgrid.figure_map() function will create a figure-style map, which can be saved to an image file using the image_file() method of the map instance.
image_file = gxgrid.figure_map('elevation_surfer.grd(SRF;VER=V7)', title='Elevation').image_file(pix_width=800)
Image(image_file)
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
9f4fc60400ee7c524a39a3d05d67d984
Grid Coordinate System In Geosoft all spatial data should have a defined coordinate system which allows data to be located on the Earth. This also takes advantage of Geosoft's ability to reproject data as required. However, in this example the Surfer grid does not store the coordinate system information, but we know that the grid uses projection 'UTM zone 54S' on datum 'GDA94'. Let's modify this script to set the coordinate system, which will be saved as part of the ER Mapper grid, which does have the ability to store the coordinate system description. In Geosoft, well-known coordinate systems like this can be described using the form 'GDA94 / UTM zone 54S', which conforms to the SEG Grid Exchange Format standard for describing coordinate systems. You only need to set the coordinate_system property of the grid_surfer instance.
# define the coordinate system of the Surfer grid
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid_surfer:
    grid_surfer.coordinate_system = 'GDA94 / UTM zone 54S'

    # copy the grid to an ER Mapper format grid file and the coordinate system is transferred
    with gxgrid.Grid.copy(grid_surfer, 'elevation.ers(ERM)', overwrite=True) as grid_erm:
        print(str(grid_erm.coordinate_system))
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
50361a2e9c00c4a48df2a75013d8100a
Coordinate systems also contain the full coordinate system parameter information, from which you can construct coordinate systems in other applications.
with gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:
    print('Grid Exchange Format coordinate system:\n', grid_erm.coordinate_system.gxf)

with gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:
    print('ESRI WKT format:\n', grid_erm.coordinate_system.esri_wkt)

with gxgrid.Grid.open('elevation.ers(ERM)') as grid_erm:
    print('JSON format:\n', grid_erm.coordinate_system.json)
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
9bc28db8b68b65dd8694aef1539d14e2
Display with coordinate systems The grids now have known coordinate systems and displaying the grid will show the coordinate system on the scale bar. We can also annotate geographic coordinates. This requires a Geosoft Desktop License.
# show the grid as an image
Image(gxgrid.figure_map('elevation.ers(ERM)',
                        features=('NEATLINE', 'SCALE', 'LEGEND', 'ANNOT_LL')).image_file(pix_width=800))
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
547a828cc7834c9ba12b5f5177b633f1
Basic Grid Statistics In this exercise we will work with the data stored in a grid. One common need is to determine some basic statistical information about the grid data, such as the minimum, maximum, mean and standard deviation. This exercise will work with the grid data in a number of ways that demonstrate some useful patterns. Statistics using numpy The smallest code and most efficient approach is to read the grid into a numpy array and then use the optimized numpy methods to determine statistics. This has the benefit of speed and simplicity at the expense of memory, which may be a concern for very large grids, though on modern 64-bit computers with most grids this would be the approach of choice.
import numpy as np

# open the grid, using the with construct ensures resources are released
with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:

    # get the data in a numpy array
    data_values = grid.xyzv()[:, :, 3]

# print statistical properties
print('minimum: ', np.nanmin(data_values))
print('maximum: ', np.nanmax(data_values))
print('mean: ', np.nanmean(data_values))
print('standard deviation: ', np.nanstd(data_values))
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
e0bed30e5689886d07e2bbe7be8bfe1d
Statistics using Geosoft VVs Many Geosoft methods will work with a geosoft.gxpy.vv.GXvv, which wraps the geosoft.gxapi.GXVV class that deals with very long single-value vectors. The Geosoft GXVV methods work with Geosoft data types and, like numpy, are optimized to take advantage of multi-core processors to improve performance. The pattern in this exercise reads a grid one grid row at a time, returning a GXvv instance, and accumulates statistics in an instance of the geosoft.gxapi.GXST class.
import geosoft.gxapi as gxapi

# the GXST class requires a desktop license
if gxc.entitled:

    # create a gxapi.GXST instance to accumulate statistics
    stats = gxapi.GXST.create()

    # open the grid
    with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:

        # add data from each row to the stats instance
        for row in range(grid.ny):
            stats.data_vv(grid.read_row(row).gxvv)

    # print statistical properties
    print('minimum: ', stats.get_info(gxapi.ST_MIN))
    print('maximum: ', stats.get_info(gxapi.ST_MAX))
    print('mean: ', stats.get_info(gxapi.ST_MEAN))
    print('standard deviation: ', stats.get_info(gxapi.ST_STDDEV))
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
f1de40dee322618c63689a7881b2401d
Grid Iterator A grid instance also behaves as an iterator that works through the grid data points by row, then by column, each iteration returning the (x, y, z, grid_value). In this example we will iterate through all points in the grid and accumulate the statistics a point at a time. This is the least efficient way to work through a grid, but the pattern can be useful to deal with a very simple need. For example, any Geosoft-supported grid can easily be converted to an ASCII file that lists the (x, y, z, grid_value) for all points in the grid.
# the GXST class requires a desktop license
if gxc.entitled:

    # create a gxapi.GXST instance to accumulate statistics
    stats = gxapi.GXST.create()

    # add each data to stats point-by-point (slow, better to use numpy or vector approach)
    number_of_dummies = 0
    with gxgrid.Grid.open('elevation_surfer.grd(SRF;VER=V7)') as grid:
        for x, y, z, v in grid:
            if v is None:
                number_of_dummies += 1
            else:
                stats.data(v)
        total_points = grid.nx * grid.ny

    # print statistical properties
    print('minimum: ', stats.get_info(gxapi.ST_MIN))
    print('maximum: ', stats.get_info(gxapi.ST_MAX))
    print('mean: ', stats.get_info(gxapi.ST_MEAN))
    print('standard deviation: ', stats.get_info(gxapi.ST_STDDEV))
    print('number of dummies: ', number_of_dummies)
    print('number of valid data points: ', total_points - number_of_dummies)
examples/jupyter_notebooks/Tutorials/Grids and Images.ipynb
GeosoftInc/gxpy
bsd-2-clause
4312338c4da6eb77e649c09dc2cebd29
AGB and massive star tables used
table='yield_tables/agb_and_massive_stars_nugrid_MESAonly_fryer12delay.txt'
DOC/Teaching/ExtraSources.ipynb
NuGrid/NuPyCEE
bsd-3-clause
b5f936861d59e02306fa341db45b5a96
Setup
# OMEGA parameters for MW
mass_loading = 0.0
nb_1a_per_m = 3.0e-3
sfe = 0.04
SF_law = True
DM_evolution = False
imf_yields_range = [1.0, 30.0]
special_timesteps = 30
Z_trans = 0.0
iniZ = 0.0001
DOC/Teaching/ExtraSources.ipynb
NuGrid/NuPyCEE
bsd-3-clause
1b36e5769b9b23ebd209fdb4fabc9d13
Default setup
o0 = o.omega(iniZ=iniZ, galaxy='milky_way', Z_trans=Z_trans, table=table, sfe=sfe,
             DM_evolution=DM_evolution, mass_loading=mass_loading,
             nb_1a_per_m=nb_1a_per_m, special_timesteps=special_timesteps,
             imf_yields_range=imf_yields_range, SF_law=SF_law)
DOC/Teaching/ExtraSources.ipynb
NuGrid/NuPyCEE
bsd-3-clause
2be502550ee4b73a6792512f5c0c50d2
Setup with different extra sources Here we use yields in two (extra source) yield tables which we apply in the mass range from 8Msun to 12Msun and from 12Msun to 30Msun respectively. We apply a factor of 0.5 to the extra yields of the first yield table and 1. to the second yield table.
extra_source_table = ['yield_tables/r_process_arnould_2007.txt',
                      'yield_tables/r_process_arnould_2007.txt']

# Apply yields only in specific mass ranges
extra_source_mass_range = [[8, 12], [12, 30]]

# percentage of stars to which the yields are added. First entry for first yield table etc.
f_extra_source = [0.5, 1.]

# metallicity to exclude (in this case none)
extra_source_exclude_Z = [[], []]

# you can look at the yields directly with the y1 and y2 parameters below.
y1 = ry.read_yields_Z("./NuPyCEE/" + extra_source_table[0])
y2 = ry.read_yields_Z("./NuPyCEE/" + extra_source_table[1])
DOC/Teaching/ExtraSources.ipynb
NuGrid/NuPyCEE
bsd-3-clause
4aecb976e9bc6a8dcf5ce57a067d2154
SYGMA
s0 = s.sygma(iniZ=0.0001, extra_source_on=False)  # default False

s0p1 = s.sygma(iniZ=0.0001, extra_source_on=True,
               extra_source_table=extra_source_table,
               extra_source_mass_range=extra_source_mass_range,
               f_extra_source=f_extra_source,
               extra_source_exclude_Z=extra_source_exclude_Z)
DOC/Teaching/ExtraSources.ipynb
NuGrid/NuPyCEE
bsd-3-clause
d0c263399cd1679dda6491e958fcb9ec
OMEGA
o0p1 = o.omega(iniZ=iniZ, galaxy='milky_way', Z_trans=Z_trans, table=table, sfe=sfe,
               DM_evolution=DM_evolution, mass_loading=mass_loading,
               nb_1a_per_m=nb_1a_per_m, special_timesteps=special_timesteps,
               imf_yields_range=imf_yields_range, SF_law=SF_law,
               extra_source_on=True,
               extra_source_table=extra_source_table,
               extra_source_mass_range=extra_source_mass_range,
               f_extra_source=f_extra_source,
               extra_source_exclude_Z=extra_source_exclude_Z)
DOC/Teaching/ExtraSources.ipynb
NuGrid/NuPyCEE
bsd-3-clause
cc8f4726cee60b2e292b008731a05487
Now we do some data cleaning and remove all rows where Longitude and Latitude are 'null'.
df = df[df['Longitude'].notnull()]
df = df[df['Latitude'].notnull()]

# will display all rows that have null values
#df[df.isnull().any(axis=1)]
geojson/geojson_stations.ipynb
rueedlinger/python-snippets
mit
392261fcda4e00bf6cd788957636e01a
Convert pandas data frame to GeoJSON Next we convert the pandas data frame to GeoJSON objects (FeatureCollection/Feature/Point).
import geojson as geojson

values = zip(df['Longitude'], df['Latitude'], df['Remark'])
points = [geojson.Feature(geometry=geojson.Point((v[0], v[1])), properties={'name': v[2]}) for v in values]
geo_collection = geojson.FeatureCollection(points)

print(points[0])
geojson/geojson_stations.ipynb
rueedlinger/python-snippets
mit
3dac170f288055cca4d9376385444580
Save the GeoJSON (FeatureCollection) to a file Finally we dump the GeoJSON objects to a file.
dump = geojson.dumps(geo_collection, sort_keys=True)

'''
with open('stations.geojson', 'w') as file:
    file.write(dump)
'''
geojson/geojson_stations.ipynb
rueedlinger/python-snippets
mit
59a8ace7522e81f037f6067493a9bb3c
Explore the Data Play around with view_sentence_range to view different parts of the data.
view_sentence_range = (8, 100)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))

scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
9444a1a79b34e8b9f2db4b29fa6a75a7
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    words = set()
    index_to_word = {}
    word_to_index = {}
    for word in text:
        words.add(word)
    for index, word in enumerate(words):
        #print(word, index)
        index_to_word[index] = word
        word_to_index[word] = index
    return word_to_index, index_to_word

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
adfb3d7f9cd3691e6c95a852296ee2b7
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    ret = {}
    ret['.'] = "||Period||"              # ( . )
    ret[','] = "||Comma||"               # ( , )
    ret['"'] = "||Quotation_Mark||"      # ( " )
    ret[';'] = "||Semicolon||"           # ( ; )
    ret['!'] = "||Exclamation_mark||"    # ( ! )
    ret['?'] = "||Question_mark||"       # ( ? )
    ret['('] = "||Left_Parentheses||"    # ( ( )
    ret[')'] = "||Right_Parentheses||"   # ( ) )
    ret['--'] = "||Dash||"               # ( -- )
    ret['\n'] = "||Return||"             # ( \n )
    return ret

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
a7b119eca5873e2cd4defd70895f0216
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    learning_rate = tf.placeholder(tf.float32, None, name="LearningRate")
    return inputs, targets, learning_rate

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
a867ae53cf06cc24afb30d844f7c0178
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initialize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    layer_count = 2
    keep_prob = tf.constant(0.7, tf.float32, name="keep_prob")
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
    lstm2 = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)
    dropout = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    cell = tf.contrib.rnn.MultiRNNCell([lstm, lstm2], state_is_tuple=True)
    initial_state = cell.zero_state(batch_size, tf.float32)
    initial_state = tf.identity(initial_state, name="initial_state")
    #_outputs, final_state = tf.nn.rnn(cell, rnn_inputs, initial_state=init_state)
    return cell, initial_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
b8b48d87e66dd76f3f907b8f5effa483
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
import random
import math

def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # TODO: Implement Function
    ret = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    ret = tf.nn.embedding_lookup(ret, input_data)
    print("shape {}".format(ret.get_shape().as_list()))
    return ret

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
244d38fa8b69423ab1cbe79d6c50cb64
Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. - Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, "final_state")
    return output, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
1c0e18e850b82e829f5ff8122f6b8cf2
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # TODO: Implement Function
    embedded = get_embed(input_data, vocab_size, rnn_size)
    out, fin = build_rnn(cell, embedded)
    out = tf.contrib.layers.fully_connected(out, vocab_size, activation_fn=None)
    out_shape = out.get_shape().as_list()
    print("build_nn embedded{}, out:{}, fin:{}".format(embedded.get_shape().as_list(), out_shape, fin.get_shape().as_list()))
    print()
    return out, fin

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
4ccfaa9e3ab3d70f277fb975a5051799
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:

```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2  3], [ 7  8  9]],
    # Batch of targets
    [[ 2  3  4], [ 8  9 10]]
  ],

  # Second Batch
  [
    # Batch of Input
    [[ 4  5  6], [10 11 12]],
    # Batch of targets
    [[ 5  6  7], [11 12 13]]
  ]
]
```
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Function
    text = int_text
    ret = np.array([])
    inputs = []
    targets = []
    text_len = len(text) - len(text) % (seq_length*batch_size)
    print("get_batches text:{}, batch:{}, seq:{}".format(text_len, batch_size, seq_length))

    ret = []
    for i in range(0, text_len-1, seq_length):
        seq = list(int_text[i:i+seq_length])
        inputs.append(list(int_text[i:i+seq_length]))
        targets.append(list(int_text[i+1:i+seq_length+1]))

    for i in range(0, len(inputs), batch_size):
        pos = batch_size
        #batch_pair = n
        ret.append([inputs[i:i+batch_size], targets[i:i+batch_size]])

    ret = np.asanyarray(ret)
    print("batch test ", ret.shape, ret[3, :, 2])
    return ret

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
50c41da68e6192b94712c1567164d2c5
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs
num_epochs = 300  # previously 150, but want to get lower loss.
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 1024
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = 12  # already discouraged from using 6 and 16, avg sentence length being 10-12

# I'm favoring this formula from the curse of learning rate being a function of parameter count.
# This is guess work (empirical), but gives good results.
learning_rate = 1/np.sqrt(rnn_size*seq_length*6700)
print("learning rate {}, vocab_size {}".format(learning_rate, 6700))

"""
100
inf
0.0012   -- 1.666   860-1210: 1.259
0.00012  -- 5.878   1920-2190: 1.070
0.000012    7.4     3000: 2.107
0.00012  -- 6.047   3000: 0.964 -- embedding w truncated normal.

1024
0.00812 -- 1.182 stuck
0.00612 -- 0.961 stuck
"""

# Show stats for every n number of batches
show_every_n_batches = 20

tf.set_random_seed(42)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
f078b7fd588b6b1b3f53630f7aa4e2ed
Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    # TODO: Implement Function
    inputs = loaded_graph.get_tensor_by_name("input:0")
    initials = loaded_graph.get_tensor_by_name("initial_state:0")
    finals = loaded_graph.get_tensor_by_name("final_state:0")
    probs = loaded_graph.get_tensor_by_name("probs:0")
    return inputs, initials, finals, probs

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
d4f619e57b7e825a31de4439688922c4
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilites of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # TODO: Implement Function
    # As suggested by the last reviewer - tuning randomness
    #print("probabs:{}, - {}".format(probabilities.shape, int_to_vocab[np.argmax(probabilities)]))
    mostprobable = np.argsort(probabilities)
    ret = np.random.choice(mostprobable[-3:], 1, p=[0.1, 0.2, 0.7])
    return int_to_vocab[ret[0]]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
tv-script-generation/dlnd_tv_script_generation.ipynb
rally12/deep-learning
mit
e982415decc8ceacfce88a04f68cb12c
Exercise Compute some basic descriptive statistics about the graph, namely: the number of nodes, the number of edges, the graph density, and the distribution of degree centralities in the graph.
# Number of nodes:
len(G.nodes())

# Number of edges:
len(G.edges())

# Graph density:
nx.density(G)

# Degree centrality distribution:
list(nx.degree_centrality(G).values())[0:5]
archive/bonus-1-network-statistical-inference-instructor.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
e419e5211ffdbf8ca840b5e2905145a9
How are protein-protein networks formed? Are they formed by an Erdos-Renyi process, or something else? In the G(n, p) model, a graph is constructed by connecting nodes randomly. Each edge is included in the graph with probability p independent from every other edge. If protein-protein networks are formed by an E-R process, then we would expect that properties of the protein-protein graph would look statistically similar to those of an actual E-R graph. Exercise Make an ECDF of the degree centralities for the protein-protein interaction graph, and the E-R graph. - The construction of an E-R graph requires a value for n and p. - A reasonable number for n is the number of nodes in our protein-protein graph. - A reasonable value for p might be the density of the protein-protein graph.
ppG_deg_centralities = list(nx.degree_centrality(G).values())
plt.plot(*ecdf(ppG_deg_centralities))

erG = nx.erdos_renyi_graph(n=len(G.nodes()), p=nx.density(G))
erG_deg_centralities = list(nx.degree_centrality(erG).values())
plt.plot(*ecdf(erG_deg_centralities))

plt.show()
archive/bonus-1-network-statistical-inference-instructor.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
ea57d3fe4edb863140e63a8d0439f31d
One of the most effective uses of pandas is the ease with which we can select rows and columns in different ways. Here's how we do it: To access the columns, there are three different ways we can do it, these are: data_set_var[ "column-name" ] <data_set_var>.<column-name> We can add columns too, say we rank them: <data_set_var>["new-column-name"] = < list of values >
# Add a new column
brics["on_earth"] = [True, True, True, True, True]

# Print them
brics

# Manipulating columns
"""Columns can be manipulated using arithmetic operations on other columns"""
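To make the column-access forms above concrete, here is a minimal sketch (assuming the brics DataFrame used in this course has a country column):

```python
# Single brackets return a pandas Series
brics["country"]

# Dot notation works for simple column names
brics.country

# Double brackets return a DataFrame instead of a Series
brics[["country"]]
```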
Courses/DAT-208x/DAT208x - Week 6 - Pandas.ipynb
dataDogma/Computer-Science
gpl-3.0
9100a3181cd744919f10ea2b2f487412
Accessing Rows: Syntax: dataframe.loc[ <"row name"> ] Go to top:TOC Element access To get just one element in the table, we can specify both the column and row label in loc[]. Syntax: dataframe.loc[ <"row-name">, <"column-name"> ] dataframe[ <"column-name"> ].loc[ <"row-name"> ] dataframe.loc[ <"row-name"> ][ <"column-name"> ] Lab: Objective: Practice importing data into Python as a Pandas DataFrame. Practice accessing rows and columns. Lab content: CSV to DataFrame1 CSV to DataFrame2 Square Brackets Loc1 Loc2 Go to:TOC CSV to DataFrame1 Preface: The DataFrame is one of Pandas' most important data structures. It's basically a way to store tabular data, where you can label the rows and the columns. In the exercises that follow, you will be working with vehicle data from different countries. Each observation corresponds to a country, and the columns give information about the number of vehicles per capita, whether people drive left or right, and so on. This data is available in a CSV file, named cars.csv. It is available in your current working directory, so the path to the file is simply 'cars.csv'. To import CSV data into Python as a Pandas DataFrame, you can use read_csv(). Instructions: To import CSV files, you still need the pandas package: import it as pd. Use pd.read_csv() to import cars.csv data as a DataFrame. Store this dataframe as cars. Print out cars. Does everything look OK?
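Before the lab exercises, here is a short sketch of the row and element access patterns described above (again using the brics DataFrame; the row label "RU" and the country column are illustrative assumptions):

```python
# Access a row by its label
brics.loc["RU"]

# Access a single element: row label plus column label
brics.loc["RU", "country"]

# Equivalent chained forms
brics["country"].loc["RU"]
brics.loc["RU"]["country"]
```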
""" # Import pandas as pd import pandas as pd # Import the cars.csv data: cars cars = pd.read_csv("cars.csv") # Print out cars print(cars) """
Courses/DAT-208x/DAT208x - Week 6 - Pandas.ipynb
dataDogma/Computer-Science
gpl-3.0
d49cbf6398ed86c0ed068ae4975b78a8
CSV to DataFrame2 Preface: We have a slight problem: the row labels are imported as another column that has no name. To fix this issue, we are going to pass the argument index_col = 0 to read_csv(). This is used to specify which column in the CSV file should be used as the row labels. Instructions: Run the code with Submit Answer and assert that the first column should actually be used as row labels. Specify the index_col argument inside pd.read_csv(): set it to 0, so that the first column is used as row labels. Has the printout of cars improved now? Go to top:TOC
""" # Import pandas as pd import pandas as pd # Import the cars.csv data: cars cars = pd.read_csv("cars.csv", index_col=0) # Print out cars print(cars) """
Courses/DAT-208x/DAT208x - Week 6 - Pandas.ipynb
dataDogma/Computer-Science
gpl-3.0
c1dee3a9b1ce8d90329dfbfee880c3be
Square Brackets Preface Selecting columns can be done in two ways. variable_containing_CSV_file['column-name'] variable_containing_CSV_file[['column-name']] The former gives a pandas Series, whereas the latter gives a pandas DataFrame. Instructions: Use single square brackets to print out the country column of cars as a Pandas Series. Use double square brackets to print out the country column of cars as a Pandas DataFrame. Do this by putting country in two square brackets this time.
""" # Import cars data import pandas as pd cars = pd.read_csv('cars.csv', index_col = 0) # Print out country column as Pandas Series print( cars['country']) # Print out country column as Pandas DataFrame print( cars[['country']]) """
Courses/DAT-208x/DAT208x - Week 6 - Pandas.ipynb
dataDogma/Computer-Science
gpl-3.0
fbcf613a40db6b067d5376684bccf442
Loc1 With loc we can do practically any data selection operation on DataFrames you can think of. loc is label-based, which means that you have to specify rows and columns based on their row and column labels. Instructions: Use loc to select the observation corresponding to Japan as a Series. The label of this row is JAP. Make sure to print the resulting Series. Use loc to select the observations for Australia and Egypt as a DataFrame.
""" # Import cars data import pandas as pd cars = pd.read_csv('cars.csv', index_col = 0) # Print out observation for Japan print( cars.loc['JAP'] ) # Print out observations for Australia and Egypt print( cars.loc[ ['AUS', 'EG'] ]) """
Courses/DAT-208x/DAT208x - Week 6 - Pandas.ipynb
dataDogma/Computer-Science
gpl-3.0
0b59b7c55c8746d5ddcbd7f96b61afc4
Data Take the data from https://www.kaggle.com/c/shelter-animal-outcomes . Note that this time we have many classes; read the Evaluation section to see how the final score is computed. Visualization <div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Task 1.</h3> </div> </div> By building the necessary plots, find out whether an animal's age, sex or fertility affects its chances of being taken from the shelter. Let's prepare the data
visual = pd.read_csv('data/CatsAndDogs/TRAIN2.csv')

# Make an Outcome column showing whether the animal was taken from the shelter or not
# First fill it with 'true', as if every case ended well
visual['Outcome'] = 'true'
# Mark the unsuccessful cases as 'false'
visual.loc[visual.OutcomeType == 'Euthanasia', 'Outcome'] = 'false'
visual.loc[visual.OutcomeType == 'Died', 'Outcome'] = 'false'

# Replace rows where SexuponOutcome is NaN with something meaningful
visual.loc[visual.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'

# Make two separate columns for sex and fertility
visual['Gender'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
visual['Fertility'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[0])
3. Котики и собачки.ipynb
lithiumdenis/MLSchool
mit
133e86fb0d17b737ad426dcdcf2acada
<b>Conclusion on age:</b> animals that are neither the oldest nor the youngest are adopted more readily <br> <b>Conclusion on sex:</b> by and large it does not matter <br> <b>Conclusion on fertility:</b> animals with intact reproductive abilities are adopted more readily. However, the two remaining groups do not differ much in essence and, if they are combined, the difference is not that large. Building models <div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Task 2.</h3> </div> </div> Look at the notebook on generating new features. Create as many relevant features as possible from all the available ones. Don't forget to process the held-out set (test) in parallel, so that it has exactly the same features as the training set. <b>Take the original data</b>
train, test = pd.read_csv(
    'data/CatsAndDogs/TRAIN2.csv' # our data
    #'data/CatsAndDogs/train.csv' # original data
), pd.read_csv(
    'data/CatsAndDogs/TEST2.csv' # our data
    #'data/CatsAndDogs/test.csv' # original data
)
train.head()
test.shape
3. Котики и собачки.ipynb
lithiumdenis/MLSchool
mit
9fc9e78774c6c74ed441cf9aaff3363a
<b>Let's add new features to train</b>
# First, by analogy with the visualization step
# Replace rows where SexuponOutcome, Breed or Color is NaN
train.loc[train.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'
train.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0'
train.loc[train.Breed.isnull(), 'Breed'] = 'Unknown'
train.loc[train.Color.isnull(), 'Color'] = 'Unknown'

# Make two separate columns for sex and fertility
train['Gender'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
train['Fertility'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[0])

# Now something new
# A column that marks whether the animal has a name or not
train['hasName'] = 1
train.loc[train.Name.isnull(), 'hasName'] = 0

# A column that combines breed and color
train['breedColor'] = train.apply(lambda row: row['Breed'] + ' ' + str(row['Color']), axis=1)

# Decompose DateTime
# First, convert the column from string to datetime type
train['DateTime'] = pd.to_datetime(train['DateTime'])
# And now decompose it
train['dayOfWeek'] = train.DateTime.apply(lambda dt: dt.dayofweek)
train['month'] = train.DateTime.apply(lambda dt: dt.month)
train['day'] = train.DateTime.apply(lambda dt: dt.day)
train['quarter'] = train.DateTime.apply(lambda dt: dt.quarter)
train['hour'] = train.DateTime.apply(lambda dt: dt.hour)
train['minute'] = train.DateTime.apply(lambda dt: dt.minute)
train['year'] = train.DateTime.apply(lambda dt: dt.year)

# Splitting the age
# Make two separate columns: the number and the unit (year/month/week)
train['AgeuponFirstPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[0])
train['AgeuponSecondPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[-1])

# Roughly convert years, months and weeks into days, allowing for plural endings
train['AgeuponSecondPartInDays'] = 0
train.loc[train.AgeuponSecondPart == 'year', 'AgeuponSecondPartInDays'] = 365
train.loc[train.AgeuponSecondPart == 'years', 'AgeuponSecondPartInDays'] = 365
train.loc[train.AgeuponSecondPart == 'month', 'AgeuponSecondPartInDays'] = 30
train.loc[train.AgeuponSecondPart == 'months', 'AgeuponSecondPartInDays'] = 30
train.loc[train.AgeuponSecondPart == 'week', 'AgeuponSecondPartInDays'] = 7
train.loc[train.AgeuponSecondPart == 'weeks', 'AgeuponSecondPartInDays'] = 7

# Convert the columns from string to numeric type
train['AgeuponFirstPart'] = pd.to_numeric(train['AgeuponFirstPart'])
train['AgeuponSecondPartInDays'] = pd.to_numeric(train['AgeuponSecondPartInDays'])
# And now get a proper lifetime in days
train['LifetimeInDays'] = train['AgeuponFirstPart'] * train['AgeuponSecondPartInDays']

# Drop the now-meaningless intermediate columns
train = train.drop(['AgeuponSecondPartInDays', 'AgeuponSecondPart', 'AgeuponFirstPart'], axis=1)

train.head()
3. Котики и собачки.ipynb
lithiumdenis/MLSchool
mit
7d8853c00b92a957b78a2630df602764
<div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Task 3.</h3> </div> </div> Perform feature selection and try different methods. Check the quality with cross-validation. Print the most important and the least significant features. Data preprocessing
np.random.seed(1234)

from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing

##################### Replace NaN values with the word Unknown ##################
# Remove NaN values from train
train.loc[train.AnimalID.isnull(), 'AnimalID'] = 'Unknown'
train.loc[train.Name.isnull(), 'Name'] = 'Unknown'
train.loc[train.OutcomeType.isnull(), 'OutcomeType'] = 'Unknown'
train.loc[train.AnimalType.isnull(), 'AnimalType'] = 'Unknown'
train.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'
train.loc[train.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'

# Remove NaN values from test
test.loc[test.AnimalID.isnull(), 'AnimalID'] = 'Unknown'
test.loc[test.Name.isnull(), 'Name'] = 'Unknown'
test.loc[test.AnimalType.isnull(), 'AnimalType'] = 'Unknown'
test.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'
test.loc[test.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'

##################### Encode strings as numbers ################################
# Encode AnimalID as numbers instead of names in test & train
#encAnimalID = preprocessing.LabelEncoder()
#encAnimalID.fit(pd.concat((test['AnimalID'], train['AnimalID'])))
#test['AnimalID'] = encAnimalID.transform(test['AnimalID'])
#train['AnimalID'] = encAnimalID.transform(train['AnimalID'])

# Encode Name as numbers instead of names in test & train
encName = preprocessing.LabelEncoder()
encName.fit(pd.concat((test['Name'], train['Name'])))
test['Name'] = encName.transform(test['Name'])
train['Name'] = encName.transform(train['Name'])

# Encode DateTime as numbers instead of names in test & train
encDateTime = preprocessing.LabelEncoder()
encDateTime.fit(pd.concat((test['DateTime'], train['DateTime'])))
test['DateTime'] = encDateTime.transform(test['DateTime'])
train['DateTime'] = encDateTime.transform(train['DateTime'])

# Encode OutcomeType as numbers instead of names in train only, since test has no OutcomeType
encOutcomeType = preprocessing.LabelEncoder()
encOutcomeType.fit(train['OutcomeType'])
train['OutcomeType'] = encOutcomeType.transform(train['OutcomeType'])

# Encode AnimalType as numbers instead of names in test & train
encAnimalType = preprocessing.LabelEncoder()
encAnimalType.fit(pd.concat((test['AnimalType'], train['AnimalType'])))
test['AnimalType'] = encAnimalType.transform(test['AnimalType'])
train['AnimalType'] = encAnimalType.transform(train['AnimalType'])

# Encode SexuponOutcome as numbers instead of names in test & train
encSexuponOutcome = preprocessing.LabelEncoder()
encSexuponOutcome.fit(pd.concat((test['SexuponOutcome'], train['SexuponOutcome'])))
test['SexuponOutcome'] = encSexuponOutcome.transform(test['SexuponOutcome'])
train['SexuponOutcome'] = encSexuponOutcome.transform(train['SexuponOutcome'])

# Encode AgeuponOutcome as numbers instead of names in test & train
encAgeuponOutcome = preprocessing.LabelEncoder()
encAgeuponOutcome.fit(pd.concat((test['AgeuponOutcome'], train['AgeuponOutcome'])))
test['AgeuponOutcome'] = encAgeuponOutcome.transform(test['AgeuponOutcome'])
train['AgeuponOutcome'] = encAgeuponOutcome.transform(train['AgeuponOutcome'])

# Encode Breed as numbers instead of names in test & train
encBreed = preprocessing.LabelEncoder()
encBreed.fit(pd.concat((test['Breed'], train['Breed'])))
test['Breed'] = encBreed.transform(test['Breed'])
train['Breed'] = encBreed.transform(train['Breed'])

# Encode Color as numbers instead of names in test & train
encColor = preprocessing.LabelEncoder()
encColor.fit(pd.concat((test['Color'], train['Color'])))
test['Color'] = encColor.transform(test['Color'])
train['Color'] = encColor.transform(train['Color'])

# Encode Gender as numbers instead of names in test & train
encGender = preprocessing.LabelEncoder()
encGender.fit(pd.concat((test['Gender'], train['Gender'])))
test['Gender'] = encGender.transform(test['Gender'])
train['Gender'] = encGender.transform(train['Gender'])

# Encode Fertility as numbers instead of names in test & train
encFertility = preprocessing.LabelEncoder()
encFertility.fit(pd.concat((test['Fertility'], train['Fertility'])))
test['Fertility'] = encFertility.transform(test['Fertility'])
train['Fertility'] = encFertility.transform(train['Fertility'])

# Encode breedColor as numbers instead of names in test & train
encbreedColor = preprocessing.LabelEncoder()
encbreedColor.fit(pd.concat((test['breedColor'], train['breedColor'])))
test['breedColor'] = encbreedColor.transform(test['breedColor'])
train['breedColor'] = encbreedColor.transform(train['breedColor'])

#################################### Preprocessing #################################
from sklearn.model_selection import cross_val_score

#poly_features = preprocessing.PolynomialFeatures(3)

# Prepare the data so that X_tr is the table without AnimalID and OutcomeType, and y_tr keeps OutcomeType
X_tr, y_tr = train.drop(['AnimalID', 'OutcomeType'], axis=1), train['OutcomeType']

# Optionally convert the DataFrame to an array and apply polynomial preprocessing to it
#X_tr = poly_features.fit_transform(X_tr)

X_tr.head()
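Task 3 asks for feature selection, which is not shown in the cells included here. Below is a minimal, hedged sketch of one possible approach: ranking the encoded features with a random forest and checking quality with cross-validation. The model settings are illustrative assumptions, not the notebook's actual choices.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative settings (assumed, not taken from the notebook)
rf = RandomForestClassifier(n_estimators=100, random_state=1234)
print('CV accuracy:', cross_val_score(rf, X_tr, y_tr, cv=5).mean())

# Rank features by impurity-based importance
rf.fit(X_tr, y_tr)
ranking = sorted(zip(rf.feature_importances_, X_tr.columns), reverse=True)
print('Most important:', ranking[:5])
print('Least important:', ranking[-5:])
```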
3. Котики и собачки.ipynb
lithiumdenis/MLSchool
mit
eae1780932bb967f7cecf50b2b427945
<b>Conclusion on the features:</b> <br> <b>Not needed:</b> Name, DateTime, month, day, Breed, breedColor. Everything else is less clear-cut and can be kept. <div class="panel panel-info" style="margin: 50px 0 0 0"> <div class="panel-heading"> <h3 class="panel-title">Task 4.</h3> </div> </div> Try blending different models with <b>sklearn.ensemble.VotingClassifier</b>. Did the accuracy increase? Did the variance change?
# To start, drop the unneeded features identified in the previous step
X_tr = X_tr.drop(['Name', 'DateTime', 'month', 'day', 'Breed', 'breedColor'], axis=1)
test = test.drop(['Name', 'DateTime', 'month', 'day', 'Breed', 'breedColor'], axis=1)
X_tr.head()

from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

clf1 = LogisticRegression(random_state=1234)
clf3 = GaussianNB()
clf5 = KNeighborsClassifier()

eclf = VotingClassifier(estimators=[
        ('lr', clf1), ('gnb', clf3), ('knn', clf5)], voting='soft', weights=[1, 1, 10])

scores = cross_val_score(eclf, X_tr, y_tr)
eclf = eclf.fit(X_tr, y_tr)
print('Worst CV score:', scores.min())

# drop AnimalID from test
X_te = test.drop(['AnimalID'], axis=1)
X_te.head()

y_te = eclf.predict(X_te)
y_te

ans_nn = pd.DataFrame({'AnimalID': test['AnimalID'], 'type': encOutcomeType.inverse_transform(y_te)})
ans_nn.head()

# Define a helper function for the one-hot transformation
def onehot_encode(df_train, column):
    from sklearn.preprocessing import LabelBinarizer
    cs = df_train.select_dtypes(include=['O']).columns.values
    if column not in cs:
        return (df_train, None)
    rest = [x for x in df_train.columns.values if x != column]
    lb = LabelBinarizer()
    train_data = lb.fit_transform(df_train[column])
    new_col_names = ['%s' % x for x in lb.classes_]
    if len(new_col_names) != train_data.shape[1]:
        new_col_names = new_col_names[::-1][:train_data.shape[1]]
    new_train = pd.concat((df_train.drop([column], axis=1), pd.DataFrame(data=train_data, columns=new_col_names)), axis=1)
    return (new_train, lb)

ans_nn, lb = onehot_encode(ans_nn, 'type')
ans_nn
ans_nn.head()
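The task also asks whether accuracy and variance changed compared with the individual models. The cell above reports only a single number for the ensemble, so here is a hedged sketch of how the mean and spread of cross-validation scores for each base model could be compared against the voting ensemble; the choice of cv=5 is an assumption.

```python
from sklearn.model_selection import cross_val_score

# Compare each base model with the soft-voting ensemble
for name, model in [('lr', clf1), ('gnb', clf3), ('knn', clf5), ('voting', eclf)]:
    s = cross_val_score(model, X_tr, y_tr, cv=5)
    print('%-8s mean=%.4f std=%.4f' % (name, s.mean(), s.std()))
```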
3. Котики и собачки.ipynb
lithiumdenis/MLSchool
mit
cd012443f9d650bba441f26bede4e4a0
Let's check that no rows were lost while handling the NaN values
test.shape[0] == ans_nn.shape[0]

# Start the index numbering from 1 instead of 0
ans_nn.index += 1
# Insert the index values as an ID column at a specific position
ans_nn.insert(0, 'ID', ans_nn.index)
# drop AnimalID from the answer table
ans_nn = ans_nn.drop(['AnimalID'], axis=1)
ans_nn.head()

# Save
ans_nn.to_csv('ans_catdog.csv', index=False)
3. Котики и собачки.ipynb
lithiumdenis/MLSchool
mit
1522abf224516bfe2308098766eedccb
Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP) Decoding of motor imagery applied to EEG data decomposed using CSP. Here the classifier is applied to features extracted on CSP filtered signals. See https://en.wikipedia.org/wiki/Common_spatial_pattern and [1]. The EEGBCI dataset is documented in [2]. The data set is available at PhysioNet [3]. References .. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440--447, December 1991. .. [2] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N., Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer Interface (BCI) System. IEEE TBME 51(6):1034-1043. .. [3] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000) PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 101(23):e215-e220.
# Authors: Martin Billinger <martin.billinger@tugraz.at> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.model_selection import ShuffleSplit, cross_val_score from mne import Epochs, pick_types, events_from_annotations from mne.channels import make_standard_montage from mne.io import concatenate_raws, read_raw_edf from mne.datasets import eegbci from mne.decoding import CSP print(__doc__) # ############################################################################# # # Set parameters and read data # avoid classification of evoked responses by using epochs that start 1s after # cue onset. tmin, tmax = -1., 4. event_id = dict(hands=2, feet=3) subject = 1 runs = [6, 10, 14] # motor imagery: hands vs feet raw_fnames = eegbci.load_data(subject, runs) raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames]) eegbci.standardize(raw) # set channel names montage = make_standard_montage('standard_1005') raw.set_montage(montage) # strip channel names of "." characters raw.rename_channels(lambda x: x.strip('.')) # Apply band-pass filter raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge') events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3)) picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False, exclude='bads') # Read epochs (train will be done only between 1 and 2s) # Testing will be done with a running classifier epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=None, preload=True) epochs_train = epochs.copy().crop(tmin=1., tmax=2.) labels = epochs.events[:, -1] - 2
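The classification cells of this tutorial are not included here, so the following is a hedged sketch of how the CSP features from epochs_train might be scored with the already-imported LDA and ShuffleSplit; the component count and split settings are illustrative assumptions rather than the tutorial's exact values.

```python
# Assemble a CSP + LDA pipeline and cross-validate it (illustrative settings)
csp = CSP(n_components=4, reg=None, log=True)
lda = LinearDiscriminantAnalysis()
clf = Pipeline([('CSP', csp), ('LDA', lda)])

cv = ShuffleSplit(10, test_size=0.2, random_state=42)
epochs_data_train = epochs_train.get_data()

scores = cross_val_score(clf, epochs_data_train, labels, cv=cv, n_jobs=1)

# Compare accuracy against the majority-class chance level
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f"
      % (np.mean(scores), class_balance))
```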
0.20/_downloads/a4d4c1a667c2374c09eed24ac047d840/plot_decoding_csp_eeg.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
a2654e957ed36da34dabaae620ec5f86
Solution: First let us find the principal values $(\lambda)$ from the solution of the characteristic polynomial: $\lambda^3 - I_\sigma \lambda^2 + II_\sigma \lambda - III_\sigma = 0$ where $I_\sigma$, $II_\sigma$ and $III_\sigma$ are the first, second and third invariants respectively, given by: $I_\sigma = \sigma_{xx} + \sigma_{yy} + \sigma_{zz}$ $II_\sigma = \sigma_{xx}\sigma_{yy} + \sigma_{xx}\sigma_{zz} + \sigma_{zz}\sigma_{yy} - \tau_{xy}^2 - \tau_{xz}^2 - \tau_{yz}^2$ $III_\sigma = \sigma_{xx}\sigma_{yy}\sigma_{zz} + 2\tau_{xy}\tau_{xz}\tau_{yz} - \sigma_{xx}\tau_{yz}^2 - \sigma_{yy}\tau_{xz}^2 - \sigma_{zz}\tau_{xy}^2$
import numpy as np from scipy import linalg S = np.array([ [200,100,300.], [100,0,0], [300,0,0]]) IS = S[0,0]+S[1,1]+S[2,2] IIS = S[0,0]*S[1,1]+S[1,1]*S[2,2]+S[0,0]*S[2,2]-(S[0,1]**2)-(S[0,2]**2)-(S[1,2]**2) IIIS = S[0,0]*S[1,1]*S[2,2]-S[0,0]*(S[1,2]**2)-S[1,1]*(S[0,2]**2)-S[2,2]*(S[0,1]**2)+2*S[1,2]*S[0,2]*S[0,1] print print 'Invariantes:', IS,IIS,IIIS print
NOTEBOOKS/Ej4_Eingen.ipynb
jgomezc1/medios
mit
6bf3de03d57e77d6962a3baf35fbce23
Solving via the characteristic polynomial:
coeff=[1.0,-IS,IIS,-IIIS] ps=np.roots(coeff) print print "Esfuerzos principales:", np.sort(np.round(ps,1)) print
NOTEBOOKS/Ej4_Eingen.ipynb
jgomezc1/medios
mit
9dfe16695ba6e0c05b762bd44588102b
Solving via Python libraries with linalg.eigh we can find the principal values (la) and the principal directions (n) simultaneously
la, n= linalg.eigh(S) la = la.real print print "Esfuerzos principales:", np.round(la,1) print #print S print print 'n=', np.round(n,2) print
NOTEBOOKS/Ej4_Eingen.ipynb
jgomezc1/medios
mit
aca907942fe9129b0840d1d7459b5d78
In this way, let us write the tensor associated with the principal directions:
print Sp = np.array([ [la[0],0,0], [0,la[1],0], [0,0,la[2]]]) print 'Sp =',np.round(Sp,1) print Image(filename='FIGURES/Sprinc.png',width=400)
NOTEBOOKS/Ej4_Eingen.ipynb
jgomezc1/medios
mit
72ece91460b9e03213ead55ce7f9ab35
The vectors $i'$, $j'$ and $k'$ are given by:
print "i'=", np.round(n[:,0],2) print "j'=", np.round(n[:,1],2) print "k'=", np.round(n[:,2],2) print
NOTEBOOKS/Ej4_Eingen.ipynb
jgomezc1/medios
mit
56f25aa077836934db0a48f23167ed36
Let us verify that the invariants hold for the tensor associated with the principal directions:
IS = Sp[0,0]+Sp[1,1]+Sp[2,2] IIS =Sp[0,0]*Sp[1,1]+Sp[1,1]*Sp[2,2]+Sp[0,0]*Sp[2,2]-(Sp[0,1]**2)-(Sp[0,2]**2)-(Sp[1,2]**2) IIIS =Sp[0,0]*Sp[1,1]*Sp[2,2]-Sp[0,0]*(Sp[1,2]**2)-Sp[1,1]*(Sp[0,2]**2)-Sp[2,2]*(Sp[0,1]**2)+2*Sp[1,2]*Sp[0,2]*Sp[0,1] print print 'Invariantes:', IS,IIS,IIIS print
NOTEBOOKS/Ej4_Eingen.ipynb
jgomezc1/medios
mit
3dcacaf9ae7a75102b1bedb7f8800dc8
To finish, note that the principal directions are nothing other than the matrix of direction cosines that transforms the original tensor into the tensor in principal directions through the transformation equation: \begin{align} &[\sigma']=[C][\sigma][C]^T\ \end{align} Taking into account that n is given as column vectors, the matrix of direction cosines is given by: \begin{align} &[C] = [n]^T \end{align}
C = n.T Sp2 = np.dot(np.dot(C,S),C.T) print print 'Sp =', np.round(Sp2,1) from IPython.core.display import HTML def css_styling(): styles = open('./custom_barba.css', 'r').read() return HTML(styles) css_styling()
NOTEBOOKS/Ej4_Eingen.ipynb
jgomezc1/medios
mit
9417b3e185d28738a7937030e8a5e974
Tracking a CO$_2$ Plume CO$_2$ from an industrial site can be compressed and injected into a deep saline aquifer for storage. This technology is called CO$_2$ capture and storage, or CCS, proposed in (TODO) to combat global warming. As CO$_2$ is lighter than the saline water, it may leak through a natural fracture and contaminate the drinking water. Therefore, monitoring and predicting the long-term fate of CO$_2$ at the deep aquifer level is crucial, as it will provide an early warning of CO$_2$ leakage. The goal is to interpret the time-series data recorded by the seismic sensors into spatial maps of a moving CO$_2$ plume, a problem very similar to the CT scanning widely used in medical imaging. The goal is * Predict and monitor the location of the CO$_2$ plume * Simulating the Movement of CO$_2$ Plume Here is a simulated CO$_2$ plume for $5$ days resulting from injecting $300$ tons of CO$_2$ at a depth of $1657m$. $$ x_{k+1} = f(x_k) + w $$ run code that displays the simulated moving CO$_2$ plume, store the plume data in SQL?? (TODO)
CO2 = CO2simulation('low') data = [] x = [] for i in range(10): data.append(CO2.move_and_sense()) x.append(CO2.x) param = vco2.getImgParam('low') vco2.plotCO2map(x,param) plt.show()
.ipynb_checkpoints/FristExample-checkpoint.ipynb
judithyueli/pyFKF
mit
d49740df22c842286bf8d95aed7556ce
Simulating the Sensor Measurement The sensor measures the travel time of a seismic signal from a source to a receiver. $$ y = Hx + v $$ $x$ is the grid block value of CO$_2$ slowness, an indicator of how much CO$_2$ is in a block. The product $Hx$ simulates the travel time measurements by integrating $x$ along a raypath. $v$ is the measurement noise. The presence of CO$_2$ slows down the seismic signal and increases its travel time along a ray path. If the ray path does not intercept the CO$_2$ plume, the travel time remains constant over time (Ray path 1); otherwise it tends to increase once the CO$_2$ plume intercepts the ray path (Ray path 2).
reload(visualizeCO2) vco2.plotCO2data(data,0,47)
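As a hedged illustration of the measurement model $y = Hx + v$ described above (independent of the notebook's own CO2simulation class, whose internals are not shown here), a few lines of NumPy are enough to simulate noisy travel-time data from an assumed ray-path matrix:

```python
import numpy as np

# Illustrative sizes: 25 travel-time measurements from a 100-cell slowness field
np.random.seed(0)
n_cells, n_rays = 100, 25
H = np.random.rand(n_rays, n_cells)   # stand-in for ray-path segment lengths per cell
x = np.zeros(n_cells)
x[40:45] = 0.1                        # a small CO2 anomaly in a few cells

sigma_v = 0.05                        # assumed measurement noise level
y = H.dot(x) + np.random.normal(0.0, sigma_v, size=n_rays)
print(y[:5])
```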
.ipynb_checkpoints/FristExample-checkpoint.ipynb
judithyueli/pyFKF
mit
421ee555018bb586b9c293fa9875af23
TODO: Fig: Run animation/image of the ray path (shooting from one source and receiver) on top of a CO$_2$ plume and display the travel time changes over time. Fig: Show the time-series data (Path 1 and Path 2) at a receiver with and without noise. optional: run getraypath will give me all the index of the cells and the length of the ray path within each cell, this can help me compute the travel time along this particular ray path Kalman filtering Initialization step Define $x$, $P$. Before injection took place, there was no CO$_2$ in the aquifer.
%run runCO2simulation
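The construction of the filter object itself is not shown in the cells included here, so the following is a hedged sketch of how kf (used by the prediction and update cells below) might be initialized with filterpy, reflecting the statement that there is no CO$_2$ in the aquifer before injection. The dimensions taken from CO2.H_mtx and the initial covariance scale are assumptions.

```python
import numpy as np
from filterpy.kalman import KalmanFilter

dim_z = CO2.H_mtx.shape[0]   # number of travel-time measurements (assumed from H)
dim_x = CO2.H_mtx.shape[1]   # number of slowness cells (assumed from H)

kf = KalmanFilter(dim_x=dim_x, dim_z=dim_z)
kf.x = np.zeros(dim_x)       # no CO2 anywhere before injection
kf.P *= 1.0                  # assumed initial uncertainty for each cell
```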
.ipynb_checkpoints/FristExample-checkpoint.ipynb
judithyueli/pyFKF
mit
f56b54fd9a4291d93f11e0eecba63a38
Implementing the Prediction Step $$ x_{k+1} = x_{k} + w_k $$ Note that here a simplified random walk forecast model is used as a substitute for $f(x)$. The advantage of using a random walk forecast model is that we are now dealing with a linear instead of a nonlinear filtering problem, and the computational cost is much lower since we don't need to evaluate $f(x)$. However, when $dt$ is very large, this random walk forecast model will give poor predictions, and the prediction error cannot be well approximated by $w_k \sim N(0,Q)$, a zero-mean Gaussian process noise term. Therefore, the random walk forecast model is only useful when the measurements are sampled at a high frequency, and $Q$ has to be selected to reflect the true model error.
from filterpy.common import Q_discrete_white_noise kf.F = np.diag(np.ones(dim_x)) # kf.Q = Q_discrete_white_noise(dim = dim_x, dt = 0.1, var = 2.35) kf.Q = 2.5 kf.predict() print kf.x[:10]
.ipynb_checkpoints/FristExample-checkpoint.ipynb
judithyueli/pyFKF
mit
8e422313302334666714c2abb0cde793
Implementing the Update Step
kf.H = CO2.H_mtx kf.R *= 0.5 z = data[0] kf.update(z)
.ipynb_checkpoints/FristExample-checkpoint.ipynb
judithyueli/pyFKF
mit
ba7aaaa119daa6437bcc13c1b44ce3ec
TODO - Fig: Estimate at k, Forecast at k+1, Estimate at k+1, True at k+1 - A table showing: x: the time CO2 reaches the monitoring well y: the time CO2 reaches the ray path PREDICT: x var y UPDATE: x var y - Fig: MSE vs time - Fig: Data fitting, slope 45 degree indicates a perfect fit Use HiKF instead of KF
from HiKF import HiKF hikf = HiKF(dim_x, dim_z) hikf.x
.ipynb_checkpoints/FristExample-checkpoint.ipynb
judithyueli/pyFKF
mit
5aa695b907981f060a478a985c6deada
Coefficients to find
w_true = [3,3,3] w_true = w_true / np.sum(w_true) mu_true = [3,10,20] sigma_true = [2,4,1]
tensorflow_fit_gaussian_mixture_model.ipynb
pierresendorek/tensorflow_crescendo
lgpl-3.0
e21183ebe153df091aab40e0fff85a9a
Sampling the distribution
def draw_from_gaussian_mixture(w, mu, sigma, n_samples):
    samples = []
    for i in range(n_samples):
        idx_comp = np.random.multinomial(1, w).argmax()
        samples.append(np.random.randn()*sigma[idx_comp] + mu[idx_comp])
    return samples

samples = np.array(draw_from_gaussian_mixture(w_true, mu_true, sigma_true, n_samples))

# Histogram of the drawn samples
plt.hist(samples, bins=100)

from scipy.stats import norm

def plot_gaussian_mixture(w, mu, sigma, color="b"):
    # Evaluate the mixture density on a grid and plot it
    x = np.linspace(-5, 30, 200)
    y = []
    for i in range(len(x)):
        z = x[i]
        s = 0
        for j in range(3):
            s += norm(loc=mu[j], scale=sigma[j]).pdf(z) * w[j]
        y.append(s)
    plt.plot(x, y, color=color)


plot_gaussian_mixture(w_true, mu_true, sigma_true)
tensorflow_fit_gaussian_mixture_model.ipynb
pierresendorek/tensorflow_crescendo
lgpl-3.0
0ae054b4080be9486e287b398d55ceff
Finding coefficients with Tensorflow
import tensorflow as tf
tensorflow_fit_gaussian_mixture_model.ipynb
pierresendorek/tensorflow_crescendo
lgpl-3.0
fdc772e7c828592c6947151506632841
Loss function
import math
oneDivSqrtTwoPI = tf.constant(1 / math.sqrt(2*math.pi)) # normalisation factor for gaussian, not needed.
my_epsilon = tf.constant(1e-14)

def tf_normal(y, mu, sigma):
    result = tf.subtract(y, mu)
    result = tf.divide(result, sigma)
    result = -tf.square(result)/2
    return tf.divide(tf.exp(result), sigma)*oneDivSqrtTwoPI

# A minus sign is used below so that minimizing the loss maximizes the log-likelihood
def get_density(out_pi, out_sigma, out_mu, y):
    result = tf_normal(y, out_mu, out_sigma)
    result = tf.multiply(result, out_pi)
    result = tf.reduce_sum(result, 1, keep_dims=True)
    return result

def get_lossfunc(out_pi, out_sigma, out_mu, y):
    result = get_density(out_pi, out_sigma, out_mu, y)
    result = -tf.log(result + my_epsilon)
    return tf.reduce_mean(result)

def get_mixture_coef(theta):
    out_pi, out_sigma, out_mu = tf.split(theta, num_or_size_splits=3, axis=1)
    max_pi = tf.reduce_max(out_pi, 1, keep_dims=True)
    out_pi = tf.subtract(out_pi, max_pi)
    out_pi = tf.exp(out_pi)
    normalize_pi = tf.divide(out_pi, tf.reduce_sum(out_pi, axis=1, keep_dims=True))
    out_sigma = tf.exp(out_sigma)
    return normalize_pi, out_sigma, out_mu

theta = tf.Variable(tf.random_normal([1,9], stddev=1.0, dtype=tf.float32), name="theta")

out_pi, out_sigma, out_mu = get_mixture_coef(theta)

samples_tf = tf.placeholder(dtype=tf.float32, shape=[None,1], name="samples")

loss = get_lossfunc(out_pi, out_sigma, out_mu, samples_tf)
tensorflow_fit_gaussian_mixture_model.ipynb
pierresendorek/tensorflow_crescendo
lgpl-3.0
6bb13c98c7ae7039aeb53dc487a79f39
Optimizer
train_op = tf.train.AdamOptimizer(learning_rate=0.05, epsilon=1E-12).minimize(loss)
tensorflow_fit_gaussian_mixture_model.ipynb
pierresendorek/tensorflow_crescendo
lgpl-3.0
67dd21076e5a4194f3bf7f6bb03cbf09
Init Session
sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) def do(x): return sess.run(x, feed_dict={samples_tf: samples.reshape(-1,1)}) loss_list = [] sess.run(get_density(out_pi, out_sigma, out_mu, samples_tf), feed_dict={samples_tf: samples.reshape(-1,1)}) for i in range(2000): sess.run(train_op, feed_dict={samples_tf:samples.reshape(-1,1)}) loss_val = sess.run(loss, feed_dict={samples_tf:samples.reshape(-1,1)}) loss_list.append(loss_val) plt.plot(np.log10(loss_list)) out_pi, out_sigma, out_mu = do(get_mixture_coef(theta)) plot_gaussian_mixture(out_pi[0],out_mu[0],out_sigma[0],"r") plot_gaussian_mixture(w_true, mu_true, sigma_true,"b")
tensorflow_fit_gaussian_mixture_model.ipynb
pierresendorek/tensorflow_crescendo
lgpl-3.0
9490a8c6a4e12fd36f61f98321e9308d
$$ \log \exp(a) = a $$ $$ \log (\exp(a) \exp(b)) = \log(\exp(a)) + \log(\exp(b)) = a + b $$
print(math.log(math.exp(0.2) * math.exp(0.7))) print(0.2 + 0.7)
maths/logs.ipynb
hughperkins/pub-prototyping
apache-2.0
449f8f46db0785fdb19200f774232c2d
NLP Lab, Part I Welcome to the first lab of 6.S191! Administrivia Things to install: - tensorflow - word2vec Lab Objectives: Learn Machine Learning methodology basics (train/dev/test sets) Learn some Natural Language Processing basics (word embeddings with word2vec) Learn the basics of tensorflow, build your first deep neural nets (MLP, RNN) and get results! And we'll be doing all of this in the context of Twitter sentiment analysis. Given a tweet like: omg 6.S196 is so cool #deeplearning #mit We want an algorithm to label this tweet as positive or negative. It's intractable to try to solve this task via some lexical rules, so instead, we're going to use deep learning to embed these tweets into some deep latent space where distinguishing between the two is relatively simple. Machine Learning Basics Given some dataset with tweets $X$, and sentiments $Y$, we want to learn a function $f$, such that $Y = f(X)$. In our context, $f$ is a deep neural network parameterized by some network weights $\Theta$, and we're going to do our learning via gradient descent. Objective Function To start, we need some way to measure how good our $f$ is, so we can take a gradient with respect to that performance and move in the right direction. We call this performance evaluation our loss function, $L$, and this is something we want to minimize. Since we are doing classification (pos vs neg), a common loss function to use is cross entropy. $$L(\Theta) = - \sum_i \big( y_i \log f(x_i) + (1-y_i)\log(1-f(x_i)) \big) $$ where $f(x)$ is the probability a tweet $x$ is positive, which we want to be 1 when the tweet is positive and 0 when it is negative, and $y$ is the correct label. We can access this function with tf.nn.sigmoid_cross_entropy_with_logits, which will come in handy in code. Given that $f$ is parameterized by $\Theta$, we can take the gradient $\frac{dL}{d\Theta}$, and we learn by updating our parameters to minimize the loss. Note that this loss is 0 if the prediction is confidently correct and very large if we predict something has 0 probability of being positive when it actually is positive. Methodology To measure how well we're doing, we can't just look at how well our model performs on its training data. It could be just memorizing the training data and perform terribly on data it hasn't seen before. To really measure how $f$ performs in the wild, we need to present it with unseen data, and we can tune our hyper-parameters (like learning rate, number of layers etc.) over this first unseen set, which we call our development (or validation) set. However, given that we optimized our hyper-parameters to the development set, to get a truly fair assessment of the model, we test it with respect to a held-out test set at the end, and generally report those numbers. In summary: we train on one set (the training set), evaluate and tune our hyper-parameters based on performance on the dev set, and report final results on a completely held-out test set. Let's load these now; this ratio of sizes is fairly standard.
trainSet = p.load( open('data/train.p','rb')) devSet = p.load( open('data/dev.p','rb')) testSet = p.load( open('data/test.p','rb')) ## Let's look at the size of what we have here. Note, we could use a much larger train set, ## but we keep it mid-size so you can run this whole thing off your laptop len(trainSet), len(devSet), len(testSet)
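Since the text above points to tf.nn.sigmoid_cross_entropy_with_logits, here is a minimal hedged sketch of how that op computes the loss described above from raw logits; the toy logits and labels are made up for illustration (TF 1.x style, matching the rest of the lab).

```python
import tensorflow as tf

# Toy logits (pre-sigmoid scores) and true labels for three tweets
logits = tf.constant([2.0, -1.0, 0.5])
labels = tf.constant([1.0, 0.0, 1.0])

# Per-example cross entropy, then the mean loss over the mini-batch
xent = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(xent)

with tf.Session() as sess:
    print(sess.run([xent, loss]))
```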
draft/part1.ipynb
yala/introdeeplearning
mit
5b3c0c232e851bff8546ea81288263a9
NLP Basics The first question we need to address is how do we represent a tweet? How do we represent a word? One way to do this is with one-hot vectors for each word, where a given word $w_i= [0,0,...,1,..0]$. However, in this representation, words like "love" and "adore" are as similar as "love" and "hate", because the cosine similarity is 0 in both cases. Another issue is that these vectors are huge in order to represent the whole vocab. To get around this issue the NLP community developed a technique called Word Embeddings. Word2Vec The basic idea is we represent a word with a vector $\phi$ by the context the word appears in. By training a neural network to predict the context of words across a large training set, we can use the weights of that neural network to get a dense and useful representation that captures context. Word Embeddings capture all kinds of useful semantic relationships. For example, one cool emergent property is $ \phi(king) - \phi(queen) = \phi(man) - \phi(woman)$. To learn more about the magic behind word embeddings we recommend Chris Colah's blog post. A common tool for generating Word Embeddings is word2vec, which is what we'll be using today.
## Note, these tweets were preprocessed to remove non-alphanumeric chars, replace infrequent words, and pad to the same length.
## Note, we're going to train our embeddings on only our train set in order to keep our dev/test sets fair
trainSentences = [" ".join(tweetPair[0]) for tweetPair in trainSet]
print trainSentences[0]
p.dump(trainSentences, open('data/trainSentences.p','wb'))
## Word2vec module expects a file containing a list of strings, a target to store the model, and then the size of the
## embedding vector
word2vec.word2vec('data/trainSentences.p','data/emeddings.bin', 100, verbose=True)

w2vModel = word2vec.load('data/emeddings.bin')

print w2vModel.vocab

## Each word is represented by a 100-dimensional vector, like this
print "embedding for the word fun", w2vModel['fun']
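The analogy property mentioned above can be probed directly with NumPy on the trained vectors. The sketch below is hedged: the example words are chosen for illustration and may not all appear in this Twitter vocabulary, hence the membership check.

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity between two embedding vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Illustrative words; skip the comparison if any word is missing from this vocab
words = ['good', 'great', 'bad']
if all(w in w2vModel.vocab for w in words):
    print "good~great:", cos_sim(w2vModel['good'], w2vModel['great'])
    print "good~bad:  ", cos_sim(w2vModel['good'], w2vModel['bad'])
```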
draft/part1.ipynb
yala/introdeeplearning
mit
0ff47ba383960df2118d2ee637587661
Now let's look at the words most similar to the word "fun"
indices, cosineSim = w2vModel.cosine('fun') print w2vModel.vocab[indices] word_embeddings = w2vModel.vectors vocab_size = len(w2vModel.vocab)
draft/part1.ipynb
yala/introdeeplearning
mit
b7055c142f8607ec8517bb37c8269fa6
Feel free to play around here and test the properties of your embeddings, how they cluster, etc. In the interest of time, we're going to move on straight to models. Now in order to use these embeddings, we have to represent each tweet as a list of indices into the embedding matrix. This preprocessing code is available in processing.py if you are interested. Tensorflow Basics Tensorflow is a hugely popular library for building neural nets. The general workflow in building models in tensorflow is as follows: - Specify a computation graph (the structure and computations of your neural net) - Use your session to feed data into the graph and fetch things from the graph (like the loss, and the train operation) Inside the graph, we put our neural net, our loss function, and our optimizer; once this is constructed, we can feed in the data, fetch the loss and the train op, and train it. Here is a toy example putting 2 and 2 together, and initializing some random weight matrix.
session = tf.Session()

# 1.BUILD GRAPH
# Set placeholders with a type for data you'll eventually feed in (like tweets and sentiments)
a = tf.placeholder(tf.int32)
b = tf.placeholder(tf.int32)
# Set up variables, like weight matrices.
# Using tf.get_variable, specify the name, shape, type and initializer of the variable.
W = tf.get_variable("ExampleMatrix", [2, 2], tf.float32, tf.random_normal_initializer(stddev=1.0 / 2))
# Set up the operations you need, like matrix multiplications, non-linear layers, and your loss function minimizer
c = a*b

# 2.RUN GRAPH
# Initialize any variables you have (just W in this case)
tf.global_variables_initializer().run(session=session)
# Specify the tensor values you want returned, and the ops you want run
fetch = {'c':c, 'W':W}
# Fill in the place holders
feed_dict = {
    a: 2,
    b: 2,
}
# Run and get back fetch filled in
results = session.run( fetch, feed_dict = feed_dict)

print( results['c'])
print( results['W'])
# Close session
session.close()
# Reset the graph so it doesn't get in the way later
tf.reset_default_graph()
draft/part1.ipynb
yala/introdeeplearning
mit
b8a898bb85c0e600f91bd22f900680b0
Building an MLP An MLP, or multi-layer perceptron, is a basic architecture where we multiply our representation with some matrix W, add some bias b, and then apply some nonlinearity like tanh at each layer. Each layer is fully connected to the next. As the network gets deeper, its expressive power grows exponentially, so it can draw some pretty fancy decision boundaries. In this exercise, you'll build your own MLP, with 1 hidden layer (a layer that isn't input or output) of 100 dimensions. To make training more stable and efficient, we'll actually evaluate 20 tweets at a time and take gradients with respect to the loss on those 20. We call this idea training with mini-batches. Defining the Graph Step 1: Placeholders, Variables with specified shapes Let's start off with placeholders for our tweets, and let's use a minibatch of size 20. Remember each tweet will be represented as a vector of sentence-length (20) word_ids, and since we are packing mini-batch-size many tweets into the graph per iteration, we need a matrix of minibatch * sentence length. Feel free to check out the placeholder api here Set up a placeholder for your labels, namely the mini-batch-size length vector of sentiments. Set up a placeholder for our pretrained word embeddings. This will take shape vocab_size * embedding_size Set up a variable for your weight matrix, and bias. Check out the variable api here Let's use a hidden dimension size of 100 (so 100 neurons in the next layer) For the weight matrix, use tf.random_normal_initializer(stddev=1.0 / hidden_dim_size), as this does something called symmetry breaking and keeps the neural network from getting stuck at the start. For the bias vector, use tf.constant_initializer(0) A hedged sketch of one possible layout is shown after the TODO cell below.
"TODO"
draft/part1.ipynb
yala/introdeeplearning
mit
26cc4568ccdf50b332b70cf64e8b139f
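Since the exercise cell is intentionally left as a TODO, the following is only a hedged sketch of one way the placeholders and variables described above could be laid out; the variable names and the embedding-lookup-plus-average input layer are assumptions, not the official solution.

```python
import tensorflow as tf

batch_size, sentence_length, embedding_size, hidden_dim = 20, 20, 100, 100

# Placeholders: a mini-batch of tweets as word ids, their sentiment labels,
# and the pretrained embedding matrix
tweets = tf.placeholder(tf.int32, shape=[batch_size, sentence_length])
sentiments = tf.placeholder(tf.float32, shape=[batch_size])
embeddings = tf.placeholder(tf.float32, shape=[vocab_size, embedding_size])

# Hidden-layer parameters, initialized as suggested above
W_hidden = tf.get_variable("W_hidden", [embedding_size, hidden_dim], tf.float32,
                           tf.random_normal_initializer(stddev=1.0 / hidden_dim))
b_hidden = tf.get_variable("b_hidden", [hidden_dim], tf.float32,
                           tf.constant_initializer(0))

# One possible input layer (an assumption): look up each word's embedding,
# average over the tweet, then apply a tanh hidden layer
tweet_vectors = tf.reduce_mean(tf.nn.embedding_lookup(embeddings, tweets), axis=1)
hidden = tf.tanh(tf.matmul(tweet_vectors, W_hidden) + b_hidden)
```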