Dataset fields (one record per notebook cell): markdown, code, path, repo_name, license, hash
Lower left panel
pu.PlotFrequency(s4gdata.logfgas, ii_barred_limited1_m8_5, ii_unbarred_limited1_m8_5, -3, 2, 0.5,
                 noErase=False, fmt='ro', ms=9, label=ss1m)
pu.PlotFrequency(s4gdata.logfgas, ii_barred_limited2_m9, ii_unbarred_limited2_m9, -3, 2, 0.5,
                 offset=0.03, noErase=True, fmt='ro', mfc='None', mec='r', ms=9, label=ss2m)
plt.xlabel(xtfgas); plt.ylabel('Bar fraction')
plt.ylim(0,1); plt.xlim(-3,1)
plt.legend(fontsize=9, loc='lower left', framealpha=0.5)
# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)
if savePlots: plt.savefig(plotDir+"fbar-vs-fgas_2sample.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
fe94f629bb93114a919baa1c252b114e
Lower right panel
pu.PlotFrequency(s4gdata.logfgas, ii_SB_limited1_m8_5, ii_nonSB_limited1_m8_5, -3, 2, 0.5,
                 fmt='ko', ms=8, label="SB ("+ss1m+")")
pu.PlotFrequency(s4gdata.logfgas, ii_SB_limited2_m9, ii_nonSB_limited2_m9, -3, 2, 0.5,
                 noErase=True, ms=8, fmt='ko', mfc='None', mec='k', offset=0.03, label="SB ("+ss2m+")")
pu.PlotFrequency(s4gdata.logfgas, ii_SAB_limited1_m8_5, ii_nonSAB_limited1_m8_5, -3, 2, 0.5,
                 noErase=True, fmt='co', ms=8, label="SAB ("+ss1m+")")
pu.PlotFrequency(s4gdata.logfgas, ii_SAB_limited2_m9, ii_nonSAB_limited2_m9, -3, 2, 0.5,
                 noErase=True, ms=8, fmt='co', mfc='None', mec='c', offset=0.03, label="SAB ("+ss2m+")")
plt.legend(loc='upper left', ncol=2, fontsize=10)
plt.ylim(0,1); plt.xlim(-3,1)
plt.xlabel(xtfgas)
plt.ylabel('Bar fraction')
# push bottom of plot upwards so that x-axis label isn't clipped in PDF output
plt.subplots_adjust(bottom=0.14)
if savePlots: plt.savefig(plotDir+"fSB-fSAB-vs-fgas_2sample.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
c2a75d52ed96d455e7eb701ad2874832
Figure B2
ii_all_limited1_S0 = [i for i in range(nDisksTotal) if s4gdata.dist[i] <= 25 and s4gdata.t_s4g[i] <= -0.5]
ii_barred_limited1_with_S0 = [i for i in range(nDisksTotal) if i in ii_barred and s4gdata.dist[i] <= 25]
ii_unbarred_limited1_with_S0 = [i for i in range(nDisksTotal) if i in ii_unbarred and s4gdata.dist[i] <= 25]
ii_barred_limited1_S0 = [i for i in ii_all_limited1_S0 if i in ii_barred]
ii_unbarred_limited1_S0 = [i for i in ii_all_limited1_S0 if i in ii_unbarred]

fig, axs = plt.subplots(1, 2, figsize=(15,5))
axs[0].plot([8.0,11.5], [0,1], color='None')
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1, ii_unbarred_limited1, 8.0, 11.3, 0.25,
                            noErase=True, axisObj=axs[0], fmt='ro', ms=9, label=ss1 + ", spirals")
txt2 = ss1 + ", S0s + spirals"
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1_with_S0, ii_unbarred_limited1_with_S0, 8.0, 11.3, 0.25,
                            axisObj=axs[0], offset=-0.03, fmt='o', color='orange', mew=1.3, mfc='None', mec='orange',
                            ms=7, noErase=True, label=txt2)
txt3 = ss1 + ", S0s only"
pu.PlotFrequencyWithWeights(s4gdata.logmstar, s4gdata.w25, ii_barred_limited1_S0, ii_unbarred_limited1_S0, 8.0, 11.3, 0.25,
                            axisObj=axs[0], offset=0.04, fmt='D', mfc='None', mec='0.5', ecolor='0.5',
                            ms=7, noErase=True, label=txt3)
axs[0].set_ylim(0,1)
axs[0].set_xlabel(xtmstar)
axs[0].set_ylabel('Bar fraction')
axs[0].legend(loc='upper left', fontsize=10)
plt.subplots_adjust(bottom=0.14)

bins = np.arange(8, 11.5, 0.25)
axs[1].hist(s4gdata.logmstar[ii_all_limited1], bins=bins, label='Spirals')
axs[1].hist(s4gdata.logmstar[ii_all_limited1_S0], bins=bins, color='r', label='S0')
axs[1].set_ylim(0,100)
axs[1].set_xlabel(xtmstar); axs[1].set_ylabel(r"$N$")
axs[1].legend(loc='upper right', fontsize=10)
plt.subplots_adjust(bottom=0.14)
if savePlots: plt.savefig(plotDir+"fbar-spirals+S0-vs-mstar-with-mstar-hist.pdf")
s4gbars_main.ipynb
perwin/s4g_barfractions
bsd-3-clause
168b1d88f44947d2bc46ef91b9395eb4
Create Two Vectors
# Load library
import numpy as np

# Create two vectors
vector_a = np.array([1, 2, 3])
vector_b = np.array([4, 5, 6])
machine-learning/calculate_dot_product_of_two_vectors.ipynb
tpin3694/tpin3694.github.io
mit
6ac3da4d5ba05646359a73d9d4a2ecc9
Calculate Dot Product (Method 1)
# Calculate dot product
np.dot(vector_a, vector_b)
machine-learning/calculate_dot_product_of_two_vectors.ipynb
tpin3694/tpin3694.github.io
mit
9d3922a5648859cb0b8983376ec45da0
Calculate Dot Product (Method 2)
# Calculate dot product
vector_a @ vector_b
machine-learning/calculate_dot_product_of_two_vectors.ipynb
tpin3694/tpin3694.github.io
mit
86b8c43968231715b1c84cbc42398a0a
Data science Friday tales: Using Fréchet Bounds for Bandwidth Selection in MV Kernel Methods.

Lia Silva-Lopez, Tuesday, 19/03/2019

This story starts with a reading accident. One moment you are reading a book...

<img src="img/reading_accident.png">

...and the next there are bounds for everything. Bounds for distributions* in terms of their marginals? That makes perfect sense! Why aren't these bounds more mainstream? Is it because it's hard to pronounce 'Fréchet'?

*by distributions we mean CDFs, not densities.

All right, let's point the finger at ourselves: what would I do with Fréchet bounds? 'Cause the whole point of bounds is to try not to break them. Right?

<img src="https://media3.giphy.com/media/145lrrvcdNq43m/source.gif" style="width: 280px;">

An idea: let's use them for bandwidth selection in MV KDEs. MV kernel estimation is expensive, slow, and often hard to check with d>2, which is why kernel methods are mostly recommended for smoothing. Still, working with multivariate data is not easy, so a lot of people turn to KDEs for lack of better information, or throw samples at a black box and hope for the best.

How can we use Fréchet bounds here? There are different methods for BW selection, many based on some kind of optimization.

To prune the search space of any iterative method:
- (Naive) Remove BWs that lead to estimates violating the bounds.
- (Less naive, but less parsimonious) Prune using thresholds.

To construct functions to optimize over:
- (Cheap, uninformative) Count the number of violations of the bounds?
- (Cheap, informative) Sum all differences between each violation and the bound at that point?

Other questions to answer:
- Are we breaking Fréchet bounds when estimating CDFs with kernels?
- If we break them: how badly are they usually broken? Do they get broken often?
- What are the consequences of selecting BWs that lead to breaking Fréchet bounds?

<img src="https://memegenerator.net/img/instances/63332969/do-all-the-things.jpg" style="width: 70%;">

What's in Python for this? Scikit-Learn, SciPy and StatsModels are the usual suspects, but only StatsModels has a convenience method to estimate CDFs with kernels. Based solely on making my life easy, I chose StatsModels to hack MV KDE methods and insert Fréchet bounds into BW estimation.

StatsModels KDEMultivariate package: wrapper code for MV KDE here, base code for bandwidth selection methods here.

Let's have a quick overview of how this normally works. First we generate some data to call the methods. We will use some betas.
import numpy as np
import scipy.stats as spst
import matplotlib.pyplot as plt

n = 1000
distr = spst.beta  # <-- from SciPy
smpl = np.linspace(0, 1, num=n)
params = {'horns': (0.5, 0.5), 'horns1': (0.5, 0.55),
          'shower': (5., 2.), 'grower': (2., 5.)}
v_type = f'{"c"*len(params)}'  # <-- Statsmodels wants to know if data is
                               #     continuous (c)
                               #     discrete ordered (o)
                               #     discrete unordered (u)

fig, ax = plt.subplots(1, 2, figsize=(10, 5))
list(map(lambda x: ax[0].plot(distr.cdf(smpl, *x), smpl), params.values()))
list(map(lambda x: ax[1].plot(smpl, distr.pdf(smpl, *x)), params.values()))
ax[0].legend(list(params.keys())); ax[1].legend(list(params.keys()))
ax[0].grid(); ax[1].grid()
fig.suptitle(f'CDFs & PDFs for different marginals (Beta distributed)')
plt.show()
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
73332a763cd6b185f13218f6415bfb56
Kernels and BW Selection Methods

Kernel selection depends on "v_type". For "c" -> Gaussian kernel. This is the list of kernel functions available in the package:

    kernel_func = dict(
        wangryzin=kernels.wang_ryzin,
        aitchisonaitken=kernels.aitchison_aitken,
        gaussian=kernels.gaussian,
        aitchison_aitken_reg=kernels.aitchison_aitken_reg,
        wangryzin_reg=kernels.wang_ryzin_reg,
        gauss_convolution=kernels.gaussian_convolution,
        wangryzin_convolution=kernels.wang_ryzin_convolution,
        aitchisonaitken_convolution=kernels.aitchison_aitken_convolution,
        gaussian_cdf=kernels.gaussian_cdf,
        aitchisonaitken_cdf=kernels.aitchison_aitken_cdf,
        wangryzin_cdf=kernels.wang_ryzin_cdf,
        d_gaussian=kernels.d_gaussian)

Different kernels are selected for different reasons: the type of each variable, and whether PDFs or CDFs are being fitted (probably).

Bandwidth selection methods

We have a choice of 3 BW selection methods:

1. normal_reference: normal reference rule of thumb (default)
   - The BW from this method is the starting point of the other two algorithms.
   - Silverman's rule for the MV case.
   - Quick, but too smooth.

2. cv_ml: cross-validation maximum likelihood
   - Not quick, but reasonable estimates in reasonable time (within seconds to a few minutes).
   - Uses the bandwidth estimate that maximizes the leave-one-out likelihood.
   - Implemented in method "_cv_ml(self)" of "class GenericKDE(object)" in "statsmodels.nonparametric._kernel_base".
   - The leave-one-out log likelihood function is:
   $$\ln L=\sum_{i=1}^{n}\ln f_{-i}(X_{i})$$
   The leave-one-out kernel estimator of $f_{-i}$ is:
   $$f_{-i}(X_{i})=\frac{1}{(n-1)h} \sum_{j=1,j\neq i}K_{h}(X_{i},X_{j})$$
   where $K_{h}$ represents the generalized product kernel estimator:
   $$K_{h}(X_{i},X_{j})=\prod_{s=1}^{q}h_{s}^{-1}k\left(\frac{X_{is}-X_{js}}{h_{s}}\right)$$
   The generalized product kernel estimator is also a method of GenericKDE(object).

3. cv_ls: cross-validation least squares
   - Very, very slow (>8x slower than cv_ml).
   - Returns the value of the bandwidth that minimizes the integrated mean square error between the estimated and actual distribution.
   - Implemented in method "_cv_ls(self)" of "class GenericKDE(object)" in "statsmodels.nonparametric._kernel_base".
   - The integrated mean square error (IMSE) is given by:
   $$\int\left[\hat{f}(x)-f(x)\right]^{2}dx$$

Comparing times and bandwidth choices

Let's compare times and values for bandwidth selection for each method available in StatsModels, considering 4 dimensions and 1000 samples.

- Rule-of-thumb: 3 loops, best of 3: 109 µs per loop; bw with reference [0.15803504 0.15817752 0.07058083 0.07048409]
- CV-LOO ML: 3 loops, best of 3: 1min 39s per loop; bw with maximum likelihood [0.04915534 0.03477012 0.09889865 0.09816758]
- CV-LS: 3 loops, best of 3: 12min 30s per loop; bw with least squares [1.12156416e-01 1.00000000e-10 1.03594669e-01 9.11747124e-02]

But check out the bandwidth sizes!
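As an aside: the rule-of-thumb starting point above is just a Silverman/Scott-type formula applied per dimension. A minimal sketch of that idea follows; the constant c ≈ 1.06 and the exponent are assumptions here, not a copy of the exact statsmodels implementation.

    import numpy as np

    def normal_reference_bw(data, c=1.06):
        """Rule-of-thumb bandwidth for an (n, d) data array.

        Sketch of the multivariate normal-reference rule used as the starting
        point for cv_ml / cv_ls: one bandwidth per dimension, proportional to
        the per-dimension standard deviation and shrinking as n grows.
        """
        n, d = data.shape
        return c * data.std(axis=0, ddof=1) * n ** (-1.0 / (d + 4))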
import statsmodels.api as sm

# Generate some independent data for each parameter set
mvdata = {k: distr.rvs(*params[k], size=n) for k in params}
rd = np.array(list(mvdata.values()))

%timeit -n3 sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type, bw='normal_reference')
dens_u_rot = sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type, bw='normal_reference')
print('bw with reference', dens_u_rot.bw, '(only available for gaussian kernels)')

%timeit -n3 sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type, bw='cv_ml')
dens_u_ml = sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type, bw='cv_ml')
print('bw with maximum likelihood', dens_u_ml.bw)

# BW with least squares takes >8x more than with ml
%timeit -n3 sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type, bw='cv_ls')
dens_u_ls = sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type, bw='cv_ls')
print('bw with least squares', dens_u_ls.bw)
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
6ac7102a8b2e2771894d7acbf8b586d5
Now the fun part: modifying the package to do our bidding

All we need is in two classes: class KDEMultivariate(GenericKDE) and its parent, class GenericKDE(object).

When we call the constructor for the KDEMultivariate object, this happens:
1. Data checks & reshaping, internal settings.
2. The bandwidth selection method is chosen and the bandwidth is calculated via a call to a hidden parent method (self._compute_bw(bw) or self._compute_efficient(bw)).
3. At the parent, one of these methods is called:
   - _normal_reference() <- Silverman's rule
   - _cv_ml() <- cross-validation maximum likelihood
   - _cv_ls() <- cross-validation least squares

How do the BW calculation methods work? _cv_ml() and _cv_ls() are almost the same method, except for:

| _cv_ml() | _cv_ls() |
|---|---|
| h0 = self._normal_reference() | h0 = self._normal_reference() |
| bw = optimize.fmin(self.loo_likelihood, | bw = optimize.fmin(self.imse, |
| x0=h0, args=(np.log, ), | x0=h0, |
| maxiter=1e3, | maxiter=1e3, |
| maxfun=1e3, | maxfun=1e3, |
| disp=0, | disp=0, |
| xtol=1e-3) | xtol=1e-3) |

A bummer: there is no direct way to feed ranges of hyperparameters in order to constrain the search space! They simply call scipy.optimize.fmin underneath (optimize.fmin comes from scipy.optimize).

So everything is passed to an optimization function? Pretty much. That doesn't mean we can't do something about it :). A small stand-alone illustration of that optimize.fmin pattern is sketched below; after that, let's look inside loo_likelihood and see where we can intervene:
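The sketch below mirrors the calling pattern from the table: start from the rule-of-thumb h0 and let the simplex search refine it. The quadratic toy_objective is a stand-in for loo_likelihood / imse, not the statsmodels code itself.

    import numpy as np
    from scipy import optimize

    def toy_objective(bw):
        # pretend the "best" bandwidth is 0.1 in every dimension
        return np.sum((np.asarray(bw) - 0.1) ** 2)

    h0 = np.array([0.5, 0.5, 0.5, 0.5])   # rule-of-thumb starting point
    bw = optimize.fmin(toy_objective, x0=h0,
                       maxiter=1e3, maxfun=1e3, disp=0, xtol=1e-3)
    print(bw)  # ~[0.1, 0.1, 0.1, 0.1]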
def loo_likelihood(self, bw, func=lambda x: x):
    LOO = LeaveOneOut(self.data)       # <- iterator for a leave-one-out over the data
    L = 0
    for i, X_not_i in enumerate(LOO):  # <- per leave-one-out of the data (ouch!)
        f_i = gpke(bw,                 # <- provided by the optimization algorithm
                   data=-X_not_i,      # <- dataset minus one sample as given by LOO
                   data_predict=-self.data[i, :],  # <- REAL dataset, ith point
                   var_type=self.var_type)         # <- 'cccc' or something similar
        L += func(f_i)                 # <- _cv_ml() passed np.log, so it's the
                                       #    log-likelihood of gpke at the ith point.
    return -L
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
43a2d16c76f3594c912a601cb5207395
What happens inside gpke? Both the CDF and the PDF are estimated with a gpke; they just use a different kernel. All the kernel implementations are here.
def gpke(bw, data, data_predict, var_type, ckertype='gaussian',
         okertype='wangryzin', ukertype='aitchisonaitken', tosum=True):
    kertypes = dict(c=ckertype, o=okertype, u=ukertype)  # <- kernel selection
    Kval = np.empty(data.shape)
    for ii, vtype in enumerate(var_type):                # per dimension ii
        func = kernel_func[kertypes[vtype]]
        Kval[:, ii] = func(bw[ii], data[:, ii], data_predict[ii])

    iscontinuous = np.array([c == 'c' for c in var_type])
    dens = Kval.prod(axis=1) / np.prod(bw[iscontinuous])  # <- Ta-da, kernel products.
    if tosum:
        return dens.sum(axis=0)
    else:
        return dens
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
8afee7d5f3b6fa89be71b22c6d6212a9
What did I do?

Groundwork: methods for
- estimating the Fréchet bounds for a dataset;
- visualizing the bounds (2d datasets), see here;
- counting how many violations of the bounds were made by a CDF;
- measuring the size of the violation at each point (the difference between the point of the CDF at which the violation happened and the bound that was broken);
- generating experiments;
- massaging outputs of the profiler.

Then...
- Estimated the percentage of bound breaking for winning bandwidths of the different methods. It was not zero!!!
- Tried using the violations as a way to prune "unpromising" bandwidths before applying the gpke through the whole LOO iteration.
  - It made the optimization algorithm go cuckoo, because scipy.optimize.fmin was expecting a number out of that function.
  - To return something "proportionally punishing", I probably should keep track of the previous estimates. That would require more work: basically also hacking the code for the optimization. Future work!

Then more...
- Hijacked loo_likelihood() to make my own method in which violations are used to guide the optimization algorithm.
- Tried feeding the number of violations to the algorithm. The algorithm got lost. Maybe too little information?
- Tried feeding the sum of the sizes of all violations. It kind of worked, but the final steps of the algorithm were unstable. Can we make it a bit more informative?

And then, some more.
- Tried feeding a weighted sum of the sizes of all violations, where the weights were the width of the bound at each violation point. The rationale is that a violation at a narrow point should be punished more than a violation at an already wide point.
- It still takes 20% to 200% more time than cv_ml, when it should be at least an order of magnitude faster (CDF estimation is faster than leave-one-out). Gee, I wonder if I have a bug somewhere?

Yup, I actually had a bug. What was the bug? While making this presentation I realized that my method for estimating the bounds should be called with THE CDFs of each dimension, and I was calling it with the data directly (!!!). No wonder I was getting horrible results. So the actual results of this hack will have to wait :P. All my weekend tests are now useless; I will keep them somewhere on my hard drive as mementos... ;D I will repeat the tests with the right call and show the results at the next presentation.

<img src="https://i.kym-cdn.com/photos/images/newsfeed/000/187/324/allthethings.png" style="width: 70%;">

OK, it's not like everything is wrong. Let's do some quick counts of Fréchet violations.
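In symbols, what get_frechets below computes pointwise are the Fréchet-Hoeffding bounds for a joint CDF $F$ with marginals $F_{1},\dots,F_{d}$:

$$\max\Big(\sum_{s=1}^{d}F_{s}(x_{s})-(d-1),\,0\Big)\;\le\;F(x_{1},\dots,x_{d})\;\le\;\min_{s}F_{s}(x_{s})$$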
def get_frechets(dvars):
    d = len(dvars)
    n = len(dvars[0])
    dimx = np.array(range(d))
    un = np.ones(d, dtype=int)
    bottom_frechet = np.array([max(np.sum(dvars[dimx, un*i]) + 1 - d, 0)
                               for i in range(n)])
    top_frechet = np.array([min([y[i] for y in dvars]) for i in range(n)])
    return {'top': top_frechet, 'bottom': bottom_frechet}

cdfs = {fname: distr.cdf(smpl, *params[fname]) for fname in params}
frechets = get_frechets(np.array(list(cdfs.values())))
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
31a21c1d454b7b549c7a0425b59f2355
Calculating number of violations
def check_frechet_fails(guinea_cdf, frechets):
    fails = {'top': [], 'bottom': []}
    for n in range(len(guinea_cdf)):
        #n_hyper_point=np.array([x[n] for x in rd])
        if guinea_cdf[n] > frechets['top'][n]:
            fails['top'].append(True)
        else:
            fails['top'].append(False)
        if guinea_cdf[n] < frechets['bottom'][n]:
            fails['bottom'].append(True)
        else:
            fails['bottom'].append(False)
    return {'top': np.array(fails['top']),
            'bottom': np.array(fails['bottom'])}
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
6a274309b0dce17e2a67801f94797fba
Given 4 dimensions and 1000 samples, we got:
- For Silverman: 58.8% violations
- For cv_ml: 58.0% violations
- For cv_ls: 57.0% violations
# For Silverman
violations_silverman = check_frechet_fails(dens_u_rot.cdf(), frechets)
violations_silverman = np.sum(violations_silverman['top']) + np.sum(violations_silverman['bottom'])
print(f'violations:{violations_silverman} ({100.*violations_silverman/len(smpl)}%)')

# For cv_ml
violations_cv_ml = check_frechet_fails(dens_u_ml.cdf(), frechets)
violations_cv_ml = np.sum(violations_cv_ml['top']) + np.sum(violations_cv_ml['bottom'])
print(f'violations:{violations_cv_ml} ({100.*violations_cv_ml/len(smpl)}%)')

# For cv_ls
violations_cv_ls = check_frechet_fails(dens_u_ls.cdf(), frechets)
violations_cv_ls = np.sum(violations_cv_ls['top']) + np.sum(violations_cv_ls['bottom'])
print(f'violations:{violations_cv_ls} ({100.*violations_cv_ls/len(smpl)}%)')
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
403282e8ba8281cbe5654613e9739cef
What more? Quite a lot of sweat went into generating the code for comparing my approaches with cv_ml, so I may as well show it to you and point out where the bug was :(.
def generate_experiments(reps, n, params, distr, dims):
    bws_frechet = {f'bw_{x}': [] for x in params}
    bws_cv_ml = {f'bw_{x}': [] for x in params}
    for iteration in range(reps):
        mvdata = {k: distr.rvs(*params[k], size=n) for k in params}
        rd = np.array(list(mvdata.values()))  # <---- THIS IS NOT A CDF!!!!!
        # get frechets and thresholds
        frechets = get_frechets(rd)           # <---- THEREFORE THIS IS A BUG !!!!!
        bw_frechets, bw_cv_ml = profile_run(rd, frechets, iteration)
        for ix, x in enumerate(params):
            bws_frechet[f'bw_{x}'].append(bw_frechets[ix])
            bws_cv_ml[f'bw_{x}'].append(bw_cv_ml[ix])
    pd.DataFrame(bws_frechet).to_csv(f'/home/lia/liaProjects/outs/bws_frechet_d{dims}-n{n}-iter{reps}.csv')
    pd.DataFrame(bws_cv_ml).to_csv(f'/home/lia/liaProjects/outs/bws_cv_ml_d{dims}-n{n}-iter{reps}.csv')
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
8eaa66c1688385d6af9d71b5a7d65e66
And this is what the functions that do the calculations look like underneath.
def get_bw(datapfft, var_type, reference, frech_bounds=None):
    # Using leave-one-out likelihood
    # the initial value for the optimization is the normal_reference
    # h0 = normal_reference()
    data = adjust_shape(datapfft, len(var_type))

    if not frech_bounds:
        fmin = lambda bw, funcx: loo_likelihood(bw, data, var_type, func=funcx)
        argsx = (np.log,)
    else:
        fmin = lambda bw, funcx: frechet_likelihood(bw, data, var_type,
                                                    frech_bounds, func=funcx)
        argsx = (None,)  # second element of tuple is if debug mode

    h0 = reference
    bw = optimize.fmin(fmin, x0=h0, args=argsx,  # feeding logarithm for loo
                       maxiter=1e3, maxfun=1e3, disp=0, xtol=1e-3)
    # bw = self._set_bw_bounds(bw)  # bound bw if necessary
    return bw
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
e862bda8fd214465ba85f9df154b8866
And this was my frechet_likelihood method
def frechet_likelihood(bww, datax, var_type, frech_bounds, func=None, debug_mode=False):
    cdf_est = cdf(datax, bww, var_type)  # <- calls gpke underneath, but is a short call
    d_violations = calc_frechet_fails(cdf_est, frech_bounds)
    width_bound = frech_bounds['top'] - frech_bounds['bottom']
    viols = (d_violations['top'] + d_violations['bottom']) / width_bound
    L = np.sum(viols)
    return L
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
755ee81bf5d53bce117a3207c3755be7
And this is how the profiling info was collected. The Python profiler is a bit unfriendly, so maybe this code could be useful as a snippet? Or get a professional license of PyCharm ;) (Thanks, boss!)
def profile_run(rd, frechets, iterx):
    dims = len(rd)
    n = len(rd[0])
    v_type = f'{"c"*dims}'

    # threshold: number of violations by the cheapest method.
    dens_u_rot = sm.nonparametric.KDEMultivariate(data=rd, var_type=v_type,
                                                  bw='normal_reference')
    cdf_dens_u_rot = dens_u_rot.cdf()
    violations_rot = count_frechet_fails(cdf_dens_u_rot, frechets)

    # profile frechets
    pr = cProfile.Profile()
    pr.enable()
    bw_frechets = get_bw(rd, v_type, dens_u_rot.bw, frech_bounds=frechets)
    pr.disable()
    s = io.StringIO()
    ps = pstats.Stats(pr, stream=s).sort_stats('cumtime')
    ps.print_stats()
    s = s.getvalue()
    with open(f'/home/lia/liaProjects/outs/frechet-profile-d{dims}-n{n}-iter{iterx}.txt', 'w+') as f:
        f.write(s)

    # profile cv_ml
    pr = cProfile.Profile()
    pr.enable()
    bw_cv_ml = get_bw(rd, v_type, dens_u_rot.bw)
    pr.disable()
    s = io.StringIO()
    ps = pstats.Stats(pr, stream=s).sort_stats('cumtime')
    ps.print_stats()
    s = s.getvalue()
    with open(f'/home/lia/liaProjects/outs/loo-ml-profile-d{dims}-n{n}-iter{iterx}.txt', 'w+') as f:
        f.write(s)

    return bw_frechets, bw_cv_ml
mv_kecdf_frechet.ipynb
lia-statsletters/notebooks
gpl-3.0
f5e9bfbda9d1cf62ca146dff1a4378dc
Elif

Let's say you want to check a different condition before just saying, "The first condition was false, let's do the else statement." We could just use a second if statement, but instead we have the else-if statement, elif. It allows us to check a second condition after the first one fails. Let us make this idea concrete with an example.
#I love food, let's take a look in my fridge
fridge = ['bananas', 'apples', 'water', 'tortillas', 'cheese']

#I want some pizza, but if I don't have any I will settle for a quesadilla which requires tortillas and cheese
if('pizza' in fridge):
    print('Patrick ate pizza and was happy')
elif('tortillas' in fridge and 'cheese' in fridge):
    print('Patrick didn\'t get his pizza, but he did get a quesadilla and is still happy!')
else:
    print('Patrick is still hungry')
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
3c54091c25a2399a1d0c181617f58fd5
Let's revamp that example, but this time, I went out and bought a pizza.
#I love food, let's take a look in my fridge
fridge = ['bananas', 'apples', 'water', 'tortillas', 'cheese', 'pizza']

#I want some pizza, but if I don't have any I will settle for a quesadilla which requires tortillas and cheese
if('pizza' in fridge):
    print('Patrick ate pizza and was happy')
elif('tortillas' in fridge and 'cheese' in fridge):
    print('Patrick didn\'t get his pizza, but he did get a quesadilla and is still happy!')
else:
    print('Patrick is still hungry')
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
2dbec1a0d2811423a12feb2e82154e20
Notice that, although I had the fixings for a quesadilla in my fridge, I had pizza, so I never needed to check for tortillas and cheese. This illustrates the fact that an elif won't run unless the statements before it fail. Further, you can stack elif statements forever. Let's see that.
#I love food, let's take a look in my fridge
fridge = ['bananas', 'apples', 'water', 'tortillas', 'beer']

#I want some pizza, but if I don't have any I will settle for a quesadilla which requires tortillas and cheese
if('pizza' in fridge):
    print('Patrick ate pizza and was happy')
elif('tortillas' in fridge and 'cheese' in fridge):
    print('Patrick didn\'t get his pizza, but he did get a quesadilla and is still happy!')
elif('beer' in fridge):
    print('Patrick is still hungry, but he has beer so he is happy')
else:
    print('Patrick is still hungry')
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
3439ad9c17a0a4d062c7f4821ea85d0d
Exercises

1. Write some "dummy" if, if-else, and if-elif-else statements that will print out exactly what you expect until you feel comfortable with them.
2. What will be the output of the following code sample?

       if(2<4):
           if(len([1,2,3])<=len(set([1,1,1,2,2,3,3,3]))):
               print("This will certainly print")
           elif(2>1):
               print("Or will this print?")
           else:
               print("It's gotta be this one...")
       else:
           print("This won't print...or will it.")

Loops

"I feel like I'm doing this over and over again" -Your computer on loops.

Wanna do something 10, 100, n times? Loops are your best friend! Want to loop through a list containing all of your data? Loops are your bestest friend! We will look at two different types of loops, while loops and for loops.

While Loops

While loops will continue to loop until some condition is false. While loops follow the format:

    while (some condition):
        #some code here

While loops can go on forever if the condition is never false. This is not the end of the world and you won't crash your computer. To stop a cell that is running, you can click on the stop button in the Jupyter toolbar. Let's see what we can do with this.
t = 15
while(t > 0):
    print("t-minus " + str(t))
    t -= 1
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
cb743a5a84833518de50499b7649cfa0
While loops are really good if you want to do something over and over again. Let's generate some fake data with this. Here I introduce the range() function, which generates a sequence of numbers. Let's see briefly how it works.
# Let's make a list of numbers starting at zero and going to 99. range() by default uses a step size of 1,
# so this will yield integers from 0 to 99
x = range(0, 100)
print(x)

# Unfortunately range does some strange things and doesn't return a list; if you want a list, you already know how to convert it.
print(list(x))

y = []
x = 1
while(x < 100):
    y.append(x**5 - 27*x**2 - 2300*x**-1 + x % (x+1))
    x += 1
print(y)
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
d5f764eb781a3d0ac2fec88bb1952219
So that was a cute example of how we can generate some data based on some equation. Later on, however, we will want to graph our data, and this requires a second list for our x values. The while loop is cumbersome in this respect, and so we now introduce the for loop.

For Loops

A for loop will loop through any container element by element and conveniently place each element in a special new variable. The format of a for loop is as follows:

    for (special variable name) in (container we are looping through):
        #do some stuff with, or without, that special variable

The advantage of for loops is that you get each element of some list handed to you on a platter...er, in a variable. Our previous example of generating data now allows us to make a list for our x data and loop through that. Let's see that in action.
x = range(1, 100)  # Remember that this makes a list of integers from 1 to 99
y = []
for val in x:  # val is our special variable here; it will take on the value of every element in x
    print(val)
    y.append(val**2 + 3*val)
print(y)
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
681c142b9d5950990a03ca87ad23ccc3
Again, a neat little example. The true power of for loops comes when we have lists that are not numerical. Let's make every string in a list uppercase.
words = ['i', 'am', 'sorry', 'dave', 'i', 'can\'t', 'do', 'that']
upperwords = []
for word in words:  # remember that word will take on the value of every element of words
    print(word)
    upperwords.append(word.upper())  # to make a string uppercase, you can use the .upper() function.
print(upperwords)
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
3a073d4e47a822d37c8ab619caedaea1
We have one more special type of loop to cover: list comprehensions, a quick way to make a list in one line.

List Comprehensions

A list comprehension is essentially a for loop sandwiched into a list. The syntax for a list comprehension is as follows:

    X = [(expression involving special variable) for (special variable) in (some list)]

For example, if we want a list containing x^2 for x in [0,1,2,3,4,5,6,7,8,9,10], we can create it by using:

    Y = [x**2 for x in range(0,11)]

Does this actually work?
y = [x**2 for x in range(0, 11)]
print(y)
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
87b4ae21a44095c9926cfc14c153c829
What about something weirder?
print(words)
wordslength = [len(word) for word in words]
print(wordslength)
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
b2865aa8b02ea3af52dcee4d60a29977
My god, it worked! Think of the possibilities! With these new tools we can do 90% of all programming we will ever do. Pretty neat huh. I would like to show you one more example of list comprehensions.
# I only want words with length less than 3
newwords = [word for word in words if len(word) < 3]
print(newwords)
Python Workshop/Logic.ipynb
CalPolyPat/Python-Workshop
mit
69132212334a94ba6660ad288e405e32
Problem statement

Tuning the hyper-parameters of a machine learning model is often carried out using an exhaustive exploration of (a subset of) the space of all hyper-parameter configurations (e.g., using sklearn.model_selection.GridSearchCV), which often results in a very time-consuming operation. In this notebook, we illustrate how skopt can be used to tune hyper-parameters using sequential model-based optimisation, hopefully resulting in equivalent or better solutions, but within fewer evaluations.

Objective

The first step is to define the objective function we want to minimize, in this case the cross-validation mean absolute error of a gradient boosting regressor over the Boston dataset, as a function of its hyper-parameters:
import numpy as np
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

boston = load_boston()
X, y = boston.data, boston.target
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)

def objective(params):
    max_depth, learning_rate, max_features, min_samples_split, min_samples_leaf = params

    reg.set_params(max_depth=max_depth,
                   learning_rate=learning_rate,
                   max_features=max_features,
                   min_samples_split=min_samples_split,
                   min_samples_leaf=min_samples_leaf)

    return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
                                    scoring="mean_absolute_error"))
examples/hyperparameter-optimization.ipynb
glouppe/scikit-optimize
bsd-3-clause
4e13457515ad42ea59d53ed7d67045bd
Next, we need to define the bounds of the dimensions of the search space we want to explore, and (optionally) the starting point:
space = [(1, 5),                           # max_depth
         (10**-5, 10**-1, "log-uniform"),  # learning_rate
         (1, X.shape[1]),                  # max_features
         (2, 30),                          # min_samples_split
         (1, 30)]                          # min_samples_leaf

x0 = [3, 0.01, 6, 2, 1]
examples/hyperparameter-optimization.ipynb
glouppe/scikit-optimize
bsd-3-clause
477e410e47f1a7b8fa18b517a439dd5a
Optimize all the things! With these two pieces, we are now ready for sequential model-based optimisation. Here we compare gaussian process-based optimisation versus forest-based optimisation.
from skopt import gp_minimize

res_gp = gp_minimize(objective, space, x0=x0, n_calls=50, random_state=0)

"Best score=%.4f" % res_gp.fun

print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1], res_gp.x[2],
                            res_gp.x[3], res_gp.x[4]))

from skopt import forest_minimize

res_forest = forest_minimize(objective, space, x0=x0, n_calls=50, random_state=0)

"Best score=%.4f" % res_forest.fun

print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_forest.x[0], res_forest.x[1], res_forest.x[2],
                            res_forest.x[3], res_forest.x[4]))
examples/hyperparameter-optimization.ipynb
glouppe/scikit-optimize
bsd-3-clause
1d4772f195ffed748f424dc577c4957f
As a baseline, let us also compare with random search in the space of hyper-parameters, which is equivalent to sklearn.model_selection.RandomizedSearchCV.
from skopt import dummy_minimize

res_dummy = dummy_minimize(objective, space, x0=x0, n_calls=50, random_state=0)

"Best score=%.4f" % res_dummy.fun

print("""Best parameters:
- max_depth=%d
- learning_rate=%.4f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_dummy.x[0], res_dummy.x[1], res_dummy.x[2],
                            res_dummy.x[3], res_dummy.x[4]))
examples/hyperparameter-optimization.ipynb
glouppe/scikit-optimize
bsd-3-clause
0a046f94263c154cc7683ee2c35592e7
Convergence plot
from skopt.plots import plot_convergence

plot_convergence(("gp_optimize", res_gp),
                 ("forest_optimize", res_forest),
                 ("dummy_optimize", res_dummy))
examples/hyperparameter-optimization.ipynb
glouppe/scikit-optimize
bsd-3-clause
14843373e0a8d26e77d02f127b273d2e
Prepare the pipeline

- (str) filepath: path to the csv file
- (str) y_col: the column to predict
- (bool) regression: regression or classification?
- (bool) process: (WARNING) apply some preprocessing to your data (tune this preprocessing with the params below)
- (char) sep: delimiter
- (list) col_to_drop: which columns you don't want to use in your prediction
- (bool) derivate: for all feature combinations, apply n1 * n2, n1 / n2, ...
- (bool) transform: for all features, apply log(n), sqrt(n), square(n)
- (bool) scaled: scale the data?
- (bool) infer_datetime: for all columns, check the type and build new columns from them (day, month, year, time) if they are date types
- (str) encoding: data encoding
- (bool) dummify: apply dummies to your categoric variables

The data files have been generated by sklearn.datasets.make_regression.
cls = Baboulinet(filepath="toto2.csv", y_col="predict", regression=True)
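For illustration only, a hypothetical fuller call spelling out the remaining options listed above; the keyword names come from that list, while the chosen values are arbitrary assumptions, not Mozinor defaults:

    # Hypothetical example: every documented option passed explicitly.
    # The values here are arbitrary choices for illustration.
    cls = Baboulinet(filepath="toto2.csv",
                     y_col="predict",
                     regression=True,
                     process=True,        # apply the built-in preprocessing
                     sep=",",
                     col_to_drop=[],      # keep every column
                     derivate=False,
                     transform=False,
                     scaled=True,
                     infer_datetime=False,
                     encoding="utf-8",
                     dummify=True)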
mozinor/example/Mozinor example Reg.ipynb
Jwuthri/Mozinor
mit
d7bb941a055827b414868e5f33421bd6
Open the file located at path, read it one line at a time, and store the parsed lines in a list called records.
import json

records = [json.loads(line) for line in open(path, 'r')]
type(records)
records[0]
chapter 02/List-dict-defaultdict-Counter.ipynb
harishkrao/Python-for-Data-Analysis
mit
39794fee52ff11e21e6a4aaf6f10a6f4
Calling a specific key within the list
records[0]['tz']
chapter 02/List-dict-defaultdict-Counter.ipynb
harishkrao/Python-for-Data-Analysis
mit
40b5030331042666a63f2c7ff37e6b58
Extracting all time zone values from the records list. Here we check each element of records for the key 'tz'; if it is present, we keep the corresponding value of 'tz' for that element.
time_zones = [rec['tz'] for rec in records if 'tz' in rec]
time_zones[:10]
chapter 02/List-dict-defaultdict-Counter.ipynb
harishkrao/Python-for-Data-Analysis
mit
ae06624cfcfe164465083b5be897ae19
Counting the frequency of each time zone's occurrence in the list using a dict type in Python
counts = {}
for x in time_zones:
    if x in counts:
        counts[x] = counts.get(x, 0) + 1
    else:
        counts[x] = 1
print(counts)

from collections import defaultdict

counts = defaultdict(int)
for x in time_zones:
    counts[x] += 1
print(counts)

counts['America/New_York']
len(time_zones)
chapter 02/List-dict-defaultdict-Counter.ipynb
harishkrao/Python-for-Data-Analysis
mit
34a67d38985f56005a5332c14a0893e5
To list the top n time zone occurrences
def top_counts(count_dict, n):
    value_key_pairs = [(count, tz) for tz, count in count_dict.items()]
    value_key_pairs.sort()
    return value_key_pairs[-n:]

top_counts(counts, 10)

from collections import Counter

counts = Counter(time_zones)
counts.most_common(10)
chapter 02/List-dict-defaultdict-Counter.ipynb
harishkrao/Python-for-Data-Analysis
mit
7cefa1ab0d6073afdb11ae13d6557a90
ReportLab: import the necessary functions one by one
from markdown import markdown as md_markdown

from xml.etree.ElementTree import fromstring as et_fromstring
from xml.etree.ElementTree import tostring as et_tostring

from reportlab.platypus import BaseDocTemplate as plat_BaseDocTemplate
from reportlab.platypus import Frame as plat_Frame
from reportlab.platypus import Paragraph as plat_Paragraph
from reportlab.platypus import PageTemplate as plat_PageTemplate

from reportlab.lib.styles import getSampleStyleSheet as sty_getSampleStyleSheet
from reportlab.lib.pagesizes import A4 as ps_A4
from reportlab.lib.pagesizes import A5 as ps_A5
from reportlab.lib.pagesizes import landscape as ps_landscape
from reportlab.lib.pagesizes import portrait as ps_portrait
from reportlab.lib.units import inch as un_inch
iPython/Reportlab2-FromMarkdown.ipynb
oditorium/blog
agpl-3.0
7907836a8e0a051c84c51149ff7684d2
The ReportFactory class creates a ReportLab document / report object; the idea is that all style information as well as page layouts are collected in this object, so that when a different factory is passed to the writer object the report looks different.
class ReportFactory():
    """create a Reportlab report object using BaseDocTemplate

    the report creation is a two-step process
    1. instantiate a ReportFactory object
    2. retrieve the report using the report() method

    note: as it currently stands the report object is remembered in the factory
    object, so another call to report() returns the _same_ object; this means
    that changing the parameters after report() has been called for the first
    time will not have an impact
    """

    def __init__(self, filename=None):
        if filename == None: filename = 'report_x1.pdf'
        # f = open (filename,'wb') -> reports can take a file handle!
        self.filename = filename
        self.pagesize = ps_portrait(ps_A4)
        self.showboundary = 0
        #PAGE_HEIGHT=defaultPageSize[1]; PAGE_WIDTH=defaultPageSize[0]
        self.styles = sty_getSampleStyleSheet()
        self.bullet = "\u2022"
        self._report = None

    @staticmethod
    def static_page(canvas, doc):
        """template for report page

        this template defines how the standard page looks (header, footer,
        background objects); it does _not_ define the flow objects though, as
        those are separately passed to the PageTemplate() function
        """
        canvas.saveState()
        canvas.setFont('Times-Roman', 9)
        canvas.drawString(un_inch, 0.75 * un_inch, "Report - Page %d" % doc.page)
        canvas.restoreState()

    def refresh_styles(self):
        """refresh all styles

        derived ReportLab styles need to be refreshed in case the parent style
        has been modified; this does not really work though - it seems that the
        styles are simply flattened....
        """
        style_names = self.styles.__dict__['byName'].keys()
        for name in style_names:
            self.styles[name].refresh()

    def report(self):
        """initialise a report object

        this function initialises and returns a report object, based on the
        properties set on the factory object at this point (note: the report
        object is only generated _once_ and subsequent calls return the same
        object; this implies that most property changes after this function has
        been called are not taken into account)
        """
        if self._report == None:
            rp = plat_BaseDocTemplate(self.filename, showBoundary=self.showboundary,
                                      pagesize=self.pagesize)
            frame_page = plat_Frame(rp.leftMargin, rp.bottomMargin, rp.width, rp.height,
                                    id='main')
            pagetemplates = [
                plat_PageTemplate(id='Page', frames=frame_page, onPage=self.static_page),
            ]
            rp.addPageTemplates(pagetemplates)
            self._report = rp
        return self._report
iPython/Reportlab2-FromMarkdown.ipynb
oditorium/blog
agpl-3.0
4a097e101d59bbde591c48b5bde775f7
The ReportWriter object executes the conversion from markdown to pdf. It is currently very simplistic - for example there is no entry hook for starting the conversion at the html level rather than at markdown, and only a few basic tags are implemented.
class ReportWriter():

    def __init__(self, report_factory):
        self._simple_tags = {
            'h1': 'Heading1',
            'h2': 'Heading2',
            'h3': 'Heading3',
            'h4': 'Heading4',
            'h5': 'Heading5',
            'p': 'BodyText',
        }
        self.rf = report_factory
        self.report = report_factory.report()

    def _render_simple_tag(self, el, story):
        style_name = self._simple_tags[el.tag]
        el.tag = 'para'
        text = et_tostring(el)
        story.append(plat_Paragraph(text, self.rf.styles[style_name]))

    def _render_ol(self, el, story):
        return self._render_error(el, story)

    def _render_ul(self, ul_el, story):
        for li_el in ul_el:
            li_el.tag = 'para'
            text = et_tostring(li_el)
            story.append(plat_Paragraph(text, self.rf.styles['Bullet'],
                                        bulletText=self.rf.bullet))

    def _render_error(self, el, story):
        story.append(plat_Paragraph(
            "<para fg='#ff0000' bg='#ffff00'>cannot render '%s' tag</para>" % el.tag,
            self.rf.styles['Normal']))

    @staticmethod
    def html_from_markdown(mdown, remove_newline=True, wrap=True):
        """convert markdown to html

        mdown - the markdown to be converted
        remove_newline - if True, all \n characters are removed after conversion
        wrap - if True, the whole html is wrapped in an <html> tag
        """
        html = md_markdown(mdown)
        if remove_newline: html = html.replace("\n", "")
        if wrap: html = "<html>"+html+"</html>"
        return html

    @staticmethod
    def dom_from_html(html, wrap=False):
        """convert html into a dom tree

        html - the html to be converted
        wrap - if True, the whole html is wrapped in an <html> tag
        """
        if wrap: html = "<html>"+html+"</html>"
        dom = et_fromstring(html)
        return (dom)

    @staticmethod
    def dom_from_markdown(mdown):
        """convert markdown into a dom tree

        mdown - the markdown to be converted
        """
        html = ReportWriter.html_from_markdown(mdown, remove_newline=True, wrap=True)
        dom = ReportWriter.dom_from_html(html, wrap=False)
        return (dom)

    def create_report(self, mdown):
        """create report and write it to disk

        mdown - markdown source of the report
        """
        dom = self.dom_from_markdown(mdown)
        story = []
        for el in dom:
            if el.tag in self._simple_tags:
                self._render_simple_tag(el, story)
            elif el.tag == 'ul':
                self._render_ul(el, story)
            elif el.tag == 'ol':
                self._render_ol(el, story)
            else:
                self._render_error(el, story)
        self.report.build(story)
iPython/Reportlab2-FromMarkdown.ipynb
oditorium/blog
agpl-3.0
dffafa136fe289bffc65c4f1ff61b7a1
create a standard report (A4, black text etc)
rfa4 = ReportFactory('report_a4.pdf')
pdfw = ReportWriter(rfa4)
pdfw.create_report(markdown_text*10)
iPython/Reportlab2-FromMarkdown.ipynb
oditorium/blog
agpl-3.0
342efc0f632bd7a4677a00633e1047f3
create a second report with different parameters (A5, changed colors etc; the __dict__ method shows all the options that can be modified for changing styles)
#rfa5.styles['Normal'].__dict__
rfa5 = ReportFactory('report_a5.pdf')
rfa5.pagesize = ps_portrait(ps_A5)
#rfa5.styles['Normal'].textColor = '#664422'
#rfa5.refresh_styles()
rfa5.styles['BodyText'].textColor = '#666666'
rfa5.styles['Bullet'].textColor = '#666666'
rfa5.styles['Heading1'].textColor = '#000066'
rfa5.styles['Heading2'].textColor = '#000066'
rfa5.styles['Heading3'].textColor = '#000066'
pdfw = ReportWriter(rfa5)
pdfw.create_report(markdown_text*10)
iPython/Reportlab2-FromMarkdown.ipynb
oditorium/blog
agpl-3.0
44fee4a3fa903aac469695660ecc6df6
Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 2.1 - Create placeholders Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session. Exercise: Implement the function below to create the placeholders in tensorflow.
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px * 3 = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - You will use None because it lets us be flexible on the number of examples used
      for the placeholders. In fact, the number of examples during test/train is different.
    """

    ### START CODE HERE ### (approx. 2 lines)
    X = tf.placeholder(tf.float32, shape=[n_x, None], name="X")
    Y = tf.placeholder(tf.float32, shape=[n_y, None], name="Y")
    ### END CODE HERE ###

    return X, Y

X, Y = create_placeholders(12288, 6)
print("X = " + str(X))
print("Y = " + str(Y))
archive/MOOC/Deeplearning_AI/ImprovingDeepNeuralNetworks/HyperparameterTuning/Tensorflow+Tutorial.ipynb
KrisCheng/ML-Learning
mit
f2ef368d658812dd79131fd8d40fa3a9
Expected Output:

<table>
<tr>
<td> **Z3** </td>
<td> Tensor("Add_2:0", shape=(6, ?), dtype=float32) </td>
</tr>
</table>

You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.

2.4 Compute cost

As seen before, it is very easy to compute the cost using:

    tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))

Question: Implement the cost function below.
- It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, tf.reduce_mean basically does the summation over the examples.
# GRADED FUNCTION: compute_cost

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """

    # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)

    ### START CODE HERE ### (1 line of code)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    ### END CODE HERE ###

    return cost

tf.reset_default_graph()

with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    print("cost = " + str(cost))
archive/MOOC/Deeplearning_AI/ImprovingDeepNeuralNetworks/HyperparameterTuning/Tensorflow+Tutorial.ipynb
KrisCheng/ML-Learning
mit
fcda76e1b56771dba602b0eb0986326c
<a id='sec3.2'></a>

3.2 Compute POI Info

Compute each POI's (Longitude, Latitude) as the average coordinates of its assigned photos.
poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean)
poi_coords.reset_index(inplace=True)
poi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)
poi_coords.head()
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
ecf936e3ab765eb6b3c1a97615472754
<a id='sec3.3'></a> 3.3 Construct Travelling Sequences
seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\
          .groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])
seq_all.head()

seq_all.columns = seq_all.columns.droplevel()
seq_all.head()

seq_all.reset_index(inplace=True)
seq_all.head()

seq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)
seq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']
seq_all.head()

#tseq = seq_all[['poiID', 'poiDuration(sec)']].copy().groupby('poiID').agg(np.mean)
#tseq

seq_user = seq_all[['seqID', 'userID']].copy()
seq_user = seq_user.groupby('seqID').first()
seq_user.head()

seq_len = seq_all[['userID', 'seqID', 'poiID']].copy()
seq_len = seq_len.groupby(['userID', 'seqID']).agg(np.size)
seq_len.reset_index(inplace=True)
seq_len.rename(columns={'poiID':'seqLen'}, inplace=True)
#seq_len.head()

ax = seq_len['seqLen'].hist(bins=20)
ax.set_yscale('log')
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
1017df5c08c8d080ebe7a137177aab0a
<a id='sec3.4'></a>

3.4 Transition Matrix

3.4.1 Transition Matrix for Time at POI
users = seq_all['userID'].unique()
transmat_time = pd.DataFrame(np.zeros((len(users), poi_all.index.shape[0]), dtype=np.float64),
                             index=users, columns=poi_all.index)

poi_time = seq_all[['userID', 'poiID', 'poiDuration(sec)']].copy().groupby(['userID', 'poiID']).agg(np.sum)
poi_time.head()

for idx in poi_time.index:
    transmat_time.loc[idx[0], idx[1]] += poi_time.loc[idx].iloc[0]
print(transmat_time.shape)
transmat_time.head()

# add 1 (sec) to each cell as a smooth factor
log10_transmat_time = np.log10(transmat_time.copy() + 1)
print(log10_transmat_time.shape)
log10_transmat_time.head()
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
a471174d46f441f57b2c0326ea1a7311
3.4.2 Transition Matrix for POI Category
poi_cats = traj['poiTheme'].unique().tolist()
poi_cats.sort()
poi_cats

ncats = len(poi_cats)
transmat_cat = pd.DataFrame(data=np.zeros((ncats, ncats), dtype=np.float64),
                            index=poi_cats, columns=poi_cats)

for seqid in seq_all['seqID'].unique().tolist():
    seqi = seq_all[seq_all['seqID'] == seqid].copy()
    seqi.sort(columns=['arrivalTime'], ascending=True, inplace=True)
    for j in range(len(seqi.index)-1):
        idx1 = seqi.index[j]
        idx2 = seqi.index[j+1]
        poi1 = seqi.loc[idx1, 'poiID']
        poi2 = seqi.loc[idx2, 'poiID']
        cat1 = poi_all.loc[poi1, 'poiTheme']
        cat2 = poi_all.loc[poi2, 'poiTheme']
        transmat_cat.loc[cat1, cat2] += 1
transmat_cat
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
d67e62c04305f13895c7c9e87ee4e85b
Normalise each row to get an estimate of transition probabilities (MLE).
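In symbols, the row normalisation below is the maximum-likelihood estimate of the category transition probabilities from the observed counts $N_{ij}$:

$$\hat{P}(c_{j}\mid c_{i})=\frac{N_{ij}}{\sum_{k}N_{ik}}$$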
for r in transmat_cat.index:
    rowsum = transmat_cat.ix[r].sum()
    if rowsum == 0: continue  # deal with lack of data
    transmat_cat.loc[r] /= rowsum
transmat_cat
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
1181a9e9b48b0b2988415b4ef32c1ac5
Compute the log of transition probabilities with smooth factor $\epsilon=10^{-12}$.
log10_transmat_cat = np.log10(transmat_cat.copy() + 1e-12)
log10_transmat_cat
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
80ad5f0d9443fc6a0a53792a2c744238
<a id='sec4'></a>

4. Trajectory Recommendation -- Approach I

A different leave-one-out cross-validation approach:
- For each user, choose one trajectory (with length >= 3) uniformly at random from all of his/her trajectories as the validation trajectory
- Use all other trajectories (of all users) to 'train' (i.e. compute metrics for the ILP formulation)

<a id='sec4.1'></a>

4.1 Choose Cross Validation Sequences
cv_seqs = seq_all[['userID', 'seqID', 'poiID']].copy().groupby(['userID', 'seqID']).agg(np.size)
cv_seqs.rename(columns={'poiID':'seqLen'}, inplace=True)
cv_seqs = cv_seqs[cv_seqs['seqLen'] > 2]
cv_seqs.reset_index(inplace=True)
print(cv_seqs.shape)
cv_seqs.head()

cv_seq_set = []

# choose one sequence for each user in cv_seqs uniformly at random
for user in cv_seqs['userID'].unique():
    seqlist = cv_seqs[cv_seqs['userID'] == user]['seqID'].tolist()
    seqid = random.choice(seqlist)
    cv_seq_set.append(seqid)

len(cv_seq_set)
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
69fa6c09489835c3e41e609144a5b710
<a id='sec4.2'></a> 4.2 Recommendation by Solving ILPs
def calc_poi_info(seqid_set, seq_all, poi_all): poi_info = seq_all[seq_all['seqID'].isin(seqid_set)][['poiID', 'poiDuration(sec)']].copy() poi_info = poi_info.groupby('poiID').agg([np.mean, np.size]) poi_info.columns = poi_info.columns.droplevel() poi_info.reset_index(inplace=True) poi_info.rename(columns={'mean':'avgDuration(sec)', 'size':'popularity'}, inplace=True) poi_info.set_index('poiID', inplace=True) poi_info['poiTheme'] = poi_all.loc[poi_info.index, 'poiTheme'] poi_info['poiLon'] = poi_all.loc[poi_info.index, 'poiLon'] poi_info['poiLat'] = poi_all.loc[poi_info.index, 'poiLat'] return poi_info.copy() def calc_user_interest(seqid_set, seq_all, poi_all, poi_info): user_interest = seq_all[seq_all['seqID'].isin(seqid_set)][['userID', 'poiID', 'poiDuration(sec)']].copy() user_interest['timeRatio'] = [poi_info.loc[x, 'avgDuration(sec)'] for x in user_interest['poiID']] user_interest['timeRatio'] = user_interest['poiDuration(sec)'] / user_interest['timeRatio'] user_interest['poiTheme'] = [poi_all.loc[x, 'poiTheme'] for x in user_interest['poiID']] user_interest.drop(['poiID', 'poiDuration(sec)'], axis=1, inplace=True) user_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.sum, np.size]) # the sum user_interest.columns = user_interest.columns.droplevel() user_interest.rename(columns={'sum':'timeBased', 'size':'freqBased'}, inplace=True) user_interest.reset_index(inplace=True) user_interest.set_index(['userID', 'poiTheme'], inplace=True) return user_interest.copy() def calc_dist(longitude1, latitude1, longitude2, latitude2): """Calculate the distance (unit: km) between two places on earth""" # convert degrees to radians lon1 = math.radians(longitude1) lat1 = math.radians(latitude1) lon2 = math.radians(longitude2) lat2 = math.radians(latitude2) radius = 6371.009 # mean earth radius is 6371.009km, en.wikipedia.org/wiki/Earth_radius#Mean_radius # The haversine formula, en.wikipedia.org/wiki/Great-circle_distance dlon = math.fabs(lon1 - lon2) dlat = math.fabs(lat1 - lat2) return 2 * radius * math.asin(math.sqrt(\ (math.sin(0.5*dlat))**2 + math.cos(lat1) * math.cos(lat2) * (math.sin(0.5*dlon))**2 )) def calc_dist_mat(poi_info): poi_dist_mat = pd.DataFrame(data=np.zeros((poi_info.shape[0], poi_info.shape[0]), dtype=np.float64), \ index=poi_info.index, columns=poi_info.index) for i in range(poi_info.index.shape[0]): for j in range(i+1, poi_info.index.shape[0]): r = poi_info.index[i] c = poi_info.index[j] dist = calc_dist(poi_info.loc[r, 'poiLon'], poi_info.loc[r, 'poiLat'], \ poi_info.loc[c, 'poiLon'], poi_info.loc[c, 'poiLat']) assert(dist > 0.) poi_dist_mat.loc[r, c] = dist poi_dist_mat.loc[c, r] = dist return poi_dist_mat def calc_seq_budget(user, seq, poi_info, poi_dist_mat, user_interest): """Calculate the travel budget for the given travelling sequence""" assert(len(seq) > 1) budget = 0. 
# travel budget for i in range(len(seq)-1): px = seq[i] py = seq[i+1] assert(px in poi_info.index) assert(py in poi_info.index) budget += 60 * 60 * poi_dist_mat.loc[px, py] / speed # travel time (seconds) caty = poi_info.loc[py, 'poiTheme'] avgtime = poi_info.loc[py, 'avgDuration(sec)'] userint = 0 if (user, caty) in user_interest.index: userint = user_interest.loc[user, caty] # for testing set budget += userint * avgtime # expected visit duration return budget def recommend_ILP(user, budget, startPoi, endPoi, poi_info, poi_dist_mat, eta, speed, user_interest): assert(0 <= eta <= 1); assert(budget > 0) p0 = str(startPoi); pN = str(endPoi); N = poi_info.index.shape[0] # REF: pythonhosted.org/PuLP/index.html pois = [str(p) for p in poi_info.index] # create a string list for each POI prob = pulp.LpProblem('TourRecommendation', pulp.LpMaximize) # create problem # visit_i_j = 1 means POI i and j are visited in sequence visit_vars = pulp.LpVariable.dicts('visit', (pois, pois), 0, 1, pulp.LpInteger) # a dictionary contains all dummy variables dummy_vars = pulp.LpVariable.dicts('u', [x for x in pois if x != p0], 2, N, pulp.LpInteger) # add objective objlist = [] for pi in [x for x in pois if x not in {p0, pN}]: for pj in [y for y in pois if y != p0]: cati = poi_info.loc[int(pi), 'poiTheme'] userint = 0; poipop = 0 if (user, cati) in user_interest.index: userint = user_interest.loc[user, cati] if int(pi) in poi_info.index: poipop = poi_info.loc[int(pi), 'popularity'] objlist.append(visit_vars[pi][pj] * (eta * userint + (1.-eta) * poipop)) prob += pulp.lpSum(objlist), 'Objective' # add constraints, each constraint should be in ONE line prob += pulp.lpSum([visit_vars[p0][pj] for pj in pois if pj != p0]) == 1, 'StartAtp0' prob += pulp.lpSum([visit_vars[pi][pN] for pi in pois if pi != pN]) == 1, 'EndAtpN' for pk in [x for x in pois if x not in {p0, pN}]: prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) == \ pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]), 'Connected_' + pk prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) <= 1, 'LeaveAtMostOnce_' + pk prob += pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]) <= 1, 'EnterAtMostOnce_' + pk costlist = [] for pi in [x for x in pois if x != pN]: for pj in [y for y in pois if y != p0]: catj = poi_info.loc[int(pj), 'poiTheme'] traveltime = 60 * 60 * poi_dist_mat.loc[int(pi), int(pj)] / speed # seconds userint = 0; avgtime = 0 if (user, catj) in user_interest.index: userint = user_interest.loc[user, catj] if int(pj) in poi_info.index: avgtime = poi_info.loc[int(pj), 'avgDuration(sec)'] costlist.append(visit_vars[pi][pj] * (traveltime + userint * avgtime)) prob += pulp.lpSum(costlist) <= budget, 'WithinBudget' for pi in [x for x in pois if x != p0]: for pj in [y for y in pois if y != p0]: prob += dummy_vars[pi] - dummy_vars[pj] + 1 <= (N - 1) * (1 - visit_vars[pi][pj]), \ 'SubTourElimination_' + str(pi) + '_' + str(pj) # solve problem #prob.solve() # using PuLP's default solver #prob.solve(pulp.PULP_CBC_CMD(options=['-threads', '8', '-strategy', '1', '-maxIt', '2000000'])) # CBC #prob.solve(pulp.GLPK_CMD()) # GLPK gurobi_options = [('TimeLimit', '7200'), ('Threads', '18'), ('NodefileStart', '0.9'), ('Cuts', '2')] prob.solve(pulp.GUROBI_CMD(options=gurobi_options)) # GUROBI print('status:', pulp.LpStatus[prob.status]) # print the status of the solution #print('obj:', pulp.value(prob.objective)) # print the optimised objective function value #for v in prob.variables(): # print each variable with it's 
    # resolved optimum value
    # print(v.name, '=', v.varValue)
    # if v.varValue != 0: print(v.name, '=', v.varValue)

    visit_mat = pd.DataFrame(data=np.zeros((len(pois), len(pois)), dtype=np.float64), index=pois, columns=pois)
    for pi in pois:
        for pj in pois:
            visit_mat.loc[pi, pj] = visit_vars[pi][pj].varValue

    # build the recommended trajectory
    recseq = [p0]
    while True:
        pi = recseq[-1]
        pj = visit_mat.loc[pi].idxmax()
        assert(round(visit_mat.loc[pi, pj]) == 1)
        recseq.append(pj); #print(recseq); sys.stdout.flush()
        if pj == pN:
            return [int(x) for x in recseq]

cv_seq_dict = dict()
rec_seq_dict = dict()

for seqid in cv_seq_set:
    seqi = seq_all[seq_all['seqID'] == seqid].copy()
    seqi.sort_values(by='arrivalTime', ascending=True, inplace=True)  # DataFrame.sort() was removed from pandas
    cv_seq_dict[seqid] = seqi['poiID'].tolist()

eta = 0.5
time_based = True
doCompute = True

if os.path.exists(frecseq):
    seq_dict = pickle.load(open(frecseq, 'rb'))
    if (np.array(sorted(cv_seq_dict.keys())) == np.array(sorted(seq_dict.keys()))).all():
        rec_seq_dict = seq_dict
        doCompute = False

if doCompute:
    n = 1
    print('#sequences', len(cv_seq_set))
    for seqid, seq in cv_seq_dict.items():
        train_set = [x for x in seq_all['seqID'].unique() if x != seqid]
        poi_info = calc_poi_info(train_set, seq_all, poi_all)
        user_interest = calc_user_interest(train_set, seq_all, poi_all, poi_info)
        poi_dist_mat = calc_dist_mat(poi_info)
        user = seq_user.loc[seqid].iloc[0]
        the_user_interest = None
        if time_based == True:
            the_user_interest = user_interest['timeBased'].copy()
        else:
            the_user_interest = user_interest['freqBased'].copy()
        budget = calc_seq_budget(user, seq, poi_info, poi_dist_mat, the_user_interest)
        print(n, 'sequence', seq, ', user', user, ', budget', budget); sys.stdout.flush()
        recseq = recommend_ILP(user, budget, seq[0], seq[-1], poi_info, poi_dist_mat, eta, speed, the_user_interest)
        rec_seq_dict[seqid] = recseq
        print('->', recseq, '\n'); sys.stdout.flush()
        n += 1
    pickle.dump(rec_seq_dict, open(frecseq, 'wb'))
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
38c63f9f43cb44a14a1b07523ccad699
<a id='sec4.3'></a>
4.3 Evaluation
Results from the paper (Toronto data, time-based user interest, eta=0.5):
- Recall: 0.779&plusmn;0.10
- Precision: 0.706&plusmn;0.013
- F1-score: 0.732&plusmn;0.012
def calc_recall_precision_F1score(seq_act, seq_rec): assert(len(seq_act) > 0) assert(len(seq_rec) > 0) actset = set(seq_act) recset = set(seq_rec) intersect = actset & recset recall = len(intersect) / len(seq_act) precision = len(intersect) / len(seq_rec) F1score = 2. * precision * recall / (precision + recall) return recall, precision, F1score recall = [] precision = [] F1score = [] for seqid in rec_seq_dict.keys(): assert(seqid in cv_seq_dict) seq = cv_seq_dict[seqid] recseq = rec_seq_dict[seqid] r, p, F1 = calc_recall_precision_F1score(seq, recseq) recall.append(r) precision.append(p) F1score.append(F1) print('Recall:', np.mean(recall), np.std(recall)) print('Precision:', np.mean(precision), np.std(precision)) print('F1-score:', np.mean(F1score), np.std(F1score))
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
8627f33d0031897894b6ecbc73eaedc2
<a id='sec5'></a>
5. Trajectory Recommendation -- Approach II
The paper states: "We evaluate PERSTOUR and the baselines using leave-one-out cross-validation [Kohavi, 1995] (i.e., when evaluating a specific travel sequence of a user, we use this user's other travel sequences for training our algorithms)".
It is not clear whether this means that, when evaluating a travel sequence of a user,
- all other sequences of this user (except the one held out for validation), as well as all sequences of other users, are used for training (i.e. the approach in the section above), or
- leave-one-out is applied per user to construct a testing set (the approach in this section).
A small sketch contrasting the two readings follows below.
<a id='sec5.1'></a>
5.1 Choose Travelling Sequences for Training and Testing
Trajectories with length greater than 3 are used in the paper.
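To make the two readings concrete, here is a small sketch; `seqs_by_user` (a mapping from userID to that user's sequence IDs) is hypothetical and not a variable defined in this notebook:

```python
import random

# Hypothetical illustration of the two readings of "leave-one-out".
# seqs_by_user maps each userID to a list of that user's sequence IDs.
def splits_reading_1(seqs_by_user):
    # Reading 1 (the section above): every sequence is evaluated in turn,
    # training on all remaining sequences of all users.
    all_seqs = [s for seqs in seqs_by_user.values() for s in seqs]
    return [([t for t in all_seqs if t != s], [s]) for s in all_seqs]

def splits_reading_2(seqs_by_user):
    # Reading 2 (this section): for every user with at least two sequences,
    # hold out one randomly chosen sequence for the test set and keep that
    # user's remaining sequences for training; users with a single sequence
    # are skipped entirely.
    train, test = [], []
    for user, seqs in seqs_by_user.items():
        if len(seqs) < 2:
            continue
        held_out = random.choice(seqs)
        test.append(held_out)
        train.extend(s for s in seqs if s != held_out)
    return train, test
```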
seq_ge3 = seq_len[seq_len['seqLen'] >= 3] seq_ge3['seqLen'].hist(bins=20)
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
75e7797292acb3cba7070a537bf4dfcf
Split the travelling sequences into a training set and a testing set using leave-one-out for each user. For testing purposes, users with fewer than two travelling sequences are not considered in this experiment.
train_set = [] test_set = [] user_seqs = seq_ge3[['userID', 'seqID']].groupby('userID') for user, indices in user_seqs.groups.items(): if len(indices) < 2: continue idx = random.choice(indices) test_set.append(seq_ge3.loc[idx, 'seqID']) train_set.extend([seq_ge3.loc[x, 'seqID'] for x in indices if x != idx]) print('#seq in trainset:', len(train_set)) print('#seq in testset:', len(test_set)) seq_ge3[seq_ge3['seqID'].isin(train_set)]['seqLen'].hist(bins=20) #data = np.array(seqs1['seqLen']) #hist, bins = np.histogram(data, bins=3) #print(hist)
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
9df8fd835ad17c397907d11ef834671e
Sanity check: the total number of travelling sequences used in training and testing
seq_exp = seq_ge3[['userID', 'seqID']].copy() seq_exp = seq_exp.groupby('userID').agg(np.size) seq_exp.reset_index(inplace=True) seq_exp.rename(columns={'seqID':'#seq'}, inplace=True) seq_exp = seq_exp[seq_exp['#seq'] > 1] # user with more than 1 sequences print('total #seq for experiment:', seq_exp['#seq'].sum()) #seq_exp.head()
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
8f41ed0e3f0a1ee6251442738c14b91f
<a id='sec5.2'></a>
5.2 Compute POI popularity and user interest using the training set
Compute the average POI visit duration and POI popularity as defined at the top of the notebook.
poi_info = seq_all[seq_all['seqID'].isin(train_set)]
poi_info = poi_info[['poiID', 'poiDuration(sec)']].copy()
poi_info = poi_info.groupby('poiID').agg([np.mean, np.size])
poi_info.columns = poi_info.columns.droplevel()
poi_info.reset_index(inplace=True)
poi_info.rename(columns={'mean':'avgDuration(sec)', 'size':'popularity'}, inplace=True)
poi_info.set_index('poiID', inplace=True)
print('#poi:', poi_info.shape[0])

if poi_info.shape[0] < poi_all.shape[0]:
    extra_index = list(set(poi_all.index) - set(poi_info.index))
    extra_poi = pd.DataFrame(data=np.zeros((len(extra_index), 2), dtype=np.float64), \
                             index=extra_index, columns=['avgDuration(sec)', 'popularity'])
    poi_info = pd.concat([poi_info, extra_poi])  # DataFrame.append() was removed from newer pandas
    print('#poi after extension:', poi_info.shape[0])

poi_info['poiTheme'] = poi_all.loc[poi_info.index, 'poiTheme']
poi_info['poiLon'] = poi_all.loc[poi_info.index, 'poiLon']
poi_info['poiLat'] = poi_all.loc[poi_info.index, 'poiLat']
poi_info.head()
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
6aa25926c8d92af83e4928b3321487c1
Compute time/frequency based user interest as defined at the top of the notebook.
user_interest = seq_all[seq_all['seqID'].isin(train_set)] user_interest = user_interest[['userID', 'poiID', 'poiDuration(sec)']].copy() user_interest['timeRatio'] = [poi_info.loc[x, 'avgDuration(sec)'] for x in user_interest['poiID']] #user_interest[user_interest['poiID'].isin({9, 10, 12, 18, 20, 26})] #user_interest[user_interest['timeRatio'] < 1] user_interest.head() user_interest['timeRatio'] = user_interest['poiDuration(sec)'] / user_interest['timeRatio'] user_interest.head() user_interest['poiTheme'] = [poi_all.loc[x, 'poiTheme'] for x in user_interest['poiID']] user_interest.drop(['poiID', 'poiDuration(sec)'], axis=1, inplace=True)
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
cda7c3d2ea3495c6ae9c3b0544083358
<a id='switch'></a>
The paper defines user interest as a sum, but the sum of (time ratio) * (avg duration) becomes extremely large in some cases, which is unrealistic. Switch between the sum and the mean below to see the effect of each.
#user_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.sum, np.size]) # the sum user_interest = user_interest.groupby(['userID', 'poiTheme']).agg([np.mean, np.size]) # try the mean value user_interest.columns = user_interest.columns.droplevel() #user_interest.rename(columns={'sum':'timeBased', 'size':'freqBased'}, inplace=True) user_interest.rename(columns={'mean':'timeBased', 'size':'freqBased'}, inplace=True) user_interest.reset_index(inplace=True) user_interest.set_index(['userID', 'poiTheme'], inplace=True) user_interest.head() #user_interest.columns.shape[0]
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
42da309bff5784aaa395e3eed4e9f87c
<a id='sec5.3'></a> 5.3 Generate ILP
poi_dist_mat = pd.DataFrame(data=np.zeros((poi_info.shape[0], poi_info.shape[0]), dtype=np.float64), \ index=poi_info.index, columns=poi_info.index) for i in range(poi_info.index.shape[0]): for j in range(i+1, poi_info.index.shape[0]): r = poi_info.index[i] c = poi_info.index[j] dist = calc_dist(poi_info.loc[r, 'poiLon'], poi_info.loc[r, 'poiLat'], \ poi_info.loc[c, 'poiLon'], poi_info.loc[c, 'poiLat']) assert(dist > 0.) poi_dist_mat.loc[r, c] = dist poi_dist_mat.loc[c, r] = dist def generate_ILP(lpFilename, user, budget, startPoi, endPoi, poi_info, poi_dist_mat, eta, speed, user_interest): """Recommend a trajectory given an existing travel sequence S_N, the first/last POI and travel budget calculated based on S_N """ assert(0 <= eta <= 1) assert(budget > 0) p0 = str(startPoi) pN = str(endPoi) N = poi_info.index.shape[0] # The MIP problem # REF: pythonhosted.org/PuLP/index.html # create a string list for each POI pois = [str(p) for p in poi_info.index] # create problem prob = pulp.LpProblem('TourRecommendation', pulp.LpMaximize) # visit_i_j = 1 means POI i and j are visited in sequence visit_vars = pulp.LpVariable.dicts('visit', (pois, pois), 0, 1, pulp.LpInteger) # a dictionary contains all dummy variables dummy_vars = pulp.LpVariable.dicts('u', [x for x in pois if x != p0], 2, N, pulp.LpInteger) # add objective objlist = [] for pi in [x for x in pois if x not in {p0, pN}]: for pj in [y for y in pois if y != p0]: cati = poi_info.loc[int(pi), 'poiTheme'] userint = 0 if (user, cati) in user_interest.index: userint = user_interest.loc[user, cati] objlist.append(visit_vars[pi][pj] * (eta * userint + (1.-eta) * poi_info.loc[int(pi), 'popularity'])) prob += pulp.lpSum(objlist), 'Objective' # add constraints # each constraint should be in ONE line prob += pulp.lpSum([visit_vars[p0][pj] for pj in pois if pj != p0]) == 1, 'StartAtp0' # starts at the first POI prob += pulp.lpSum([visit_vars[pi][pN] for pi in pois if pi != pN]) == 1, 'EndAtpN' # ends at the last POI for pk in [x for x in pois if x not in {p0, pN}]: prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) == \ pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]), \ 'Connected_' + pk # the itinerary is connected prob += pulp.lpSum([visit_vars[pi][pk] for pi in pois if pi != pN]) <= 1, \ 'LeaveAtMostOnce_' + pk # LEAVE POIk at most once prob += pulp.lpSum([visit_vars[pk][pj] for pj in pois if pj != p0]) <= 1, \ 'EnterAtMostOnce_' + pk # ENTER POIk at most once # travel cost within budget costlist = [] for pi in [x for x in pois if x != pN]: for pj in [y for y in pois if y != p0]: catj = poi_info.loc[int(pj), 'poiTheme'] traveltime = 60 * 60 * poi_dist_mat.loc[int(pi), int(pj)] / speed # seconds userint = 0 if (user, catj) in user_interest.index: userint = user_interest.loc[user, catj] costlist.append(visit_vars[pi][pj] * (traveltime + userint * poi_info.loc[int(pj), 'avgDuration(sec)'])) prob += pulp.lpSum(costlist) <= budget, 'WithinBudget' for pi in [x for x in pois if x != p0]: for pj in [y for y in pois if y != p0]: prob += dummy_vars[pi] - dummy_vars[pj] + 1 <= \ (N - 1) * (1 - visit_vars[pi][pj]), \ 'SubTourElimination_' + str(pi) + '_' + str(pj) # TSP sub-tour elimination # write problem data to an .lp file prob.writeLP(lpFilename)
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
8847201b95093c0b908f4d44e7bf4efa
5.3.1 Generate ILPs for training set
def extract_seq(seqid_set, seq_all):
    """Extract the actual sequences (i.e. a list of POI) from a set of sequence IDs"""
    seq_dict = dict()
    for seqid in seqid_set:
        seqi = seq_all[seq_all['seqID'] == seqid].copy()
        seqi.sort_values(by='arrivalTime', ascending=True, inplace=True)  # DataFrame.sort() was removed from pandas
        seq_dict[seqid] = seqi['poiID'].tolist()
    return seq_dict

train_seqs = extract_seq(train_set, seq_all)

lpDir = os.path.join(data_dir, 'lp_' + suffix)
if not os.path.exists(lpDir):
    print('Please create directory "' + lpDir + '"')

eta = 0.5
#eta = 1
time_based = True

for seqid in sorted(train_seqs.keys()):
    if not os.path.exists(lpDir):
        print('Please create directory "' + lpDir + '"')
        break
    seq = train_seqs[seqid]
    lpFile = os.path.join(lpDir, str(seqid) + '.lp')
    user = seq_user.loc[seqid].iloc[0]
    the_user_interest = None
    if time_based == True:
        the_user_interest = user_interest['timeBased'].copy()
    else:
        the_user_interest = user_interest['freqBased'].copy()
    budget = calc_seq_budget(user, seq, poi_info, poi_dist_mat, the_user_interest)
    print('generating ILP', lpFile, 'for user', user, 'sequence', seq, 'budget', round(budget, 2))
    generate_ILP(lpFile, user, budget, seq[0], seq[-1], poi_info, poi_dist_mat, eta, speed, the_user_interest)
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
afaf95db042721ea2df357763421ba43
5.3.2 Generate ILPs for testing set
test_seqs = extract_seq(test_set, seq_all) for seqid in sorted(test_seqs.keys()): if not os.path.exists(lpDir): print('Please create directory "' + lpDir + '"') break seq = test_seqs[seqid] lpFile = os.path.join(lpDir, str(seqid) + '.lp') user = seq_user.loc[seqid].iloc[0] the_user_interest = None if time_based == True: the_user_interest = user_interest['timeBased'].copy() else: the_user_interest = user_interest['freqBased'].copy() budget = calc_seq_budget(user, seq, poi_info, poi_dist_mat, the_user_interest) print('generating ILP', lpFile, 'for user', user, 'sequence', seq, 'budget', round(budget, 2)) generate_ILP(lpFile, user, budget, seq[0], seq[-1], poi_info, poi_dist_mat, eta, speed, the_user_interest)
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
c3b43f8a6a48506ec6981f9012bda27f
<a id='sec5.4'></a> 5.4 Evaluation
def load_solution_gurobi(fsol, startPoi, endPoi): """Load recommended itinerary from MIP solution file by GUROBI""" seqterm = [] with open(fsol, 'r') as f: for line in f: if re.search('^visit_', line): # e.g. visit_0_7 1\n item = line.strip().split(' ') # visit_21_16 1.56406801399038e-09\n if round(float(item[1])) == 1: fromto = item[0].split('_') seqterm.append((int(fromto[1]), int(fromto[2]))) p0 = startPoi pN = endPoi recseq = [p0] while True: px = recseq[-1] for term in seqterm: if term[0] == px: recseq.append(term[1]) if term[1] == pN: return recseq else: seqterm.remove(term) break
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
68a1a5242dab1a9061df309190b904c3
5.4.1 Evaluation on training set
train_seqs_rec = dict()
solDir = os.path.join(data_dir, os.path.join('lp_' + suffix, 'eta05_time'))
#solDir = os.path.join(data_dir, os.path.join('lp_' + suffix, 'eta10_time'))
if not os.path.exists(solDir):
    print('Directory for solution files', solDir, 'does not exist.')

for seqid in sorted(train_seqs.keys()):
    if not os.path.exists(solDir):
        print('Directory for solution files', solDir, 'does not exist.')
        break
    seq = train_seqs[seqid]
    solFile = os.path.join(solDir, str(seqid) + '.lp.sol')
    recseq = load_solution_gurobi(solFile, seq[0], seq[-1])
    train_seqs_rec[seqid] = recseq
    print('Sequence', seqid, 'Actual:', seq, ', Recommended:', recseq)

recall = []
precision = []
F1score = []
for seqid in train_seqs.keys():
    r, p, F1 = calc_recall_precision_F1score(train_seqs[seqid], train_seqs_rec[seqid])
    recall.append(r)
    precision.append(p)
    F1score.append(F1)

# report the standard deviation of each metric (not the std of recall for all three)
print('Recall:',    round(np.mean(recall), 2),    ',', round(np.std(recall), 2))
print('Precision:', round(np.mean(precision), 2), ',', round(np.std(precision), 2))
print('F1-score:',  round(np.mean(F1score), 2),   ',', round(np.std(F1score), 2))
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
37feb5fcfb9163aa0dcade3e6cfee3b2
5.4.2 Evaluation on testing set
Results from the paper (Toronto data, time-based user interest, eta=0.5):
- Recall: 0.779&plusmn;0.10
- Precision: 0.706&plusmn;0.013
- F1-score: 0.732&plusmn;0.012
test_seqs_rec = dict()
solDirTest = os.path.join(data_dir, os.path.join('lp_' + suffix, 'eta05_time.test'))
if not os.path.exists(solDirTest):
    print('Directory for solution files', solDirTest, 'does not exist.')

for seqid in sorted(test_seqs.keys()):
    if not os.path.exists(solDirTest):
        print('Directory for solution files', solDirTest, 'does not exist.')
        break
    seq = test_seqs[seqid]
    solFile = os.path.join(solDirTest, str(seqid) + '.lp.sol')
    recseq = load_solution_gurobi(solFile, seq[0], seq[-1])
    test_seqs_rec[seqid] = recseq
    print('Sequence', seqid, 'Actual:', seq, ', Recommended:', recseq)

recallT = []
precisionT = []
F1scoreT = []
for seqid in test_seqs.keys():
    r, p, F1 = calc_recall_precision_F1score(test_seqs[seqid], test_seqs_rec[seqid])
    recallT.append(r)
    precisionT.append(p)
    F1scoreT.append(F1)

# report the standard deviation of each metric (not the std of recall for all three)
print('Recall:',    round(np.mean(recallT), 2),    ',', round(np.std(recallT), 2))
print('Precision:', round(np.mean(precisionT), 2), ',', round(np.std(precisionT), 2))
print('F1-score:',  round(np.mean(F1scoreT), 2),   ',', round(np.std(F1scoreT), 2))
tour/ijcai15.ipynb
charmasaur/digbeta
gpl-3.0
46861a5064105860c0d5f1f321632fd8
The Pearson's test
Exercise: See the similarities
The above example shows how two number sequences can be compared with nothing more complicated than the dot product. This works as long as the sequences comprise the same numbers, just in a shuffled order. To compare different sequences with the original, we normalise by the magnitude of the vectors. To include this step, we use a more complicated equation:
<img src="eqn_full.gif">
https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
https://en.wikipedia.org/wiki/Cross-correlation
Hopefully you can see that the numerator of this equation is very similar to the dot product, except that it is centred on zero (subtraction of mu, the mean), and that the variance is normalised (division by the standard deviation). Because the equation is normalised, a perfectly correlated sequence yields a rho value of 1.0, a perfectly random comparison yields 0, and two anti-correlated sequences yield a value of -1.0.
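In case the GIF above does not render, the equation in the image is the standard Pearson product-moment correlation coefficient, which can be written as:

$$\rho = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} = \frac{\mathrm{cov}(X, Y)}{\sigma_X \, \sigma_Y}$$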
#The cross-correlation algorithm is another name for the Pearson's test. #Here it is written in code form and utilising the builtin functions: c = [0,1,2] d = [3,4,5] rho = np.average((c-np.average(c))*(d-np.average(d)))/(np.std(c)*np.std(d)) print('rho',np.round(rho,3)) #equally you can write rho = np.dot(c-np.average(c),d-np.average(d))/sqrt(((np.dot(c-np.average(c),c-np.average(c)))*np.dot(d-np.average(d),d-np.average(d)))) print('rho',round(rho,3)) #Why is the rho for c and d, 1.0? #Edit the variables c and d and find the pearson's value for 'a' and 'b'. #What happens when you correlate 'a' with 'a'? #Here is an image from the Fiji practical from tifffile import imread as imreadtiff im = imreadtiff('neuron.tif') print('image dimensions',im.shape, ' im dtype:',im.dtype) subplot(2,2,1) imshow(im[0,:,:],cmap='Blues_r') subplot(2,2,2) imshow(im[1,:,:],cmap='Greens_r') subplot(2,2,3) imshow(im[2,:,:],cmap='Greys_r') subplot(2,2,4) imshow(im[3,:,:],cmap='Reds_r')
day2_colocalisation/2015 Correlation and Colocalisation practical.ipynb
dwaithe/ONBI_image_analysis
gpl-2.0
d7cda7795db8c9b1c829ab7e1d48bf1f
Pearson's comparison of microscopy-derived images
a = im[0,:,:].reshape(-1) b = im[3,:,:].reshape(-1) #Calculate the pearson's coefficent (rho) for the image channel 0, 3. #You should hopefully obtain a value 0.829 #from tifffile import imread as imreadtiff im = imreadtiff('composite.tif') #The organisation of this file is not simple. It is also a 16-bit image. print("shape of im: ",im.shape,"bit-depth: ",im.dtype) #We can assess the image data like so. CH0 = im[0,0,:,:] CH1 = im[1,0,:,:] #Single channels visualisation can handle 16-bit subplot(2,2,1) imshow(CH0,cmap='Reds_r') subplot(2,2,2) imshow(CH1,cmap='Greens_r') subplot(2,2,3) #RGB data have to range between 0 and 255 in each channel and be int (8-bit). imRGB = np.zeros((CH0.shape[0],CH0.shape[1],3)) imRGB[:,:,0] = CH0/255.0 imRGB[:,:,1] = CH1/255.0 imshow((imRGB.astype(np.uint8))) #What is the current Pearson's value for this image?
day2_colocalisation/2015 Correlation and Colocalisation practical.ipynb
dwaithe/ONBI_image_analysis
gpl-2.0
ad299ecd9845c9b7c13145ebb349902d
(Maybe remove so as not to clash with Mark's.)
Last challenge
Exercise: The above image is not registered. Can you devise a way of registering the two channels using the Pearson's test as a measure of their similarity at different offsets? Hint: you will need to shift one of the images relative to the other and measure the colocalisation at each position; the best alignment will have the highest rho value. Produce an image of your fully registered result.
np.max(imRGB/256.0) rho_max = 0 #This moves one of your images with respect to the other. for c in range(1,40): for r in range(1,40): #We need to dynamically sample our image. temp = CH0[c:-40+c,r:-40+r].reshape(-1); #The -40 makes sure they are the same size. ref = CH1[:-40,:-40].reshape(-1); rho = np.dot(temp-np.average(temp),ref-np.average(ref))/sqrt(((np.dot(temp-np.average(temp),temp-np.average(temp)))*np.dot(ref-np.average(ref),ref-np.average(ref)))) #You will need to work out the highest rho value is recorded. #You will then need to find the coordinates of this high rho. #You will then need to provide a visualisation with the image translated. np.max(imRGB) imshow? whos
day2_colocalisation/2015 Correlation and Colocalisation practical.ipynb
dwaithe/ONBI_image_analysis
gpl-2.0
0a2d38fdb50a2eb976fcb8e225619878
Exercise: Read the documentation of scipy.interpolate.interp1d. Pass a keyword argument to interpolate to specify one of the other kinds of interpolation, and run the code again to see what it looks like.
# Solution goes here
notebooks/chap17.ipynb
AllenDowney/ModSimPy
mit
4d274239dcd36e2d6295f9581e8975e9
Exercise: Interpolate the glucose data and generate a plot, similar to the previous one, that shows the data points and the interpolated curve evaluated at the time values in ts.
# Solution goes here
notebooks/chap17.ipynb
AllenDowney/ModSimPy
mit
553979166bfb26f20870ecc9f64ed756
Migrating tf.summary usage to TF 2.0
<table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/tensorboard/migrate"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/migrate.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/migrate.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td> </table>
Note: This document is aimed at users who are already familiar with TensorBoard in TensorFlow 1.x and who want to migrate large TensorFlow code bases from TensorFlow 1.x to 2.0. If you are new to TensorBoard, see the getting-started docs instead. If you are using tf.keras, you may not need to do anything to upgrade to TensorFlow 2.0.
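As a minimal sketch of that last point (not part of the original guide; the toy model and data below are made up for illustration), with tf.keras the summary writing is handled by the TensorBoard callback, so no manual tf.summary calls are needed:

```python
import tensorflow as tf

# Hypothetical toy model: the TensorBoard callback writes the summaries for us.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((32, 4))   # made-up training data
y = tf.random.normal((32, 1))

tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="/tmp/mylogs/keras")
model.fit(x, y, epochs=2, callbacks=[tensorboard_cb], verbose=0)
```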
import tensorflow as tf
site/zh-cn/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
a1348ad618bc908eff8fcd9fcfde400b
TensorFlow 2.0 includes significant changes to the tf.summary API used to write summary data for visualization in TensorBoard.
What's changed
It is useful to think of the tf.summary API as two sub-APIs:
- A set of ops for recording individual summaries - summary.scalar(), summary.histogram(), summary.image(), summary.audio(), and summary.text() - which are called inline from your model code.
- Writing logic that collects these individual summaries and writes them to a specially formatted log file (which TensorBoard then reads to generate visualizations).
In TF 1.x
The two halves had to be wired together manually - by fetching the summary op outputs via Session.run() and calling FileWriter.add_summary(output, step). The v1.summary.merge_all() op made this easier by using a graph collection to aggregate all summary op outputs, but this approach still worked poorly for eager execution and control flow, making it especially ill-suited for TF 2.0.
In TF 2.X
The two halves are tightly integrated: individual tf.summary ops now write their data immediately when executed. Using the API from your model code looks much as it did before, but it is now friendly to eager execution while remaining compatible with graph mode. Integrating the two sub-APIs means that summary.FileWriter is now part of the TensorFlow execution context and is accessed directly by the tf.summary ops, so configuring the writers is the main difference.
Example usage with eager execution, the default in TF 2.0:
writer = tf.summary.create_file_writer("/tmp/mylogs/eager") with writer.as_default(): for step in range(100): # other model code would go here tf.summary.scalar("my_metric", 0.5, step=step) writer.flush() ls /tmp/mylogs/eager
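The same writer pattern applies to the other summary ops mentioned above (histogram, image, text, and so on). A quick sketch, not taken from the original guide, with made-up names and random data:

```python
import tensorflow as tf

# Sketch only: several summary op types written under one default writer.
writer = tf.summary.create_file_writer("/tmp/mylogs/op_types")
with writer.as_default():
    tf.summary.scalar("loss", 0.345, step=0)
    tf.summary.histogram("weights", tf.random.normal([100]), step=0)
    tf.summary.image("input_image", tf.random.uniform([1, 28, 28, 1]), step=0)
    tf.summary.text("run_note", tf.constant("a free-form note"), step=0)
writer.flush()
```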
site/zh-cn/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
61f32b323166423fa6353ff061e7103f
Example usage with tf.function graph execution:
writer = tf.summary.create_file_writer("/tmp/mylogs/tf_function") @tf.function def my_func(step): with writer.as_default(): # other model code would go here tf.summary.scalar("my_metric", 0.5, step=step) for step in tf.range(100, dtype=tf.int64): my_func(step) writer.flush() ls /tmp/mylogs/tf_function
site/zh-cn/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
15c12598001e41819822a1c7e304281e
Example usage with legacy TF 1.x graph execution:
g = tf.compat.v1.Graph() with g.as_default(): step = tf.Variable(0, dtype=tf.int64) step_update = step.assign_add(1) writer = tf.summary.create_file_writer("/tmp/mylogs/session") with writer.as_default(): tf.summary.scalar("my_metric", 0.5, step=step) all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops() writer_flush = writer.flush() with tf.compat.v1.Session(graph=g) as sess: sess.run([writer.init(), step.initializer]) for i in range(100): sess.run(all_summary_ops) sess.run(step_update) sess.run(writer_flush) ls /tmp/mylogs/session
site/zh-cn/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
c86a0d449ad7d0e7b960d30828992816
The figure below shows the input data matrix; the current batch batchX_placeholder is in the dashed rectangle. As we will see later, this "batch window" is slid truncated_backprop_length steps to the right at each run, hence the arrow. In the example below, batch_size = 3, truncated_backprop_length = 3, and total_series_length = 36. Note that these numbers are just for visualization purposes; the values are different in the code. The series order index is shown as numbers on a few of the data points.
Image(url= "https://cdn-images-1.medium.com/max/1600/1*n45uYnAfTDrBvG87J-poCA.jpeg") #Now it’s time to build the part of the graph that resembles the actual RNN computation, #first we want to split the batch data into adjacent time-steps. # Unpack columns #Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors. #so a bunch of arrays, 1 batch per time step # Change to unstack for new version of TF inputs_series = tf.unstack(batchX_placeholder, axis=1) labels_series = tf.unstack(batchY_placeholder, axis=1)
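To see in plain numpy what the sliding "batch window" looks like, here is a small sketch using the visualization numbers quoted above (these values are for illustration only; the actual code uses different ones):

```python
import numpy as np

total_series_length = 36
batch_size = 3
truncated_backprop_length = 3

# A toy series 0..35, reshaped so each row is one "parallel" sub-series.
series = np.arange(total_series_length)
data_matrix = series.reshape((batch_size, -1))            # shape (3, 12)

# Slide the batch window truncated_backprop_length steps at a time.
num_windows = data_matrix.shape[1] // truncated_backprop_length
for w in range(num_windows):
    start = w * truncated_backprop_length
    batchX = data_matrix[:, start:start + truncated_backprop_length]  # shape (3, 3)
    print("window", w, "\n", batchX)
```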
How-to-Use-Tensorflow-for-Time-Series-Live--master/demo_full_notes.ipynb
swirlingsand/deep-learning-foundations
mit
e9ef315bcc820267a67ed9e40086041c
Project 3D electrodes to a 2D snapshot Because we have the 3D location of each electrode, we can use the :func:mne.viz.snapshot_brain_montage function to return a 2D image along with the electrode positions on that image. We use this in conjunction with :func:mne.viz.plot_alignment, which visualizes electrode positions.
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir, surfaces=['pial'], meg=False) mlab.view(200, 70) xy, im = snapshot_brain_montage(fig, mon) # Convert from a dictionary to array to plot xy_pts = np.vstack([xy[ch] for ch in info['ch_names']]) # Define an arbitrary "activity" pattern for viz activity = np.linspace(100, 200, xy_pts.shape[0]) # This allows us to use matplotlib to create arbitrary 2d scatterplots fig2, ax = plt.subplots(figsize=(10, 10)) ax.imshow(im) ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm') ax.set_axis_off() # fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage
0.18/_downloads/66fec418bceb5ce89704fb8b44930330/plot_3d_to_2d.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
1cee83e4b0f0bb5abe39a959d1cecfe6
Custom STIX Content Custom Properties Attempting to create a STIX object with properties not defined by the specification will result in an error. Try creating an Identity object with a custom x_foo property:
from stix2 import Identity Identity(name="John Smith", identity_class="individual", x_foo="bar")
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
b7889192bf1ca1ab32168c348c59d7b3
To create a STIX object with one or more custom properties, pass them in as a dictionary parameter called custom_properties:
identity = Identity(name="John Smith", identity_class="individual", custom_properties={ "x_foo": "bar" }) print(identity.serialize(pretty=True))
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
eefbb595880c62c5b378550cafb9b4f6
Alternatively, setting allow_custom to True will allow custom properties without requiring a custom_properties dictionary.
identity2 = Identity(name="John Smith", identity_class="individual", x_foo="bar", allow_custom=True) print(identity2.serialize(pretty=True))
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
0e0a40d28fb1e37b2bcb69068fe76008
Likewise, when parsing STIX content with custom properties, pass allow_custom=True to parse():
from stix2 import parse input_string = """{ "type": "identity", "spec_version": "2.1", "id": "identity--311b2d2d-f010-4473-83ec-1edf84858f4c", "created": "2015-12-21T19:59:11Z", "modified": "2015-12-21T19:59:11Z", "name": "John Smith", "identity_class": "individual", "x_foo": "bar" }""" identity3 = parse(input_string, allow_custom=True) print(identity3.x_foo)
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
1c685be5523bbfe41bccc356af2720a1
To remove a custom property, use new_version() and set that property to None.
identity4 = identity3.new_version(x_foo=None) print(identity4.serialize(pretty=True))
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
1acbe649aaf5b58608a3926430b91e12
Custom STIX Object Types To create a custom STIX object type, define a class with the @CustomObject decorator. It takes the type name and a list of property tuples, each tuple consisting of the property name and a property instance. Any special validation of the properties can be added by supplying an __init__ function. Let's say zoo animals have become a serious cyber threat and we want to model them in STIX using a custom object type. Let's use a species property to store the kind of animal, and make that property required. We also want a property to store the class of animal, such as "mammal" or "bird" but only want to allow specific values in it. We can add some logic to validate this property in __init__.
from stix2 import CustomObject, properties @CustomObject('x-animal', [ ('species', properties.StringProperty(required=True)), ('animal_class', properties.StringProperty()), ]) class Animal(object): def __init__(self, animal_class=None, **kwargs): if animal_class and animal_class not in ['mammal', 'bird', 'fish', 'reptile']: raise ValueError("'%s' is not a recognized class of animal." % animal_class)
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
6c4ff617eb1f8836438a4d488d26bfad
Now we can create an instance of our custom Animal type.
animal = Animal(species="lion", animal_class="mammal") print(animal.serialize(pretty=True))
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
4782d160533e9f858d1f3af4791a05ee
Trying to create an Animal instance with an animal_class that's not in the list will result in an error:
Animal(species="xenomorph", animal_class="alien")
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
e53bcf56f2084e4bc6acb6e3021b6205
Parsing custom object types that you have already defined is simple and no different from parsing any other STIX object.
input_string2 = """{ "type": "x-animal", "id": "x-animal--941f1471-6815-456b-89b8-7051ddf13e4b", "created": "2015-12-21T19:59:11Z", "modified": "2015-12-21T19:59:11Z", "spec_version": "2.1", "species": "shark", "animal_class": "fish" }""" animal2 = parse(input_string2) print(animal2.species)
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
8e4e9238a13faa2e771221d5b385cb91
However, parsing custom object types which you have not defined will result in an error:
input_string3 = """{ "type": "x-foobar", "id": "x-foobar--d362beb5-a04e-4e6b-a030-b6935122c3f9", "created": "2015-12-21T19:59:11Z", "modified": "2015-12-21T19:59:11Z", "bar": 1, "baz": "frob" }""" parse(input_string3)
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
4a4a098ea58a524a5593b70ccdd35c88
Custom Cyber Observable Types Similar to custom STIX object types, use a decorator to create custom Cyber Observable types. Just as before, __init__() can hold additional validation, but it is not necessary.
from stix2 import CustomObservable @CustomObservable('x-new-observable', [ ('a_property', properties.StringProperty(required=True)), ('property_2', properties.IntegerProperty()), ]) class NewObservable(): pass new_observable = NewObservable(a_property="something", property_2=10) print(new_observable.serialize(pretty=True))
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
de4712cc0fb039208eeab3512214f4fd
Likewise, after the custom Cyber Observable type has been defined, it can be parsed.
from stix2 import ObservedData input_string4 = """{ "type": "observed-data", "id": "observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf", "spec_version": "2.1", "created_by_ref": "identity--f431f809-377b-45e0-aa1c-6a4751cae5ff", "created": "2016-04-06T19:58:16.000Z", "modified": "2016-04-06T19:58:16.000Z", "first_observed": "2015-12-21T19:00:00Z", "last_observed": "2015-12-21T19:00:00Z", "number_observed": 50, "objects": { "0": { "type": "x-new-observable", "a_property": "foobaz", "property_2": 5 } } }""" obs_data = parse(input_string4) print(obs_data.objects["0"].a_property) print(obs_data.objects["0"].property_2)
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
cf14d32f87ffa0617054cee7f31896bc
ID-Contributing Properties for Custom Cyber Observables STIX 2.1 Cyber Observables (SCOs) have deterministic IDs, meaning that the ID of a SCO is based on the values of some of its properties. Thus, if multiple cyber observables of the same type have the same values for their ID-contributing properties, then these SCOs will have the same ID. UUIDv5 is used for the deterministic IDs, using the namespace "00abedb4-aa42-466c-9c01-fed23315a9b7". A SCO's ID-contributing properties may consist of a combination of required properties and optional properties. If a SCO type does not have any ID contributing properties defined, or all of the ID-contributing properties are not present on the object, then the SCO uses a randomly-generated UUIDv4. Thus, you can optionally define which of your custom SCO's properties should be ID-contributing properties. Similar to standard SCOs, your custom SCO's ID-contributing properties can be any combination of the SCO's required and optional properties. You define the ID-contributing properties when defining your custom SCO with the CustomObservable decorator. After the list of properties, you can optionally define the list of id-contributing properties. If you do not want to specify any id-contributing properties for your custom SCO, then you do not need to do anything additional. See the example below:
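As a rough illustration of the deterministic-ID idea only (the exact canonical serialization is defined by the STIX 2.1 specification and handled by the library, so the values produced by this hand-rolled sketch may differ from the library's):

```python
import json
import uuid

# Sketch: derive a UUIDv5 from the SCO namespace quoted above and a JSON
# serialization of the ID-contributing properties. Illustrative only.
SCO_NAMESPACE = uuid.UUID("00abedb4-aa42-466c-9c01-fed23315a9b7")

def sketch_deterministic_id(type_name, id_properties):
    name = json.dumps(id_properties, sort_keys=True, separators=(",", ":"))
    return "{}--{}".format(type_name, uuid.uuid5(SCO_NAMESPACE, name))

# Same ID-contributing values -> same ID; different values -> different ID.
print(sketch_deterministic_id("x-new-observable-2", {"a_property": "A property"}))
print(sketch_deterministic_id("x-new-observable-2", {"a_property": "A property"}))
print(sketch_deterministic_id("x-new-observable-2", {"a_property": "A different property"}))
```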
from stix2 import CustomObservable @CustomObservable('x-new-observable-2', [ ('a_property', properties.StringProperty(required=True)), ('property_2', properties.IntegerProperty()), ], [ 'a_property' ]) class NewObservable2(): pass new_observable_a = NewObservable2(a_property="A property", property_2=2000) print(new_observable_a.serialize(pretty=True)) new_observable_b = NewObservable2(a_property="A property", property_2=3000) print(new_observable_b.serialize(pretty=True)) new_observable_c = NewObservable2(a_property="A different property", property_2=3000) print(new_observable_c.serialize(pretty=True))
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
2358d2a72558cfd835e713b39a60e886
In this example, a_property is the only id-contributing property. Notice that the ID for new_observable_a and new_observable_b is the same since they have the same value for the id-contributing a_property property. Custom Cyber Observable Extensions Finally, custom extensions to existing Cyber Observable types can also be created. Just use the @CustomExtension decorator. Note that you must provide the Cyber Observable class to which the extension applies. Again, any extra validation of the properties can be implemented by providing an __init__() but it is not required. Let's say we want to make an extension to the File Cyber Observable Object:
from stix2 import CustomExtension @CustomExtension('x-new-ext', [ ('property1', properties.StringProperty(required=True)), ('property2', properties.IntegerProperty()), ]) class NewExtension(): pass new_ext = NewExtension(property1="something", property2=10) print(new_ext.serialize(pretty=True))
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
1815eea7c322cbc184f225e496e2858e
Once the custom Cyber Observable extension has been defined, it can be parsed.
input_string5 = """{ "type": "observed-data", "id": "observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf", "spec_version": "2.1", "created_by_ref": "identity--f431f809-377b-45e0-aa1c-6a4751cae5ff", "created": "2016-04-06T19:58:16.000Z", "modified": "2016-04-06T19:58:16.000Z", "first_observed": "2015-12-21T19:00:00Z", "last_observed": "2015-12-21T19:00:00Z", "number_observed": 50, "objects": { "0": { "type": "file", "name": "foo.bar", "hashes": { "SHA-256": "35a01331e9ad96f751278b891b6ea09699806faedfa237d40513d92ad1b7100f" }, "extensions": { "x-new-ext": { "property1": "bla", "property2": 50 } } } } }""" obs_data2 = parse(input_string5) print(obs_data2.objects["0"].extensions["x-new-ext"].property1) print(obs_data2.objects["0"].extensions["x-new-ext"].property2)
docs/guide/custom.ipynb
oasis-open/cti-python-stix2
bsd-3-clause
b827777da9d69113c50b3f4d00e2ebe1
<table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/io/tutorials/genome"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/genome.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/genome.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/genome.ipynb">Download notebook</a></td> </table>
Overview
This tutorial demonstrates the tfio.genome package, which provides commonly used genomics IO functionality - namely, reading several genomics file formats and providing some common operations for preparing the data (for example, one-hot encoding or parsing Phred quality into probabilities).
This package uses the Google Nucleus library to provide some of the core functionality.
Setup
try: %tensorflow_version 2.x except Exception: pass !pip install tensorflow-io import tensorflow_io as tfio import tensorflow as tf
site/zh-cn/io/tutorials/genome.ipynb
tensorflow/docs-l10n
apache-2.0
c2e35b213431595810f806d3bdd0d974
FASTQ data
FASTQ is a common genomics file format that stores sequence information in addition to base quality information.
First, let's download a sample fastq file.
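For orientation, each FASTQ record is four lines: an @-prefixed identifier, the base sequence, a "+" separator line, and a Phred-encoded quality string of the same length as the sequence. The record below is made up purely for illustration and is not taken from the sample file:

```python
# A made-up FASTQ record (illustrative only -- not from test.fastq):
example_record = (
    "@read_001\n"    # identifier line
    "GATTACA\n"      # base sequence
    "+\n"            # separator (may optionally repeat the identifier)
    "!''*((+\n"      # Phred-encoded per-base qualities, same length as the sequence
)
print(example_record)
```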
# Download some sample data: !curl -OL https://raw.githubusercontent.com/tensorflow/io/master/tests/test_genome/test.fastq
site/zh-cn/io/tutorials/genome.ipynb
tensorflow/docs-l10n
apache-2.0
1e02e436822d2bf745927e6375338146
Read FASTQ data
Now, let's use tfio.genome.read_fastq to read this file (note that a tf.data API is coming soon).
fastq_data = tfio.genome.read_fastq(filename="test.fastq") print(fastq_data.sequences) print(fastq_data.raw_quality)
site/zh-cn/io/tutorials/genome.ipynb
tensorflow/docs-l10n
apache-2.0
50d0c4fbfcb4e419edaeb0f145ae2d13
As you can see, the returned fastq_data has fastq_data.sequences, a string tensor of all the sequences in the fastq file (which can each be a different size), along with fastq_data.raw_quality, which contains Phred-encoded quality information about each base read in the sequence.
Quality
If you are interested, you can use a helper op to convert this quality information into probabilities.
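Under the hood this is the standard Phred relationship. Assuming the common Phred+33 ASCII encoding (an assumption, not stated in the tutorial), a quality character maps to an error probability as sketched below; note that the library helper may return either this error probability or its complement, so treat this only as a sketch:

```python
# Sketch of the Phred relationship, assuming Phred+33 encoding:
# quality score Q = ord(char) - 33, error probability = 10 ** (-Q / 10).
def phred_char_to_error_probability(char, offset=33):
    q = ord(char) - offset
    return 10.0 ** (-q / 10.0)

for char in "!+5?I":  # a few example quality characters
    print(char, round(phred_char_to_error_probability(char), 6))
```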
quality = tfio.genome.phred_sequences_to_probability(fastq_data.raw_quality) print(quality.shape) print(quality.row_lengths().numpy()) print(quality)
site/zh-cn/io/tutorials/genome.ipynb
tensorflow/docs-l10n
apache-2.0
0fca47f209079f2d213d97b43d3ead16
One-hot encoding
You may also want to encode the genome sequence data (which consists of the A, T, C, and G bases) using a one-hot encoder. There is a built-in operation that can help with this.
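As a quick illustration of what one-hot encoding a base sequence means, here is a hand-rolled sketch (illustrative only; tfio.genome.sequences_to_onehot is the real helper, and the base-to-column ordering below is an assumption):

```python
import numpy as np

# Hand-rolled one-hot encoding for an A/C/G/T sequence (sketch only).
BASE_TO_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def onehot(seq):
    out = np.zeros((len(seq), 4), dtype=np.int32)
    for i, base in enumerate(seq):
        out[i, BASE_TO_INDEX[base]] = 1
    return out

print(onehot("GATTACA"))
```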
print(tfio.genome.sequences_to_onehot.__doc__)
site/zh-cn/io/tutorials/genome.ipynb
tensorflow/docs-l10n
apache-2.0
ebfd9bd271eb1812eea42a68b04159a8
We will often define functions to take optional keyword arguments, like this:
def hello(name, loud=False):
    if loud:
        print('HELLO, %s!' % name.upper())
    else:
        print('Hello, %s!' % name)

hello('Bob')

loud = True
hello('Fred', loud=loud)   # pass the optional argument by keyword
python-tutorial.ipynb
w4zir/ml17s
mit
174887841d6c78f0bec455d231820a0d
KNN Classifier
# read X and y
# cols = ['pclass','sex','age','fare']
cols = ['pclass','sex','age']
X = dframe[cols]
y = dframe[["survived"]]
dframe.head()

# Use scikit-learn KNN classifier to predict survival probability
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y.values.ravel())   # ravel to pass a 1-d label array

# check accuracy
neigh.score(X, y)

# define a passenger
passenger = [1,1,29]

# predict survival label
print(neigh.predict([passenger]))

# predict survival probability
print(neigh.predict_proba([passenger]))

# find k-nearest neighbors (expects a 2-d array of query points)
neigh.kneighbors([passenger], 3)

# Let's create some data for DiCaprio and Winslet and you
import numpy as np
colsidx = [0,2,3]
dicaprio = np.array([3, 'Jack Dawson', 0, 19, 0, 0, 'N/A', 5.0000])
winslet = np.array([1, 'Rose DeWitt Bukater', 1, 17, 1, 2, 'N/A', 100.0000])
you = np.array([1, 'user', 1, 21, 0, 2, 'N/A', 50.0000])

# Preprocess data: keep pclass, sex and age, and convert the values to numbers
dicaprio = dicaprio[colsidx].astype(float)
winslet = winslet[colsidx].astype(float)
you = you[colsidx].astype(float)

# Predict surviving chances (class 1 results)
pred = neigh.predict([dicaprio, winslet, you])
prob = neigh.predict_proba([dicaprio, winslet, you])
print("DiCaprio Surviving:", pred[0], " with probability", prob[0])
print("Winslet Surviving Rate:", pred[1], " with probability", prob[1])
print("user Surviving Rate:", pred[2], " with probability", prob[2])
python-tutorial.ipynb
w4zir/ml17s
mit
ff85ede85859a2e0f242472a72378ddc
Get Movielens-1M data
This will download the MovieLens-1M dataset from http://grouplens.org/datasets/movielens/:
data, genres = get_movielens_data(get_genres=True) data.head() data.info() genres.head() %matplotlib inline
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
2c73782aa1a524c0172f23aca7dd3000