The default score method for the MKSHomogenizationModel is the R-squared value. Let's look at how the mean R-squared values and their standard deviations change as we vary n_components and degree, using draw_gridscores_matrix from pymks.tools.
from pymks.tools import draw_gridscores_matrix

draw_gridscores_matrix(gs, ['n_components', 'degree'], score_label='R-Squared',
                       param_labels=['Number of Components', 'Order of Polynomial'])
notebooks/stress_homogenization_2D.ipynb
XinyiGong/pymks
mit
ddea96f862c268df04f928ff805f1828
For the parameter range that we searched, we found that a model with a 3rd-order polynomial and 3 components had the best R-squared value. It's difficult to see the differences in the score values and the standard deviations when we have 3 or more components. Let's take a closer look at those values using draw_gridscores.
from pymks.tools import draw_gridscores

gs_deg_1 = [x for x in gs.grid_scores_ if x.parameters['degree'] == 1][2:-1]
gs_deg_2 = [x for x in gs.grid_scores_ if x.parameters['degree'] == 2][2:-1]
gs_deg_3 = [x for x in gs.grid_scores_ if x.parameters['degree'] == 3][2:-1]

draw_gridscores([gs_deg_1, gs_deg_2, gs_deg_3], 'n_components',
                data_labels=['1st Order', '2nd Order', '3rd Order'],
                colors=['#f46d43', '#1a9641', '#762a83'],
                param_label='Number of Components', score_label='R-Squared')
notebooks/stress_homogenization_2D.ipynb
XinyiGong/pymks
mit
d8d316893336fe4728c4592b929e366b
Prediction using MKSHomogenizationModel Now that we have selected values for n_components and degree, let's fit the model with the data. Again, because our microstructures are periodic, we need to use the periodic_axes argument.
model.fit(X, y, periodic_axes=[0, 1])
notebooks/stress_homogenization_2D.ipynb
XinyiGong/pymks
mit
4dc2b892d2cd2cd315a7d5d53a9bdc6f
Lastly, we can also evaluate our prediction by looking at a goodness-of-fit plot. We can do this by importing draw_goodness_of_fit from pymks.tools.
from pymks.tools import draw_goodness_of_fit

fit_data = np.array([y, model.predict(X, periodic_axes=[0, 1])])
pred_data = np.array([y_new, y_predict])

draw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Testing Data'])
notebooks/stress_homogenization_2D.ipynb
XinyiGong/pymks
mit
efc52d53759eefc4cd1fe751b29d8b75
Step 1: The Data Science of Shoelaces Nike has hired you as a data science consultant to help them save money on shoe materials. Your first assignment is to review a model one of their employees built to predict how many shoelaces they'll need each month. The features going into the machine learning model include:
- The current month (January, February, etc.)
- Advertising expenditures in the previous month
- Various macroeconomic features (like the unemployment rate) as of the beginning of the current month
- The amount of leather they ended up using in the current month

The results show the model is almost perfectly accurate if you include the feature about how much leather they used. But it is only moderately accurate if you leave that feature out. You realize this is because the amount of leather they use is a perfect indicator of how many shoes they produce, which in turn tells you how many shoelaces they need. Do you think the leather used feature constitutes a source of data leakage? If your answer is "it depends," what does it depend on? After you have thought about your answer, check it against the solution below.
# Check your answer (Run this code cell to receive credit!)
q_1.check()
notebooks/ml_intermediate/raw/ex7.ipynb
Kaggle/learntools
apache-2.0
a13bd086531ce523d544998479366778
Step 2: Return of the Shoelaces You have a new idea. You could use the amount of leather Nike ordered (rather than the amount they actually used) leading up to a given month as a predictor in your shoelace model. Does this change your answer about whether there is a leakage problem? If you answer "it depends," what does it depend on?
# Check your answer (Run this code cell to receive credit!)
q_2.check()
notebooks/ml_intermediate/raw/ex7.ipynb
Kaggle/learntools
apache-2.0
e00dd49b4751062cd8d11aea096b47b1
Step 3: Getting Rich With Cryptocurrencies? You saved Nike so much money that they gave you a bonus. Congratulations. Your friend, who is also a data scientist, says he has built a model that will let you turn your bonus into millions of dollars. Specifically, his model predicts the price of a new cryptocurrency (like Bitcoin, but a newer one) one day ahead of the moment of prediction. His plan is to purchase the cryptocurrency whenever the model says the price of the currency (in dollars) is about to go up. The most important features in his model are:
- Current price of the currency
- Amount of the currency sold in the last 24 hours
- Change in the currency price in the last 24 hours
- Change in the currency price in the last 1 hour
- Number of new tweets in the last 24 hours that mention the currency

The value of the cryptocurrency in dollars has fluctuated up and down by over $\$$100 in the last year, and yet his model's average error is less than $\$$1. He says this is proof his model is accurate, and you should invest with him, buying the currency whenever the model says it is about to go up. Is he right? If there is a problem with his model, what is it?
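Before checking the solution, it can help to see why a small average error is not the same as a useful trading signal. The following is an illustrative sketch on simulated (hypothetical) random-walk prices, not part of the exercise: a model that always predicts "tomorrow's price = today's price" achieves a small mean absolute error while saying nothing about the direction of the next move.

```python
import numpy as np

rng = np.random.default_rng(0)
price = 100 + np.cumsum(rng.normal(0, 1, 365))   # simulated daily closing prices
prediction = price[:-1]                          # naive model: tomorrow = today
actual = price[1:]

# Small average error, yet the model never predicts whether the price will rise or fall
print("mean absolute error:", np.abs(prediction - actual).mean())
```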
# Check your answer (Run this code cell to receive credit!)
q_3.check()
notebooks/ml_intermediate/raw/ex7.ipynb
Kaggle/learntools
apache-2.0
a089c441db1e4986adf5388baa97dee0
Step 4: Preventing Infections An agency that provides healthcare wants to predict which patients from a rare surgery are at risk of infection, so it can alert the nurses to be especially careful when following up with those patients. You want to build a model. Each row in the modeling dataset will be a single patient who received the surgery, and the prediction target will be whether they got an infection. Some surgeons may do the procedure in a manner that raises or lowers the risk of infection. But how can you best incorporate the surgeon information into the model? You have a clever idea.
1. Take all surgeries by each surgeon and calculate the infection rate for those surgeries.
2. For each patient in the data, find out who the surgeon was and plug in that surgeon's average infection rate as a feature.

Does this pose any target leakage issues? Does it pose any train-test contamination issues?
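As a hint toward one part of the answer, the sketch below (with hypothetical column names, not the exercise's data) shows how a surgeon-level infection rate could be computed while excluding each patient's own outcome, which is one way to avoid feeding the target back into its own feature.

```python
import pandas as pd

df = pd.DataFrame({'surgeon': ['A', 'A', 'A', 'B', 'B'],
                   'infection': [1, 0, 0, 1, 1]})

totals = df.groupby('surgeon')['infection'].transform('sum')
counts = df.groupby('surgeon')['infection'].transform('count')

# Leave-one-out rate: drop the current patient's outcome from numerator and denominator
df['surgeon_infection_rate'] = (totals - df['infection']) / (counts - 1)
print(df)
```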
# Check your answer (Run this code cell to receive credit!)
q_4.check()
notebooks/ml_intermediate/raw/ex7.ipynb
Kaggle/learntools
apache-2.0
77e7923b6691234b172ed68b8bddc0c7
Step 5: Housing Prices You will build a model to predict housing prices. The model will be deployed on an ongoing basis, to predict the price of a new house when a description is added to a website. Here are four features that could be used as predictors.
1. Size of the house (in square meters)
2. Average sales price of homes in the same neighborhood
3. Latitude and longitude of the house
4. Whether the house has a basement

You have historic data to train and validate the model. Which of the features is most likely to be a source of leakage?
# Fill in the line below with one of 1, 2, 3 or 4.
potential_leakage_feature = ____

# Check your answer
q_5.check()

#%%RM_IF(PROD)%%
potential_leakage_feature = 1
q_5.assert_check_failed()

#%%RM_IF(PROD)%%
potential_leakage_feature = 2
q_5.assert_check_passed()

#_COMMENT_IF(PROD)_
q_5.hint()

#_COMMENT_IF(PROD)_
q_5.solution()
notebooks/ml_intermediate/raw/ex7.ipynb
Kaggle/learntools
apache-2.0
6712270eda9eeadd5af02e950dbb48db
Data Preparation and Model Selection Now we are ready to test the XGBoost approach. We will use the confusion matrix and f1_score, which were imported earlier, as classification metrics, along with GridSearchCV, an excellent tool for parameter optimization.
import xgboost as xgb

X_train = training_data.drop(['Facies', 'Well Name', 'Formation', 'Depth'], axis=1)
Y_train = training_data['Facies'] - 1
dtrain = xgb.DMatrix(X_train, Y_train)

train = X_train.copy()
train['Facies'] = Y_train
train.head()
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
2070e90e9ce71765abfd3e1cd44cc248
General Approach for Parameter Tuning We are going to perform the following steps:
1. Choose a relatively high learning rate, e.g., 0.1. Usually somewhere between 0.05 and 0.3 works for different problems.
2. Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called "cv" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.
3. Tune the tree-based parameters (max_depth, min_child_weight, gamma, subsample, colsample_bytree) for the chosen learning rate and number of trees.
4. Tune the regularization parameters (lambda, alpha), which can help reduce model complexity and enhance performance.
5. Lower the learning rate and decide on the optimal parameters.

Step 1: Fix the learning rate and number of estimators for tuning tree-based parameters
In order to decide on the boosting parameters, we need to set some initial values for the other parameters. Let's take the following values:
1. max_depth = 5
2. min_child_weight = 1
3. gamma = 0
4. subsample, colsample_bytree = 0.8: a commonly used starting value.
5. scale_pos_weight = 1

Please note that all of the above are just initial estimates and will be tuned later. Let's take the default learning rate of 0.1 here and check the optimum number of trees using the cv function of xgboost. The modelfit helper (defined earlier in the notebook; a rough sketch follows) will do this for us.
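Since modelfit itself is not reproduced in this excerpt, here is a hedged sketch of what such a helper typically does; the 9-class assumption and the 'Facies' target column follow the cells below, but the notebook's exact implementation may differ.

```python
def modelfit(alg, dtrain_df, predictors, cv_folds=5, early_stopping_rounds=50):
    # Use xgb.cv to find the optimum number of boosting rounds for the current learning rate
    xgb_param = alg.get_xgb_params()
    xgb_param['num_class'] = 9  # assumption: 9 facies classes
    dtrain = xgb.DMatrix(dtrain_df[predictors].values, label=dtrain_df['Facies'].values)
    cvresult = xgb.cv(xgb_param, dtrain,
                      num_boost_round=alg.get_params()['n_estimators'],
                      nfold=cv_folds, metrics='merror',
                      early_stopping_rounds=early_stopping_rounds)
    alg.set_params(n_estimators=cvresult.shape[0])
    # Refit the classifier with the tuned number of trees
    alg.fit(dtrain_df[predictors], dtrain_df['Facies'], eval_metric='merror')
```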
from xgboost import XGBClassifier

xgb1 = XGBClassifier(
    learning_rate=0.1,
    n_estimators=1000,
    max_depth=5,
    min_child_weight=1,
    gamma=0,
    subsample=0.8,
    colsample_bytree=0.8,
    objective='multi:softmax',
    nthread=4,
    seed=123,
)
modelfit(xgb1, train, features)
xgb1
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
2aab9d0299b035a15c41b9a4f1debddb
Step 2: Tune max_depth and min_child_weight
from sklearn.model_selection import GridSearchCV

param_test1 = {
    'max_depth': range(3, 10, 2),
    'min_child_weight': range(1, 6, 2)
}
gs1 = GridSearchCV(xgb1, param_grid=param_test1, scoring='accuracy',
                   n_jobs=4, iid=False, cv=5)
gs1.fit(train[features], train[target])
gs1.grid_scores_, gs1.best_params_, gs1.best_score_

param_test2 = {
    'max_depth': [8, 9, 10],
    'min_child_weight': [1, 2]
}
gs2 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8, gamma=0,
                                 learning_rate=0.1, max_delta_step=0, max_depth=5,
                                 min_child_weight=1, n_estimators=290, nthread=4,
                                 objective='multi:softprob', reg_alpha=0, reg_lambda=1,
                                 scale_pos_weight=1, seed=123, subsample=0.8),
                   param_grid=param_test2, scoring='accuracy', n_jobs=4, iid=False, cv=5)
gs2.fit(train[features], train[target])
gs2.grid_scores_, gs2.best_params_, gs2.best_score_
gs2.best_estimator_
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
4e6af2b0025329ca8c190ff169a14b59
Step 3: Tune gamma
param_test3 = {
    'gamma': [i/10.0 for i in range(0, 5)]
}
gs3 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8, gamma=0,
                                 learning_rate=0.1, max_delta_step=0, max_depth=9,
                                 min_child_weight=1, n_estimators=370, nthread=4,
                                 objective='multi:softprob', reg_alpha=0, reg_lambda=1,
                                 scale_pos_weight=1, seed=123, subsample=0.8),
                   param_grid=param_test3, scoring='accuracy', n_jobs=4, iid=False, cv=5)
gs3.fit(train[features], train[target])
gs3.grid_scores_, gs3.best_params_, gs3.best_score_

xgb2 = XGBClassifier(
    learning_rate=0.1,
    n_estimators=1000,
    max_depth=9,
    min_child_weight=1,
    gamma=0.2,
    subsample=0.8,
    colsample_bytree=0.8,
    objective='multi:softmax',
    nthread=4,
    scale_pos_weight=1,
    seed=seed,
)
modelfit(xgb2, train, features)
xgb2
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
fcc43bd0b27d305241e05cc6b9a8b71a
Step 5: Tuning Regularization Parameters
param_test5 = {
    'reg_alpha': [1e-5, 1e-2, 0.1, 1, 100]
}
gs5 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8, gamma=0.2,
                                 learning_rate=0.1, max_delta_step=0, max_depth=9,
                                 min_child_weight=1, n_estimators=236, nthread=4,
                                 objective='multi:softprob', reg_alpha=0, reg_lambda=1,
                                 scale_pos_weight=1, seed=123, subsample=0.6),
                   param_grid=param_test5, scoring='accuracy', n_jobs=4, iid=False, cv=5)
gs5.fit(train[features], train[target])
gs5.grid_scores_, gs5.best_params_, gs5.best_score_

param_test6 = {
    'reg_alpha': [0, 0.001, 0.005, 0.01, 0.05]
}
gs6 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8, gamma=0.2,
                                 learning_rate=0.1, max_delta_step=0, max_depth=9,
                                 min_child_weight=1, n_estimators=236, nthread=4,
                                 objective='multi:softprob', reg_alpha=0, reg_lambda=1,
                                 scale_pos_weight=1, seed=123, subsample=0.6),
                   param_grid=param_test6, scoring='accuracy', n_jobs=4, iid=False, cv=5)
gs6.fit(train[features], train[target])
gs6.grid_scores_, gs6.best_params_, gs6.best_score_

xgb3 = XGBClassifier(
    learning_rate=0.1,
    n_estimators=1000,
    max_depth=9,
    min_child_weight=1,
    gamma=0.2,
    subsample=0.6,
    colsample_bytree=0.8,
    reg_alpha=0.05,
    objective='multi:softmax',
    nthread=4,
    scale_pos_weight=1,
    seed=seed,
)
modelfit(xgb3, train, features)
xgb3

model = XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=0.8,
                      gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
                      min_child_weight=1, missing=None, n_estimators=122, nthread=4,
                      objective='multi:softprob', reg_alpha=0.05, reg_lambda=1,
                      scale_pos_weight=1, seed=123, silent=True, subsample=0.6)
model.fit(X_train, Y_train)
xgb.plot_importance(model)
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
3ff9dde05b403a0fa84ebd402796fe6d
Cross Validation Next we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. Each iteration, a different well is chosen.
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)

# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')

# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1 = []
for i in range(len(well_names)):
    # Split data for training and testing
    X_train = data.drop(['Facies', 'Formation', 'Depth'], axis=1)
    Y_train = data['Facies'] - 1

    train_X = X_train[X_train['Well Name'] != well_names[i]]
    train_Y = Y_train[X_train['Well Name'] != well_names[i]]
    test_X = X_train[X_train['Well Name'] == well_names[i]]
    test_Y = Y_train[X_train['Well Name'] == well_names[i]]

    train_X = train_X.drop(['Well Name'], axis=1)
    test_X = test_X.drop(['Well Name'], axis=1)

    # Final recommended model based on the extensive parameters search
    model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=0.8,
                                gamma=0.2, learning_rate=0.01, max_delta_step=0, max_depth=9,
                                min_child_weight=1, missing=None, n_estimators=432, nthread=4,
                                objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,
                                scale_pos_weight=1, seed=123, silent=1, subsample=0.6)

    # Train the model based on training data
    model_final.fit(train_X, train_Y, eval_metric='merror')

    # Predict on the test set
    predictions = model_final.predict(test_X)

    # Print report
    print("\n------------------------------------------------------")
    print("Validation on the leaving out well " + well_names[i])
    conf = confusion_matrix(test_Y, predictions, labels=np.arange(9))
    print("\nModel Report")
    print("-Accuracy: %.6f" % (accuracy(conf)))
    print("-Adjacent Accuracy: %.6f" % (accuracy_adjacent(conf, adjacent_facies)))
    print("-F1 Score: %.6f" % (f1_score(test_Y, predictions, labels=np.arange(9), average='weighted')))
    f1.append(f1_score(test_Y, predictions, labels=np.arange(9), average='weighted'))
    facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
    print("\nConfusion Matrix Results")
    from classification_utilities import display_cm, display_adj_cm
    display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)

print("\n------------------------------------------------------")
print("Final Results")
print("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
ba6c86d8a9a801b14e294844d5e2d7cf
Model from the full data set
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)

# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')

# Split data for training and testing
X_train_all = data.drop(['Facies', 'Formation', 'Depth'], axis=1)
Y_train_all = data['Facies'] - 1
X_train_all = X_train_all.drop(['Well Name'], axis=1)

# Final recommended model based on the extensive parameters search
model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=0.8,
                            gamma=0.2, learning_rate=0.01, max_delta_step=0, max_depth=9,
                            min_child_weight=1, missing=None, n_estimators=432, nthread=4,
                            objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,
                            scale_pos_weight=1, seed=123, silent=1, subsample=0.6)

# Train the model based on training data
model_final.fit(X_train_all, Y_train_all, eval_metric='merror')

# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1 = []
for i in range(len(well_names)):
    X_train = data.drop(['Facies', 'Formation', 'Depth'], axis=1)
    Y_train = data['Facies'] - 1

    train_X = X_train[X_train['Well Name'] != well_names[i]]
    train_Y = Y_train[X_train['Well Name'] != well_names[i]]
    test_X = X_train[X_train['Well Name'] == well_names[i]]
    test_Y = Y_train[X_train['Well Name'] == well_names[i]]

    train_X = train_X.drop(['Well Name'], axis=1)
    test_X = test_X.drop(['Well Name'], axis=1)

    #print(test_Y)
    predictions = model_final.predict(test_X)

    # Print report
    print("\n------------------------------------------------------")
    print("Validation on the leaving out well " + well_names[i])
    conf = confusion_matrix(test_Y, predictions, labels=np.arange(9))
    print("\nModel Report")
    print("-Accuracy: %.6f" % (accuracy(conf)))
    print("-Adjacent Accuracy: %.6f" % (accuracy_adjacent(conf, adjacent_facies)))
    print("-F1 Score: %.6f" % (f1_score(test_Y, predictions, labels=np.arange(9), average='weighted')))
    f1.append(f1_score(test_Y, predictions, labels=np.arange(9), average='weighted'))
    facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
    print("\nConfusion Matrix Results")
    from classification_utilities import display_cm, display_adj_cm
    display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)

print("\n------------------------------------------------------")
print("Final Results")
print("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
1ed83943c4fe757d965c5cf7297ed92f
Use the final model to predict the given test data set
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)

# Predict facies of unclassified data
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1

# Store the prediction
test_data.to_csv('Prediction3.csv')
test_data
HouMath/Face_classification_HouMath_XGB_03.ipynb
esa-as/2016-ml-contest
apache-2.0
7f315b3dc0de9707cb92300e80e62f9b
Linear Regression This is one of the simplest models to use, since it uses linearly correlated data to 'predict' a value for a given input.
import pandas as pd
from sklearn import linear_model
import matplotlib.pyplot as plt
linear-regression/Linear-Regression.ipynb
morphean/deep-learning
apache-2.0
27a52ea0f4411cce0e6c0b7fd550ee6d
We will use pandas to handle reading the data; pandas is pretty much the de facto standard for data manipulation in Python.
df = pd.read_fwf('brain_body.txt')
x_values = df[['Brain']]
y_values = df[['Body']]
df.head()
linear-regression/Linear-Regression.ipynb
morphean/deep-learning
apache-2.0
273df57f13c04bb42ccee7ee64f4a6f4
Now let's train the model using the data.
import warnings
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")

body_regression = linear_model.LinearRegression()
body_regression.fit(x_values, y_values)

fig = plt.figure()
plt.scatter(x_values, y_values)
plt.plot(x_values, body_regression.predict(x_values))

# add some axes and labelling
fig.suptitle('Linear Regression', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
ax.set_title('Body vs Brain')
fig.subplots_adjust(top=0.85)
ax.set_xlabel('Body weight (kg)')
ax.set_ylabel('Brain weight (kg)')
plt.show()
linear-regression/Linear-Regression.ipynb
morphean/deep-learning
apache-2.0
9d09c5ed6f4a4f4fd493e4f7e1ce7afd
Set up rotation matrices representing a 3-1-3 $(\psi,\theta,\phi)$ Euler angle set.
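rotMat is defined earlier in the notebook and not shown here; a minimal sympy sketch of what such a helper might look like, assuming it returns the elementary direction-cosine matrix for a rotation about body axis 1, 2, or 3, is:

```python
from sympy import Matrix, sin, cos

def rotMat(axis, angle):
    """Elementary DCM for a rotation by `angle` about body axis 1, 2, or 3 (assumed convention)."""
    c, s = cos(angle), sin(angle)
    if axis == 1:
        return Matrix([[1, 0, 0], [0, c, s], [0, -s, c]])
    if axis == 2:
        return Matrix([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    return Matrix([[c, s, 0], [-s, c, 0], [0, 0, 1]])
```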
aCi = rotMat(3, psi)
cCa = rotMat(1, th)
bCc = rotMat(3, ph)
aCi, cCa, bCc

bCi = bCc*cCa*aCi; bCi  # 3-1-3 rotation

bCi_dot = difftotalmat(bCi, t, {th: thd, psi: psid, ph: phd}); bCi_dot
Notebooks/Torque Free 3-1-3 Body Dynamics.ipynb
dsavransky/MAE4060
mit
9b637face1d68c818f739915e2856ea8
$\tilde{\omega} = {}^\mathcal{B}C^{\mathcal{I}} \left({}^\mathcal{B}\dot{C}^{\mathcal{I}}\right)^T$
omega_tilde = bCi*bCi_dot.T; omega_tilde
Notebooks/Torque Free 3-1-3 Body Dynamics.ipynb
dsavransky/MAE4060
mit
3ad5a8cceb11230492d1496be06656de
$\left[{}^\mathcal{I}\boldsymbol{\omega}^{\mathcal{B}}\right]_\mathcal{B} = \left[ \tilde{\omega}_{32} \quad \tilde{\omega}_{13} \quad \tilde{\omega}_{21} \right]$
omega = simplify(Matrix([omega_tilde[2, 1], omega_tilde[0, 2], omega_tilde[1, 0]]))
omega

w1, w2, w3 = symbols('omega_1,omega_2,omega_3')
s0 = solve(omega - Matrix([w1, w2, w3]), [psid, thd, phd]); s0
Notebooks/Torque Free 3-1-3 Body Dynamics.ipynb
dsavransky/MAE4060
mit
070dd4dbac1512bd9d626a58a1ead458
Find EOM (second derivatives of Euler Angles)
I1, I2, I3 = symbols("I_1,I_2,I_3", real=True, positive=True)
iWb_B = omega
I_G_B = diag(I1, I2, I3)
I_G_B

diffmap = {th: thd, psi: psid, ph: phd, thd: thdd, psid: psidd, phd: phdd}
diffmap

t1 = I_G_B*difftotalmat(iWb_B, t, diffmap)
t2 = skew(iWb_B)*I_G_B*iWb_B
t1, t2

dh_G_B = t1 + t2
dh_G_B

t3 = expand(dh_G_B[0]*cos(ph)*I2 - dh_G_B[1]*sin(ph)*I1)
sol_thdd = simplify(solve(t3, thdd))
sol_thdd

t4 = expand(dh_G_B[0]*sin(ph)*I2 + dh_G_B[1]*cos(ph)*I1)
t4

sol_psidd = simplify(solve(t4, psidd))
sol_psidd

sol_phdd = solve(dh_G_B[2], phdd)
sol_phdd
Notebooks/Torque Free 3-1-3 Body Dynamics.ipynb
dsavransky/MAE4060
mit
c05b58b03a16fddc84471d1b22a4e08d
Find initial orientation such that $\mathbf h$ is down-pointing
h = sqrt(((I_G_B*Matrix([w1, w2, w3])).transpose()*(I_G_B*Matrix([w1, w2, w3])))[0]); h

eqs1 = simplify(bCi.transpose()*I_G_B*Matrix([w1, w2, w3]) - Matrix([0, 0, -h])); eqs1  # equal 0

simplify(solve(simplify(eqs1[0]*cos(psi) + eqs1[1]*sin(psi)), ph))  # phi solution

solve(simplify(expand(simplify(-eqs1[0]*sin(psi) + eqs1[1]*cos(psi)).subs(ph, atan(I1*w1/I2/w2)))), th)  # th solution

simplify(eqs1[2].subs(ph, atan(I1*w1/I2/w2)))
Notebooks/Torque Free 3-1-3 Body Dynamics.ipynb
dsavransky/MAE4060
mit
68c0d0cb0cf68e9023f97d027d5af417
Generate MATLAB Code
out = codegen(("eom1", sol_psidd[0]), 'Octave', argument_sequence=[th, thd, psi, psid, ph, phd, I1, I2, I3]); out
codegen(("eom1", sol_thdd[0]), 'Octave', argument_sequence=[th, thd, psi, psid, ph, phd, I1, I2, I3])
codegen(("eom1", sol_phdd[0]), 'Octave', argument_sequence=[th, thd, psi, psid, ph, phd, I1, I2, I3, psidd])
codegen(("eom1", [s0[psid], s0[thd], s0[phd]]), 'Octave', argument_sequence=[w1, w2, w3, th, thd, psi, psid, ph, phd, I1, I2, I3, psidd])
codegen(("eom1", bCi), 'Octave', argument_sequence=[th, thd, psi, psid, ph, phd, I1, I2, I3, psidd])
codegen(("eom1", omega), 'Octave', argument_sequence=[w1, w2, w3, th, thd, psi, psid, ph, phd, I1, I2, I3, psidd])
Notebooks/Torque Free 3-1-3 Body Dynamics.ipynb
dsavransky/MAE4060
mit
83f5f986e14ee5f86685306940d26f95
Problem 3 Write a function that asks for an integer and prints its square. Use a while loop with a try/except/else block to account for incorrect inputs.
def ask():
    while True:
        try:
            n = int(input('Input an integer: '))
        except ValueError:
            print('An error occurred! Please try again!')
            continue
        else:
            break
    print('Thank you, your number squared is:', n**2)

ask()
PythonBootCamp/Complete-Python-Bootcamp-master/Errors and Exceptions Homework - Solution.ipynb
yashdeeph709/Algorithms
apache-2.0
de44d7d6b8799a890c5b156d7be4d8d2
<h3>Calculation of the phosphor layer thickness of lanex regular given its areal density:</h3>
# Layer thicknesses (m), layer densities (kg/m^3), and areal density (kg/m^2)
Dcell = 55*10**-6
DPET = 175*10**-6
Dcell2 = 13*10**-6
rhocell = 1.44*10**3
rhoPET = 1.38*10**3
rhophos = 4.48*10**3
sigma = 70*10**-2

# Solve sigma = sum of (thickness x density) over the layers for the phosphor thickness
Dphos = (sigma - Dcell*rhocell - DPET*rhoPET - Dcell2*rhocell)/rhophos
print(Dphos*10**6)  # phosphor thickness in microns
Photon_Diffusion/Photon_Diffusion.ipynb
drakero/Electron_Spectrometer
mit
ca494689e90d0fa038f510a6bf74eeac
<h3>Define functions for photon density and photon current density</h3> <font size="4"><p>These functions were derived from Fick's laws of diffusion: $$ \frac{\partial n(z,t)}{\partial t} = D \frac{\partial^2 n(z,t)}{\partial z^2}, \: \: \phi (z,t) = -D \frac{\partial n(z,t)}{\partial z}$$ where n(z,t) is the photon density, D is the diffusion constant $D=\lambda_s c /6$ with $\lambda_s$ corresponding to the mean photon scattering length, and $\phi (z,t)$ is the photon current density. The fluorescence of the lanex phosphor was modeled as the instantaneous generation of light within an infinitely thin segment of a rectangular slab. The slab thickness $L$ is taken to be small compared to the other dimensions so that the problem can be treated one-dimensionally. The initial condition is taken to be $$ n(z,0) = \frac{N_0}{A} \delta (z) \: \: \mathrm{with} \: \: n(z,t)=0 \: \: \mathrm{for} \: t<0 $$ where $N_0$ is the number of photons generated and $A$ is the cross-sectional area of the slab. In other words, at $t=0$, $N_0$ photons are generated and modeled as a Dirac delta function at $z=0$. Fick's laws are then solved with absorbing boundary conditions at the edges of the lanex: $$ n(d,t)=0, \: n(-l,t)=0$$ where $z=d$ is the location of the CCD and $z=-l$ is the location of the top edge of the phosphor. This yields $$ n(z,t) = \frac{N_0}{2 A \sqrt{\pi D t}} \sum\limits_{m=-\infty}^\infty \left[ e^{-(z-2mL)^2/4Dt} - e^{-(z+2mL-2d)^2/4Dt} \right] $$ The photon current density then follows by differentiation with respect to $z$. </p></font>
N0 = 10**6                 # Number of photons emitted at t=0
lambdas = 2.85*10**-6      # Diffusion length in m
D = lambdas*c/6            # Diffusion constant
A = 100*10**-6*100*10**-6  # Area of segment in m^2
L = 81*10**-6              # Depth of lanex in m
l = 10.0*10**-6            # Distance from top lanex edge to segment in m
d = L - l                  # Distance from bottom lanex edge to segment

def n(z, t):
    '''Returns the photon density at position z and time t'''
    n0 = N0/(2*A*sqrt(pi*D*t))
    Sum = 0
    maxm = 10
    for m in range(-maxm, maxm+1):
        Sum += exp(-(z-2*m*(l+d))**2/(4*D*t)) - exp(-(z+2*m*(l+d)-2*d)**2/(4*D*t))
    return n0*Sum

def particlecurrent(t):
    '''Returns the particle current (photons per second per meter^2)
    at the boundary z=d at time t'''
    Sum = 0
    maxm = 10
    for m in range(-maxm, maxm+1):
        am = d - 2*m*L
        Sum += am*exp(-am**2/(4*D*t))
    return N0/(A*sqrt(4*pi*D*t**3))*Sum
Photon_Diffusion/Photon_Diffusion.ipynb
drakero/Electron_Spectrometer
mit
f15049c8dfcdd20ea771877174dc6e56
<h3>Plot photon density</h3> <font size="4"><p>The function $n(z,t)$ is calculated from 1 fs to 10 ps and plotted below. The boundary conditions are visibly satisfied and the function approaches a Dirac delta function for short times. It then spreads out, with the total number of photons decreasing as they're absorbed at the boundaries.</p></font>
narray = []
zarray = np.linspace(-l, d, 1000)
time = [1, 10, 10**2, 10**3, 10**4]
time = np.multiply(time, 10**-15)  # convert to s

for i in range(len(time)):
    narray.append([])
    for z in zarray:
        narray[i].append(n(z, time[i])*10**-6)

zarray = np.multiply(zarray, 10**6)

# Update the matplotlib configuration parameters
mpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})

# Adjust figure size
plt.subplots(figsize=(12, 6))

color = ['r', 'g', 'b', 'c', 'm', 'y', 'k']
legend = []
for i in range(5):
    legend.append(str(int(time[i]*10**15)) + ' fs')
    plt.plot(zarray, narray[i], color=color[i], linewidth=2, label=legend[i])

plt.xlim(np.min(zarray), np.max(zarray))
plt.ylim(1.0*10**6, np.max(narray[0]))
plt.xlabel('Position (um)')
plt.ylabel('Photon Density (m^-3)')
#plt.semilogy()
plt.legend(loc=1)
Photon_Diffusion/Photon_Diffusion.ipynb
drakero/Electron_Spectrometer
mit
bdf09b847bf1e4570a91ace8a9b28a64
<h3>Plot photon current density</h3> <font size="4"><p>Photon current density is then calculated at $z=d$ and plotted below as a function of time.</font></p>
particlecurrentarray = []
tarray = []
for t in linspace(10**-15, 50*10**-12, 1000):
    tarray.append(t*10**12)
    particlecurrentarray.append(particlecurrent(t))

# Update the matplotlib configuration parameters
mpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})

# Adjust figure size
plt.subplots(figsize=(12, 6))

plt.plot(tarray, particlecurrentarray, linewidth=2)
plt.xlim(np.min(tarray), np.max(tarray))
plt.ylim(0)
plt.xlabel('time (ps)')
plt.ylabel('Photon Current at $z=d$ $(s^{-1} \cdot m^{-2})$')
#plt.semilogy()
plt.legend(loc=4)
Photon_Diffusion/Photon_Diffusion.ipynb
drakero/Electron_Spectrometer
mit
6ae176d2cffba43857cab338ac6219b9
<h3>Integrate photon current density</h3> <font size="4"><p>The photon current density at $z=d$ is then integrated over large times and multiplied by the area $A$ to determine the total number of photons absorbed by the CCD. This is done numerically and analytically with the function defined below. A plot of the fraction of photons absorbed is plotted as a function of time to ensure that the integral converges.</font></p>
Nabs = A*quad(particlecurrent, 0, 400*10**-12)[0]  # Total number of photons absorbed at the boundary z=d
print(Nabs/N0)

def F(t, maxm, distance):
    Sum1 = 0
    Sum2 = 0
    for m in range(-maxm, 1):
        am = distance - 2*m*L
        Sum1 += 1 - erf(am/sqrt(4*D*t))
    for m in range(1, maxm+1):
        am = distance - 2*m*L
        Sum2 += 1 + erf(am/sqrt(4*D*t))
    return (Sum1 - Sum2)

FractionAbsArray = []
FractionAbsArrayAnalytic = []
tarray = []
for t in linspace(10**-12, 50*10**-12, 10000):
    tarray.append(t*10**12)
    #FractionAbsArray.append(A*quad(particlecurrent,0,t)[0]/N0)
    FractionAbsArrayAnalytic.append(F(t, 100, d))

# Adjust figure size
plt.subplots(figsize=(12, 6))

plt.plot(tarray, FractionAbsArrayAnalytic, linewidth=2)
plt.xlim(np.min(tarray), np.max(tarray))
plt.ylim(0, 1.0)
plt.xlim(0, 50)
plt.xlabel('time (ps)')
plt.ylabel('Fraction Absorbed at $z=d$')
#plt.semilogy()
plt.legend(loc=4)
Photon_Diffusion/Photon_Diffusion.ipynb
drakero/Electron_Spectrometer
mit
c46ebeb3cfd8c82753fa327a6c5759ca
<h3>Calculate number of photons absorbed as a function of d</h3> <font size="4"><p>A function is defined to calculate the total number of photons absorbed at $z=d$ after all time as a function of $d$. The results are then plotted and found to be linear. The rather unpleasant expression defined above evidently can be approximated as (or is exactly equal to) $$N_{abs}(d) = N_0 (1 - d/L) $$ </font></p>
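A short argument for why the dependence is exactly linear: after all time, the fraction absorbed at $z=d$ is the splitting probability of a photon released at $z=0$ diffusing to $z=d$ before reaching $z=-l$. That probability $u(z_0)$ is harmonic in the starting point, $u''(z_0)=0$ with $u(d)=1$ and $u(-l)=0$, so

$$ u(z_0) = \frac{z_0+l}{l+d}, \qquad u(0) = \frac{l}{L} = 1 - \frac{d}{L}, $$

which is the linear trend seen in the plot below.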
FractionAbsArrayAnalytic = []
distancearray = []

# Find the fraction of photons absorbed at z=d for various values of d
# ranging from 0 to L - 1 um (to avoid division by zero errors)
for distance in linspace(0, L-10**-6, 100):
    Integrationtime = 10**-12
    TargetError = 10**-3
    Error = 1.0
    FractionAbsAnalytic = 0
    while Error > TargetError:
        Error = abs(FractionAbsAnalytic - F(Integrationtime, 100, distance))/F(Integrationtime, 100, distance)
        FractionAbsAnalytic = F(Integrationtime, 100, distance)
        Integrationtime *= 2
    FractionAbsArrayAnalytic.append(FractionAbsAnalytic)
    distancearray.append(distance*10**6)

# Update the matplotlib configuration parameters
mpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})

# Adjust figure size
plt.subplots(figsize=(12, 6))

plt.plot(distancearray, FractionAbsArrayAnalytic, linewidth=2)
#plt.xlim(np.min(tarray),np.max(tarray))
#plt.ylim(0,1.0)
#plt.xlim(0,50)
plt.xlabel('Segment Distance (um)')
plt.ylabel('Fraction Absorbed by CCD')
#plt.semilogy()
Photon_Diffusion/Photon_Diffusion.ipynb
drakero/Electron_Spectrometer
mit
1a55d1a7fdf53bf5e583c386e7d3f0d0
Tuning a scikit-learn estimator with skopt Gilles Louppe, July 2016 Katie Malone, August 2016 Reformatted by Holger Nahrstaedt 2020 .. currentmodule:: skopt If you are looking for a :obj:sklearn.model_selection.GridSearchCV replacement, check out sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py instead. Problem statement Tuning the hyper-parameters of a machine learning model is often carried out using an exhaustive exploration of (a subset of) the space of all hyper-parameter configurations (e.g., using :obj:sklearn.model_selection.GridSearchCV), which often results in a very time-consuming operation. In this notebook, we illustrate how to couple :class:gp_minimize with sklearn's estimators to tune hyper-parameters using sequential model-based optimisation, hopefully resulting in equivalent or better solutions, but within fewer evaluations. Note: scikit-optimize provides a dedicated interface for estimator tuning via the :class:BayesSearchCV class, which has a similar interface to that of :obj:sklearn.model_selection.GridSearchCV. This class uses functions of skopt to perform hyperparameter search efficiently. For example usage of this class, see the sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py example notebook.
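As a preview of the idea, here is a minimal, self-contained sketch (the dataset, estimator, and search space are illustrative choices, not the ones used later in this example): gp_minimize is handed an objective that returns a cross-validated score for a candidate set of hyper-parameters and searches the space with far fewer evaluations than a grid.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Integer, Real
from skopt.utils import use_named_args

X, y = load_diabetes(return_X_y=True)
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)

space = [Integer(1, 5, name='max_depth'),
         Real(1e-3, 1e-1, prior='log-uniform', name='learning_rate')]

@use_named_args(space)
def objective(**params):
    reg.set_params(**params)
    # gp_minimize minimizes, so return the negated cross-validated score
    return -np.mean(cross_val_score(reg, X, y, cv=3,
                                    scoring='neg_mean_absolute_error'))

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best parameters:", result.x, "best objective:", result.fun)
```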
print(__doc__)
import numpy as np
dev/notebooks/auto_examples/hyperparameter-optimization.ipynb
scikit-optimize/scikit-optimize.github.io
bsd-3-clause
0e7ce609397034eef301e41a5fcd690b
Please change pkg_path and model_file to the correct paths.
import sys

pkg_path = '../../python-package/'
model_file = 's3://my-bucket/xgb-demo/model/0002.model'
sys.path.insert(0, pkg_path)
import xgboost as xgb
xgboost-master/demo/distributed-training/plot_model.ipynb
RPGOne/Skynet
bsd-3-clause
00f82ee2ae5dcac9f14b0bebae0fec31
Plot the Feature Importance
# load the trained booster and plot the feature importance
bst = xgb.Booster(model_file=model_file)
xgb.plot_importance(bst)
xgboost-master/demo/distributed-training/plot_model.ipynb
RPGOne/Skynet
bsd-3-clause
c5c08f9165b752ba82f3869728517f41
Plot the First Tree
tree_id = 0
xgb.to_graphviz(bst, tree_id)
xgboost-master/demo/distributed-training/plot_model.ipynb
RPGOne/Skynet
bsd-3-clause
9afcfde3455de935df02678ad56aaf15
Model We start with the popular assumption that light is absorbed uniformly inside the device. The assumption can be justified for thin devices and for white illumination.
def absorption(x): return 1.5e28
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
26ab9e26f03a8a04005ae4561ac1c99f
The base model consists of Poisson's equation coupled to the drift-diffusion equations for electrons and holes. Constant mobilities are assumed. Additionally, contacts can be defined as selective (blocking electrons or holes at the "wrong" electrode), or as non-selective, in which case local thermal equilibrium is assumed at each electrode for all charge carriers.
def base_model(L=50e-9, selective=False, **kwargs):
    model = models.BaseModel()
    mesh = fvm.mesh1d(L)
    models.std.bulk_heterojunction(model, mesh, absorption=absorption,
                                   selective_contacts=selective, **kwargs)
    model.setUp()
    return model
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
79ba60593b592f6825cf02c194457375
The basic model created by the function above contains no recombination term, and must be supplemented with one. Complete models are created by the functions below, with the following options for the recombination model:
- direct: $R=\beta \left(n p-n_i p_i \right)$
- Langevin: $R=\frac{q\left(\mu_n+\mu_p\right)}{\varepsilon} \left(n p-n_i p_i \right)$
- Shockley-Read-Hall recombination, in parallel with direct recombination

Absorption is assumed to create free electrons and holes directly.
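For orientation only (not part of the oedes model set-up), the two bimolecular expressions can be compared numerically; the permittivity, mobilities, and carrier densities below are illustrative assumptions, while beta matches make_params further down.

```python
# Illustrative comparison of the direct and Langevin recombination rates
q = 1.602e-19            # elementary charge, C
eps = 3.0 * 8.854e-12    # assumed absolute permittivity, F/m
mu_n = mu_p = 1e-8       # assumed mobilities, m^2/(V s)
beta = 7.23e-17          # direct recombination constant, m^3/s
n = p = 1e21             # assumed carrier densities, m^-3
ni = pi = 1e12           # assumed intrinsic densities, m^-3

R_direct = beta * (n * p - ni * pi)
R_langevin = q * (mu_n + mu_p) / eps * (n * p - ni * pi)
print(R_direct, R_langevin)
```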
def model_Langevin(**kwargs):
    return base_model(langevin_recombination=True, **kwargs)

def model_const(**kwargs):
    return base_model(const_recombination=True, **kwargs)

def model_SRH(**kwargs):
    return base_model(const_recombination=True, srh_recombination=True, **kwargs)
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
5815bc4e63b05314daf95e14e828f963
Below is a procedure generating default simulation parameters. They are parametrized by the bandgap, by the (symmetric) barrier at the electrodes, and by the SRH lifetime. Note that not all parameters are used at the same time; for example, the SRH parameters are not used by non-SRH models.
def make_params(barrier=0.3, bandgap=1.2, srh_tau=1e-8):
    params = models.std.bulk_heterojunction_params(barrier=barrier, bandgap=bandgap,
                                                   Nc=1e27, Nv=1e27)
    srh_trate = 1e-8
    params.update({
        'beta': 7.23e-17,
        'electron.srh.trate': srh_trate,
        'hole.srh.trate': srh_trate,
        'srh.N0': 1./(srh_tau*srh_trate),
        'srh.energy': -bandgap*0.5
    })
    return params
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
e8ab1a9ff930bad4fda982ea085a358f
Calculations The function below takes an I-V curve, which should include the points V=0 and J=0, and calculates the open-circuit voltage, the power at the maximum power point, and the fill factor.
def performance(v, J):
    iv = scipy.interpolate.InterpolatedUnivariateSpline(v, J)
    Isc = iv(0.)
    Voc, = iv.roots()
    v = np.linspace(0, Voc)
    Pmax = np.amax(-v*iv(v))
    Ff = -Pmax/(Voc*Isc)
    return dict(Ff=Ff, Voc=Voc, Isc=Isc, Pmax=Pmax)
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
7cd1320d6c83c5e33aca21d70032e878
In the reference, the mobilities of electrons and holes are varied but kept equal. The following shows how such a sweep can be implemented.
mu_values = np.logspace(-10, -2, 19)

def mu_sweep(params):
    for mu in mu_values:
        p = dict(params)
        p['electron.mu'] = mu
        p['hole.mu'] = mu
        yield mu, p

v_sweep = sweep('electrode0.voltage', np.linspace(0., 0.8, 40))
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
5847a2c723fa437375172238f2b5895b
Because different models are considered below, a common function is defined here to run the simulation and to plot the result. The function takes model as an argument.
def Voc_Ff(model, params):
    c = context(model)
    result = []

    def onemu(mu, cmu):
        for _ in cmu.sweep(cmu.params, v_sweep):
            pass
        v, J = cmu.teval(v_sweep.parameter_name, 'J')
        p = performance(v, J)
        return (mu, p['Voc'], p['Ff'])

    result = np.asarray([onemu(*_) for _ in c.sweep(params, mu_sweep)])
    testing.store(result)

    fig, (ax_voc, ax_ff) = plt.subplots(nrows=2, sharex=True)
    ax_voc.plot(result[:, 0], result[:, 1])
    ax_ff.plot(result[:, 0], result[:, 2])
    ax_ff.set_xlabel(r'$\mu \mathrm{[m^2 V^{-1} s^{-1}]}$')
    ax_ff.set_xscale('log')
    ax_ff.set_ylabel('FF')
    ax_voc.set_ylabel('$V_{oc}$');
    return result

params = make_params()
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
9205569158dec348966e8fad6275e508
Results Direct recombination, non-selective contacts As seen below, in the case of direct recombination, selective contacts are useful for improving fill factor and open-circuit voltage regardless of mobilities.
Voc_Ff(model_const(selective=False),params);
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
64903d267cacc1b6b197d2ef7c8c07a4
Direct recombination, selective contacts
Voc_Ff(model_const(selective=True),params);
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
3e4538c93304c2ac7c1778d9adf9d39c
Langevin recombination, non-selective contacts If Langevin recombination is assumed, the open-circuit voltage drops regardless of contact selectivity.
Voc_Ff(model_Langevin(selective=False),params);
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
dd27c6a57fd4c9238206b4951949abc7
Langevin recombination, selective contacts
Voc_Ff(model_Langevin(selective=True),params);
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
9e6149b21aac1dc9c454e93776d3ef7f
SRH recombination, non-selective contacts The case of SRH recombination resembles the case of direct recombination in its dependence on mobility. This is not surprising, as in both cases the mobility does not enter the recombination term $R$.
Voc_Ff(model_SRH(selective=False),params);
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
ac08e41401b7146e582fd8067acbed2f
SRH recombination, selective contacts
Voc_Ff(model_SRH(selective=True),params);
examples/photovoltaic/Voc-Ff.ipynb
mzszym/oedes
agpl-3.0
0fddc97526614b5f144d140957a7404e
This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic. As an exercise, fill in sample_stat below with any of these statistics:
- Standard deviation of the sample.
- Coefficient of variation, which is the sample standard deviation divided by the sample mean.
- Min or Max.
- Median (which is the 50th percentile).
- 10th or 90th percentile.
- Interquartile range (IQR), which is the difference between the 75th and 25th percentiles.

NumPy array methods you might find useful include std, min, max, and percentile. Depending on the results, you might want to adjust xlim.
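For instance, one possible statistic is the interquartile range; the body of this sketch (the name sample_stat_iqr is illustrative) could be dropped into sample_stat below:

```python
import numpy as np

def sample_stat_iqr(sample):
    """Interquartile range: the 75th percentile minus the 25th percentile."""
    q75, q25 = np.percentile(sample, [75, 25])
    return q75 - q25
```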
def sample_stat(sample):
    # TODO: replace the following line with another sample statistic
    return sample.mean()

slider = widgets.IntSliderWidget(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([0, 100]))
None
jup_notebooks/data-science-ipython-notebooks-master/scipy/sampling.ipynb
steinam/teacher
mit
08d3a7e78069c9faf3dd6e7b8e83a80c
Part Two So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI. But in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it! In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it. Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
class Resampler(object):
    """Represents a framework for computing sampling distributions."""

    def __init__(self, sample, xlim=None):
        """Stores the actual sample."""
        self.sample = sample
        self.n = len(sample)
        self.xlim = xlim

    def resample(self):
        """Generates a new sample by choosing from the original
        sample with replacement.
        """
        new_sample = numpy.random.choice(self.sample, self.n, replace=True)
        return new_sample

    def sample_stat(self, sample):
        """Computes a sample statistic using the original sample or a
        simulated sample.
        """
        return sample.mean()

    def compute_sample_statistics(self, iters=1000):
        """Simulates many experiments and collects the resulting sample
        statistics.
        """
        stats = [self.sample_stat(self.resample()) for i in range(iters)]
        return numpy.array(stats)

    def plot_sample_stats(self):
        """Runs simulated experiments and summarizes the results."""
        sample_stats = self.compute_sample_statistics()
        summarize_sampling_distribution(sample_stats)
        pyplot.hist(sample_stats, color=COLOR2)
        pyplot.xlabel('sample statistic')
        pyplot.xlim(self.xlim)
jup_notebooks/data-science-ipython-notebooks-master/scipy/sampling.ipynb
steinam/teacher
mit
3cfc81418bec4ce5ac51b95eddee283d
Exercise: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.
class StdResampler(Resampler):
    """Computes the sampling distribution of the standard deviation."""

    def sample_stat(self, sample):
        """Computes a sample statistic using the original sample or a
        simulated sample.
        """
        return sample.std()
jup_notebooks/data-science-ipython-notebooks-master/scipy/sampling.ipynb
steinam/teacher
mit
05817907ea3ec47d6dfcae282cc73a89
When your StdResampler is working, you should be able to interact with it:
slider = widgets.IntSliderWidget(min=10, max=1000, value=100)
interact(plot_resampled_stats, n=slider)
None
jup_notebooks/data-science-ipython-notebooks-master/scipy/sampling.ipynb
steinam/teacher
mit
fd2bd294e0923c3b48d578f8fdfe94f9
Part Three We can extend this framework to compute SE and CI for a difference in means. For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data):
female_weight = scipy.stats.lognorm(0.23, 0, 70.8)
female_weight.mean(), female_weight.std()
jup_notebooks/data-science-ipython-notebooks-master/scipy/sampling.ipynb
steinam/teacher
mit
476d61b0257db1d0210c8cd77d7300ee
This vectorization of operations simplifies the syntax of operating on arrays of data: we no longer have to worry about the size or shape of the array, but just about what operation we want done. For arrays of strings, NumPy does not provide such simple access, and thus you're stuck using a more verbose loop syntax:
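As a reminder of the numeric case being referred to, a ufunc applies the operation to every element at once (a tiny illustrative example):

```python
import numpy as np

x = np.array([2, 3, 5, 7, 11, 13])
x * 2   # array([ 4,  6, 10, 14, 22, 26])
```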
data = ['peter', 'Paul', 'MARY', 'gUIDO']
[s.capitalize() for s in data]
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
e8d06b3c36b36fbe9abad8ffa87d822a
This is perhaps sufficient to work with some data, but it will break if there are any missing values. For example:
data = ['peter', 'Paul', None, 'MARY', 'gUIDO']
[s.capitalize() for s in data]
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
e73b988e06017d548ff6ebab2f05b09c
Pandas includes features to address both this need for vectorized string operations and for correctly handling missing data via the str attribute of Pandas Series and Index objects containing strings. So, for example, suppose we create a Pandas Series with this data:
import pandas as pd
names = pd.Series(data)
names
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
4b6c4bda1a4d87406091a03972f0109f
We can now call a single method that will capitalize all the entries, while skipping over any missing values:
names.str.capitalize()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
75e3c85d5182281ff2c1e94e42b9eff5
Using tab completion on this str attribute will list all the vectorized string methods available to Pandas. Tables of Pandas String Methods If you have a good understanding of string manipulation in Python, most of Pandas string syntax is intuitive enough that it's probably sufficient to just list a table of available methods; we will start with that here, before diving deeper into a few of the subtleties. The examples in this section use the following series of names:
monte = pd.Series(['Graham Chapman', 'John Cleese', 'Terry Gilliam', 'Eric Idle', 'Terry Jones', 'Michael Palin'])
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
f99c7fa5f022ce14de466d932a2a3c98
Methods similar to Python string methods Nearly all Python's built-in string methods are mirrored by a Pandas vectorized string method. Here is a list of Pandas str methods that mirror Python string methods:

| | | | |
|-------------|------------------|------------------|------------------|
|len() | lower() | translate() | islower() |
|ljust() | upper() | startswith() | isupper() |
|rjust() | find() | endswith() | isnumeric() |
|center() | rfind() | isalnum() | isdecimal() |
|zfill() | index() | isalpha() | split() |
|strip() | rindex() | isdigit() | rsplit() |
|rstrip() | capitalize() | isspace() | partition() |
|lstrip() | swapcase() | istitle() | rpartition() |

Notice that these have various return values. Some, like lower(), return a series of strings:
monte.str.lower()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
9ee051bcfd60e011bd39fdc30ab96bb8
But some others return numbers:
monte.str.len()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
3d2aa54143a25585b8008cd4b0271a6a
Or Boolean values:
monte.str.startswith('T')
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
a1b626061f557868090f1969988dd0e6
Still others return lists or other compound values for each element:
monte.str.split()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
563072c0d5552778fd8c0d05ebc87d12
We'll see further manipulations of this kind of series-of-lists object as we continue our discussion. Methods using regular expressions In addition, there are several methods that accept regular expressions to examine the content of each string element, and follow some of the API conventions of Python's built-in re module:

| Method | Description |
|--------|-------------|
| match() | Call re.match() on each element, returning a boolean. |
| extract() | Call re.match() on each element, returning matched groups as strings.|
| findall() | Call re.findall() on each element |
| replace() | Replace occurrences of pattern with some other string|
| contains() | Call re.search() on each element, returning a boolean |
| count() | Count occurrences of pattern|
| split() | Equivalent to str.split(), but accepts regexps |
| rsplit() | Equivalent to str.rsplit(), but accepts regexps |

With these, you can do a wide range of interesting operations. For example, we can extract the first name from each by asking for a contiguous group of characters at the beginning of each element:
monte.str.extract('([A-Za-z]+)', expand=False)
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
fa23b78d421c23adf63ef84cca7063dc
Or we can do something more complicated, like finding all names that start and end with a consonant, making use of the start-of-string (^) and end-of-string ($) regular expression characters:
monte.str.findall(r'^[^AEIOU].*[^aeiou]$')
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
4aca0dec94c8acf33f809df7db13e321
The ability to concisely apply regular expressions across Series or Dataframe entries opens up many possibilities for analysis and cleaning of data. Miscellaneous methods Finally, there are some miscellaneous methods that enable other convenient operations:

| Method | Description |
|--------|-------------|
| get() | Index each element |
| slice() | Slice each element|
| slice_replace() | Replace slice in each element with passed value|
| cat() | Concatenate strings|
| repeat() | Repeat values |
| normalize() | Return Unicode form of string |
| pad() | Add whitespace to left, right, or both sides of strings|
| wrap() | Split long strings into lines with length less than a given width|
| join() | Join strings in each element of the Series with passed separator|
| get_dummies() | extract dummy variables as a dataframe |

Vectorized item access and slicing The get() and slice() operations, in particular, enable vectorized element access from each array. For example, we can get a slice of the first three characters of each array using str.slice(0, 3). Note that this behavior is also available through Python's normal indexing syntax–for example, df.str.slice(0, 3) is equivalent to df.str[0:3]:
monte.str[0:3]
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
9def2c81c34f5cbea5e4973b5772e9ef
Indexing via df.str.get(i) and df.str[i] is likewise similar. These get() and slice() methods also let you access elements of arrays returned by split(). For example, to extract the last name of each entry, we can combine split() and get():
monte.str.split().str.get(-1)
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
7fca350b7d497a4cf8c5b87bcf418314
Indicator variables Another method that requires a bit of extra explanation is the get_dummies() method. This is useful when your data has a column containing some sort of coded indicator. For example, we might have a dataset that contains information in the form of codes, such as A="born in America," B="born in the United Kingdom," C="likes cheese," D="likes spam":
full_monte = pd.DataFrame({'name': monte,
                           'info': ['B|C|D', 'B|D', 'A|C', 'B|D', 'B|C', 'B|C|D']})
full_monte
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
5b654542ebc2195377b036b7192ee3f2
The get_dummies() routine lets you quickly split-out these indicator variables into a DataFrame:
full_monte['info'].str.get_dummies('|')
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
2ee29955a6a1e948b0503a6d696ad8f8
With these operations as building blocks, you can construct an endless range of string processing procedures when cleaning your data. We won't dive further into these methods here, but I encourage you to read through "Working with Text Data" in the Pandas online documentation, or to refer to the resources listed in Further Resources. Example: Recipe Database These vectorized string operations become most useful in the process of cleaning up messy, real-world data. Here I'll walk through an example of that, using an open recipe database compiled from various sources on the Web. Our goal will be to parse the recipe data into ingredient lists, so we can quickly find a recipe based on some ingredients we have on hand. The scripts used to compile this can be found at https://github.com/fictivekin/openrecipes, and the link to the current version of the database is found there as well. As of Spring 2016, this database is about 30 MB, and can be downloaded and unzipped with these commands:
# !curl -O http://openrecipes.s3.amazonaws.com/recipeitems-latest.json.gz
# !gunzip recipeitems-latest.json.gz
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
1e9b9522661e5ec4466bc7a05a978f2b
The database is in JSON format, so we will try pd.read_json to read it:
try:
    recipes = pd.read_json('recipeitems-latest.json')
except ValueError as e:
    print("ValueError:", e)
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
35c27fc36836d22771c6f2d6caf87b5a
Oops! We get a ValueError mentioning that there is "trailing data." Searching for the text of this error on the Internet, it seems that it's due to using a file in which each line is itself a valid JSON, but the full file is not. Let's check if this interpretation is true:
with open('recipeitems-latest.json') as f:
    line = f.readline()
pd.read_json(line).shape
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
99fb3594d69d498b2f69193cfc136ebe
Yes, apparently each line is a valid JSON, so we'll need to string them together. One way we can do this is to actually construct a string representation containing all these JSON entries, and then load the whole thing with pd.read_json:
# read the entire file into a Python array
with open('recipeitems-latest.json', 'r') as f:
    # Extract each line
    data = (line.strip() for line in f)
    # Reformat so each line is the element of a list
    data_json = "[{0}]".format(','.join(data))
# read the result as a JSON
recipes = pd.read_json(data_json)
recipes.shape
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
f8f25751d0ec0fc7d71547c03b406f52
We see there are nearly 200,000 recipes, and 17 columns. Let's take a look at one row to see what we have:
recipes.iloc[0]
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
6d52dfc1883cd4ec07c9cb2de2bf8e95
There is a lot of information there, but much of it is in a very messy form, as is typical of data scraped from the Web. In particular, the ingredient list is in string format; we're going to have to carefully extract the information we're interested in. Let's start by taking a closer look at the ingredients:
recipes.ingredients.str.len().describe()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
06674ca95c4534f7dd4570b4ad0cb52b
The ingredient lists average 250 characters long, with a minimum of 0 and a maximum of nearly 10,000 characters! Just out of curiosity, let's see which recipe has the longest ingredient list:
recipes.name[np.argmax(recipes.ingredients.str.len())]
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
a811393d57b4b08ed645a8473288e8c2
That certainly looks like an involved recipe. We can do other aggregate explorations; for example, let's see how many of the recipes are for breakfast food:
recipes.description.str.contains('[Bb]reakfast').sum()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
22a81e1b93fc6b81561ca5d0b985f624
Or how many of the recipes list cinnamon as an ingredient:
recipes.ingredients.str.contains('[Cc]innamon').sum()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
3f69707da8fa1bbd5b63e6585466f379
We could even look to see whether any recipes misspell the ingredient as "cinamon":
recipes.ingredients.str.contains('[Cc]inamon').sum()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
2b19e4d8140d124fef4e57b1bf993bf4
This is the type of essential data exploration that is possible with Pandas string tools. It is data munging like this that Python really excels at. A simple recipe recommender Let's go a bit further, and start working on a simple recipe recommendation system: given a list of ingredients, find a recipe that uses all those ingredients. While conceptually straightforward, the task is complicated by the heterogeneity of the data: there is no easy operation, for example, to extract a clean list of ingredients from each row. So we will cheat a bit: we'll start with a list of common ingredients, and simply search to see whether they are in each recipe's ingredient list. For simplicity, let's just stick with herbs and spices for the time being:
spice_list = ['salt', 'pepper', 'oregano', 'sage', 'parsley', 'rosemary', 'tarragon', 'thyme', 'paprika', 'cumin']
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
95a090cab7c4300d81a902e654ee40d5
We can then build a Boolean DataFrame of True and False values, indicating whether each of these ingredients appears in a given recipe's ingredient list:
import re spice_df = pd.DataFrame(dict((spice, recipes.ingredients.str.contains(spice, re.IGNORECASE)) for spice in spice_list)) spice_df.head()
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
b18100977c45d1bee1a1e60106fb4f5f
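One caveat worth flagging: in current Pandas versions the second positional argument of str.contains is case, not flags, so passing re.IGNORECASE positionally (as above) does not actually make the search case-insensitive. A stricter, explicitly case-insensitive variant with whole-word matching (so that, e.g., "sage" does not match "sausage") might look like the sketch below; note that the counts it produces can differ slightly from the ones used in the rest of this example.

```python
import re

# Whole-word, case-insensitive matching as an alternative to plain
# substring search. spice_list and recipes come from the cells above.
spice_df_strict = pd.DataFrame({
    spice: recipes.ingredients.str.contains(r'\b{0}\b'.format(spice),
                                            flags=re.IGNORECASE)
    for spice in spice_list})
spice_df_strict.head()
```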
Now, as an example, let's say we'd like to find a recipe that uses parsley, paprika, and tarragon. We can compute this very quickly using the query() method of DataFrames, discussed in High-Performance Pandas: eval() and query():
selection = spice_df.query('parsley & paprika & tarragon') len(selection)
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
20d99505ab058962d102a0fce3b7ee4f
We find only 10 recipes with this combination; let's use the index returned by this selection to discover the names of those recipes:
recipes.name[selection.index]
jup_notebooks/data-science-ipython-notebooks-master/pandas/03.10-Working-With-Strings.ipynb
steinam/teacher
mit
a755d458069e715e9fa8c863ad8f92ea
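To round this off, the query-based lookup can be wrapped in a small helper so that any combination of the spice columns can be tried; this is only a sketch built on the spice_df and recipes objects defined above (the recommend_recipes name is made up here).

```python
def recommend_recipes(spices, spice_df=spice_df, recipes=recipes):
    """Return the names of recipes whose ingredient list mentions
    every spice in `spices` (each must be a column of spice_df)."""
    selection = spice_df.query(' & '.join(spices))
    return recipes.name[selection.index]

# Same result as the manual query above
recommend_recipes(['parsley', 'paprika', 'tarragon'])
```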
The X matrix corresponds to the inputs and the Y matrix to the outputs we want to predict.
data = pd.read_csv('datasets/data_regression.csv') X = data['X'] Y = data['Y'] # Normalization X = np.asmatrix(normalize_pd(X)).T Y = np.asmatrix(normalize_pd(Y)).T
src/Regression.ipynb
pvchaumier/ml_by_example
isc
3e439ad3ccf726b3103d46a25d382b38
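The helper normalize_pd is defined earlier in the notebook and is not shown in this excerpt. As a rough idea of what it might do (this is an assumption, not the notebook's actual code), a z-score normalization of a pandas Series could look like:

```python
def normalize_pd(series):
    """Hypothetical sketch: z-score normalization of a pandas Series.
    The real helper in the notebook may differ (e.g. min-max scaling)."""
    return (series - series.mean()) / series.std()
```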
Linear regression Here we have $\Phi(X) = X$. The function we look for has the form $f(x) = ax + b$.
def linear_regression(X, Y): # Building the Phi matrix Ones = np.ones((X.shape[0], 1)) phi_X = np.hstack((Ones, X)) # Calculating the weights w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y) # Predicting the output values Y_linear_reg = np.dot(phi_X, w) return Y_linear_reg Y_linear_reg = linear_regression(X, Y) plt.plot(X, Y, '.') plt.plot(X, Y_linear_reg, 'r') plt.title('Linear Regression') plt.legend(['Data', 'Linear Regression'])
src/Regression.ipynb
pvchaumier/ml_by_example
isc
b08bfcd31426aa8aa62507b8cf6165b5
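A side note on the implementation: forming the inverse of $\Phi^T\Phi$ explicitly works here, but a least-squares solver is usually more numerically stable and gives the same weights. A sketch, reusing the X and Y defined above (the function name is made up):

```python
import numpy as np

def linear_regression_lstsq(X, Y):
    # Same design matrix as before: a column of ones next to X
    phi_X = np.hstack((np.ones((X.shape[0], 1)), X))
    # Solve the least-squares problem directly instead of forming the inverse
    w, residuals, rank, sv = np.linalg.lstsq(phi_X, Y, rcond=None)
    return np.dot(phi_X, w)

Y_linear_lstsq = linear_regression_lstsq(X, Y)
```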
The obtained solution does not represent the data very well. This is because the model's representational power is too low compared to the complexity of the target function. This is usually referred to as underfitting. Polynomial Regression Now we approximate the target function by a polynomial $f(x) = w_0 + w_1 x + w_2 x^2 + ... + w_d x^d$ with $d$ the degree of the polynomial. Below we plot the results obtained with different degrees.
def polynomial_regression(X, Y, degree): # Building the Phi matrix Ones = np.ones((X.shape[0], 1)) # Add a column of ones phi_X = np.hstack((Ones, X)) # Add a column of X raised to each power from 2 to degree for i in range(2, degree + 1): # calculate the vector X to the power i and add it to the Phi matrix X_power = np.array(X) ** i phi_X = np.hstack((phi_X, np.asmatrix(X_power))) # Calculating the weights w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y) # Predicting the output values Y_poly_reg = np.dot(phi_X, w) return Y_poly_reg # Degrees to plot: change these values to see how the degree of the polynomial affects the predicted function degrees = [1, 2, 20] legend = ['Data'] plt.plot(X, Y, '.') for degree in degrees: Y_poly_reg = polynomial_regression(X, Y, degree) plt.plot(X, Y_poly_reg) legend.append('degree ' + str(degree)) plt.legend(legend) plt.title('Polynomial regression results depending on the degree of the polynomial used')
src/Regression.ipynb
pvchaumier/ml_by_example
isc
ee8bcedacdd61788d3609d79dd09d702
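To make the underfitting/overfitting discussion a bit more quantitative, one can look at the training error as the degree grows: it keeps decreasing with the degree, which is exactly why training error alone cannot detect overfitting (a held-out set would be needed for that). A sketch reusing polynomial_regression from the cell above:

```python
def mse(Y_true, Y_pred):
    # Mean squared error between two column vectors/matrices
    return float(np.mean(np.square(np.asarray(Y_true) - np.asarray(Y_pred))))

for degree in [1, 2, 5, 10, 20]:
    Y_poly = polynomial_regression(X, Y, degree)
    print('degree {0:2d}: training MSE = {1:.4f}'.format(degree, mse(Y, Y_poly)))
```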
The linear case is still underfitting, but now we see that the polynomial of degree 20 is too sensitive to the data, especially around $[-2.5, -1.5]$. This phenomenon is called overfitting: the model starts fitting the noise in the data as well and loses its capacity to generalize. Regression with Gaussian basis functions Lastly, we look at functions of the form $f(x) = \sum_i w_i \phi_i(x)$ with $\phi_i(x) = \exp\left(-\frac{1}{2}\left(\frac{x - b_i}{\sigma}\right)^2\right)$. $b_i$ is called the base (center) and $\sigma$ is its width. Usually, the $b_i$ are drawn at random from the dataset; that is what the implementation below does, with b the number of bases. The plot shows both the base functions used to compute the regressed function and the resulting regression.
def gaussian_regression(X, Y, b, sigma, return_base=True): """b is the number of bases to use, sigma is the width (standard deviation) of the base functions.""" # Building the Phi matrix Ones = np.ones((X.shape[0], 1)) # Add a column of ones phi_X = np.hstack((Ones, X)) # Choose randomly without replacement b values from X # to be the centers of the base functions X_array = np.array(X).reshape(1, -1)[0] bases = np.random.choice(X_array, b, replace=False) bases_function = [] for i in range(b): base_function = np.exp(-0.5 * (((X_array - bases[i] * np.ones(len(X_array))) / sigma) ** 2)) bases_function.append(base_function) phi_X = np.hstack((phi_X, np.asmatrix(base_function).T)) w = np.dot(np.dot(inv(np.dot(phi_X.T, phi_X)), phi_X.T), Y) if return_base: return np.dot(phi_X, w), bases_function else: return np.dot(phi_X, w) # By changing this value, you will change the width of the base functions sigma = 0.2 # b is the number of base functions used b = 5 Y_gauss_reg, bases_function = gaussian_regression(X, Y, b, sigma) # Plotting the base functions and the dataset plt.plot(X, Y, '.') plt.plot(X, Y_gauss_reg) legend = ['Data', 'Regression result'] for i, base_function in enumerate(bases_function): plt.plot(X, base_function) legend.append('Base function n°' + str(i)) plt.legend(legend) plt.title('Regression with Gaussian base functions')
src/Regression.ipynb
pvchaumier/ml_by_example
isc
bdfe4d7b2f61e7f6b827bf9788e8aa83
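The width $\sigma$ plays the same role as the polynomial degree above: a very small width gives spiky, overfitted curves, while a very large width washes out the structure of the data. A quick sketch reusing gaussian_regression and b from the previous cell (results vary between runs because the base centers are chosen at random):

```python
plt.plot(X, Y, '.')
legend = ['Data']
for sigma in [0.05, 0.2, 1.0]:
    Y_reg = gaussian_regression(X, Y, b, sigma, return_base=False)
    plt.plot(X, Y_reg)
    legend.append('sigma = ' + str(sigma))
plt.legend(legend)
plt.title('Effect of the width of the Gaussian bases')
```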
<header class="w3-container w3-teal"> <img src="images/utfsm.png" alt="" height="100px" align="left"/> <img src="images/mat.png" alt="" height="100px" align="right"/> </header> <br/><br/><br/><br/><br/> MAT281 Aplicaciones de la Matemática en la Ingeniería Sebastián Flores https://www.github.com/usantamaria/mat281 Clase anterior Regresión Lineal * ¿Cómo se llamaba el algoritmo que vimos? * ¿Cuál era la aproximación ingenieril? ¿Machine Learning? ¿Estadística? * ¿Cuándo funcionaba y cuándo fallaba? ¿Qué veremos hoy? Clasificación y Regresión logística. ¿Porqué veremos ese contenido? Porque clasificación es un problema muy común puesto que permite la toma de decisiones. Regresión logística es un algoritmo que surge naturalmente como extensión de regresión lineal pero en el contexto de clasificación. Problemas de Clasificación ¿Conocen algún problema de clasificación? Wine Dataset <img src="images/wine.jpg" alt="" width="600px" align="middle"/> Wine Dataset Los datos corresponden a 3 cultivos diferentes de vinos de la misma región de Italia, y que han sido identificados con las etiquetas 1, 2 y 3. Para cada tipo de vino se realizado 13 análisis químicos: Alcohol Malic acid Ash Alcalinity of ash Magnesium Total phenols Flavanoids Nonflavanoid phenols Proanthocyanins Color intensity Hue OD280/OD315 of diluted wines Proline La base de datos contiene 178 muestras distintas en total. Wine dataset Si no conocemos de antemano las etiquetas, es decir, los cultivos 1,2 o 3 a los que pertenece cada muestra, el problema es de clustering: $$\textrm{Tenemos } X \in R^{n,m} \textrm{ buscamos las etiquetas } Y \in N^m$$ ¿Cuántos grupos existen? ¿A que grupo pertenece cada dato? Si conocemos los valores y las etiquetas, y se desea obtener la etiqueta de una muestra sin etiquetar, el problema es de clasificación: $$\textrm{Tenemos } X \in R^{n \times m} \textrm{ y } Y \in N^m \textrm{ y buscamos las etiquetas de } x \in R^n$$ ¿A qué grupo pertenece este nuevo dato? Regresión Lineal Se buscaba entrenar una función lineal $$h_{\theta}(x) = \theta_0 + \theta_1 x_1 + ... + \theta_n x_n$$ de forma que se minimice $$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$ Regresión Logística Buscaremos entrenar una función logística $$h_{\theta}(x) = \frac{1}{1 + e^{-(\theta_0 + \theta_1 x_1 + ... + \theta_n x_n)}}$$ de forma que se minimice $$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$ Ejemplo 2D ¿Conocen el accidente del Space Shuttle Challeger? 28 Junio 1986. A pesar de existir evidencia de funcionamiento defectuoso, se da luz verde al lanzamiento. <img src="images/Challenger1.gif" alt="" width="600px" align="middle"/> Ejemplo 2D A los 73 segundos de vuelo, el transbordador espacial explota, matando a los 7 pasajeros. <img src="images/Challenger2.jpg" alt="" width="600px" align="middle"/> Ejemplo 2D Como parte del debriefing del accidente, se obtuvieron los siguientes datos
%%bash cat data/Challenger.txt
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
b407b815659fb5dca2e573d1ff990f9d
2D Example Let's plot the data
from matplotlib import pyplot as plt import numpy as np # Plot of data data = np.loadtxt("data/Challenger.txt", skiprows=1) x = data[:,0] y = data[:,1] plt.figure(figsize=(16,8)) plt.plot(x, y, 'bo', ms=8) plt.title("Exito o Falla en lanzamiento de Challenger") plt.xlabel(r"T [${}^o F$]") plt.ylabel(r"Bad Rings") plt.ylim([-0.1,3.1]) plt.show()
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
d0049544fbfe1f55a810a1f8d0447bf8
2D Example We would like to know under which conditions the accident occurs. We do not care about the number of failures, only whether there is a failure or not.
# Plot of data data = np.loadtxt("data/Challenger.txt", skiprows=1) x = (data[:,0]-32.)/1.8 y = np.array(data[:,1]==0,int) plt.figure(figsize=(16,8)) plt.plot(x[y==0], y[y==0], 'bo', label="Falla", ms=8) plt.plot(x[y>0], y[y>0], 'rs', label="Exito", ms=8) plt.ylim([-0.1, 1.1]) plt.legend(loc=0, numpoints=1) plt.title("Exito o Falla en lanzamiento de Challenger") plt.xlabel(r"T [${}^o C$]") plt.ylabel(r"$y$") plt.show()
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
bb09c091df163fa407447066b415ebca
Model As before, we define $$Y = \begin{bmatrix}y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)}\end{bmatrix}$$ and $$X = \begin{bmatrix} 1 & x^{(1)}_1 & \dots & x^{(1)}_n \\ 1 & x^{(2)}_1 & \dots & x^{(2)}_n \\ \vdots & \vdots & & \vdots \\ 1 & x^{(m)}_1 & \dots & x^{(m)}_n \end{bmatrix}$$ Model The evaluation of all the data can then be written in matrix form as $$X \theta = \begin{bmatrix} 1 & x_1^{(1)} & \dots & x_n^{(1)} \\ \vdots & \vdots & & \vdots \\ 1 & x_1^{(m)} & \dots & x_n^{(m)} \end{bmatrix} \begin{bmatrix}\theta_0 \\ \theta_1 \\ \vdots \\ \theta_n\end{bmatrix} = \begin{bmatrix} 1 \theta_0 + x^{(1)}_1 \theta_1 + \dots + x^{(1)}_n \theta_n \\ \vdots \\ 1 \theta_0 + x^{(m)}_1 \theta_1 + \dots + x^{(m)}_n \theta_n \end{bmatrix}$$ Model Our problem is to find a "good" set of values $\theta$ such that $$g(X\theta) \approx Y$$ where $g(z)$ is the sigmoid function: $$g(z) = \frac{1}{1+e^{-z}}$$ Graphical interpretation
from matplotlib import pyplot as plt import numpy as np def sigmoid(z): return (1+np.exp(-z))**(-1.) z = np.linspace(-5,5,100) g = sigmoid(z) fig = plt.figure(figsize=(16,8)) plt.plot(z,sigmoid(z), lw=2.0) plt.plot(z,sigmoid(z*2), lw=2.0) plt.plot(z,sigmoid(z-2), lw=2.0) plt.grid("on") plt.show()
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
df3b6efe744bf71143cec9b7422da397
Model Sigmoid Function The sigmoid function $g(z) = (1+e^{-z})^{-1}$ has the following property: $$g'(z) = g(z)(1-g(z))$$ Model Sigmoid Function $g(z) = (1+e^{-z})^{-1}$ and $g'(z) = g(z)(1-g(z))$. Proof: $$\begin{aligned} g'(z) &= \frac{-1}{(1+e^{-z})^2} (-e^{-z}) \\ &= \frac{e^{-z}}{(1+e^{-z})^2} \\ &= \frac{1}{1+e^{-z}} \frac{e^{-z}}{1+e^{-z}} \\ &= \frac{1}{1+e^{-z}} \left(1 - \frac{1}{1+e^{-z}} \right) \\ &= g(z)(1-g(z)) \end{aligned}$$ Graphical interpretation
from matplotlib import pyplot as plt import numpy as np def sigmoid(z): return (1+np.exp(-z))**(-1.) z = np.linspace(-5,5,100) g = sigmoid(z) dgdz = g*(1-g) fig = plt.figure(figsize=(16,8)) plt.plot(z, g, "k", label="g(z)", lw=2) plt.plot(z, dgdz, "r", label="dg(z)/dz", lw=2) plt.legend() plt.grid("on") plt.show()
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
b8d5415a75cf91828290868a1a87e0ce
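The identity $g'(z) = g(z)(1-g(z))$ can also be checked numerically with a central finite difference; a small sketch reusing the sigmoid defined above:

```python
eps = 1e-6
for z0 in [-2.0, 0.0, 1.5]:
    numerical = (sigmoid(z0 + eps) - sigmoid(z0 - eps)) / (2 * eps)
    analytic = sigmoid(z0) * (1 - sigmoid(z0))
    print(z0, numerical, analytic)
```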
Engineering Approach How can we reuse what we know from linear regression? If we seek to minimize $$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$ we can compute the gradient and then use the steepest-descent method to obtain $\theta$. Engineering Approach Computing the gradient is direct: $$\begin{aligned} \frac{\partial J(\theta)}{\partial \theta_k} &= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) \frac{\partial}{\partial \theta_k} h_{\theta}(x^{(i)}) \\ &= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) \frac{\partial}{\partial \theta_k} g(\theta^T x^{(i)}) \\ &= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) h_{\theta}(x^{(i)}) \left(1-h_{\theta}(x^{(i)})\right) \frac{\partial}{\partial \theta_k} (\theta^T x^{(i)}) \\ &= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) h_{\theta}(x^{(i)}) \left(1-h_{\theta}(x^{(i)})\right) x^{(i)}_k \end{aligned}$$ Engineering Approach Is there a way to write all of this in matrix form? Recall that when the components were $$\sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) x^{(i)}_k = \sum_{i=1}^{m} x^{(i)}_k \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)$$ we could write them in vector form as $$X^T (X\theta - Y)$$ Engineering Approach Hence, for $$\begin{aligned} \frac{\partial J(\theta)}{\partial \theta_k} &= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) h_{\theta}(x^{(i)}) \left(1-h_{\theta}(x^{(i)})\right) x^{(i)}_k \\ &= \sum_{i=1}^{m} x^{(i)}_k \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) h_{\theta}(x^{(i)}) \left(1-h_{\theta}(x^{(i)})\right) \end{aligned}$$ we can write it in vector form as $$\nabla_{\theta} J(\theta) = X^T \Big[ (g(X\theta) - Y) \odot g(X\theta) \odot (1-g(X\theta)) \Big]$$ where $\odot$ is the element-wise product. Engineering Approach Crucial observation: $$\nabla_{\theta} J(\theta) = X^T \Big[ (g(X\theta) - Y) \odot g(X\theta) \odot (1-g(X\theta)) \Big]$$ does not lead to a linear system for $\theta$, so we can only solve iteratively. Engineering Approach We therefore have the algorithm $$\begin{aligned} \theta^{(n+1)} &= \theta^{(n)} - \alpha \nabla_{\theta} J(\theta^{(n)}) \\ \nabla_{\theta} J(\theta) &= X^T \Big[ (g(X\theta) - Y) \odot g(X\theta) \odot (1-g(X\theta)) \Big] \end{aligned}$$ Engineering Approach The code would be the following:
import numpy as np def sigmoid(z): return 1./(1+np.exp(-z)) def norm2_error_logistic_regression(X, Y, theta0, tol=1E-6): converged = False alpha = 0.01/len(Y) theta = theta0 while not converged: H = sigmoid(np.dot(X, theta)) gradient = np.dot(X.T, (H-Y)*H*(1-H)) new_theta = theta - alpha * gradient converged = np.linalg.norm(theta-new_theta) < tol * np.linalg.norm(theta) theta = new_theta return theta
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
1f086937025f9ab264afd9dbde6836ca
Probabilistic Interpretation Is the derivation above probabilistically correct? Assume that class membership is given by $$\begin{aligned} \mathbb{P}[y = 1 \mid x ; \theta ] &= h_\theta(x) \\ \mathbb{P}[y = 0 \mid x ; \theta ] &= 1 - h_\theta(x) \end{aligned}$$ That is, a Bernoulli distribution with $p = h_\theta(x)$. The expressions above can be written more compactly as $$\mathbb{P}[y \mid x ; \theta ] = (h_\theta(x))^y (1 - h_\theta(x))^{(1-y)}$$ Probabilistic Interpretation The likelihood function $L(\theta)$ tells us how probable it is to observe the given data for a choice of the parameter $\theta$. $$\begin{aligned} L(\theta) &= \prod_{i=1}^{m} \mathbb{P}[y^{(i)} \mid x^{(i)}; \theta ] \\ &= \prod_{i=1}^{m} \Big(h_{\theta}(x^{(i)})\Big)^{y^{(i)}} \Big(1 - h_\theta(x^{(i)})\Big)^{(1-y^{(i)})} \end{aligned}$$ We would like to find the parameter $\theta$ that most probably generated the observed data, that is, the parameter $\theta$ that maximizes the likelihood function. Probabilistic Interpretation We compute the log-likelihood: $$\begin{aligned} l(\theta) &= \log L(\theta) \\ &= \log \prod_{i=1}^{m} (h_\theta(x^{(i)}))^{y^{(i)}} (1 - h_\theta(x^{(i)}))^{(1-y^{(i)})} \\ &= \sum_{i=1}^{m} y^{(i)}\log (h_\theta(x^{(i)})) + (1-y^{(i)}) \log (1 - h_\theta(x^{(i)})) \end{aligned}$$ There is no closed-form formula for the maximum of the log-likelihood, but we can again use a gradient-based method. Probabilistic Interpretation Recall that if $$g(z) = \frac{1}{1+e^{-z}}$$ then $$g'(z) = g(z)(1-g(z))$$ and therefore $$\frac{\partial}{\partial \theta_k} h_\theta(x) = h_\theta(x) (1-h_\theta(x)) x_k$$ Probabilistic Interpretation $$\begin{aligned} \frac{\partial}{\partial \theta_k} l(\theta) &= \frac{\partial}{\partial \theta_k} \sum_{i=1}^{m} y^{(i)}\log (h_\theta(x^{(i)})) + (1-y^{(i)}) \log (1 - h_\theta(x^{(i)})) \\ &= \sum_{i=1}^{m} y^{(i)}\frac{\partial}{\partial \theta_k} \log (h_\theta(x^{(i)})) + (1-y^{(i)}) \frac{\partial}{\partial \theta_k} \log (1 - h_\theta(x^{(i)})) \\ &= \sum_{i=1}^{m} y^{(i)}\frac{1}{h_\theta(x^{(i)})}\frac{\partial h_\theta(x^{(i)})}{\partial \theta_k} + (1-y^{(i)}) \frac{1}{1 - h_\theta(x^{(i)})} \frac{\partial (1-h_\theta(x^{(i)}))}{\partial \theta_k} \\ &= \sum_{i=1}^{m} y^{(i)}(1-h_\theta(x^{(i)})) x^{(i)}_k - (1-y^{(i)}) h_\theta(x^{(i)}) x^{(i)}_k \\ &= \sum_{i=1}^{m} y^{(i)}x^{(i)}_k - y^{(i)}h_\theta(x^{(i)}) x^{(i)}_k - h_\theta(x^{(i)}) x^{(i)}_k + y^{(i)}h_\theta(x^{(i)}) x^{(i)}_k \\ &= \sum_{i=1}^{m} (y^{(i)}-h_\theta(x^{(i)})) x^{(i)}_k \end{aligned}$$ Probabilistic Interpretation That is, to maximize the log-likelihood we obtain the same update as for linear regression: $$\begin{aligned} \theta^{(n+1)} &= \theta^{(n)} - \alpha \nabla_{\theta} l(\theta^{(n)}) \\ \frac{\partial l(\theta)}{\partial \theta_k} &= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) x^{(i)}_k \end{aligned}$$ Although, in the case of logistic regression, $h_\theta(x)=1/(1+e^{-x^T\theta})$. NOTE: The choice of $\alpha$ is crucial for convergence. In particular, $0.01/m$ works well.
Recall of the Engineering Approach We therefore have the algorithm $$\begin{aligned} \theta^{(n+1)} &= \theta^{(n)} - \alpha \nabla_{\theta} J(\theta^{(n)}) \\ \nabla_{\theta} J(\theta) &= X^T \Big[ (g(X\theta) - Y) \odot g(X\theta) \odot (1-g(X\theta)) \Big] \end{aligned}$$ Probabilistic Interpretation That is, to maximize the log-likelihood we obtain the same update as for linear regression: $$\begin{aligned} \theta^{(n+1)} &= \theta^{(n)} - \alpha \nabla_{\theta} l(\theta^{(n)}) \\ \frac{\partial l(\theta)}{\partial \theta_k} &= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) x^{(i)}_k \end{aligned}$$ Although, in the case of logistic regression, $h_\theta(x)=1/(1+e^{-x^T\theta})$. NOTE: The choice of $\alpha$ is crucial for convergence. In particular, $0.01/m$ works well.
import numpy as np def likelihood_logistic_regression(X, Y, theta0, tol=1E-6): converged = False alpha = 0.01/len(Y) theta = theta0 while not converged: H = sigmoid(np.dot(X, theta)) gradient = np.dot(X.T, H-Y) new_theta = theta - alpha * gradient converged = np.linalg.norm(theta-new_theta) < tol * np.linalg.norm(theta) theta = new_theta return theta def sigmoid(z): return 1./(1+np.exp(-z))
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
cd1631e7c7a35740ca8a29a9f19264f4
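As a sanity check of the derivation, the analytic gradient $X^T(h_\theta - y)$ of the negative log-likelihood can be compared with a finite-difference approximation on a tiny synthetic problem; this sketch reuses the sigmoid defined above, and the toy data here are made up purely for illustration.

```python
rng = np.random.RandomState(0)
X_toy = np.hstack((np.ones((20, 1)), rng.randn(20, 1)))
y_toy = (rng.rand(20) > 0.5).astype(float)
theta_toy = rng.randn(2)

def neg_log_likelihood(theta):
    h = sigmoid(np.dot(X_toy, theta))
    return -np.sum(y_toy * np.log(h) + (1 - y_toy) * np.log(1 - h))

# Analytic gradient of the negative log-likelihood: X^T (h - y)
analytic = np.dot(X_toy.T, sigmoid(np.dot(X_toy, theta_toy)) - y_toy)

# Central finite-difference approximation, one coordinate at a time
eps = 1e-6
numerical = np.array([(neg_log_likelihood(theta_toy + eps * e) -
                       neg_log_likelihood(theta_toy - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
print(analytic)
print(numerical)
```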
Interpretation of the result What does the obtained parameter $\theta$ mean? How do we relate class membership (discrete) to the hypothesis $h_{\theta}(x)$ (continuous)? 1. Application to the Challenger Data Let's apply the above to the Challenger data we have.
# Load the data and fit both models data = np.loadtxt("data/Challenger.txt", skiprows=1) x = (data[:,0]-32.)/1.8 X = np.array([np.ones(x.shape[0]), x]).T y = np.array(data[:,1]==0,int) theta_0 = y.mean() / X.mean(axis=0) print("theta_0", theta_0) theta_J = norm2_error_logistic_regression(X, y, theta_0) print("theta_J", theta_J) theta_l = likelihood_logistic_regression(X, y, theta_0) print("theta_l", theta_l)
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
1f6547154a4fec338b9222128b3d64cf
1. Application to the Challenger Data Visualization of the results
# Predictions y_pred_J = sigmoid(np.dot(X, theta_J)) y_pred_l = sigmoid(np.dot(X, theta_l)) # Plot of data plt.figure(figsize=(16,8)) plt.plot(x[y==0], y[y==0], 'bo', label="Falla", ms=8) plt.plot(x[y>0], y[y>0], 'rs', label="Exito", ms=8) plt.plot(x, y_pred_J, label="Norm 2 error prediction") plt.plot(x, y_pred_l, label="Likelihood prediction") plt.ylim([-0.1, 1.1]) plt.legend(loc=0, numpoints=1) plt.title("Exito o Falla en lanzamiento de Challenger") plt.xlabel(r"T [${}^o C$]") plt.ylabel(r"$y$") plt.show()
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
2e49423e675fe5aac8dd122074824b9c
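One way to answer the interpretation question above: the continuous output $h_\theta(x)$ is read as a probability, and the usual rule predicts class 1 whenever $h_\theta(x) \geq 0.5$, i.e. whenever $\theta^T x \geq 0$. For this one-dimensional problem the decision boundary is the temperature at which $\theta_0 + \theta_1 T = 0$. A sketch, reusing theta_l, X and sigmoid from the cells above (here y = 1 encodes a launch with no damaged O-rings):

```python
# Temperature at which the model assigns probability 0.5
T_boundary = -theta_l[0] / theta_l[1]
print("Decision boundary at T =", T_boundary, "degrees C")

# Discrete class predictions from the continuous probabilities
y_class = (sigmoid(np.dot(X, theta_l)) >= 0.5).astype(int)
print("Predicted classes:", y_class)
```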
2. Application to the Iris Dataset There are $3$ classes defined, but we can only classify into $2$ classes. What should we do?
import numpy as np from sklearn import datasets # Loading the data iris = datasets.load_iris() X = iris.data Y = iris.target print(iris.target_names) # Print data and labels for x, y in zip(X,Y): print(x, y)
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
c25d6478081f3fe405730f9070efbd96
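One general answer to the three-class question, sketched here before the simpler route taken in the next cells, is one-vs-rest: train one binary classifier per class and assign each sample to the class whose classifier gives the highest probability. This is only an illustration; scikit-learn's LogisticRegression can also handle the three-class target directly (and, depending on the version, does one-vs-rest internally).

```python
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression

iris = datasets.load_iris()
X_all, Y_all = iris.data, iris.target

# One binary model per class; keep the probability of "this class"
probas = []
for k in range(3):
    clf = LogisticRegression()
    clf.fit(X_all, np.array(Y_all == k, int))
    probas.append(clf.predict_proba(X_all)[:, 1])

# Pick, for each sample, the class with the largest probability
Y_pred_ovr = np.argmax(np.column_stack(probas), axis=1)
print("Training accuracy:", np.mean(Y_pred_ovr == Y_all))
```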
2. Application to the Iris Dataset We can define 2 classes: Iris Setosa and not Iris Setosa. What label should we assign to each class?
import numpy as np from sklearn import datasets # Loading the data iris = datasets.load_iris() names = iris.target_names print(names) X = iris.data Y = np.array(iris.target==0, int) # Print data and labels for x, y in zip(X,Y): print(x, y)
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
f6032719b0343cf740417c8cc1d85896
2. Application to the Iris Dataset To apply the algorithm using the Logistic Regression implementation from the sklearn library, we need code like the following:
import numpy as np from sklearn import datasets from sklearn.linear_model import LogisticRegression # Loading the data iris = datasets.load_iris() names = iris.target_names X = iris.data Y = np.array(iris.target==0, int) # Fitting the model Logit = LogisticRegression() Logit.fit(X,Y) # Obtain the coefficients print(Logit.intercept_, Logit.coef_) # Predicting values Y_pred = Logit.predict(X) #x = X.mean(axis=0) #Y_pred_mean = Logit.predict(x) #print(x, Y_pred_mean)
clases/Unidad4-MachineLearning/Clase05-Clasificacion-RegresionLogistica/ClasificacionRegresionLogistica.ipynb
sebastiandres/mat281
cc0-1.0
b1ec24f4f803a16b2c6a12fea39d1e3a
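A short follow-up on the fitted model: score gives the training accuracy and predict_proba exposes the continuous probabilities behind the discrete predictions (both are standard scikit-learn methods; this reuses Logit, X and Y from the cell above).

```python
print("Training accuracy:", Logit.score(X, Y))
# Probability of the positive class (setosa) for the first five flowers
print("P(setosa):", Logit.predict_proba(X[:5])[:, 1])
```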