markdown | code | path | repo_name | license | hash |
---|---|---|---|---|---|
<img src="inner_join.png" width="50%" />
We can utilize the pandas merge method to join our members DataFrame and our rsvps DataFrame: | joined_with_rsvps_df = pd.merge(members_df, rsvps_df, left_on='anon_id', right_on='member_id')
joined_with_rsvps_df.head(3)
joined_with_rsvps_df.columns | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | e5e546b340577ad906e60e6d3e4e9333 |
Now that we have a ton of data, let's see what kind of interesting things we can discover.
Let's look at some stats on male attendees vs. female attendees:
First we can use the isin method to make DataFrames for male and female members. | male_attendees = joined_with_rsvps_df[joined_with_rsvps_df['gender'].isin(['male', 'mostly_male'])]
male_attendees.tail(3)
female_attendees = joined_with_rsvps_df[joined_with_rsvps_df['gender'].isin(['female', 'mostly_female'])]
female_attendees.tail(3) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | 3409247e88eaef57c4709f19781ef1cd |
Next we can use the sum method to count the number of male and female attendees per event and create a Series for each. | event_ids = [
'102502622', '106043892', '107740582', '120425212', '133803672', '138415912', '144769822', '149515412',
'160323532', '168747852', '175993712', '182860422', '206754182', '215265722', '219055217', '219840555',
'220526799', '221245827', '225488147', '89769502', '98833672'
]
male_attendees[event_ids].sum().head(3) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | b7fad0247b41150dfb51d6e42cb8f942 |
We can then recombine the male and female Series into a new DataFrame. | gender_attendance = pd.DataFrame({'male': male_attendees[event_ids].sum(), 'female': female_attendees[event_ids].sum()})
gender_attendance.head(3) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | 82f83d8373899023939af76762ccbe70 |
And then we can use merge again to combine this with our events DataFrame. | events_with_gender_df = pd.merge(events_df, gender_attendance, left_on='id', right_index=True)
events_with_gender_df.head(3) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | f37859b5c4603da1387587646b502712 |
Then we can plot the attendance by gender over time. | gender_df = events_with_gender_df[['female', 'male']]
gender_df.plot(title='Attendance by gender over time') | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | da300d5f87bb8cf2ea218f6a5ca27f5b |
This might be easier to interpret by looking at the percentage of females in attendance. We can use the div (divide) method to calculate this. | female_ratio = gender_df['female'].div(gender_df['male'] + gender_df['female'])
female_ratio.plot(title='Percentage female attendance over time', ylim=(0.0, 1.0)) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | 00d4629ec61f6374f63d6a0f7e48b982 |
The members DataFrame also has some other interesting stuff in it. Let's take a look at the topics column. | members_df['topics'].iloc[0] | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | bf3bc64c8dc46985c1bd5c8abd493cc4 |
Let's see if we can identify any trends in members' topics. Let's start off by identifying the most common topics: | from collections import Counter
topic_counter = Counter()
for m in members_df['topics']:
topic_counter.update([t['name'] for t in m])
topic_counter.most_common(20) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | 3977d3d1f7346f8bae848b09773d6fa3 |
Next let's create a new DataFrame where each column is one of the top 100 topics, and each row is a member. We'll set the values of each cell to be either 0 or 1 to indicate that that member has (or doesn't have) that topic. | top_100_topics = set([t[0] for t in topic_counter.most_common(100)])
topic_member_map = {}
for i, m in members_df.iterrows():
if m['topics']:
top_topic_count = {}
for topic in m['topics']:
if topic['name'] in top_100_topics:
top_topic_count[topic['name']] = 1
topic_member_map[m['anon_id']] = top_topic_count
top_topic_df = pd.DataFrame(topic_member_map)
top_topic_df.head(3) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | 42ad1c25d35877278e56ce0ccec011d8 |
Okay for what I'm going to do next, I want the rows to be the members and the columns to be the topics. We can use the T (transpose) method to fix this. | top_topic_df = top_topic_df.T
top_topic_df.head(3) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | 91fb7e9a17f7736ba923f641c78b139a |
Next we can use the fillna method to fill in the missing values with zeros. | top_topic_df.fillna(0, inplace=True)
top_topic_df.head(3) | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | aeea1a9cdd4c5750d1d3f59abbadb225 |
Next let's use a clustering algorithm to see if there are any patterns in the topics members are interested in. A clustering algorithm groups a set of data points so that similar objects are in the same group. This is a classic type of unsupervised machine learning. Below you can find visualisations of how different clustering algorithms perform on various kinds of data:
<img src="plot_cluster_comparison_001.png" width="90%" />
Kmeans clustering is quick and can scale well to larger datasets. Let's see how it performs on our dataset:
scikit-learn
<img src="scikit-learn-logo-notext.png" width="20%" />
We'll use a python machine learning library called scikit-learn to do the clustering. | from sklearn.cluster import MiniBatchKMeans as KMeans
X = top_topic_df.as_matrix()
n_clusters = 3
k_means = KMeans(init='k-means++', n_clusters=n_clusters, n_init=10, random_state=47)
k_means.fit(X)
k_means.labels_ | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | f3d66e343999d3ff6e6c279ee20afb0d |
We've grouped our members into 3 clusters; let's see how many members are in each cluster. | Counter(list(k_means.labels_)).most_common() | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | 815a53fe934bde878eda1bdc08aaff46 |
Next let's see which topics are most popular in each cluster: | from collections import defaultdict
cluster_index_map = defaultdict(list)
for i in range(k_means.labels_.shape[0]):
cluster_index_map[k_means.labels_[i]].append(top_topic_df.index[i])
for cluster_num in range(n_clusters):
print 'Cluster {}'.format(cluster_num)
f = top_topic_df[top_topic_df.index.isin(cluster_index_map[cluster_num])].sum()
f2 = f[f > 0]
f3 = f2.sort_values(ascending=False)
print f3[:10]
print | DataPhilly_Analysis.ipynb | mdbecker/daa_philly_2015 | mit | eff8f4692cdd9d784caa5cdf46807708 |
Exploring TIMIT Data <a id='timit'></a>
We will start off by exploring TIMIT data taken from 8 different regions. These measurements are taken at the midpoint of vowels, where vowel boundaries were determined automatically using forced alignment.
Uploading the data
Before we can work with the data, we have to upload our dataset. The following two lines of code read in our data and create a dataframe. The last line prints the timit dataframe, but by using the .head method it prints only the first 5 rows rather than the whole dataframe. | timit = pd.read_csv('data/timitvowels.csv')
timit.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | d7bd1e0f926cd4e21381ed63abfaef75 |
Look at the dataframe you created and try to figure out what each column measures. Each column represents a different attribute, see the following table for more information.
|Column Name|Details|
|---|---|
|speaker|unique speaker ID|
|gender|Speaker’s self-reported gender|
|region|Speaker dialect region number|
|word|Lexical item (from sentence prompt)|
|vowel|Vowel ID|
|duration|Vowel duration (seconds)|
|F1/F2/F3/f0|f0 and F1-F3 measured in Hz|
Sometimes data is encoded with an identifier, or key, to save space and simplify calculations. Each of those keys corresponds to a specific value. If you look at the region column, you will notice that all of the values are numbers. Each of those numbers corresponds to a region; for example, in our first row the speaker, cjf0, is from region 1, which corresponds to New England. Below is a table with all of the keys for region, followed by a short sketch of how the keys can be mapped to readable names.
|Key|Region|
|---|---|
|1|New England|
|2|Northern|
|3|North Midland|
|4|South Midland|
|5|Southern|
|6|New York City|
|7|Western|
|8|Army Brat|
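As a quick illustration of working with these keys, the region codes can be mapped to their names directly in pandas. This is only a sketch: it assumes the dataframe and column are named timit and region, as above, and it simply adds a readable region_name column for inspection.

```python
# Map the numeric region codes (taken from the table above) to readable names.
region_names = {1: 'New England', 2: 'Northern', 3: 'North Midland',
                4: 'South Midland', 5: 'Southern', 6: 'New York City',
                7: 'Western', 8: 'Army Brat'}
timit['region_name'] = timit['region'].map(region_names)
timit[['speaker', 'region', 'region_name']].head()
```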
Transformations
When inspecting data, you may realize that there are changes to be made -- possibly due to the representation of the data or errors in the recording. Before jumping into analysis, it is important to clean the data.
One thing to notice about timit is that the column vowel contains ARPABET identifiers for the vowels. We want to convert the vowel column to be IPA characters, and will do so in the cell below. | IPAdict = {"AO" : "ɔ", "AA" : "ɑ", "IY" : "i", "UW" : "u", "EH" : "ɛ", "IH" : "ɪ", "UH":"ʊ", "AH": "ʌ", "AX" : "ə", "AE":"æ", "EY" :"eɪ", "AY": "aɪ", "OW":"oʊ", "AW":"aʊ", "OY" :"ɔɪ", "ER":"ɚ"}
timit['vowel'] = [IPAdict[x] for x in timit['vowel']]
timit.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | e3f9d7ec5c222288f9e8a433a733cf42 |
Most of the speakers will say the same vowel multiple times, so we are going to average those values together. The end result will be a dataframe where each row represents the average values for each vowel for each speaker. | timit_avg = timit.groupby(['speaker', 'vowel', 'gender', 'region']).mean().reset_index()
timit_avg.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 69a52b4746093113e664c78d03b71a97 |
Splitting on Gender
Using the same dataframe from above, timit_avg, we are going to split into dataframes grouped by gender. To identify the possible values of gender in the gender column, we can use the method .unique on the column. | timit_avg.gender.unique() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | a31d9066b012f2c6df621dd18d961b0e |
You could see that for this specific dataset there are only "female" and "male" values in the column. Given that information, we'll create two subsets based off of gender.
We'll split timit_avg into two separate dataframes, one for females, timit_female, and one for males, timit_male. Creating these subset dataframes does not affect the original timit_avg dataframe. | timit_female = timit_avg[timit_avg['gender'] == 'female']
timit_male = timit_avg[timit_avg['gender'] == 'male'] | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 00ec707dea63fe726d90ff6165b3d036 |
Distribution of Formants
We want to inspect the distributions of F1, F2, and F3 for those that self-report as male and those that self-report as female to identify possible trends or relationships. Having our two split dataframes, timit_female and timit_male, eases the plotting process.
Run the cell below to see the distribution of F1. | sns.distplot(timit_female['F1'], kde_kws={"label": "female"})
sns.distplot(timit_male['F1'], kde_kws={"label": "male"})
plt.title('F1')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | d30d1bee52202599dc0149ae5dba7bef |
Does there seem to be a notable difference between male and female distributions of F1?
Next, we plot F2. | sns.distplot(timit_female['F2'], kde_kws={"label": "female"})
sns.distplot(timit_male['F2'], kde_kws={"label": "male"})
plt.title('F2')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 224d09eaf2d58e21d755e05c91a1bcda |
Finally, we create the same visualization, but for F3. | sns.distplot(timit_female['F3'], kde_kws={"label": "female"})
sns.distplot(timit_male['F3'], kde_kws={"label": "male"})
plt.title('F3')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 8fd31c90809277d61f8b6d64bc1657fc |
Do you see a more pronounced difference across the different F values? Are they the same throughout? Can we make any meaningful assumptions from these visualizations?
An additional question: How do you think the fact that we average each vowel together first for each individual affects the shape of the histograms?
Using the Class's Data <a id='cls'></a>
This portion of the notebook will rely on the data that was submitted for HW5. Just like we did for the TIMIT data, we are going to read it into a dataframe and modify the column vowel to reflect the corresponding IPA translation. We will name the dataframe class_data. | # reading in the data
class_data = pd.read_csv('data/110_formants.csv')
class_data.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | b1e43eab9ad064623c61a17b8d90495a |
The ID column contains a unique value for each individual. Each individual has a row for each of the different vowels they measured. | # translating the vowel column
class_data['vowel'] = [IPAdict[x] for x in class_data['vowel']]
class_data.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 5706398877b1b9b5fd12bda81bae9856 |
Splitting on Gender
As we did with the TIMIT data, we are going to split class_data based on self-reported gender. We need to figure out what the possible responses for the column were. | class_data['Gender'].unique() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | a9016d4ca1c8040a8eff8ebb091110e2 |
Notice that there are three possible values for the column. We do not have a large enough sample size to responsibly come to conclusions for Prefer not to answer, so for now we'll compare Male and Female. We'll call our new split dataframes class_female and class_male. | class_female = class_data[class_data['Gender'] == 'Female']
class_male = class_data[class_data['Gender'] == 'Male'] | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 9cb38ff8b5e061e11947ec9ac9edb3cd |
Comparing Distributions
The following visualizations compare the distribution of formants for males and females, like we did for the TIMIT data.
First, we'll start with F1. | sns.distplot(class_female['F1'], kde_kws={"label": "female"})
sns.distplot(class_male['F1'], kde_kws={"label": "male"})
plt.title('F1')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | d4225e4b5e00742f1ca800164ecbe232 |
Next is F2. | sns.distplot(class_female['F2'], kde_kws={"label": "female"})
sns.distplot(class_male['F2'], kde_kws={"label": "male"})
plt.title('F2')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 09cd3835132c75078192646b44422108 |
And finally F3. | sns.distplot(class_female['F3'], kde_kws={"label": "female"})
sns.distplot(class_male['F3'], kde_kws={"label": "male"})
plt.title('F3')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 9aee13f8b89c7c8c2ee6f6cd0b06c3b0 |
Does the spread of values appear to be the same for females and males? Do the same patterns that occur in the TIMIT data appear in the class's data?
Vowel Spaces <a id='vs'></a>
Run the cell below to define some functions that we will be using. | def plot_blank_vowel_chart():
im = plt.imread('images/blankvowel.png')
plt.imshow(im, extent=(plt.xlim()[0], plt.xlim()[1], plt.ylim()[0], plt.ylim()[1]))
def plot_vowel_space(avgs_df):
plt.figure(figsize=(10, 8))
plt.gca().invert_yaxis()
plt.gca().invert_xaxis()
vowels = ['eɪ', 'i', 'oʊ', 'u', 'æ', 'ɑ', 'ɚ', 'ɛ', 'ɪ', 'ʊ', 'ʌ'] + ['ɔ']
for i in range(len(avgs_df)):
plt.scatter(avgs_df.loc[vowels[i]]['F2'], avgs_df.loc[vowels[i]]['F1'], marker=r"$ {} $".format(vowels[i]), s=1000)
plt.ylabel('F1')
plt.xlabel('F2') | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 1b8680acba52ed036873fbe5ba6f583d |
We are going to be recreating the following graphic from this website.
Before we can get to creating, we need to get a singular value for each column for each of the vowels (so we can create coordinate pairs). To do this, we are going to find the average formant values for each of the vowels in our dataframes. We'll do this for both timit and class_data. | class_vowel_avgs = class_data.drop('ID', axis=1).groupby('vowel').mean()
class_vowel_avgs.head()
timit_vowel_avgs = timit.groupby('vowel').mean()
timit_vowel_avgs.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | ae2423dcd44fc70b0eeeb57320c87be0 |
Each of these new tables has a row for each vowel, which comprises the values averaged across all speakers.
Plotting the Vowel Space
Run the cell below to construct a vowel space for the class's data, in which we plot F1 on F2.
Note that both axes are descending. | plot_vowel_space(class_vowel_avgs)
plt.xlabel('F2 (Hz)')
plt.ylabel('F1 (Hz)'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 1c06ec1c1f5fe7e78e5f99d0ebca6327 |
Using Logarithmic Axes
In our visualization above, we use linear axes in order to construct our vowel space. The chart we are trying to recreate has logged axes (though the picture does not indicate it). Below we log-transform all of the values in our dataframes. | log_timit_vowels = timit_vowel_avgs.apply(np.log)
log_class_vowels = class_vowel_avgs.apply(np.log)
class_data['log(F1)'] = np.log(class_data['F1'])
class_data['log(F2)'] = np.log(class_data['F2'])
log_class_vowels.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 471c00f85efd9312a73a88f52ea82788 |
Below we plot the vowel space using these new values. | plot_vowel_space(log_class_vowels)
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | c53a82b240d1ac6fd1c4a3a5be48fef2 |
What effect does using the logged values have, if any? What advantages does using these values have? Are there any negatives? This paper might give some ideas.
Overlaying a Vowel Space Chart
Finally, we are going to overlay a blank vowel space chart outline to see how close our data reflects the theoretical vowel chart. | plot_vowel_space(log_class_vowels)
plot_blank_vowel_chart()
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | c219e32b9357377340b1211d93ff6964 |
How well does it match the original?
Below we generate the same graph, except using the information from the TIMIT dataset. | plot_vowel_space(log_timit_vowels)
plot_blank_vowel_chart()
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 916350be7469500bcd168dbc491d4d78 |
How does the TIMIT vowel space compare to the vowel space from our class data? What may be the cause of any differences between our vowel space and the one constructed using the TIMIT data? Do you notice any outliers or any points that seem off?
Variation in Vowel Spaces <a id='vvs'></a>
In the following visualizations, we are going to show each individual vowel from each person in the F2 and F1 dimensions (logged). Each color corresponds to a different vowel -- see the legend for the exact pairs. | sns.lmplot('log(F2)', 'log(F1)', hue='vowel', data=class_data, fit_reg=False, size=8, scatter_kws={'s':30})
plt.xlim(8.2, 6.7)
plt.ylim(7.0, 5.7); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 68db2d71b7cc3327e1be0a536b8fe157 |
In the following visualization, we replace the colors with the IPA characters and attempt to clump the vowels together. | plt.figure(figsize=(10, 12))
pick_vowel = lambda v: class_data[class_data['vowel'] == v]
colors = ['Greys_r', 'Purples_r', 'Blues_r', 'Greens_r', 'Oranges_r', \
'Reds_r', 'GnBu_r', 'PuRd_r', 'winter_r', 'YlOrBr_r', 'pink_r', 'copper_r']
for vowel, color in list(zip(class_data.vowel.unique(), colors)):
vowel_subset = pick_vowel(vowel)
sns.kdeplot(vowel_subset['log(F2)'], vowel_subset['log(F1)'], n_levels=1, cmap=color, shade=False, shade_lowest=False)
for i in range(1, len(class_data)+1):
plt.scatter(class_data['log(F2)'][i], class_data['log(F1)'][i], color='black', linewidths=.5, marker=r"$ {} $".format(class_data['vowel'][i]), s=40)
plt.xlim(8.2, 6.7)
plt.ylim(7.0, 5.7); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | ded5482c7ea8b4384d655087c6a7ce60 |
Formants vs Height <a id='fvh'></a>
We are going to compare each of the formants and height to see if there is a relationship between the two. To help visualize that, we are going to plot a regression line, which is also referred to as the line of best fit.
We are going to use the maximum of each formant to compare to height. So for each speaker, we will calculate their greatest F1, F2, and F3 across all vowels, then compare one of those to their height. We create the necessary dataframe in the cell below using the class's data. | genders = class_data['Gender']
plotting_data = class_data.drop('vowel', axis=1)[np.logical_or(genders == 'Male', genders == 'Female')]
maxes = plotting_data.groupby(['ID', 'Gender']).max().reset_index()[plotting_data.columns[:-2]]
maxes.columns = ['ID', 'Language', 'Gender', 'Height', 'Max F1', 'Max F2', 'Max F3']
maxes_female = maxes[maxes['Gender'] == 'Female']
maxes_male = maxes[maxes['Gender'] == 'Male']
maxes.head() | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 0aa6e07d8b4bd76881bfc39c7f42b704 |
First we will plot Max F1 against Height.
Note: Each gender has a different color dot, but the line represents the line of best fit for ALL points. | sns.regplot('Height', 'Max F1', data=maxes)
sns.regplot('Height', 'Max F1', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F1', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)')
print('female: green')
print('male: orange') | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | aedcd26aef6855db33a15fb8b27bd96a |
Is there a general trend for the data that you notice? What do you notice about the different color dots?
Next, we plot Max F2 on Height. | sns.regplot('Height', 'Max F2', data=maxes)
sns.regplot('Height', 'Max F2', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F2', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F2 (Hz)')
print('female: green')
print('male: orange') | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 59a7ac608b915a96dc2d628b53f9560b |
Finally, Max F3 vs Height. | sns.regplot('Height', 'Max F3', data=maxes)
sns.regplot('Height', 'Max F3', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F3', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F3 (Hz)')
print('female: green')
print('male: orange') | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | fddb0f109ca5659a38cdadba956e07f0 |
Do you notice a difference between the trends for the three formants?
Now we are going to plot two lines of best fit -- one for males, one for females. Before we plotted one line for all of the values, but now we are separating by gender to see if gender explains some of the difference in formants values.
For now, we're going deal with just Max F1. | sns.lmplot('Height', 'Max F1', data=maxes, hue='Gender')
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 66c43ce18faa967243b3c1417347d4ec |
Is there a noticeable difference between the two? Did you expect this result?
We're going to repeat the above graph, plotting a different regression line for males and females, but this time using timit -- having a larger sample size may help expose patterns. Before we do that, we have to repeat the process of calculating the maximum value of each formant for each speaker. Run the cell below to do that and generate the plot. The blue dots are females, the orange dots are males, and the green line is the regression line for all speakers. | timit_maxes = timit.groupby(['speaker', 'gender']).max().reset_index()
timit_maxes.columns = ['speaker', 'gender', 'region', 'height', 'word', 'vowel', 'Max duration', 'Max F1', 'Max F2', 'Max F3', 'Max f0']
plt.xlim(140, 210)
plt.ylim(500, 1400)
sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'female'], scatter_kws={'alpha':0.3})
sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'male'], scatter_kws={'alpha':0.3})
sns.regplot('height', 'Max F1', data=timit_maxes, scatter=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)'); | FormantsUpdated/Assignment.ipynb | ds-modules/LINGUIS-110 | mit | 09ce968d5bed32e4e7abf7dcc8981bb9 |
We want to be able to march forward in time from our starting point
(just like the picture above)
where $\theta = \theta_0$ to obtain the value of $\theta$ at
later times. To do this, we need to approximate the original
differential equation, and, in particular, the value of the time
derivative at each time. There are a number of ways to do this.
First order numerical approximation
Assume that the variation in \(\theta(t) \) is linear, i.e.
\[
\theta(t') = \theta_n + \beta t'
\]
where we use a local time coordinate \(t' = t - n\Delta t\), so that when we differentiate
\[
\frac{d \theta}{dt} = \beta
\]
To determine the approximation for the derivative therefore
becomes the solution to the following equation:
\[
\begin{split}
& \theta_{n+1} = \theta_n + \beta \Delta t \\
& \Rightarrow \beta = \frac{d \theta}{dt} = \frac{\theta_{n+1} - \theta_n}{\Delta t}
\end{split}
\]
This is a first order difference expression for the derivative which we
substitute into the original differential equation for radioactive decay at
the current timestep
\[
\frac{\theta_{n+1} - \theta_n}{\Delta t} = - k \theta_n
\]
This rearranges to give us a time-marching algorithm:
\[
\theta_{n+1} = \theta_n (1-k \Delta t)
\]
The fact that this difference equation can be solved recursively in closed form is an indication
that this problem is really not all that difficult:
\[
\theta_{n} = \theta_0 (1-k \Delta t)^n
\]
In a moment we will compute some values for this expression to see how
accurate it is. First we consider whether we can improve the accuracy of the
approximation by doing a bit more work. | steps = 10
theta_0 = 1.0
const_k = 10.0
delta_t = 1.0 / steps
theta_values = np.zeros(steps)
time_values = np.zeros(steps)
theta_values[0] = theta_0
time_values[0] = 0.0
for i in range(1, steps):
theta_values[i] = theta_values[i-1] * (1 - const_k * delta_t)
time_values[i] = time_values[i-1] + delta_t
exact_theta_values = theta_0 * np.exp(-const_k * time_values)
plot(time_values, exact_theta_values, linewidth=5.0)
plot(time_values, theta_values, linewidth=3.0, color="red")
| Notebooks/SolveMathProblems/0 - IntroductionToNumericalSolutions.ipynb | lmoresi/UoM-VIEPS-Intro-to-Python | mit | 2d70093e5fc3a2bf2926b651585897a9 |
Higher order expansion
First we try fitting the local expansion for \(\theta\) through an
additional point.
This time we assume that the variation in \(\theta(t)\) is quadratic, i.e.
$$
\theta(t') = \theta_{n-1} + \beta t' + \gamma {t'}^2
$$
The local time coordinate is $t' = t - (n-1)\Delta t$, and when we differentiate
$$
\frac{d \theta}{dt} = \beta + 2 \gamma t'
$$
To solve for \(\beta\) and \(\gamma\) we fit the curve through the sample points:
$$
\begin{split}
\theta_n &= \theta_{n-1} + \beta \Delta t + \gamma (\Delta t)^2 \\
\theta_{n+1} &= \theta_{n-1} + 2 \beta \Delta t + 4 \gamma (\Delta t)^2
\end{split}
$$
Which solve to give
$$
\begin{split}
\beta &= \left( 4 \theta_n - \theta_{n+1} - 3\theta_{n-1} \right) \frac{1}{2\Delta t} \\
\gamma &= \left( \theta_{n+1} + \theta_{n-1} -2 \theta_n \right) \frac{1}{2\Delta t^2}
\end{split}
$$
We can substitute this back into the expression for the derivative and then into the original differential equation to obtain the following
$$
\left. \frac{d\theta}{dt} \right|_{t=n\Delta t} = \beta + 2\gamma \Delta t =
\frac{1}{2\Delta t} \left( \theta_{n+1} - \theta_{n-1} \right) = -k \theta_n
$$
The difference approximation to the derivative turns out to be the average of the expressions for the previous derivative and the new derivative. We have now included information about the current timestep and the previous timestep in our expression for the value of \(\theta\) at the forthcoming timestep:
$$
\theta_{n+1} = \theta_{n-1} -2k \theta_n \Delta t
$$ | steps = 100
theta_0 = 1.0
const_k = 10.0
delta_t = 1.0 / steps
theta_values = np.zeros(steps)
time_values = np.zeros(steps)
theta_values[0] = theta_0
time_values[0] = 0.0
theta_values[1] = theta_values[0] * (1 - const_k * delta_t)
time_values[1] = delta_t
for i in range(2, steps):
theta_values[i] = theta_values[i-2] - 2.0 * theta_values[i-1] * const_k * delta_t
time_values[i] = time_values[i-1] + delta_t
exact_theta_values = theta_0 * np.exp(-const_k * time_values)
plot(time_values, exact_theta_values, linewidth=5.0)
plot(time_values, theta_values, linewidth=3.0, color="red") | Notebooks/SolveMathProblems/0 - IntroductionToNumericalSolutions.ipynb | lmoresi/UoM-VIEPS-Intro-to-Python | mit | 5a3b2a86edae33290e22ed8c97c74a03 |
The results are more accurate when a smaller timestep is used although it
requires more computation to achieve the greater accuracy. Higher order expansion
also increases the accuracy and may be more efficient in terms of the number of computations
required for a given level of accuracy.
Note, however, that the supposedly better quadratic expansion produces an error which
oscillates as time increases. Does this error grow? Does this make second order
expansions useless?
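To make the accuracy discussion concrete, here is a small self-contained check (a sketch only; the function name and the choice of step counts are just illustrative) that reruns the first order scheme at several step sizes and prints the largest deviation from the exact exponential decay.

```python
import numpy as np

def first_order_max_error(steps, theta_0=1.0, const_k=10.0):
    """Maximum absolute error of the first order scheme against the exact decay."""
    delta_t = 1.0 / steps
    theta = theta_0
    max_err = 0.0
    for i in range(1, steps):
        theta = theta * (1 - const_k * delta_t)           # first order update
        exact = theta_0 * np.exp(-const_k * i * delta_t)  # exact solution at t = i * delta_t
        max_err = max(max_err, abs(theta - exact))
    return max_err

for steps in (10, 100, 1000):
    print(steps, first_order_max_error(steps))
```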
Second Order Runge-Kutta
<img src="images/theta_rk2-1.png" width="66%">
The Runge-Kutta approach to higher order integration methods is
illustrated in the figure above. The idea is to estimate the
gradient \(d \theta / d t\) at the half way point between two
timestep values. This is done in two stages. Initially a
first order estimate, \( \hat{\theta} \) is made for the value of the function
\( \theta\) at \(t=t+\Delta t /2\) in the future. This value is then
substituted into the differential equation to obtain the
estimate for the gradient at this time. The revised gradient is
then used to update the original \(\theta(t)\) by an entire timestep.
The first order step is
$$
\begin{split}
\hat{\theta}(t+\Delta t /2) & = \theta(t) + \left. \frac{d \theta}{d t} \right|_t \frac{\Delta t}{2} \\
&= \theta(t) \left[ 1-\frac{k\Delta t}{2} \right]
\end{split}
$$
Substitute to estimate the gradient at the mid-point
$$
\left. \frac{d \theta}{d t} \right|_{t+\Delta t /2} \approx -k \theta(t) \left[ 1-\frac{k\Delta t}{2} \right]
$$
Use this value as the average gradient over the interval \( t\rightarrow t+\Delta t\) to update \(\theta\)
$$
\begin{split}
\theta(t+\Delta t) & \approx \theta(t) + \Delta t \left( -k \theta(t) \left[ 1-\frac{k\Delta t}{2} \right] \right) \\
& \approx \theta(t) \left( 1 - k \Delta t + k^2 \frac{\Delta t^2}{2} \right)
\end{split}
$$
It's worth noting that the Taylor expansion of the solution should look like
$$
e^{-kt} = 1 - kt + \frac{k^2 t^2}{2!} - \frac{k^3 t^3}{3!} + \ldots
$$
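As a quick numerical sanity check of that Taylor comparison (a sketch only; the values of k and dt are arbitrary), the one-step factor applied by this scheme can be compared with the exact decay factor over a single step:

```python
import numpy as np

k, dt = 10.0, 0.01
one_step_factor = 1 - k * dt + (k * dt) ** 2 / 2.0  # factor applied per step by the RK2 scheme
print(one_step_factor, np.exp(-k * dt))             # the two agree to third order in dt
```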
The Runge Kutta method can be extended by repeating the estimates on smaller regions of the interval. The usual choice is fourth order RK. This is largely because, obviously, it's accurate to fourth order, but also because the number of operations to go higher than fourth order is disproportionately large. See Numerical Recipes for a discussion on this and better methods for ODE's. | steps = 100
theta_0 = 1.0
const_k = 10.0
delta_t = 1.0 / steps
theta_values = np.zeros(steps)
time_values = np.zeros(steps)
theta_values[0] = theta_0
time_values[0] = 0.0
for i in range(1, steps):
theta_values[i] = theta_values[i-1] * (1 - const_k * delta_t + const_k**2 * delta_t**2 / 2.0)
time_values[i] = time_values[i-1] + delta_t
exact_theta_values = theta_0 * np.exp(-const_k * time_values)
plot(time_values, exact_theta_values, linewidth=5.0)
plot(time_values, theta_values, linewidth=3.0, color="red")
| Notebooks/SolveMathProblems/0 - IntroductionToNumericalSolutions.ipynb | lmoresi/UoM-VIEPS-Intro-to-Python | mit | 28f99851b40d9207af29813d47090d42 |
Surface Analysis using Declarative Syntax
The MetPy declarative syntax allows for a simplified interface to creating common
meteorological analyses including surface observation plots. | from datetime import datetime, timedelta
import cartopy.crs as ccrs
import pandas as pd
from metpy.cbook import get_test_data
import metpy.plots as mpplots | dev/_downloads/9041777e133eed610f5b243c688e89f9/surface_declarative.ipynb | metpy/MetPy | bsd-3-clause | 763bdb0e1662b974a0bcfadcdcd2bcf3 |
Getting the data
In this example, data is originally from the Iowa State ASOS archive
(https://mesonet.agron.iastate.edu/request/download.phtml) downloaded through a separate
Python script. The data are pre-processed to determine sky cover and weather symbols from
text output. | data = pd.read_csv(get_test_data('SFC_obs.csv', as_file_obj=False),
infer_datetime_format=True, parse_dates=['valid']) | dev/_downloads/9041777e133eed610f5b243c688e89f9/surface_declarative.ipynb | metpy/MetPy | bsd-3-clause | 3bd73f3531ad6690b481a44100bedf69 |
Plotting the data
Use the declarative plotting interface to plot surface observations over the state of
Georgia. | # Plotting the Observations using a 15 minute time window for surface observations
obs = mpplots.PlotObs()
obs.data = data
obs.time = datetime(1993, 3, 12, 13)
obs.time_window = timedelta(minutes=15)
obs.level = None
obs.fields = ['tmpf', 'dwpf', 'emsl', 'cloud_cover', 'wxsym']
obs.locations = ['NW', 'SW', 'NE', 'C', 'W']
obs.colors = ['red', 'green', 'black', 'black', 'blue']
obs.formats = [None, None, lambda v: format(10 * v, '.0f')[-3:], 'sky_cover',
'current_weather']
obs.vector_field = ('uwind', 'vwind')
obs.reduce_points = 1
# Add map features for the particular panel
panel = mpplots.MapPanel()
panel.layout = (1, 1, 1)
panel.area = 'ga'
panel.projection = ccrs.PlateCarree()
panel.layers = ['coastline', 'borders', 'states']
panel.plots = [obs]
# Collecting panels for complete figure
pc = mpplots.PanelContainer()
pc.size = (10, 10)
pc.panels = [panel]
# Showing the results
pc.show() | dev/_downloads/9041777e133eed610f5b243c688e89f9/surface_declarative.ipynb | metpy/MetPy | bsd-3-clause | da0c56256ae12e3cf7b755de8ef6f90b |
Homework 2 (DUE: Thursday February 16)
Instructions: Complete the instructions in this notebook. You may work together with other students in the class and you may take full advantage of any internet resources available. You must provide thorough comments in your code so that it's clear that you understand what your code is doing and so that your code is readable.
Submit the assignment by saving your notebook as an html file (File -> Download as -> HTML) and uploading it to the appropriate Dropbox folder on EEE.
Question 1
For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 20$. For each, assume that $y_0 = 0$, $w_1 = 1$, and $w_2 = w_3 = \cdots = w_T = 0$.
$y_t = 0.99y_{t-1} + w_t$
$y_t = y_{t-1} + w_t$
$y_t = 1.01y_{t-1} + w_t$
Plot the simulated values for each process on the same axes and be sure to include a legend. | # Question 1
| winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb | letsgoexploring/teaching | mit | 72d50b7650b32ad6b145f0ff38635b47 |
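One possible way to set this up (a sketch, not the official solution; the variable names are illustrative) is to build the shock sequence once and loop over time for each coefficient:

```python
import numpy as np
import matplotlib.pyplot as plt

T = 20
w = np.zeros(T + 1)
w[1] = 1.0                      # w_1 = 1, all later shocks are zero

for rho in (0.99, 1.0, 1.01):
    y = np.zeros(T + 1)         # y_0 = 0
    for t in range(1, T + 1):
        y[t] = rho * y[t - 1] + w[t]
    plt.plot(y, label='rho = ' + str(rho))

plt.legend()
plt.show()
```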
Question 2
For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 12$. For each, assume that $y_0 = 0$.
$y_t = 1 + 0.5y_{t-1}$
$y_t = 0.5y_{t-1}$
$y_t = -1 + 0.5y_{t-1}$
Plot the simulated values for each process on the same axes and be sure to include a legend. Set the $y$-axis limits to $[-3,3]$. | # Question 2
| winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb | letsgoexploring/teaching | mit | be536e5a729acb248ca34a9197f693e5 |
Question 3
Download a file called Econ129_US_Production_A_Data.csv from the link "Production data for the US" under the "Data" section on the course website. The file contains annual production data for the US economy including ouput, consumption, investment, and labor hours, among others. The capital stock of the US is only given for 1948. Import the data into a Pandas DataFrame and do the following:
Suppose that the depreciation rate for the US is $\delta = 0.0375$. Use the capital accumulation equation $K_{t+1} = I_t + (1-\delta)K_t$ to fill in the missing values for the capital column. Construct a plot of the computed capital stock.
Add columns to your DataFrame equal to capital per worker and output per worker by dividing the capital and output columns by the labor column. Print the first five rows of the DataFrame.
Print the average annual growth rates of capital per worker and output per worker for the US.
Recall that the average annual growth rate of a quantity $y$ from date $0$ to date $T$ is:
\begin{align}
g & = \left(\frac{y_T}{y_0}\right)^{\frac{1}{T}}-1
\end{align} | # Question 3.1
# Question 3.2
# Question 3.3
| winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb | letsgoexploring/teaching | mit | 4f9021087319d7f4d8f4a15b55965aa9 |
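For the growth-rate formula above, a minimal sketch of the computation might look like the following; the series here is made up purely for illustration and should be replaced with the relevant per-worker column of the DataFrame.

```python
import pandas as pd

# Hypothetical example series; substitute the output-per-worker or capital-per-worker column.
y = pd.Series([10.0, 10.4, 10.9, 11.5])

T = len(y) - 1
g = (y.iloc[-1] / y.iloc[0]) ** (1.0 / T) - 1
print('average annual growth rate:', g)
```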
Question 4: The Solow model with exogenous population and TFP growth
Suppose that the aggregate production function is given by:
\begin{align}
Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1}
\end{align}
where $Y_t$ denotes output, $K_t$ denotes the capital stock, $L_t$ denotes the labor supply, and $A_t$ denotes total factor productivity $TFP$. $\alpha$ is a constant.
The supply of labor grows at an exogenously determined rate $n$ and so it's value is determined recursively by a first-order difference equation:
\begin{align}
L_{t+1} & = (1+n) L_t. \tag{2}
\end{align}
Likewise, TFP grows at an exogenously determined rate $g$:
\begin{align}
A_{t+1} & = (1+g) A_t. \tag{3}
\end{align}
The rest of the economy is characterized by the same equations as before:
\begin{align}
C_t & = (1-s)Y_t \tag{4}\\
Y_t & = C_t + I_t \tag{5}\\
K_{t+1} & = I_t + ( 1- \delta)K_t. \tag{6}\\
\end{align}
Equation (4) is the consumption function where $s$ denotes the exogenously given saving rate. Equation (5) is the aggregate market clearing condition. Finally, Equation (6) is the capital evolution equation specifying that capital in year $t+1$ is the sum of newly created capital $I_t$ and the capital stock from year $t$ that has not depreciated $(1-\delta)K_t$.
Combine Equations (1) and (4) through (6) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a recurrence relation specifying $K_{t+1}$ as a function of $K_t$, $A_t$, and $L_t$:
\begin{align}
K_{t+1} & = sA_tK_t^{\alpha}L_t^{1-\alpha} + ( 1- \delta)K_t \tag{7}
\end{align}
Given initial values for capital and labor, Equations (2), (3), and (7) can be iterated on to compute the values of the capital stock and labor supply at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (1), (4), (5), and (6).
Simulation
Simulate the Solow growth model with exogenous labor growth for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:
\begin{align}
A & = 10\\
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
g & = 0.015 \\
n & = 0.01
\end{align}
Furthermore, suppose that the initial values of capital and labor are:
\begin{align}
K_0 & = 2\\
A_0 & = 1\\
L_0 & = 1
\end{align} | # Initialize parameters for the simulation (A, s, T, delta, alpha, g, n, K0, A0, L0)
# Initialize a variable called tfp as a (T+1)x1 array of zeros and set first value to A0
# Compute all subsequent tfp values by iterating over t from 0 through T
# Plot the simulated tfp series
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
# Compute all subsequent labor values by iterating over t from 0 through T
# Plot the simulated labor series
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
# Compute all subsequent capital values by iterating over t from 0 through T
# Plot the simulated capital series
# Store the simulated capital, labor, and tfp data in a pandas DataFrame called data
# Print the first 5 frows of the DataFrame
# Create columns in the DataFrame to store computed values of the other endogenous variables: Y, C, and I
# Print the first five rows of the DataFrame
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
# Print the first five rows of the DataFrame
# Create a 2x2 grid of plots of capital, output, consumption, and investment
# Create a 2x2 grid of plots of capital per worker, output per worker, consumption per worker, and investment per worker
| winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb | letsgoexploring/teaching | mit | 27a0a53e5297cd30662f4e588c0b83cb |
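A minimal sketch of the core recursions, assuming the parameter values stated above and the variable names used in the comment skeleton (tfp, labor, capital), might look like this; it is only meant to illustrate how Equations (2), (3), and (7) are iterated, not to fill in the whole assignment.

```python
import numpy as np

alpha, s, delta, g, n, T = 0.35, 0.15, 0.1, 0.015, 0.01, 100
A0, L0, K0 = 1.0, 1.0, 2.0

tfp = np.zeros(T + 1);     tfp[0] = A0
labor = np.zeros(T + 1);   labor[0] = L0
capital = np.zeros(T + 1); capital[0] = K0

for t in range(T):
    tfp[t + 1] = (1 + g) * tfp[t]        # Equation (3)
    labor[t + 1] = (1 + n) * labor[t]    # Equation (2)
    capital[t + 1] = (s * tfp[t] * capital[t] ** alpha * labor[t] ** (1 - alpha)
                      + (1 - delta) * capital[t])  # Equation (7)
```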
Question 5
Recall the Solow growth model with exogenous growth in labor and TFP:
\begin{align}
Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1}\\
C_t & = (1-s)Y_t \tag{2}\\
Y_t & = C_t + I_t \tag{3}\\
K_{t+1} & = I_t + ( 1- \delta)K_t \tag{4}\\
L_{t+1} & = (1+n) L_t \tag{5} \\
A_{t+1} & = (1+g) A_t. \tag{6}
\end{align}
Suppose that two countries called Westeros and Essos are identical except that TFP in Westeros grows faster than in Essos. Specifically:
\begin{align}
g_{Westeros} & = 0.03\\
g_{Essos} & = 0.01
\end{align}
Otherwise, the parameters for each economy are the same including the initial values of capital, labor, and TFP:
\begin{align}
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
n & = 0.01\\
K_0 & = 20\\
A_0 & = 10\\
L_0 & = 1
\end{align}
Do the following:
Find the date (value for $t$) at which output per worker in Westeros becomes at least twice as large as output per worker in Essos. Print the value for $t$ and the values of output per worker for each country.
On a single set of axes, plot simulated values of output per worker for each country for t = $1, 2, \ldots 100$.
Hint: Copy into this notebook the function that simulates the Solow model with exogenous labor growth from the end of the Notebook from Class 9. Modify the function to fit this problem. | # Question 5.1
# Question 5.2
| winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb | letsgoexploring/teaching | mit | 14641cd5c5ed29108a18d6e0dd870b82 |
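For the "find the date" part, one hedged sketch of the logic is shown below; it assumes two arrays of output per worker for $t = 0, \dots, 100$ have already been produced by the simulation function, and the argument names are hypothetical.

```python
import numpy as np

def first_doubling_date(y_westeros, y_essos):
    """Return the first t at which Westeros output per worker is at least twice Essos's."""
    ratio = np.asarray(y_westeros) / np.asarray(y_essos)
    hit = ratio >= 2
    if not hit.any():
        return None            # the threshold is never reached in the simulated horizon
    return int(np.argmax(hit)) # index of the first True value
```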
The function above can be used to generate three-dimensional datasets with the shape of a Swiss roll, the letter S, or an helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do, we will use a few of the algorithms available in Shogun to embed this data into a two-dimensional space. This is essentially the dimension reduction process as we reduce the number of features from 3 to 2. The question that arises is: what principle should we use to keep some important relations between datapoints? In fact, different algorithms imply different criteria to answer this question.
Just to start, let's pick an algorithm and one of the data sets; for example, let's see what embedding of the Swiss roll is produced by the Isomap algorithm. The Isomap algorithm is basically a slightly modified Multidimensional Scaling (MDS) algorithm which finds an embedding as the solution of the following optimization problem:
$$
\min_{x'_1, x'_2, \dots} \sum_i \sum_j \| d'(x'_i, x'_j) - d(x_i, x_j)\|^2,
$$
where $x_1, x_2, \dots \in X~~$ are given and $x'_1, x'_2, \dots \in X'~~$ are the unknown variables, while $\text{dim}(X') < \text{dim}(X)~~~$,
$d: X \times X \to \mathbb{R}~~$ and $d': X' \times X' \to \mathbb{R}~~$ are defined as arbitrary distance functions (for example Euclidean).
Put less mathematically, the MDS algorithm finds an embedding that preserves pairwise distances between points as much as possible. The Isomap algorithm changes one small detail: the distance. Instead of using local pairwise relationships, it takes a global factor into account through the shortest path on the neighborhood graph (the so-called geodesic distance). The neighborhood graph is defined as a graph with datapoints as nodes and weighted edges (with weight equal to the distance between points). An edge between points $x_i~$ and $x_j~$ exists if and only if $x_j~$ is among the $k~$ nearest neighbors of $x_i$. Later we will see that this 'global factor' changes the game for the Swiss roll dataset.
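As an aside, the geodesic distance idea itself is easy to sketch outside of Shogun. The snippet below is an illustration only (not how Shogun implements Isomap internally): it builds a k-nearest-neighbour graph on some random points with scikit-learn and computes shortest-path distances on that graph with SciPy.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

points = np.random.rand(100, 3)                                # 100 random points in 3D
knn = kneighbors_graph(points, n_neighbors=20, mode='distance')  # weighted kNN graph
geodesic = shortest_path(knn, directed=False)                  # pairwise graph (geodesic) distances
print(geodesic.shape)                                          # (100, 100)
```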
However, first we prepare a small function to plot any of the original data sets together with its embedding. | %matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
def plot(data, embedded_data, colors='m'):
fig = plt.figure()
fig.set_facecolor('white')
ax = fig.add_subplot(121,projection='3d')
ax.scatter(data[0],data[1],data[2],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
ax = fig.add_subplot(122)
ax.scatter(embedded_data[0],embedded_data[1],c=colors,cmap=plt.cm.Spectral)
plt.axis('tight'); plt.axis('off')
plt.show()
import shogun as sg
# wrap data into Shogun features
data, colors = generate_data('swissroll')
feats = sg.features(data)
# create instance of Isomap converter and configure it
isomap = sg.transformer('Isomap')
isomap.put('target_dim', 2)
# set the number of neighbours used in kNN search
isomap.put('k', 20)
# create instance of Multidimensional Scaling converter and configure it
mds = sg.transformer('MultidimensionalScaling')
mds.put('target_dim', 2)
# embed Swiss roll data
embedded_data_mds = mds.transform(feats).get('feature_matrix')
embedded_data_isomap = isomap.transform(feats).get('feature_matrix')
plot(data, embedded_data_mds, colors)
plot(data, embedded_data_isomap, colors) | doc/ipython-notebooks/converter/Tapkee.ipynb | besser82/shogun | bsd-3-clause | 05e3e9bd034f8358bfd706285842056c |
As can be seen from the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this new representation maintains the properties of the original data while reducing the amount of information required to represent it. Note that the fact that the embedding of the Swiss roll looks good in two dimensions stems from the intrinsic dimension of the input data. Although the original data is in a three-dimensional space, its intrinsic dimension is lower, since the only degrees of freedom are the polar angle and the distance from the centre, or the height.
Finally, we use yet another method, Stochastic Proximity Embedding (SPE) to embed the helix: | # wrap data into Shogun features
data, colors = generate_data('helix')
features = sg.features(data)
# create MDS instance
converter = sg.transformer('StochasticProximityEmbedding')
converter.put('target_dim', 2)
# embed helix data
embedded_features = converter.transform(features)
embedded_data = embedded_features.get('feature_matrix')
plot(data, embedded_data, colors) | doc/ipython-notebooks/converter/Tapkee.ipynb | besser82/shogun | bsd-3-clause | 1166e17b2a4572fe6862d82577b611fb |
Use pd.read_excel in order to open the file. If it says the file was not found, make sure your working directory is correct.
Make sure you assign the file to a variable so it doesn't have to run every time | table = pd.read_excel("GASISData.xls") | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | 7f603fb3d2097ab8a564f645d8f39736 |
Let's say we want to see the first few rows of the data to make sure it is the correct file (Google "pandas data preview") #table.tail shows the end of the data | table.head() | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | 54d46e2cc8f7c8b159012fbce70e0465 |
What if I want to look at just one column of data | table['PLAYNAME'] | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | ab4752f6b65b850a80afec0905e088f1 |
What if I want to create a new column | table['NEW COLUMN'] = 5
table['NEW COLUMN'] | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | d7ffb6906fc5c273a6656747347f57b9 |
What if I want to find data in a certain set, such as only in Texas (Google "pandas find rows where value is") | texasTable = table.loc[table['STATE'] == "TEXAS"]
print(texasTable) | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | 2191fc8d16fd0b864a6bb36f03ef2918 |
Run the following to get shape of table | sizeTable = table.shape
print(sizeTable) | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | 1142e50046c049c4cd832cde42e8b49c |
This finds the number of rows and the number of columns | num_rows = sizeTable[0]
num_cols = sizeTable[1]
print(num_rows)
print(num_cols) | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | 52b2433f8bfdf6f356c067a34af4f4f3 |
Rows where you have some preset parameter, such as where lattitude is greater than 80 (Google) (Google same thing as above) | table.loc[table['LATITUDE'] > 10] | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | 071ecc6e0f59d012bbef927637031193 |
Exercise: Make them find out how to rename columns
Exercise: (Usually we use Excel equations, now we are gonna practice this) Google how to add two columns together, and then create a new column with all these added values
Give them 5 mins for each exercise, and help anyone around you (a sketch of both exercises follows below)
If you want to learn more, look up how to do other operations at home
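A possible sketch of the two exercises above: the renamed column name and the derived column name are made up, and the sum is only there to demonstrate the operation, but PLAYNAME, AVPERM, and LATITUDE all appear in this dataset.

```python
# Rename a column (pandas returns a new DataFrame unless inplace=True is used)
renamed = table.rename(columns={'PLAYNAME': 'PLAY_NAME'})

# Add two existing numeric columns together into a new column
table['AVPERM_PLUS_LAT'] = table['AVPERM'] + table['LATITUDE']
table[['AVPERM', 'LATITUDE', 'AVPERM_PLUS_LAT']].head()
```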
Let's make a histogram of average permeability (in spreadsheet column CN); use the pandas column name, not CN
Google this | pd.DataFrame.hist(table,"AVPERM") | media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb | texaspse/blog | mit | addec537c2b7b9131b663079d08c628f |
Explore overfitting and underfitting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: These documents were translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that the translation is accurate or reflects the latest state of the official English documentation. If you have suggestions for improving this translation, please send a pull request to the tensorflow/docs GitHub repository. To volunteer to write or review community translations, contact the docs-ja@tensorflow.org mailing list.
As always, the code in this example uses the tf.keras API; see the TensorFlow Keras guide for details.
In the previous examples, classifying movie reviews and estimating fuel efficiency, we saw that the accuracy of our model on the validation data would peak after training for a number of epochs and then start to decrease.
In other words, the model overfit the training data. Learning how to deal with overfitting is important. It is not hard to achieve high accuracy on the training set; what we want is a model that generalizes to (previously unseen) test data.
The opposite of overfitting is underfitting. Underfitting occurs when the model still has room for improvement on the test data. It can happen for a number of reasons: the model is not powerful enough, it is over-regularized, or it simply has not been trained long enough. It means the model has not learned the relevant patterns in the training data.
If you train for too long, the model will start to overfit and learn patterns from the training data that do not generalize to the test data. We need to aim for the middle ground between overfitting and underfitting. As we will see, training for just the right number of epochs is a necessary skill.
The best solution to prevent overfitting is to use more training data. A model trained on more data will naturally generalize better. When that is not possible, the next best solution is to use techniques like regularization. Regularization places constraints on the quantity and type of information the model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.
In this notebook, we will introduce two commonly used regularization techniques, weight regularization and dropout, and use them to improve our IMDB movie review classification notebook. | import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | cf75e169fd02f807e42293155bd8d92b |
Download the IMDB dataset
Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit the training set, and we will use it to demonstrate when overfitting occurs and how to fight it.
Multi-hot encoding a list means turning it into a vector of 0s and 1s. Concretely, it means for example turning the sequence [3, 5] into a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. | NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set the specified indices of results[i] to 1
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | dfe1c6309bbc010be084398dff68bf9c |
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so we expect more 1-values near index zero. Let's look at the distribution. | plt.plot(train_data[0]) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | 0cf77ee9d81ecf408cc1dfa15723bdb2 |
Demonstrate overfitting
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters has more "memorization capacity" and can therefore easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping with no generalization power, which is useless when making predictions on previously unseen data.
Always keep this in mind: deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization resources, it cannot learn the mapping as easily. To minimize its loss, it has to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting the training data. There is a balance between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula to determine the right size or architecture of a model (the number of layers, or the size of each layer). You will have to experiment with a series of different architectures.
To find an appropriate model size, it is best to start with relatively few layers and parameters, then increase the size of the layers or add new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.
As a baseline, we'll build a simple model using only Dense layers, then create smaller and larger versions and compare them.
Create a baseline model | baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | d8d52476f8ca09a9929376f39379ca82 |
Create a smaller model
Let's build a model with fewer hidden units than the baseline model we just created. | smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary() | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | e6b7ec4c8c958009df8f87d3ef31fbd8 |
And train the model using the same data. | smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | c9e8ca9d4babc2407629d3dff2647b26 |
Create a bigger model
As an exercise, you can create an even larger model and see how quickly it begins overfitting. Next, let's add to this benchmark a network with far more capacity than the problem warrants. | bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary() | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | 83ea9e2f589862e9cd437461f17ac705 |
And, again, train this model using the same data. | bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | fc9055adf73b914b2014b42bbb64b793 |
Plot the training and validation loss
<!--TODO(markdaoust): This should be a one-liner with tensorboard -->
The solid lines show the training loss and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4), and its performance degrades more slowly once it starts overfitting. | def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)]) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | 4f6f1c43157442883e4439bb18582246 |
Note that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it can model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large gap between the training and validation loss).
Strategies
Add weight regularization
You may be familiar with Occam's razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or, as we saw above, a model with fewer parameters altogether). Thus, a common way to mitigate overfitting is to constrain the network by forcing its weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the network's loss function a cost associated with having large weights. This cost comes in two flavors, sketched in the formulas below:
L1 regularization, where the cost added is proportional to the absolute value of the weight coefficients (i.e. the "L1 norm" of the weights).
L2 regularization, where the cost added is proportional to the square of the weight coefficients (i.e. the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization.
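Written out, with L_data the unregularized loss, w_i the individual weight coefficients, and λ the regularization strength (0.001 in the code below), the two penalties are:

```latex
L_{\mathrm{L1}} = L_{\mathrm{data}} + \lambda \sum_i \lvert w_i \rvert
\qquad\qquad
L_{\mathrm{L2}} = L_{\mathrm{data}} + \lambda \sum_i w_i^{2}
```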
In tf.keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now. | l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | 3d033ea717d4920c9dd26182b507176a |
l2(0.001) means that every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value**2 to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be higher at training time than at test time.
Let's look at the impact of L2 regularization: | plot_history([('baseline', baseline_history),
('l2', l2_model_history)]) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | abac29e3084c257d61071c6b4560eab2 |
As you can see, the L2-regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters.
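tf.keras also provides `keras.regularizers.l1` and `keras.regularizers.l1_l2`, so an L1 or combined penalty can be tried in exactly the same way. A minimal sketch, not part of the original tutorial, using the same arbitrary 0.001 coefficient (it would still need to be compiled and fit like the models above):

```python
# Same architecture as l2_model, but with an L1 penalty on the kernels.
l1_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1(0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1(0.001),
                       activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
```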
Add dropout
Dropout is one of the most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout is applied to a layer and consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Say a given layer would normally return the vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out; instead, the layer's output values are scaled down by a factor equal to the dropout rate, to balance for the fact that more units are active than at training time.
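As a rough illustration of the training-time masking (this is not how tf.keras implements dropout internally; the seed and the resulting mask are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_output = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rate = 0.5

# Zero out a random subset of the features, keeping each with probability 1 - rate.
mask = rng.random(layer_output.shape) >= rate
print(layer_output * mask)
```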
In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.
Let's add two Dropout layers to our IMDB network: | dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)]) | site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb | tensorflow/docs-l10n | apache-2.0 | 41c2178c60f57cc4706ef589005b1450 |
We want to train the model using the training set, then evaluate it on the test set. As our evaluation metric we will use the ROC AUC, averaged over the 12 tasks included in the dataset. First let's see how to do this with the DeepChem API. | model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print(model.evaluate(test_dataset, [metric])) | examples/notebooks/Estimators.ipynb | ktaneishi/deepchem | mit | e8daf3f12e1274ef8f667570ad62f708 |
Simple enough. Now let's see how to do the same thing with the TensorFlow APIs. Fair warning: this is going to take a lot more code!
To begin with, TensorFlow doesn't allow a dataset to be passed directly to a model. Instead, you need to write an "input function" to construct a particular set of tensors and return them in a particular format. Fortunately, Dataset's make_iterator() method provides exactly the tensors we need in the form of a tf.data.Iterator. This allows our input function to be very simple. | def input_fn(dataset, epochs):
x, y, weights = dataset.make_iterator(batch_size=100, epochs=epochs).get_next()
return {'x': x, 'weights': weights}, y | examples/notebooks/Estimators.ipynb | ktaneishi/deepchem | mit | 75f47db21afaac443cd3a4177bc9392a |
Next, you have to use the functions in the tf.feature_column module to create an object representing each feature and weight column (but curiously, not the label column—don't ask me why!). These objects describe the data type and shape of each column, and give each one a name. The names must match the keys in the dict returned by the input function. | x_col = tf.feature_column.numeric_column('x', shape=(n_features,))
weight_col = tf.feature_column.numeric_column('weights', shape=(n_tasks,)) | examples/notebooks/Estimators.ipynb | ktaneishi/deepchem | mit | d4a313bff9134f81be104d0301a8c71f |
Unlike DeepChem models, which allow arbitrary metrics to be passed to evaluate(), estimators require all metrics to be defined up front when you create the estimator. Unfortunately, TensorFlow doesn't have very good support for multitask models. It provides an AUC metric, but no easy way to average this metric over tasks. We therefore must create a separate metric for every task, then define our own metric function to compute the average of them. | def mean_auc(labels, predictions, weights):
metric_ops = []
update_ops = []
for i in range(n_tasks):
metric, update = tf.metrics.auc(labels[:,i], predictions[:,i], weights[:,i])
metric_ops.append(metric)
update_ops.append(update)
mean_metric = tf.reduce_mean(tf.stack(metric_ops))
update_all = tf.group(*update_ops)
return mean_metric, update_all | examples/notebooks/Estimators.ipynb | ktaneishi/deepchem | mit | b0ba04f5d2e67b6a9b419c762b1a2a2c |
Now we create our Estimator by calling make_estimator() on the DeepChem model. We provide as arguments the objects created above to represent the feature and weight columns, as well as our metric function. | estimator = model.make_estimator(feature_columns=[x_col],
weight_column=weight_col,
metrics={'mean_auc': mean_auc},
model_dir='estimator') | examples/notebooks/Estimators.ipynb | ktaneishi/deepchem | mit | f71c141456360e255f1fb84accda35d2 |
We are finally ready to train and evaluate it! Notice how the input function passed to each method is actually a lambda. This allows us to write a single function, then use it with different datasets and numbers of epochs. | estimator.train(input_fn=lambda: input_fn(train_dataset, 100))
print(estimator.evaluate(input_fn=lambda: input_fn(test_dataset, 1))) | examples/notebooks/Estimators.ipynb | ktaneishi/deepchem | mit | 08e7d8cf3aae0d4f54d45ff07ecf7ce3 |
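If you also want raw predictions back from the trained estimator, the standard Estimator.predict() call works with a features-only input function. A minimal sketch (the keys inside each prediction dict depend on how the model head is defined, so inspect one element rather than assuming a particular key):

```python
def predict_input_fn(dataset):
    # Features only; labels are not needed at prediction time.
    x, y, weights = dataset.make_iterator(batch_size=100, epochs=1).get_next()
    return {'x': x, 'weights': weights}

# Peek at the first few per-example prediction dicts.
for i, pred in enumerate(estimator.predict(input_fn=lambda: predict_input_fn(test_dataset))):
    if i >= 3:
        break
    print(pred)
```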
Natural Neighbor Verification
Walks through the steps of Natural Neighbor interpolation to validate that the algorithmic
approach taken in MetPy is correct.
Find natural neighbors visual test
A triangle is a natural neighbor for a point if the
circumscribed circle <https://en.wikipedia.org/wiki/Circumscribed_circle>_ of the
triangle contains that point. It is important that we grab the correct triangles
for each point before proceeding with the interpolation.
Algorithmically:
We place all of the grid points in a KDTree. These provide worst-case O(n) time
complexity for spatial searches.
We generate a Delaunay Triangulation <https://docs.scipy.org/doc/scipy/
reference/tutorial/spatial.html#delaunay-triangulations>_
using the locations of the provided observations.
For each triangle, we calculate its circumcenter and circumradius. Using the
KDTree, we then assign to each grid point every triangle whose circumcenter lies
within one circumradius of that grid point's location.
The resulting dictionary uses the grid index as a key and a set of natural
neighbor triangles in the form of triangle codes from the Delaunay triangulation.
This dictionary is then iterated through to calculate interpolation values.
We then traverse the ordered natural neighbor edge vertices for a particular
grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate
proportional polygon areas.
Circumcenter of (n - 1), n, grid_location
Circumcenter of (n + 1), n, grid_location
Determine what existing circumcenters (ie, Delaunay circumcenters) are associated
with vertex n, and add those as polygon vertices. Calculate the area of this polygon.
Increment the current edges to be checked, i.e.:
n - 1 = n, n = n + 1, n + 1 = n + 2
Repeat steps 5 & 6 until all of the edge combinations of 3 have been visited.
Repeat steps 4 through 7 for each grid cell. | import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d
from scipy.spatial.distance import euclidean
from metpy.interpolate import geometry
from metpy.interpolate.points import natural_neighbor_point | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 53d0dc8892b0b99d102d9b269dd8b663 |
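Before setting up the test case, here is a small standalone sketch of the containment test from step 3 of the algorithm above: a triangle counts as a natural neighbor of a point when the point falls inside the triangle's circumscribed circle. The coordinates below are made up for illustration and use plain numpy rather than MetPy's `geometry` helpers.

```python
import numpy as np

# Hypothetical triangle vertices and a query point, purely for illustration.
a, b, c = np.array([(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)])
grid_pt = np.array([1.0, 1.0])

# Circumcenter: the point equidistant from a, b and c, found from the
# perpendicular-bisector system 2*(b - a) . x = b.b - a.a (and likewise for c).
A = 2 * np.array([b - a, c - a])
rhs = np.array([b @ b - a @ a, c @ c - a @ a])
center = np.linalg.solve(A, rhs)
radius = np.linalg.norm(center - a)

# Inside the circumcircle -> this triangle is a natural neighbor of grid_pt.
print(np.linalg.norm(center - grid_pt) <= radius)
```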
For a test case, we generate 10 random points and observations, where the
observation values are just the square of the x coordinate divided by 1000
(matching the code below).
We then create two test points (grid 0 & grid 1) at which we want to
estimate a value using natural neighbor interpolation.
The locations of these observations are then used to generate a Delaunay triangulation. | np.random.seed(100)
pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = (pts[:, 0] * pts[:, 0]) / 1000
tri = Delaunay(pts)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility
delaunay_plot_2d(tri, ax=ax)
for i, zval in enumerate(zp):
ax.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))
sim_gridx = [30., 60.]
sim_gridy = [30., 60.]
ax.plot(sim_gridx, sim_gridy, '+', markersize=10)
ax.set_aspect('equal', 'datalim')
ax.set_title('Triangulation of observations and test grid cell '
'natural neighbor interpolation values')
members, circumcenters = geometry.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))
val = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0],
circumcenters)
ax.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))
val = natural_neighbor_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1],
circumcenters)
ax.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1])) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 6dc5cc5a7eded3a2b7ad59829e919ecf |
Using the circumcenter and circumcircle radius information from
:func:metpy.interpolate.geometry.find_natural_neighbors, we can visually
examine the results to see if they are correct. | def draw_circle(ax, x, y, r, m, label):
th = np.linspace(0, 2 * np.pi, 100)
nx = x + r * np.cos(th)
ny = y + r * np.sin(th)
ax.plot(nx, ny, m, label=label)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility
delaunay_plot_2d(tri, ax=ax)
ax.plot(sim_gridx, sim_gridy, 'ks', markersize=10)
for i, (x_t, y_t) in enumerate(circumcenters):
r = geometry.circumcircle_radius(*tri.points[tri.simplices[i]])
if i in members[1] and i in members[0]:
draw_circle(ax, x_t, y_t, r, 'm-', str(i) + ': grid 1 & 2')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[0]:
draw_circle(ax, x_t, y_t, r, 'r-', str(i) + ': grid 0')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[1]:
draw_circle(ax, x_t, y_t, r, 'b-', str(i) + ': grid 1')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
else:
draw_circle(ax, x_t, y_t, r, 'k:', str(i) + ': no match')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=9)
ax.set_aspect('equal', 'datalim')
ax.legend() | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | f505bf4f50a6b94c82b8fe948472655d |
What?....the circle from triangle 8 looks pretty darn close. Why isn't
grid 0 included in that circle? | x_t, y_t = circumcenters[8]
r = geometry.circumcircle_radius(*tri.points[tri.simplices[8]])
print('Distance between grid0 and Triangle 8 circumcenter:',
euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))
print('Triangle 8 circumradius:', r) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 443fe39371df276ac46c841761ee8d61 |
Let's do a manual check of the above interpolation value for grid 0 (the southernmost grid)
Grab the circumcenters and radii for natural neighbors | cc = np.array(circumcenters)
r = np.array([geometry.circumcircle_radius(*tri.points[tri.simplices[m]]) for m in members[0]])
print('circumcenters:\n', cc)
print('radii\n', r) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 3a5d93011724ebd141903902a04f8c72 |
Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram
<https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams>_
which serves as a complementary (but not necessary)
spatial data structure that we use here simply to show areal ratios.
Notice that the two natural neighbor triangle circumcenters are also vertices
in the Voronoi plot (green dots), and the observations are in the polygons (blue dots). | vor = Voronoi(list(zip(xp, yp)))
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility
voronoi_plot_2d(vor, ax=ax)
nn_ind = np.array([0, 5, 7, 8])
z_0 = zp[nn_ind]
x_0 = xp[nn_ind]
y_0 = yp[nn_ind]
for x, y, z in zip(x_0, y_0, z_0):
ax.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))
ax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)
ax.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))
ax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',
label='natural neighbor\ncircumcenters')
for center in cc:
ax.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),
xy=(center[0] + 1, center[1] + 1))
tris = tri.points[tri.simplices[members[0]]]
for triangle in tris:
x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]
y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]]
ax.plot(x, y, ':', linewidth=2)
ax.legend()
ax.set_aspect('equal', 'datalim')
def draw_polygon_with_info(ax, polygon, off_x=0, off_y=0):
"""Draw one of the natural neighbor polygons with some information."""
pts = np.array(polygon)[ConvexHull(polygon).vertices]
for i, pt in enumerate(pts):
ax.plot([pt[0], pts[(i + 1) % len(pts)][0]],
[pt[1], pts[(i + 1) % len(pts)][1]], 'k-')
avex, avey = np.mean(pts, axis=0)
ax.annotate('area: {:.3f}'.format(geometry.area(pts)), xy=(avex + off_x, avey + off_y),
fontsize=12)
cc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc1, cc2])
cc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)
cc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15)
cc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2]) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | fc1fef8076594be9b71f4d22f12ec708 |
Put all of the generated polygon areas and their affiliated values in arrays.
Calculate the total area of all of the generated polygons. | areas = np.array([60.434, 448.296, 25.916, 70.647])
values = np.array([0.064, 1.156, 2.809, 0.225])
total_area = np.sum(areas)
print(total_area) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 8cbb1a1a3634a37b1f2020d24b64970f |
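The rest of this check reproduces the area-weighted average that natural neighbor interpolation computes: with A_i the polygon area associated with neighboring observation i and z_i its value,

```latex
f(\mathbf{x}_{\text{grid}}) = \sum_i \frac{A_i}{\sum_j A_j}\, z_i
```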
For each polygon area, calculate its percent of total area. | proportions = areas / total_area
print(proportions) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 43e5b7a26f4e4158e6349843ba0657ac |
Multiply the percent of total area by the respective values. | contributions = proportions * values
print(contributions) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 861b9c40db3a59a19b9b6d16f09dbe32 |
The sum of this array is the interpolation value! | interpolation_value = np.sum(contributions)
function_output = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri,
members[0], circumcenters)
print(interpolation_value, function_output) | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | ee8b4b04a1e81bbe33e202154248816c |
The values are slightly different due to truncating the area values in
the above visual example to the 3rd decimal place. | plt.show() | v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb | metpy/MetPy | bsd-3-clause | 18e0fa7454b40ec1e813fa17cee9fa84 |
Loading the images into Python | def cv_image_vers_vecteur(image):  # converts an image into a vector; this will be used by the operations below
return ravel(image)
def charge_l_image(nom_de_fichier):
    return misc.imread(nom_de_fichier, flatten=True, mode = "L")/255.  # this converts the image into values between 0 and 1
def charge_l_image_sous_forme_de_vecteur(nom_de_fichier):
return cv_image_vers_vecteur(charge_l_image(nom_de_fichier))
def charge_l_image_et_trace(nom_de_fichier_complet):
imshow(charge_l_image(nom_de_fichier_complet))
show()
charge_l_image("training_set_perceptron/A1.png")
shape(charge_l_image("training_set_perceptron/A1.png")) | README.ipynb | konkam/perceptron_guide | gpl-3.0 | 952b90b8e2a6ebe2a79c7c56d11e04e4 |
We can see that an image is made up of 50x50 = 2500 values between 0 and 1. | charge_l_image_sous_forme_de_vecteur("training_set_perceptron/A1.png")
shape(charge_l_image_sous_forme_de_vecteur("training_set_perceptron/A1.png"))
charge_l_image_et_trace("training_set_perceptron/A1.png") | README.ipynb | konkam/perceptron_guide | gpl-3.0 | b272073c26fd8a39110ee2c26f5c85ff |
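As a quick sanity check (a sketch reusing the helpers defined above and the 50x50 image size noted earlier), the flattened vector can be reshaped back into the original image:

```python
vec = charge_l_image_sous_forme_de_vecteur("training_set_perceptron/A1.png")
image_reconstruite = vec.reshape(50, 50)  # inverse of ravel for a 50x50 image
imshow(image_reconstruite)
show()
```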
Can I extract just the sentence that belongs to the replied comment? | html = commentsHTML[0]
comms = html.findAll('comment')
first_comm_s = html.findAll('s', recursive=False)
first_comm_s
| testdataextractor/TestDataExtractor.ipynb | betoesquivel/onforums-application | mit | a4e3b1ecb8daa665cc87fe32fbe2ced9 |
Can I extract all the comment tags, including the nested ones?
Turns out findAll is recursive by default and gets me every comment.
From there, getting the parents is easy. | for c in commentsHTML:
if c['id'] == "c4":
print c
print [p['id'] for p in c.findParents("comment")]
break | testdataextractor/TestDataExtractor.ipynb | betoesquivel/onforums-application | mit | 28a7d6ca1ae1ba6675498fc85ce853ee |
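For contrast with that recursive default, a minimal sketch (reusing the `html` soup and tag names from above) of restricting the search to direct children only:

```python
# Direct children only, versus the default recursive search over all descendants.
top_level_comments = html.findAll('comment', recursive=False)
all_comments = html.findAll('comment')
print(len(top_level_comments), len(all_comments))
```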