Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (string, 15 classes), hash (string, 32 chars).
Scipy: statistical functions and more. We also obtain the PDF from Scipy. To plot it, we have to apply it to a range of values to obtain data points. Here the strength of Numpy shows once again: we can simply apply the function to the whole array and get an array of results back. Scipy is organized into submodules, so our import above does not pull in all of them; we have to import the statistics module explicitly.
import scipy.stats

pdf = scipy.stats.norm(2, 3).pdf
xs = np.linspace(-15, 15, 5000)  # Generate 5000 equally spaced values in the interval [-15, 15].
# 'normed' is the old matplotlib argument; newer versions use density=True instead.
plt.hist(gauss, bins=20, normed=True, label='Werte')
plt.plot(xs, pdf(xs), label='PDF')
plt.xlabel('Wert')
plt.ylabel('Relative Häufigkeit')
plt.legend()
tutorials/Wissenschaftliches Python Tutorial.ipynb
kdungs/teaching-SMD2-2016
mit
2e97670b3ef98f5243bd498a34f656f0
That already looks quite nice. To finish up, we want to compute uncertainties on the bins and add them to the histogram. To keep it simple, we do not use the normalized PDF but instead scale our PDF to our data.
bins, edges = np.histogram(gauss, bins=20)
bin_width = edges[1] - edges[0]  # All bins have the same width
centres = edges[:-1] + bin_width / 2

def scaled_pdf(x):
    return bin_width * n_events * pdf(x)

plt.errorbar(  # Typical "particle physicist's histogram"
    centres,             # x
    bins,                # y
    xerr=bin_width/2,    # Uncertainty on x: here, half the bin width
    yerr=np.sqrt(bins),  # Uncertainty on y
    fmt='o',             # Use points instead of lines for plotting
    label='Data'
)
plt.plot(xs, scaled_pdf(xs), label='PDF')
plt.xlabel('Wert')
plt.ylabel('Relative Häufigkeit')
plt.ylim(-100, 2000)  # Manually set the visible vertical range
plt.legend()
tutorials/Wissenschaftliches Python Tutorial.ipynb
kdungs/teaching-SMD2-2016
mit
9b8085a7b46d2bd72fad5b0e41514bdb
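A quick consistency check for the scaled PDF above, as a minimal sketch: it assumes `gauss`, `pdf`, `bins`, `centres` and `scaled_pdf` from the cells above, and it assumes that `n_events` (not defined in the cells shown) is the sample size.

```python
n_events = len(gauss)              # assumption: n_events is the number of generated samples
expected = scaled_pdf(centres)     # expected counts per bin from the scaled PDF
print(bins.sum() == n_events)      # every sample falls into exactly one bin
print(bins[:5])                    # observed counts in the first few bins
print(np.round(expected[:5], 1))   # expected counts in the same bins
```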
ComicAnalyzer For the magazine analysis, we define the ComicAnalyzer class.
class ComicAnalyzer(): """漫画雑誌の目次情報を読みだして,管理するクラスです.""" def __init__(self, data_path='data/wj-api.json', min_week=7, short_week=10): """ 初期化時に,data_pathにある.jsonファイルから目次情報を抽出します. - self.data: 全目次情報を保持するリスト型 - self.all_titles: 全作品名情報を保持するリスト型 - self.serialized_titles: min_week以上連載した全作品名を保持するリスト型 - self.last_year: 最新の目次情報の年を保持する数値型 - self.last_no: 最新の目次情報の号数を保持する数値型 - self.end_titles: self.serialized_titlesのうち,self.last_yearおよび self.last_noまでに終了した全作品名を保持するリスト型 - self.short_end_titles: self.end_titlesのうち,short_week週以内に 連載が終了した作品名を保持するリスト型 - self.long_end_titles: self.end_titlesのうち,short_week+1週以上に 連載が継続した作品名を保持するリスト型 """ self.data = self.read_data(data_path) self.all_titles = self.collect_all_titles() self.serialized_titles = self.drop_short_titles(self.all_titles, min_week) self.last_year = self.find_last_year(self.serialized_titles[-100:]) self.last_no = self.find_last_no(self.serialized_titles[-100:], self.last_year) self.end_titles = self.drop_continued_titles( self.serialized_titles, self.last_year, self.last_no) self.short_end_titles = self.drop_long_titles( self.end_titles, short_week) self.long_end_titles = self.drop_short_titles( self.end_titles, short_week + 1) def read_data(self, data_path): """ data_pathにあるjsonファイルを読み出して,全ての目次情報をまとめたリストを返します. """ with open(data_path, 'r', encoding='utf-8') as f: data = json.load(f) return data def collect_all_titles(self): """ self.dataから全ての作品名を抽出したリストを返します. """ titles = [] for comic in self.data: if comic['title'] not in titles: titles.append(comic['title']) return titles def extract_item(self, title='ONE PIECE', item='worst'): """ self.dataからtitleのitemをすべて抽出したリストを返します. """ return [comic[item] for comic in self.data if comic['title'] == title] def drop_short_titles(self, titles, min_week): """ titlesのうち,min_week週以上連載した作品名のリストを返します. """ return [title for title in titles if len(self.extract_item(title)) >= min_week] def drop_long_titles(self, titles, max_week): """ titlesのうち,max_week週以内で終了した作品名のリストを返します. """ return [title for title in titles if len(self.extract_item(title)) <= max_week] def find_last_year(self, titles): """ titlesが掲載された雑誌のうち,最新の年を返します. """ return max([self.extract_item(title, 'year')[-1] for title in titles]) def find_last_no(self, titles, year): """ titlesが掲載されたyear年の雑誌のうち,最新の号数を返します. """ return max([self.extract_item(title, 'no')[-1] for title in titles if self.extract_item(title, 'year')[-1] == year]) def drop_continued_titles(self, titles, year, no): """ titlesのうち,year年のno号までに連載が終了した作品名のリストを返します. """ end_titles = [] for title in titles: last_year = self.extract_item(title, 'year')[-1] if last_year < year: end_titles.append(title) elif last_year == year: if self.extract_item(title, 'no')[-1] < no: end_titles.append(title) return end_titles def search_title(self, key, titles): """ titlesのうち,keyを含む作品名のリストを返します. """ return [title for title in titles if key in title]
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
ece09f1a0f55a01123722b1ccf096426
Since the processing is fairly hard to follow, here is a supplement on what happens at initialization (__init__()):
1. self.all_titles literally holds every title. However, it clearly also includes one-shot and special-project works.
2. So we extract the works serialized for at least min_week weeks as self.serialized_titles. However, self.serialized_titles still contains works that were ongoing as of the latest issue in the database, which makes their serialization lengths inaccurate. For example, a popular title that is still running, such as 「鬼滅の刃」, would look like a work that ended after 21 weeks.
3. We therefore extract only the works that had (presumably) finished as of the latest issue in the database as self.end_titles. self.end_titles is the full population for this analysis.
4. From self.end_titles, works that ended within 10 weeks are extracted as self.short_end_titles, and works that continued for 11 weeks or more as self.long_end_titles.
Analysis
wj = ComicAnalyzer()
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
cb3e81a7b2a9d2e067a18056f4a2ccd5
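As a quick sanity check of the partition described above, a small sketch (it assumes `wj` was constructed as in the cell above): the short-lived and long-running titles should split self.end_titles without overlap.

```python
# short_end_titles and long_end_titles should partition end_titles
assert set(wj.short_end_titles) | set(wj.long_end_titles) == set(wj.end_titles)
assert not set(wj.short_end_titles) & set(wj.long_end_titles)
print(len(wj.end_titles), len(wj.short_end_titles), len(wj.long_end_titles))
```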
Let's plot the running order (worst) of the first 10 chapters for the 10 most recent titles that ended within 10 weeks. The larger the value, the closer to the front of the magazine the chapter was placed.
for title in wj.short_end_titles[-10:]: plt.plot(wj.extract_item(title)[:10], label=title[:6]) plt.xlabel('Week') plt.ylabel('Worst') plt.ylim(0,22) plt.legend()
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
3b431d5ed149f543080a34e8e9add64d
Huh? Wasn't 「斉木楠雄」 serialized for quite a long time…? In cases like this, we use search_title().
wj.search_title('斉木', wj.all_titles) len(wj.extract_item('超能力者 斉木楠雄のΨ難')) wj.extract_item('超能力者 斉木楠雄のΨ難', 'year'), \ wj.extract_item('超能力者 斉木楠雄のΨ難', 'no') len(wj.extract_item('斉木楠雄のΨ難'))
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
49d9c5e4e6670acb41a675dd1a803b1b
Apparently, 「超能力者 斉木楠雄のΨ難」 ran as a trial of seven one-shot installments, after which the serialization of 「斉木楠雄のΨ難」 started (Wikipedia). Next, let's plot the running order of the first 10 chapters of recent hit titles (my personal selection).
target_titles = ['ONE PIECE', 'NARUTO-ナルト-', 'BLEACH', 'HUNTER×HUNTER'] for title in target_titles: plt.plot(wj.extract_item(title)[:10], label=title[:6]) plt.ylim(0,22) plt.xlabel('Week') plt.ylabel('Worst') plt.legend()
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
c87fd4cf2b7c98702336f429a8ac3ac4
Out of personal curiosity, let's look at the running order up to chapter 50.
target_titles = ['ONE PIECE', 'NARUTO-ナルト-', 'BLEACH', 'HUNTER×HUNTER'] for title in target_titles: plt.plot(wj.extract_item(title)[:50], label=title[:6]) plt.ylim(0,22) plt.xlabel('Week') plt.ylabel('Worst') plt.legend()
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
3103c573a84ab09ebc61c2c982cdd5d0
I expected this to some extent, but still, impressive. By the way, if you are a manga fan, it is fun to look at the running order while fetching the subtitles with extract_item().
wj.extract_item('ONE PIECE', 'subtitle')[:10]
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
76928e49e3164dee1155bc28698d6446
Now let's do a correlation analysis with seaborn. For a start, we plot the running order up to week 6. Because many points overlap at the same coordinates and are very hard to read, we add a little random noise for readability. Week 1 is excluded because in almost all cases the first chapter is placed at the front of the magazine.
end_data = pd.DataFrame( [[wj.extract_item(title)[1] + np.random.randn() * .3, wj.extract_item(title)[2] + np.random.randn() * .3, wj.extract_item(title)[3] + np.random.randn() * .3, wj.extract_item(title)[4] + np.random.randn() * .3, wj.extract_item(title)[5] + np.random.randn() * .3, '短命作品' if title in wj.short_end_titles else '継続作品'] for title in wj.end_titles]) end_data.columns = ["Worst (week2)", "Worst (week3)", "Worst (week4)", "Worst (week5)", "Worst (week6)", "Type"] sns.pairplot(end_data, hue="Type", palette="husl")
1_analyze_comic_data_j.ipynb
haltaro/predicting-comic-end
mit
3cc0aec61c2c2f1dc222071e9dc45600
WikiData
endpoint = 'https://query.wikidata.org/bigdata/namespace/wdq/sparql' query = """ PREFIX wikibase: <http://wikiba.se/ontology#> PREFIX wd: <http://www.wikidata.org/entity/> PREFIX wdt: <http://www.wikidata.org/prop/direct/> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> SELECT ?president ?cause ?dob ?dod WHERE { ?pid wdt:P39 wd:Q11696 . ?pid wdt:P509 ?cid . ?pid wdt:P569 ?dob . ?pid wdt:P570 ?dod . OPTIONAL { ?pid rdfs:label ?president filter (lang(?president) = "en") . } OPTIONAL { ?cid rdfs:label ?cause filter (lang(?cause) = "en") . } } """ requests.get(endpoint, params={'query': query, 'format': 'json'}).json() get_sparql_dataframe(endpoint, query).head()
23-bnf.ipynb
loujine/musicbrainz-dataviz
mit
beaad35a15b1045f5753668a39eac250
Data BNF http://data.bnf.fr/fr/opendata
endpoint = 'http://data.bnf.fr/sparql' query = """ SELECT ?artist ?name ?bdate ?ddate ?wdurl ?mburl WHERE { ?artist isni:identifierValid "0000000108935378" . ?artist owl:sameAs ?wdurl . FILTER (regex (?wdurl, "wikidata.org")) ?artist owl:sameAs ?mburl . FILTER (regex (?mburl, "musicbrainz.org")) . OPTIONAL { ?artist bio:birth ?bdate . ?artist bio:death ?ddate . ?artist foaf:name ?name } } """ get_sparql_dataframe(endpoint, query).head() query = """ SELECT DISTINCT ?predicate ?val WHERE { <http://data.bnf.fr/ark:/12148/cb13894801b> ?predicate ?val } """ get_sparql_dataframe(endpoint, query).head() query = """ SELECT ?artist ?name ?isni WHERE { ?artist foaf:name "Emilʹ Grigorʹevič Gilelʹs" ; foaf:name ?name . #?artist isni:identifierValid ?isni } """ get_sparql_dataframe(endpoint, query).head() http://data.bnf.fr/sparql?default-graph-uri=&query=PREFIX+foaf%3A+%3Chttp%3A%2F%2Fxmlns.com%2Ffoaf%2F0.1%2F%3E%0D%0APREFIX+rdarelationships%3A+%3Chttp%3A%2F%2Frdvocab.info%2FRDARelationshipsWEMI%2F%3E%0D%0APREFIX+dcterms%3A+%3Chttp%3A%2F%2Fpurl.org%2Fdc%2Fterms%2F%3E%0D%0ASELECT+DISTINCT+%3Fedition+%3Ftitre+%3Fdate+%3Fediteur+%3FURLGallica%0D%0AWHERE+{%0D%0A%3Chttp%3A%2F%2Fdata.bnf.fr%2Fark%3A%2F12148%2Fcb12258414j%3E+foaf%3Afocus+%3Foeuvre.%0D%0A%3Fedition+rdarelationships%3AworkManifested+%3Foeuvre.%0D%0AOPTIONAL+{%0D%0A%3Fedition+dcterms%3Adate+%3Fdate.%0D%0A++}%0D%0AOPTIONAL+{%0D%0A%3Fedition+dcterms%3Atitle+%3Ftitre.+%0D%0A++}%0D%0AOPTIONAL+{%0D%0A%3Fedition+dcterms%3Apublisher+%3Fediteur.%0D%0A++}%0D%0AOPTIONAL+{%0D%0A%3Fedition+rdarelationships%3AelectronicReproduction+%3FURLGallica.%0D%0A++}%0D%0A}&format=application%2Fjson&timeout=0&should-sponge=&debug=on query = """" SELECT DISTINCT ?name ?gender ?nat ?bday ?dday WHERE { ?mbartist foaf:name ?name ; foaf:gender ?gender ; rdagroup2elements:dateOfBirth ?bday ; rdagroup2elements:dateOfDeath ?dday . OPTIONAL { ?mbartist foaf:nationality ?nat } } LIMIT 10 """ get_sparql_dataframe(endpoint, query).head() query = """SELECT ?auteur ?jour ?date1 ?date2 ?nom WHERE { ?auteur foaf:birthday ?jour. ?auteur bio:birth ?date1. ?auteur bio:death ?date2. OPTIONAL { ?auteur foaf:name ?nom. } } ORDER BY (?jour) LIMIT 10 """ get_sparql_dataframe(endpoint, query).head() query = """ PREFIX foaf: <http://xmlns.com/foaf/0.1/> PREFIX bnf-onto: <http://data.bnf.fr/ontology/bnf-onto/> PREFIX owl: <http://www.w3.org/2002/07/owl#> SELECT DISTINCT ?name ?year ?endyear ?url ?wikidata ?gallica ?gender WHERE { <http://data.bnf.fr/ark:/12148/cb13894801b#foaf:Person> foaf:name ?name ; bnf-onto:firstYear ?year ; bnf-onto:lastYear ?endyear ; owl:sameAs ?url ; foaf:page ?wikidata ; foaf:depiction ?gallica ; foaf:gender ?gender . } """ get_sparql_dataframe(endpoint, query).head()
23-bnf.ipynb
loujine/musicbrainz-dataviz
mit
ef5990b4cd154e33d317f8ccc81f10f4
Working with Data in DataFrames (Transform)
import os  # needed for os.path.join below
import re
import pandas as pd

# DATA_DIR is assumed to be defined in an earlier cell of the notebook.
# pd.DataFrame.from_csv is the old pandas API; pd.read_csv(..., index_col=...) is the modern equivalent.
LATEST_DISH_DATA_DF = pd.DataFrame.from_csv(os.path.join(DATA_DIR, 'Dish.csv'), index_col='id')
LATEST_ITEM_DATA_DF = pd.DataFrame.from_csv(os.path.join(DATA_DIR, 'MenuItem.csv'), index_col='dish_id')
LATEST_PAGE_DATA_DF = pd.DataFrame.from_csv(os.path.join(DATA_DIR, 'MenuPage.csv'), index_col='id')
LATEST_MENU_DATA_DF = pd.DataFrame.from_csv(os.path.join(DATA_DIR, 'Menu.csv'), index_col='id')
4-intro-to-pandas.ipynb
digital-humanities-data-curation/hilt2015
mit
eeba48630d0b57f74ee4fd1736cb2ce7
Dish.csv
NULL_APPEARANCES = LATEST_DISH_DATA_DF[LATEST_DISH_DATA_DF.times_appeared == 0] print('Data set contains {0} dishes that appear 0 times …'.format( len(NULL_APPEARANCES)) ) NON_NULL_DISH_DATA_DF = LATEST_DISH_DATA_DF[LATEST_DISH_DATA_DF.times_appeared != 0] discarded_columns = [n for n in NON_NULL_DISH_DATA_DF.columns if n not in ['name', 'menus_appeared', 'times_appeared']] print('Discarding columns from Dish.csv …') for discard in discarded_columns: print('{0} … removed'.format(discard)) TRIMMED_DISH_DATA_DF = NON_NULL_DISH_DATA_DF[['name', 'menus_appeared', 'times_appeared']] print('Dish.csv contains {0} potentially-unique dish names before any normalization'. format(TRIMMED_DISH_DATA_DF.name.nunique())) def normalize_names(obj): ''' Take a name as a string, converts the string to lowercase, strips whitespace from beginning and end, normalizes multiple internal whitespace characters to a single space. E.g.: normalize_names('Chicken gumbo ') = 'chicken gumbo' ''' tokens = obj.strip().lower().split() result = ' '.join(filter(None, tokens)) return result TRIMMED_DISH_DATA_DF['normalized_name'] = TRIMMED_DISH_DATA_DF.name.map(normalize_names) print( 'Dish.csv contains {0} potentially-unique dish names after normalizing whitespace and punctuation' .format(TRIMMED_DISH_DATA_DF.normalized_name.nunique()) ) def fingerprint(obj): """ A modified version of the fingerprint clustering algorithm implemented by Open Refine. See https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth This does not normalize to ASCII characters since diacritics may be significant in this dataset """ alphanumeric_tokens = filter(None, re.split('\W', obj)) seen = set() seen_add = seen.add deduped = sorted([i for i in alphanumeric_tokens if i not in seen and not seen_add(i)]) fingerprint = ' '.join(deduped) return fingerprint TRIMMED_DISH_DATA_DF['fingerprint'] = TRIMMED_DISH_DATA_DF.normalized_name.map(fingerprint) print( 'Dish.csv contains {0} unique fingerprint values' .format(TRIMMED_DISH_DATA_DF.fingerprint.nunique()) ) TRIMMED_DISH_DATA_DF.head()
4-intro-to-pandas.ipynb
digital-humanities-data-curation/hilt2015
mit
02eba258a119fd655e0a0f0cd9fd9276
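A tiny illustration of what the two normalization helpers defined above do, as a sketch (it assumes the cell above has been run so that normalize_names and fingerprint are in scope):

```python
print(normalize_names('  Chicken   Gumbo '))   # -> 'chicken gumbo' (lowercased, whitespace collapsed)
print(fingerprint('gumbo chicken gumbo'))      # -> 'chicken gumbo' (tokens deduplicated and sorted)
```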
MenuItem.csv
discarded_columns2 = [n for n in LATEST_ITEM_DATA_DF.columns if n not in ['id', 'menu_page_id', 'xpos', 'ypos']] print('Discarding columns from MenuItem.csv …') for discard2 in discarded_columns2: print('{0} … removed'.format(discard2)) TRIMMED_ITEM_DATA_DF = LATEST_ITEM_DATA_DF[['id', 'menu_page_id', 'xpos', 'ypos']] TRIMMED_ITEM_DATA_DF.head()
4-intro-to-pandas.ipynb
digital-humanities-data-curation/hilt2015
mit
45b1e4bb22098d6fb5b6fdaf9f8c0f9e
MenuPage.csv
LATEST_PAGE_DATA_DF.head() LATEST_PAGE_DATA_DF[['full_height', 'full_width']].astype(int, raise_on_error=False)
4-intro-to-pandas.ipynb
digital-humanities-data-curation/hilt2015
mit
9fd903c09d43752a6c3614daa71952af
Menu.csv
LATEST_MENU_DATA_DF.columns discarded_columns3 = [n for n in LATEST_MENU_DATA_DF.columns if n not in ['sponsor', 'location', 'date', 'page_count', 'dish_count']] pipeline_logger.info('Discarding columns from Menu.csv …') for discard3 in discarded_columns3: pipeline_logger.info('{0} … removed'.format(discard3)) TRIMMED_MENU_DATA_DF = LATEST_MENU_DATA_DF[['sponsor', 'location', 'date', 'page_count', 'dish_count']] TRIMMED_MENU_DATA_DF.head()
4-intro-to-pandas.ipynb
digital-humanities-data-curation/hilt2015
mit
463bd13c45f52bfe5077cf2871ebe83f
Merging DataFrames
MERGED_ITEM_PAGES_DF = pd.merge(TRIMMED_ITEM_DATA_DF, LATEST_PAGE_DATA_DF, left_on='menu_page_id', right_index=True, ) MERGED_ITEM_PAGES_DF.columns = ['item_id', 'menu_page_id', 'xpos', 'ypos', 'menu_id', 'page_number', 'image_id', 'full_height', 'full_width', 'uuid'] #MERGED_ITEM_PAGES_DF.head() MERGED_ITEM_PAGES_MENUS_DF = pd.merge(TRIMMED_MENU_DATA_DF, MERGED_ITEM_PAGES_DF, left_index=True, right_on='menu_id') FULL_MERGE = pd.merge(MERGED_ITEM_PAGES_MENUS_DF, TRIMMED_DISH_DATA_DF, left_index=True, right_index=True) FULL_MERGE.head() FOR_JSON_OUTPUT = FULL_MERGE.reset_index() FOR_JSON_OUTPUT.columns renamed_columns = ['dish_id', 'menu_sponsor', 'menu_location', 'menu_date', 'menu_page_count', 'menu_dish_count', 'item_id', 'menu_page_id', 'item_xpos', 'item_ypos', 'menu_id', 'menu_page_number', 'image_id', 'page_image_full_height', 'page_image_full_width', 'page_image_uuid', 'dish_name', 'dish_menus_appeared', 'dish_times_appeared', 'dish_normalized_name', 'dish_name_fingerprint'] FOR_JSON_OUTPUT.columns = renamed_columns FOR_JSON_OUTPUT[['menu_page_number', 'dish_id', 'item_id', 'menu_page_id', 'menu_id']].astype(int, raise_on_error=False) FOR_JSON_OUTPUT['dish_uri']= FOR_JSON_OUTPUT.dish_id.map(lambda x: 'http://menus.nypl.org/dishes/{0}'.format(int(x))) FOR_JSON_OUTPUT['item_uri']= FOR_JSON_OUTPUT.item_id.map(lambda x: 'http://menus.nypl.org/menu_items/{0}/edit' .format(int(x))) FOR_JSON_OUTPUT['menu_page_uri'] = FOR_JSON_OUTPUT.menu_page_id.map(lambda x: 'http://menus.nypl.org/menu_pages/{0}' .format(int(x))) FOR_JSON_OUTPUT['menu_uri'] = FOR_JSON_OUTPUT.menu_id.map(lambda x:'http://menus.nypl.org/menus/{0}' .format(int(x))) FOR_JSON_OUTPUT.head() print('Generating JSON …') FOR_JSON_OUTPUT.to_json(path_or_buf='../data/nypl_menus/menus_all.json', orient='index', force_ascii=False)
4-intro-to-pandas.ipynb
digital-humanities-data-curation/hilt2015
mit
4f79c2ec79937a894f91f4b0da350dfc
Feature Crosses in BigQuery We'll first explore how to create a feature cross in BigQuery. The cell below will create a dataset called babyweight in your GCP project, if it does not already exist. This dataset will house our tables and models.
from google.cloud import bigquery  # client library import (likely done in an earlier setup cell)

bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("babyweight"))
try:
    bq.create_dataset(dataset)
    print("Dataset created.")
except Exception:
    print("Dataset already exists.")
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
74ccb9e10f4bb3e22d250ed68c1ef061
Create datasets for training and evaluation
%%bigquery CREATE OR REPLACE TABLE babyweight.babyweight_data AS SELECT weight_pounds, CAST(is_male AS STRING) AS is_male, mother_age, CASE WHEN plurality = 1 THEN "Single(1)" WHEN plurality = 2 THEN "Twins(2)" WHEN plurality = 3 THEN "Triplets(3)" WHEN plurality = 4 THEN "Quadruplets(4)" WHEN plurality = 5 THEN "Quintuplets(5)" END AS plurality, gestation_weeks, CAST(mother_race AS STRING) AS mother_race, FARM_FINGERPRINT( CONCAT( CAST(year AS STRING), CAST(month AS STRING) ) ) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 AND weight_pounds > 0 AND mother_age > 0 AND plurality > 0 AND gestation_weeks > 0
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
d2f2641e348117c41e4581f8208bb0e6
Next, we'll create tables in BigQuery that we'll use for training and evaluation. Splitting on a hash of the month keeps all records from the same month in the same split: ABS(MOD(hashmonth, 4)) < 3 selects roughly three quarters of the data for training, and = 3 keeps the remaining quarter for evaluation.
%%bigquery CREATE OR REPLACE TABLE babyweight.babyweight_data_train AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, mother_race FROM babyweight.babyweight_data WHERE ABS(MOD(hashmonth, 4)) < 3 %%bigquery CREATE OR REPLACE TABLE babyweight.babyweight_data_eval AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, mother_race FROM babyweight.babyweight_data WHERE ABS(MOD(hashmonth, 4)) = 3
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
6187d9f0863a43f3652362bd5122be28
Create model in BigQuery
%%bigquery CREATE OR REPLACE MODEL `babyweight.natality_model` OPTIONS (MODEL_TYPE="DNN_REGRESSOR", HIDDEN_UNITS=[64, 32], BATCH_SIZE=32, INPUT_LABEL_COLS=["weight_pounds"], DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT weight_pounds, is_male, plurality, gestation_weeks, mother_age, CAST(mother_race AS string) AS mother_race FROM babyweight.babyweight_data_train
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
4032f64f0b95483c2e1c7efca2b428e7
We can use ML.EVALUATE to determine the root mean square error of our model on the evaluation set.
query = """ SELECT *, SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL `babyweight.natality_model`, ( SELECT weight_pounds, is_male, plurality, gestation_weeks, mother_age, CAST(mother_race AS STRING) AS mother_race FROM babyweight.babyweight_data_eval )) """ df = bq.query(query).to_dataframe() df.head()
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
17ccc3a0a66f2d8ecaa1a485ecbc3f61
Creating a Feature Cross with BQML Next, we'll create a feature cross of the features is_male and plurality. To create a feature cross we apply ML.FEATURE_CROSS to a STRUCT of the two features; the STRUCT clause creates an ordered pair of them. The TRANSFORM clause is used for engineering the features of our model. It allows us to specify all preprocessing during model creation and have those preprocessing steps applied during prediction and evaluation. The remaining features within the TRANSFORM clause (including mother_race, cast as a string) are passed through unchanged.
%%bigquery CREATE OR REPLACE MODEL `babyweight.natality_model_feat_eng` TRANSFORM(weight_pounds, is_male, plurality, gestation_weeks, mother_age, CAST(mother_race AS string) AS mother_race, ML.FEATURE_CROSS( STRUCT( is_male, plurality) ) AS gender_X_plurality) OPTIONS (MODEL_TYPE='linear_reg', INPUT_LABEL_COLS=['weight_pounds'], DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT * FROM babyweight.babyweight_data_train
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
f7b905cffa7ff8c97d3014cdf77f4c44
As before, we compute the root mean square error.
query = """ SELECT *, SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL `babyweight.natality_model_feat_eng`, ( SELECT weight_pounds, is_male, plurality, gestation_weeks, mother_age, CAST(mother_race AS STRING) AS mother_race FROM babyweight.babyweight_data_eval )) """ df = bq.query(query).to_dataframe() df.head()
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
cb8fe46bfab2557e8494e2981ffc22cb
Feature Crosses in Keras Next, we'll see how to implement a feature cross in TensorFlow using feature columns.
import os import tensorflow as tf import datetime from tensorflow import keras from tensorflow.keras import layers from tensorflow import feature_column as fc # Determine CSV, label, and key columns # Create list of string column headers, make sure order matches. CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks", "mother_race"] # Add string name for label column LABEL_COLUMN = "weight_pounds" # Set default values for each CSV column as a list of lists. # Treat is_male and plurality as strings. DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0], ["null"]]
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
4d6173ac79937bacbe8cced63936f49e
Make a dataset of features and label.
def features_and_labels(row_data): """Splits features and labels from feature dictionary. Args: row_data: Dictionary of CSV column names and tensor values. Returns: Dictionary of feature tensors and label tensor. """ label = row_data.pop(LABEL_COLUMN) return row_data, label def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): """Loads dataset using the tf.data API from CSV files. Args: pattern: str, file pattern to glob into list of files. batch_size: int, the number of examples per batch. mode: tf.estimator.ModeKeys to determine if training or evaluating. Returns: `Dataset` object. """ # Make a CSV dataset dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS) # Map dataset to features and label dataset = dataset.map(map_func=features_and_labels) # features, label # Shuffle and repeat for training if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.shuffle(buffer_size=1000).repeat() # Take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(buffer_size=1) return dataset
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
f9a0a345f65fa432e8ee77450cee1c5d
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.keras.layers.Input).
def create_input_layers(): """Creates dictionary of input layers for each feature. Returns: Dictionary of `tf.Keras.layers.Input` layers for each feature. """ inputs = { colname: tf.keras.layers.Input( name=colname, shape=(), dtype="float32") for colname in ["mother_age", "gestation_weeks"]} inputs.update({ colname: tf.keras.layers.Input( name=colname, shape=(), dtype="string") for colname in ["is_male", "plurality", "mother_race"]}) return inputs
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
d230320871c5044c0980c05592253294
Create feature columns for inputs Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male, plurality and mother_race, should be categorical. Remember, only dense feature columns can be inputs to a DNN. The last feature column created in the create_feature_columns function is a feature cross of is_male and plurality. To implement a feature cross in TensorFlow we use tf.feature_column.crossed_column, which takes two arguments: a list of the feature keys to be crossed and the hash bucket size. Crossed features are hashed according to hash_bucket_size, so it should be large enough to accommodate all possible crossed categories. Since the feature is_male can take 3 values (True, False or Unknown) and the feature plurality can take 6 values (Single(1), Twins(2), Triplets(3), Quadruplets(4), Quintuplets(5), Multiple(2+)), we set hash_bucket_size=18. Finally, to use a crossed column in a DNN model, you need to wrap it either in an indicator_column or an embedding_column. In the code below, we use an embedding column and take the embedding dimension to be 2. To create a crossed column from features of numeric type, you can use categorical_column or bucketized_column before passing them to crossed_column.
def categorical_fc(name, values): cat_column = fc.categorical_column_with_vocabulary_list( key=name, vocabulary_list=values) return fc.indicator_column(categorical_column=cat_column) def create_feature_columns(): feature_columns = { colname : fc.numeric_column(key=colname) for colname in ["mother_age", "gestation_weeks"] } feature_columns["is_male"] = categorical_fc( "is_male", ["True", "False", "Unknown"]) feature_columns["plurality"] = categorical_fc( "plurality", ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]) feature_columns["mother_race"] = fc.indicator_column( fc.categorical_column_with_hash_bucket( "mother_race", hash_bucket_size=17, dtype=tf.dtypes.string)) feature_columns["gender_x_plurality"] = fc.embedding_column( fc.crossed_column(["is_male", "plurality"], hash_bucket_size=18), dimension=2) return feature_columns
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
65af1f2bef04c8dcf97c002cb9e8d683
We can double-check the output of create_feature_columns.
feature_columns = create_feature_columns() print("Feature column keys: \n{}\n".format(list(feature_columns.keys()))) print("Feature column values: \n{}\n".format(list(feature_columns.values())))
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
94a966cb92170f5bedefb69e2a0af11c
Define a DNN model Next we define our model. This is regression, so make sure the output layer activation is correct and that the shape is right. We'll create a deep neural network model, similar to the one we used in BigQuery.
def get_model_outputs(inputs):
    # Create two hidden layers of [64, 32], just like in the BQML DNN
    h1 = layers.Dense(64, activation="relu", name="h1")(inputs)
    h2 = layers.Dense(32, activation="relu", name="h2")(h1)

    # Final output is a linear activation because this is regression
    output = layers.Dense(units=1, activation="linear", name="weight")(h2)

    return output


def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
2d66f353671b9e591db24ce41995192f
Finally, we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
def build_dnn_model(): """Builds simple DNN using Keras Functional API. Returns: `tf.keras.models.Model` object. """ # Create input layer inputs = create_input_layers() # Create feature columns feature_columns = create_feature_columns() # The constructor for DenseFeatures takes a list of numeric columns # The Functional API in Keras requires: LayerConstructor()(inputs) dnn_inputs = layers.DenseFeatures( feature_columns=feature_columns.values())(inputs) # Get output of model given inputs output = get_model_outputs(dnn_inputs) # Build model and compile it all together model = tf.keras.models.Model(inputs=inputs, outputs=output) model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"]) return model print("Here is our DNN architecture so far:\n") model = build_dnn_model() print(model.summary()) tf.keras.utils.plot_model( model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
28ed5ca6480679d06d6816b53b86b9c8
Train and evaluate our model We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data.
%%time tf.random.set_seed(33) TRAIN_BATCH_SIZE = 32 NUM_TRAIN_EXAMPLES = 1000 * 5 # training dataset repeats, it'll wrap around NUM_EVALS = 5 # how many times to evaluate # Enough to get a reasonable sample, but not so much that it slows down NUM_EVAL_EXAMPLES = 1000 trainds = load_dataset( pattern="./data/babyweight_train*", batch_size=TRAIN_BATCH_SIZE, mode=tf.estimator.ModeKeys.TRAIN) evalds = load_dataset( pattern="./data/babyweight_eval*", batch_size=1000, mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) logdir = os.path.join( "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir=logdir, histogram_freq=1) history = model.fit( trainds, validation_data=evalds, epochs=NUM_EVALS, steps_per_epoch=steps_per_epoch, callbacks=[tensorboard_callback])
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
91cea7ef1258e9eb6c52b3b4093d2a20
Need for regularization Let's use a high-cardinality feature cross to illustrate the point. In this model, we predict taxi fare in New York City using a feature cross of the (bucketized) pickup and dropoff latitudes and longitudes.
!bq show mlpatterns || bq mk mlpatterns %%bigquery CREATE OR REPLACE TABLE mlpatterns.taxi_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers FROM `nyc-tlc.yellow.trips` # The full dataset has 1+ Billion rows, let's take only 1 out of 1,000 (or 1 Million total) WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 %%bigquery CREATE OR REPLACE MODEL mlpatterns.taxi_noreg TRANSFORM( fare_amount , ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday), 2) AS day_hr , CONCAT( ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff ) OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS SELECT * FROM mlpatterns.taxi_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL mlpatterns.taxi_noreg) %%bigquery CREATE OR REPLACE MODEL mlpatterns.taxi_l2reg TRANSFORM( fare_amount , ML.FEATURE_CROSS(STRUCT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek, CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday), 2) AS day_hr , CONCAT( ML.BUCKETIZE(pickuplon, GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(pickuplat, GENERATE_ARRAY(37, 45, 0.01)), ML.BUCKETIZE(dropofflon, GENERATE_ARRAY(-78, -70, 0.01)), ML.BUCKETIZE(dropofflat, GENERATE_ARRAY(37, 45, 0.01)) ) AS pickup_and_dropoff ) OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg', l2_reg=0.5) AS SELECT * FROM mlpatterns.taxi_data %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL mlpatterns.taxi_l2reg) 100 * (4.814606 - 4.828183)/4.828183
02_data_representation/feature_cross.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
a06b9a4ce6b24a70519c80a61fcca528
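The bare expression at the end of the cell above, 100 * (4.814606 - 4.828183)/4.828183, is a worked comparison of the two RMSE values: (4.814606 − 4.828183)/4.828183 ≈ −0.0028, i.e. about a 0.28% reduction in RMSE, assuming (as the operand order suggests) that 4.8146 is the L2-regularized model's RMSE and 4.8282 the unregularized one.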
Table 1 - Spitzer IRAC/MIPS IC348 catalog
tbl1 = ascii.read("http://iopscience.iop.org/1538-3881/131/3/1574/fulltext/datafile1.txt") tbl1[0:4]
notebooks/Lada2006.ipynb
BrownDwarf/ApJdataFrames
mit
f06ca478035d44f9d2f22e83a4a87581
Table 2 - SED Derived $\alpha_{IRAC}$ and $A_V$ But really... spectral types
tbl2 = ascii.read("http://iopscience.iop.org/1538-3881/131/3/1574/fulltext/datafile2.txt") tbl2[0:4] join_tbls = join(tbl1, tbl2, keys="Seq") print "There are {} rows in tbl1, {} in tbl2, and {} in the joined table.".format(len(tbl1), len(tbl2), len(join_tbls)) join_tbls[0:4]
notebooks/Lada2006.ipynb
BrownDwarf/ApJdataFrames
mit
cd4ea01f2ab13eb36cf742105d368102
Table 3 - Convenient passbands table
names = ["PASSBAND","DATA SYSTEM","REFERENCES","center_wavelength","F_{nu} (Jy)","References"] tbl3 = pd.read_csv("http://iopscience.iop.org/1538-3881/131/3/1574/fulltext/204953.tb3.txt", na_values="\ldots", names = names, sep='\t') tbl3.head()
notebooks/Lada2006.ipynb
BrownDwarf/ApJdataFrames
mit
f2f84e92cec5ddb8bdfdcc6898b3c908
Load data and show loaded variables. The data dictionary contains the full normalised read-count matrices for the training and test files, as well as a list of the respective gene names (either gene symbols or ENSEMBL IDs; specify via the is_Ens option) and a list of cell cycle genes. In addition, labels for training and testing should be provided.
data = load_data(CFG, is_Ens=True, het_only=True, het_onlyCB=False, gene_set='GOCB')#gene_set can be either a list of genes, class_labels = data['class_labels']#['G1','G2M','S']#['T-cells']#d#['Liver']#['early', 'late', 'mid']#data['class_labels']#['G1', 'S','G2M']#['Liver']#[data['class_labels']#['T-cells']##['G1', 'S','G2M']#['T-cells']#['G1', 'S','G2M']# # #or 'all' (all genes), 'GOCB' GO and cyclebase or 'CB' or 'GO' data.keys() print(data['cc_ens'].shape[0], 'Cell cycle genes used for training and prediction') print(data['class_labels'])
py/demo/demo_cyclone.ipynb
PMBio/cyclone
apache-2.0
10b2ac107eb8d3df23f5fa258a259497
The data required to build the model are loaded. Next, we initialise the model.
cyclone = cyclone(data['Y'],row_namesY= data['genes'],cc_geneNames= data['cc_ens'],labels = data['labels'], Y_tst = data['Y_test'], row_namesY_tst = data['genes_tst'], labels_tst = data['labels_tst'])
py/demo/demo_cyclone.ipynb
PMBio/cyclone
apache-2.0
470907699a114c38427e5ebf0385cf93
2. Train model By default, a 10-fold cross-validation is performed on the training data to estimate the generalizability of the gene set for a number of classifiers (PCA based, random forest, logistic regression, lasso and SVM with rbf kernel); then the model is trained on the entire dataset and applied to the test dataset. Once training and testing are completed, a plot of variable importances from the Random Forest method is shown, together with a classification report in terms of precision and recall.
cyclone.trainModel(rftop = 40, cv=10, out_dir = out_dir, do_pca=1, npc=1, is_SVM=0)
py/demo/demo_cyclone.ipynb
PMBio/cyclone
apache-2.0
221c7dd2a63afdd14dd18cf8aaba60ad
3. Plot results Results can be visualised as barplots indicating the distributions of predicted cell-cycle phases for the individual classes/labels in the test data (both in terms of absolute cell numbers and as a relative plot). In addition, a barplot of the cross-validation results as well as cell-cycle-phase-specific ROC curves are shown, to make sure the model performs well in cross-validation.
cyclone.plotHistograms(class_labels = class_labels, out_dir = out_dir, method='GNB', do_h=True) cyclone.plotPerformance(plot_test=False, out_dir =out_dir, method='GNB')
py/demo/demo_cyclone.ipynb
PMBio/cyclone
apache-2.0
25131c9281f4073e737e49701d236864
In addition to the barplots, the confidence of the classifier can be visualised in the form of a scatter plot. By default, a scatter plot for the test data is shown; a scatter plot for the training data can be shown by setting the plot_test argument to False. The scores shown on the x- and y-axes can be chosen using the xaxis and yaxis arguments.
cyclone.plotScatter(plot_test = True, xaxis = 0, yaxis = 2, xlab = 'G1 score', ylab = 'G2M score', class_labels = class_labels, out_dir = out_dir, method='GNB') cyclone.plotScatter(plot_test = False, xaxis = 0, yaxis = 2, xlab = 'G1 score', ylab = 'G2M score', class_labels = ['G1', 'S', 'G2M'], out_dir = out_dir, method='GNB')
py/demo/demo_cyclone.ipynb
PMBio/cyclone
apache-2.0
17c33cf395a0873acc2c37bc0a3dc0ec
Getting and converting the data
path = untar_data(URLs.BIWI_HEAD_POSE) cal = np.genfromtxt(path/'01'/'rgb.cal', skip_footer=6); cal fname = '09/frame_00667_rgb.jpg' def img2txt_name(f): return path/f'{str(f)[:-7]}pose.txt' img = open_image(path/fname) img.show() ctr = np.genfromtxt(img2txt_name(fname), skip_header=3); ctr def convert_biwi(coords): c1 = coords[0] * cal[0][0]/coords[2] + cal[0][2] c2 = coords[1] * cal[1][1]/coords[2] + cal[1][2] return tensor([c2,c1]) def get_ctr(f): ctr = np.genfromtxt(img2txt_name(f), skip_header=3) return convert_biwi(ctr) def get_ip(img,pts): return ImagePoints(FlowField(img.size, pts), scale=True) get_ctr(fname) ctr = get_ctr(fname) img.show(y=get_ip(img, ctr), figsize=(6, 6))
zh-nbs/Lesson3_head_pose.ipynb
fastai/course-v3
apache-2.0
a0c7b236bc297d75658234dbb925679e
Creating a dataset
data = (PointsItemList.from_folder(path) .split_by_valid_func(lambda o: o.parent.name=='13') .label_from_func(get_ctr) .transform(get_transforms(), tfm_y=True, size=(120,160)) .databunch().normalize(imagenet_stats) ) data.show_batch(3, figsize=(9,6))
zh-nbs/Lesson3_head_pose.ipynb
fastai/course-v3
apache-2.0
c2bfb8dff97bf1d3ef3650ee820dec92
Train model
learn = cnn_learner(data, models.resnet34) learn.lr_find() learn.recorder.plot() lr = 2e-2 learn.fit_one_cycle(5, slice(lr)) learn.save('stage-1') learn.load('stage-1'); learn.show_results()
zh-nbs/Lesson3_head_pose.ipynb
fastai/course-v3
apache-2.0
eb5881c8a121885b0342a9a8e011f4eb
Data augmentation
tfms = get_transforms(max_rotate=20, max_zoom=1.5, max_lighting=0.5, max_warp=0.4, p_affine=1., p_lighting=1.) data = (PointsItemList.from_folder(path) .split_by_valid_func(lambda o: o.parent.name=='13') .label_from_func(get_ctr) .transform(tfms, tfm_y=True, size=(120,160)) .databunch().normalize(imagenet_stats) ) def _plot(i,j,ax): x,y = data.train_ds[0] x.show(ax, y=y) plot_multi(_plot, 3, 3, figsize=(8,6))
zh-nbs/Lesson3_head_pose.ipynb
fastai/course-v3
apache-2.0
bb170887045828f7902189ce84ea2c53
QUIZ QUESTION Also, using this value of L1 penalty, how many nonzero weights do you have?
non_zero_weight_test = model_test["coefficients"][model_test["coefficients"]["value"] > 0] print model_test["coefficients"]["value"].nnz() non_zero_weight_test.print_rows(num_rows=20)
machine_learning/2_regression/assignment/week5/week-5-lasso-assignment-1-exercise.ipynb
tuanavu/coursera-university-of-washington
mit
0d8932d4e1528700198f4ddf5fe9099b
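Note that the boolean filter in the cell above keeps only coefficients with value > 0; L1 regularization can also leave negative coefficients, while .nnz() counts everything that is nonzero. A sketch reusing the same SFrame filtering pattern as above (whether it changes the displayed rows depends on the fitted coefficients):

```python
# Hypothetical variant: keep all nonzero coefficients, negative ones included
non_zero_weight_test = model_test["coefficients"][model_test["coefficients"]["value"] != 0]
non_zero_weight_test.print_rows(num_rows=20)
```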
Exercises - Loops and Conditionals - Solution
# Exercício 1 - Crie uma estrutura que pergunte ao usuário qual o dia da semana. Se o dia for igual a Domingo ou # igual a sábado, imprima na tela "Hoje é dia de descanso", caso contrário imprima na tela "Você precisa trabalhar!" dia = input('Digite o dia da semana: ') if dia == 'Domingo' or dia == 'Sábado': print("Hoje é dia de descanso") else: print("Você precisa trabalhar!") # Exercício 2 - Crie uma lista de 5 frutas e verifique se a fruta 'Morango' faz parte da lista lista = ['Laranja', 'Maça', 'Abacaxi', 'Uva', 'Morango'] for fruta in lista: if fruta == 'Morango': print("Morango faz parte da lista de frutas") # Exercício 3 - Crie uma tupla de 4 elementos, multiplique cada elemento da tupla por 2 e guarde os resultados em uma # lista tup1 = (1, 2, 3, 4) lst1 = [] for i in tup1: novo_valor = i * 2 lst1.append(novo_valor) print(lst1) # Exercício 4 - Crie uma sequência de números pares entre 100 e 150 e imprima na tela for i in range(100, 151, 2): print(i) # Exercício 5 - Crie uma variável chamada temperatura e atribua o valor 40. Enquanto temperatura for maior que 35, # imprima as temperaturas na tela temperatura = 40 while temperatura > 35: print(temperatura) temperatura = temperatura - 1 # Exercício 6 - Crie uma variável chamada contador = 0. Enquanto counter for menor que 100, imprima os valores na tela, # mas quando for encontrado o valor 23, interrompa a execução do programa contador = 0 while contador < 100: if contador == 23: break print(contador) contador += 1 # Exercício 7 - Crie uma lista vazia e uma variável com valor 4. Enquanto o valor da variável for menor ou igual a 20, # adicione à lista, apenas os valores pares e imprima a lista numeros = list() i = 4 while (i <= 20): numeros.append(i) i = i+2 print(numeros) # Exercício 8 - Transforme o resultado desta função range em uma lista: range(5, 45, 2) nums = range(5, 45, 2) print(list(nums)) # Exercício 9 - Faça a correção dos erros no código abaixo e execute o programa. Dica: são 3 erros. temperatura = float(input('Qual a temperatura? ')) if temperatura > 30: print('Vista roupas leves.') else: print('Busque seus casacos.') # Exercício 10 - Faça um programa que conte quantas vezes a letra "r" aparece na frase abaixo. Use um placeholder na # sua instrução de impressão # “É melhor, muito melhor, contentar-se com a realidade; se ela não é tão brilhante como os sonhos, tem pelo menos a # vantagem de existir.” (Machado de Assis) frase = "É melhor, muito melhor, contentar-se com a realidade; se ela não é tão brilhante como os sonhos, tem pelo menos a vantagem de existir." count = 0 for caracter in frase: if caracter == 'r': count += 1 print("O caracter r aparece %s vezes na frase." %(count))
Cap03/Notebooks/DSA-Python-Cap03-Exercicios-Loops-Condiconais-Solucao.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
6d8d8b7c0146e8e2e45131ed6f66ab11
Recommender using MLlib Training the recommendation model
ratings = data.map(lambda l: l.split()).map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2]))).cache() ratings.take(3) nratings = ratings.count() nUsers = ratings.keys().distinct().count() nMovies = ratings.values().distinct().count() print "We have Got %d ratings from %d users on %d movies." % (nratings, nUsers, nMovies) # Build the recommendation model using Alternating Least Squares #Train a matrix factorization model given an RDD of ratings given by users to items, in the form of #(userID, itemID, rating) pairs. We approximate the ratings matrix as the product of two lower-rank matrices #of a given rank (number of features). To solve for these features, we run a given number of iterations of ALS. #The level of parallelism is determined automatically based on the number of partitions in ratings. start = time() seed = 5L iterations = 10 rank = 8 model = ALS.train(ratings, rank, iterations) duration = time() - start print "Model trained in %s seconds" % round(duration,3)
Final/DATA643_pySpark_Final_Project.ipynb
psumank/DATA643
mit
0668329ca1a399b631ddeaddf99f68f9
Note: APT is supposed to automatically log the results to the output directory. Until then, do it manually:
if rerun_apt: # Save fit results as json with open(os.path.join(log_dir, "results_fit.json"), "w") as f: json.dump(results_fit, f, indent=2) # Also necessary information (can be migrated either to CAVE or (preferably) to autopytorch) with open(os.path.join(log_dir, 'configspace.json'), 'w') as f: f.write(pcs_json.write(autopytorch.get_hyperparameter_search_space(X_train=X_train, Y_train=Y_train))) with open(os.path.join(log_dir, 'autonet_config.json'), 'w') as f: json.dump(autopytorch.get_current_autonet_config(), f, indent=2)
examples/autopytorch/apt_notebook.ipynb
automl/SpySMAC
bsd-3-clause
7153dda5f98b1e28ff60c64a2fa19c3c
Next, spin up CAVE and pass along the output directory.
from cave.cavefacade import CAVE cave_output_dir = "cave_output" cave = CAVE([log_dir], # List of folders holding results cave_output_dir, # Output directory ['.'], # Target Algorithm Directory (only relevant for SMAC) file_format="APT", verbose="DEBUG") cave.apt_overview() cave.compare_default_incumbent()
examples/autopytorch/apt_notebook.ipynb
automl/SpySMAC
bsd-3-clause
fd9fddaa81146a13a645a356ad3d9478
Other analyzers also run on the APT-data:
cave.apt_tensorboard()
examples/autopytorch/apt_notebook.ipynb
automl/SpySMAC
bsd-3-clause
5f2a62783d1c5d6f038ae80f2f3294e8
f, L, w = Function('f'), Function('L'), Function('w') # abstract, Lagrangian and path functions, respectively t, q, q_point = symbols(r't q \dot{q}') # symbols for the Leibniz notation Lagrangian_eq = Eq(Derivative(Derivative(L(t, q, q_point),q_point,evaluate=False), t, evaluate=False) - L(t,q,q_point).diff(q),0) Lagrangian_eq # compact Lagrangian equation, implicit indeed Lagrangian_eq = Eq(Derivative(Subs(L(t, q, q_point).diff(q_point), [q,q_point], [w(t),w(t).diff(t)]),t) - Subs(L(t,q,q_point).diff(q), [q,q_point], [w(t),w(t).diff(t)]),0) Lagrangian_eq Lagrangian_eq.doit() # a complex explosion by automatic computation def diff_positional(f, i): a = IndexedBase('a') der = f.diff(f.args[i]) def D(*args): #return Lambda([a[i] for i in range(len(f.args))], return der.subs({f.args[i]:args[i] for i in range(len(f.args))}, simultaneous=True) return D # function D is a function of the meta-language, not in the object language diff_positional(L(t, q, q_point), 2)(t,w(t),w(t).diff(t)) # :/ def diff_positional(f, i): return Function(r'\partial_{{{}}}{}'.format(i, str(f))) # the \partial is just a symbol, it hasn't meaning (Derivative(diff_positional(L, 2)(t,w(t), w(t).diff(t)), t) - diff_positional(L, 1)(t,w(t), w(t).diff(t))) # :( Lagrangian_eq = Eq(Derivative(L(t,w(t), w(t).diff(t)).fdiff(argindex=3),t, evaluate=False) - L(t,w(t), w(t).diff(t)).fdiff(argindex=2), 0, evaluate=False) Lagrangian_eq # :) , although it doesn't use "positional derivative" operator explicitly def Derivative_eq(f, argindex=1): d = Dummy() args = [d if i+1 == argindex else a for i,a in enumerate(f.args)] lhs = f.fdiff(argindex) rhs = Subs(f.func(*args).diff(d), d, f.args[argindex-1]) return Eq(lhs, rhs, evaluate=False) Derivative_eq(f(t,q,q_point),2) # applicative Derivative operator def Gamma(w): return Lambda([t], (t, w(t), w(t).diff(t))) # state-space function Gamma(w), Gamma(w)(t) Lagrangian_eq = Eq(Derivative(L(*Gamma(w)(t)).fdiff(argindex=3),t, evaluate=False) - L(t,w(t), w(t).diff(t)).fdiff(argindex=2), 0, evaluate=False) Lagrangian_eq class FunctionsComposition(Function): nargs = 2 def _latex(self, sexp): return r' \circ '.join(map(latex, map(lambda a: a.func if isinstance(a, Function) else a, self.args))) def _eval_subs(self, old, new): f, g = self.args if old == f: return new.subs({f.args[0]:g}, simultaneous=True) F_o_w = FunctionsComposition(Function(r'\mathcal{F}')(t), w(t)) F_o_w F_o_w.subs({Function(r'\mathcal{F}')(t):1/(1-t)}) F_o_w.subs({w(t):2*t}) _.subs({Function(r'\mathcal{F}')(t):2*t+1}) FunctionsComposition(L(t, w(t), w(t).diff(t)).fdiff(argindex=3),Gamma(w)) m,v,k,q = symbols('m v k q') Lagrangian_eq.subs({L:Lambda([t,q,v], (m*v**2)/2-(k*q**2)/2)}) # plug in the Lagrangian function eq = _.doit() # do derivatives eq a, omega, phi = symbols(r'a \omega \phi') _.subs({w(t):a*cos(omega*t+phi)}) _.doit().factor() solve(_, omega) dsolve(eq, w(t)) # solve with respect to path function w(t)
fdg/intro.ipynb
massimo-nocentini/on-python
mit
b990c81998a42f40e505653e7990bd9d
Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
# YOUR CODE HERE plt.plot(t,W) plt.xlabel('time') plt.ylabel('Wiener Process') assert True # this is for grading
assignments/assignment03/NumpyEx03.ipynb
joshnsolomon/phys202-2015-work
mit
9a4f652d0c171435d1b9d8adda893bab
Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
# YOUR CODE HERE dW = np.diff(W) mean = dW.mean() standard_deviation = dW.std() mean, standard_deviation assert len(dW)==len(W)-1 assert dW.dtype==np.dtype(float)
assignments/assignment03/NumpyEx03.ipynb
joshnsolomon/phys202-2015-work
mit
973976a597f2040c0290debb14750628
Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation: $$ X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))} $$ Use Numpy ufuncs and no loops in your function.
def geo_brownian(t, W, X0, mu, sigma):
    """Return X(t) for geometric brownian motion with drift mu, volatility sigma."""
    # Matches the stated formula: X(t) = X0 * exp((mu - sigma**2/2)*t + sigma*W(t))
    Xt = X0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
    return Xt

assert True # leave this for grading
assignments/assignment03/NumpyEx03.ipynb
joshnsolomon/phys202-2015-work
mit
c4496e6761537c2bf55180eea185a5df
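A quick sanity check of the formula, as a sketch using only the function defined above: at $t=0$ with $W(0)=0$ the exponent vanishes, so the process starts at $X_0$.

```python
print(geo_brownian(0.0, 0.0, 1.0, 0.5, 0.3))  # expected: 1.0
```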
Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above. Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
# YOUR CODE HERE Xt = geo_brownian(t, W, 1.0, .5, .3) plt.plot(t,Xt) plt.xlabel('time') plt.ylabel('position') assert True # leave this for grading
assignments/assignment03/NumpyEx03.ipynb
joshnsolomon/phys202-2015-work
mit
bdcb355e39d7d042fb85cbc1f995d201
Design of experiment We define the experiment design. The benefit of using a crude Monte-Carlo approach is that the same design can be reused with several contrasts.
mcsp = pygosa.SensitivityDesign(dist=dist, model=model, size=5000)
doc/example_contrast.ipynb
sofianehaddad/gosa
lgpl-3.0
673831c855e85ce920380e9e80fda4f9
Moment of second order Hereafter we define a new contrast class that helps evaluate sensitivities of $\mathbb{E}(Y^2)$. The contrast class should:
- inherit from ContrastSensitivityAnalysis;
- define the contrast method with signature contrast(self, y, t, **kwargs); it should accept **kwargs even if unused;
- define the get_risk_value method with signature get_risk_value(self, data, **kwargs); same remark concerning **kwargs.
class Moment2SA(pygosa.ContrastSensitivityAnalysis): def __init__(self, design): super(Moment2SA, self).__init__(design) # contrast method def contrast(self, y,t, **kwargs): """ Contrast for moment of second order """ return (y*y-t)*(y*y-t) # Define risk function (second order moment) # moments2 = var + mean * mean def get_risk_value(self, data, **kwargs): mu = ot.Sample(data).computeMean() var = ot.Sample(data).computeVariance() return np.array(mu) * np.array(mu) + np.array(var)
doc/example_contrast.ipynb
sofianehaddad/gosa
lgpl-3.0
3a9a1b8d142f321a8ebd51cbe8c8834d
The previous class is a contrast similar to those provided by the module. We can thus easily apply it using the previous design:
sam = Moment2SA(mcsp) factors_m = sam.compute_factors() fig, ax = sam.boxplot() print(factors_m)
doc/example_contrast.ipynb
sofianehaddad/gosa
lgpl-3.0
998e7039526757aedc5ecf5166a893c3
We can iterate over the elements of an array:
tab = ["pommes", "tomates", "fromage", "lait", "sucre"] i = 0 while i < len(tab): print(tab[i]) i = i+1 # Attention, notez la différence avec : j = 0 while j < len(tab): print(j) j = j+1
2015-11-18 - TD12 - Introduction aux tableaux.ipynb
ameliecordier/iutdoua-info_algo2015
cc0-1.0
a8604270f0caa5366ba594f9f08503c7
Exercise 1: Search whether an element is present in an array.
def cherche(tab, elt): i = 0 while i < len(tab): if tab[i] == elt: print("J'ai trouvé !") i = i+1 tableau = ["pommes", "tomates", "fromage", "lait", "sucre"] cherche(tableau, "tomates")
2015-11-18 - TD12 - Introduction aux tableaux.ipynb
ameliecordier/iutdoua-info_algo2015
cc0-1.0
550070c0767ab56a947de80845f32e6d
Data exploration. The data above is a detailed record list and hard to read at a glance, so let's first look at an overview of the data.
data_train.info()
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
54d601873a1614a81bc28ad002e3d5f8
From the overall information, there are 891 passengers in total, of whom 714 have age information; the cabin information is missing for a large fraction of passengers.
data_train.describe()
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
38375affac4fba15bee67624e8585405
From the above we can see that first-class passengers are relatively few (under 25%), the average age is below 30 (a fairly young crowd), and the average number of relatives on board is below 1. The raw numbers are not very intuitive, so let's plot them.
#每个/多个 属性和最后的Survived之间有着什么样的关系 #中文乱码:http://blog.csdn.net/heloowird/article/details/46343519 import matplotlib.pyplot as plt plt.rcParams['font.sans-serif']=['SimHei'] #用来正常显示中文标签 plt.rcParams['axes.unicode_minus']=False #用来正常显示负号 fig = plt.figure() fig.set(alpha=0.2) # 设定图表颜色alpha参数 plt.subplot2grid((2,3),(0,0)) # 在一张大图里分列几个小图 data_train.Survived.value_counts().plot(kind='bar')# 柱状图 plt.title("获救情况 (1为获救)") # 标题 plt.ylabel("人数") plt.subplot2grid((2,3),(0,1)) data_train.Pclass.value_counts().plot(kind="bar") plt.ylabel(u"人数") plt.title(u"乘客等级分布") plt.subplot2grid((2,3),(0,2)) plt.scatter(data_train.Survived, data_train.Age) plt.ylabel(u"年龄") # 设定纵坐标名称 plt.grid(b=True, which='major', axis='y') plt.title(u"按年龄看获救分布 (1为获救)") plt.subplot2grid((2,3),(1,0), colspan=2) data_train.Age[data_train.Pclass == 1].plot(kind='kde') data_train.Age[data_train.Pclass == 2].plot(kind='kde') data_train.Age[data_train.Pclass == 3].plot(kind='kde') plt.xlabel(u"年龄")# plots an axis lable plt.ylabel(u"密度") plt.title(u"各等级的乘客年龄分布") plt.legend((u'头等舱', u'2等舱',u'3等舱'),loc='best') # sets our legend for our graph. plt.subplot2grid((2,3),(1,2)) data_train.Embarked.value_counts().plot(kind='bar') plt.title(u"各登船口岸上船人数") plt.ylabel(u"人数") plt.tight_layout() plt.show()
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
38fdc75640a8bc129eda6b0422347e16
At this point we may already have some hypotheses:
- Cabin class / passenger class is probably related to wealth and status, so survival probability may differ between classes.
- Age surely affects survival probability; the first officer reportedly said "women and children first".
- Is the port of embarkation related as well? Perhaps passengers boarding at different ports had different backgrounds and status.
#看看各乘客等级的获救情况 fig = plt.figure() fig.set(alpha=0.2) # 设定图表颜色alpha参数 Survived_0 = data_train.Pclass[data_train.Survived == 0].value_counts() Survived_1 = data_train.Pclass[data_train.Survived == 1].value_counts() df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0}) df.plot(kind='bar', stacked=True) plt.title(u"各乘客等级的获救情况") plt.xlabel(u"乘客等级") plt.ylabel(u"人数") plt.show()
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
f404db2646ecd725d9ecdaf064c66aeb
First-class passengers clearly had a higher probability of being rescued.
#看看各性别的获救情况 fig = plt.figure() fig.set(alpha=0.2) # 设定图表颜色alpha参数 Survived_m = data_train.Survived[data_train.Sex == 'male'].value_counts() Survived_f = data_train.Survived[data_train.Sex == 'female'].value_counts() df=pd.DataFrame({u'男性':Survived_m, u'女性':Survived_f}) df.plot(kind='bar', stacked=True) plt.title(u"按性别看获救情况") plt.xlabel(u"性别") plt.ylabel(u"人数") plt.show()
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
78b11e2f22472769b69e9d99a1ee5d2d
"Lady first" was clearly practiced. Sex should undoubtedly be included as an important feature in the final model.
#然后我们再来看看各种舱级别情况下各性别的获救情况 fig=plt.figure() fig.set(alpha=0.65) # 设置图像透明度,无所谓 plt.title(u"根据舱等级和性别的获救情况") ax1=fig.add_subplot(141) data_train.Survived[data_train.Sex == 'female'][data_train.Pclass != 3].value_counts().plot(kind='bar', label="female highclass", color='#FA2479') ax1.set_xticklabels([u"获救", u"未获救"], rotation=0) ax1.legend([u"女性/高级舱"], loc='best') ax2=fig.add_subplot(142, sharey=ax1) data_train.Survived[data_train.Sex == 'female'][data_train.Pclass == 3].value_counts().plot(kind='bar', label='female, low class', color='pink') ax2.set_xticklabels([u"未获救", u"获救"], rotation=0) plt.legend([u"女性/低级舱"], loc='best') ax3=fig.add_subplot(143, sharey=ax1) data_train.Survived[data_train.Sex == 'male'][data_train.Pclass != 3].value_counts().plot(kind='bar', label='male, high class',color='lightblue') ax3.set_xticklabels([u"未获救", u"获救"], rotation=0) plt.legend([u"男性/高级舱"], loc='best') ax4=fig.add_subplot(144, sharey=ax1) data_train.Survived[data_train.Sex == 'male'][data_train.Pclass == 3].value_counts().plot(kind='bar', label='male low class', color='steelblue') ax4.set_xticklabels([u"未获救", u"获救"], rotation=0) plt.legend([u"男性/低级舱"], loc='best') plt.tight_layout() plt.show() #看看各登船港口获救情况 fig = plt.figure() fig.set(alpha=0.2) # 设定图表颜色alpha参数 Survived_0 = data_train.Embarked[data_train.Survived == 0].value_counts() Survived_1 = data_train.Embarked[data_train.Survived == 1].value_counts() df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0}) df.plot(kind='bar', stacked=True) plt.title(u"各登船港口的获救情况") plt.xlabel(u"登船港口") plt.ylabel(u"人数") plt.show() #看看堂兄妹个数的获救情况 fig = plt.figure() fig.set(alpha=0.2) # 设定图表颜色alpha参数 Survived_0 = data_train.SibSp[data_train.Survived == 0].value_counts() Survived_1 = data_train.SibSp[data_train.Survived == 1].value_counts() df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0}) df.plot(kind='bar', stacked=True) plt.title(u"堂兄妹的获救情况") plt.xlabel(u"堂兄妹数") plt.ylabel(u"人数") plt.show() #看看父母孩子数的获救情况 fig = plt.figure() fig.set(alpha=0.2) # 设定图表颜色alpha参数 Survived_0 = data_train.Parch[data_train.Survived == 0].value_counts() Survived_1 = data_train.Parch[data_train.Survived == 1].value_counts() df=pd.DataFrame({u'获救':Survived_1, u'未获救':Survived_0}) df.plot(kind='bar', stacked=True) plt.title(u"父母孩子数的获救情况") plt.xlabel(u"父母孩子数") plt.ylabel(u"人数") plt.show() #ticket是船票编号,应该是unique的,和最后的结果没有太大的关系,先不纳入考虑的特征范畴把 #cabin只有204个乘客有值,我们先看看它的一个分布 data_train.Cabin.value_counts() fig = plt.figure() fig.set(alpha=0.2) # 设定图表颜色alpha参数 Survived_cabin = data_train.Survived[pd.notnull(data_train.Cabin)].value_counts() Survived_nocabin = data_train.Survived[pd.isnull(data_train.Cabin)].value_counts() df=pd.DataFrame({u'有':Survived_cabin, u'无':Survived_nocabin}).transpose() df.plot(kind='bar', stacked=True) plt.title(u"按Cabin有无看获救情况") plt.xlabel(u"Cabin有无") plt.ylabel(u"人数") plt.show()
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
1172a0f833a1e06b9df85773992a993c
Data preprocessing: handling missing values. There is quite a bit of subtlety here; if you have good practical experience, please share it with me. In my limited experience I usually handle it case by case:

- If missing values make up a very small fraction of the data, simply fill in the mean or the mode.
- If the fraction of missing values is neither very small nor very large, consider the relationship with other features: if it is clear, fill in values based on those features, or build a simple model such as linear regression or a random forest.
- If the fraction of missing values is large, treat "missing" as a special category of its own and fill in a separate sentinel value.

A hedged sketch of the two simpler strategies follows right below; the cell after that uses scikit-learn's RandomForest to fit the missing Age values.
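Purely as a hedged illustration of the first and third strategies in their simplest pandas form (the column names follow the Titanic dataframe above, the sentinel value 'Unknown' is an arbitrary illustrative choice, and this sketch is not part of the pipeline below):

# Hedged sketch, illustrating strategies 1 and 3; not run as part of the pipeline below.
# Strategy 1: tiny fraction missing, fill with the mean (numeric) or the mode (categorical)
data_train['Embarked'] = data_train['Embarked'].fillna(data_train['Embarked'].mode()[0])
# Strategy 3: large fraction missing, treat "missing" as its own category / sentinel value
data_train['Cabin'] = data_train['Cabin'].fillna('Unknown')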
from sklearn.ensemble import RandomForestRegressor

### Fill in the missing Age values with a RandomForestRegressor
def set_missing_ages(df):
    # Take the existing numerical features and feed them to the Random Forest Regressor
    age_df = df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
    # Split passengers into those with a known age and those with an unknown age
    known_age = age_df[age_df.Age.notnull()].as_matrix()
    unknown_age = age_df[age_df.Age.isnull()].as_matrix()
    # y is the target: age
    y = known_age[:, 0]
    # X is the feature matrix
    X = known_age[:, 1:]
    # Fit a RandomForestRegressor
    rfr = RandomForestRegressor(random_state=0, n_estimators=2000, n_jobs=-1)
    rfr.fit(X, y)
    # Predict the unknown ages with the fitted model
    predictedAges = rfr.predict(unknown_age[:, 1:])
    # Fill the missing entries with the predictions
    df.loc[(df.Age.isnull()), 'Age'] = predictedAges
    return df, rfr

def set_Cabin_type(df):
    df.loc[(df.Cabin.notnull()), 'Cabin'] = "Yes"
    df.loc[(df.Cabin.isnull()), 'Cabin'] = "No"
    return df

data_train, rfr = set_missing_ages(data_train)
data_train = set_Cabin_type(data_train)
print(data_train.head())

# Logistic regression needs numerical inputs, so we one-hot encode (dummify) the categorical features first
dummies_Cabin = pd.get_dummies(data_train['Cabin'], prefix='Cabin')
dummies_Embarked = pd.get_dummies(data_train['Embarked'], prefix='Embarked')
dummies_Sex = pd.get_dummies(data_train['Sex'], prefix='Sex')
dummies_Pclass = pd.get_dummies(data_train['Pclass'], prefix='Pclass')
df = pd.concat([data_train, dummies_Cabin, dummies_Embarked, dummies_Sex, dummies_Pclass], axis=1)
df.drop(['Pclass', 'Name', 'Sex', 'Ticket', 'Cabin', 'Embarked'], axis=1, inplace=True)
df
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
096529a32c1820f72d642c7ab80cb228
It feels like we are almost there, but hold on: a little more processing is needed. Look closely at the Age and Fare attributes; their numerical ranges differ wildly. If you are familiar with logistic regression and gradient descent, you know that features on very different scales badly hurt convergence speed and may even prevent convergence altogether. So we first standardize these two features with scikit-learn's preprocessing module (what the standardization computes is sketched right below, before the code). See also the notes on feature engineering and data preprocessing (机器学习之特征工程-数据预处理).
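A minimal hedged sketch of what the StandardScaler transformation amounts to, written out by hand with numpy (the helper name is just illustrative; applying it to df['Age'] should closely match the Age_scaled column computed below):

import numpy as np

def standardize(col):
    # Map a numeric column to zero mean and unit variance, as StandardScaler does.
    col = np.asarray(col, dtype=float)
    return (col - col.mean()) / col.std()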
import sklearn.preprocessing as preprocessing

scaler = preprocessing.StandardScaler()
age_scale_param = scaler.fit(df['Age'])
df['Age_scaled'] = age_scale_param.fit_transform(df['Age'])
fare_scale_param = scaler.fit(df['Fare'])
df['Fare_scaled'] = fare_scale_param.fit_transform(df['Fare'])
df

# Fit a logistic regression model
from sklearn import linear_model

# Use a regex to pick out the columns we want
train_df = df.filter(regex='Survived|Age_.*|SibSp|Parch|Fare_.*|Cabin_.*|Embarked_.*|Sex_.*|Pclass_.*')
train_np = train_df.as_matrix()

# y is the Survived column
y = train_np[:, 0]

# X is the feature matrix
X = train_np[:, 1:]

# Fit the LogisticRegression classifier
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
clf.fit(X, y)
clf

pd.DataFrame({"columns": list(train_df.columns)[1:], "coef": list(clf.coef_.T)})
src/ml/kaggle/titanic/titanic.ipynb
jacksu/machine-learning
mit
7e64f954fb11d7abb07c02f620dad641
Download the MNIST data
import os
import urllib

dataset = 'mnist.pkl.gz'

def reporthook(a, b, c):
    print "\rdownloading: %5.1f%%" % (a * b * 100.0 / c),

if not os.path.isfile(dataset):
    origin = "https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz"
    print('Downloading data from %s' % origin)
    urllib.urlretrieve(origin, dataset, reporthook=reporthook)
mnist.ipynb
tjwei/class2016
mit
0386226a3f5072435854067180867df4
Load the training data train_set and the test data test_set
import gzip
import pickle

with gzip.open(dataset, 'rb') as f:
    train_set, valid_set, test_set = pickle.load(f)
mnist.ipynb
tjwei/class2016
mit
950517ef61fc69c06cc28140b027c781
Take a quick look at the MNIST data, using .shape to inspect the shape of each np.array. train_set holds 50,000 records: the first part is 50,000 vectors of length 784, the second part is 50,000 digit labels. test_set holds 10,000 records in the same format.
print "train_set", train_set[0].shape, train_set[1].shape print "valid_set", valid_set[0].shape, valid_set[1].shape print "test_set", test_set[0].shape, test_set[1].shape
mnist.ipynb
tjwei/class2016
mit
60379e39edc58b37ae05de211db293a9
In the first part of the data, each record is a 28x28 image (28*28 = 784). Using reshape to turn the length-784 vector into a 28x28 matrix lets us view it as an image. Below is the image of the first training record.
imshow(train_set[0][0].reshape((28, 28)), cmap="gray")
mnist.ipynb
tjwei/class2016
mit
cabb3776566193113d38bfd790a2a944
Write a helper function to view the images more conveniently. We then look at the first 5 records: 5 images and the 5 digit labels they correspond to.
def show(x, i=[0]):
    plt.figure(i[0])
    imshow(x.reshape((28, 28)), cmap="gray")
    i[0] += 1

for i in range(5):
    print train_set[1][i]
    show(train_set[0][i])
mnist.ipynb
tjwei/class2016
mit
1ecbaa5cf5c75cd47fa15a60379182a6
The complete model is as follows: treat the image as a vector x of length 784, compute Wx + b, then take exp. This gives ten numbers; divide each by their sum. We want the resulting numbers to match the probability that the image shows each digit.

$softmax_i(W x + b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}$

Let us try this on the first record: x is the input, and y is the digit this image corresponds to (in this example y = 5).
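The next few cells compute this step by step. As a compact hedged sketch, the whole forward pass can also be written as one small numpy function (the names W and b refer to the matrices defined earlier; the helper itself is just illustrative):

import numpy as np

def softmax_predict(x, W, b):
    # Return the ten class probabilities softmax(Wx + b) for a single image x.
    scores = np.exp(np.dot(x, W) + b)   # unnormalized scores, shape (10,)
    return scores / scores.sum()        # normalize so the entries sum to 1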
x = train_set[0][0]
y = train_set[1][0]
mnist.ipynb
tjwei/class2016
mit
45e6790367f74a8f8a8ae4a294ec1da9
First compute exp(Wx + b)
Pr = exp(dot(x, W)+b)
Pr.shape
mnist.ipynb
tjwei/class2016
mit
019bfc4da2e991ec48b8b2841452eda3
Then normalize so the total is 1 (so the values can be read as probabilities)
Pr = Pr/Pr.sum()
print Pr
mnist.ipynb
tjwei/class2016
mit
bb82367e7f9c093745c9c46684621563
Since W and b were set randomly, the probabilities computed above are also random. According to them, y=2 is the most likely at 54.5%, while y=5 gets only 24% (not bad, but just luck). To judge the quality of our prediction, we need a way to score the error. We use the following (not the usual squared error but an entropy-based measure, which is easy to differentiate and works well):

$ error = - \log(P(Y=y^{(i)}|x^{(i)}, W,b)) $

This error measure is usually called the error or the loss. The formula may look cryptic, but computing it is actually simple; it is just the expression below.
loss = -log(Pr[y])
loss
mnist.ipynb
tjwei/class2016
mit
15e598a96ad927ab89797a29b25fb766
The current loss of 1.4215 is not too bad; after all we were lucky that a random W and b give the correct answer 24% probability. Still, we want to improve it. We use a method called gradient descent to reduce the loss. The gradient is the direction in which a function increases fastest, so taking a small step in the opposite direction (the direction of fastest descent) should make the function value a little smaller. Remember that our variables are W and b (28*28*10 + 10 parameters in total), so we need the partial derivative of the loss with respect to every parameter in W and b. Fortunately these partial derivatives can be worked out by hand, and the resulting expressions are not complicated. The partial derivative with respect to b is shown below (a derivation sketch comes first).
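A hedged derivation sketch of the expression the next cell implements, using the softmax and loss defined above and writing $\delta_{jy}$ for the indicator that $j$ equals the true label $y$:

$$\frac{\partial\, loss}{\partial b_j} = Pr_j - \delta_{jy}$$

so the gradient with respect to $b$ is just a copy of $Pr$ with 1 subtracted at index $y$.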
gradb = Pr.copy()
gradb[y] -= 1
print gradb
mnist.ipynb
tjwei/class2016
mit
195266b9d2454777f890ff3ff9025256
The partial derivative with respect to W is not hard either (see the sketch below).
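A hedged sketch of the corresponding expression for W, consistent with the gradient for b above:

$$\frac{\partial\, loss}{\partial W_{ij}} = x_i\,(Pr_j - \delta_{jy})$$

that is, the outer product of $x$ with $Pr$, with $x$ subtracted from the column of the true label, which is exactly what the next cell builds.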
print Pr.shape, x.shape, W.shape
gradW = dot(x.reshape(784, 1), Pr.reshape(1, 10))
gradW[:, y] -= x
mnist.ipynb
tjwei/class2016
mit
be46ca134d846c2703fa7c860e843b32
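Between computing the gradients above and re-evaluating the loss below, W and b have to be nudged against the gradient; that update cell is not shown here, so the following is a hedged sketch of what it presumably looks like (the step size is an arbitrary illustrative value, not taken from the source):

learning_rate = 0.5  # illustrative value, not from the original notebook
W -= learning_rate * gradW
b -= learning_rate * gradb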
Compute Pr and the loss once more
Pr = exp(dot(x, W)+b)
Pr = Pr/Pr.sum()
loss = -log(Pr[y])
loss
mnist.ipynb
tjwei/class2016
mit
f795cad02c1650ce4b387544386d382d
This time the loss drops to around 0.0005, a big improvement. Let us apply the same procedure in turn to all 50,000 training records and see what happens.
W = np.random.uniform(low=-1, high=1, size=(28*28, 10))
b = np.random.uniform(low=-1, high=1, size=10)

score = 0
N = 50000*20
d = 0.001
learning_rate = 1e-2
for i in xrange(N):
    if i % 50000 == 0:
        print i, "%5.3f%%" % (score*100)
    x = train_set[0][i%50000]
    y = train_set[1][i%50000]
    Pr = exp(dot(x, W)+b)
    Pr = Pr/Pr.sum()
    loss = -log(Pr[y])
    score *= (1-d)
    if Pr.argmax() == y:
        score += d
    gradb = Pr.copy()
    gradb[y] -= 1
    gradW = dot(x.reshape(784, 1), Pr.reshape(1, 10))
    gradW[:, y] -= x
    W -= learning_rate * gradW
    b -= learning_rate * gradb
mnist.ipynb
tjwei/class2016
mit
e18da5e6681aa1ee387023a86765b15e
The accuracy turns out to be about 92.42%, but that is on the training data, not the test data. Also, training one record at a time is a bit slow. The strength of linear algebra is vectorized computation: if we stack many x as row vectors into one matrix (still called x), then by the rules of matrix multiplication we can compute Wx + b just as before and obtain many results at once (the shapes involved are sketched next). The functions below take many x at a time and compute the predictions and the accuracy in one go.
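As a hedged shape check for the vectorized version (N is a generic batch size):

$$\underbrace{x}_{(N,\,784)}\,\underbrace{W}_{(784,\,10)} + \underbrace{b}_{(10,)} \;\longrightarrow\; (N,\,10)$$

where b is broadcast across the rows, so each row holds the ten scores for one image, and Pr.sum(axis=1, keepdims=True) then normalizes each row separately.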
def compute_Pr(x):
    Pr = exp(dot(x, W)+b)
    return Pr/Pr.sum(axis=1, keepdims=True)

def compute_accuracy(Pr, y):
    return mean(Pr.argmax(axis=1)==y)
mnist.ipynb
tjwei/class2016
mit
869fc3a38fe3205b5e6cd595c158f4d4
Below is the updated training loop; whenever i % 100000 == 0 we also compute the test accuracy and the validation accuracy along the way.
W = np.random.uniform(low=-1, high=1, size=(28*28, 10))
b = np.random.uniform(low=-1, high=1, size=10)

score = 0
N = 50000*100
batch_size = 500
learning_rate = .7
for i in xrange(0, N, batch_size):
    if i % 100000 == 0:
        x, y = test_set[0], test_set[1]
        test_score = compute_accuracy(compute_Pr(x), y)*100
        x, y = valid_set[0], valid_set[1]
        valid_score = compute_accuracy(compute_Pr(x), y)*100
        print i, "%5.2f%%"%test_score, "%5.2f%%"%valid_score
    # Randomly pick a mini-batch of training data
    rndidx = np.random.choice(train_set[0].shape[0], batch_size, replace=False)
    x, y = train_set[0][rndidx], train_set[1][rndidx]
    # Compute all the Pr at once
    Pr = compute_Pr(x)
    # Compute the average gradient over the mini-batch
    gradb = Pr.mean(axis=0) - [(y == k).mean() for k in range(10)]
    gradW = dot(x.T, Pr)
    for j in range(batch_size):
        gradW[:, y[j]] -= x[j]
    gradW /= batch_size
    # Update W and b
    W -= learning_rate * gradW
    b -= learning_rate * gradb
mnist.ipynb
tjwei/class2016
mit
70f4d2a765a307f390f2aedaca5d187e
The final accuracy is about 92% to 93%. Not perfect, but after all this model is just a single matrix.
x, y = test_set[0], test_set[1]
Pr = compute_Pr(x)
test_score = compute_accuracy(Pr, y)*100
x, y = valid_set[0], valid_set[1]
Pr = compute_Pr(x)
valid_score = compute_accuracy(Pr, y)*100
print "test accuracy %5.2f%%"%test_score, "valid accuracy %5.2f%%"%valid_score
x, y = train_set[0], train_set[1]
Pr = compute_Pr(x)
train_score = compute_accuracy(Pr, y)*100
print "train accuracy %5.2f%%"%train_score
mnist.ipynb
tjwei/class2016
mit
c798be4763bfc5b6882a23afe14f1baf
Numbers alone do not give much of a feel, so let us look at how the first ten test records fare. Only one of the first ten is wrong.
x = test_set[0][:10]
y = test_set[1][:10]
Pr = compute_Pr(x)
print Pr.argmax(axis=1)
print y
for i in range(10):
    show(x[i])
mnist.ipynb
tjwei/class2016
mit
14c84de75802c800542bf30aef6c23c0
Look at which of the first one hundred test records are misclassified.
x = test_set[0][:100]
y = test_set[1][:100]
Pr = compute_Pr(x)
y2 = Pr.argmax(axis=1)
for i in range(100):
    if y2[i] != y[i]:
        print y2[i], y[i]
        show(x[i])
mnist.ipynb
tjwei/class2016
mit
9376e32525963487147fa366e0bdecb9
Exercise: Now suppose that instead of observing a lifespan, k, you observe a lightbulb that has operated for 1 year and is still working. Write another version of LightBulb that takes data in this form and performs an update.
# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here
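The solution cells above are intentionally left blank. Purely as a hedged illustration of one possible line of attack (the class and parameter names here are hypothetical and the exponential lifetime model is an assumption, not taken from the source): for a censored observation such as "still working after 1 year", the likelihood is the survival probability P(lifetime > t) rather than the density used for an observed lifespan.

from scipy.stats import expon

class LightBulbStillWorking(object):
    # Hypothetical sketch: hypotheses are candidate rate parameters lam,
    # and the data is an age t at which the bulb is still working.
    def likelihood(self, t, lam):
        # Use the survival function P(lifetime > t) instead of the PDF.
        return expon(scale=1.0 / lam).sf(t)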
code/survival.ipynb
NathanYee/ThinkBayes2
gpl-2.0
5b62f34f1e9987ff2da83aa169738adf
Prepare and shape the data
from pyspark.mllib.recommendation import ALS, Rating
import re

#Remove the header from the RDD
header = retailData.first()
retailData = retailData.filter(lambda line: line != header)

#To produce the ALS model, we need to train it with each individual
#purchase. Each record in the RDD must be the customer id,
#item id, and the rating. In this case, the rating is the quantity
#ordered. MLlib converts these into a sparse, unfactored matrix.
retailData = retailData.map(lambda l: l.split(",")).\
    filter(lambda l: int(l[3]) > 0 and len(re.sub("\D", "", l[1])) != 0 and len(l[6]) != 0).\
    map(lambda l: (int(l[6]), int(re.sub("\D", "", l[1])), int(l[3])))

#Randomly split the data into a testing set and a training set
testRDD, trainRDD = retailData.randomSplit([.2, .8])

trainData = trainRDD.map(lambda l: Rating(l[0], l[1], l[2]))

print trainData.take(2)
print
print testRDD.take(2)
source.ml/jupyterhub.ml/notebooks/zz_old/Spark/Intro/Lab 3 - Machine Learning/IntroToSparkMLlib.ipynb
shareactorIO/pipeline
apache-2.0
120e53a0549c86952e50bd64481df7d9
Build the recommendation model
#Use the training RDD to train a model with Alternating Least Squares
#rank=5          -> 5 columns in the user-feature and product-feature matrices
#iterations=10   -> 10 factorization runs
rank = 5
numIterations = 10
model = ALS.train(trainData, rank, numIterations)

print "The model has been trained"
source.ml/jupyterhub.ml/notebooks/zz_old/Spark/Intro/Lab 3 - Machine Learning/IntroToSparkMLlib.ipynb
shareactorIO/pipeline
apache-2.0
0c67108ddd71b94c3d5d2eb9a4cea1dd
Test the model
#Evaluate the model on the test RDD by using the predictAll function
predict = model.predictAll(testRDD.map(lambda l: (l[0], l[1])))

#Calculate and print the Mean Squared Error
predictions = predict.map(lambda l: ((l[0], l[1]), l[2]))
ratingsAndPredictions = testRDD.map(lambda l: ((l[0], l[1]), l[2])).join(predictions)
ratingsAndPredictions.cache()
print ratingsAndPredictions.take(3)

meanSquaredError = ratingsAndPredictions.map(lambda l: (l[1][0] - l[1][1])**2).mean()
print
print 'Mean squared error = %.4f' % meanSquaredError
source.ml/jupyterhub.ml/notebooks/zz_old/Spark/Intro/Lab 3 - Machine Learning/IntroToSparkMLlib.ipynb
shareactorIO/pipeline
apache-2.0
5144cdafad92d4b5ba6d2cb6e95e30f3
This doesn't give us a very good picture of ranking quality, because the "ratings" here are purchase quantities rather than true preference scores. A better check is to look at some actual recommendations.
recs = model.recommendProducts(15544, 5)
for rec in recs:
    print rec
source.ml/jupyterhub.ml/notebooks/zz_old/Spark/Intro/Lab 3 - Machine Learning/IntroToSparkMLlib.ipynb
shareactorIO/pipeline
apache-2.0
289503067e31650490f8a0fa10546f99
<img src='https://raw.githubusercontent.com/rosswlewis/RecommendationPoT/master/FullFile.png' width="80%" height="80%"></img> This user seems to have purchased a lot of children's gifts and some holiday items. The recommendation engine we created suggested some items along these lines.
#Rating(user=15544, product=84568, rating=193.03195106065823)  #GIRLS ALPHABET IRON ON PATCHES
#Rating(user=15544, product=16033, rating=179.45915040198466)  #MINI HIGHLIGHTER PENS
#Rating(user=15544, product=22266, rating=161.04293255928698)  #EASTER DECORATION HANGING BUNNY
#Rating(user=15544, product=84598, rating=141.00162368678377)  #BOYS ALPHABET IRON ON PATCHES
#Rating(user=15544, product=72803, rating=129.54033486738518)  #ROSE SCENT CANDLE JEWELLED DRAWER
source.ml/jupyterhub.ml/notebooks/zz_old/Spark/Intro/Lab 3 - Machine Learning/IntroToSparkMLlib.ipynb
shareactorIO/pipeline
apache-2.0
5353015cbfe8f1f69e1289ca2532460e
modify the network:

- L1 regularizer
- SGD optimizer
from tfs.core.optimizer import GradientDecentOptimizer
from tfs.core.regularizers import L1

net.optimizer = GradientDecentOptimizer(net)
net.regularizer = L1(net, l1=0.001)
net.build()
net.fit(dataset, batch_size=200, n_epoch=1, max_step=100)
net.save('lenet_epoch_1')
!ls ./
notebook/1.Save-and-load.ipynb
crackhopper/TFS-toolbox
mit
9ea2abba7eb64377a7be294ba2691e82
load the model
from tfs.network import Network

net2 = Network()
net2.load('lenet_epoch_1')
print net2
print net2.optimizer
print net2.initializer
print net2.losser
print 'accuracy', net2.score(dataset.test)
notebook/1.Save-and-load.ipynb
crackhopper/TFS-toolbox
mit
88f3586601c020469e515d800d5711cb
fine-tune the loaded model
net2.fit(dataset, batch_size=200, n_epoch=1, max_step=100)
net2.score(dataset.test)
notebook/1.Save-and-load.ipynb
crackhopper/TFS-toolbox
mit
7d49e4fb4fd804918a362b5de38cbf33
We have also seen another transformation in class: the polynomial transformation. In practice, you would use sklearn's nice PolynomialFeatures. To give you experience implementing your own transformer class, write a bivariate (exactly 2 input features) BiPolyTrans transformer class that, given two features, $W$ and $Z$ of a matrix $X$, calculates all powers up to a given degree. That is for every record (row) $x_i = \begin{bmatrix} w_i & z_i \end{bmatrix}$, $$\phi_{degree}(x_i) = \begin{bmatrix} 1 & w_i & z_i & w_iz_i & w_i^2z_i & w_iz_i^2 & \dots & w_iz_i^{degree-1} & w_i^{degree} & z_i^{degree} \end{bmatrix} $$ If you are worried about efficiency, you may want to make use of Python's itertools. Namely, chain and combinations_with_replacement should be helpful.
from itertools import chain, combinations_with_replacement

class BiPolyTrans(BaseEstimator, TransformerMixin):
    """
    Transforms the data from a n x 2 matrix to a matrix with
    polynomial features up to the specified degree.

    Example Usage
    data = np.array([[1, 2], [3, 4]])
    d3polytrans = BiPolyTrans(2)
    d3polytrans.fit_transform(data) == np.array([[1, 1, 2, 1, 2, 4], [1, 3, 4, 9, 12, 16]])

    Parameters
    ----------
    degree : integer, required
        largest polynomial degree to calculate with the two features
    """
    def __init__(self, degree):
        self.degree = ...

    def fit(self, X, y=None):
        """
        Calculates the number of input and output features
        """
        self.n_input_features = ...
        self.n_output_features = ...
        return self

    def transform(self, X, y=None):
        """
        Transforms the data into polynomial features

        Input
        -----
        X : an n x 2 matrix, required.

        Output
        ------
        A higher-dimensional matrix with polynomial features up to the specified degree
        """
        n_records = ...
        output = np.empty((..., ...), dtype=X.dtype)
        ...
        return(output)

_ = ok.grade('qtransform')
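As a hedged illustration of the itertools hint above (this is only a demo of the building blocks, not a filled-in solution): combinations_with_replacement over the two column indices gives every multiset of factors of a fixed size, and chain (here via its from_iterable variant) strings the sizes 0 through degree together, which matches the set of monomials listed in the expansion.

from itertools import chain, combinations_with_replacement

degree = 3  # illustrative value
combos = chain.from_iterable(
    combinations_with_replacement((0, 1), d) for d in range(degree + 1))
for combo in combos:
    # e.g. () -> 1, (0,) -> w, (0, 1) -> w*z, (0, 0, 1) -> w^2 * z, ...
    print(combo)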
sp17/hw/hw6/hw6.ipynb
DS-100/sp17-materials
gpl-3.0
6622e7463871572a093febf41128019b
She concludes that since this value is very small, sqft and the noise are most likely independent of each other. Is this a reasonable conclusion? Why or why not?

Write your answer here, replacing this text.

Question 2

Centering takes every data point and subtracts the overall mean from it. We can write the transformation function $\phi$ as:

$$\begin{align}\phi(X) &= \left[\begin{array}{c|c|c|c} X_1 - \bar{X}_1 & X_2 - \bar{X}_2 & \dots & X_d - \bar{X}_d \end{array}\right] \\ \phi(y) &= y - \bar{y} \end{align}$$

where $\bar{X}_j$ is the arithmetic mean of the $j^{th}$ column of $X$ and $\bar{y}$ is the average of the responses.

Show that if a bias/intercept term is included in a regression after centering, then it will always be 0. This, of course, means that adding a column of 1s to your design matrix after centering your data might be a little silly.

Hint: You will want to use what we've proved in Question 1a.

Submitting your assignment

Congratulations, you're done with this homework! Run the next cell to submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. After you've done that, make sure you've pushed your changes to Github as well!
_ = ok.submit()
sp17/hw/hw6/hw6.ipynb
DS-100/sp17-materials
gpl-3.0
6baa02e2502447b2337c0246c334b066
Normalization and background removal (= EXAFS extraction)
from larch.xafs import autobk

autobk(feo, kweight=2, rbkg=0.8, e0=7119.0)
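The autobk call above handles the background removal part of the heading; the normalization part is usually done first with a pre-edge subtraction step. A hedged sketch of that step follows (pre_edge is a real larch.xafs function, but applying it here with these defaults is an assumption on my part, not something the source shows):

from larch.xafs import pre_edge

# Normalize the spectrum in the `feo` group before background removal.
# e0 is assumed to match the edge energy used above; other parameters are left at their defaults.
pre_edge(feo, e0=7119.0)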
notebooks/larch.ipynb
maurov/xraysloth
bsd-3-clause
7304ce95c62c90817213c18e0723ba65