Rating distribution in the dataset:
data.rating.value_counts().sort_index().plot.bar()
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
7cb998badd801c81a636f52f22209d1b
Building our first recommender model Preparing data The RecommenderData class provides a set of tools for manipulating the data and preparing it for experimentation. Its input parameters are the data itself (a pandas dataframe) and a mapping of the data fields (column names) to the internal representation: userid, itemid and feedback:
data_model = RecommenderData(data, userid='userid', itemid='movieid', feedback='rating')
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
c0d6ce185b96ff49ae6b3707e262db0a
Verify correct mapping:
data.columns data_model.fields
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
341d341dd36e94708858db53e71aa24f
The RecommenderData class has a number of parameters that control how the data is processed. The defaults are fine to start with:
data_model.get_configuration()
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
452810b1e91dd55c3ecedc0908418ba8
Use the prepare method to split the dataset into two parts: training data and test data.
data_model.prepare()
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
c6f73f964b87c5cc9a27543f1f1ffb75
As the original data may contain gaps in the users' and items' indices, the data preparation process will clean this up: items from the training data will be indexed starting from zero with no gaps, and the result will be stored in:
data_model.index.itemid.head()
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
ec29e86000a8b85a371c521bb254fd2e
Similarly, all userids from both the training and test sets are reindexed and stored in:
data_model.index.userid.training.head() data_model.index.userid.test.head()
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
750f225a9b9bf9f8eac2b5e715dc287d
Internally, only the new indices are used. This ensures consistency across the various methods used by the model. The dataset is split according to the test_fold and test_ratio attributes. By default it uses the first 80% of users for training and the last 20% of users as test data.
data_model.training.head() data_model.training.shape
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
80178d5ec0f6def8bc14a47787f5a3cd
The test data is further split into a testset and an evaluation set (evalset). The testset is used to generate recommendations, which are then evaluated against the evaluation set.
data_model.test.testset.head() data_model.test.testset.shape data_model.test.evalset.head() data_model.test.evalset.shape
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
5518b32576a7879ed36b37ad824ba230
The users in the test and evaluation sets are the same (but these users are not in the training set!). For every test user the evaluation set contains a fixed number of items which are held out from the original test data. The number of holdout items is controlled by the holdout_size parameter. By default it is set to 3:
data_model.holdout_size data_model.test.evalset.groupby('userid').movieid.count().head()
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
4a43f928edbf3dc26d47f24a558708d0
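The configuration attributes mentioned in the last few cells (test_fold, test_ratio, holdout_size) can in principle be adjusted before splitting the data. The snippet below is only a minimal sketch based on the attribute names quoted above; it assumes they are plain settable attributes of RecommenderData, which the text implies but does not show, so treat it as an illustration rather than the verified API.
# Hypothetical reconfiguration sketch: attribute names are taken from the text above,
# their being directly settable is an assumption, not a verified part of the API.
data_model.test_ratio = 0.2    # use 20% of the users as the test fold
data_model.test_fold = 1       # which fold of users to hold out
data_model.holdout_size = 3    # number of held-out items per test user
data_model.prepare()           # re-split the data with this configuration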
Creating a recommender model You can create your own model by subclassing the RecommenderModel class and defining two required methods: self.build() and self.get_recommendations():
class TopMovies(RecommenderModel):
    def build(self):
        self._recommendations = None  # this line is required to ensure consistency in experiments
        itemid = self.data.fields.itemid  # get the name of the column that corresponds to movieid
        # calculate popularity of the movies based on the number of ratings
        item_scores = self.data.training[itemid].value_counts().sort_index().values
        # store it for later use in some attribute
        self.item_scores = item_scores

    def get_recommendations(self):
        userid = self.data.fields.userid  # get the name of the column that corresponds to userid
        # get the number of test users
        # we expect that userid doesn't have gaps in numbering (as it might in the original dataset;
        # the RecommenderData class takes care of that)
        num_users = self.data.test.testset[userid].max() + 1
        # repeat the computed popularity scores in accordance with the number of test users
        scores = np.repeat(self.item_scores[None, :], num_users, axis=0)
        # we got the scores, but what we actually need is items (their ids)
        # we also need only the top-k items, not all of them (for the top-k recommendation task)
        # here's how to get them:
        top_recs = self.get_topk_items(scores)  # leftmost items are those with the highest scores
        return top_recs
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
4f8abc40b458a558a4f07f8284aee387
Note that recommendations generated by this model do not take into account the fact that some of the recommended items may already be present in the test set and thus should not be recommended (they are considered seen by a test user). To fix that, you can use the filter_seen parameter along with the downvote_seen_items method as follows: if self.filter_seen: # prevent seen items from appearing in recommendations itemid = self.data.fields.itemid test_idx = (test_data[userid].values.astype(np.int64), test_data[itemid].values.astype(np.int64)) self.downvote_seen_items(scores, test_idx) With this procedure, "seen" items get the lowest scores and are sorted out. Place this code snippet inside the get_recommendations routine before handing the scores over to get_topk_items (a full variant is sketched below). This will improve the baseline. Alternative way Another way is to define a slice_recommendations method instead of get_recommendations. With slice_recommendations defined, the model scales better when huge datasets are used. The slice_recommendations method processes the test data slice by slice instead of as a whole. A slice is defined by the start and stop parameters (which are simply the userid to start with and the userid to stop at). Slicing the data avoids memory overhead and leads to faster evaluation of models. Slicing is done automatically behind the scenes and you don't have to specify anything else. Another advantage: seen items are automatically sorted out of the recommendations as long as the filter_seen attribute is set to True (it is by default), so it requires fewer lines of code.
class TopMoviesALT(RecommenderModel):
    def build(self):
        # should be the same as in TopMovies
        self._recommendations = None
        itemid = self.data.fields.itemid
        self.item_scores = self.data.training[itemid].value_counts().sort_index().values

    def slice_recommendations(self, test_data, shape, start, stop):
        # the current implementation requires handing the slice data over in a specific format,
        # and the easiest way to get it is via the get_test_matrix method. It also returns
        # the test data in sparse matrix format, but as our recommender model is non-personalized
        # we don't actually need it. See the SVDModel implementation to see when it's useful.
        test_matrix, slice_data = self.get_test_matrix(test_data, shape, (start, stop))
        nusers = stop - start
        scores = np.repeat(self.item_scores[None, :], nusers, axis=0)
        return scores, slice_data
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
d539f49f646dbd433c357d9fb7edce1b
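As a concrete illustration of the filter_seen recipe described above, here is a minimal sketch of the TopMovies baseline with the downvoting step placed inside get_recommendations, right before the scores are handed over to get_topk_items. It only rearranges code already shown in this notebook; the one assumption is that the "seen" interactions at this point come from self.data.test.testset.
class TopMoviesFiltered(TopMovies):
    def get_recommendations(self):
        userid = self.data.fields.userid
        itemid = self.data.fields.itemid
        test_data = self.data.test.testset          # assumed source of "seen" interactions
        num_users = test_data[userid].max() + 1
        scores = np.repeat(self.item_scores[None, :], num_users, axis=0)
        if self.filter_seen:
            # prevent seen items from appearing in recommendations
            test_idx = (test_data[userid].values.astype(np.int64),
                        test_data[itemid].values.astype(np.int64))
            self.downvote_seen_items(scores, test_idx)
        # seen items now carry the lowest scores and drop out of the top-k
        return self.get_topk_items(scores)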
Now everything is set to create an instance of the recommender model and produce recommendations. Generating recommendations:
top = TopMovies(data_model)  # the model takes the recommender data model as its input parameter
top.build()
recs = top.get_recommendations()
recs
recs.shape
top.topk
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
54a5d2877f13fc8a33b9cb0abd18e2db
You can evaluate your model before submitting the results (to ensure that you have improved on the baseline):
top.evaluate()
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
2bd967ada12cc3043ee5a4572a7779fb
Try to change your model to maximize the true_positive score. Submitting your model: After you have created your perfect recsys model, first save your recommendations to a file. Please use your name as the file name (it will be displayed on the leaderboard):
np.savez('your_full_name', recs=recs)
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
098842096b410bad148bebe01f03299f
Now you can upload your results:
import requests files = {'upload': open('your_full_name.npz','rb')} url = "http://isp2017.azurewebsites.net/upload" r = requests.post(url, files=files)
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
228f90221d61df1964c3386c8bb21b67
Verify that the upload was successful:
print r.status_code, r.reason
polara_intro.ipynb
Evfro/RecSys_ISP2017
mit
8f93dca8a193228aa21412f554dca283
TensorFlow 2.0 quickstart for experts <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/advanced"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/es-419/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the official English documentation. If you have suggestions for improving this translation, please send a pull request to the tensorflow/docs repository. To volunteer to write or review community translations, contact the docs@tensorflow.org list. This is a Google Colaboratory notebook. Python programs run directly in your browser, a great way to learn and use TensorFlow. To follow this tutorial, run this notebook in Google Colab by clicking the button at the top of this page. In Colab, connect to a Python runtime: at the top right of the menu bar, select CONNECT. To run all the cells of this notebook: select Runtime > Run all. Download and install the TensorFlow 2.0 package. Import TensorFlow into your program:
import tensorflow as tf from tensorflow.keras.layers import Dense, Flatten, Conv2D from tensorflow.keras import Model
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
ec6f4a4779db1015d3100136a0661fd7
Load and prepare the MNIST dataset.
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
e59dab78722ab9aa126d77e5ee1b7c28
Use tf.data to batch and shuffle the dataset:
train_ds = tf.data.Dataset.from_tensor_slices( (x_train, y_train)).shuffle(10000).batch(32) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
ce2811c7ecdb8058856b401a4883b9cb
Build the tf.keras model using the Keras model subclassing API:
class MyModel(Model):
  def __init__(self):
    super(MyModel, self).__init__()
    self.conv1 = Conv2D(32, 3, activation='relu')
    self.flatten = Flatten()
    self.d1 = Dense(128, activation='relu')
    self.d2 = Dense(10, activation='softmax')

  def call(self, x):
    x = self.conv1(x)
    x = self.flatten(x)
    x = self.d1(x)
    return self.d2(x)

# Create an instance of the model
model = MyModel()
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
a140b2d152dd2b1fbd10263e96a1b90e
Choose an optimizer and a loss function for training your model:
loss_object = tf.keras.losses.SparseCategoricalCrossentropy() optimizer = tf.keras.optimizers.Adam()
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
2a430d8de371df9ff70d16432bc74dcc
Choose metrics to measure the loss and accuracy of the model. These metrics accumulate values over each epoch and then print the overall result.
train_loss = tf.keras.metrics.Mean(name='train_loss') train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy') test_loss = tf.keras.metrics.Mean(name='test_loss') test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
c8c45aae92b06984df1c16278c368776
Use tf.GradientTape to train the model.
@tf.function
def train_step(images, labels):
  with tf.GradientTape() as tape:
    predictions = model(images)
    loss = loss_object(labels, predictions)
  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  train_loss(loss)
  train_accuracy(labels, predictions)
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
e0a24d403261a905c58c2f0ae608155a
Test the model:
@tf.function
def test_step(images, labels):
  predictions = model(images)
  t_loss = loss_object(labels, predictions)

  test_loss(t_loss)
  test_accuracy(labels, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
  for images, labels in train_ds:
    train_step(images, labels)

  for test_images, test_labels in test_ds:
    test_step(test_images, test_labels)

  template = 'Epoch {}, Loss: {}, Accuracy: {}, Test loss: {}, Test accuracy: {}'
  print(template.format(epoch + 1,
                        train_loss.result(),
                        train_accuracy.result() * 100,
                        test_loss.result(),
                        test_accuracy.result() * 100))

  # Reset the metrics for the next epoch
  train_loss.reset_states()
  train_accuracy.reset_states()
  test_loss.reset_states()
  test_accuracy.reset_states()
site/es-419/tutorials/quickstart/advanced.ipynb
tensorflow/docs-l10n
apache-2.0
7f9185075fddf983ec0279cb919e393b
Data: Preparing for the model Importing the raw data
DIR = os.getcwd() + "/../data/" t = pd.read_csv(DIR + 'raw/lending-club-loan-data/loan.csv', low_memory=False) t.head()
notebooks/5-aa-second_model.ipynb
QuinnLee/cs109a-Project
mit
107887fdf72cd04f45b3b52be53fcee9
Cleaning, imputing missing values, feature engineering (some NLP)
t2 = md.clean_data(t) t3 = md.impute_missing(t2) df = md.simple_dataset(t3) # df = md.spelling_mistakes(t3) - skipping for now, so computationally expensive!
notebooks/5-aa-second_model.ipynb
QuinnLee/cs109a-Project
mit
d1e8e302a02c22c22b71389fc545b7ec
Train, test split: Splitting on 2015
df['issue_d'].hist(bins = 50) plt.title('Seasonality in lending') plt.ylabel('Frequency') plt.xlabel('Year') plt.show()
notebooks/5-aa-second_model.ipynb
QuinnLee/cs109a-Project
mit
ab3d7c632a34d50f341d1d9a49513593
We can use past years as predictors of future years. One challenge with this approach is that we confound time-sensitive trends (for example, global economic shocks to interest rates - such as the financial crisis of 2008, or the growth of Lending Club to broader and broader markets of debtors) with differences related to time-insensitive factors (such as a debtor's riskiness). To account for this, we can bundle our training and test sets into the following blocks: - Before 2015: Training set - 2015 to current: Test set
old = df[df['issue_d'] < '2015'] new = df[df['issue_d'] >= '2015'] old.shape, new.shape
notebooks/5-aa-second_model.ipynb
QuinnLee/cs109a-Project
mit
530ce8c3560a98f208ebe4e0f94bb6f3
We'll use the pre-2015 data on interest rates (old) to fit a model and cross-validate it. We'll then use the post-2015 data as a 'wild' dataset to test against. Fitting the model
X = old.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)
y = old['int_rate']
X.shape, y.shape

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape

rfr = RandomForestRegressor(n_estimators=10, max_features='sqrt')
scores = cross_val_score(rfr, X, y, cv=3)
print("Accuracy: {:.2f} (+/- {:.2f})".format(scores.mean(), scores.std() * 2))

X_new = new.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)
y_new = new['int_rate']
new_scores = cross_val_score(rfr, X_new, y_new, cv=3)
print("Accuracy: {:.2f} (+/- {:.2f})".format(new_scores.mean(), new_scores.std() * 2))

# QUINN: Let's just use this - all data
X_total = df.drop(['int_rate', 'issue_d', 'earliest_cr_line', 'grade'], 1)
y_total = df['int_rate']
total_scores = cross_val_score(rfr, X_total, y_total, cv=3)
print("Accuracy: {:.2f} (+/- {:.2f})".format(total_scores.mean(), total_scores.std() * 2))
notebooks/5-aa-second_model.ipynb
QuinnLee/cs109a-Project
mit
efc94a2dcce980f20e90fb7f93b8eba3
Fitting the model We fit the model on all the data, and evaluate feature importances.
rfr.fit(X_total, y_total) fi = [{'importance': x, 'feature': y} for (x, y) in \ sorted(zip(rfr.feature_importances_, X_total.columns))] fi = pd.DataFrame(fi) fi.sort_values(by = 'importance', ascending = False, inplace = True) fi.head() top5 = fi.head() top5.plot(kind = 'bar') plt.xticks(range(5), top5['feature']) plt.title('Feature importances (top 5 features)') plt.ylabel('Relative importance') plt.show()
notebooks/5-aa-second_model.ipynb
QuinnLee/cs109a-Project
mit
0b1f70c235e415073c4206e80cb355dd
Fitting Generalized Linear Mixed-effects Models With Variational Inference <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table>
#@title Install { display-mode: "form" }
TF_Installation = 'System' #@param ['TF Nightly', 'TF Stable', 'System']

if TF_Installation == 'TF Nightly':
  !pip install -q --upgrade tf-nightly
  print('Installation of `tf-nightly` complete.')
elif TF_Installation == 'TF Stable':
  !pip install -q --upgrade tensorflow
  print('Installation of `tensorflow` complete.')
elif TF_Installation == 'System':
  pass
else:
  raise ValueError('Selection Error: Please select a valid installation option.')

#@title Install { display-mode: "form" }
TFP_Installation = "System" #@param ["Nightly", "Stable", "System"]

if TFP_Installation == "Nightly":
  !pip install -q tfp-nightly
  print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
  !pip install -q --upgrade tensorflow-probability
  print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
  pass
else:
  raise ValueError("Selection Error: Please select a valid installation option.")
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
46c120eb212f9005023ec07fbb61682b
Summary In this colab we show how to fit a generalized linear mixed-effects model using variational inference in TensorFlow Probability. Model family Generalized linear mixed-effects models (GLMMs) are similar to generalized linear models (GLMs) except that they incorporate sample-specific noise into the predicted linear response. This is useful in part because it allows rarely seen features to share information with more commonly seen features. As a generative process, a generalized linear mixed-effects model (GLMM) is characterized by:

$$
\begin{align}
\text{for } & r = 1\ldots R: \hspace{2.45cm}\text{# for each random-effect group}\\
&\quad\text{for } c = 1\ldots |C_r|: \hspace{1.3cm}\text{# for each category ("level") of group $r$}\\
&\qquad \beta_{rc} \sim \text{MultivariateNormal}(\text{loc}=0_{D_r}, \text{scale}=\Sigma_r^{1/2})\\
\text{for } & i = 1 \ldots N: \hspace{2.45cm}\text{# for each sample}\\
&\quad \eta_i = \underbrace{\vphantom{\sum_{r=1}^R}x_i^\top\omega}_\text{fixed-effects} + \underbrace{\sum_{r=1}^R z_{r,i}^\top \beta_{r,C_r(i)}}_\text{random-effects}\\
&\quad Y_i|x_i,\omega,\{z_{r,i},\beta_r\}_{r=1}^R \sim \text{Distribution}(\text{mean}=g^{-1}(\eta_i))
\end{align}
$$

where

$$
\begin{align}
R &= \text{number of random-effect groups}\\
|C_r| &= \text{number of categories for group $r$}\\
N &= \text{number of training samples}\\
x_i,\omega &\in \mathbb{R}^{D_0}\\
D_0 &= \text{number of fixed-effects}\\
C_r(i) &= \text{category (under group $r$) of the $i$th sample}\\
z_{r,i} &\in \mathbb{R}^{D_r}\\
D_r &= \text{number of random-effects associated with group $r$}\\
\Sigma_{r} &\in \{S\in\mathbb{R}^{D_r \times D_r} : S \succ 0 \}\\
\eta_i\mapsto g^{-1}(\eta_i) &= \mu_i, \text{ the inverse link function}\\
\text{Distribution} &= \text{some distribution parameterizable solely by its mean}
\end{align}
$$

In other words, every category of each group is associated with a multivariate normal sample $\beta_{rc}$. Although the $\beta_{rc}$ draws are always independent, they are identically distributed only within a group $r$: there is exactly one $\Sigma_r$ for each $r\in\{1,\ldots,R\}$. When affinely combined with a sample's group features ($z_{r,i}$), the result is sample-specific noise on the $i$-th predicted linear response (which is otherwise $x_i^\top\omega$). When we estimate $\{\Sigma_r:r\in\{1,\ldots,R\}\}$, we are essentially estimating how much noise a random-effect group carries, which would otherwise drown out the signal present in $x_i^\top\omega$. There are a variety of options for the $\text{Distribution}$ and the inverse link function $g^{-1}$. Common choices are: $Y_i\sim\text{Normal}(\text{mean}=\eta_i, \text{scale}=\sigma)$, $Y_i\sim\text{Binomial}(\text{mean}=n_i \cdot \text{sigmoid}(\eta_i), \text{total_count}=n_i)$, and $Y_i\sim\text{Poisson}(\text{mean}=\exp(\eta_i))$. For more possibilities, see the tfp.glm module. Variational inference Unfortunately, finding the maximum likelihood estimates of the parameters $\beta,\{\Sigma_r\}_r^R$ entails a non-analytical integral. To circumvent this problem, we instead: (1) define a parameterized family of distributions (the "surrogate density"), denoted $q_{\lambda}$ in the appendix, and
(2) find parameters $\lambda$ so that $q_{\lambda}$ is close to the true target density. The family of distributions will be independent Gaussians of the proper dimensions, and by "close to the target density" we mean "minimizing the Kullback-Leibler divergence". See, for example, Section 2.2 of "Variational Inference: A Review for Statisticians" for a well-written derivation and motivation. In particular, it shows that minimizing the KL divergence is equivalent to maximizing the evidence lower bound (ELBO). Toy problem Gelman et al.'s (2007) "radon dataset" is a dataset sometimes used to demonstrate approaches to regression (e.g., the closely related PyMC3 blog post). The radon dataset contains indoor radon measurements taken throughout the United States. Radon is a naturally occurring radioactive gas that is toxic in high concentrations. For this demo, suppose we are interested in validating the hypothesis that radon levels are higher in households with a basement. We also suspect that radon concentration is related to soil type, i.e., that geography matters. To frame this as an ML problem, we will try to predict log radon levels based on a linear function of the floor on which the reading was taken. We will also use the county as a random effect, thereby accounting for variance due to geography. In other words, we will use a generalized linear mixed-effects model.
%matplotlib inline %config InlineBackend.figure_format = 'retina' import os from six.moves import urllib import matplotlib.pyplot as plt; plt.style.use('ggplot') import numpy as np import pandas as pd import seaborn as sns; sns.set_context('notebook') import tensorflow_datasets as tfds import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_probability as tfp tfd = tfp.distributions tfb = tfp.bijectors
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
c80779f314b47e24e4b8dc924d18fde6
We also quickly check for GPU availability.
if tf.test.gpu_device_name() != '/device:GPU:0':
  print("We'll just use the CPU for this run.")
else:
  print('Huzzah! Found GPU: {}'.format(tf.test.gpu_device_name()))
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
8ebfacb2ebe4e1ea33f28dfe2c8a5e70
Obtain the dataset We load the dataset from TensorFlow Datasets and do some light preprocessing.
def load_and_preprocess_radon_dataset(state='MN'):
  """Load the Radon dataset from TensorFlow Datasets and preprocess it.

  Following the examples in "Bayesian Data Analysis" (Gelman, 2007), we filter
  to Minnesota data and preprocess to obtain the following features:
  - `county`: Name of county in which the measurement was taken.
  - `floor`: Floor of house (0 for basement, 1 for first floor) on which the
    measurement was taken.

  The target variable is `log_radon`, the log of the Radon measurement in the
  house.
  """
  ds = tfds.load('radon', split='train')
  radon_data = tfds.as_dataframe(ds)
  radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)
  df = radon_data[radon_data.state==state.encode()].copy()

  df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)
  # Make county names look nice.
  df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()
  # Remap categories to start from 0 and end at max(category).
  df['county'] = df.county.astype(pd.api.types.CategoricalDtype())
  df['county_code'] = df.county.cat.codes
  # Radon levels are all positive, but log levels are unconstrained
  df['log_radon'] = df['radon'].apply(np.log)

  # Drop columns we won't use and tidy the index
  columns_to_keep = ['log_radon', 'floor', 'county', 'county_code']
  df = df[columns_to_keep].reset_index(drop=True)
  return df

df = load_and_preprocess_radon_dataset()
df.head()
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
9582eb5913889bab377e42956134af31
Specializing the GLMM family In this section, we specialize the GLMM family to the task of predicting radon levels. To do this, we first consider the fixed-effect special case of a GLMM: $$ \mathbb{E}[\log(\text{radon}_j)] = c + \text{floor_effect}_j $$ This model posits that the log radon of observation $j$ is (in expectation) governed by the floor on which the $j$-th reading was taken, plus a constant intercept. In pseudocode, we might write: def estimate_log_radon(floor): return intercept + floor_effect[floor] There is a weight learned for every floor and a universal intercept term. Looking at the radon measurements from floors 0 and 1, it looks like this might be a good start:
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 4)) df.groupby('floor')['log_radon'].plot(kind='density', ax=ax1); ax1.set_xlabel('Measured log(radon)') ax1.legend(title='Floor') df['floor'].value_counts().plot(kind='bar', ax=ax2) ax2.set_xlabel('Floor where radon was measured') ax2.set_ylabel('Count') fig.suptitle("Distribution of log radon and floors in the dataset");
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
d9d99c8e3fb72eb6d5756eb15dc6f98a
It would probably be even better to make the model a bit more sophisticated by including something about geography: radon is part of the decay chain of uranium, which may be present in the ground, so geography seems important to account for. $$ \mathbb{E}[\log(\text{radon}_j)] = c + \text{floor_effect}_j + \text{county_effect}_j $$ Again, in pseudocode: def estimate_log_radon(floor, county): return intercept + floor_effect[floor] + county_effect[county] This is the same as before except with a county-specific weight. Given a sufficiently large training set, this is a reasonable model. However, looking at our data from Minnesota, we see that there is a large number of counties with only a small number of observations. For example, 39 out of 85 counties have fewer than five observations. This motivates sharing statistical strength between all of our observations, in a way that converges to the model above as the number of observations per county increases.
fig, ax = plt.subplots(figsize=(22, 5)); county_freq = df['county'].value_counts() county_freq.plot(kind='bar', ax=ax) ax.set_xlabel('County') ax.set_ylabel('Number of readings');
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
4c342d2460fd1b9846874d47cd0a938e
If we fit this model, the county_effect vector would likely end up memorizing the results for counties with few training samples, perhaps overfitting and generalizing poorly. GLMMs offer a happy middle ground between the two GLMs above. We might consider fitting $$ \log(\text{radon}_j) \sim c + \text{floor_effect}_j + \mathcal{N}(\text{county_effect}_j, \text{county_scale}) $$ This model is the same as the first, but we have fixed the likelihood to be a normal distribution, and we share the variance across all counties via the (single) variable county_scale. In pseudocode: def estimate_log_radon(floor, county): county_mean = county_effect[county] random_effect = np.random.normal() * county_scale + county_mean return intercept + floor_effect[floor] + random_effect We infer the joint distribution over county_scale, county_mean and random_effect from the observed data. The global county_scale allows us to share statistical strength across counties: counties with many observations help pin down the variance for counties with few observations. Furthermore, as we collect more data, this model converges to the model without a pooled scale variable; even with this dataset, either model leads to similar conclusions about the most-observed counties. Experiment We now try to fit the above GLMM using variational inference in TensorFlow. First we split the data into features and labels.
features = df[['county_code', 'floor']].astype(int) labels = df[['log_radon']].astype(np.float32).values.flatten()
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
9d4710b443ce250305a92edf4e0fc0ef
We specify the model.
def make_joint_distribution_coroutine(floor, county, n_counties, n_floors):

  def model():
    county_scale = yield tfd.HalfNormal(scale=1., name='scale_prior')
    intercept = yield tfd.Normal(loc=0., scale=1., name='intercept')
    floor_weight = yield tfd.Normal(loc=0., scale=1., name='floor_weight')
    county_prior = yield tfd.Normal(loc=tf.zeros(n_counties),
                                    scale=county_scale,
                                    name='county_prior')
    random_effect = tf.gather(county_prior, county, axis=-1)

    fixed_effect = intercept + floor_weight * floor
    linear_response = fixed_effect + random_effect
    yield tfd.Normal(loc=linear_response, scale=1., name='likelihood')

  return tfd.JointDistributionCoroutineAutoBatched(model)

joint = make_joint_distribution_coroutine(
    features.floor.values, features.county_code.values, df.county.nunique(),
    df.floor.nunique())

# Define a closure over the joint distribution
# to condition on the observed labels.
def target_log_prob_fn(*args):
  return joint.log_prob(*args, likelihood=labels)
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
aa1c9b246566ee7e25a40bd16f5e1ba3
Specify the surrogate posterior We now put together the surrogate family $q_{\lambda}$, where the parameters $\lambda$ are trainable. In this case, the family consists of independent multivariate normal distributions, one for each parameter, and $\lambda = \{(\mu_j, \sigma_j)\}$, where $j$ indexes the four parameters. The method we use to fit the surrogate family relies on tf.Variables. We also use tfp.util.TransformedVariable together with Softplus to constrain the (trainable) scale parameters to be positive. Additionally, we apply Softplus to the whole scale_prior, which is a positive parameter. We initialize these trainable variables with a bit of jitter to aid optimization.
# Initialize locations and scales randomly with `tf.Variable`s and
# `tfp.util.TransformedVariable`s.
_init_loc = lambda shape=(): tf.Variable(
    tf.random.uniform(shape, minval=-2., maxval=2.))
_init_scale = lambda shape=(): tfp.util.TransformedVariable(
    initial_value=tf.random.uniform(shape, minval=0.01, maxval=1.),
    bijector=tfb.Softplus())

n_counties = df.county.nunique()

surrogate_posterior = tfd.JointDistributionSequentialAutoBatched([
    tfb.Softplus()(tfd.Normal(_init_loc(), _init_scale())),           # scale_prior
    tfd.Normal(_init_loc(), _init_scale()),                           # intercept
    tfd.Normal(_init_loc(), _init_scale()),                           # floor_weight
    tfd.Normal(_init_loc([n_counties]), _init_scale([n_counties]))])  # county_prior
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
7a345d4ae313e7725c4eed36cbeb0313
Note that this cell can be replaced with tfp.experimental.vi.build_factored_surrogate_posterior, as in: python surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior( event_shape=joint.event_shape_tensor()[:-1], constraining_bijectors=[tfb.Softplus(), None, None, None]) Results Recall that the goal is to define a tractable parameterized family of distributions and then pick parameters so that we have a tractable distribution close to the target distribution. Having built the surrogate distribution above, we can use tfp.vi.fit_surrogate_posterior, which accepts an optimizer and a given number of steps, to find the parameters of the surrogate model that minimize the negative ELBO (which corresponds to minimizing the Kullback-Leibler divergence between the surrogate and the target distribution). The return value is the negative ELBO at each step, and the distributions in surrogate_posterior will have been updated with the parameters found by the optimizer.
optimizer = tf.optimizers.Adam(learning_rate=1e-2) losses = tfp.vi.fit_surrogate_posterior( target_log_prob_fn, surrogate_posterior, optimizer=optimizer, num_steps=3000, seed=42, sample_size=2) (scale_prior_, intercept_, floor_weight_, county_weights_), _ = surrogate_posterior.sample_distributions() print(' intercept (mean): ', intercept_.mean()) print(' floor_weight (mean): ', floor_weight_.mean()) print(' scale_prior (approx. mean): ', tf.reduce_mean(scale_prior_.sample(10000))) fig, ax = plt.subplots(figsize=(10, 3)) ax.plot(losses, 'k-') ax.set(xlabel="Iteration", ylabel="Loss (ELBO)", title="Loss during training", ylim=0);
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
ad09094e01c8fc4ccb8d71ffe56217ca
We can plot the estimated mean county effects, along with the uncertainty of each mean. These are ordered by number of observations, largest on the left. Notice that the uncertainty is small for the counties with many observations, but larger for the counties that have only one or two observations.
county_counts = (df.groupby(by=['county', 'county_code'], observed=True)
                   .agg('size')
                   .sort_values(ascending=False)
                   .reset_index(name='count'))

means = county_weights_.mean()
stds = county_weights_.stddev()

fig, ax = plt.subplots(figsize=(20, 5))

for idx, row in county_counts.iterrows():
  mid = means[row.county_code]
  std = stds[row.county_code]
  ax.vlines(idx, mid - std, mid + std, linewidth=3)
  ax.plot(idx, means[row.county_code], 'ko', mfc='w', mew=2, ms=7)

ax.set(
    xticks=np.arange(len(county_counts)),
    xlim=(-1, len(county_counts)),
    ylabel="County effect",
    title=r"Estimates of county effects on log radon levels. (mean $\pm$ 1 std. dev.)",
)
ax.set_xticklabels(county_counts.county, rotation=90);
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
c986c5b293c4d5c6c081f6c6b9c077d4
Indeed, we can see this more directly by plotting the log number of observations against the estimated standard deviation; the relationship is approximately linear.
fig, ax = plt.subplots(figsize=(10, 7)) ax.plot(np.log1p(county_counts['count']), stds.numpy()[county_counts.county_code], 'o') ax.set( ylabel='Posterior std. deviation', xlabel='County log-count', title='Having more observations generally\nlowers estimation uncertainty' );
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
ca8ed1067d6d8701cdb99eebf7b6bb0d
Comparing with lme4 in R
%%shell
exit  # Trick to make this block not execute.

radon = read.csv('srrs2.dat', header = TRUE)
radon = radon[radon$state=='MN',]
radon$radon = ifelse(radon$activity==0., 0.1, radon$activity)
radon$log_radon = log(radon$radon)

# install.packages('lme4')
library(lme4)
fit <- lmer(log_radon ~ 1 + floor + (1 | county), data=radon)
fit

# Linear mixed model fit by REML ['lmerMod']
# Formula: log_radon ~ 1 + floor + (1 | county)
#    Data: radon
# REML criterion at convergence: 2171.305
# Random effects:
#  Groups   Name        Std.Dev.
#  county   (Intercept) 0.3282
#  Residual             0.7556
# Number of obs: 919, groups:  county, 85
# Fixed Effects:
# (Intercept)        floor
#       1.462       -0.693
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
908864de2c8c739aa5513a60a3e46590
The following table summarizes the results.
print(pd.DataFrame(data=dict(intercept=[1.462, tf.reduce_mean(intercept_.mean()).numpy()], floor=[-0.693, tf.reduce_mean(floor_weight_.mean()).numpy()], scale=[0.3282, tf.reduce_mean(scale_prior_.sample(10000)).numpy()]), index=['lme4', 'vi']))
site/ko/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb
tensorflow/docs-l10n
apache-2.0
73d8ed3b551e8afbebbac9e56b68db21
Remove the smaller objects to retrieve the large galaxy using a boolean array, and then use skimage.exposure.histogram and plt.plot to show the light distribution from the galaxy.
%reload_ext load_style %load_style ../themes/tutorial.css
notebooks/3_morphological_operations.ipynb
jni/numpy-skimage-tutorial
bsd-3-clause
1d41a5edbe2e3159a7c6759916e9c4c7
If we call the keys of the pj.alignments dictionary, we can see the names of the alignments it contains:
pj.alignments.keys()
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
361af583135163620c6478d7a840f9fd
3.7.1 Configuring an alignment trimming process Like the sequence alignment phase, alignment trimming has its own configuration class, the TrimalConf class. An object of this class will generate a command line and the required input files for the program TrimAl, but will not execute the process (this is shown below). Once the process has been successfully executed, this TrimalConf object is also stored in pj.used_methods and it can be invoked as a report. 3.7.1.1 Example 1, the default gappyout algorithm With TrimalConf, instead of specifying locus names, we provide alignment names, as they appear in the keys of pj.alignments
gappyout = TrimalConf(pj,                          # The Project
                      method_name='gappyout',      # Any unique string ('gappyout' is the default)
                      program_name='trimal',       # No alternatives in this ReproPhylo version
                      cmd='default',               # The default is trimal. Change it here
                                                   # or in pj.defaults['trimal']
                      alns=['MT-CO1@mafftLinsi'],  # 'all' by default
                      trimal_commands={'gappyout': True}  # By default, the gappyout algorithm is used.
                      )
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
14fe0c17b44976ea48ac66bdcfd651eb
3.7.1.2 List comprehension to subset alignments In this example, it is easy enough to copy and paste alignment names into a list and pass it to TrimalConf. But this is more difficult if we want to fish out a subset of alignments from a very large list of alignments. In such cases, Python's list comprehension is very useful. Below I show two uses of list comprehension, but the more you feel comfortable with this approach, the better. Getting locus names of rRNA loci If you read the code line that follows very carefully, you will see it quite literally says "take the name of each Locus found in pj.loci if its feature type is rRNA, and put it in a list":
rRNA_locus_names = [locus.name for locus in pj.loci if locus.feature_type == 'rRNA'] print rRNA_locus_names
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
dc5c87f4e355d0837a67058830d102e8
what we get is a list of names of our rRNA loci. Getting alignment names that have locus names of rRNA loci The following line says: "take the key of each alignment from the pj.alignments dictionary if the first word before the '@' symbol is in the list of rRNA locus names, and put this key in a list":
rRNA_alignment_names = [key for key in pj.alignments.keys() if key.split('@')[0] in rRNA_locus_names] print rRNA_alignment_names
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
c09b1cb374dcdbde2c23000d38d66670
We get a list of keys, of the rRNA loci alignments we produced on the previous section, and which are stored in the pj.alignments dictionary. We can now pass this list to a new TrimalConf instance that will only process rRNA locus alignments:
gt50 = TrimalConf(pj,
                  method_name='gt50',
                  alns=rRNA_alignment_names,
                  trimal_commands={'gt': 0.5}  # This will keep positions with up to
                                               # 50% gaps.
                  )
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
f48e0f1ad359589eff5adfac8c4e4dba
3.7.2 Executing the alignment trimming process As for the alignment phase, this is done with a Project method, which accepts a list of TrimalConf objects.
pj.trim([gappyout, gt50])
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
d6e99590d578be2ddb55b9df65e66759
Once used, these objects are also placed in the pj.used_methods dictionary, and they can be printed out for observation:
print pj.used_methods['gappyout']
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
4220be384926fb8905650d573310632a
3.7.3 Accessing trimmed sequence alignments 3.7.3.1 The pj.trimmed_alignments dictionary The trimmed alignments themselves are stored in the pj.trimmed_alignments dictionary, using keys that follow this pattern: locus_name@alignment_method_name@trimming_method_name where alignment_method_name is the name you have provided to your AlnConf object and trimming_method_name is the one you provided to your TrimalConf object.
pj.trimmed_alignments
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
74b36721b9103d3dfc8e4d2675b1704f
3.7.3.2 Accessing a MultipleSeqAlignment object A trimmed alignment can be easily accessed and manipulated with any of Biopython's AlignIO tricks using the fta Project method:
print pj.fta('18s@muscleDefault@gt50')[:4,410:420].format('phylip-relaxed')
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
f0ba2ffe3535275d153ab0d5ee4b1c1d
3.7.3.3 Writing trimmed sequence alignment files Trimmed alignment text files can be dumped in any AlignIO format for usage in an external command-line or GUI program. When writing to files, you can control the header of the sequence by, for example, adding the organism name or the gene name, or by replacing the feature ID with the record ID:
# record_id and source_organism are feature qualifiers in the SeqRecord object
# See section 3.4
files = pj.write_trimmed_alns(id=['record_id','source_organism'],
                              format='fasta')
files
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
5d24453810b59879d0f66e635c86670a
The files will always be written to the current working directory (where this notebook file is), and can immediately be moved programmatically to avoid clutter:
# make a new directory for your trimmed alignment files:
if not os.path.exists('trimmed_alignment_files'):
    os.mkdir('trimmed_alignment_files')

# move the files there
for f in files:
    os.rename(f, "./trimmed_alignment_files/%s"%f)
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
4329075bbddd4f473f9bcc786ebbe3d7
3.7.3.4 Viewing trimmed alignments Trimmed alignments can be viewed in the same way as alignments, but using this command:
pj.show_aln('MT-CO1@mafftLinsi@gappyout',id=['source_organism']) pickle_pj(pj, 'outputs/my_project.pkpj')
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
fcb6dbb84ab613aaaf2af57398871a3c
3.7.4 Quick reference
# Make a TrimalConf object
trimconf = TrimalConf(pj, **kwargs)

# Execute the alignment trimming process
pj.trim([trimconf])

# Show the TrimalConf description
print pj.used_methods['method_name']

# Fetch a MultipleSeqAlignment object
trim_aln_obj = pj.fta('locus_name@aln_method_name@trim_method_name')

# Write alignment text files
pj.write_trimmed_alns(id=['some_feature_qualifier'], format='fasta')
# the default feature qualifier is 'feature_id'
# 'fasta' is the default format

# View alignment in browser
pj.show_aln('locus_name@aln_method_name@trim_method_name', id=['some_feature_qualifier'])
notebooks/Tutorials/Basic/3.7 Alignment trimming.ipynb
szitenberg/ReproPhyloVagrant
mit
8e2ecbcb64c83b3571a24ed28973432d
$g(x)\rightarrow 1$ for $x\rightarrow\infty$ $g(x)\rightarrow 0$ for $x\rightarrow -\infty$ $g(0) = 1/2$ Finally, to go from the regression to the classification, we can simply apply the following condition: $$ y=\left\{ \begin{array}{@{}ll@{}} 1, & \text{if}\ h_w(x)\geq 1/2 \\ 0, & \text{otherwise} \end{array}\right. $$ Let's clarify the notation. We have $m$ training samples and $n$ features; our training examples can be represented by an $m$-by-$n$ matrix $\underline{\underline{X}}=(x_{ij})$ ($m$-by-$n+1$, if we include the intercept term) that contains the training examples, $x^{(i)}$, in its rows. The target values of the training set can be represented as an $m$-dimensional vector $\underline{y}$ and the parameters of our model as an $n$-dimensional vector $\underline{w}$ ($n+1$ if we take into account the intercept). Now, for a given training example $x^{(i)}$, the function that we want to learn (or fit) can be written: $$ h_\underline{w}(x^{(i)}) = \frac{1}{1+e^{-\sum_{j=0}^n w_j x_{ij}}} $$
# Simple example:
# we have 20 students that took an exam and we want to know if we can use
# the number of hours they studied to predict if they pass or fail the
# exam

# m = 20 training samples
# n = 1 feature (number of hours)
X = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
              2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])

# 1 = pass, 0 = fail
y = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])

print(X.shape)
print(y.shape)

p = plt.plot(X, y, 'o')
tx = plt.xlabel('x [h]')
ty = plt.ylabel('y ')
03_Introduction_To_Supervised_Machine_Learning.ipynb
SchwaZhao/networkproject1
mit
58d27b1b2fda27065b9614bcbf2d5405
Likelihood of the model How do we find the parameters, also called weights, $\underline{w}$ that best fit our training data? We want to find the weights $\underline{w}$ that maximize the likelihood of observing the target $\underline{y}$ given the observed features $\underline{\underline{X}}$. We need a probabilistic model that gives us the probability of observing the value $y^{(i)}$ given the features $x^{(i)}$. The function $h_\underline{w}(x^{(i)})$ can be used precisely for that: $$ P(y^{(i)}=1|x^{(i)};\underline{w}) = h_\underline{w}(x^{(i)}) $$ $$ P(y^{(i)}=0|x^{(i)};\underline{w}) = 1 - h_\underline{w}(x^{(i)}) $$ We can write this more compactly as: $$ P(y^{(i)}|x^{(i)};\underline{w}) = (h_\underline{w}(x^{(i)}))^{y^{(i)}} ( 1 - h_\underline{w}(x^{(i)}))^{1-y^{(i)}} $$ where $y^{(i)}\in\{0,1\}$. We see that $y^{(i)}$ is a random variable following a Bernoulli distribution with expectation $h_\underline{w}(x^{(i)})$. The likelihood function of a statistical model is defined as: $$ \mathcal{L}(\underline{w}) = \mathcal{L}(\underline{w};\underline{\underline{X}},\underline{y}) = P(\underline{y}|\underline{\underline{X}};\underline{w}). $$ The likelihood takes into account all the $m$ training samples of our training dataset and estimates the likelihood of observing $\underline{y}$ given $\underline{\underline{X}}$ and $\underline{w}$. Assuming that the $m$ training examples were generated independently, we can write: $$ \mathcal{L}(\underline{w}) = P(\underline{y}|\underline{\underline{X}};\underline{w}) = \prod_{i=1}^m P(y^{(i)}|x^{(i)};\underline{w}) = \prod_{i=1}^m (h_\underline{w}(x^{(i)}))^{y^{(i)}} ( 1 - h_\underline{w}(x^{(i)}))^{1-y^{(i)}}. $$ This is the function that we want to maximize. It is usually much simpler to maximize the logarithm of this function, which is equivalent: $$ l(\underline{w}) = \log\mathcal{L}(\underline{w}) = \sum_{i=1}^{m} \left(y^{(i)} \log h_\underline{w}(x^{(i)}) + (1- y^{(i)})\log\left(1- h_\underline{w}(x^{(i)})\right) \right) $$ Loss function and linear models Another way of formulating this problem is by defining a loss function $L\left(y^{(i)}, f(x^{(i)})\right)$ such that: $$ \sum_{i=1}^{m} L\left(y^{(i)}, f(x^{(i)})\right) = - l(\underline{w}). $$ And now the problem consists of minimizing $\sum_{i=1}^{m} L\left(y^{(i)}, f(x^{(i)})\right)$ over all the possible values of $\underline{w}$. Using the definition of $h_\underline{w}(x^{(i)})$ you can show that $L$ can be written as: $$ L\left(y^{(i)}=1, f(x^{(i)})\right) = \log_2\left(1+e^{-f(x^{(i)})}\right) $$ and $$ L\left(y^{(i)}=0, f(x^{(i)})\right) = \log_2\left(1+e^{-f(x^{(i)})}\right) - \log_2\left(e^{-f(x^{(i)})}\right) $$ where $f(x^{(i)}) = \sum_{j=0}^n w_j x_{ij}$ is called the decision function.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

fx = np.linspace(-5, 5)
Ly1 = np.log2(1 + np.exp(-fx))
Ly0 = np.log2(1 + np.exp(-fx)) - np.log2(np.exp(-fx))

p = plt.plot(fx, Ly1, label='L(1,f(x))')
p = plt.plot(fx, Ly0, label='L(0,f(x))')
plt.xlabel('f(x)')
plt.ylabel('L')
plt.legend()

# coming back to our simple example
def Loss(x_i, y_i, w0, w1):
    fx = w0 + x_i*w1
    if y_i == 1:
        return np.log2(1 + np.exp(-fx))
    if y_i == 0:
        return np.log2(1 + np.exp(-fx)) - np.log2(np.exp(-fx))
    else:
        raise Exception('y_i must be 0 or 1')

def sumLoss(x, y, w0, w1):
    sumloss = 0
    for x_i, y_i in zip(x, y):
        sumloss += Loss(x_i, y_i, w0, w1)
    return sumloss

# let's compute the loss function for several values
w0s = np.linspace(-10, 20, 100)
w1s = np.linspace(-10, 20, 100)

sumLoss_vals = np.zeros((w0s.size, w1s.size))
for k, w0 in enumerate(w0s):
    for l, w1 in enumerate(w1s):
        sumLoss_vals[k, l] = sumLoss(X, y, w0, w1)

# let's find the values of w0 and w1 that minimize the loss
ind0, ind1 = np.where(sumLoss_vals == sumLoss_vals.min())
print((ind0, ind1))
print((w0s[ind0], w1s[ind1]))

# plot the loss function
p = plt.pcolor(w0s, w1s, sumLoss_vals)
c = plt.colorbar()
p2 = plt.plot(w1s[ind1], w0s[ind0], 'ro')
tx = plt.xlabel('w1')
ty = plt.ylabel('w0')
03_Introduction_To_Supervised_Machine_Learning.ipynb
SchwaZhao/networkproject1
mit
e072b8979b4854d85ec137343c21f481
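To make the link between the log-likelihood above and the two loss branches plotted in the preceding cell explicit, here is the short derivation implied by the text (written with natural logarithms; switching to $\log_2$, as in the plots, only rescales the loss by a constant factor):

$$ -\,l(\underline{w}) = \sum_{i=1}^{m}\Big(-y^{(i)}\log h_\underline{w}(x^{(i)}) - (1-y^{(i)})\log\big(1-h_\underline{w}(x^{(i)})\big)\Big), \qquad h_\underline{w}(x^{(i)}) = \frac{1}{1+e^{-f(x^{(i)})}}. $$

$$ y^{(i)}=1:\quad L\left(1, f(x^{(i)})\right) = -\log h_\underline{w}(x^{(i)}) = \log\left(1+e^{-f(x^{(i)})}\right), $$

$$ y^{(i)}=0:\quad L\left(0, f(x^{(i)})\right) = -\log\left(1-h_\underline{w}(x^{(i)})\right) = \log\left(1+e^{-f(x^{(i)})}\right) - \log\left(e^{-f(x^{(i)})}\right). $$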
Here we found the minimum of the loss function simply by computing it over a large range of values. In practice, this approach is not possible when the dimensionality of the loss function (number of weights) is very large. To find the minimum of the loss function, the gradient descent algorithm (or stochastic gradient descent) is often used.
# plot the solution
x = np.linspace(0, 6, 100)

def h_w(x, w0=w0s[ind0], w1=w1s[ind1]):
    return 1/(1 + np.exp(-(w0 + x*w1)))

p1 = plt.plot(x, h_w(x))
p2 = plt.plot(X, y, 'ro')
tx = plt.xlabel('x [h]')
ty = plt.ylabel('y ')

# probability of passing the exam if you worked 5 hours:
print(h_w(5))
03_Introduction_To_Supervised_Machine_Learning.ipynb
SchwaZhao/networkproject1
mit
27211f7d0f1972261297f18ba3780d17
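As a complement to the grid search above, here is a minimal batch gradient-descent sketch for the same two-parameter problem (not part of the original notebook). It uses the standard cross-entropy gradient in natural logs, so its minimizer matches the grid-search minimum up to the base-2 scaling of the plotted loss; the learning rate and iteration count are arbitrary but conservative choices.
# Minimal gradient-descent sketch for the two-parameter logistic regression above.
# Reuses X (hours studied) and y (pass/fail) defined earlier in this notebook.
w0, w1 = 0.0, 0.0               # initial guess for intercept and slope
learning_rate = 0.01            # small enough to be stable for this dataset
for _ in range(200000):         # plenty of iterations for this tiny problem
    p = 1.0 / (1.0 + np.exp(-(w0 + w1 * X)))   # predicted probabilities
    grad_w0 = np.sum(p - y)                    # d(-log-likelihood)/dw0
    grad_w1 = np.sum((p - y) * X)              # d(-log-likelihood)/dw1
    w0 -= learning_rate * grad_w0
    w1 -= learning_rate * grad_w1

print(w0, w1)   # should land close to the (w0, w1) found by the grid search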
We will use the package scikit-learn (http://scikit-learn.org/), which provides access to many tools for machine learning, data mining and data analysis.
# The same thing using the sklearn module
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(C=1e10)

# to train our model we use the "fit" method
# we have to reshape X because we have only one feature here
model.fit(X.reshape(-1,1), y)

# to see the weights
print(model.coef_)
print(model.intercept_)

# use the trained model to predict new values
# (recent versions of scikit-learn expect a 2D array here)
print(model.predict_proba([[5]]))
print(model.predict([[5]]))
03_Introduction_To_Supervised_Machine_Learning.ipynb
SchwaZhao/networkproject1
mit
bc04f1ce1c56ff0c2ed54c6e7d5c3c3f
Note that although the loss function is not linear, the decision function is a linear function of the weights and features. This is why the Logistic regression is called a linear model. Other linear models are defined by different loss functions. For example: - Perceptron: $L \left(y^{(i)}, f(x^{(i)})\right) = \max(0, -y^{(i)}\cdot f(x^{(i)}))$ - Hinge-loss (soft-margin Support vector machine (SVM) classification): $L \left(y^{(i)}, f(x^{(i)})\right) = \max(0, 1-y^{(i)}\cdot f(x^{(i)}))$ See http://scikit-learn.org/stable/modules/sgd.html for more examples.
import numpy as np import matplotlib.pyplot as plt %matplotlib inline fx = np.linspace(-5,5, 200) Logit = np.log2(1+np.exp(-fx)) Percep = np.maximum(0,- fx) Hinge = np.maximum(0, 1- fx) ZeroOne = np.ones(fx.size) ZeroOne[fx>=0] = 0 p = plt.plot(fx,Logit,label='Logistic Regression') p = plt.plot(fx,Percep,label='Perceptron') p = plt.plot(fx,Hinge,label='Hinge-loss') p = plt.plot(fx,ZeroOne,label='Zero-One loss') plt.xlabel('f(x)') plt.ylabel('L') plt.legend() ylims = plt.ylim((0,7))
03_Introduction_To_Supervised_Machine_Learning.ipynb
SchwaZhao/networkproject1
mit
166ff25da96cdb203f824c48806e5623
Evaluating the performance of a binary classifier The confusion matrix allows us to visualize the performance of a classifier:

| | predicted positive | predicted negative |
| --- |:---:|:---:|
| real positive | TP | FN |
| real negative | FP | TN |

For each prediction $y_p$, we put it in one of the four categories based on the true value of $y$:
- TP = True Positive
- FP = False Positive
- TN = True Negative
- FN = False Negative

We can then evaluate several measures, for example: Accuracy: $\text{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}$ Accuracy is the proportion of true results (both true positives and true negatives) among the total number of cases examined. However, accuracy is not necessarily a good measure of the predictive power of a model. See the example below: Accuracy paradox: A classifier with these results:

| |Predicted Negative | Predicted Positive|
| --- |---|---|
|Negative Cases |9,700 | 150|
|Positive Cases |50 |100|

has an accuracy of 98%. Now consider the results of a classifier that systematically predicts a negative result independently of the input:

| |Predicted Negative| Predicted Positive|
|---|---|---|
|Negative Cases| 9,850 | 0|
|Positive Cases| 150 |0 |

The accuracy of this classifier is 98.5%, while it is clearly useless. Here the less accurate model is more useful than the more accurate one. This is why accuracy should not be used (alone) to evaluate the performance of a classifier. Precision and recall are usually preferred: Precision: $\text{Precision}=\frac{TP}{TP+FP}$ Precision measures the fraction of correct positives, or the lack of false positives. It answers the question: "Given a positive prediction from the classifier, how likely is it to be correct?" Recall: $\text{Recall}=\frac{TP}{TP+FN}$ Recall measures the proportion of positives that are correctly identified as such, or the lack of false negatives. It answers the question: "Given a positive example, will the classifier detect it?" $F_1$ score: In order to account for both the precision and the recall of a classifier, the $F_1$ score is the harmonic mean of the two measures: $F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{ \mathrm{precision} + \mathrm{recall}} = \frac{2TP}{2TP +FP+FN}$ When evaluating the performance of a classifier, it is important to test it on a different set of values than the set we used to train it. Indeed, we want to know how the classifier performs on new data, not on the training data. For this purpose we separate the training set in two: a part that we use to train the model and a part that we use to test it. This method is called cross-validation. Usually, we split the training set into N parts (typically 3 or 10), train the model on N-1 parts and test it on the remaining part. We then repeat this procedure with all the combinations of training and testing parts and average the performance metrics from each test. Scikit-learn makes it easy to perform cross-validation: http://scikit-learn.org/stable/modules/cross_validation.html Regularization and over-fitting Overfitting happens when your model is too complicated to generalize to new data. When your model fits your data perfectly, it is unlikely to fit new data well. <img src="https://upload.wikimedia.org/wikipedia/commons/1/19/Overfitting.svg" style="width: 250px;"/> The model in green is over-fitted. It performs very well on the training set, but it does not generalize well to new data compared to the model in black.
To avoid over-fitting, it is important to have a large training set and to use cross-validation to evaluate the performance of a model. Additionally, regularization is used to make the model less "complex" and more general. Regularization consists in adding a term $R(\underline{w})$, which penalizes too "complex" models, to the loss function, so that the training error we want to minimize is: $E(\underline{w}) = \sum_{i=1}^{m} L\left(y^{(i)}, f(x^{(i)})\right) + \lambda R(\underline{w})$, where $\lambda$ is a parameter that controls the strength of the regularization. Usual choices for $R(\underline{w})$ are:
- L2 norm of the weights: $R(\underline{w}) := \frac{1}{2} \sum_{j=1}^{n} w_j^2$, which forces small weights in the solution,
- L1 norm of the weights: $R(\underline{w}) := \sum_{j=1}^{n} |w_j|$ (also referred to as the Lasso), which leads to sparse solutions (with several zero weights).

The choice of the regularization and of its strength is usually made by selecting the best option during cross-validation. A small numeric sketch of the penalized training error follows, before the scikit-learn example.
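As an illustration only (made-up weights and per-example losses, not tied to any model in this notebook), the penalized training error $E(\underline{w})$ can be computed directly from the formula above:

import numpy as np

w = np.array([0.5, -1.2, 3.0])                      # hypothetical weight vector
per_example_loss = np.array([0.3, 0.1, 0.7, 0.2])   # L(y_i, f(x_i)) for four examples
lam = 0.1                                           # regularization strength lambda

R_l2 = 0.5 * np.sum(w ** 2)   # L2 (ridge-style) penalty
R_l1 = np.sum(np.abs(w))      # L1 (lasso-style) penalty

E_l2 = per_example_loss.sum() + lam * R_l2
E_l1 = per_example_loss.sum() + lam * R_l1
print(E_l2, E_l1)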
# for example
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix

# logistic regression with L2 regularization, C controls the strength of the regularization
# C = 1/lambda
model = LogisticRegression(C=1, penalty='l2')

# cross validation using 10 folds
y_pred = cross_val_predict(model, X.reshape(-1,1), y=y, cv=10)

print(confusion_matrix(y, y_pred))
print('Accuracy  = ' + str(accuracy_score(y, y_pred)))
print('Precision = ' + str(precision_score(y, y_pred)))
print('Recall    = ' + str(recall_score(y, y_pred)))
print('F_1       = ' + str(f1_score(y, y_pred)))

# try to run it with different numbers of folds for the cross-validation
# and different values of the regularization strength
03_Introduction_To_Supervised_Machine_Learning.ipynb
SchwaZhao/networkproject1
mit
db9c02ba0dcef166b220498e37bd8058
Vectorizer
from helpers.tokenizer import TextWrangler
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

bow_stem = CountVectorizer(strip_accents="ascii", tokenizer=TextWrangler(kind="stem"))
X_bow_stem = bow_stem.fit_transform(corpus.data)

tfidf_stem = TfidfVectorizer(strip_accents="ascii", tokenizer=TextWrangler(kind="stem"))
X_tfidf_stem = tfidf_stem.fit_transform(corpus.data)
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
7bb51f951c8a28dac34ad131193bd512
Models
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD, NMF

n_topics = 5

lda = LatentDirichletAllocation(n_components=n_topics, learning_decay=0.5, learning_offset=1.,
                                random_state=23)
lsa = TruncatedSVD(n_components=n_topics, random_state=23)
nmf = NMF(n_components=n_topics, solver="mu", beta_loss="kullback-leibler", alpha=0.1,
          random_state=23)

lda_params = {"lda__learning_decay": [0.5, 0.7, 0.9],
              "lda__learning_offset": [1., 5., 10.]}
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
e472a92a22e0133c8116ebc18359cd26
Pipelines
from sklearn.pipeline import Pipeline

lda_pipe = Pipeline([
    ("bow", bow_stem),
    ("lda", lda)
])

lsa_pipe = Pipeline([
    ("tfidf", tfidf_stem),
    ("lsa", lsa)
])

nmf_pipe = Pipeline([
    ("tfidf", tfidf_stem),
    ("nmf", nmf)
])
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
b02e21230caa2b8d9de6a52fd1e467de
Gridsearch
from sklearn.model_selection import GridSearchCV

lda_model = GridSearchCV(lda_pipe, param_grid=lda_params, cv=5, n_jobs=-1)
#lda_model.fit(corpus.data)
#lda_model.best_params_
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
ce8a795a26cdc9a462f132d62210779f
Training
lda_pipe.fit(corpus.data)
nmf_pipe.fit(corpus.data)
lsa_pipe.fit(corpus.data)
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
7cb5a58b9c80ee6a35e30459a494238e
Evaluation
print("LDA") print("Log Likelihood:", lda_pipe.score(corpus.data))
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
0b3962d261af7168bddb6858228db586
Visual Inspection
def df_topic_model(vectorizer, model, n_words=20):
    keywords = np.array(vectorizer.get_feature_names())
    topic_keywords = []
    for topic_weights in model.components_:
        top_keyword_locs = (-topic_weights).argsort()[:n_words]
        topic_keywords.append(keywords.take(top_keyword_locs))
    df_topic_keywords = pd.DataFrame(topic_keywords)
    df_topic_keywords.columns = ['Word '+str(i) for i in range(df_topic_keywords.shape[1])]
    df_topic_keywords.index = ['Topic '+str(i) for i in range(df_topic_keywords.shape[0])]
    return df_topic_keywords

print("LDA")
df_topic_model(vectorizer=bow_stem, model=lda_pipe.named_steps.lda, n_words=15)

print("LSA")
df_topic_model(vectorizer=tfidf_stem, model=lsa_pipe.named_steps.lsa, n_words=15)

print("NMF")
df_topic_model(vectorizer=tfidf_stem, model=nmf_pipe.named_steps.nmf, n_words=15)

import pyLDAvis
from pyLDAvis.sklearn import prepare
pyLDAvis.enable_notebook()

prepare(lda_pipe.named_steps.lda, X_bow_stem, bow_stem, mds="tsne")
prepare(nmf_pipe.named_steps.nmf, X_tfidf_stem, tfidf_stem, mds="tsne")
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
8637343ffd67f0c8934a80a7adc98ae5
Conclusion: Topic models derived from the different approaches look dissimilar. The top-word distribution of NMF appears most meaningful, mostly because its topics don't share the same words (due to the NMF algorithm). The LSA topic model is more interpretable than its LDA counterpart. Nonetheless, topics from both are hard to distinguish and don't make much sense. Therefore I'll go with the NMF topic model for the assignment-to-novel-collections step. Jaccard Index
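As a quick reminder (with toy word sets, not taken from the corpus), the Jaccard index of two sets is the size of their intersection divided by the size of their union:

# Toy example: 3 shared words out of 5 distinct words -> Jaccard index 0.6
a = {"hound", "moor", "baskerville", "watson"}
b = {"hound", "moor", "violin", "watson"}
print(len(a & b) / len(a | b))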
df_topic_word_lda = df_topic_model(vectorizer=bow_stem, model=lda_pipe.named_steps.lda, n_words=10)
df_topic_word_lsa = df_topic_model(vectorizer=tfidf_stem, model=lsa_pipe.named_steps.lsa, n_words=10)
df_topic_word_nmf = df_topic_model(vectorizer=tfidf_stem, model=nmf_pipe.named_steps.nmf, n_words=10)

def jaccard_index(list1, list2):
    s1 = set(list1)
    s2 = set(list2)
    jaccard_index = len(s1.intersection(s2)) / len(s1.union(s2))
    return jaccard_index

sims_lda_lsa, sims_lda_nmf, sims_lsa_nmf = {}, {}, {}

assert df_topic_word_lda.shape[0] == df_topic_word_lsa.shape[0] == df_topic_word_nmf.shape[0], "n_topics mismatch"

for ix, row in df_topic_word_lda.iterrows():
    l1 = df_topic_word_lda.loc[ix, :].values.tolist()
    l2 = df_topic_word_lsa.loc[ix, :].values.tolist()
    l3 = df_topic_word_nmf.loc[ix, :].values.tolist()
    sims_lda_lsa[ix] = jaccard_index(l1, l2)
    sims_lda_nmf[ix] = jaccard_index(l1, l3)
    sims_lsa_nmf[ix] = jaccard_index(l2, l3)

df_jaccard_sims = pd.DataFrame([sims_lda_lsa, sims_lda_nmf, sims_lsa_nmf])
df_jaccard_sims.index = ["LDA vs LSA", "LDA vs NMF", "LSA vs NMF"]
df_jaccard_sims["mean_sim"] = df_jaccard_sims.mean(axis=1)
df_jaccard_sims
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
aa87f680eda97e2634780aae50978571
Conclusion: Topics derived from different topic modeling approaches are fundamentally dissimilar. Document-topic Assignment
nmf_topic_distr = nmf_pipe.transform(corpus.data)

collections_map = {0: "His Last Bow",
                   1: "The Adventures of Sherlock Holmes",
                   2: "The Case-Book of Sherlock_Holmes",
                   3: "The Memoirs of Sherlock Holmes",
                   4: "The Return of Sherlock Holmes"}

# Titles created from dominant words in topics
novel_collections_map = {0: "The Whispering Ways Sherlock Holmes Waits to Act on Waste",
                         1: "Vengeful Wednesdays: Unexpected Incidences on the Tapering Train by Sherlock Holmes",
                         2: "A Private Journey of Sherlock Holmes: Thirteen Unfolded Veins on the Move",
                         3: "Sherlock Holmes Tumbling into the hanging arms of Scylla",
                         4: "The Shooking Jaw of Sherlock Holmes in the Villa of the Baronet"}

print("Novel Sherlock Holmes Short Stories Collections:")
for _, title in novel_collections_map.items():
    print("*", title)

topics = ["Topic" + str(i) for i in range(n_topics)]
docs = [" ".join(f_name.split("/")[-1].split(".")[0].split("_")) for f_name in corpus.filenames]

df_document_topic = pd.DataFrame(np.round(nmf_topic_distr, 3), columns=topics, index=docs)
df_document_topic["assigned_topic"] = np.argmax(df_document_topic.values, axis=1)
df_document_topic["orig_collection"] = [collections_map[item] for item in corpus.target]
df_document_topic["novel_collection"] = [novel_collections_map.get(item, item)
                                         for item in df_document_topic.assigned_topic.values]

df_novel_assignment = df_document_topic.sort_values("assigned_topic").loc[:, ["orig_collection", "novel_collection"]]
df_novel_assignment

from yellowbrick.text import TSNEVisualizer

tsne = TSNEVisualizer()
tsne.fit(X_tfidf_stem, df_document_topic.novel_collection)
tsne.poof()
HolmesTopicModels/holmes_topic_models/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
ecb94da95c8805779a695cac9894195a
Open a GeoTIFF with GDAL Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function:
# Note that you will need to update the filepath below according to your local machine
chm_filename = '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_CHM.tif'
chm_dataset = gdal.Open(chm_filename)
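The cells below work with a chm_array read from this dataset. A minimal sketch of how that array is typically extracted (band 1, with the no-data value masked) might look like the following; the exact no-data handling used in the original workflow is an assumption here:

# Read band 1 into a numpy array and mask the no-data value (assumed workflow)
chm_raster = chm_dataset.GetRasterBand(1)
noDataVal = chm_raster.GetNoDataValue()

chm_array = chm_raster.ReadAsArray().astype(float)
chm_array[chm_array == noDataVal] = np.nan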
tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
c1788c9acfc27e6136f21a64de42d79b
On your own, adjust the number of bins, and range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values:
chm_nonzero_array = copy.copy(chm_array)
chm_nonzero_array[chm_array==0] = np.nan
chm_nonzero_nonan_array = chm_nonzero_array[~np.isnan(chm_nonzero_array)]

# Use weighting to plot relative frequency (each count is divided by the total number of pixels)
hist_weights = np.ones_like(chm_nonzero_nonan_array) / chm_nonzero_nonan_array.size
plt.hist(chm_nonzero_nonan_array, bins=50, weights=hist_weights);
# plt.hist(chm_nonzero_nonan_array.flatten(), 50)

plt.title('Distribution of SERC Non-Zero Canopy Height')
plt.xlabel('Tree Height (m)');
plt.ylabel('Relative Frequency')
tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
490f1a7412a66a399dd0d3c68ed62b23
Note that it appears that the trees don't have a smooth or normal distribution, but instead appear blocked off in chunks. This is an artifact of the Canopy Height Model algorithm, which bins the trees into 5m increments (this is done to avoid another artifact, "pits"; Khosravipour et al., 2014). From the histogram we can see that the majority of the trees are < 30m. We can re-plot the CHM array, this time adjusting the color bar limits to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.
plot_band_array(chm_array, chm_ext, (0,35), title='SERC Canopy Height', cmap_title='Canopy Height, m', colormap='BuGn')
tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
40b80a124c08e4104d74675e05e6b7f2
Threshold Based Raster Classification Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based off boolean classifications. Let's classify the canopy height into five groups: - Class 1: CHM = 0 m - Class 2: 0m < CHM <= 10m - Class 3: 10m < CHM <= 20m - Class 4: 20m < CHM <= 30m - Class 5: CHM > 30m We can use np.where to find the indices where a boolean criteria is met.
chm_reclass = copy.copy(chm_array)
chm_reclass[np.where(chm_array==0)] = 1                        # CHM = 0 : Class 1
chm_reclass[np.where((chm_array>0) & (chm_array<=10))] = 2     # 0m < CHM <= 10m : Class 2
chm_reclass[np.where((chm_array>10) & (chm_array<=20))] = 3    # 10m < CHM <= 20m : Class 3
chm_reclass[np.where((chm_array>20) & (chm_array<=30))] = 4    # 20m < CHM <= 30m : Class 4
chm_reclass[np.where(chm_array>30)] = 5                        # CHM > 30m : Class 5
tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb
NEONScience/NEON-Data-Skills
agpl-3.0
52edd8bafb22d743f43b6c35b83aa26b
Use an arbitrary distribution NOTE: this requires PyMC3 3.1 pymc3.distributions.DensityDist
# pymc3.distributions.DensityDist?
import matplotlib.pyplot as plt
import matplotlib as mpl
from pymc3 import Model, Normal, Slice
from pymc3 import sample
from pymc3 import traceplot
from pymc3.distributions import Interpolated
from theano import as_op
import theano.tensor as tt
import numpy as np
from scipy import stats

%matplotlib inline
%load_ext version_information
%version_information pymc3

from sklearn.neighbors.kde import KernelDensity
import numpy as np

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)
kde.score_samples(X)

plt.scatter(X[:,0], X[:,1])
updating_info/Arb_dist.ipynb
balarsen/pymc_learning
bsd-3-clause
89d97dd16a7974d901679cc177a48e44
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
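As a tiny illustration (a hypothetical label vector, not taken from the dataset), the argmax step simply recovers the index of the single 1 in a One-Hot vector:

import numpy as np

one_hot = np.array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0])   # encodes the digit 7
print(np.argmax(one_hot))                            # -> 7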
data.test.cls = np.argmax(data.test.labels, axis=1)

feed_dict_test = {x: data.test.images,
                  y_true: data.test.labels,
                  y_true_cls: data.test.cls}
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
016368b1d0a357fbc9fb4282edba9640
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every ndisplay_interval iterations (100 by default).
# Counter for total number of iterations performed so far.
total_iterations = 0

def optimize(num_iterations, ndisplay_interval=100):
    # Ensure we update the global variable rather than a local copy.
    global total_iterations

    # Start-time used for printing time-usage below.
    start_time = time.time()

    for i in range(total_iterations, total_iterations + num_iterations):
        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = data.train.next_batch(train_batch_size)

        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        feed_dict_train = {x: x_batch, y_true: y_true_batch}

        # Run the optimizer using this batch of training data.
        # TensorFlow assigns the variables in feed_dict_train
        # to the placeholder variables and then runs the optimizer.
        session.run(optimizer, feed_dict=feed_dict_train)

        # Print status every ndisplay_interval iterations (100 by default).
        if i % ndisplay_interval == 0:
            # Calculate the accuracy on the training-set.
            acc = session.run(accuracy, feed_dict=feed_dict_train)

            # Message for printing.
            msg = "* Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"

            # Print it.
            print(msg.format(i + 1, acc))

    # Update the total number of iterations performed.
    total_iterations += num_iterations

    # Ending time.
    end_time = time.time()

    # Difference between start and end-times.
    time_dif = end_time - start_time

    # Print the time-usage.
    print("* Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
e3c4565922195e8cca721728289622d0
Helper function to plot nine sample test digits starting from a random index.
def plot_sample9():
    # Use TensorFlow to get a list of boolean values
    # whether each test-image has been correctly classified,
    # and a list for the predicted class of each image.
    prediction, cls_pred = session.run([correct_prediction, y_pred_cls],
                                       feed_dict=feed_dict_test)

    num_imgs = data.test.images.shape[0]
    i_start = np.random.choice(num_imgs-10, 1)[0]

    # Plot nine consecutive images starting from a random index.
    plot_images(images=data.test.images[i_start:i_start+9],
                cls_true=data.test.cls[i_start:i_start+9],
                cls_pred=cls_pred[i_start:i_start+9])
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
3a3927aa2ab303d16bd600af5b693de3
Performance after 1000 optimization iterations After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
optimize(num_iterations=900) # We performed 100 iterations above.
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
7aa0f75afaef1697bbadb48bb18f442b
test-run on 6/12/2017
Optimization Iteration: 101, Training Accuracy: 70.3%
Optimization Iteration: 201, Training Accuracy: 81.2%
Optimization Iteration: 301, Training Accuracy: 84.4%
Optimization Iteration: 401, Training Accuracy: 89.1%
Optimization Iteration: 501, Training Accuracy: 93.8%
Optimization Iteration: 601, Training Accuracy: 87.5%
Optimization Iteration: 701, Training Accuracy: 98.4%
Optimization Iteration: 801, Training Accuracy: 93.8%
Optimization Iteration: 901, Training Accuracy: 92.2%
Time usage: 0:01:28
plot_sample9()

print_test_accuracy(show_example_errors=True)
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
3f68076edf344cb070c667bff1ea5393
Performance after 10,000 optimization iterations After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
optimize(num_iterations=9000, ndisplay_interval=500) # We performed 1000 iterations above.
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
7d33312fbba5b998393d0b0fedea21b5
Optimization Iteration: 1, Training Accuracy: 92.2%
Optimization Iteration: 501, Training Accuracy: 98.4%
Optimization Iteration: 1001, Training Accuracy: 95.3%
Optimization Iteration: 1501, Training Accuracy: 100.0%
Optimization Iteration: 2001, Training Accuracy: 96.9%
Optimization Iteration: 2501, Training Accuracy: 100.0%
Optimization Iteration: 3001, Training Accuracy: 96.9%
Optimization Iteration: 3501, Training Accuracy: 98.4%
Optimization Iteration: 4001, Training Accuracy: 96.9%
Optimization Iteration: 4501, Training Accuracy: 100.0%
Optimization Iteration: 5001, Training Accuracy: 96.9%
Optimization Iteration: 5501, Training Accuracy: 100.0%
Optimization Iteration: 6001, Training Accuracy: 98.4%
Optimization Iteration: 6501, Training Accuracy: 96.9%
Optimization Iteration: 7001, Training Accuracy: 100.0%
Optimization Iteration: 7501, Training Accuracy: 98.4%
Optimization Iteration: 8001, Training Accuracy: 100.0%
Optimization Iteration: 8501, Training Accuracy: 100.0%
Time usage: 0:14:56
plot_sample9()

print_test_accuracy(show_example_errors=True, show_confusion_matrix=True)
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
dd0a218521dd20226b8a2518d99e1651
From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images. These images are then flattened and input to the fully-connected layer, but that is not shown here. Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
# We are done with TensorFlow, so we close the session to release its resources.
# (Comment the line below out if you want to keep modifying and experimenting
# with the Notebook without having to restart it.)
session.close()
learn_stem/machine_learning/tensorflow/02_Convolutional_Neural_Network.ipynb
wgong/open_source_learning
apache-2.0
9c6ff969b0226205ddc3ccf4b75a3e55
Flip the plot by assigning the data variable to the y axis:
sns.ecdfplot(data=penguins, y="flipper_length_mm")
doc/docstrings/ecdfplot.ipynb
arokem/seaborn
bsd-3-clause
40cb9ba25731c98c608d7e3247a9a687
If neither x nor y is assigned, the dataset is treated as wide-form, and an ECDF is drawn for each numeric column:
sns.ecdfplot(data=penguins.filter(like="bill_", axis="columns"))
doc/docstrings/ecdfplot.ipynb
arokem/seaborn
bsd-3-clause
b2a1ccc8fa316c6ee7426f6d74aac006
You can also draw multiple ECDFs from a long-form dataset with hue mapping:
sns.ecdfplot(data=penguins, x="bill_length_mm", hue="species")
doc/docstrings/ecdfplot.ipynb
arokem/seaborn
bsd-3-clause
abc46f6f566785f85ce8e7294530588b
The default distribution statistic is normalized to show a proportion, but you can show absolute counts instead:
sns.ecdfplot(data=penguins, x="bill_length_mm", hue="species", stat="count")
doc/docstrings/ecdfplot.ipynb
arokem/seaborn
bsd-3-clause
64a394d656ade7ebb70661b9571cacb2
It's also possible to plot the empirical complementary CDF (1 - CDF):
sns.ecdfplot(data=penguins, x="bill_length_mm", hue="species", complementary=True)
doc/docstrings/ecdfplot.ipynb
arokem/seaborn
bsd-3-clause
0ea48a425e46f820337b684d47aecd73
There are several features of $g$ to note, - For larger values of $z$, $g(z)$ approaches 1 - For more negative values of $z$, $g(z)$ approaches 0 - The value of $g(0) = 0.5$ - For $z \ge 0$, $g(z)\ge 0.5$ - For $z \lt 0$, $g(z)\lt 0.5$ 0.5 will be the cutoff for decisions. That is, if $g(z) \ge 0.5$ then the "answer" is "the positive case", 1; if $g(z) \lt 0.5$ then the answer is "the negative case", 0. Decision Boundary The value 0.5 mentioned above creates a boundary for classification by our model (hypothesis) $h_a(x)$ $$ \begin{align} \text{if } h_a(x) \ge 0.5 & \text{ then we say } &y=1 \\ \text{if } h_a(x) \lt 0.5 & \text{ then } &y=0 \end{align} $$ Looking at $g(z)$ more closely gives, $$ \begin{align} h_a(x) = g(a'x) \ge 0.5 & \text{ when} & a'x \ge 0 \\ h_a(x) = g(a'x) \lt 0.5 & \text{ when} & a'x \lt 0 \end{align} $$ Therefore, $$ \bbox[25px,border:2px solid green]{ \begin{align} a'x \ge 0 & \text{ implies } & y = 1 \\ a'x \lt 0 & \text{ implies } & y = 0 \end{align} }$$ The Decision Boundary is the "line" defined by $a'x$ that separates the area where $y=0$ from the area where $y=1$. The "line" defined by $a'x$ can be non-linear since the feature variables $x_i$ can be non-linear. The decision boundary can be any shape (curve) that fits the data. We use a Cost Function derived from the logistic regression sigmoid function to help us find the parameters $a$ that define the optimal decision boundary $a'x$. After we have found the optimal values of $a$, the model function $h_a(x)$, which uses the sigmoid function, will tell us which side of the decision boundary our "question" lies on based on the values of the features $x$ that we give it. If you understand the paragraph above then you have a good idea of what logistic regression is about! Below is a minimal sketch of the decision rule, followed by some examples of what that Decision Boundary might look like:
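A minimal sketch of the decision rule (with a locally defined sigmoid and hypothetical, un-fitted parameter values, so it is purely illustrative):

import numpy as np

def g(z):
    # sigmoid, defined here so the snippet stands alone
    return 1.0 / (1.0 + np.exp(-z))

def predict(a, X):
    # y = 1 where g(X a) >= 0.5, which is exactly where X a >= 0; otherwise y = 0
    return (g(X @ a) >= 0.5).astype(int)

a = np.array([0.5, -2.0])                             # hypothetical [intercept, weight]
X = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])   # first column of ones for the intercept
print(predict(a, X))                                  # -> [1 1 0]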
# Generate 2 clusters of data
S = np.eye(2)
x1, y1 = np.random.multivariate_normal([1,1], S, 40).T
x2, y2 = np.random.multivariate_normal([-1,-1], S, 40).T

fig, ax = plt.subplots()
ax.plot(x1, y1, "o", label='neg data')
ax.plot(x2, y2, "P", label='pos data')

xb = np.linspace(-3, 3, 100)
a = [0.55, -1.3]
ax.plot(xb, a[0] + a[1]*xb, label='b(x) = %.2f + %.2f x' % (a[0], a[1]))

plt.title("Decision Boundary", fontsize=24)
plt.legend();
ML-Logistic-Regression-theory.ipynb
dbkinghorn/blog-jupyter-notebooks
gpl-3.0
ec5f72b90f56ad6d160a49dcd1544466
The plot above shows 2 sets of training data. The positive cases are plotted with '+' markers and the negative cases with 'o' markers. The straight line is the decision boundary $b(x) = 0.55 - 1.3x$. Any test cases that are above the line are negative and any below are positive. The parameters of that line are what a Logistic Regression run on those 2 sets of training data could have determined. The next plot shows a case where the decision boundary is more complicated. It's represented by $b(x_1,x_2) = x_1^2 +x_2^2 - 2.5$
fig, ax = plt.subplots()

x3, y3 = np.random.multivariate_normal([0,0], [[.5,0],[0,.5]], 400).T
t = np.linspace(0, 2*np.pi, 400)
ax.plot((3+x3)*np.sin(t), (3+y3)*np.cos(t), "o")
ax.plot(x3, y3, "P")

xb1 = np.linspace(-5.0, 5.0, 100)
xb2 = np.linspace(-5.0, 5.0, 100)
Xb1, Xb2 = np.meshgrid(xb1, xb2)
b = Xb1**2 + Xb2**2 - 2.5
ax.contour(Xb1, Xb2, b, [0], colors='r')

plt.title("Decision Boundary", fontsize=24)
ax.axis('equal')
ML-Logistic-Regression-theory.ipynb
dbkinghorn/blog-jupyter-notebooks
gpl-3.0
793d7400cd65bb562595650fcb32bfa5
In this plot the positive outcomes are in a circular region in the center of the plot. The decision boundary is the red circle. ## Cost Function for Logistic Regression A cost function's main purpose is to penalize bad choices for the parameters to be optimized and reward good ones. It should be easy to minimize by having a single global minimum and not be overly sensitive to changes in its arguments. It is also nice if it is differentiable (without difficulty) so you can find the gradient for the minimization problem. That is, it's best if it is "convex", "well behaved" and "smooth". The cost function for logistic regression is written with logarithmic functions. An argument for using the log form of the cost function comes from the statistical derivation of the maximum likelihood estimation of the probabilities. With the exponential form the likelihood is a product of probabilities, while the log-likelihood is a sum. [The statistical derivations are always interesting but usually complex. We don't really need to look at that to justify the cost function we will use.] The log function is a monotonically increasing function, so the negative of the log is decreasing: maximizing a function and minimizing the negative log of that function give the same values for the parameters. The log form will also be convex, which means it will have a single global minimum, whereas a simple "least-squares" cost function using the sigmoid function can have multiple minima and abrupt changes. The log form is just better behaved! To see some of this, let's look at plots of the sigmoid function and of the negative log of the sigmoid function.
z = np.linspace(-10,10,100)
fig, ax = plt.subplots()
ax.plot(z, g(z))
ax.set_title('Sigmoid Function 1/(1 + exp(-z))', fontsize=24)
ax.annotate('Convex', (-7.5,0.2), fontsize=18)
ax.annotate('Concave', (3,0.8), fontsize=18)

# Start a new figure for the negative log of the sigmoid
plt.figure()
z = np.linspace(-10,10,100)
plt.plot(z, -np.log(g(z)))
plt.title("Log Sigmoid Function -log(1/(1 + exp(-z)))", fontsize=24)
plt.annotate('Convex', (-2.5,3), fontsize=18)
ML-Logistic-Regression-theory.ipynb
dbkinghorn/blog-jupyter-notebooks
gpl-3.0
9e5de29ecd51ac60620d0957566bc45e
Recall that in the training-set the $y$ are labels with values of 0 or 1. The cost function will be broken down into two cases for each data point $(i)$, one for $y=1$ and one for $y=0$. These two cases can then be combined into a single cost function $J$ $$ \bbox[25px,border:2px solid green]{ \begin{align} J^{(i)}_{y=1}(a) & = -\log(h_a(x^{(i)})) \\ J^{(i)}_{y=0}(a) & = -\log(1 - h_a(x^{(i)})) \\ J(a) & = -\frac{1}{m}\sum^{m}_{i=1} \left[ y^{(i)} \log(h_a(x^{(i)})) + (1-y^{(i)})\log(1 - h_a(x^{(i)})) \right] \end{align} }$$ You can see that the factors $y$ and $(1-y)$ effectively pick out the terms for the cases $y=1$ and $y=0$. Vectorized form of $J(a)$ $J(a)$ can be written in vector form, eliminating the summation sign, as $$ \bbox[25px,border:2px solid green]{ \begin{align} h_a(X) &= g(Xa) \\ J(a) &= -\frac{1}{m} \left( y' \log(h_a(X)) + (1-y)'\log(1 - h_a(X)) \right) \end{align} }$$ A small numeric sketch of this vectorized cost follows; then, to visualize how the cost function works, look at the plots below.
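As an illustration only (made-up data, hypothetical parameter values, and a locally re-defined g() so the snippet stands alone), the vectorized cost $J(a)$ can be computed directly:

import numpy as np

def g(z):
    # sigmoid
    return 1.0 / (1.0 + np.exp(-z))

def cost(a, X, y):
    # J(a) = -(1/m) * ( y' log(h) + (1-y)' log(1-h) ), with h = g(X a)
    h = g(X @ a)
    m = len(y)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m

# Tiny made-up example: a bias column of ones plus one feature
X = np.array([[1.0, -2.0],
              [1.0, -0.5],
              [1.0,  1.0],
              [1.0,  3.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
a = np.array([0.2, -1.0])      # hypothetical parameters, not fitted

print(cost(a, X, y))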
x = np.linspace(-10,10,50)
plt.plot(g(x), -np.log(g(x)))
plt.title("h(x) vs J(a)=-log(h(x)) for y = 1", fontsize=24)
plt.xlabel('h(x)')
plt.ylabel('J(a)')
ML-Logistic-Regression-theory.ipynb
dbkinghorn/blog-jupyter-notebooks
gpl-3.0
7abdfa3d1d8f00ea5ead76134379112a
You can see from this plot that when $y=1$ the cost $J(a)$ is large if $h(x)$ goes toward 0. That is, it favors $h(x)$ going to 1 which is what we want.
x = np.linspace(-10,10,50)
plt.plot(g(x), -np.log(1-g(x)))
plt.title("h(x) vs J(a)=-log(1-h(x)) for y = 0", fontsize=24)
plt.xlabel('h(x)')
plt.ylabel('J(a)')
ML-Logistic-Regression-theory.ipynb
dbkinghorn/blog-jupyter-notebooks
gpl-3.0
225d48f46eca9d995e2e1900345f6558
Data Generation A set of periodic microstructures and their volume-averaged elastic stress values $\bar{\sigma}_{xx}$ can be generated by importing the make_elastic_stress_random function from pymks.datasets. This function has several arguments: n_samples is the number of samples that will be generated, size specifies the dimensions of the microstructures, grain_size controls the effective microstructure feature size, elastic_modulus and poissons_ratio are used to indicate the material property for each of the phases, macro_strain is the value of the applied uniaxial strain, and seed can be used to change the random number generator seed. Let's go ahead and create 6 different types of microstructures, each with 200 samples with dimensions 21 x 21. Each of the 6 types will have a different microstructure feature size. The function will return the microstructures and their associated volume-averaged stress values.
from pymks.datasets import make_elastic_stress_random

sample_size = 200
grain_size = [(15, 2), (2, 15), (7, 7), (8, 3), (3, 9), (2, 2)]
n_samples = [sample_size] * 6
elastic_modulus = (410, 200)
poissons_ratio = (0.28, 0.3)
macro_strain = 0.001
size = (21, 21)

X, y = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size,
                                  elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio,
                                  macro_strain=macro_strain, seed=0)
notebooks/stress_homogenization_2D.ipynb
XinyiGong/pymks
mit
9ae353afd38d71f1caa3e283ce27c5f2
These default parameters may not give the best model for a given problem; we will now show one method that can be used to optimize them. Optimizing the Number of Components and Polynomial Order To start, we can look at how the variance changes as a function of the number of components. In general, for SVD as well as PCA, the amount of variance captured in each component decreases as the component number increases. This means that as the number of components used in the dimensionality reduction increases, the percentage of the variance captured asymptotically approaches 100%. Let's see if this is true for our dataset. In order to do this we will change the number of components to 40 and then fit the data using the fit function. This function performs the dimensionality reduction and also fits the regression model. Because our microstructures are periodic, we need to use the periodic_axes argument when we fit the data.
model.n_components = 40
model.fit(X, y, periodic_axes=[0, 1])
notebooks/stress_homogenization_2D.ipynb
XinyiGong/pymks
mit
974344033b8f488c39ccbce578163e60
Roughly 90 percent of the variance is captured with the first 5 components. This means our model may only need a few components to predict the average stress. Next we need to optimize the number of components and the polynomial order. To do this we are going to split the data into testing and training sets. This can be done using the train_test_split function from sklearn.
from sklearn.cross_validation import train_test_split

flat_shape = (X.shape[0],) + (np.prod(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(flat_shape), y,
                                                     test_size=0.2, random_state=3)

print(X_train.shape)
print(X_test.shape)
notebooks/stress_homogenization_2D.ipynb
XinyiGong/pymks
mit
ec77cfaf477419d1ca0ce9ae48a8fb88