repo_name | path | license | content
---|---|---|---|
google/spectral-density | tf2/Lanczos_example.ipynb | apache-2.0 | import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
from matplotlib import pyplot as plt
import seaborn as sns
tf.enable_v2_behavior()
import lanczos_algorithm
num_samples = 50
num_features = 16
X = tf.random.normal([num_samples, num_features])
y = tf.random.normal([num_samples])
"""
Explanation: Approximating the Hessian for large neural networks.
This notebook describes how to use the spectral-density package with Tensorflow2. The main entry point of this package is the lanczos_algorithm.approximate_hessian function, compatible with Keras models. This function takes the following arguments:
model: The Keras model for which we want to compute the Hessian.
dataset: Dataset on which the model is trained. Can be a Tensorflow dataset, or more generally any iterator yielding tuples of data (X, y). If a Tensorflow dataset is used, it should be batched beforehand.
order: Rank of the approximation of the Hessian. The higher the better the approximation. See paper for more details.
reduce_op: Whether the loss function averages or sums the per sample loss. The default value is MEAN and should be compatible with most Keras losses, provided you didn't specify another reduction when instantiating it.
random_seed: Seed to use to sample the first vector in the Lanczos algorithm.
Example 1: Full rank estimation for linear model.
We start with a simple use case: we wish to train the following model:
$$ \mbox{arg}\min_\beta \sum_i (y_i - \beta^Tx_i)^2$$
As this optimization problem is quadratic, the Hessian of the loss is independent of $\beta$ and is equal to $2X^TX$. Let's verify this using lanczos_algorithm.approximate_hessian, and setting the order of the approximation to the number of features, thus recovering the exact Hessian.
We first generate some random inputs and outputs:
End of explanation
"""
linear_model = tf.keras.Sequential(
[tf.keras.Input(shape=[num_features]),
tf.keras.layers.Dense(1, use_bias=False)])
"""
Explanation: We then define a linear model using the Keras API:
End of explanation
"""
def loss_fn(model, inputs):
x, y = inputs
preds = linear_model(x)
return tf.keras.losses.mse(y, preds)
"""
Explanation: Finally, we define a loss function that takes as input the model and a batch of examples, and returns a scalar loss. Here, we simply compute the mean squared error between the predictions of the model and the desired output.
End of explanation
"""
V, T = lanczos_algorithm.approximate_hessian(
linear_model,
loss_fn,
[(X,y)],
order=num_features)
"""
Explanation: Finally, we call approximate_hessian, setting order to the number of parameters to compute the exact Hessian. This function returns two tensors $(V, T)$ of shapes (num_parameters, order) and (order, order), such that:
$$ H \approx V T V^T $$
with an equality if order = num_parameters.
End of explanation
"""
plt.figure(figsize=(14, 5))
plt.subplot(1,2,1)
H = tf.matmul(V, tf.matmul(T, V, transpose_b=True))
plt.title("Hessian as estimated by Lanczos")
sns.heatmap(H)
plt.subplot(1,2,2)
plt.title("$2X^TX$")
sns.heatmap(2 * tf.matmul(X, X, transpose_a=True))
plt.show()
"""
Explanation: We can check that the reconstructed Hessian is indeed equal to $2X^TX$:
End of explanation
"""
def preprocess_images(tfrecord):
image, label = tfrecord['image'], tfrecord['label']
image = tf.cast(image, tf.float32) / 255.0
return image, label
cifar_dataset_train = tfds.load("cifar10", split="train").map(preprocess_images).cache()
cifar_dataset_test = tfds.load("cifar10", split="test").map(preprocess_images).cache()
model = tf.keras.Sequential([
tf.keras.Input([32, 32, 3]),
tf.keras.layers.Conv2D(filters=64, kernel_size=3, activation='relu'),
tf.keras.layers.Conv2D(filters=64, kernel_size=3, activation='relu'),
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation='relu'),
tf.keras.layers.Conv2D(filters=128, kernel_size=3, activation='relu'),
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024, activation='relu'),
tf.keras.layers.Dense(10)])
model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.001),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print(model.summary())
_ = model.fit(cifar_dataset_train.batch(32),
validation_data=cifar_dataset_test.batch(128),
epochs=5)
"""
Explanation: Example 2: Convnet on Cifar10
We first define a VGG16-like model (15.2M parameters) that we train a bit on Cifar10:
End of explanation
"""
SCCE = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss_fn(model, inputs):
x, y = inputs
preds = model(x, training=False)
return SCCE(y, preds)
V, T = lanczos_algorithm.approximate_hessian(
model,
loss_fn,
cifar_dataset_train.batch(128),
order=90,
random_seed=1)
"""
Explanation: Our loss function is a bit different from the previous one, as we now use cross-entropy to train our model. Don't forget to set training=False to deactivate dropout and similar mechanisms.
Computing an estimation of the Hessian will take a bit of time. A good rule of thumb is that the algorithm will take $T = \text{order} \times 2\,T_{epoch}$ units of time, where $T_{epoch}$ stands for the time needed to perform one training epoch.
End of explanation
"""
# The density utilities live in the repository's `jax` directory; assuming the
# repository root is on the Python path, they can be imported like this:
import jax.density as density_lib
def plot(grids, density, label=None):
plt.semilogy(grids, density, label=label)
plt.ylim(1e-10, 1e2)
plt.ylabel("Density")
plt.xlabel("Eigenvalue")
plt.legend()
density, grids = density_lib.tridiag_to_density(
[T.numpy()], grid_len=10000, sigma_squared=1e-3)
plot(grids, density)
"""
Explanation: Finally, you can use the visualization functions provided in jax.density to plot the spectrum (no actual JAX code is involved in this operation).
End of explanation
"""
|
kimkipyo/dss_git_kkp | ํต๊ณ, ๋จธ์ ๋ฌ๋ ๋ณต์ต/160516์_3์ผ์ฐจ_๊ธฐ์ด ์ ํ ๋์ 1 - ํ๋ ฌ์ ์ ์์ ์ฐ์ฐ Basic Linear Algebra(NumPy)/3.NumPy ์ฐ์ฐ.ipynb | mit | x = np.arange(1, 101)
x
y = np.arange(101, 201)
y
%%time
z = np.zeros_like(x)
for i, (xi, yi) in enumerate(zip(x, y)):
z[i] = xi + yi
z
z
"""
Explanation: NumPy Operations
Vectorized operations
NumPy supports vectorized operations, which keep code simple and make computation fast. A vectorized operation means writing code that looks like the vector or matrix operations of linear algebra, without using an explicit loop.
For example, suppose we need to perform the following computation:
$$
x = \begin{bmatrix}1 \\ 2 \\ 3 \\ \vdots \\ 100 \end{bmatrix}, \;\;\;\;
y = \begin{bmatrix}101 \\ 102 \\ 103 \\ \vdots \\ 200 \end{bmatrix},
$$
$$z = x + y = \begin{bmatrix}1+101 \\ 2+102 \\ 3+103 \\ \vdots \\ 100+200 \end{bmatrix}= \begin{bmatrix}102 \\ 104 \\ 106 \\ \vdots \\ 300 \end{bmatrix}
$$
If we did not use NumPy's vectorized operations, we would have to code this computation with a loop, as follows.
End of explanation
"""
%%time
z = x + y
z
"""
Explanation: However, NumPy supports vectorized operations, so the same computation can be written as a single addition. The code is exactly the same as the linear-algebra vector notation shown above.
End of explanation
"""
x = np.arange(10)
x
a = 100
a * x
"""
Explanation: We can see that the vectorized operation is also much faster.
Element-wise operations
NumPy's vectorized operations are element-wise: they operate on elements at the same positions. When an ndarray is viewed as a linear-algebra vector or matrix, addition and subtraction coincide with the corresponding NumPy operations.
Multiplying a vector by a scalar works the same way: the expression used in linear algebra and the NumPy code agree.
End of explanation
"""
x = np.arange(10)
y = np.arange(10)
x * y
x
y
np.dot(x, y)
x.dot(y)
"""
Explanation: NumPy multiplication, however, differs from the definition of the matrix product, i.e. the inner (dot) product. In that case we must use the separate dot command or method.
End of explanation
"""
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
a == b
a >= b
"""
Explanation: Comparison operations are likewise element-wise. They therefore differ from the linear-algebra notion of comparison, in which all elements of the vectors or matrices must be equal.
End of explanation
"""
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
c = np.array([1, 2, 3, 4])
np.array_equal(a, b)
np.array_equal(a, c)
"""
Explanation: To compare entire arrays at once, use the array_equal command.
End of explanation
"""
a = np.arange(5)
a
np.exp(a)
10**a
np.log(a)
np.log10(a)
"""
Explanation: The mathematical functions provided by NumPy, such as the exponential and logarithm functions, also support element-wise vectorized operations.
End of explanation
"""
import math
a = [1, 2, 3]
math.exp(a)
"""
Explanation: If you do not use the functions provided by NumPy, vectorized operations are not possible.
End of explanation
"""
x = np.arange(5)
y = np.ones_like(x)
x + y
x + 1
"""
Explanation: Broadcasting
In linear algebra, two matrices must have the same size to be added or subtracted. NumPy, however, also supports arithmetic between two ndarray objects of different sizes. This feature is called broadcasting: the smaller array is automatically repeated and expanded to match the size of the larger one.
For example, consider adding a scalar to a vector as shown below. In linear algebra this operation is not defined.
$$
x = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}, \;\;\;\;
x + 1 = \begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + 1 = ?
$$
NumPy, however, uses broadcasting to expand the scalar to the same size as the vector and then performs the addition.
$$
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \overset{\text{numpy}}{+} 1 =
\begin{bmatrix}0 \\ 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} + \begin{bmatrix}1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} =
\begin{bmatrix}1 \\ 2 \\ 3 \\ 4 \\ 5 \end{bmatrix}
$$
End of explanation
"""
np.tile(np.arange(0, 40, 10), (3, 1))
a = np.tile(np.arange(0, 40, 10), (3, 1)).T
a
b = np.array([0, 1, 2])
b
a + b
a = np.arange(0, 40, 10)[:, np.newaxis]
a
a + b
"""
Explanation: Broadcasting also applies when the arrays have more dimensions. See the following figure.
<img src="https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png" style="width: 60%; margin: 0 auto 0 auto;">
End of explanation
"""
x = np.array([1, 2, 3, 4])
x
np.sum(x)
x.sum()
x = np.array([1, 3, 2, 4])
x.min(), np.min(x)
x.max()
x.argmin() # index of minimum
x.argmax() # index of maximum
x = np.array([1, 2, 3, 1])
x.mean()
np.median(x)
np.all([True, True, False])
np.any([True, True, False])
a = np.zeros((100, 100), dtype=int)
a
np.any(a == 0)
np.any(a != 0)
np.all(a == 0)
a = np.array([1, 2, 3, 2])
b = np.array([2, 2, 3, 2])
c = np.array([6, 4, 4, 5])
((a <= b) & (b <= c)).all()
"""
Explanation: Dimension-reducing operations
If the elements in one row of an ndarray are treated as one data set and we take their mean, we get one number per row. For example, taking the row-mean of a 10x5 two-dimensional array yields a one-dimensional vector of 10 numbers. Operations of this kind are called dimension-reduction operations.
ndarray supports the following dimension-reducing commands and methods:
Maximum/minimum: min, max, argmin, argmax
Statistics: sum, mean, median, std, var
Boolean: all, any
End of explanation
"""
x = np.array([[1, 1], [2, 2]])
x
x.sum()
x.sum(axis=0) # columns (first dimension)
x.sum(axis=1) # rows (second dimension)
y = np.array([[1, 2, 3], [5, 6, 1]])
np.median(y, axis=-1) # last axis
y
np.median(y, axis=1)
"""
Explanation: When the operand has two or more dimensions, the axis argument specifies along which axis the computation is performed. axis=0 is used for column-wise operations, axis=1 for row-wise operations, and so on. The default value is 0.
<img src="https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png" style="margin: 0 auto 0 auto;">
End of explanation
"""
a = np.array([[4, 3, 5], [1, 2, 1]])
a
np.sort(a)
np.sort(a, axis=1)
np.sort(a, axis=0)
"""
Explanation: Sorting
The sort command or method sorts the elements of an array by size and can produce a new, sorted array. For arrays with two or more dimensions, the axis argument again determines the direction of the sort.
End of explanation
"""
a
a.sort(axis=1)
a
"""
Explanation: The sort method is an in-place method that modifies the data of the object itself, so use it with care.
End of explanation
"""
a = np.array([4, 3, 1, 2])
j = np.argsort(a)
j
a[j]
"""
Explanation: If you want to know the sort order rather than actually sorting the data, use the argsort command.
End of explanation
"""
|
NAU-CFL/Python_Learning_Source | 04_Control_Structures_Lecture.ipynb | mit | num = 10 # Assignment Operator
num == 12 # Comparison operator
"""
Explanation: Control Structures
A control statement is a statement that determines the control flow of a set of instructions.
Sequence control is an implicit form of control in which instructions are executed in the order that they are written.
Selection control is provided by a control statement that selectively executes instructions.
Iterative control is provided by an iterative control statement that repeatedly executes instructions.
Boolean Expressions
Boolean is a specific data type consisting of True and False in Python.
A Boolean expression is an expression that evaluates to a Boolean value.
One way of producing Boolean values is by comparing values.
Relational expressions are a type of Boolean expression, since they evaluate to a Boolean result.
End of explanation
"""
10 == 20
print(type('2'))
print('2' < '9')
if "Aliya" > "Alican":
print("Aliya is the best!")
else:
print("No, Aliya is not the best!")
'Hello' == "hello"
'Hello' > 'Zebra'
"""
Explanation: We know that we can compare numbers for sure, but Python also lets us compare string values based on their character encoding.
End of explanation
"""
'Dr.' in 'Dr. Madison'
10 not in (10, 20, 30)
"""
Explanation: Another way to get Boolean values is by checking whether a given value is a member of a collection:
End of explanation
"""
p = False
r = True
p and r
p or r
not (r and (not p))
"""
Explanation: Boolean (logical) operators are denoted by and, or, and not in Python. They work just like formal logic:
End of explanation
"""
num = 15
(1 <= num <= 10)
# Above is equals to
1 <= num and num <= 10
(10 < 0) and (10 > 2)
not(True) and False
not(True and False)
name = 'Ann'
name in ('Thomas', 'MaryAnn', 'Thomas')
type(('MarryAnn'))
"""
Explanation: Boolean operators let us build more complex comparison statements, which will eventually lead us to better control structures.
End of explanation
"""
if 10 < 0:
print("Yes")
grade = 66
if grade >= 70:
print('Passing Grade')
else:
print('Failing Grade')
grade = 100
if grade == 100:
print('Perfect Score!')
"""
Explanation: Selection Control
A selection control statement is a control statement providing selective execution of instructions.
An if statement is a selection control statement based on the value of a given Boolean expression.
Syntax:
if condition:
statements
else:
statements
You don't have to include the else part.
End of explanation
"""
credits = 45
if credits >= 90:
print('Senior')
else:
if credits >= 60:
print('Junior')
else:
if credits >= 30:
print('Sophomore')
else:
if credits >= 1:
print('Freshman')
else:
print('* No Earned Credits *')
"""
Explanation: Apply it!
<p style=color:red>
Write a small program that converts Fahrenheit to Celsius or vice versa by getting input from the user (F/C). (A possible solution sketch appears after this section.)
</p>
Indentation is really important in Python since it does not use {} or ;
Multiway selection is possible by nested if else statements:
End of explanation
"""
credits = 45
if credits >= 90:
print('Senior')
elif credits >= 60:
print('Junior')
elif credits >= 30:
print('Sophomore')
elif credits >= 1:
print('Freshman')
else:
print('* No Earned Credits *')
"""
Explanation: However there is a better way to do this using an additional keyword: elif
End of explanation
"""
# Initial variables
total = 0
i = 1
n = int(input('Enter value: '))
while i <= n:
total += i # total = total + i
i += 1
print(total)
"""
Explanation: Apply It!
<p style=color:red>
Write a small program that prints the number of days in a given month of a given year. The output will look like this (a possible solution sketch appears after this section):
</p>
Test 1:
This program will determine the number of days in a given month
Enter the month (1-12): 14
*Invalid Value Entered -14*
Test 2:
This program will determine the number of days in a given month
Enter the month (1-12): 2
Please enter the year (e.g., 2010): 2000
There are 29 days in the month
<p style=color:red>
Use if and elif statements
</p>
Hint1:
<p style=color:white>
The days of the month are fixed regardless of the year, except February. <br>
Check for 2.
</p>
Hint2:
<p style=color:white>
If the year is divisible by 4 but is also divisible by 100, then it is not a leap year, unless it is also divisible by 400, in which case it is.
</p>
Hint3:
<p style=color:white>
(year % 4 == 0) and (not (year % 100 == 0) or (year % 400 == 0))
</p>
Iterative Control
An iterative control statement is a control statement providing the repeated execution of a set of instructions.
Because of the repeated execution, iterative control structures are commonly referred to as "loops" and that's how I am going to name them :)
A while statement is an iterative control statement that repeatedly executes a set of statements based on a provided Boolean expression (condition).
Syntax:
while condition:
statement
End of explanation
"""
import time
n = 10
tot = 0
i = 1
while i <= n:
tot = tot + i
i = i + 1
print(tot)
time.sleep(2)
n = 100
tot = 0
while True:
tot = tot + n
n = n - 1
if n == 0:
break
print(tot)
"""
Explanation: As long as the condition of a while statement is true, the statements within the loop are (re)executed.
End of explanation
"""
|
ocelot-collab/ocelot | demos/ipython_tutorials/5_CSR.ipynb | gpl-3.0 | # the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
from time import time
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# load beam distribution
# this function convert CSRtrack beam distribution to Ocelot format
# - ParticleArray. ParticleArray is designed for tracking.
# in order to work with converters we have to import
# specific module from ocelot.adaptors
from ocelot.adaptors.csrtrack2ocelot import *
"""
Explanation: This notebook was created by Sergey Tomin (sergey.tomin@desy.de). Source and license info is on GitHub. January 2018.
Tutorial N5. Coherent Synchrotron Radiation.
Second-order tracking of 200k particles with the CSR effect.
As an example, we will use bunch compressor BC2 of the European XFEL Injector.
The CSR module uses a fast "projected" 1-D method from CSRtrack code and follows the approach presented in {Saldin et al 1998, Dohlus 2003, Dohlus 2004}. The particle tracking uses matrices up to the second order. CSR wake is calculated continuously through beam lines of arbitrary flat geometry. The transverse self-forces are neglected completely. The method calculates the longitudinal self-field of a one-dimensional beam that is obtained by a projection of the "real" three-dimensional beam onto a reference trajectory. A smooth one-dimensional charge density is calculated by binning and filtering, which is crucial for the stability and accuracy of the simulation, since the instability is sensitive to high frequency components in the charge density.
This example will cover the following topics:
Initialization of the CSR object and the places where it is applied
Second-order tracking with the CSR effect.
Requirements
in.fmt1 - input file, initial beam distribution in CSRtrack format (was obtained from s2e simulation performed with ASTRA/CSRtrack).
out.fmt1 - output file, beam distribution after BC2 bunch compressor (was obtained with CSRtrack)
End of explanation
"""
# load and convert CSRtrack file to OCELOT beam distribution
# p_array_i = csrtrackBeam2particleArray("in.fmt1", orient="H")
# save ParticleArray to compressed numpy array
# save_particle_array("test.npz", p_array_i)
p_array_i = load_particle_array("csr_beam.npz")
# show the longitudinal phase space
plt.plot(-p_array_i.tau()*1000, p_array_i.p(), "r.")
plt.xlabel("S, mm")
plt.ylabel("dE/pc")
"""
Explanation: Load beam distribution from CSRtrack format
End of explanation
"""
b1 = Bend(l = 0.5001, angle=-0.0336, e1=0.0, e2=-0.0336, gap=0, tilt=0, eid='BB.393.B2')
b2 = Bend(l = 0.5001, angle=0.0336, e1=0.0336, e2=0.0, gap=0, tilt=0, eid='BB.402.B2')
b3 = Bend(l = 0.5001, angle=0.0336, e1=0.0, e2=0.0336, gap=0, tilt=0, eid='BB.404.B2')
b4 = Bend(l = 0.5001, angle=-0.0336, e1=-0.0336, e2=0.0, gap=0, tilt=0, eid='BB.413.B2')
d_slope = Drift(l=8.5/np.cos(b2.angle))
start_csr = Marker()
stop_csr = Marker()
# define cell frome the bends and drifts
cell = [start_csr, Drift(l=0.1), b1 , d_slope , b2, Drift(l=1.5),
b3, d_slope, Marker(), b4, Drift(l= 1.), stop_csr]
"""
Explanation: create BC2 lattice
End of explanation
"""
# initialization of tracking method
method = MethodTM()
# for second order tracking we have to choose SecondTM
method.global_method = SecondTM
# for first order tracking uncomment next line
# method.global_method = TransferMap
lat = MagneticLattice(cell, method=method)
"""
Explanation: Initialization tracking method and MagneticLattice object
End of explanation
"""
csr = CSR(n_bin=300, m_bin=5, sigma_min=.2e-6)
"""
Explanation: Create CSR object
End of explanation
"""
navi = Navigator(lat)
# track without CSR effect
p_array_no = deepcopy(p_array_i)
print("\n tracking without CSR effect .... ")
start = time()
tws_no, p_array_no = track(lat, p_array_no, navi)
print("\n time exec:", time() - start, "sec")
# again create Navigator with needed step in [m]
navi = Navigator(lat)
navi.unit_step = 0.5 # m
# add csr process to navigator with start and stop elements
navi.add_physics_proc(csr, start_csr, lat.sequence[-1])
# tracking
start = time()
p_array_csr = deepcopy(p_array_i)
print("\n tracking with CSR effect .... ")
tws_csr, p_array_csr = track(lat, p_array_csr, navi)
print("\n time exec:", time() - start, "sec")
# recalculate reference particle
from ocelot.cpbd.beam import *
recalculate_ref_particle(p_array_csr)
recalculate_ref_particle(p_array_no)
# load and convert CSRtrack file to OCELOT beam distribution
# distribution after BC2
# p_array_out = csrtrackBeam2particleArray("out.fmt1", orient="H")
# save ParticleArray to compressed numpy array
# save_particle_array("scr_track.npz", p_array_out)
p_array_out = load_particle_array("scr_track.npz")
# standard matplotlib functions
plt.figure(2, figsize=(10, 6))
plt.subplot(121)
plt.plot(p_array_no.tau()*1000, p_array_no.p(), 'g.', label="OCELOT no CSR")
plt.plot(p_array_csr.tau()*1000, p_array_csr.p(), 'r.', label="OCELOT CSR")
plt.plot(p_array_out.tau()*1000, p_array_out.p(), 'b.', label="CSRtrack")
plt.legend(loc=3)
plt.xlabel("s, mm")
plt.ylabel("dE/pc")
plt.grid(True)
plt.subplot(122)
plt.plot(p_array_no.tau()*1000, p_array_no.p(), 'g.', label="Ocelot no CSR")
plt.plot(p_array_out.tau()*1000, p_array_out.p(), 'b.', label="CSRtrack")
plt.plot(p_array_csr.tau()*1000, p_array_csr.p(), 'r.', label="OCELOT CSR")
plt.legend(loc=3)
plt.xlabel("s, mm")
plt.ylabel("dE/pc")
plt.grid(True)
plt.savefig("arcline_traj.png")
"""
Explanation: Track particles with and without CSR effect
End of explanation
"""
|
datapolitan/lede_algorithms | class6_1/cluster_crime.ipynb | gpl-2.0 | # Imports used in this excerpt (assumed; the notebook's import cell is not shown here)
import csv
from scipy.spatial import distance
from sklearn.cluster import KMeans, DBSCAN
data = list(csv.DictReader(open('data/columbia_crime.csv', 'r').readlines()))
# This part just splits out the latitude and longitude coordinate fields for each incident, which we need for mapping.
coords = [(float(d['lat']), float(d['lng'])) for d in data if len(d['lat']) > 0]
print(coords[:10])
# And this creates a matching array of incident types
types = [d['ExtNatureDisplayName'] for d in data]
print(types[:10])
"""
Explanation: Preparing the data
After we import our CSV of crime data, we need to do a couple things to get it ready for clustering: extracting the coordinate pairs that we want to cluster, and pulling together some simple labels so we know which indcident each point refers to.
End of explanation
"""
number_of_clusters = 3
kmeans = KMeans(n_clusters=number_of_clusters)
kmeans.fit(coords)
clusters_to_csv(kmeans.labels_, types, coords)
"""
Explanation: K-means clustering
Here we'll review the idea of k-means clustering you discussed last week and see how it applies to our crime data. We'll start with three clusters.
End of explanation
"""
number_of_clusters = 10
kmeans = KMeans(n_clusters=number_of_clusters)
kmeans.fit(coords)
clusters_to_csv(kmeans.labels_, types, coords)
"""
Explanation: The data comes out in the format of cluster_id,incident_type,lat,lng. If we save it to a csv file, we can load it into Google's simple map viewer tool to see how it looks.
As you can see, segmenting the data into only three clusters doesn't give us anything useful. Let's try a bigger number.
End of explanation
"""
# We're dealing in unprojected coordinates, so this basically refers to a fraction of a degree of lat/lng.
EPS = 0.02
"""
Explanation: These clusters are arguably more useful, but it's also clear that k-means might not be the best tool for figuring out our density-based crime clusters. Let's try another approach.
DBSCAN
Unlike K-Means clustering, which requires us to define the number of clusters we want in advance, DBSCAN is a density-based clustering algorithm that works by finding points that are close together (given an input parameter, known as epsilon).
As we'll see in class, the intuition of the algorithm is relatively simple to understand. A functioning, documented implementation is here if you want to explore further.
Since we'll be using scikit-learn's implementation, though, we can invoke it in almost exactly the same way that we invoked K-Means. We'll start by establishing a constant for epsilon.
End of explanation
"""
distance_matrix = distance.squareform(distance.pdist(coords))
print(distance_matrix.shape)
print(distance_matrix)
"""
Explanation: DBSCAN requires a pre-processing step that K-Means doesn't: converting our "coords" array into a pairwise distance matrix that shows how far every point is away from every other point. Scipy's pdist and squareform functions can handle this for us:
End of explanation
"""
# Fit DBSCAN in the same way we fit K-Means, using the EPS parameter and distance matrix established above
db = DBSCAN(eps=EPS)
db.fit(distance_matrix)
# Now print the results
clusters_to_csv(db.labels_, types, coords)
"""
Explanation: Each entry in the matrix shows how far each of our 1,995 points is from each of the other points in the dataset, in the unit of latitude and longitude degrees. The distances between points are key for DBSCAN to compute densities. But this also exposes one of its weaknesses: it can take a long time to run DBSCAN on huge datasets.
End of explanation
"""
|
bhermanmit/openmc | docs/source/examples/mgxs-part-iii.ipynb | mit | import math
import pickle
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
from openmc.openmoc_compatible import get_openmoc_geometry
import openmoc
import openmoc.process
from openmoc.materialize import load_openmc_mgxs_lib
%matplotlib inline
"""
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs.Library class. The Library class is designed to automate the calculation of multi-group cross sections for use cases with one or more domains, cross section types, and/or nuclides. In particular, this Notebook illustrates the following features:
Calculation of multi-group cross sections for a fuel assembly
Automated creation, manipulation and storage of MGXS with openmc.mgxs.Library
Validation of multi-group cross sections with OpenMOC
Steady-state pin-by-pin fission rates comparison between OpenMC and OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
"""
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
b10 = openmc.Nuclide('B10')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
zr90 = openmc.Nuclide('Zr90')
"""
Explanation: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
"""
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
water.add_nuclide(b10, 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
"""
Explanation: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pins.
End of explanation
"""
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, water, zircaloy))
# Export to "materials.xml"
materials_file.export_to_xml()
"""
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
"""
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
"""
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
"""
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
"""
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
"""
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
"""
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
"""
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
"""
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
"""
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:,:] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
"""
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
"""
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
"""
# Create Geometry and set root Universe
geometry = openmc.Geometry()
geometry.root_universe = root_universe
# Export to "geometry.xml"
geometry.export_to_xml()
"""
Explanation: We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
End of explanation
"""
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.pixels = [250, 250]
plot.width = [-10.71*2, -10.71*2]
plot.color = 'mat'
# Instantiate a Plots object, add Plot, and export to "plots.xml"
plot_file = openmc.Plots([plot])
plot_file.export_to_xml()
"""
Explanation: Let us also create a Plots file that we can use to verify that our fuel assembly geometry was created successfully.
End of explanation
"""
# Run openmc in plotting mode
openmc.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png')
"""
Explanation: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
End of explanation
"""
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
"""
Explanation: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
"""
# Initialize an 2-group MGXS Library for OpenMOC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups
"""
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our the fuel assembly geometry.
End of explanation
"""
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['transport', 'nu-fission', 'fission', 'nu-scatter matrix', 'chi']
"""
Explanation: Now, we must specify to the Library which types of cross sections to compute. In particular, the following are the multi-group cross section MGXS subclasses that are mapped to string codes accepted by the Library class:
TotalXS ("total")
TransportXS ("transport" or "nu-transport with nu set to True)
AbsorptionXS ("absorption")
CaptureXS ("capture")
FissionXS ("fission" or "nu-fission" with nu set to True)
KappaFissionXS ("kappa-fission")
ScatterXS ("scatter" or "nu-scatter" with nu set to True)
ScatterMatrixXS ("scatter matrix" or "nu-scatter matrix" with nu set to True)
Chi ("chi")
ChiPrompt ("chi prompt")
InverseVelocity ("inverse-velocity")
PromptNuFissionXS ("prompt-nu-fission")
DelayedNuFissionXS ("delayed-nu-fission")
ChiDelayed ("chi-delayed")
Beta ("beta")
In this case, let's create the multi-group cross sections needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we will define "transport", "nu-fission", '"fission", "nu-scatter matrix" and "chi" cross sections for our Library.
Note: A variety of different approximate transport-corrected total multi-group cross sections (and corresponding scattering matrices) can be found in the literature. At the present time, the openmc.mgxs module only supports the "P0" transport correction. This correction can be turned on and off through the boolean Library.correction property which may take values of "P0" (default) or None.
End of explanation
"""
# Specify a "cell" domain type for the cross section tally filters
mgxs_lib.domain_type = 'cell'
# Specify the cell domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_material_cells().values()
"""
Explanation: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. We will use a "cell" domain type here to compute cross sections in each of the cells in the fuel assembly geometry.
Note: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell or universe) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property. In our case, we wish to compute multi-group cross sections in each and every cell since they will be needed in our downstream OpenMOC calculation on the identical combinatorial geometry mesh.
End of explanation
"""
# Compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = True
"""
Explanation: We can easily instruct the Library to compute multi-group cross sections on a nuclide-by-nuclide basis with the boolean Library.by_nuclide property. By default, by_nuclide is set to False, but we will set it to True here.
End of explanation
"""
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
"""
Explanation: Lastly, we use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain and nuclide.
End of explanation
"""
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
"""
Explanation: The tallies can now be export to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects. The overhead to tally in OpenMC scales as $O(N)$ for $N$ tallies, which can become a bottleneck for large tally datasets. To compensate for this, the Python API's Tally, Filter and Tallies classes allow for the smart merging of tallies when possible. The Library class supports this runtime optimization with the use of the optional merge parameter (False by default) for the Library.add_to_tallies_file(...) method, as shown below.
End of explanation
"""
# Instantiate a tally Mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission', 'nu-fission']
# Add tally to collection
tallies_file.append(tally)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
"""
Explanation: In addition, we instantiate a fission rate mesh tally to compare with OpenMOC.
End of explanation
"""
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
"""
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
"""
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
"""
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
"""
# Retrieve the NuFissionXS object for the fuel cell from the library
fuel_mgxs = mgxs_lib.get_mgxs(fuel_cell, 'nu-fission')
"""
Explanation: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
The Library supports a rich API to automate a variety of tasks, including multi-group cross section data retrieval and storage. We will highlight a few of these features here. First, the Library.get_mgxs(...) method allows one to extract an MGXS object from the Library for a particular domain and cross section type. The following cell illustrates how one may extract the NuFissionXS object for the fuel cell.
Note: The MGXS.get_mgxs(...) method will accept either the domain or the integer domain ID of interest.
End of explanation
"""
df = fuel_mgxs.get_pandas_dataframe()
df
"""
Explanation: The NuFissionXS object supports all of the methods described previously in the openmc.mgxs tutorials, such as Pandas DataFrames:
Note that since so few histories were simulated, we should expect a few division-by-error errors as some tallies have not yet scored any results.
End of explanation
"""
fuel_mgxs.print_xs()
"""
Explanation: Similarly, we can use the MGXS.print_xs(...) method to view a string representation of the multi-group cross section data.
End of explanation
"""
# Store the cross section data in an "mgxs/mgxs.h5" HDF5 binary file
mgxs_lib.build_hdf5_store(filename='mgxs.h5', directory='mgxs')
"""
Explanation: One can export the entire Library to HDF5 with the Library.build_hdf5_store(...) method as follows:
End of explanation
"""
# Store a Library and its MGXS objects in a pickled binary file "mgxs/mgxs.pkl"
mgxs_lib.dump_to_file(filename='mgxs', directory='mgxs')
# Instantiate a new MGXS Library from the pickled binary file "mgxs/mgxs.pkl"
mgxs_lib = openmc.mgxs.Library.load_from_file(filename='mgxs', directory='mgxs')
"""
Explanation: The HDF5 store will contain the numerical multi-group cross section data indexed by domain, nuclide and cross section type. Some data workflows may be optimized by storing and retrieving binary representations of the MGXS objects in the Library. This feature is supported through the Library.dump_to_file(...) and Library.load_from_file(...) routines which use Python's pickle module. This is illustrated as follows.
End of explanation
"""
# Create a 1-group structure
coarse_groups = openmc.mgxs.EnergyGroups(group_edges=[0., 20.0e6])
# Create a new MGXS Library on the coarse 1-group structure
coarse_mgxs_lib = mgxs_lib.get_condensed_library(coarse_groups)
# Retrieve the NuFissionXS object for the fuel cell from the 1-group library
coarse_fuel_mgxs = coarse_mgxs_lib.get_mgxs(fuel_cell, 'nu-fission')
# Show the Pandas DataFrame for the 1-group MGXS
coarse_fuel_mgxs.get_pandas_dataframe()
"""
Explanation: The Library class may be used to leverage the energy condensation features supported by the MGXS class. In particular, one can use the Library.get_condensed_library(...) with a coarse group structure which is a subset of the original "fine" group structure as shown below.
End of explanation
"""
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(mgxs_lib.geometry)
"""
Explanation: Verification with OpenMOC
Of course it is always a good idea to verify that one's cross sections are accurate. We can easily do so here with the deterministic transport code OpenMOC. We first construct an equivalent OpenMOC geometry.
End of explanation
"""
# Load the library into the OpenMOC geometry
materials = load_openmc_mgxs_lib(mgxs_lib, openmoc_geometry)
"""
Explanation: Now, we can inject the multi-group cross sections into the equivalent fuel assembly OpenMOC geometry. The openmoc.materialize module supports the loading of Library objects from OpenMC as illustrated below.
End of explanation
"""
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=32, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
"""
Explanation: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
End of explanation
"""
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
"""
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
"""
# Get the OpenMC fission rate mesh tally data
mesh_tally = sp.get_tally(name='mesh tally')
openmc_fission_rates = mesh_tally.get_values(scores=['nu-fission'])
# Reshape array to 2D for plotting
openmc_fission_rates.shape = (17,17)
# Normalize to the average pin power
openmc_fission_rates /= np.mean(openmc_fission_rates)
"""
Explanation: There is a non-trivial bias between the eigenvalues computed by OpenMC and OpenMOC. One can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Flux and Pin Power Visualizations
We will conclude this tutorial by illustrating how to visualize the fission rates computed by OpenMOC and OpenMC. First, we extract volume-integrated fission rates from OpenMC's mesh fission rate tally for each pin cell in the fuel assembly.
End of explanation
"""
# Create OpenMOC Mesh on which to tally fission rates
openmoc_mesh = openmoc.process.Mesh()
openmoc_mesh.dimension = np.array(mesh.dimension)
openmoc_mesh.lower_left = np.array(mesh.lower_left)
openmoc_mesh.upper_right = np.array(mesh.upper_right)
openmoc_mesh.width = openmoc_mesh.upper_right - openmoc_mesh.lower_left
openmoc_mesh.width /= openmoc_mesh.dimension
# Tally OpenMOC fission rates on the Mesh
openmoc_fission_rates = openmoc_mesh.tally_fission_rates(solver)
openmoc_fission_rates = np.squeeze(openmoc_fission_rates)
openmoc_fission_rates = np.fliplr(openmoc_fission_rates)
# Normalize to the average pin fission rate
openmoc_fission_rates /= np.mean(openmoc_fission_rates)
"""
Explanation: Next, we extract OpenMOC's volume-averaged fission rates into a 2D 17x17 NumPy array.
End of explanation
"""
# Ignore zero fission rates in guide tubes with Matplotlib color scheme
openmc_fission_rates[openmc_fission_rates == 0] = np.nan
openmoc_fission_rates[openmoc_fission_rates == 0] = np.nan
# Plot OpenMC's fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(openmc_fission_rates, interpolation='none', cmap='jet')
plt.title('OpenMC Fission Rates')
# Plot OpenMOC's fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(openmoc_fission_rates, interpolation='none', cmap='jet')
plt.title('OpenMOC Fission Rates')
"""
Explanation: Now we can easily use Matplotlib to visualize the fission rates from OpenMC and OpenMOC side-by-side.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | CPB100/lab4c/mlapis.ipynb | apache-2.0 | # Use the chown command to change the ownership of repository to user
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
APIKEY="CHANGE-THIS-KEY" # Replace with your API key
"""
Explanation: <h1> Using Machine Learning APIs </h1>
First, visit <a href="http://console.cloud.google.com/apis">API console</a>, choose "Credentials" on the left-hand menu. Choose "Create Credentials" and generate an API key for your application. You should probably restrict it by IP address to prevent abuse, but for now, just leave that field blank and delete the API key after trying out this demo.
Copy-paste your API Key here:
End of explanation
"""
# Running the Translate API
from googleapiclient.discovery import build
service = build('translate', 'v2', developerKey=APIKEY)
# Use the service
inputs = ['is it really this easy?', 'amazing technology', 'wow']
outputs = service.translations().list(source='en', target='fr', q=inputs).execute()
# Print outputs
for input, output in zip(inputs, outputs['translations']):
print("{0} -> {1}".format(input, output['translatedText']))
"""
Explanation: <b> Note: Make sure you generate an API Key and replace the value above. The sample key will not work.</b>
<h2> Invoke Translate API </h2>
End of explanation
"""
# Running the Vision API
import base64
IMAGE="gs://cloud-training-demos/vision/sign2.jpg"
vservice = build('vision', 'v1', developerKey=APIKEY)
request = vservice.images().annotate(body={
'requests': [{
'image': {
'source': {
'gcs_image_uri': IMAGE
}
},
'features': [{
'type': 'TEXT_DETECTION',
'maxResults': 3,
}]
}],
})
responses = request.execute(num_retries=3)
# Let's output the `responses`
print(responses)
foreigntext = responses['responses'][0]['textAnnotations'][0]['description']
foreignlang = responses['responses'][0]['textAnnotations'][0]['locale']
# Let's output the `foreignlang` and `foreigntext`
print(foreignlang, foreigntext)
"""
Explanation: <h2> Invoke Vision API </h2>
The Vision API can work off an image in Cloud Storage or embedded directly into a POST message. I'll use Cloud Storage and do OCR on this image: <img src="https://storage.googleapis.com/cloud-training-demos/vision/sign2.jpg" width="200" />. That photograph is from http://www.publicdomainpictures.net/view-image.php?image=15842
End of explanation
"""
inputs=[foreigntext]
outputs = service.translations().list(source=foreignlang, target='en', q=inputs).execute()
# Print the outputs
for input, output in zip(inputs, outputs['translations']):
print("{0} -> {1}".format(input, output['translatedText']))
"""
Explanation: <h2> Translate sign </h2>
End of explanation
"""
# Evaluating the sentiment with Google Cloud Natural Language API
lservice = build('language', 'v1beta1', developerKey=APIKEY)
quotes = [
'To succeed, you must have tremendous perseverance, tremendous will.',
'It’s not that I’m so smart, it’s just that I stay with problems longer.',
'Love is quivering happiness.',
'Love is of all passions the strongest, for it attacks simultaneously the head, the heart, and the senses.',
'What difference does it make to the dead, the orphans and the homeless, whether the mad destruction is wrought under the name of totalitarianism or in the holy name of liberty or democracy?',
'When someone you love dies, and you’re not expecting it, you don’t lose her all at once; you lose her in pieces over a long time - the way the mail stops coming, and her scent fades from the pillows and even from the clothes in her closet and drawers. '
]
for quote in quotes:
# The `documents.analyzeSentiment` method analyzes the sentiment of the provided text.
response = lservice.documents().analyzeSentiment(
body={
'document': {
'type': 'PLAIN_TEXT',
'content': quote
}
}).execute()
polarity = response['documentSentiment']['polarity']
magnitude = response['documentSentiment']['magnitude']
# Let's output the value of each `polarity`, `magnitude` and `quote`
print('POLARITY=%s MAGNITUDE=%s for %s' % (polarity, magnitude, quote))
"""
Explanation: <h2> Sentiment analysis with Language API </h2>
Let's evaluate the sentiment of some famous quotes using Google Cloud Natural Language API.
End of explanation
"""
# Using the Speech API by passing audio file as an argument
sservice = build('speech', 'v1', developerKey=APIKEY)
# The `speech.recognize` method performs synchronous speech recognition.
# It receive results after all audio has been sent and processed.
response = sservice.speech().recognize(
body={
'config': {
'languageCode' : 'en-US',
'encoding': 'LINEAR16',
'sampleRateHertz': 16000
},
'audio': {
'uri': 'gs://cloud-training-demos/vision/audio.raw'
}
}).execute()
# Let's output the value of `response`
print(response)
print(response['results'][0]['alternatives'][0]['transcript'])
# Let's output the value of `confidence`
print('Confidence=%f' % response['results'][0]['alternatives'][0]['confidence'])
"""
Explanation: <h2> Speech API </h2>
The Speech API can work on streaming data, audio content encoded and embedded directly into the POST message, or on a file on Cloud Storage. Here I'll pass in this <a href="https://storage.googleapis.com/cloud-training-demos/vision/audio.raw">audio file</a> in Cloud Storage.
End of explanation
"""
|
Smith42/neuralnet-mcg | CNNs/MCG-ProcessData-3D.ipynb | gpl-3.0 | # Imports used in this excerpt (assumed; the notebook's import cell is not shown here).
# Note: mutualShuf, used below, is a helper assumed to be defined elsewhere
# (presumably shuffling the two arrays in unison).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import animation
from sklearn.preprocessing import normalize
k = 1 # How many folds in the k-fold x-validation
## I used this to save the array in a smaller file so it doesn't eat all my ram
# df60 = pd.read_pickle("./inData/6060DF_MFMts.pkl")
# coilData = df60["MFMts"].as_matrix()
# ziData = np.zeros([400,2000,19,17])
#
# for i in np.arange(400):
# for j in np.arange(2000):
# ziData[i,j] = np.array(coilData[i][j][2])
#
# np.save("./inData/ziData.dat", ziData)
#
# ziClass = df60["Classification"].as_matrix()
# np.save("./inData/ziClass.dat", ziClass)
def splitData(coilData, classData):
"""
Split data into healthy and ill types.
"""
illData = []
healthData = []
for index, item in enumerate(classData):
if item == 1:
illData.append(coilData[index])
if item == 0:
healthData.append(coilData[index])
return illData, healthData
classData = np.load("./inData/ziClass.npy")
coilData = np.load("./inData/ziData.npy")
# Normalise coilData for each unit time
for i in np.arange(coilData.shape[0]):
for j in np.arange(coilData.shape[1]):
coilData[i,j] = normalize(coilData[i,j], axis=1)
illData, healthData = splitData(coilData, classData)
if k == 1:
illUnseen = np.array(illData[:20])
healthUnseen = np.array(healthData[:20])
illData = np.array(illData[20:])
healthData = np.array(healthData[20:])
print(illData.shape, healthData.shape,"\n", illUnseen.shape, healthUnseen.shape)
else:
illData = np.array(illData)
healthData = np.array(healthData)
print(illData.shape, healthData.shape)
def processClassData(classData):
"""
Process classData.
Returns a one-hot array of shape [len(classData), 2].
"""
# Convert label data to one-hot array
classDataOH = np.zeros((len(classData),2))
classDataOH[np.arange(len(classData)), classData] = 1
return classDataOH
def functionTown(illArr, healthArr, shuffle):
"""
Return the processed ecgData and the classData (one-hot). Also return arrays of ill and healthy ppts.
If shuffle is true, shuffle data.
"""
print("ill samples", len(illArr))
print("healthy samples", len(healthArr))
classData = []
for i in np.arange(0, len(illArr), 1):
classData.append(1)
for i in np.arange(0, len(healthArr), 1):
classData.append(0)
ecgData = np.reshape(np.append(illArr, healthArr), (-1, 2000, 19, 17))
if shuffle == True:
classData, ecgData = mutualShuf(np.array(classData), ecgData, random_state=0)
classDataOH = processClassData(classData)
return np.array(ecgData), classDataOH, classData
ecgData, classDataOH, classData = functionTown(illData, healthData, True)
# Reintegrate the found values...
ecgData = np.reshape(ecgData, (-1,2000,19,17))
if k != 1:
# Split ecgData into k sets so we can perform k-fold cross validation:
kfoldData = np.array_split(ecgData, k)
kfoldLabelsOH = np.array_split(classDataOH, k)
kfoldLabels = np.array_split(classData, k)
# Get the unseen data:
if k == 1:
unseenData, unseenClassOH, unseenClass = functionTown(illUnseen, healthUnseen, True)
#unseenData = np.cumsum(unseenData, axis=2)
unseenData = np.reshape(unseenData, (-1,2000,19,17))
iUnseen, hUnseen = splitData(unseenData, unseenClass)
unseenHL = np.tile([1,0], (len(hUnseen), 1))
unseenIL = np.tile([0,1], (len(iUnseen), 1))
np.save("./inData/3D-conv/ecgData", ecgData)
np.save("./inData/3D-conv/unseenData", unseenData)
np.save("./inData/3D-conv/ecgClass", classData)
np.save("./inData/3D-conv/unseenClass", unseenClass)
"""
Explanation: This ipynb processes the data for use in the neural net found at MCG-CNN-3D.ipynb
End of explanation
"""
ecgData = np.load("./inData/3D-conv/ecgData.npy")
unseenData = np.load("./inData/3D-conv/unseenData.npy")
fig, ax = plt.subplots(3,3)
k = 200
fig.suptitle("Still frames of 3D MCG data")
for i in np.arange(3):
for j in np.arange(3):
ax[i,j].set_title("%s ms"%(k/2), fontsize=7)
ax[i,j].imshow(ecgData[100,k], cmap="gray")
ax[i,j].axis("off")
k = k + 200
plt.savefig("/tmp/3Dmcg_frames.pdf")
ppt=100
fig = plt.figure()
data = ecgData[ppt,0]
im = plt.imshow(data)
def animate(i):
data = ecgData[ppt,i]
im.set_data(data)
return im
plt.axis("off")
plt.title("Example of 3D (two space, \n 1 time dimension) MCG data used")
anim = animation.FuncAnimation(fig, animate, frames=np.arange(2000)[::10], repeat=False)
anim.save("/tmp/3D-Data-Example.mp4")
"""
Explanation: Make a visualisation of the data
End of explanation
"""
|
mathnathan/notebooks | Linear vs Nonlinear Least Squares.ipynb | mit | #%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter((1,2,2.5), (2,1,2)); plt.xlim((0,3)); plt.ylim((0,3));
"""
Explanation: Linear Least Squares
This is the most common form of linear regression. Let's look at a concrete example...
Let us assume we would like to fit a line to the following three points
$$\{(x_i,y_i)\} = \{(1,2),\ (2,1),\ (2.5,2)\}$$
End of explanation
"""
import numpy as np
A = np.array(((1,1),(2,1),(2.5,1))); b = np.array((2,1,2)) # Create A and b
x = np.dot(np.dot(np.linalg.inv(np.dot(A.T, A)), A.T), b) # Project b onto Col(A)
xvals = np.linspace(0,3,100) # Create a set of x values
yvals = x[0]*xvals + x[1] # All y values for the equation of the line
plt.scatter((1,2,2.5), (2,1,2)); plt.plot(xvals,yvals); plt.xlim((0,3)); plt.ylim((0,3));
"""
Explanation: Another way to express this problem is to say, I would like to find the equation of a line that satisfies all of the above points. Take the following general equation of a line...
$$ \alpha x_i + \beta = y_i $$
We would like to find the parameters $\alpha$ and $\beta$ such that the equality is satisfied for all of the points $(x_i, y_i)$. This can be expressed as a system of equations.
$$\begin{array}{lcl} \alpha (1)+\beta & = & 2 \\ \alpha(2)+ \beta & = & 1 \\ \alpha(2.5)+ \beta & = & 2 \end{array}$$
Now because each equation in the system is linear, which I will define in a bit, this system of equations can be expressed in matrix form using Linear Algebra!
$$\begin{bmatrix} 1 & 1 \\ 2 & 1 \\ 2.5 & 1 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$$
The ideal objective for this overdetermined system is to find the values of $\alpha$ and $\beta$ that make the two columns of the matrix add up to the right hand side, i.e.
$$\alpha\begin{bmatrix} 1 \\ 2 \\ 2.5 \end{bmatrix} + \beta\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$$
In linear algebra notation we express this problem more succinctly as follows
$$A\vec{x} = \vec{b}$$
where
$$A = \begin{bmatrix} 1 & 1 \\ 2 & 1 \\ 2.5 & 1 \end{bmatrix} \hspace{10pt} \vec{x} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \hspace{10pt} \vec{b} = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$$
We know however, via our impressive powers of logic, that there does not exist an equation of a line that can pass through all of the points above, because the points do not lie along a line. In otherwords, there are not such $\alpha$ and $\beta$ that satisfy all of the equations simultaneously. In linear algebra lingo, we say that $\vec{b}$ does not lie in the column space of $A$. Since there is no exact solution, given a value of $\vec{x}$ we can express how far it is from the ideal solution as follows.
$$||\vec{r}|| = ||A\vec{x} - \vec{b}||$$
Given this definition of error we seek to find the "best" solution, $\hat{x}$. We define the best solution to be the values of $\alpha$ and $\beta$ that minimize the magnitude of $||\vec{r}||$, i.e. the error.
$$\hat{x} = \arg\min_{\vec{x}}{||\vec{r}||} = \arg\min_{\vec{x}}{||A\vec{x} - \vec{b}||}$$
As far as theory is concerned, this is an extremely well-posed problem. It can be shown that the loss is a convex paraboloid in the parameters with a single global minimum. Even more, because the problem is linear we can solve it directly in one formula
$$\hat{x} = (A^TA)^{-1}A^T\vec{b}$$
For those interested, since $\vec{b}$ is not in the columnspace of $A$, this formula says the "best" solution is the projection of $\vec{b}$ onto the columnspace. Interestingly this is equivalent to solving the above minimization problem. In practice however, it is not very stable or efficient to solve it directly like this. We now plot the line to see how close it is to the points.
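Regarding that stability remark, a common alternative is to let a least-squares solver handle the problem instead of forming $(A^TA)^{-1}$ explicitly. The following is a small editorial sketch (not part of the original notebook) using the same $A$ and $\vec{b}$:
```python
import numpy as np

A = np.array(((1, 1), (2, 1), (2.5, 1)))  # same design matrix as above
b = np.array((2, 1, 2))

# np.linalg.lstsq minimizes ||Ax - b|| with an SVD-based routine,
# which avoids explicitly inverting A^T A.
x_hat, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)  # estimated slope and intercept
```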
End of explanation
"""
A = np.array(((1,1,1),(4,2,1),(6.25,2.5,1))) # The matrix for our new 3x3 system of equations.
x = np.dot(np.dot(np.linalg.inv(np.dot(A.T, A)), A.T), b) # Project b onto Col(A)
error = np.linalg.norm( np.dot(A,x) - b )
print "Error = ", error
"""
Explanation: NOW, let's assume that instead of fitting a line we wanted to fit a parabola. This is still a linear least squares problem. That's because linear least squares only requires that the function being fit is linear in its parameters. We will look more at what that means below. Let's take a general quadratic equation.
$$\alpha x_i^2 + \beta x_i + \gamma = y_i$$
Now we have three degrees of freedom and must fit all 3 parameters. We pose this problem the same way as above. We want the quadratic equation to satisfy all of the points $(x_i,y_i)$ simultaneously. We want $\alpha$, $\beta$, and $\gamma$ such that all of the below equations are true.
$$\begin{array}{lcl} \alpha (1)^2+\beta(1)+\gamma & = & 2 \\ \alpha(2)^2+ \beta(2) + \gamma & = & 1 \\ \alpha(2.5)^2+ \beta(2.5) + \gamma & = & 2 \end{array}$$
In matrix form...
$$\begin{bmatrix} 1 & 1 & 1 \\ 4 & 2 & 1 \\ 6.25 & 2.5 & 1 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$$
This time, there does exist a unique solution. A quadratic equation has 3 degrees of freedom and there are 3 constraints posed. Our good friend Gauss proved that $n$ distinct points uniquely define a polynomial of degree $n-1$. So we will find the "best" solution using the above technique and show that the error is zero.
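As an editorial side note (not in the original notebook), NumPy can build this Vandermonde-style matrix and interpolate the three points directly; a small sketch:
```python
import numpy as np

xs = np.array([1.0, 2.0, 2.5])
ys = np.array([2.0, 1.0, 2.0])

# np.vander builds the matrix of powers [x^2, x, 1] row by row.
A = np.vander(xs, 3)
print(np.linalg.solve(A, ys))   # exact coefficients: the 3x3 system is nonsingular

# np.polyfit with degree 2 interpolates the same three points exactly.
print(np.polyfit(xs, ys, 2))
```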
End of explanation
"""
yvals = x[0]*xvals*xvals + x[1]*xvals + x[2] # All y values for the equation of the line
plt.scatter((1,2,2.5), (2,1,2)); plt.plot(xvals,yvals); plt.xlim((0,3)); plt.ylim((0,3));
"""
Explanation: Now we look at the resulting parabola and see that it passes through all 3 points.
End of explanation
"""
|
miti0/mosquito | notebooks/simple_reg_15_feat_sample.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
%matplotlib inline
df = pd.read_csv('simple_reg_15_feat_sample.csv')
df = df.drop(df.columns[[0]], axis=1)
df = df.reset_index(drop=True)
print('data-shape:', df.shape)
df.head()
"""
Explanation: Simple case for regression prediction currency data blueprint
Author: miti0
Date: 2.12.2017
Sa
End of explanation
"""
df.iloc[1:100][['y_plus30', 'y_now']].plot(grid=True, figsize=(12, 8), title='Sample of y_now and y_plus_30');
X = df.drop(['y_plus30', 'y_now'], axis=1)
y = df['y_plus30']
y_real = df['y_now']
X.shape
y.plot(grid=True, figsize=(12, 8));
"""
Explanation: Sample preview of y_now and y_plus30
Here we can see that y_plus30 is y_now price + some offset
End of explanation
"""
X_cv = X.iloc[-500:]
y_cv = y.iloc[-500:].as_matrix()
y_real_cv = y_real.iloc[-500:].as_matrix()
X = X.iloc[:-500]
y = y.iloc[:-500]
"""
Explanation: Setting apart some data for cross validation
End of explanation
"""
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn import linear_model
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.neural_network import MLPRegressor
#poly = PolynomialFeatures(degree=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, shuffle=False)
#X_train = poly.fit_transform(X_train)
#X_test = poly.fit_transform(X_test)
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
y_hat = model.predict(X_test)
# Measure
mae = mean_absolute_error(y_test, y_hat)
mse = mean_squared_error(y_test, y_hat)
r2 = r2_score(y_test, y_hat)
print('Variance score:', r2)
print('mae:', mae)
print('mse:', mse)
y_test_0 = y_test.reset_index(drop=True)
#print(y_test['y_plus30'])
Y_test_df = pd.DataFrame({'y_test': y_test_0 , 'y_pred_test':y_hat})
Y_test_df.iloc[-50:,].head()
"""
Explanation: Predicting
End of explanation
"""
Y_test_df = pd.DataFrame({'y_test': y_test, 'y_pred_test':y_hat})
Y_test_df.head()
Y_test_df.iloc[1000:1100].plot(figsize=(13, 10), grid=True);
"""
Explanation: Plotting our test results
End of explanation
"""
X_cv = X_cv.reset_index(drop=True)
y_cv_hat = model.predict(X_cv)
Y_cv_df_out = pd.DataFrame({'y_cv_pred': y_cv_hat, 'y_cv':y_cv, 'y_real': y_real_cv})
Y_cv_df_out.head()
#Y_cv_df_out = Y_cv_df.reset_index(drop=True)
Y_cv_df_out.iloc[0:100].plot(figsize=(13, 10), grid=True)
"""
Explanation: Trying our model with our CV dataset
End of explanation
"""
|
atcemgil/notes | swe582-regression.ipynb | mit | import scipy.linalg as la
LL = np.zeros(N)
for rr in range(N):
ss = s*np.ones(N)
ss[rr] = q
D_r = np.diag(1/ss)
V_r = np.dot(np.sqrt(D_r), W)
b = y/np.sqrt(ss)
a_r,re,ra, cond = la.lstsq(V_r, b)
e = (y-np.dot(W, a_r))/np.sqrt(ss)
LL[rr] = -0.5*np.dot(e.T, e)
print(LL[rr])
#plt.plot(x, y, 'o')
#plt.plot(x, np.dot(W, a_r),'-')
#plt.plot(e)
plt.plot(LL)
plt.show()
"""
Explanation: $$
\mathcal{L}(r, a) = \log \mathcal{N}(y_r; W_r a, q) \prod_{i\neq r} \mathcal{N}(y_i; W_i a, s)
$$
$$
\log\mathcal{N}(y_r; W_r a, q) = -\frac{1}{2}\log 2\pi q - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2
$$
$$
\log\mathcal{N}(y_i; W_i a, s) = -\frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{s} (y_i - W_i a)^2
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2 + \sum_{i\neq r} -\frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{s} (y_i - W_i a)^2
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \sum_{i\neq r} \frac{1}{2}\log 2\pi s - \frac{1}{2} \frac{1}{q} (y_r - W_r a)^2 - \frac{1}{2} \frac{1}{s} \sum_{i\neq r} (y_i - W_i a)^2
$$
$$
D_r = \mathrm{diag}(s, s, \dots, q, \dots, s)^{-1}
$$
$$
\mathcal{L}(r, a) = -\frac{1}{2}\log 2\pi q - \frac{N-1}{2}\log 2\pi s - \frac{1}{2} (y - Wa)^\top D_r (y - Wa)
$$
$$
\mathcal{L}(r, a) =^+ - \frac{1}{2} (y - Wa)^\top D_r (y - Wa)
$$
\begin{eqnarray}
\mathcal{L}(r, a) =^+ - \frac{1}{2} y^\top D_r y + y^\top D_r W a - \frac{1}{2} a^\top W^\top D_r W a
\end{eqnarray}
\begin{eqnarray}
\frac{\partial}{\partial a}\mathcal{L}(r, a) & = & W^\top D_r y - W^\top D_r W a = 0 \\
W^\top D_r y & = & W^\top D_r W a \\
(W^\top D_r W)^{-1} W^\top D_r y & = & a_r^*
\end{eqnarray}
To use standard Least Square solver, we substitute
\begin{eqnarray}
V_r^\top \equiv W^\top D_r^{1/2} \\
V_r \equiv D_r^{1/2} W
\end{eqnarray}
\begin{eqnarray}
(V_r^\top V_r)^{-1} V_r^\top D_r^{1/2} y & = & a_r^*
\end{eqnarray}
In Matlab/Octave this is least square with
\begin{eqnarray}
a_r^* = V_r \backslash D_r^{1/2} y
\end{eqnarray}
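In NumPy/SciPy the same weighted solve can be written with an ordinary least-squares call instead of the Matlab backslash. A minimal editorial sketch, assuming W, y and the per-sample variances ss are defined as in the loop shown earlier:
```python
import numpy as np
import scipy.linalg as la

# D_r^{1/2} has 1/sqrt(ss) on its diagonal, so we can simply scale the
# rows of W and the entries of y instead of building the full matrix.
V_r = W / np.sqrt(ss)[:, None]
b_r = y / np.sqrt(ss)
a_r, res, rank, sv = la.lstsq(V_r, b_r)   # a_r^* = argmin_a ||V_r a - b_r||
```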
End of explanation
"""
import numpy as np
import scipy as sc
import scipy.linalg as la
def cond_Gauss(Sigma, mu, idx1, idx2, x2):
Sigma11 = Sigma[idx1, idx1].reshape((len(idx1),len(idx1)))
Sigma12 = Sigma[idx1, idx2].reshape((len(idx1),len(idx2)))
Sigma22 = Sigma[idx2, idx2].reshape((len(idx2),len(idx2)))
# print(Sigma11)
# print(Sigma12)
# print(Sigma22)
mu1 = mu[idx1]
mu2 = mu[idx2]
G = np.dot(Sigma12, la.inv(Sigma22))
cond_Sig_1 = Sigma11 - np.dot(G, Sigma12.T)
cond_mu_1 = mu1 + np.dot(G, (x2-mu2))
return cond_mu_1, cond_Sig_1
mu = np.array([0,0])
#P = np.array([2])
#A = np.array([1])
idx1 = [0]
idx2 = [1]
x2 = 5
P = np.array(3).reshape((len(idx1), len(idx1)))
A = np.array(-1).reshape((len(idx2), len(idx1)))
rho = np.array(0)
#Sigma = np.array([[P, A*P],[P*A, A*P*A + rho ]])
I = np.eye(len(idx2))
Sigma = np.concatenate((np.concatenate((P,np.dot(P, A.T)),axis=1), np.concatenate((np.dot(A, P),np.dot(np.dot(A, P), A.T ) + rho*I ),axis=1)))
print(Sigma)
#print(mu)
cond_mu_1, cond_Sig_1 = cond_Gauss(Sigma, mu, idx1, idx2, x2)
print('E[x_1|x_2 = {}] = '.format(x2) , cond_mu_1)
print(cond_Sig_1)
"""
Explanation: Todo: Evaluate the likelihood for all polynomial orders $K=1 \dots 8$
$p(x_1, x_2) = \mathcal{N}(\mu, \Sigma)$
$\mu = \left(\begin{array}{c} \mu_{1} \\ \mu_{2} \end{array} \right)$
$\Sigma = \left(\begin{array}{cc} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}^\top & \Sigma_{22} \end{array} \right)$
$
p(x_1 | x_2) = \mathcal{N}(\mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 -\mu_2), \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1}\Sigma_{12}^\top)
$
End of explanation
"""
# Use this code to generate a dataset
N = 30
K = 4
s = 0.1
q = 10*s
x = 2*np.random.randn(N)
e = np.sqrt(s) * np.random.randn(N)
# Create the vandermonde matrix
A = x.reshape((N,1))**np.arange(K).reshape(1,K)
w = np.array([0,-1,0.5,0])
y = np.dot(A, w) + e
plt.plot(x, y, 'o')
#plt.plot(e)
plt.show()
# Sig = [P, A.T; A A*A.T+rho*I]
N1 = 3
N2 = 7
P = np.random.randn(N1,N1)
A = np.random.randn(N2,N1)
#Sig11 = np.mat(P)
#Sig12 = np.mat(A.T)
#Sig21 = np.mat(A)
#Sig22 = Sig21*Sig12
Sig11 = np.mat(P)
Sig12 = np.mat(A.T)
Sig21 = np.mat(A)
Sig22 = Sig21*Sig12
print(Sig11.shape)
print(Sig12.shape)
print(Sig21.shape)
print(Sig22.shape)
W = np.bmat([[Sig11, Sig12],[Sig21, Sig22]])
Sig22.shape
3500*1.18*12
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
x = np.array([3.7, 2.3, 6.9, 7.5])
N = len(x)
lam = np.arange(0.05,10,0.01)
ll = -N*np.log(lam) - np.sum(x)/lam
plt.plot(lam, np.exp(ll))
plt.plot(np.mean(x), 0, 'ok')
plt.show()
xx = np.arange(0, 10, 0.01)
lam = 1000
p = 1/lam*np.exp(-xx/lam)
plt.plot(xx, p)
plt.plot(x, np.zeros((N)), 'ok')
plt.ylim((0,1))
plt.show()
1-(5./6.)**4
1-18/37
import numpy as np
N = 7
A = np.diag(np.ones(7))
ep = 0.5
a = 1
idx = [1, 2, 3, 4, 5, 6, 0]
A = ep*A + (1-ep)*A[:,idx]
C = np.array([[a, 1-a, 1-a, a, a, 1-a, 1-a],[1-a, a, a, 1-a, 1-a, a, a]])
p = np.ones((1,N))/N
print(A)
y = [1, 1, 0, 0, 0]
print(p)
p = C[y[0] , :]*p
print(p/np.sum(p, axis=1))
"""
Explanation: Suppose we are given a data set $(y_i, x_i)$ for $i=1\dots N$
Assume we have a basis regression model (for example a polynomial basis where $f_k(x) = x^k$) and wish to fit
$y_i = \sum_k A_{ik} w_k + \epsilon_i$
for all $i = 1 \dots N$ where
$
A_{ik} = f_k(x_i)
$
Assume the prior
$
w \sim \mathcal{N}(w; 0, P)
$
Derive an expression for $p(y_{\text{new}}| x_{\text{new}}, y_{1:N}, x_{1:N})$ and implement a program that plots the mean and corresponding errorbars (from the standard deviation of $p(y_{\text{new}}| x_{\text{new}}, y_{1:N}, x_{1:N})$) by choosing $x_{\text{new}}$ on a regular grid.
Note that $y_{\text{new}} = \sum f_k(x_{\text{new}}) w_k + \epsilon$
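One possible sketch of such a program (an editorial addition; it assumes the polynomial basis $f_k(x)=x^k$, a known noise variance s, an isotropic prior covariance P, and the x, y, K, s generated in the data cell of this notebook):
```python
import numpy as np
import matplotlib.pylab as plt

def predictive(x_train, y_train, x_grid, K=4, s=0.1, prior_var=10.0):
    # Polynomial design matrices for the training points and the prediction grid
    A = x_train.reshape(-1, 1) ** np.arange(K)
    A_new = x_grid.reshape(-1, 1) ** np.arange(K)
    P = prior_var * np.eye(K)                                # prior covariance of w
    # Posterior over w is Gaussian: S = (P^-1 + A^T A / s)^-1 and m = S A^T y / s
    S = np.linalg.inv(np.linalg.inv(P) + np.dot(A.T, A) / s)
    m = np.dot(S, np.dot(A.T, y_train)) / s
    mean = np.dot(A_new, m)
    # Predictive variance: a_new^T S a_new plus the observation noise s
    var = np.sum(np.dot(A_new, S) * A_new, axis=1) + s
    return mean, np.sqrt(var)

x_grid = np.linspace(-4, 4, 100)
mean, std = predictive(x, y, x_grid, K=K, s=s)
plt.errorbar(x_grid, mean, yerr=std)
plt.plot(x, y, 'o')
plt.show()
```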
End of explanation
"""
|
kimmintae/MNIST | MNIST Competition/mnist_competition.ipynb | mit | mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# test data
test_images = mnist.test.images.reshape(10000, 28, 28, 1)
test_labels = mnist.test.labels[:]
"""
Explanation: Load MNIST Data
End of explanation
"""
augmentation_size = 110000
images = np.concatenate((mnist.train.images.reshape(55000, 28, 28, 1), mnist.validation.images.reshape(5000, 28, 28, 1)), axis=0)
labels = np.concatenate((mnist.train.labels, mnist.validation.labels), axis=0)
datagen_list = [
ImageDataGenerator(rotation_range=20),
ImageDataGenerator(rotation_range=30),
ImageDataGenerator(width_shift_range=0.1),
ImageDataGenerator(width_shift_range=0.2),
]
for datagen in datagen_list:
datagen.fit(images)
for image, label in datagen.flow(images, labels, batch_size=augmentation_size, shuffle=False):
images = np.concatenate((images, image), axis=0)
labels = np.concatenate((labels, label), axis=0)
break
print(images.shape)
print(labels.shape)
"""
Explanation: Data Augmentation
image rotation
image width shift
End of explanation
"""
model_1_filter_size = 3
model_2_filter_size = 5
model_3_filter_size = 7
epochs = 10
"""
Explanation: Train Parameter
End of explanation
"""
model1 = Sequential([Convolution2D(filters=64, kernel_size=(model_1_filter_size, model_1_filter_size), padding='same', activation='elu', input_shape=(28, 28, 1)),
Convolution2D(filters=128, kernel_size=(model_1_filter_size, model_1_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(model_1_filter_size, model_1_filter_size), padding='same', activation='elu'),
Convolution2D(filters=128, kernel_size=(model_1_filter_size, model_1_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(model_1_filter_size, model_1_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(10, activation='softmax'),
])
model1.compile(optimizer=Adam(lr=0.0005), loss='categorical_crossentropy', metrics=['accuracy'])
model1.fit(images, labels, batch_size=256, epochs=epochs, shuffle=True, verbose=1, validation_data=(test_images, test_labels))
model_json = model1.to_json()
with open("model1.json", "w") as json_file:
json_file.write(model_json)
model1.save_weights("model1.h5")
print("Saved model to disk")
"""
Explanation: Model 1 Architecture
Convolution + Convolution + MaxPool + Dropout
Convolution + Convolution + MaxPool + Dropout
Convolution + MaxPool + Dropout
Dense + Dropout
Dense + Dropout
Output
End of explanation
"""
model2 = Sequential([Convolution2D(filters=64, kernel_size=(model_2_filter_size, model_2_filter_size), padding='same', activation='elu', input_shape=(28, 28, 1)),
Convolution2D(filters=128, kernel_size=(model_2_filter_size, model_2_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(model_2_filter_size, model_2_filter_size), padding='same', activation='elu'),
Convolution2D(filters=128, kernel_size=(model_2_filter_size, model_2_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(model_2_filter_size, model_2_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(10, activation='softmax'),
])
model2.compile(optimizer=Adam(lr=0.0005), loss='categorical_crossentropy', metrics=['accuracy'])
model2.fit(images, labels, batch_size=256, epochs=epochs, shuffle=True, verbose=1, validation_data=(test_images, test_labels))
model_json = model2.to_json()
with open("model2.json", "w") as json_file:
json_file.write(model_json)
model2.save_weights("model2.h5")
print("Saved model to disk")
"""
Explanation: Model 2 Architecture
Convolution * 2 + MaxPool + Dropout
Convolution * 2 + MaxPool + Dropout
Convolution + MaxPool + Dropout
Dense + Dropout
Dense + Dropout
Output
End of explanation
"""
model3 = Sequential([Convolution2D(filters=64, kernel_size=(model_3_filter_size, model_3_filter_size), padding='same', activation='elu', input_shape=(28, 28, 1)),
Convolution2D(filters=128, kernel_size=(model_3_filter_size, model_3_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(model_3_filter_size, model_3_filter_size), padding='same', activation='elu'),
Convolution2D(filters=128, kernel_size=(model_3_filter_size, model_3_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Convolution2D(filters=128, kernel_size=(model_3_filter_size, model_3_filter_size), padding='same', activation='elu'),
MaxPooling2D(pool_size=(2, 2)),
Dropout(0.5),
Flatten(),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(1024, activation='elu'),
Dropout(0.5),
Dense(10, activation='softmax'),
])
model3.compile(optimizer=Adam(lr=0.0005), loss='categorical_crossentropy', metrics=['accuracy'])
model3.fit(images, labels, batch_size=256, epochs=epochs, shuffle=True, verbose=1, validation_data=(test_images, test_labels))
model_json = model3.to_json()
with open("model3.json", "w") as json_file:
json_file.write(model_json)
model3.save_weights("model3.h5")
print("Saved model to disk")
"""
Explanation: Model 3 Architecture
Convolution + Convolution + MaxPool + Dropout
Convolution + Convolution + MaxPool + Dropout
Convolution + MaxPool + Dropout
Dense + Dropout
Dense + Dropout
Output
End of explanation
"""
# load json and create model
def model_open(name, test_images, test_labels):
json_file = open(name + '.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights(name + '.h5')
print("Loaded model from disk")
loaded_model.compile(optimizer=Adam(lr=0.0005), loss='categorical_crossentropy', metrics=['acc'])
prob = loaded_model.predict_proba(test_images)
acc = np.mean(np.equal(np.argmax(prob, axis=1), np.argmax(test_labels, axis=1)))
print('\nmodel : %s, test accuracy : %4f\n' %(name, acc))
return prob
prob_1 = model_open('model1', test_images, test_labels)
prob_2 = model_open('model2', test_images, test_labels)
prob_3 = model_open('model3', test_images, test_labels)
final_prob = prob_1 * 1 + prob_2 * 2 + prob_3 * 1
final_score = np.mean(np.equal(np.argmax(final_prob, axis=1), np.argmax(test_labels, axis=1)))
print('final test accuracy : ', final_score)
"""
Explanation: Evaluate
End of explanation
"""
|
jmschrei/pomegranate | examples/bayesnet_huge_monty_hall.ipynb | mit | import math
from pomegranate import *
"""
Explanation: Huge Monty Hall Bayesian Network
authors:<br>
Jacob Schreiber [<a href="mailto:jmschreiber91@gmail.com">jmschreiber91@gmail.com</a>]<br>
Nicholas Farn [<a href="mailto:nicholasfarn@gmail.com">nicholasfarn@gmail.com</a>]
Let's expand the Bayesian network for the Monty Hall problem in order to make sure that training with all types of wild types works properly.
End of explanation
"""
friend = DiscreteDistribution( { True: 0.5, False: 0.5 } )
"""
Explanation: We'll create the discrete distribution for our friend first.
End of explanation
"""
guest = ConditionalProbabilityTable(
[[ True, 'A', 0.50 ],
[ True, 'B', 0.25 ],
[ True, 'C', 0.25 ],
[ False, 'A', 0.0 ],
[ False, 'B', 0.7 ],
[ False, 'C', 0.3 ]], [friend] )
"""
Explanation: The emissions for our guest are completely random.
End of explanation
"""
remaining = DiscreteDistribution( { 0: 0.1, 1: 0.7, 2: 0.2, } )
"""
Explanation: Then the distribution for the remaining cars.
End of explanation
"""
randomize = ConditionalProbabilityTable(
[[ 0, True , 0.05 ],
[ 0, False, 0.95 ],
[ 1, True , 0.8 ],
[ 1, False, 0.2 ],
[ 2, True , 0.5 ],
[ 2, False, 0.5 ]], [remaining] )
"""
Explanation: The probability of whether the prize is randomized is dependent on the number of remaining cars.
End of explanation
"""
prize = ConditionalProbabilityTable(
[[ True, True, 'A', 0.3 ],
[ True, True, 'B', 0.4 ],
[ True, True, 'C', 0.3 ],
[ True, False, 'A', 0.2 ],
[ True, False, 'B', 0.4 ],
[ True, False, 'C', 0.4 ],
[ False, True, 'A', 0.1 ],
[ False, True, 'B', 0.9 ],
[ False, True, 'C', 0.0 ],
[ False, False, 'A', 0.0 ],
[ False, False, 'B', 0.4 ],
[ False, False, 'C', 0.6]], [randomize, friend] )
"""
Explanation: Now the conditional probability table for the prize. This is dependent on the guest's friend and whether or not it is randomized.
End of explanation
"""
monty = ConditionalProbabilityTable(
[[ 'A', 'A', 'A', 0.0 ],
[ 'A', 'A', 'B', 0.5 ],
[ 'A', 'A', 'C', 0.5 ],
[ 'A', 'B', 'A', 0.0 ],
[ 'A', 'B', 'B', 0.0 ],
[ 'A', 'B', 'C', 1.0 ],
[ 'A', 'C', 'A', 0.0 ],
[ 'A', 'C', 'B', 1.0 ],
[ 'A', 'C', 'C', 0.0 ],
[ 'B', 'A', 'A', 0.0 ],
[ 'B', 'A', 'B', 0.0 ],
[ 'B', 'A', 'C', 1.0 ],
[ 'B', 'B', 'A', 0.5 ],
[ 'B', 'B', 'B', 0.0 ],
[ 'B', 'B', 'C', 0.5 ],
[ 'B', 'C', 'A', 1.0 ],
[ 'B', 'C', 'B', 0.0 ],
[ 'B', 'C', 'C', 0.0 ],
[ 'C', 'A', 'A', 0.0 ],
[ 'C', 'A', 'B', 1.0 ],
[ 'C', 'A', 'C', 0.0 ],
[ 'C', 'B', 'A', 1.0 ],
[ 'C', 'B', 'B', 0.0 ],
[ 'C', 'B', 'C', 0.0 ],
[ 'C', 'C', 'A', 0.5 ],
[ 'C', 'C', 'B', 0.5 ],
[ 'C', 'C', 'C', 0.0 ]], [guest, prize] )
"""
Explanation: Finally we can create the conditional probability table for our Monty. This is dependent on the guest and the prize.
End of explanation
"""
s0 = State( friend, name="friend")
s1 = State( guest, name="guest" )
s2 = State( prize, name="prize" )
s3 = State( monty, name="monty" )
s4 = State( remaining, name="remaining" )
s5 = State( randomize, name="randomize" )
"""
Explanation: Now we can create the states for our bayesian network.
End of explanation
"""
network = BayesianNetwork( "test" )
network.add_states(s0, s1, s2, s3, s4, s5)
"""
Explanation: Now we'll create our bayesian network with an instance of BayesianNetwork, then add the possible states.
End of explanation
"""
network.add_transition( s0, s1 )
network.add_transition( s1, s3 )
network.add_transition( s2, s3 )
network.add_transition( s4, s5 )
network.add_transition( s5, s2 )
network.add_transition( s0, s2 )
"""
Explanation: Then the possible transitions.
End of explanation
"""
network.bake()
"""
Explanation: With a "bake" to finalize the structure of our network.
End of explanation
"""
data = [[ True, 'A', 'A', 'C', 1, True ],
[ True, 'A', 'A', 'C', 0, True ],
[ False, 'A', 'A', 'B', 1, False ],
[ False, 'A', 'A', 'A', 2, False ],
[ False, 'A', 'A', 'C', 1, False ],
[ False, 'B', 'B', 'B', 2, False ],
[ False, 'B', 'B', 'C', 0, False ],
[ True, 'C', 'C', 'A', 2, True ],
[ True, 'C', 'C', 'C', 1, False ],
[ True, 'C', 'C', 'C', 0, False ],
[ True, 'C', 'C', 'C', 2, True ],
[ True, 'C', 'B', 'A', 1, False ]]
network.fit( data )
"""
Explanation: Now let's create our network from the following data.
End of explanation
"""
print(friend)
"""
Explanation: We can see the results below. Lets look at the distribution for our Friend first.
End of explanation
"""
print(guest)
"""
Explanation: Then our Guest.
End of explanation
"""
print(remaining)
"""
Explanation: Now the remaining cars.
End of explanation
"""
print(randomize)
"""
Explanation: And the probability the prize is randomized.
End of explanation
"""
print(prize)
"""
Explanation: Now the distribution of the Prize.
End of explanation
"""
print(monty)
"""
Explanation: And finally our Monty.
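As a possible follow-up (an editorial addition, not part of the original notebook), the fitted network can also be queried for posterior beliefs. A minimal sketch assuming the pomegranate 0.x API, in which predict_proba accepts a dictionary keyed by the state names used above:
```python
# Beliefs over all nodes after observing that the guest picked 'A'
# and Monty opened door 'B'.
beliefs = network.predict_proba({'guest': 'A', 'monty': 'B'})
for state, belief in zip(network.states, beliefs):
    print(state.name, belief)
```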
End of explanation
"""
|
napsternxg/DataMiningPython | Check installs.ipynb | gpl-3.0 | plt.plot(x,y, marker="o", color="r", label="demo")
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.title("Demo plot")
plt.legend()
"""
Explanation: Matplotlib checks
More details at: http://matplotlib.org/users/pyplot_tutorial.html
End of explanation
"""
df = pd.DataFrame()
df["X"] = x
df["Y"] = y
df["G"] = np.random.randint(1,10,size=x.shape)
df["E"] = np.random.randint(1,5,size=x.shape)
df.shape
df.head()
df.describe()
df.G = df.G.astype("category")
df.E = df.E.astype("category")
"""
Explanation: Pandas checks
More details at: http://pandas.pydata.org/pandas-docs/stable/tutorials.html
End of explanation
"""
sns.barplot(x="G", y="Y", data=df, estimator=np.mean, color="dodgerblue")
g = sns.jointplot("X", "Y", data=df, kind="reg",
color="r", size=7)
sns.pairplot(df, hue="E")
# Initialize a grid of plots with an Axes for each walk
grid = sns.FacetGrid(df, col="G", hue="E", col_wrap=4, size=3, legend_out=True)
# Draw a horizontal line to show the starting point
grid.map(plt.axhline, y=30, ls=":", c=".5")
# Draw a line plot to show the trajectory of each random walk
t = grid.map(plt.plot, "X", "Y", marker="o", ms=4).add_legend(title="E values")
#grid.fig.tight_layout(w_pad=1)
"""
Explanation: Seaborn checks
More details at: https://stanford.edu/~mwaskom/software/seaborn/index.html
End of explanation
"""
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import classification_report
"""
Explanation: Sklearn checks
More details at: http://scikit-learn.org/stable/index.html
End of explanation
"""
X = df[["X"]].copy()
y = df["Y"].copy()
print "X.shape: ", X.shape
print "Y.shape: ", y.shape
model_linear = LinearRegression()
model_linear.fit(X, y)
y_pred = model_linear.predict(X)
print "Y_pred.shape: ", y_pred.shape
X["X^2"] = X["X"]**2
X.columns
model_sqr = LinearRegression()
model_sqr.fit(X, y)
y_pred_sqr = model_sqr.predict(X)
print "Y_pred_sqr.shape: ", y_pred_sqr.shape
plt.scatter(X["X"], y, marker="o", label="data", alpha=0.5, s=30)
plt.plot(X["X"], y_pred, linestyle="--", linewidth=1.5, color="k", label="fit [linear]")
plt.plot(X["X"], y_pred_sqr, linestyle="--", linewidth=1.5, color="r", label="fit [square]")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
model_linear.coef_
model_sqr.coef_
"""
Explanation: Linear regression
End of explanation
"""
import statsmodels.api as sm
model = sm.OLS(y, X)
res = model.fit()
res.summary2()
model = sm.OLS.from_formula("Y ~ X + I(X**2)", data=df)
res = model.fit()
res.summary2()
"""
Explanation: Statsmodels
More details at: http://statsmodels.sourceforge.net/
End of explanation
"""
X = df[["X", "Y"]]
y = df["E"]
model = LogisticRegression(multi_class="multinomial", solver="lbfgs")
model.fit(X, y)
y_pred = model.predict(X)
print classification_report(y, y_pred)
y_pred_p = model.predict_proba(X)
y_pred_p[:10]
model = sm.MNLogit.from_formula("E ~ Y + X", data=df)
res = model.fit()
#res.summary2()
res.summary()
"""
Explanation: Logistic regression
End of explanation
"""
|
pastas/pasta | examples/notebooks/14_timestep_analysis.ipynb | mit | import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.set_log_level("ERROR")
ps.show_versions(numba=True, lmfit=True)
"""
Explanation: Reducing Autocorrelation
R.A. Collenteur, University of Graz
In this notebook we look at two strategies that may help to reduce the autocorrelation in the noise, such that the estimated standard errors of the parameters may be used for further analysis.
The first strategy is to change the time interval between the groundwater level observations by removing observations.
The second strategy is the use of the ARMA(1,1) noise model instead of the default AR(1) noise model.
To show the effects of these strategies we look at example models for a groundwater level time series observed near the town of Wagna in Southeastern Austria. This analysis is based on the study from Collenteur et al. (2021).
<div class="alert alert-warning">
<b>Note:</b>
While the groundwater level data is the same to that used for the publication, the precipitation and potential evaporation data is not the same (due to open data issues). The results from this notebook are therefore not the same as in the manuscript.
</div>
End of explanation
"""
head = pd.read_csv("data_wagna/head_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2).loc["2006":]
evap = pd.read_csv("data_wagna/evap_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2)
rain = pd.read_csv("data_wagna/rain_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2)
ax = head.plot(figsize=(10,3), marker=".", linestyle=" ", color="k")
ax1 = plt.axes([0.95,0.2,0.3,0.68])
ax1.semilogx(ps.stats.acf(head).values, color="k") # Plot on log-scale
ax.set_title("Groundwater level [MASL]")
ax1.set_title("Autocorrelation");
"""
Explanation: 1. Read Data and plot autocorrelation
First we load example data from the hydrological research station Wagna in Southeastern Austria. Below is a plot of the original daily groundwater levels observations, along with a plot of the autocorrelation of the groundwater levels. The autocorrelation plot clearly shows that for the first 10 to 15 time lags the correlation between observations is very close to 1 (note the log-scale used for the x-axis). A possible interpretation may be that the additional measurements (e.g. below a 10-day interval) do not provide much additional information. This finding is quite common because many groundwater systems respond slow, resulting in smooth groundwater level time series.
End of explanation
"""
mls_ar = {}
dts = 11
# Model settings
tmin = "2007-01-01"
tmax = "2016-12-31"
solver = ps.LmfitSolve
# The two models we compare here
config = {
"Linear": [ps.FourParam, ps.rch.Linear()],
"Nonlinear": [ps.Exponential, ps.rch.FlexModel()],
}
for name, [rfunc, rch] in config.items():
for dt in range(1, dts, 2):
# Create the basic Pastas model
ml_name = f"{name}_{dt}"
ml = ps.Model(head.iloc[::dt], name=ml_name)
# Add the recharge model
sm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=rfunc,
name="rch")
ml.add_stressmodel(sm)
# Change parameter settings for non-linear recharge model
if name == "Nonlinear":
ml.set_parameter("rch_srmax", vary=False)
ml.set_parameter("rch_kv", vary=True)
ml.set_parameter("constant_d", initial=262)
# Solve the model
ml.solve(tmin=tmin, tmax=tmax, report=False, solver=solver,
method="least_squares")
mls_ar[ml_name] = ml
"""
Explanation: 2. Run models with AR(1) noise model
We now create models to simulate the groundwater levels, while increasing the interval between groundwater level observations through removal of observations. The original time series has daily observations; here the interval is increased up to one observation every 10th day. Two types of recharge models are tested; one with a linear model and one with a nonlinear recharge model. The AR(1) model is used to try and transform the correlated residuals into approximate white noise.
End of explanation
"""
mls_arma = {}
for ml_name, ml in mls_ar.items():
ml = ml.copy(name=ml.name)
#Change the noise model
ml.del_noisemodel()
ml.add_noisemodel(ps.ArmaModel())
# Solve the model
ml.solve(tmin=tmin, tmax=tmax, report=False, solver=solver,
method="least_squares")
mls_arma[ml_name] = ml
"""
Explanation: 3. Run models with ARMA(1,1) noise model
We now repeat the previous analysis with the ARMA(1,1) model to transform the correlated residuals into approximate white noise. Note that for now this model is only applicable to time series with (approximately) regular time intervals between groundwater level observations.
End of explanation
"""
data = pd.DataFrame(index=range(1, dts, 2), columns=config.keys())
for ml in mls_ar.values():
name, i = ml.name.split("_")
n = ml.noise(tmin=tmin, tmax=tmax).asfreq(f"{i}D").fillna(0.0)
data.loc[int(i), name] = ps.stats.durbin_watson(n)[0]
data2 = pd.DataFrame(index=range(1, dts, 2), columns=config.keys())
for ml in mls_arma.values():
name, i = ml.name.split("_")
n = ml.noise(tmin=tmin, tmax=tmax).asfreq(f"{i}D").fillna(0.0)
data2.loc[int(i), name] = ps.stats.durbin_watson(n)[0]
# Plot the results
fig, [ax1, ax2] = plt.subplots(2,1, sharex=True, figsize=(5, 4), sharey=True)
# AR1 Model
data.plot(ax=ax1, marker=".", legend=False)
ax1.set_ylabel("DW [-]")
ax1.axhline(2., c="k", linestyle="--", zorder=-1)
ax1.text(1, 2.07, "Line of no autocorrelation")
ax1.grid()
ax1.set_title("AR(1) Noise model")
# ArmaModel
data2.plot(ax=ax2, marker=".", legend=False)
ax2.set_ylabel("DW [-]")
ax2.set_yticks([1, 1.5, 2.])
ax2.axhline(2., c="k", linestyle="--", zorder=-10)
ax2.set_ylim(0.5, 2.3)
ax2.grid()
ax2.legend(ncol=3, loc=4)
ax2.set_xlabel("$\Delta t$ [days]")
ax2.set_title("ARMA(1,1) Noise model")
plt.tight_layout()
"""
Explanation: 4. Plot and compare the results
Let's have a look at the results for all the simulations we just did. We have two types of recharge models (linear and non-linear) and two types of noise model (AR(1) and ARMA(1,1)). Additionally, we calibrated these models with an increasing time interval between the groundwater level observations.
Next, we use the Durbin-Watson (DW) test and the Ljung-Box test to test if the model noise exhibits significant autocorrelation. To illustrate the effect of the different strategies we plot the computed Durbin-Watson test statistic with increasing time intervals. When there is no significant autocorrelation at the first time lag, the DW-statistic should be close to the value 2.
The results are shown in the plot below. Three things may be concluded from this plot:
The autocorrelation in the noise (measured as DW) decreases when the time interval between observations is increased,
The use of an ARMA(1,1) noise model decreases the time lag-one autocorrelation,
The non-linear model seems to cause less autocorrelation in the noise.
It is noted that these results are site-specific, but this strategy can be useful to reduce the autocorrelation in the noise for other sites as well.
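The Ljung-Box test mentioned above is not shown in the code; a minimal editorial sketch of how it could be run on the noise of one of the calibrated models, assuming the statsmodels acorr_ljungbox API:
```python
from statsmodels.stats.diagnostic import acorr_ljungbox

# Noise of the model calibrated on the 9-day interval data (AR(1) noise model)
noise = mls_ar["Linear_9"].noise(tmin=tmin, tmax=tmax).asfreq("9D").fillna(0.0)
# Small p-values indicate significant autocorrelation up to the tested lag
print(acorr_ljungbox(noise, lags=[10], return_df=True))
```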
End of explanation
"""
mls = {}
dt = 10 # Select the time interval between GWL observations
for name, [rfunc, rch] in config.items():
for start in range(0, dt, 2):
ml_name = f"{name}_{start+1}"
ml = ps.Model(head.iloc[start::dt], name=ml_name)
# Add the recharge model
sm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=rfunc, name="rch")
ml.add_stressmodel(sm)
if name == "Nonlinear":
ml.set_parameter("rch_srmax", vary=False)
ml.set_parameter("rch_kv", vary=True)
ml.set_parameter("constant_d", initial=262)
# Solve the model
ml.add_noisemodel(ps.ArmaModel())
ml.solve(tmin=tmin, tmax=tmax, report=False, solver=solver,
method="least_squares")
mls[ml_name] = ml
# Extract the optimal parameters and estimated standard errors
data = {}
for name in config.keys():
ml = mls["{}_1".format(name)]
p = ml.parameters
mi = pd.MultiIndex.from_product([p.index[p.vary == True].to_list(), ["opt", "std"]])
    data[name] = pd.DataFrame(index=range(1, dt, 2), columns=mi)
for ml in mls.values():
name, i = ml.name.split("_")
df = data[name]
for par in ml.parameters.index[ml.parameters.vary == True]:
df.loc[int(i), (par, "opt")] = ml.parameters.loc[par, "optimal"]
df.loc[int(i), (par, "std")] = ml.parameters.loc[par, "stderr"] * 1.96
df = pd.concat(data, axis=1)
# Plot the results
fig, axes = plt.subplots(8,2, sharex=True, figsize=(9,7))
axes = axes.flatten()
kwargs = dict(legend=False, color="0", capsize=2, linestyle="-", marker=".")
labels = [["$A$", "$n$", "$a$", "$b$", "$f$", "$d$", "$\\alpha$", "$\\beta$"],
["$A$", "$a$", "$k_s$", "$\\gamma$", "$k_v$", "$d$", "$\\alpha$", "$\\beta$"]]
for j, rch in enumerate(["Linear", "Nonlinear"]):
axes[j].set_title(rch)
for i, par in enumerate(df[rch].columns.get_level_values(0).unique()):
df.xs((rch, par), axis=1, level=[0, 1]).plot(ax=axes[i*2+j], yerr="std", **kwargs)
axes[i*2+j].set_ylabel(labels[j][i])
for i in range(2):
axes[-i-1].set_xlabel("Calibration")
plt.tight_layout()
"""
Explanation: 5. Consistency of parameter estimates
Based on the analysis above we may choose to use the ARMA(1,1) noise model and a time interval of 10 days between the groundwater level observations. We could then "draw" ten groundwater level time series from the original time series and calibrate the models on each of these, as a sort of split-sample test. Below we fit both the linear and the non-linear model on ten groundwater level time series with 10-day time intervals between the observations, drawn from the original time series.
End of explanation
"""
rch = {"Linear": pd.DataFrame(columns=range(dt, 1)),
"Nonlinear": pd.DataFrame(columns=range(dt, 1))}
for ml in mls.values():
name, i = ml.name.split("_")
rch[name].loc[:, i] = ml.get_stress("rch", tmin=tmin,
tmax="2019-12-31").resample("A").sum()
df1 = pd.concat(rch, axis=1)
df1.index = df1.index.year
fig, [ax1, ax2, ax3] = plt.subplots(3,1, figsize=(6,6))
for ml in mls.values():
if ml.name.split("_")[0] == "Linear":
ax = ax1
color = "C0"
else:
ax = ax2
color = "C1"
ml.oseries.plot(ax=ax, linestyle="-", marker=" ", c="k")
ml.simulate(tmax="2020").plot(ax=ax, alpha=0.5, c=color, x_compat=True)
ax.set_xticks([])
ax.set_ylabel("GWL [m]")
ax.set_xlim("2007", "2020")
df1.groupby(level=0, axis=1).mean().plot.bar(yerr=1.96 * df1.groupby(level=0, axis=1).std(), ax=ax3, width=0.7)
plt.legend(ncol=3, loc=2, bbox_to_anchor=(0, 3.7))
plt.ylabel("R [mm yr$^{-1}$]")
plt.xlabel("");
"""
Explanation: The plot above shows the estimated optimal parameters and the 95% confidence intervals of the parameters. While most of the optimal parameters are relatively stable between calibrations, some parameters show larger variations. For the linear model these are, for example, $a$ and $n$, while for the non-linear model these are $k_s$ and $\gamma$. The values of these parameters seem correlated, and it might thus be difficult to estimate the individual parameter values.
5. Similarity of simulated heads and recharge estimates
Given the above plot, one might ask how this impacts the simulated groundwater levels and recharge. Below we plot the simulated groundwater levels for each of the models, and the estimated annual recharge rates with twice the standard deviation. Perhaps surprisingly, the simulated groundwater levels and the estimated annual recharge rates are very similar to each other. This shows that caution is necessary when interpreting individual parameters, but the simulated time series may be useful for further analysis.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb | apache-2.0 | # Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print(tf.__version__)
"""
Explanation: Getting started with TensorFlow
Learning Objectives
1. Practice defining and performing basic operations on constant Tensors
1. Use Tensorflow's automatic differentiation capability
1. Learn how to train a linear regression from scratch with TensorFLow
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in list and numpy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non linear model, forcing us to manual engineer non-linear features to improve our linear model performance.
End of explanation
"""
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
x.assign(45.8) # TODO 1
x
x.assign_add(4) # TODO 1
x
x.assign_sub(3) # TODO 1
x
"""
Explanation: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable).
Constant values can not be changed, while variable values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x, use one of the following methods (a short contrast with constants is sketched below):
x.assign(new_value)
x.assign_add(value_to_be_added)
x.assign_sub(value_to_be_subtracted)
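For instance, a small illustrative sketch of the contrast with constants (trying the same on a tf.constant raises an error because constants are immutable):
```python
v = tf.Variable([1.0, 2.0])
c = tf.constant([1.0, 2.0])
v.assign_add([0.5, 0.5])         # fine: Variables are mutable
try:
    c.assign_add([0.5, 0.5])     # constants have no assign_* methods
except AttributeError as e:
    print(e)
```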
End of explanation
"""
a = tf.constant([5, 3, 8]) # TODO 1
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
d = a + b
print("c:", c)
print("d:", d)
a = tf.constant([5, 3, 8]) # TODO 1
b = tf.constant([3, -1, 2])
c = tf.multiply(a, b)
d = a * b
print("c:", c)
print("d:", d)
# tf.math.exp expects floats so we need to explicitly give the type
a = tf.constant([5, 3, 8], dtype=tf.float32)
b = tf.math.exp(a)
print("b:", b)
"""
Explanation: Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does:
tf.add allows us to add the components of a tensor
tf.multiply allows us to multiply the components of a tensor
tf.subtract allows us to subtract the components of a tensor (a short example follows below)
tf.math.* contains the usual math operations to be applied on the components of a tensor
and many more...
Most of the standard arithmetic operations (tf.add, tf.subtract, etc.) are overloaded by the usual corresponding arithmetic symbols (+, -, etc.)
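For completeness, a small sketch of the subtraction case mentioned in the list above:
```python
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.subtract(a, b)
d = a - b
print("c:", c)
print("d:", d)
```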
End of explanation
"""
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py) # TODO 1
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np) # TODO 1
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf) # TODO 1
"""
Explanation: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
End of explanation
"""
a_tf.numpy()
"""
Explanation: You can convert a native TF tensor to a NumPy array using .numpy()
End of explanation
"""
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
"""
Explanation: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
End of explanation
"""
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print("X_test:{}".format(X_test))
print("Y_test:{}".format(Y_test))
"""
Explanation: Let's also create a test dataset to evaluate our models:
End of explanation
"""
y_mean = Y.numpy().mean()
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
"""
Explanation: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
End of explanation
"""
errors = (Y_hat - Y)**2
loss = tf.reduce_mean(errors)
loss.numpy()
"""
Explanation: Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
For this simple model the loss is then:
End of explanation
"""
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
"""
Explanation: This value for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model:
End of explanation
"""
# TODO 2
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
"""
Explanation: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of a tf.GradientTape instance, which will record gradient information:
```python
with tf.GradientTape() as tape:
    loss = # computation
```
This will allow us to later compute the gradients of any tensor computed within the tf.GradientTape context with respect to instances of tf.Variable:
```python
gradients = tape.gradient(loss, [w0, w1])
```
We illustrate this procedure by computing the loss gradients with respect to the model weights:
End of explanation
"""
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
dw0, dw1 = compute_gradients(X, Y, w0, w1)
w0.assign_sub(dw0 * LEARNING_RATE)
w1.assign_sub(dw1 * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(X, Y, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
"""
Explanation: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
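For reference, an editorial sketch (not used in this notebook) of what mini-batching with tf.data and a random initial guess could look like, reusing compute_gradients and LEARNING_RATE from this section:
```python
w0 = tf.Variable(tf.random.normal([]))   # random scalar initialization
w1 = tf.Variable(tf.random.normal([]))

# Shuffle and batch the training data; each epoch loops over mini-batches.
dataset = tf.data.Dataset.from_tensor_slices((X, Y)).shuffle(10).batch(5)
for epoch in range(100):
    for X_batch, Y_batch in dataset:
        dw0, dw1 = compute_gradients(X_batch, Y_batch, w0, w1)
        w0.assign_sub(dw0 * LEARNING_RATE)
        w1.assign_sub(dw1 * LEARNING_RATE)
```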
End of explanation
"""
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
"""
Explanation: Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set:
End of explanation
"""
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-X**2)
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
f1 = tf.ones_like(X) # Bias.
f2 = X
f3 = tf.square(X)
f4 = tf.sqrt(X)
f5 = tf.exp(X)
return tf.stack([f1, f2, f3, f4, f5], axis=1)
def predict(X, W):
return tf.squeeze(X @ W, -1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, W)
return tape.gradient(loss, W)
# TODO 3
STEPS = 2000
LEARNING_RATE = .02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
    dW = compute_gradients(Xf, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W)))
plt.figure()
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
plt.legend()
"""
Explanation: This is indeed much better!
Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
End of explanation
"""
|
quoniammm/mine-tensorflow-examples | assignment/cs231n_assignment/assignment2/FullyConnectedNets.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
"""
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
""" Receive inputs x and weights w """
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
"""
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
"""
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
"""
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
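If you get stuck, here is one possible sketch of the forward pass (a hint added by the editor, not necessarily the reference solution): flatten each input to a row vector, then apply the affine transform.
```python
def affine_forward(x, w, b):
    # x has shape (N, d_1, ..., d_k); flatten everything except the batch dimension.
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    cache = (x, w, b)
    return out, cache
```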
End of explanation
"""
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
"""
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
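As a hint (one possible sketch added by the editor, not necessarily the reference solution), the backward pass transposes the forward computation:
```python
def affine_backward(dout, cache):
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)   # back to the original input shape
    dw = x.reshape(N, -1).T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db
```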
End of explanation
"""
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
"""
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
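One possible sketch (an editorial hint, assuming the usual convention of caching the input):
```python
def relu_forward(x):
    out = np.maximum(0, x)   # element-wise threshold at zero
    cache = x
    return out, cache
```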
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
"""
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
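One possible sketch (an editorial hint): the gradient passes through unchanged wherever the input was positive and is zero elsewhere.
```python
def relu_backward(dout, cache):
    x = cache                  # relu_forward cached its input
    dx = dout * (x > 0)        # zero out gradients where the input was clipped
    return dx
```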
End of explanation
"""
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
"""
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
"""
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
"""
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
"""
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
"""
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two-layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
"""
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
"""
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
"""
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
"""
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-4
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
"""
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
"""
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-5
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
"""
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
"""
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
"""
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs. training the five-layer net?
Answer:
[FILL THIS IN]
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
"""
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
"""
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
"""
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
End of explanation
"""
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
"""
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
"""
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
"""
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
"""
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation
"""
|
TiKeil/Master-thesis-LOD | notebooks/Figure_7.2_Perturbations.ipynb | apache-2.0 | import os
import sys
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from visualize import drawCoefficient, ExtradrawCoefficient
import buildcoef2d
bg = 0.05 #background
val = 1 #values
NWorldFine = np.array([42, 42])
CoefClass = buildcoef2d.Coefficient2d(NWorldFine,
bg = bg, # background
val = val, # values
length = 2, # length
thick = 2, # thickness
space = 2, # space between values
probfactor = 1, # probability of an value
right = 1, # shape 1
down = 0, # shape 2
diagr1 = 0, # shape 3
diagr2 = 0, # shape 4
diagl1 = 0, # shape 5
diagl2 = 0, # shape 6
LenSwitch = None, # various length
thickSwitch = None, # various thickness
ChannelHorizontal = None, # horizontal Channels
ChannelVertical = None, # vertical Channels
BoundarySpace = True # additional space on the boundary
)
A = CoefClass.BuildCoefficient() # coefficient in a numpy array
A = A.flatten()
plt.figure("Original")
drawCoefficient(NWorldFine, A)
plt.title("Original")
plt.show()
# What entries will be perturbed
numbers = [13,20,27,44,73]
"""
Explanation: Perturbations of a Coefficient
Every diffusion coefficient can be subjected to some perturbation. This script presents perturbations that we investigate in the tests. It also shows the utilization of the 'buildcoef2d' class and its benefits in terms of perturbations. First, we show the original coefficient, determine the elements we want to perturb and simulate each perturbation. For further explanations of the 'buildcoef2d' perturbation functions, we refer to the thesis.
End of explanation
"""
B = CoefClass.SpecificValueChange( Number = numbers,
ratio = -0.4,
randomvalue = None,
negative = None,
ShapeRestriction = True,
ShapeWave = None,
probfactor = 1,
Original = True)
B = B.flatten()
plt.figure("Change in value")
drawCoefficient(NWorldFine, B)
plt.title("Change in value")
plt.show()
"""
Explanation: Change in value
End of explanation
"""
C = CoefClass.SpecificVanish( Number = numbers,
PartlyVanish = None,
probfactor = 1,
Original = True)
C = C.flatten()
plt.figure("Disappearance")
drawCoefficient(NWorldFine, C)
plt.title("Disappearance")
plt.show()
"""
Explanation: Disappearance
End of explanation
"""
D = CoefClass.SpecificMove( Number = numbers,
steps = 1,
randomstep = None,
randomDirection = None,
Right = 1,
BottomRight = 1,
Bottom = 1,
BottomLeft = 1,
Left = 1,
TopLeft = 1,
Top = 1,
TopRight = 1,
Original = True)
D = D.flatten()
plt.figure("Shift")
drawCoefficient(NWorldFine, D)
plt.title("Shift")
plt.show()
"""
Explanation: Shift
End of explanation
"""
plt.figure('Perturbatons')
ExtradrawCoefficient(NWorldFine, A, B, C, D)
plt.show()
"""
Explanation: Summary
End of explanation
"""
|
WillenZh/deep-learning-project | tutorials/autoencoder/Convolutional_Autoencoder.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
"""
learning_rate = 0.001
inputs_ =
targets_ =
### Encoder
conv1 =
# Now 28x28x16
maxpool1 =
# Now 14x14x16
conv2 =
# Now 14x14x8
maxpool2 =
# Now 7x7x8
conv3 =
# Now 7x7x8
encoded =
# Now 4x4x8
### Decoder
upsample1 =
# Now 7x7x8
conv4 =
# Now 7x7x8
upsample2 =
# Now 14x14x8
conv5 =
# Now 14x14x8
upsample3 =
# Now 28x28x8
conv6 =
# Now 28x28x16
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
"""
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
"""
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 =
# Now 28x28x32
maxpool1 =
# Now 14x14x32
conv2 =
# Now 14x14x32
maxpool2 =
# Now 7x7x32
conv3 =
# Now 7x7x16
encoded =
# Now 4x4x16
### Decoder
upsample1 =
# Now 7x7x16
conv4 =
# Now 7x7x16
upsample2 =
# Now 14x14x16
conv5 =
# Now 14x14x32
upsample3 =
# Now 28x28x32
conv6 =
# Now 28x28x32
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
"""
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation
"""
|
sz2472/foundations-homework | 07 - Introduction to Pandas (complete).ipynb | mit | # import pandas, but call it pd. Why? Because that's What People Do.
import pandas as pd
"""
Explanation: An Introduction to pandas
Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library ever, and that is only kind of true.
The important thing is to use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along.
Now let's start coding. Hopefully you did pip install pandas before you started up this notebook.
End of explanation
"""
# We're going to call this df, which means "data frame"
# It isn't in UTF-8 (I saved it from my mac!) so we need to set the encoding
df = pd.read_csv("NBA-Census-10.14.2013.csv", encoding='mac_roman')
"""
Explanation: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd.
You don't have to, but every other person on the planet will be doing it, so you might as well.
Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete.
End of explanation
"""
# Let's look at all of it
df
"""
Explanation: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day.
Selecting rows
Now let's look at our data, since that's what data is for
End of explanation
"""
# Look at the first few rows
df.head()
"""
Explanation: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling?
End of explanation
"""
# Let's look at MORE of the first few rows
df.head(10)
"""
Explanation: ...but maybe we want to see more than a measly five results?
End of explanation
"""
# Let's look at the final few rows
df.tail(4)
"""
Explanation: But maybe we want to make a basketball joke and see the final four?
End of explanation
"""
# Show the 6th through the 8th rows
df[5:8]
"""
Explanation: So yes, head and tail work kind of like the terminal commands. That's nice, I guess.
But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too.
End of explanation
"""
# Get the names of the columns, just because
df.columns
# If we want to be "correct" we add .values on the end of it
df.columns.values
# Select only name and age
columns_to_show = ['Name', 'Age']
df[columns_to_show]
# Combining that with .head() to see not-so-many rows
columns_to_show = ['Name', 'Age']
df[columns_to_show].head()
# We can also do this all in one line, even though it starts looking ugly
# (unlike the cute bears pandas looks ugly pretty often)
df[['Name', 'Age']].head()
"""
Explanation: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end.
Selecting columns
But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age.
End of explanation
"""
df.head()
"""
Explanation: NOTE: That was not df['Name', 'Age'], it was df[['Name', 'Age']]. You'll definitely type it wrong all of the time. When things break with pandas it's probably because you forgot to put in a million brackets.
Describing your data
A powerful tool of pandas is being able to select a portion of your data, because who ordered all that data anyway.
End of explanation
"""
# Grab the POS column, and count the different values in it.
df['POS'].value_counts()
"""
Explanation: I want to know how many people are in each position. Luckily, pandas can tell me!
End of explanation
"""
# Summary statistics for Age
df['Age'].describe()
# That's pretty good. Does it work for everything? How about the money?
df['2013 $'].describe()
"""
Explanation: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details.
But now I'm curious about numbers: how old is everyone? Maybe we could, I don't know, get some statistics about age? Some statistics to describe age?
End of explanation
"""
# Doing more describing
df['Ht (In.)'].describe()
"""
Explanation: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing.
End of explanation
"""
# Take another look at our inches, but only the first few
df['Ht (In.)'].head()
# Divide those inches by 12
df['Ht (In.)'].head() / 12
# Let's divide ALL of them by 12
feet = df['Ht (In.)'] / 12
feet
# Can we get statistics on those?
feet.describe()
# Let's look at our original data again
df.head(2)
"""
Explanation: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some way to manipulate our data.
Manipulating data
Oh wait there is, HA HA HA.
End of explanation
"""
# Store a new column
df['feet'] = df['Ht (In.)'] / 12
df.head()
"""
Explanation: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right?
End of explanation
"""
# Can't just use .replace
df['2013 $'].head().replace("$","")
# Need to use this weird .str thing
df['2013 $'].head().str.replace("$","")
# Can't just immediately replace the , either
df['2013 $'].head().str.replace("$","").replace(",","")
# Need to use the .str thing before EVERY string method
df['2013 $'].head().str.replace("$","").str.replace(",","")
# Describe still doesn't work.
df['2013 $'].head().str.replace("$","").str.replace(",","").describe()
# Let's convert it to an integer using .astype(int) before we describe it
df['2013 $'].head().str.replace("$","").str.replace(",","").astype(int).describe()
df['2013 $'].head().str.replace("$","").str.replace(",","").astype(int)
# Maybe we can just make them millions?
df['2013 $'].head().str.replace("$","").str.replace(",","").astype(int) / 1000000
# Unfortunately one is "n/a" which is going to break our code, so we can make n/a be 0
df['2013 $'].str.replace("$","").str.replace(",","").str.replace("n/a", "0").astype(int) / 1000000
# Remove the .head() piece and save it back into the dataframe
df['millions'] = df['2013 $'].str.replace("$","").str.replace(",","").str.replace("n/a","0").astype(int) / 1000000
df.head()
df.describe()
"""
Explanation: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer?
End of explanation
"""
# This is just the first few guys in the dataset. Can we order it?
df.head(3)
# Let's try to sort them
df.sort_values(by='millions').head(3)
"""
Explanation: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall.
But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys!
Sorting and sub-selecting
End of explanation
"""
# It isn't descending = True, unfortunately
df.sort_values(by='millions', ascending=False).head(3)
# We can use this to find the oldest guys in the league
df.sort_values(by='Age', ascending=False).head(3)
# Or the youngest, by taking out 'ascending=False'
df.sort_values(by='Age').head(3)
"""
Explanation: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending.
End of explanation
"""
# Get a big long list of True and False for every single row.
df['feet'] > 7
# We could use value counts if we wanted
above_seven_feet = df['feet'] > 7
above_seven_feet.value_counts()
# But we can also apply this to every single row to say whether YES we want it or NO we don't
df['feet'].head() > 7
# Instead of putting column names inside of the brackets, we instead
# put the True/False statements. It will only return the players above
# seven feet tall
df[df['feet'] > 7]
# Or only the guards
df[df['POS'] == 'G']
# Or only the guards who make more than 15 million
df[(df['POS'] == 'G') & (df['millions'] > 15)]
# It might be easier to break down the booleans into separate variables
is_guard = df['POS'] == 'G'
more_than_fifteen_million = df['millions'] > 15
df[is_guard & more_than_fifteen_million]
# We can save this stuff
short_players = df[df['feet'] < 6.5]
short_players
short_players.describe()
# Maybe we can compare them to taller players?
df[df['feet'] >= 6.5].describe()
"""
Explanation: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them~ describe them! And we don't want to dunk on everyone, only the players above 7 feet tall.
First, we need to check out boolean things.
End of explanation
"""
df['Age'].head()
# This will scream we don't have matplotlib.
df['Age'].hist()
"""
Explanation: Drawing pictures
Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay.
End of explanation
"""
!pip install matplotlib
# this will open up a weird window that won't do anything
df['Age'].hist()
# So instead you run this code
%matplotlib inline
df['Age'].hist()
"""
Explanation: matplotlib is a graphing library. It's the Python way to make graphs!
End of explanation
"""
import matplotlib.pyplot as plt
plt.style.available
plt.style.use('ggplot')
df['Age'].hist()
plt.style.use('seaborn-deep')
df['Age'].hist()
plt.style.use('fivethirtyeight')
df['Age'].hist()
"""
Explanation: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot.
End of explanation
"""
# Pass in all sorts of stuff!
# Most from http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html
# .range() is a matplotlib thing
df['Age'].hist(bins=20, xlabelsize=10, ylabelsize=10, range=(0,40))
"""
Explanation: That might look better with a little more customization. So let's customize it.
End of explanation
"""
df.plot(kind='scatter', x='feet', y='millions')
df.head()
# How does experience relate with the amount of money they're making?
df.plot(kind='scatter', x='EXP', y='millions')
# At least we can assume height and weight are related
df.plot(kind='scatter', x='WT', y='feet')
# At least we can assume height and weight are related
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
df.plot(kind='scatter', x='WT', y='feet', xlim=(100,300), ylim=(5.5, 8))
plt.style.use('ggplot')
df.plot(kind='scatter', x='WT', y='feet', xlim=(100,300), ylim=(5.5, 8))
# We can also use plt separately
# It's SIMILAR but TOTALLY DIFFERENT
centers = df[df['POS'] == 'C']
guards = df[df['POS'] == 'G']
forwards = df[df['POS'] == 'F']
plt.scatter(y=centers["feet"], x=centers["WT"], c='c', alpha=0.75, marker='x')
plt.scatter(y=guards["feet"], x=guards["WT"], c='y', alpha=0.75, marker='o')
plt.scatter(y=forwards["feet"], x=forwards["WT"], c='m', alpha=0.75, marker='v')
plt.xlim(100,300)
plt.ylim(5.5,8)
"""
Explanation: I want more graphics! Do tall people make more money?!?!
End of explanation
"""
|
leliel12/scikit-otree | tutorial.ipynb | mit | import skotree
skotree.VERSION
"""
Explanation: Scikit-oTree Tutorial
Welcome to the Scikit-oTree tutorial. This package aims to integrate
any experiment developed on top of oTree with the
Python scientific stack, allowing
scientists to access a big collection of tools for analysing the
experimental data.
End of explanation
"""
# this load the library
import skotree
# this load the experiment located
# in the directory tests and
experiment = skotree.oTree("./tests")
experiment
"""
Explanation: Philosophy
1. The data must be processed only by the oTree deployment.
Scikit-oTree doesn't preprocess any data from the experiment. All the information
is preserved exactly as in a traditional export from oTree; the project only
takes this data and presents it.
2. The environment for analysis must not be modified.
oTree uses some global configuration to make it run. Scikit-oTree doesn't store
any global configuration, allowing you to load data from different experiments
without problems. All the oTree-related processing always happens in an
external process.
3. Only one data type for the data.
The data are always presented as a
Pandas DataFrame
Installation
To install Scikit-oTree you must have Python and pip. You can find
a comprehensive tutorial on installing them here.
After that you only need to run
bash
pip install -U scikit-otree
Local - Loading the experiment
To load your experiment you need to provide
the location of the oTree deployment. This is the
same location where the settings.py lives.
End of explanation
"""
experiment.settings
"""
Explanation: The previous code does a lot of things in the background:
It creates an extra process, detached from the local one,
to extract all the oTree-related settings.
It waits for that process to end.
It checks the result of the process and stores the settings as
an attribute of the experiment object.
Let's check the result
End of explanation
"""
experiment.lsapps()
"""
Explanation: This is the traditional object that you
obtain in any oTree experiment if you
write
python
from django.conf import settings
Now let's check some information about the experiment, for example
all the existing oTree apps.
End of explanation
"""
experiment.lssessions()
"""
Explanation: or maybe you want to see all the sessions configured
that use these apps
End of explanation
"""
experiment.session_config("matching_pennies")
"""
Explanation: Yikes! The app and the session have the same name. Let's check the full session configuration.
End of explanation
"""
experiment.settings.REAL_WORLD_CURRENCY_CODE
"""
Explanation: Finally you can access <span class="text-info">any</span> content of the settings object using the attribute shown before. For example, maybe you want to see the "currency code"
End of explanation
"""
all_data = experiment.all_data()
all_data
"""
Explanation: The Data
Let's check the oTree server data tab.
As you can see, 4 kinds of data can be exported from any experiment.
1. All app
This generates one DataFrame with one row per participant, and all rounds are stacked horizontally. In Scikit-oTree this functionality is exposed as the all_data() method
End of explanation
"""
data = experiment.app_data("matching_pennies")
data
"""
Explanation: 2. Per-App Data
This data-frame contains a row for each player in the given app. If there are multiple rounds, there will be multiple rows for the same participant. To access this information you need to provide the application name to the method app_data()
End of explanation
"""
filtered = data[["participant.code", "player.penny_side", "player.payoff"]]
filtered
"""
Explanation: With the power of pandas.DataFrame you can easily filter the data
End of explanation
"""
filtered.describe()
"""
Explanation: Describe the data
End of explanation
"""
group = filtered.groupby("participant.code")
group.describe()
"""
Explanation: group by participant
End of explanation
"""
data.columns
"""
Explanation: or check all the available columns
End of explanation
"""
tspent = experiment.time_spent()
tspent
# check the available columns
tspent.columns
# filter only the most important columns
tspent = tspent[["participant__code", "page_index", "seconds_on_page"]]
tspent
# let's describe the time spent by page
tspent.groupby("page_index").describe()
# and lets make a plot but grouped by participant
%matplotlib inline
tspent.groupby("participant__code")[["seconds_on_page"]].plot();
"""
Explanation: 3. Per-App Documentation
The code
python
experiment.app_doc("matching_pennies")
returns the full documentation about the data retrieved by app_data()
4. Time spent on each page
Time spent on each page
End of explanation
"""
storage = experiment.bot_data("matching_pennies", 4)
storage
"""
Explanation: <div class="alert alert-info lead">
<h4>Note</h4>
<hr>
This only shows a simple example of how to use <strong>pandas.DataFrame</strong>; to understand more, please check
<br>
<a href="https://pandas.pydata.org/pandas-docs/stable/tutorials.html">
https://pandas.pydata.org/pandas-docs/stable/tutorials.html</a>
</div>
Simulate with bots
Scikit-oTree offers, out of the box, the possibility to run the oTree bot-based tests and retrieve all the data generated by them.
The method bot_data() consumes two arguments:
The name of the session to simulate
The number of participants
and returns a dict-like object (called CSVStorage) with one attribute per application listed in the session's app_sequence key.
End of explanation
"""
storage["matching_pennies"]
"""
Explanation: as you can see, the only available app (as we saw before) is matching_pennies.
Let's extract the data
End of explanation
"""
storage.matching_pennies
"""
Explanation: also, for convenience, the syntax storage.matching_pennies is available
End of explanation
"""
experiment.bot_data("matching_pennies", 1)
"""
Explanation: If for some reason the experiment fails, this method raises an exception. For example, if we provide an invalid number of participants:
End of explanation
"""
remote = skotree.oTree("http://localhost:8000")
remote
remote.lsapps()
remote.lssessions()
remote.app_data("matching_pennies")
"""
Explanation: Connect to a remote experiment
<div class="alert alert-success lead">
New in version <strong>0.4</strong>
</div>
To connect to a remote oTree deployment, instead of giving the settings.py path you need to
provide the URL where the experiment is running.
End of explanation
"""
skotree.oTree("http://localhost:9000")
"""
Explanation: Connect to a remote experiment With Authentication
<div class="alert alert-success lead">
New in version <strong>0.5</strong>
</div>
If you are trying to connect to a server with auth level mode DEMO or STUDY (More Information about modes) without credentials, an error will be shown:
End of explanation
"""
# the credentials are not stored internally
exp = skotree.oTree("http://localhost:9000", username="admin", password="skotree")
exp
"""
Explanation: In these cases you need to provide the parameters username and password
End of explanation
"""
exp.all_data()
"""
Explanation: and now all works as before
End of explanation
"""
remote.bot_data("matching_pennies", 1)
"""
Explanation: <div class="text-warning">
<h3>Some methods do not work in a remote experiment</h3>
</div>
session_config()
bot_data()
These raise a NotImplementedError when called.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/62cc7f00e993cd712f75bc4ad788e028/plot_artifacts_correction_maxwell_filtering.ipynb | bsd-3-clause | import mne
from mne.preprocessing import maxwell_filter
data_path = mne.datasets.sample.data_path()
"""
Explanation: Artifact correction with Maxwell filter
This tutorial shows how to clean MEG data with Maxwell filtering.
Maxwell filtering in MNE can be used to suppress sources of external
interference and compensate for subject head movements.
See maxwell for more details.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
"""
Explanation: Set parameters
End of explanation
"""
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads
# Here we don't use tSSS (set st_duration) because MGH data is very clean
raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
"""
Explanation: Preprocess with Maxwell filtering
End of explanation
"""
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1}
events = mne.find_events(raw, 'STI 014')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=[], exclude='bads')
for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(eog=150e-6))
evoked = epochs.average()
evoked.plot(window_title=kind, ylim=dict(grad=(-200, 250),
mag=(-600, 700)), time_unit='s')
"""
Explanation: Select events to extract epochs from, pick M/EEG channels, and plot evoked
End of explanation
"""
|
MingChen0919/learning-apache-spark | notebooks/02-data-manipulation/2.7.2-dot-column-expression.ipynb | mit | mtcars = spark.read.csv('../../../data/mtcars.csv', inferSchema=True, header=True)
mtcars = mtcars.withColumnRenamed('_c0', 'model')
mtcars.show(5)
"""
Explanation: Example data
End of explanation
"""
mpg_col_exp = mtcars.mpg
mpg_col_exp
mtcars.select(mpg_col_exp).show(5)
"""
Explanation: Dot (.) column expression
Create a column expression that will return the original column values.
End of explanation
"""
|
mdeff/ntds_2016 | algorithms/08_sol_graph_inpainting.ipynb | mit | import numpy as np
import scipy.io
import matplotlib.pyplot as plt
%matplotlib inline
import os.path
X = scipy.io.mmread(os.path.join('datasets', 'graph_inpainting', 'embedding.mtx'))
W = scipy.io.mmread(os.path.join('datasets', 'graph_inpainting', 'graph.mtx'))
N = W.shape[0]
print('N = |V| = {}, k|V| < |E| = {}'.format(N, W.nnz))
plt.spy(W, markersize=2, color='black');
"""
Explanation: A Network Tour of Data Science
Michaël Defferrard, PhD student, Pierre Vandergheynst, Full Professor, EPFL LTS2.
Assignment 4: Transductive Learning using Graphs
Transduction is reasoning from observed, specific (training) cases to specific (test) cases. For this assignment, the task is to infer missing values in some dataset, while the training and testing cases are available to construct a graph. The exercise consists of two parts: (1) construct some artificial data and (2) retrieve the missing values and measure performance.
1 Smooth graph signal
Let $\mathcal{G} = (\mathcal{V}, W)$ be a graph of vertex set $\mathcal{V}$ and weighted adjacency matrix $W$.
End of explanation
"""
# Fourier basis.
D = W.sum(axis=0)
D = scipy.sparse.diags(D.A.squeeze(), 0)
L = D - W
lamb, U = np.linalg.eigh(L.toarray())
# Low-pass filters.
def f1(u, a=4):
y = np.zeros(u.shape)
y[:a] = 1
return y
def f2(u, m=4):
return np.maximum(1 - m * u / u[-1], 0)
def f3(u, a=0.8):
return np.exp(-u / a)
# Random signal.
x = np.random.uniform(-1, 1, size=W.shape[0])
xhat = U.T.dot(x)
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].plot(lamb, xhat, '.-')
ax[0].set_title('Random signal spectrum')
ax[1].scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)
ax[1].set_title('Random signal')
# Smooth signal through filtering.
xhat *= f3(lamb)
x = U.dot(xhat)
M = x.T.dot(L.dot(x))
print('M = x^T L x = {}'.format(M))
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].set_title('Smooth signal spectrum')
ax[0].plot(lamb, abs(xhat), '.-', label='spectrum |U^T x|')
#ax[0].plot(lamb, np.sqrt(M/lamb))
ax[0].plot(lamb[1:], np.sqrt(M/lamb[1:]), label='Decay associated with smoothness M')
ax[0].legend()
ax[1].scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)
ax[1].set_title('Smooth signal');
"""
Explanation: Design a technique to construct smooth scalar signals $x \in \mathbb{R}^N$ over the graph $\mathcal{G}$.
Hint:
* This part is related to our last exercise.
* There are multiple ways to do this; another is to filter random signals.
End of explanation
"""
tau = 1e5 # Balance between fidelity and smoothness prior.
num = 100 # Number of signals and masks to generate.
# Percentage of values to keep.
probs = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0, 0.1, 0.2, 0.3]
errors = []
for p in probs:
mse = 0
for _ in range(num):
# Smooth signal.
x = np.random.uniform(-1, 1, size=W.shape[0])
xhat = U.T.dot(x) * f3(lamb)
x = U.dot(xhat)
# Observation.
A = np.diag(np.random.uniform(size=N) < p)
y = A.dot(x)
# Reconstruction.
x_sol = np.linalg.solve(tau * A + L, tau * y)
mse += np.linalg.norm(x - x_sol)**2
errors.append(mse / num)
# Show one example.
fig, ax = plt.subplots(1, 3, figsize=(15, 5))
param = dict(s=40, vmin=min(x), vmax=max(x), linewidths=0)
ax[0].scatter(X[:, 0], X[:, 1], c=x, **param)
ax[1].scatter(X[:, 0], X[:, 1], c=y, **param)
ax[2].scatter(X[:, 0], X[:, 1], c=x_sol, **param)
ax[0].set_title('Ground truth')
ax[1].set_title('Observed signal (missing values set to 0)')
ax[2].set_title('Inpainted signal')
print('|x-y|_2^2 = {:5f}'.format(np.linalg.norm(x - y)**2))
print('|x-x*|_2^2 = {:5f}'.format(np.linalg.norm(x - x_sol)**2))
# Show reconstruction error w.r.t. percentage of observed values.
plt.figure(figsize=(15, 5))
plt.semilogy(probs, errors, '.', markersize=10)
plt.xlabel('Percentage of observed values n/N')
plt.ylabel('Reconstruction error |x* - x|_2^2');
"""
Explanation: 2 Graph Signal Inpainting
Let $y$ be a signal obtained by observing $n$ out of the $N$ entries of a smooth signal $x$. Design and implement a procedure to infer the missing values and test its average accuracy $\| x^\ast - x \|_2^2$ as a function of $n/N$ on a test set of signals created using the technique developed above.
First complete the equations below, then do the implementation.
Observation:
$$y = Ax$$
where $A$ is a diagonal masking matrix with $\operatorname{diag}(A) \in \{0,1\}^N$.
Optimization problem:
$$x^\ast = \operatorname{arg} \min_x \frac{\tau}{2} \|Ax - y\|_2^2 + \frac{1}{2} x^T L x$$
where $\|Ax - y\|_2^2$ is the fidelity term and
$x^T L x = \sum_{u \sim v} w(u,v) (x(u) - x(v))^2$ is the smoothness prior.
Optimal solution (by putting the derivative to zero):
$$\tau Ax^\ast - \tau y + L x^\ast = 0
\hspace{0.3cm} \rightarrow \hspace{0.3cm}
x^\ast = (\tau A + L)^{-1} \tau y$$
Hint: in the end the solution should be a linear system of equations, to be solved with np.linalg.solve().
End of explanation
"""
|
Kaggle/learntools | notebooks/computer_vision/raw/tut6.ipynb | apache-2.0 | #$HIDE_INPUT$
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducability
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed()
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
"""
Explanation: <!--TITLE:Data Augmentation-->
Introduction
Now that you've learned the fundamentals of convolutional classifiers, you're ready to move on to more advanced topics.
In this lesson, you'll learn a trick that can give a boost to your image classifiers: it's called data augmentation.
The Usefulness of Fake Data
The best way to improve the performance of a machine learning model is to train it on more data. The more examples the model has to learn from, the better it will be able to recognize which differences in images matter and which do not. More data helps the model to generalize better.
One easy way of getting more data is to use the data you already have. If we can transform the images in our dataset in ways that preserve the class, we can teach our classifier to ignore those kinds of transformations. For instance, whether a car is facing left or right in a photo doesn't change the fact that it is a Car and not a Truck. So, if we augment our training data with flipped images, our classifier will learn that "left or right" is a difference it should ignore.
And that's the whole idea behind data augmentation: add in some extra fake data that looks reasonably like the real data and your classifier will improve.
Using Data Augmentation
Typically, many kinds of transformation are used when augmenting a dataset. These might include rotating the image, adjusting the color or contrast, warping the image, or many other things, usually applied in combination. Here is a sample of the different ways a single image might be transformed.
<figure>
<img src="https://i.imgur.com/UaOm0ms.png" width=400, alt="Sixteen transformations of a single image of a car.">
</figure>
Data augmentation is usually done online, meaning, as the images are being fed into the network for training. Recall that training is usually done on mini-batches of data. This is what a batch of 16 images might look like when data augmentation is used.
<figure>
<img src="https://i.imgur.com/MFviYoE.png" width=400, alt="A batch of 16 images with various random transformations applied.">
</figure>
Each time an image is used during training, a new random transformation is applied. This way, the model is always seeing something a little different than what it's seen before. This extra variance in the training data is what helps the model on new data.
It's important to remember though that not every transformation will be useful on a given problem. Most importantly, whatever transformations you use should not mix up the classes. If you were training a digit recognizer, for instance, rotating images would mix up '9's and '6's. In the end, the best approach for finding good augmentations is the same as with most ML problems: try it and see!
Example - Training with Data Augmentation
Keras lets you augment your data in two ways. The first way is to include it in the data pipeline with a function like ImageDataGenerator. The second way is to include it in the model definition by using Keras's preprocessing layers. This is the approach that we'll take. The primary advantage for us is that the image transformations will be computed on the GPU instead of the CPU, potentially speeding up training.
In this exercise, we'll learn how to improve the classifier from Lesson 1 through data augmentation. This next hidden cell sets up the data pipeline.
End of explanation
"""
from tensorflow import keras
from tensorflow.keras import layers
# these are a new feature in TF 2.2
from tensorflow.keras.layers.experimental import preprocessing
pretrained_base = tf.keras.models.load_model(
'../input/cv-course-models/cv-course-models/vgg16-pretrained-base',
)
pretrained_base.trainable = False
model = keras.Sequential([
# Preprocessing
preprocessing.RandomFlip('horizontal'), # flip left-to-right
preprocessing.RandomContrast(0.5), # contrast change by up to 50%
# Base
pretrained_base,
# Head
layers.Flatten(),
layers.Dense(6, activation='relu'),
layers.Dense(1, activation='sigmoid'),
])
"""
Explanation: Step 2 - Define Model
To illustrate the effect of augmentation, we'll just add a couple of simple transformations to the model from Tutorial 1.
End of explanation
"""
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=['binary_accuracy'],
)
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=30,
verbose=0,
)
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
"""
Explanation: Step 3 - Train and Evaluate
And now we'll start the training!
End of explanation
"""
|
allanko/media-word-contagion | mediacloud-sandbox.ipynb | mit | # this api call takes a minute or two, but you should only need to do this once.
network = mc.topicMediaMap(topic_id)
with open('network.gexf', 'wb') as f:
f.write(network)
# if you've already generated network.gexf, run this cell to import it
with open('network.gexf', 'r') as f:
network = f.read()
"""
Explanation: 1. Background Info
We're looking at the US Presidential Election topic in Media Cloud. That's topic ID #1404. This is a set of stories published between Apr 30, 2015 to Nov 7, 2016, queried on the names of the major presidential candidates. The topic is queried from the following media source sets:
US Top Online News
US Top Digital Native News
US Regional Mainstream Media
The seed query is:
+( fiorina ( scott and walker ) ( ben and carson ) trump ( cruz and -victor ) kasich rubio (jeb and bush) clinton sanders ) AND (+publish_date:[2016-09-30T00:00:00Z TO 2016-11-08T23:59:59Z]) AND ((tags_id_media:9139487 OR tags_id_media:9139458 OR tags_id_media:2453107 OR tags_id_stories:9139487 OR tags_id_stories:9139458 OR tags_id_stories:2453107))
I think this is the same dataset used for this CJR report, "Breitbart-led right-wing media ecosystem altered broader media agenda", but I'm not totally sure.
2. Network Structure
Run this section to request a gexf file representing the unweighted, directed network of media outlets in this dataset. Nodes represent different media outlets; edges represent inlinks and outlinks between outlets.
End of explanation
"""
# this is the query we're interested in. put the term(s) you want to search for here
query = '( "alt-right" OR "alt right" OR "alternative right" )'
# define function fetch stories from topic, based on query
def fetch_all_stories(query, topic_id):
stories_id = []
media_id = []
media_name = []
publish_date = []
media_inlink_count = []
outlink_count = []
title = []
url = []
# do the first page of stories
stories = mc.topicStoryList(topic_id, q=query)
# append new data to lists
stories_id.extend( [s['stories_id'] for s in stories['stories']])
media_id.extend( [s['media_id'] for s in stories['stories']])
media_name.extend( [s['media_name'] for s in stories['stories']])
publish_date.extend( [s['publish_date'] for s in stories['stories']])
media_inlink_count.extend( [s['media_inlink_count'] for s in stories['stories']])
outlink_count.extend( [s['outlink_count'] for s in stories['stories']])
title.extend( [s['title'] for s in stories['stories']])
url.extend( [s['url'] for s in stories['stories']])
nextpage_id = stories['link_ids']['next']
# page through all the remaining stories in the topic
while True:
stories = mc.topicStoryList(topic_id, q=query, link_id = nextpage_id)
# append story data
stories_id.extend( [s['stories_id'] for s in stories['stories']])
media_id.extend( [s['media_id'] for s in stories['stories']])
media_name.extend( [s['media_name'] for s in stories['stories']])
publish_date.extend( [s['publish_date'] for s in stories['stories']])
media_inlink_count.extend( [s['media_inlink_count'] for s in stories['stories']])
outlink_count.extend( [s['outlink_count'] for s in stories['stories']])
title.extend( [s['title'] for s in stories['stories']])
url.extend( [s['url'] for s in stories['stories']])
if (len(stories['stories']) < 1) or ('next' not in stories['link_ids']):
break
nextpage_id = stories['link_ids']['next']
stories = pd.DataFrame({
'stories_id' : stories_id,
'media_id' : media_id,
'media_name' : media_name,
'publish_date' : publish_date,
'media_inlink_count' : media_inlink_count,
'outlink_count' : outlink_count,
'title' : title,
'url' : url
})
return stories
stories = fetch_all_stories(query, topic_id)
# write to csv
stories.to_csv('stories_mentioning_altright.csv', encoding='utf-8')
"""
Explanation: 3. Contagion Data
Now we want to see how a term/framing/quote propagates through our network. To do that, we need to search the stories in our topic (#1404) for mentions of a given term/framing/quote. Let's start with the term "alt-right".
End of explanation
"""
query = '( "nasty woman" OR "nasty women" OR "nastywomen" OR "nastywoman" )'
stories_nastywomen = fetch_all_stories(query, topic_id)
stories_nastywomen.to_csv('stories_mentioning_nastywomen.csv', encoding='utf-8')
"""
Explanation: We can get the same data for some other terms...
End of explanation
"""
|
solvebio/solvebio-python | examples/global_beacon_indexing.ipynb | mit | # Importing SolveBio library
from solvebio import login
from solvebio import Object
# Logging to SolveBio
login()
"""
Explanation: Global Beacon
Global Beacon lets anyone in your organization find datasets based on the entities they contain (e.g. variants, genes, targets).
Note: Only datasets that contain entities can be indexed.
Importing SolveBio library and logging in
End of explanation
"""
# Getting the dataset
dataset_full_path = "solvebio:public:/beacon-test-dataset"
dataset = Object.get_by_full_path(dataset_full_path)
dataset
# Enabling Global Beacon on dataset
dataset.enable_global_beacon()
"""
Explanation: Enabling Global Beacon on dataset
First, let's start by enabling Global Beacon on the dataset:
End of explanation
"""
# Getting the status of global beacon on the dataset
dataset.get_global_beacon_status()
"""
Explanation: Please notice that in the response, attribute status is indexing. While indexing is still in progress you won't be able to perform Global Beacon Search.
Checking the status of Global Beacon
Let's now check the status of Global Beacon indexing for the dataset:
End of explanation
"""
# Disabling Global Beacon on dataset
dataset.disable_global_beacon()
"""
Explanation: As we can see, indexing has been completed (status is completed and progress percentage is 100%).
Disabling Global Beacon on dataset
Now that we've made sure the Global Beacon exists for the dataset, we can disable/delete it once we no longer need it.
End of explanation
"""
# Getting the status of global beacon on the dataset
status = dataset.get_global_beacon_status()
print(status)
"""
Explanation: We can see in the response that the status is now destroying.
Once the Global Beacon index has been deleted from the dataset, trying to get the Global Beacon status will return None.
End of explanation
"""
dataset.get_global_beacon_status(raise_on_disabled=True)
"""
Explanation: Alternatively, you may set the argument raise_on_disabled to True to raise an exception if Global Beacon doesn't exist on the dataset. You'll get a 404 error with the following message: "Error: No Global Beacon for Dataset:DATASET_ID"
End of explanation
"""
|
noppanit/machine-learning | parking-signs-nyc/Parking Signs.ipynb | mit | row = 'NO PARKING (SANITATION BROOM SYMBOL) 7AM-7:30AM EXCEPT SUNDAY'
assert from_time(row) == '07:00AM'
assert to_time(row) == '07:30AM'
special_case1 = 'NO PARKING (SANITATION BROOM SYMBOL) 11:30AM TO 1PM THURS'
assert from_time(special_case1) == '11:30AM'
assert to_time(special_case1) == '01:00PM'
special_case2 = 'NO PARKING (SANITATION BROOM SYMBOL) MOON & STARS (SYMBOLS) TUESDAY FRIDAY MIDNIGHT-3AM'
assert from_time(special_case2) == '12:00AM'
assert to_time(special_case2) == '03:00AM'
special_case3 = 'TRUCK (SYMBOL) TRUCK LOADING ONLY MONDAY-FRIDAY NOON-2PM'
assert from_time(special_case3) == '12:00PM'
assert to_time(special_case3) == '02:00PM'
special_case4 = 'NIGHT REGULATION (MOON & STARS SYMBOLS) NO PARKING (SANITATION BROOM SYMBOL) MIDNIGHT TO-3AM WED & SAT'
assert from_time(special_case4) == '12:00AM'
assert to_time(special_case4) == '03:00AM'
special_case5 = 'NO PARKING (SANITATION BROOM SYMBOL)8AM 11AM TUES & THURS'
assert from_time(special_case5) == '08:00AM'
assert to_time(special_case5) == '11:00AM'
special_case6 = 'NO PARKING (SANITATION BROOM SYMBOL) MONDAY THURSDAY 7AMM-7:30AM'
assert from_time(special_case6) == '07:00AM'
assert to_time(special_case6) == '07:30AM'
def filter_from_time(row):
if not pd.isnull(row['SIGNDESC1']):
return from_time(row['SIGNDESC1'])
return np.nan
def filter_to_time(row):
if not pd.isnull(row['SIGNDESC1']):
return to_time(row['SIGNDESC1'])
return np.nan
data['FROM_TIME'] = data.apply(filter_from_time, axis=1)
data['TO_TIME'] = data.apply(filter_to_time, axis=1)
data[['SIGNDESC1', 'FROM_TIME', 'TO_TIME']].head(10)
"""
Explanation: Special Cases
assert extract_time('1 HR MUNI-METER PARKING 10AM-7PM MON THRU FRI 8AM-7PM SATURDAY W/ SINGLE ARROW') == ''
NO PARKING (SANITATION BROOM SYMBOL) 11:30AM TO 1 PM FRIW/ SINGLE ARROW
Check whether two time ranges is the maximum that appears in any sign description (see the quick check after this cell).
End of explanation
"""
rows_with_AM_PM_but_time_NaN = data[(data['FROM_TIME'].isnull() | data['FROM_TIME'].isnull()) & (data['SIGNDESC1'].str.contains('[0-9]+(?:[AP]M)'))]
len(rows_with_AM_PM_but_time_NaN)
rows_with_AM_PM_but_time_NaN[['SIGNDESC1', 'FROM_TIME', 'TO_TIME']]
data.iloc[180670, data.columns.get_loc('SIGNDESC1')]
data.iloc[180670, data.columns.get_loc('FROM_TIME')] = '9AM'
data.iloc[180670, data.columns.get_loc('TO_TIME')] = '4AM'
data.iloc[212089, data.columns.get_loc('SIGNDESC1')]
data.iloc[212089, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[212089, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[258938, data.columns.get_loc('SIGNDESC1')]
data.iloc[258938, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[258938, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[258942, data.columns.get_loc('SIGNDESC1')]
data.iloc[258942, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[258942, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[258944, data.columns.get_loc('SIGNDESC1')]
data.iloc[258944, data.columns.get_loc('FROM_TIME')] = '10AM'
data.iloc[258944, data.columns.get_loc('TO_TIME')] = '11:30AM'
data.iloc[283262, data.columns.get_loc('SIGNDESC1')]
data.iloc[283262, data.columns.get_loc('FROM_TIME')] = '6AM'
data.iloc[283262, data.columns.get_loc('TO_TIME')] = '7:30AM'
"""
Explanation: Find out if any rows have NaN
We want to find out if any rows have NaN in FROM_TIME or TO_TIME but still have a timing in SIGNDESC1.
End of explanation
"""
rows_with_AM_PM_but_time_NaN = data[(data['FROM_TIME'].isnull() | data['FROM_TIME'].isnull()) & (data['SIGNDESC1'].str.contains('[0-9]+(?:[AP]M)'))]
len(rows_with_AM_PM_but_time_NaN)
data[['SIGNDESC1', 'FROM_TIME', 'TO_TIME']]
"""
Explanation: Confirm that every row has from_time and to_time
End of explanation
"""
data['SIGNDESC1'].head(20)
#https://regex101.com/r/fO4zL8/3
regex_to_extract_days_idv_days = r'\b((?:(?:MON|MONDAY|TUES|TUESDAY|WED|WEDNESDAY|THURS|THURSDAY|FRI|FRIDAY|SAT|SATURDAY|SUN|SUNDAY)\s*)+)(?=\s|$)'
regex_to_extract_days_with_range = r'(MON|TUES|WED|THURS|FRI|SAT|SUN)\s(THRU|\&)\s(MON|TUES|WED|THURS|FRI|SAT|SUN)'
def extract_day(signdesc):
days = ['MON', 'TUES', 'WED', 'THURS', 'FRI', 'SAT', 'SUN']
p_idv_days = re.compile(regex_to_extract_days_idv_days)
m_idv_days = p_idv_days.search(signdesc)
p_range_days = re.compile(regex_to_extract_days_with_range)
m_range_days = p_range_days.search(signdesc)
if 'EXCEPT SUN' in signdesc:
return ', '.join(days[:6])
if 'INCLUDING SUNDAY' in signdesc:
return ', '.join(days)
if 'FRIW/' in signdesc:
return ', '.join(['FRI'])
if ('THRU' in signdesc) and m_range_days:
from_day = m_range_days.group(1)
to_day = m_range_days.group(3)
idx_frm_d = days.index(from_day)
idx_to_d = days.index(to_day)
return ', '.join([days[n] for n in range(idx_frm_d, idx_to_d + 1)])
if ('&' in signdesc) and m_range_days:
from_day = m_range_days.group(1)
to_day = m_range_days.group(3)
return ', '.join([from_day, to_day])
if m_idv_days:
days = m_idv_days.group(1)
d = []
for day in days.split(' '):
if len(day) > 3:
if day in ['MONDAY', 'WEDNESDAY', 'FRIDAY', 'SATURDAY', 'SUNDAY']:
d.append(day[:3])
if day in ['TUESDAY']:
d.append(day[:4])
if day in ['THURSDAY']:
d.append(day[:5])
else:
d.append(day)
return ', '.join(d)
return np.nan
def filter_days(row):
if not pd.isnull(row['SIGNDESC1']):
return extract_day(row['SIGNDESC1'])
return np.nan
assert extract_day('NO STANDING 11AM-7AM MON SAT') == "MON, SAT"
assert extract_day('NO STANDING MON FRI 7AM-9AM') == "MON, FRI"
assert extract_day('2 HOUR PARKING 9AM-5PM MON THRU SAT') == "MON, TUES, WED, THURS, FRI, SAT"
assert extract_day('1 HOUR PARKING 8AM-7PM EXCEPT SUNDAY') == "MON, TUES, WED, THURS, FRI, SAT"
assert extract_day('NO PARKING 10PM-8AM INCLUDING SUNDAY') == "MON, TUES, WED, THURS, FRI, SAT, SUN"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) MONDAY THURSDAY 9:30AM-11AM') == "MON, THURS"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) 11:30AM TO 1 PM FRIW/ SINGLE ARROW') == "FRI"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) 8-9:30AM TUES & FRI') == "TUES, FRI"
assert extract_day('NO PARKING (SANITATION BROOM SYMBOL) TUESDAY FRIDAY 11AM-12:30PM') == "TUES, FRI"
data['DAYS'] = data.apply(filter_days, axis=1)
rows_with_days_but_DAYS_NAN = data[data['DAYS'].isnull() & data['SIGNDESC1'].str.contains('\sMON|\sTUES|\sWED|\sTHURS|\sFRI|\sSAT|\sSUN')]
rows_with_days_but_DAYS_NAN[['SIGNDESC1', 'DAYS']]
data.iloc[308838, data.columns.get_loc('SIGNDESC1')]
data.head()
"""
Explanation: Day of the week
End of explanation
"""
data.to_csv('Processed_Signs.csv', index=False)
"""
Explanation: Save to CSV
End of explanation
"""
|
GoogleCloudPlatform/analytics-componentized-patterns | retail/recommendation-system/bqml-scann/05_deploy_lookup_and_scann_caip.ipynb | apache-2.0 | import numpy as np
import tensorflow as tf
"""
Explanation: Part 5: Deploy the solution to AI Platform Prediction
This notebook is the fifth of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to complete the following tasks:
Deploy the embedding lookup model to AI Platform Prediction.
Deploy the ScaNN matching service to AI Platform Prediction by using a custom container. The ScaNN matching service is an application that wraps the ANN index model and provides additional functionality, like mapping item IDs to item embeddings.
Optionally, export and deploy the matrix factorization model to AI Platform for exact matching.
Before starting this notebook, you must run the 04_build_embeddings_scann notebook to build an approximate nearest neighbor (ANN) index for the item embeddings.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
Import libraries
End of explanation
"""
PROJECT_ID = 'yourProject' # Change to your project.
PROJECT_NUMBER = 'yourProjectNumber' # Change to your project number
BUCKET = 'yourBucketName' # Change to the bucket you created.
REGION = 'yourPredictionRegion' # Change to your AI Platform Prediction region.
ARTIFACTS_REPOSITORY_NAME = 'ml-serving'
EMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR = f'gs://{BUCKET}/bqml/embedding_lookup_model'
EMBEDDNIG_LOOKUP_MODEL_NAME = 'item_embedding_lookup'
EMBEDDNIG_LOOKUP_MODEL_VERSION = 'v1'
INDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'
SCANN_MODEL_NAME = 'index_server'
SCANN_MODEL_VERSION = 'v1'
KIND = 'song'
!gcloud config set project $PROJECT_ID
"""
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
PROJECT_NUMBER: The number of the Google Cloud project you are using to implement this solution. You can find this in the Project info card on the project dashboard page.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
REGION: The region to use for the AI Platform Prediction job.
End of explanation
"""
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
"""
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
"""
!gcloud ai-platform models create {EMBEDDNIG_LOOKUP_MODEL_NAME} --region={REGION}
"""
Explanation: Deploy the embedding lookup model to AI Platform Prediction
Create the embedding lookup model resource in AI Platform:
End of explanation
"""
!gcloud ai-platform versions create {EMBEDDNIG_LOOKUP_MODEL_VERSION} \
--region={REGION} \
--model={EMBEDDNIG_LOOKUP_MODEL_NAME} \
--origin={EMBEDDNIG_LOOKUP_MODEL_OUTPUT_DIR} \
--runtime-version=2.2 \
--framework=TensorFlow \
--python-version=3.7 \
--machine-type=n1-standard-2
print("The model version is deployed to AI Platform Prediction.")
"""
Explanation: Next, deploy the model:
End of explanation
"""
import googleapiclient.discovery
from google.api_core.client_options import ClientOptions
api_endpoint = f'https://{REGION}-ml.googleapis.com'
client_options = ClientOptions(api_endpoint=api_endpoint)
service = googleapiclient.discovery.build(
serviceName='ml', version='v1', client_options=client_options)
"""
Explanation: Once the model is deployed, you can verify it in the AI Platform console.
Test the deployed embedding lookup AI Platform Prediction model
Set the AI Platform Prediction API information:
End of explanation
"""
def caip_embedding_lookup(input_items):
request_body = {'instances': input_items}
service_name = f'projects/{PROJECT_ID}/models/{EMBEDDNIG_LOOKUP_MODEL_NAME}/versions/{EMBEDDNIG_LOOKUP_MODEL_VERSION}'
print(f'Calling : {service_name}')
response = service.projects().predict(
name=service_name, body=request_body).execute()
if 'error' in response:
raise RuntimeError(response['error'])
return response['predictions']
"""
Explanation: Run the caip_embedding_lookup method to retrieve item embeddings. This method accepts item IDs, calls the embedding lookup model in AI Platform Prediction, and returns the appropriate embedding vectors.
End of explanation
"""
input_items = ['2114406', '2114402 2120788', 'abc123']
embeddings = caip_embedding_lookup(input_items)
print(f'Embeddings retrieved: {len(embeddings)}')
for idx, embedding in enumerate(embeddings):
print(f'{input_items[idx]}: {embedding[:5]}')
"""
Explanation: Test the caip_embedding_lookup method with three item IDs:
End of explanation
"""
!gcloud beta artifacts repositories create {ARTIFACTS_REPOSITORY_NAME} \
--location={REGION} \
--repository-format=docker
!gcloud beta auth configure-docker {REGION}-docker.pkg.dev --quiet
"""
Explanation: ScaNN matching service
The ScaNN matching service performs the following steps:
Receives one or more item IDs from the client.
Calls the embedding lookup model to fetch the embedding vectors of those item IDs.
Uses these embedding vectors to query the ANN index to find approximate nearest neighbor embedding vectors.
Maps the approximate nearest neighbors embedding vectors to their corresponding item IDs.
Sends the item IDs back to the client.
When the client receives the item IDs of the matches, the song title and artist information is fetched from Datastore in real-time to be displayed and served to the client application.
Note: In practice, recommendation systems combine matches (from one or more indices) with user-provided filtering clauses (like where price <= value and colour =red), as well as other item metadata (like item categories, popularity, and recency) to ensure recommendation freshness and diversity. In addition, ranking is commonly applied after generating the matches to decide the order in which they are served to the user.
ScaNN matching service implementation
The ScaNN matching service is implemented as a Flask application that runs on a gunicorn web server. This application is implemented in the main.py module.
The ScaNN matching service application works as follows:
Uses environmental variables to set configuration information, such as the Google Cloud location of the ScaNN index to load.
Loads the ScaNN index as the ScaNNMatcher object is initiated.
As required by AI Platform Prediction, exposes two HTTP endpoints:
health: a GET method to which AI Platform Prediction sends health checks.
predict: a POST method to which AI Platform Prediction forwards prediction requests.
The predict method expects JSON requests in the form {"instances":[{"query": "item123", "show": 10}]}, where query represents the item ID to retrieve matches for, and show represents the number of matches to retrieve.
The predict method works as follows:
1. Validates the received request object.
1. Extracts the `query` and `show` values from the request object.
1. Calls `embedding_lookup.lookup` with the given query item ID to get its embedding vector from the embedding lookup model.
1. Calls `scann_matcher.match` with the query item embedding vector to retrieve its approximate nearest neighbor item IDs from the ANN Index.
The list of matching item IDs is put into JSON format and returned as the response of the predict method.
Deploy the ScaNN matching service to AI Platform Prediction
Package the ScaNN matching service application in a custom container and deploy it to AI Platform Prediction.
Create an Artifact Registry for the Docker container image
End of explanation
"""
IMAGE_URL = f'{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}/{SCANN_MODEL_NAME}:{SCANN_MODEL_VERSION}'
PORT=5001
SUBSTITUTIONS = ''
SUBSTITUTIONS += f'_IMAGE_URL={IMAGE_URL},'
SUBSTITUTIONS += f'_PORT={PORT}'
!gcloud builds submit --config=index_server/cloudbuild.yaml \
--substitutions={SUBSTITUTIONS} \
--timeout=1h
"""
Explanation: Use Cloud Build to build the Docker container image
The container runs the gunicorn HTTP web server and executes the Flask app variable defined in the main.py module.
The container image to deploy to AI Platform Prediction is defined in a Dockerfile, as shown in the following code snippet:
```
FROM python:3.8-slim
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . ./
ARG PORT
ENV PORT=$PORT
CMD exec gunicorn --bind :$PORT main:app --workers=1 --threads 8 --timeout 1800
```
Build the container image by using Cloud Build and specifying the cloudbuild.yaml file:
End of explanation
"""
repository_id = f'{REGION}-docker.pkg.dev/{PROJECT_ID}/{ARTIFACTS_REPOSITORY_NAME}'
!gcloud beta artifacts docker images list {repository_id}
"""
Explanation: Run the following command to verify the container image has been built:
End of explanation
"""
SERVICE_ACCOUNT_NAME = 'caip-serving'
SERVICE_ACCOUNT_EMAIL = f'{SERVICE_ACCOUNT_NAME}@{PROJECT_ID}.iam.gserviceaccount.com'
!gcloud iam service-accounts create {SERVICE_ACCOUNT_NAME} \
--description="Service account for AI Platform Prediction to access cloud resources."
"""
Explanation: Create a service account for AI Platform Prediction
Create a service account to run the custom container. This is required in cases where you want to grant specific permissions to the service account.
End of explanation
"""
!gcloud projects describe {PROJECT_ID} --format="value(projectNumber)"
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/iam.serviceAccountAdmin \
--member=serviceAccount:service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/storage.objectViewer \
--member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}
!gcloud projects add-iam-policy-binding {PROJECT_ID} \
--role=roles/ml.developer \
--member=serviceAccount:{SERVICE_ACCOUNT_EMAIL}
"""
Explanation: Grant the Cloud ML Engine (AI Platform) service account the iam.serviceAccountAdmin privilege, and grant the caip-serving service account the privileges required by the ScaNN matching service, which are storage.objectViewer and ml.developer.
End of explanation
"""
!gcloud ai-platform models create {SCANN_MODEL_NAME} --region={REGION}
"""
Explanation: Deploy the custom container to AI Platform Prediction
Create the ANN index model resource in AI Platform:
End of explanation
"""
HEALTH_ROUTE=f'/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}'
PREDICT_ROUTE=f'/v1/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}:predict'
ENV_VARIABLES = f'PROJECT_ID={PROJECT_ID},'
ENV_VARIABLES += f'REGION={REGION},'
ENV_VARIABLES += f'INDEX_DIR={INDEX_DIR},'
ENV_VARIABLES += f'EMBEDDNIG_LOOKUP_MODEL_NAME={EMBEDDNIG_LOOKUP_MODEL_NAME},'
ENV_VARIABLES += f'EMBEDDNIG_LOOKUP_MODEL_VERSION={EMBEDDNIG_LOOKUP_MODEL_VERSION}'
!gcloud beta ai-platform versions create {SCANN_MODEL_VERSION} \
--region={REGION} \
--model={SCANN_MODEL_NAME} \
--image={IMAGE_URL} \
--ports={PORT} \
--predict-route={PREDICT_ROUTE} \
--health-route={HEALTH_ROUTE} \
--machine-type=n1-standard-4 \
--env-vars={ENV_VARIABLES} \
--service-account={SERVICE_ACCOUNT_EMAIL}
print("The model version is deployed to AI Platform Prediction.")
"""
Explanation: Deploy the custom container to AI Platform prediction. Note that you use the env-vars parameter to pass environmental variables to the Flask application in the container.
End of explanation
"""
from google.cloud import datastore
import requests
client = datastore.Client(PROJECT_ID)
def caip_scann_match(query_items, show=10):
request_body = {
'instances': [{
'query':' '.join(query_items),
'show':show
}]
}
service_name = f'projects/{PROJECT_ID}/models/{SCANN_MODEL_NAME}/versions/{SCANN_MODEL_VERSION}'
print(f'Calling: {service_name}')
response = service.projects().predict(
name=service_name, body=request_body).execute()
if 'error' in response:
raise RuntimeError(response['error'])
match_tokens = response['predictions']
keys = [client.key(KIND, int(key)) for key in match_tokens]
items = client.get_multi(keys)
return items
"""
Explanation: Test the Deployed ScaNN Index Service
After deploying the custom container, test it by running the caip_scann_match method. This method accepts the parameter query_items, whose value is converted into a space-separated string of item IDs and treated as a single query. That is, a single embedding vector is retrieved from the embedding lookup model, and similar item IDs are retrieved from the ScaNN index given this embedding vector.
End of explanation
"""
songs = {
'2120788': 'Limp Bizkit: My Way',
'1086322': 'Jacques Brel: Ne Me Quitte Pas',
'833391': 'Ricky Martin: Livin\' la Vida Loca',
'1579481': 'Dr. Dre: The Next Episode',
'2954929': 'Black Sabbath: Iron Man'
}
for item_Id, desc in songs.items():
print(desc)
print("==================")
similar_items = caip_scann_match([item_Id], 5)
for similar_item in similar_items:
print(f'- {similar_item["artist"]}: {similar_item["track_title"]}')
print()
"""
Explanation: Call the caip_scann_match method with five item IDs and request five match items for each:
End of explanation
"""
BQ_DATASET_NAME = 'recommendations'
BQML_MODEL_NAME = 'item_matching_model'
BQML_MODEL_VERSION = 'v1'
BQML_MODEL_OUTPUT_DIR = f'gs://{BUCKET}/bqml/item_matching_model'
!bq --quiet extract -m {BQ_DATASET_NAME}.{BQML_MODEL_NAME} {BQML_MODEL_OUTPUT_DIR}
!saved_model_cli show --dir {BQML_MODEL_OUTPUT_DIR} --tag_set serve --signature_def serving_default
"""
Explanation: (Optional) Deploy the matrix factorization model to AI Platform Prediction
Optionally, you can deploy the matrix factorization model in order to perform exact item matching. The model takes Item1_Id as an input and outputs the top 50 recommended item2_Ids.
Exact matching returns better results, but takes significantly longer than approximate nearest neighbor matching. You might want to use exact item matching in cases where you are working with a very small data set and where latency isn't a primary concern.
Export the model from BigQuery ML to Cloud Storage as a SavedModel
End of explanation
"""
!gcloud ai-platform models create {BQML_MODEL_NAME} --region={REGION}
!gcloud ai-platform versions create {BQML_MODEL_VERSION} \
--region={REGION} \
--model={BQML_MODEL_NAME} \
--origin={BQML_MODEL_OUTPUT_DIR} \
--runtime-version=2.2 \
--framework=TensorFlow \
--python-version=3.7 \
--machine-type=n1-standard-2
print("The model version is deployed to AI Platform Predicton.")
def caip_bqml_matching(input_items, show):
request_body = {'instances': input_items}
service_name = f'projects/{PROJECT_ID}/models/{BQML_MODEL_NAME}/versions/{BQML_MODEL_VERSION}'
print(f'Calling : {service_name}')
response = service.projects().predict(
name=service_name, body=request_body).execute()
if 'error' in response:
raise RuntimeError(response['error'])
match_tokens = response['predictions'][0]["predicted_item2_Id"][:show]
keys = [client.key(KIND, int(key)) for key in match_tokens]
items = client.get_multi(keys)
return items
for item_Id, desc in songs.items():
print(desc)
print("==================")
similar_items = caip_bqml_matching([int(item_Id)], 5)
for similar_item in similar_items:
print(f'- {similar_item["artist"]}: {similar_item["track_title"]}')
print()
"""
Explanation: Deploy the exact matching model to AI Platform Prediction
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/probability/examples/Learnable_Distributions_Zoo.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability.python.internal import prefer_static
tfb = tfp.bijectors
tfd = tfp.distributions
tf.enable_v2_behavior()
event_size = 4
num_components = 3
"""
Explanation: Learnable Distributions Zoo (various examples of building learnable distributions)
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Learnable_Distributions_Zoo"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org ใง่กจ็คบ</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Learnable_Distributions_Zoo.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab ใงๅฎ่ก</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Learnable_Distributions_Zoo.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHubใงใฝใผในใ่กจ็คบ</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Learnable_Distributions_Zoo.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ใใผใใใใฏใใใฆใณใญใผใ</a></td>
</table>
In this colab we show various examples of building learnable ("trainable") distributions. (We make no effort to explain the distributions, only to show how to build them.)
End of explanation
"""
learnable_mvn_scaled_identity = tfd.Independent(
tfd.Normal(
loc=tf.Variable(tf.zeros(event_size), name='loc'),
scale=tfp.util.TransformedVariable(
tf.ones([1]),
bijector=tfb.Exp(),
name='scale')),
reinterpreted_batch_ndims=1,
name='learnable_mvn_scaled_identity')
print(learnable_mvn_scaled_identity)
print(learnable_mvn_scaled_identity.trainable_variables)
"""
Explanation: Learnable multivariate normal with a scaled identity for chol(Cov)
End of explanation
"""
learnable_mvndiag = tfd.Independent(
tfd.Normal(
loc=tf.Variable(tf.zeros(event_size), name='loc'),
scale=tfp.util.TransformedVariable(
tf.ones(event_size),
bijector=tfb.Softplus(), # Use Softplus...cuz why not?
name='scale')),
reinterpreted_batch_ndims=1,
name='learnable_mvn_diag')
print(learnable_mvndiag)
print(learnable_mvndiag.trainable_variables)
"""
Explanation: Learnable multivariate normal with a diagonal chol(Cov)
End of explanation
"""
learnable_mix_mvn_scaled_identity = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(
logits=tf.Variable(
# Changing the `1.` initializes with a geometric decay.
-tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),
name='logits')),
components_distribution=tfd.Independent(
tfd.Normal(
loc=tf.Variable(
tf.random.normal([num_components, event_size]),
name='loc'),
scale=tfp.util.TransformedVariable(
10. * tf.ones([num_components, 1]),
bijector=tfb.Softplus(), # Use Softplus...cuz why not?
name='scale')),
reinterpreted_batch_ndims=1),
name='learnable_mix_mvn_scaled_identity')
print(learnable_mix_mvn_scaled_identity)
print(learnable_mix_mvn_scaled_identity.trainable_variables)
"""
Explanation: Mixture of multivariate normals (spherical)
End of explanation
"""
learnable_mix_mvndiag_first_fixed = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(
logits=tfp.util.TransformedVariable(
# Initialize logits as geometric decay.
-tf.math.log(1.5) * tf.range(num_components, dtype=tf.float32),
tfb.Pad(paddings=[[1, 0]], constant_values=0)),
name='logits'),
components_distribution=tfd.Independent(
tfd.Normal(
loc=tf.Variable(
# Use Rademacher...cuz why not?
tfp.random.rademacher([num_components, event_size]),
name='loc'),
scale=tfp.util.TransformedVariable(
10. * tf.ones([num_components, 1]),
bijector=tfb.Softplus(), # Use Softplus...cuz why not?
name='scale')),
reinterpreted_batch_ndims=1),
name='learnable_mix_mvndiag_first_fixed')
print(learnable_mix_mvndiag_first_fixed)
print(learnable_mix_mvndiag_first_fixed.trainable_variables)
"""
Explanation: Mixture of multivariate normals (spherical) with a non-learnable first mixture weight
End of explanation
"""
learnable_mix_mvntril = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(
logits=tf.Variable(
# Changing the `1.` initializes with a geometric decay.
-tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),
name='logits')),
components_distribution=tfd.MultivariateNormalTriL(
loc=tf.Variable(tf.zeros([num_components, event_size]), name='loc'),
scale_tril=tfp.util.TransformedVariable(
10. * tf.eye(event_size, batch_shape=[num_components]),
bijector=tfb.FillScaleTriL(),
name='scale_tril')),
name='learnable_mix_mvntril')
print(learnable_mix_mvntril)
print(learnable_mix_mvntril.trainable_variables)
"""
Explanation: Mixture of multivariate normals (full covariance)
End of explanation
"""
# Make a bijector which pads an eye to what otherwise fills a tril.
num_tril_nonzero = lambda num_rows: num_rows * (num_rows + 1) // 2
num_tril_rows = lambda nnz: prefer_static.cast(
prefer_static.sqrt(0.25 + 2. * prefer_static.cast(nnz, tf.float32)) - 0.5,
tf.int32)
# TFP doesn't have a concat bijector, so we roll out our own.
class PadEye(tfb.Bijector):
def __init__(self, tril_fn=None):
if tril_fn is None:
tril_fn = tfb.FillScaleTriL()
self._tril_fn = getattr(tril_fn, 'inverse', tril_fn)
super(PadEye, self).__init__(
forward_min_event_ndims=2,
inverse_min_event_ndims=2,
is_constant_jacobian=True,
name='PadEye')
def _forward(self, x):
num_rows = int(num_tril_rows(tf.compat.dimension_value(x.shape[-1])))
eye = tf.eye(num_rows, batch_shape=prefer_static.shape(x)[:-2])
return tf.concat([self._tril_fn(eye)[..., tf.newaxis, :], x],
axis=prefer_static.rank(x) - 2)
def _inverse(self, y):
return y[..., 1:, :]
def _forward_log_det_jacobian(self, x):
return tf.zeros([], dtype=x.dtype)
def _inverse_log_det_jacobian(self, y):
return tf.zeros([], dtype=y.dtype)
def _forward_event_shape(self, in_shape):
n = prefer_static.size(in_shape)
return in_shape + prefer_static.one_hot(n - 2, depth=n, dtype=tf.int32)
def _inverse_event_shape(self, out_shape):
n = prefer_static.size(out_shape)
return out_shape - prefer_static.one_hot(n - 2, depth=n, dtype=tf.int32)
tril_bijector = tfb.FillScaleTriL(diag_bijector=tfb.Softplus())
learnable_mix_mvntril_fixed_first = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(
logits=tfp.util.TransformedVariable(
# Changing the `1.` intializes with a geometric decay.
-tf.math.log(1.) * tf.range(num_components, dtype=tf.float32),
bijector=tfb.Pad(paddings=[(1, 0)]),
name='logits')),
components_distribution=tfd.MultivariateNormalTriL(
loc=tfp.util.TransformedVariable(
tf.zeros([num_components, event_size]),
bijector=tfb.Pad(paddings=[(1, 0)], axis=-2),
name='loc'),
scale_tril=tfp.util.TransformedVariable(
10. * tf.eye(event_size, batch_shape=[num_components]),
bijector=tfb.Chain([tril_bijector, PadEye(tril_bijector)]),
name='scale_tril')),
name='learnable_mix_mvntril_fixed_first')
print(learnable_mix_mvntril_fixed_first)
print(learnable_mix_mvntril_fixed_first.trainable_variables)
"""
Explanation: Mixture of multivariate normals (full covariance) with a non-learnable first mixture weight and first component
End of explanation
"""
|
ShyamSS-95/Bolt | example_problems/nonrelativistic_boltzmann/quick_start/tutorial.ipynb | gpl-3.0 | # Importing problem specific modules:
import boundary_conditions
import domain
import params
import initialize
!cat boundary_conditions.py
"""
Explanation: Introduction To Bolt
Hello! This is an intro to $\texttt{Bolt}$ to help you understand the structure of the framework. This way you'll hit the ground running when you want to introduce models of your own. :)
$\texttt{Bolt}$ is a solver framework for kinetic theories and can be used to obtain solution for any equation of the following forms:
Conservative:
\begin{align}
\frac{\partial f}{\partial t} + \frac{\partial (C_{q1} f)}{\partial q_1} + \frac{\partial (C_{q2} f)}{\partial q_2} + \frac{\partial (C_{p1} f)}{\partial p_1} + \frac{\partial (C_{p2} f)}{\partial p_2} + \frac{\partial (C_{p3} f)}{\partial p_3} = S(f)
\end{align}
Non-Conservative:
\begin{align}
\frac{\partial f}{\partial t} + A_{q1} \frac{\partial f}{\partial q_1} + A_{q2} \frac{\partial f}{\partial q_2} + A_{p1} \frac{\partial f}{\partial p_1} + A_{p2} \frac{\partial f}{\partial p_2} + A_{p3} \frac{\partial f}{\partial p_3} = S(f)
\end{align}
Where $A_{q1}$, $A_{q2}$, $A_{p1}$, $A_{p2}$, $A_{p3}$, $C_{q1}$, $C_{q2}$, $C_{p1}$, $C_{p2}$, $C_{p3}$ and $S(f)$ are terms that need to be coded in by the user. Bolt can make use of the advective semi-Lagrangian method and/or the finite-volume method. The advective semi-Lagrangian method makes use of $A_{q1, q2, p1, p2, p3}$, while the finite volume method makes use of $C_{q1, q2, p1, p2, p3}$.
In this tutorial we'll be considering the non-relativistic Boltzmann equation, with the BGK collision operator:
\begin{align}
\frac{\partial f}{\partial t} + v_x \frac{\partial f}{\partial x} + v_y \frac{\partial f}{\partial y} + \frac{q}{m}(\vec{E} + \vec{v} \times \vec{B})_x \frac{\partial f}{\partial v_x} + \frac{q}{m}(\vec{E} + \vec{v} \times \vec{B})_y \frac{\partial f}{\partial v_y} + \frac{q}{m}(\vec{E} + \vec{v} \times \vec{B})_z \frac{\partial f}{\partial v_z} = C[f] = -\frac{f - f_0}{\tau}
\end{align}
So for this model, we have the following:
$A_{q1} = C_{q1} = v_x$
$A_{q2} = C_{q2} = v_y$
$A_{p1} = C_{p1} = \frac{q}{m}(\vec{E} + \vec{v} \times \vec{B})_x$
$A_{p2} = C_{p2} = \frac{q}{m}(\vec{E} + \vec{v} \times \vec{B})_y$
$A_{p3} = C_{p3} = \frac{q}{m}(\vec{E} + \vec{v} \times \vec{B})_z$
$S(f) = -\frac{f-f_0}{\tau}$, where $f_0$ is the local Maxwellian distribution and $\tau$ is the collision timescale.
Additionally, we've taken the generalized canonical coordinates $p_1$, $p_2$ and $p_3$ to be the velocity values $v_x$, $v_y$, $v_z$. That is:
$p_1 = v_x$
$p_2 = v_y$
$p_3 = v_z$
Before we dive into how we introduce this non-relativistic Boltzmann equation into $\texttt{Bolt}$, let's define the example problem that we intend to solve. The problem that we are considering is a simple one: Given an initial perturbation in the number density $n$ in a collisionless periodic 1D domain, how would the amplitude of density vary with time. Basically, we are stating that:
\begin{align}
n(x, 0) = n_{background} + \delta n_r \cos(kx) - \delta n_i \sin(kx)
\end{align}
Now $\texttt{Bolt}$ requires the initial distribution function to be defined which we'll initialize using the Maxwell Boltzmann distribution function. The system that we are modelling here is a 1D1V one. That is, one dimension in position space, and one dimension in velocity space. The initial distribution function would be:
\begin{align}
f(x, v, t = 0) = n(x, 0) \sqrt{\frac{m}{2 \pi k T}} e^{-\frac{mv^2}{2 k T}}
\end{align}
The folder in which this notebook is contained has 4 other Python files: boundary_conditions.py, domain.py, initialize.py, and params.py, each of which holds essential information about the system being modelled.
When used with the import statement, these files are imported as modules. These modules are what we use to pass the information to the solvers in Bolt. We'll go ahead and import these modules for now, and explore what each of these files contains step by step.
End of explanation
"""
!cat domain.py
"""
Explanation: As the name suggests boundary_conditions.py contains the information about the boundary conditions for the setup considered. While the current problem in consideration is for periodic boundary conditions, $\texttt{Bolt}$ also supports Dirichlet, mirror, and shearing box boundary conditions. The setups for these boundary conditions can be found in other example problems.
End of explanation
"""
!cat params.py
"""
Explanation: domain.py contains data about the phase space domain and resolution that has been considered. Note that we've taken the number of grid points along q2 as 3 although it's a 1D problem. It must be ensured that the number of grid zones in q1 and q2 is greater than or equal to the number of ghost zones that are taken in q space. This is due to an internal restriction placed on us by one of the libraries we use for parallelization. Additionally, we've taken the domain zones and sizes for p2 and p3 such that dp2 and dp3 come out to be one. This way the integral measure dp1 dp2 dp3 boils down to dp1, which is then used for moment computations.
End of explanation
"""
!cat initialize.py
"""
Explanation: Let's go over each of the attributes mentioned above to understand their usage. While some of these attributes are flags and options that need to be mentioned for every system being solved, there are a few attributes which are specific to the non-relativistic Boltzmann system being solved. The params module can be used to add attributes, and functions which you intend to declare somewhere in src/ :
Attributes Native To the Solver:
fields_type is used to declare what sort of fields are being solved in the problem of consideration. It can be electrostatic where magnetic fields stay at zero, electrodynamic where magnetic fields are also evolved and user-defined where the evolution of the fields in time are declared by the user(this is primarily used in debugging). This attribute can be set appropriately as electrostatic, electrodynamic and user-defined. The setup we've considered is an electrostatic one.
fields_initialize is used to declare which method is used to initialize the values for the electromagnetic fields. Two methods are available for initializing electrostatic fields from the density - snes and fft. The fft method of initialization can only be used for serial runs with periodic boundary conditions. snes is a more versatile method capable of being run in parallel with other boundary conditions as well. It makes use of the SNES routines in PETSc which use Krylov subspace methods to solve for the fields. Additionally, this can also be set to user-defined, where the initial conditions for the electric and magnetic fields are defined in terms of q1 and q2 under initialize.
fields_solver is used to declare which method is used to evolve the fields with time. The same methods are available for computing electrostatic fields: snes and fft. The fdtd method is to be used when solving for electrodynamic systems.
solver_method_in_q and solver_method_in_p are used to set the specific solver method used in q-space and p-space which can be set to FVM or ASL for the finite volume method and the advective semi-lagrangian method.
reconstruction_method_in_q and reconstruction_method_in_p are used to set the specific reconstruction scheme used for the FVM method in q-space and p-space. This can be set to piecewise-constant, minmod, ppm and weno5.
riemann_solver_in_q and riemann_solver_in_p are used to set the specific riemann solver which is to be used for the FVM method in q-space and p-space. This can be set to upwind-flux for the first order upwind flux method, and lax-friedrichs for the local Lax-Friedrichs method.
num_devices is used in parallel runs when run on nodes which contain more than a single accelerator. For instance when running on nodes which contain 4 GPUs each, this attribute is set to 4.
EM_fields_enabled is a solver flag which is used to indicate whether the case considered is one where we solve in p-space as well. Similarly, source_enabled is used to switch on and off the source term. For now, we have set both to False.
charge as the name suggests is used to assign charge to the species considered in the simulation. This is used internally in the fields solver. For this tutorial we've taken the charge of the particle to be -10 units. However this won't matter if EM_fields_enabled is set to False.
instantaneous_collisions is a flag which is turned on when we want to update the distribution function f to a distribution function array as returned by the source term. For instance in the case of the BGK operator we want to solve $\frac{d f}{d t} = -\frac{f - f_0}{\tau}$. But as $\tau \to 0$, $f = f_0$. For solving systems in the $\tau = 0$ regime this flag is turned to True. How this is carried out is explained further in the section that explains how the equation to be modelled is input.
Attributes Native To the System Solved:
p_dim is used to set the dimensionality considered in p-space. This becomes important especially for the definition of various physical quantities which vary as a function of the dimensionality and the moments. This would be used in the collision operator as we'll discuss further in this tutorial.
mass and boltzmann_constant are pretty explanatory from their name, and are used for initialization and defining the system solved.
tau is the collision timescale in the BGK operator and used in solving for the source part of the equation. This parameter would only make a difference in the simulation results if the switch source_enabled is set to True.
The remaining parameters as they are mentioned are used in the initialize module
End of explanation
"""
import bolt.src.nonrelativistic_boltzmann.advection_terms as advection_terms
import bolt.src.nonrelativistic_boltzmann.collision_operator as collision_operator
import bolt.src.nonrelativistic_boltzmann.moments as moments
"""
Explanation: As you can see the initialize module contains the function initialize_f which initializes the distribution function using the parameters that were declared.
Now that we've setup the parameters for the specific test problem that we want to solve, we'll proceed to describe how we input the desired equation of our model into $\texttt{Bolt}$.
How the equation to be modelled is introduced into Bolt:
As one navigates from the root folder of this repository into the main folder for the package bolt, there are two separate subfolders: lib and src. While all the files in lib contain the solver algorithms and the structure for the solvers, src is where we introduce the models that we intend to solve. For instance, the files that we'll be using for this test problem can be found under bolt/src/nonrelativistic_boltzmann. First let's import all the necessary modules.
End of explanation
"""
!cat $BOLT_HOME/bolt/src/nonrelativistic_boltzmann/advection_terms.py
"""
Explanation: Let's start off by seeing how we've introduced the advection terms specific to the non-relativistic Boltzmann equation into the framework of $\texttt{Bolt}$. Advection terms are introduced into $\texttt{Bolt}$ through the advection_terms module, which has the functions A_q, C_q, A_p, C_p.
It is expected that A_q and C_q take the arguments (f, t, q1, q2, v1, v2, v3, params), where f is the distribution function, t is the time elapsed, and (q1, q2, v1, v2, v3) are phase space grid data for the position space and velocity space respectively. Additionally, they also accept a module params, which can contain user-defined attributes and functions that can be injected into these functions.
While A_p and C_p take all the arguments that are taken by A_q and C_q, they also take the additional argument of a fields_solver object. The get_fields() method of this object returns the electromagnetic fields at the current instance. The fields_solver objects are internal to the solvers, and can be chosen as electrostatic or electrodynamic as we've seen in the parameters above.
End of explanation
"""
!cat $BOLT_HOME/bolt/src/nonrelativistic_boltzmann/moments.py
"""
Explanation: We hope the model described is quite clear from the docstrings. Note that we describe the model in terms of variables in velocities v and not the canonical variables p to avoid confusion with momentum.
Next, we proceed to see how we define moments for our system of interest.
End of explanation
"""
!cat $BOLT_HOME/bolt/src/nonrelativistic_boltzmann/collision_operator.py
"""
Explanation: As you can see, all the moment quantities take (f, v1, v2, v3, integral_measure) as arguments, in terms of which we define the moments for the system. By default, integral_measure is taken to be dv1 dv2 dv3. These definitions are referred to by the solver routine compute_moments, which calls the appropriate moment routine as a string. For instance, if we want to compute density at the current state, calling compute_moments('density') gets the job done.
It's to be noted that when fields are enabled in the problem of consideration, density, mom_v1_bulk, mom_v2_bulk and mom_v3_bulk must be defined since these are used internally when solving for electromagnetic fields.
NOTE: Density is number density here
Now we proceed to the final information regarding our equation which is the source term which in our case is the BGK collision operator.
End of explanation
"""
# Importing dependencies:
import arrayfire as af
import numpy as np
import pylab as pl
%matplotlib inline
# Importing the classes which are used to declare the physical_system and solver objects
from bolt.lib.physical_system import physical_system
from bolt.lib.nonlinear.nonlinear_solver import nonlinear_solver
from bolt.lib.linear.linear_solver import linear_solver
# Optimized plot parameters to make beautiful plots:
pl.rcParams['figure.figsize'] = 12, 7.5
pl.rcParams['figure.dpi'] = 300
pl.rcParams['image.cmap'] = 'jet'
pl.rcParams['lines.linewidth'] = 1.5
pl.rcParams['font.family'] = 'serif'
pl.rcParams['font.weight'] = 'bold'
pl.rcParams['font.size'] = 20
pl.rcParams['font.sans-serif'] = 'serif'
pl.rcParams['text.usetex'] = True
pl.rcParams['axes.linewidth'] = 1.5
pl.rcParams['axes.titlesize'] = 'medium'
pl.rcParams['axes.labelsize'] = 'medium'
pl.rcParams['xtick.major.size'] = 8
pl.rcParams['xtick.minor.size'] = 4
pl.rcParams['xtick.major.pad'] = 8
pl.rcParams['xtick.minor.pad'] = 8
pl.rcParams['xtick.color'] = 'k'
pl.rcParams['xtick.labelsize'] = 'medium'
pl.rcParams['xtick.direction'] = 'in'
pl.rcParams['ytick.major.size'] = 8
pl.rcParams['ytick.minor.size'] = 4
pl.rcParams['ytick.major.pad'] = 8
pl.rcParams['ytick.minor.pad'] = 8
pl.rcParams['ytick.color'] = 'k'
pl.rcParams['ytick.labelsize'] = 'medium'
pl.rcParams['ytick.direction'] = 'in'
"""
Explanation: Here the BGK function is our source term, which takes the arguments (f, t, q1, q2, v1, v2, v3, moments, params, flag). Note that moments is the solver routine compute_moments, which is used to compute the moments at the instant the collision operator is evaluated.
In parameters, we had defined an attribute instantaneous_collision which, when set to True, makes the solver use the value returned by the source function with flag set to True. Above we had mentioned how this may be necessary in our model when solving for purely collisional cases (the hydrodynamic regime).
We'll start by importing the dependencies for the solver.
ArrayFire is a general-purpose library that simplifies the process of developing software that targets parallel and massively-parallel architectures including CPUs, GPUs, and other hardware acceleration devices. $\texttt{Bolt}$ uses its Python API for the creation and manipulation of arrays, which allows us to run our code on a range of devices at optimal speed.
We use NumPy for declaring the time data and storing the data which we intend to plot, and pylab (matplotlib) for post-processing.
End of explanation
"""
# Defining the physical system to be solved:
system = physical_system(domain,
boundary_conditions,
params,
initialize,
advection_terms,
collision_operator.BGK,
moments
)
N_g_q = system.N_ghost_q
# Declaring a linear system object which will evolve the defined physical system:
nls = nonlinear_solver(system)
ls = linear_solver(system)
# Time parameters:
dt = 0.001
t_final = 0.5
time_array = np.arange(0, t_final + dt, dt)
rho_data_nls = np.zeros(time_array.size)
rho_data_ls = np.zeros(time_array.size)
"""
Explanation: We define the system we want to solve through the physical_system class. This class takes its arguments as (domain, boundary_conditions, params, initialize, advection_terms, source_function, moments), which we had explored above. The declared object is then passed to the linear and nonlinear solver objects to provide information about the system being solved.
End of explanation
"""
# Storing data at time t = 0:
n_nls = nls.compute_moments('density')
# Check for the data non inclusive of the ghost zones:
rho_data_nls[0] = af.max(n_nls[:, :, N_g_q:-N_g_q, N_g_q:-N_g_q])
n_ls = ls.compute_moments('density')
rho_data_ls[0] = af.max(n_ls)
for time_index, t0 in enumerate(time_array[1:]):
nls.strang_timestep(dt)
ls.RK4_timestep(dt)
n_nls = nls.compute_moments('density')
rho_data_nls[time_index + 1] = af.max(n_nls[:, :, N_g_q:-N_g_q, N_g_q:-N_g_q])
n_ls = ls.compute_moments('density')
rho_data_ls[time_index + 1] = af.max(n_ls)
pl.plot(time_array, rho_data_nls, label='Nonlinear Solver')
pl.plot(time_array, rho_data_ls, '--', color = 'black', label = 'Linear Solver')
pl.ylabel(r'MAX($\rho$)')
pl.xlabel('Time')
pl.legend()
"""
Explanation: The default data format in $\texttt{Bolt}$ is (Np, Ns, N_q1, N_q2), where Np is the number of zones in p-space, Ns is the number of species, and N_q1 and N_q2 are the number of grid zones along q1 and q2 respectively.
Below, since we want to obtain the amplitude of the density in the physical domain not inclusive of the ghost zones, we use max(density[:, :, N_g:-N_g, N_g:-N_g]).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/4457f1e38b5fa0853b9fa024b11fe018/plot_artifacts_detection.ipynb | bsd-3-clause | import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
"""
Explanation: Introduction to artifacts and artifact detection
Since MNE supports the data of many different acquisition systems, the
particular artifacts in your data might behave very differently from the
artifacts you can observe in our tutorials and examples.
Therefore you should be aware of the different approaches and of
the variability of the artifact rejection (automatic/manual) procedures described
below. In the end, always consider visually inspecting your data
after artifact rejection or correction.
Background: what is an artifact?
Artifacts are signal interference that can be
endogenous (biological) and exogenous (environmental).
Typical biological artifacts are head movements, eye blinks
or eye movements, heart beats. The most common environmental
artifact is due to the power line, the so-called line noise.
How to handle artifacts?
MNE deals with artifacts by first identifying them, and subsequently removing
them. Detection of artifacts can be done visually, or using automatic routines
(or a combination of both). After you know what the artifacts are, you need to
remove them. This can be done by:
- *ignoring* the piece of corrupted data
- *fixing* the corrupted data
For artifact detection, the functions MNE provides depend on whether
your data is continuous (Raw) or epoch-based (Epochs), and on
whether your data is stored on disk or already in memory.
Detecting the artifacts without reading the complete data into memory allows
you to work with datasets that are too large to fit in memory all at once.
Detecting the artifacts in continuous data allows you to apply filters
(e.g. a band-pass filter to zoom in on the muscle artifacts on the temporal
channels) without having to worry about edge effects due to the filter
(i.e. filter ringing). Having the data in memory after segmenting/epoching is
however a very efficient way of browsing through the data which helps
in visualizing. So to conclude, there is not a single most optimal manner
to detect the artifacts: it just depends on the data properties and your
own preferences.
In this tutorial we show how to detect artifacts visually and automatically.
For how to correct artifacts by rejection see
sphx_glr_auto_tutorials_plot_artifacts_correction_rejection.py.
To discover how to correct certain artifacts by filtering see
sphx_glr_auto_tutorials_plot_artifacts_correction_filtering.py
and to learn how to correct artifacts
with subspace methods like SSP and ICA see
sphx_glr_auto_tutorials_plot_artifacts_correction_ssp.py
and sphx_glr_auto_tutorials_plot_artifacts_correction_ica.py.
Artifacts Detection
This tutorial discusses a couple of major artifacts that most analyses
have to deal with and demonstrates how to detect them.
End of explanation
"""
(raw.copy().pick_types(meg='mag')
.del_proj(0)
.plot(duration=60, n_channels=100, remove_dc=False))
"""
Explanation: Low frequency drifts and line noise
End of explanation
"""
raw.plot_psd(tmax=np.inf, fmax=250)
"""
Explanation: We see high-amplitude undulations at low frequencies, spanning tens of
seconds.
End of explanation
"""
average_ecg = create_ecg_epochs(raw).average()
print('We found %i ECG events' % average_ecg.nave)
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
average_ecg.plot_joint(**joint_kwargs)
"""
Explanation: On MEG sensors we see narrow frequency peaks at 60, 120, 180 and 240 Hz,
related to line noise.
We also see some high-amplitude signals between 25 and 32 Hz, hinting at other
biological artifacts such as ECG. These can be most easily detected in the
time domain using MNE helper functions.
See sphx_glr_auto_tutorials_plot_artifacts_correction_filtering.py.
ECG
finds ECG events, creates epochs, averages and plots
End of explanation
"""
average_eog = create_eog_epochs(raw).average()
print('We found %i EOG events' % average_eog.nave)
average_eog.plot_joint(**joint_kwargs)
"""
Explanation: We can see typical time courses and non-dipolar topographies.
Note the order of magnitude of the average artifact-related signal and
compare this to what you observe for brain signals.
EOG
End of explanation
"""
|
Caranarq/01_Dmine | Datasets/CNGMD/2015.ipynb | gpl-3.0 | descripciones = {
'P0306' : 'Programas de modernizaciรณn catastral',
'P0307' : 'Disposiciones normativas sustantivas en materia de desarrollo urbano u ordenamiento territorial',
'P1001' : 'Promedio diario de RSU recolectados',
'P1003' : 'Nรบmero de municipios con disponibilidad de servicios relacionados con los RSU',
'P1006' : 'Nรบmero de municipios con aplicaciรณn de programas locales orientados a la GIRSU',
'P1009' : 'Nรบmero de municipios con estudios de generaciรณn de RSU',
}
# Librerias utilizadas
import pandas as pd
import sys
import urllib
import os
import zipfile
import csv
import pprint
import re
# Configuracion del sistema
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
root = r'http://www.beta.inegi.org.mx/contenidos/proyectos/censosgobierno/municipal/cngmd/2015/datosabiertos/'
links = {
'P0306' : r'm1/Programa_modernizacion_catastral_cngmd2015_csv.zip', # Programas de modernizaciรณn catastral
'P0307' : r'm2/Marco_regulatorio_cngmd2015_csv.zip', # Disposiciones normativas sustantivas en materia de desarrollo urbano u ordenamiento territorial
'P1001' : r'm6/Rec_RSU_cngmd2015_csv.zip', # Promedio diario de RSU recolectados
'P1006' : r'm6/Prog_gest_int_RSU_cngmd2015_csv.zip', # Nรบmero de municipios con aplicaciรณn de programas locales orientados a la GIRSU
'P1009' : r'm6/Est_gen_comp_RSU_cngmd2015_csv.zip', # Nรบmero de municipios con estudios de generaciรณn de RSU
}
"""
Explanation: Cleaning of the dataset from the Censo Nacional de Gobiernos Municipales y Delegacionales 2015 (National Census of Municipal and Delegational Governments 2015)
1. Introduction
Indicators derived from this dataset:
ID |Description
---|:----------
P0306|Cadastral modernization programs
P0307|Substantive regulatory provisions on urban development or land-use planning
P1001|Average daily amount of RSU (urban solid waste) collected
P1003|Number of municipalities with availability of RSU-related services
P1006|Number of municipalities applying local programs oriented to GIRSU (integrated RSU management)
P1009|Number of municipalities with RSU generation studies
2. Data download
End of explanation
"""
P1003links = { # Nรบmero de municipios con disponibilidad de servicios relacionados con los RSU
1 : r'm6/Rec_RSU_cngmd2015_csv.zip',
2 : r'm6/Trat_RSU_cngmd2015_csv.zip',
3 : r'm6/Disp_final_RSU_cngmd2015_csv.zip'
}
# Destino local
destino = r'D:\PCCS\00_RawData\01_CSV\cngmd\2015'
# Descarga de zips para parametros que se encuentran en un solo archivo
m_archivos = {} # Diccionario para guardar memoria de descarga
for parametro, fuente in links.items():
file = fuente.split('/')[1]
remote_path = root+fuente
local_path = destino + r'\{}'.format(file)
if os.path.isfile(local_path):
print('Ya existe el archivo: {}'.format(local_path))
m_archivos[parametro] = local_path
else:
print('Descargando {} ... ... ... ... ... '.format(local_path))
urllib.request.urlretrieve(remote_path, local_path) #
m_archivos[parametro] = local_path
print('se descargรณ {}'.format(local_path))
# Descarga de zips para parametro P1003
m_archivos2 = {} # Diccionario para guardar memoria de descarga
for parametro, fuente in P1003links.items():
file = fuente.split('/')[1]
remote_path = root+fuente
local_path = destino + r'\{}'.format(file)
if os.path.isfile(local_path):
print('Ya existe el archivo: {}'.format(local_path))
m_archivos2[parametro] = local_path
else:
print('Descargando {} ... ... ... ... ... '.format(local_path))
urllib.request.urlretrieve(remote_path, local_path) #
m_archivos2[parametro] = local_path
print('se descargรณ {}'.format(local_path))
# Descompresiรณn de archivos de m_parametro
unzipped = {}
for parametro, comprimido in m_archivos.items():
target = destino + '\\' + parametro
if os.path.isfile(target):
print('Ya existe el archivo: {}'.format(target))
unzipped[parametro] = target
else:
print('Descomprimiendo {} ... ... ... ... ... '.format(target))
descomprimir = zipfile.ZipFile(comprimido, 'r')
descomprimir.extractall(target)
descomprimir.close
unzipped[parametro] = target
# Descompresiรณn de archivos de m_parametro2
unzipped2 = {}
for parametro, comprimido in m_archivos2.items():
target = destino + '\\P1003\\' + str(parametro)
if os.path.isfile(target):
print('Ya existe el archivo: {}'.format(target))
unzipped2[parametro] = target
else:
print('Descomprimiendo {} ... ... ... ... ... '.format(target))
descomprimir = zipfile.ZipFile(comprimido, 'r')
descomprimir.extractall(target)
descomprimir.close
unzipped2[parametro] = target
# Localizacion de archivos de cada parametro
# Cada parametro tiene rutas y estructuras distintas. En este paso localizo manualmente
# cada tabla y estructura desde los comprimidos. cada valor del diccionario contiene la ruta hacia
# donde se encuentran las tablas.
cd = r'\conjunto_de_datos'
tablas = {
'P0306' : destino + r'\P0306' + cd,
'P0307' : destino + r'\P0307\marco_regulatorio_cngmd2015_dbf' + cd,
'P1001' : destino + r'\P1001\Rec_RSU_cngmd2015_csv' + cd,
'P1006' : destino + r'\P1006\Prog_gest_int_RSU_cngmd2015_csv' + cd,
'P1009' : destino + r'\P1009\Est_gen_comp_RSU_cngmd2015_csv' + cd,
}
# Tablas para P1003
destino2 = destino + r'\P1003'
tablasP1003 = {
'1' : destino2 + r'\1' + r'\Rec_RSU_cngmd2015_csv' + cd,
'2' : destino2 + r'\2' + r'\Trat_RSU_cngmd2015_csv' + cd,
'3' : destino2 + r'\3' + r'\Disp_final_RSU_cngmd2015_csv' + cd,
}
"""
Explanation: En el caso del parรกmetro P1003, los datos se extraen desde 3 archivos. Estos archivos son una base de datos para cada servicio relacionado con los RSU, Utilizando nuevamente el archivo que utiliza P1001 y dos adicionales:
End of explanation
"""
# Script para extraer metadatos:
def getmeta(path, charcoding): # Path es el contenido en las variables 'tablas' para cada parametro
cat = r'\catalogos'
dic = r'\diccionario_de_datos'
metadict = {}
metapath = path.replace(cd, cat)
metafiles = os.listdir(metapath)
dicdict = {}
dicpath = path.replace(cd, dic)
dicfiles = os.listdir(dicpath)
for file in metafiles:
variable = file.replace('.csv', '')
if file.endswith('.csv'):
csvpath = metapath+'\\'+file
metadf = pd.DataFrame.from_csv(csvpath, parse_dates=False)
try:
metadf.index = metadf.index.map(str.lower)
except:
pass
metadict[variable] = metadf
else:
dothis = input('El archivo {} no es csv, que deseas hacer? [DD]etener [CC]ontinuar'.format(file))
dothis = dothis.lower()
if dothis == 'dd':
raise GeneratorExit('Script detenido por el usuario')
elif dothis == 'cc':
continue
else:
raise KeyError('No entendi la instruccion {}'.format(dothis))
for file in dicfiles:
if file.endswith('.csv'):
filename = file.replace('.csv', '')
csvpath = dicpath+'\\'+file
try:
dicdf = pd.read_csv(csvpath, skiprows=2, usecols=[1, 2], index_col=0, parse_dates=False).dropna()
except:
dicdf = pd.read_csv(csvpath, skiprows=2, usecols=[1, 2], index_col=0, parse_dates=False, encoding = charcoding).dropna()
dicdf.index = dicdf.index.map(str.lower)
dicdict[filename] = dicdf
return dicdict, metadict
# Funcion para revisar metadatos
def queryvar(var, tablelen=10, colprint = 125, dictio = p0306dic, metadat = p0306meta):
pdefault = pd.get_option('display.max_colwidth')
pd.set_option('display.max_colwidth', colprint) # Expande el espacio para imprimir columnas
print('"{}" :\n{}'.format(var, dictio.loc[var][0].upper()))
if len(metadat[var]) > tablelen:
print('{}\nImprimiendo {} de {} registros'.format('-'*40,tablelen, len(metadat[var])))
print(metadat[var].head(tablelen))
pd.set_option('display.max_colwidth', pdefault) # Regresa la variable de impresion de columnas a su default
"""
Explanation: Construction of standard datasets
The datasets for each parameter come from different census questions, so their structures are very dissimilar. For this reason:
(1) : Each parameter has to be processed individually.
(2) : It is convenient to extract the metadata of each parameter individually. For this purpose, the following script extracts the metadata of each dataset:
End of explanation
"""
# Creacion de diccionarios con metadatos para cada variable de P0306:
par = 'P0306'
p0306dic, p0306meta = getmeta(tablas['P0306'], 'mbcs')
print('Se extrajeron metadatos para las siguientes variables de {}:'.format(par))
for key in p0306meta.keys(): print(key)
print('\nDiccionarios disponibles para {}:'.format(par))
for key in p0306dic.keys(): print(key)
# Para P0306, solo existe una tabla de descripciones por lo que se convierte a un dataframe unico para poder indexar
p0306dic = p0306dic['diccionario_de_datos_programa_modernizacion_catastral_cngmd2015_dbf']
p0306dic
list(p0306dic.index)
queryvar('acc_modr')
print('** Descripciones de variables **\n'.upper())
for i in p0306dic.index:
queryvar(i)
print('\n')
# Carga de datos
P0306f = tablas['P0306']+'\\'+os.listdir(tablas['P0306'])[0]
df = pd.read_csv(P0306f, dtype={'ubic_geo':'str'})
df = df.rename(columns = {'ubic_geo':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P0306 = df.where((pd.notnull(df)), None)
"""
Explanation: P0306 - Programas de modernizaciรณn catastral
Existencia de programas de modernizaciรณn catastral en los municipios.
End of explanation
"""
# subset para pruebas
test = P0306.loc['15045']
test
"""
Explanation: El archivo estรก estructurado de manera inconveniente, teniendo un renglรณn para cada variable. Lo conveniente es que cada renglรณn contenga toda la informaciรณn de un solo municipio.
End of explanation
"""
queryvar('estructu')
# ยฟEl municipio cuenta con un programa de modernizaciรณn catastral?
P0306_00 = P0306[P0306['estructu'] == 240500]['prog_mod'].astype('int')
print(P0306_00.head(10))
print('-'*50)
queryvar('prog_mod')
# ยฟEn que periodo se realizaron las acciones del programa de modernizaciรณn catastral?
P0306_03 = P0306[P0306['estructu'] == 240503]['perio_ac'].astype('int')
print(P0306_03.head(10))
print('-'*50)
queryvar('perio_ac')
# ยฟQuรฉ acciones se realizaron?
P0306_02 = P0306[P0306['estructu'] == 240502]['acc_modr'].astype('int').groupby('CVE_MUN').apply(list)
print(P0306_02.head(10))
queryvar('acc_modr')
# ยฟCuantas acciones se realizaron?
P0306_02b = P0306_02.apply(len).rename('n_acc_modr')
P0306_02b.head(10)
queryvar('inst_enc')
# ยฟQue instituciones se han involucrado en la modernizacion catastral, y de quรฉ manera?
P0306_01t = P0306[P0306['estructu'] == 240501][['inst_enc', 'tip_inst']] # tipo de apoyo e institucion
P0306_01t.head()
"""
Explanation: To fix this, we first build separate dataframes for each variable. Fortunately, the 'estructu' column can be used to group the dataframe structurally.
End of explanation
"""
queryvar('tip_inst')
# Institucion involucrada
instit = {
1:'Administraciรณn pรบblica de la entidad federativa',
2:'BANOBRAS',
3:'SEDATU',
4:'OTRA INSTITUCION'
}
P0306_01t['tip_inst'] = P0306_01t['tip_inst'].replace(instit)
P0306_01t.head()
"""
Explanation: Se reemplazarรกn numeros por descripciones en tip_inst:
End of explanation
"""
queryvar('inst_enc')
P0306_01t1 = P0306_01t[P0306_01t['inst_enc'] == 1]['tip_inst'].groupby('CVE_MUN').apply(list).rename('i_coord_ejecuta')
P0306_01t2 = P0306_01t[P0306_01t['inst_enc'] == 2]['tip_inst'].groupby('CVE_MUN').apply(list).rename('i_otorga_apoyos')
P0306_01t1.head()
P0306_01t2.head()
"""
Explanation: Y se separarรก la columna 'inst_enc' en 2:
End of explanation
"""
# Convertir series en Dataframes
P0306_00 = P0306_00.to_frame()
P0306_03 = P0306_03.to_frame()
P0306_02 = P0306_02.to_frame()
P0306_02b = P0306_02b.to_frame()
P0306_01t1 = P0306_01t1.to_frame()
P0306_01t2 = P0306_01t2.to_frame()
# Unir dataframes
P0306 = P0306_00.join(P0306_03).join(P0306_02).join(P0306_02b).join(P0306_01t1).join(P0306_01t2)
P0306 = P0306.where((pd.notnull(P0306)), None)
P0306.head()
"""
Explanation: Finalmente, se unirรกn todas las series en un solo dataframe
End of explanation
"""
P0306meta = {
'Nombre del Dataset' : 'Censo Nacional de Gobiernos Municipales y Delegacionales 2015',
'Descripcion del dataset' : 'Censo Nacional de Gobiernos Municipales y Delegacionales 2015',
'Disponibilidad Temporal' : '2015',
'Periodo de actualizacion' : 'Bienal',
'Nivel de Desagregacion' : 'Municipal',
'Notas' : 's/n',
'Fuente' : 'INEGI',
'URL_Fuente' : 'http://www.beta.inegi.org.mx/contenidos/proyectos/censosgobierno/municipal/cngmd/2015/datosabiertos/',
'Dataset base' : '"P0306.xlsx" disponible en \nhttps://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/CNGMD/2015',
}
P0306meta = pd.DataFrame.from_dict(P0306meta, orient='index', dtype=None)
P0306meta.columns = ['Descripcion']
P0306meta = P0306meta.rename_axis('Metadato')
P0306meta
list(P0306meta)
P0306.head()
"""
Explanation: Metadata for P0306
End of explanation
"""
file = r'D:\PCCS\01_Dmine\Datasets\CNGMD\P0306.xlsx'
writer = pd.ExcelWriter(file)
P0306.to_excel(writer, sheet_name = 'P0306')
P0306meta.to_excel(writer, sheet_name ='METADATOS')
writer.save()
"""
Explanation: EXPORT TO EXCEL
End of explanation
"""
# Redefiniciรณn de la funciรณn para revisar metadatos, porque los datos de la carpeta 'catรกlogos' de P0307
# no coinciden con los titulos de las columnas en la carpeta 'Conjunto de datos'.
def getmetab(csvpath, textcoding):
# Importa el csv
try: dicdf = pd.read_csv(csvpath,
index_col=0,
parse_dates=False
)
except: dicdf = pd.read_csv(csvpath,
index_col=0,
parse_dates=False,
encoding = textcoding,
)
# Renombra las columnas
dicdf.columns = list(dicdf.iloc[1])
# Crea columna con el indice
dicdf['text_arc'] = dicdf.index
# Extrae el nombre del csv fuente en una columna independiente
def getarc(x):
try: return re.search('(?<=(o: ))([A-Z])\w+', x).group()
except: return None
dicdf['arc'] = dicdf['text_arc'].apply(lambda x: getarc(x))
# Extrae la descripcion del archivo en una columna independiente
def getdescarc(x):
try: return re.search('\(([^)]+)\)', x).group(1)
except: return None
dicdf['desc_arc'] = dicdf['text_arc'].apply(lambda x: getdescarc(x))
# Marca columnas que se van a eliminar (Las columnas de donde se sacaron las variables 'arc' y 'desc_arc')
dicdf['delete1'] = dicdf[list(dicdf.columns)[1:6]].notnull().sum(axis = 1)
# Rellenar valores NaN
dicdf = dicdf.fillna(method='ffill')
# Eliminar valores marcados previaente
dicdf = dicdf[dicdf.delete1 != 0]
# Eliminar encabezados de columna repetidos
dicdf = dicdf[dicdf.Descripciรณn != 'Descripciรณn']
# Asignar nuevo indice y eliminar columna 'arc'
dicdf = dicdf.set_index('arc')
# Elimina columna delete1
del dicdf['delete1']
# Renombra la columna de descripciones de codigos
dicdf.columns.values[5] = 'Descripcion codigos'
# Dame el DataFrame
return dicdf
# Tambiรฉn es necesario redefinir la funciรณn para hacer consultas a los metadatos
def queryvar(filename, var = '', tablelen=10, colprint = 125, dictio = metadatos):
pdefault = pd.get_option('display.max_colwidth')
pd.set_option('display.max_colwidth', colprint) # Expande el espacio para imprimir columnas
frame = dictio.loc[filename]
print('Archivo "{}.csv" {}'.format(filename, '-'*30)) # Muestra el nombre del archivo
print(frame.iloc[0]['desc_arc']) # Muestra la descripcion del archivo
if var == '': pass
else:
print('\n{}{}'.format(var.upper(), '-'*30)) # Muestra el nombre de la variable
varframe = frame[frame['Nombre de la \ncolumna'] == var.upper()] # Haz un subset con los datos de la variable
varframe = varframe.set_index('Cรณdigos vรกlidos en la columna')
print(varframe['Descripciรณn'][0]) # Muestra la descripcion de la variable
print(varframe[['Descripcion codigos']]) # Imprime las descripciones de codigos
csvpath = r'D:\PCCS\00_RawData\01_CSV\cngmd\2015\P0307\marco_regulatorio_cngmd2015_dbf\diccionario_de_datos\diccionario_de_datos_marco_regulatorio_cngmd2015.csv'
metadatos = getmetab(csvpath, 'mbcs')
# Definiciรณn de rutas de archivos
par = 'P0307'
P0307files = {}
for file in os.listdir(tablas[par]):
P0307files[file.replace('.csv', '')] = tablas[par]+'\\'+file
"""
Explanation: P0307: Substantive regulatory provisions on urban development or land-use planning
It is necessary to change the encoding to read the files for this parameter.
End of explanation
"""
for file in P0307files.keys():
print(file)
queryvar(file.upper())
print('\n')
"""
Explanation: The contents of the files in the "Conjunto de datos" (data set) folder are as follows:
End of explanation
"""
print('P0307 - {}\n'.format(descripciones['P0307']))
queryvar('m_regula'.upper())
# Carga de datos
P0307f = tablas['P0307']+'\\'+ os.listdir(tablas['P0307'])[4]
df = pd.read_csv(P0307f, dtype={'ubic_geo':'str'})
df = df.rename(columns = {'ubic_geo':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P0307 = df.where((pd.notnull(df)), None)
P0307.head()
P0307.columns
"""
Explanation: La informaciรณn para el parรกmetro P0307 se encuentra en el archivo M_REGULA.csv
End of explanation
"""
queryvar('m_regula'.upper(), 'tema_nis')
"""
Explanation: ยฟDรณnde estรกn los datos sobre desarrollo urbano y ordenamiento territorial?
End of explanation
"""
P0307 = P0307[P0307['tema_nis'] == 41]
P0307.head()
# Quita las columnas que estรฉn vacรญas
P0307 = P0307.dropna(axis=1, how = 'all')
P0307.head()
# Metadatos
meta = P0306meta
meta.at['Dataset base','Descripcion'] = meta.at['Dataset base','Descripcion'].replace('P0306', 'P0307')
meta
"""
Explanation: The urban development and land-use planning (DU and OT) data are in the TEMA_NIS column. Code 41 in this column indicates DU and OT.
End of explanation
"""
par = 'P0307'
file = r'D:\PCCS\01_Dmine\Datasets\CNGMD'+'\\'+par+'.xlsx'
writer = pd.ExcelWriter(file)
P0307.to_excel(writer, sheet_name = par)
meta.to_excel(writer, sheet_name ='METADATOS')
writer.save()
"""
Explanation: Export file
End of explanation
"""
# Rutas de archivos
param = 'P1001'
rutadatos = tablas[param]
rutameta = tablas[param].replace('conjunto_de_datos', 'diccionario_de_datos')
rutameta = rutameta + '\\' + os.listdir(rutameta)[0]
print('{}\n{}'.format(rutadatos, rutameta))
# Obtencion de metadatos
# Cada hoja de metadatos es muy muy similar, pero con muy ligeras variaciones
# La unica parte del proceso que es seguro automatizar es la importaciรณn del archivo hacia Python
def getmeta(csvpath, textcoding):
# Importa el csv
try:
dicdf = pd.read_csv(csvpath,
index_col=0,
parse_dates=False
)
except:
dicdf = pd.read_csv(csvpath,
index_col=0,
parse_dates=False,
encoding = textcoding,
)
# Renombra las columnas
dicdf.columns = list(dicdf.iloc[1])
# Dame el archivo
return dicdf
os.listdir(r'D:\PCCS\00_RawData\01_CSV\cngmd\2015\P1001\Rec_RSU_cngmd2015_csv\diccionario_de_datos')
metadatos = getmeta(rutameta, 'mbcs')
# Crea columna con el indice
metadatos['Nombre de la \ncolumna'] = metadatos.index
# Extrae el nombre del csv fuente en una columna independiente
def getarc(x):
try: return x.split(' ')[1]
except: return None
metadatos['archivo'] = metadatos['Nombre de la \ncolumna'].apply(getarc)
# Extrae la descripcion del archivo en una columna independiente
def getdescarc(x):
try: return x.split('(')[1].replace(')','')
except: return None
metadatos['desc_arc'] = metadatos['Nombre de la \ncolumna'].apply(getdescarc)
# En la columna 'arc', reemplaza las celdas cuyo valor es 'de'
metadatos['archivo'] = metadatos['archivo'].replace({'de':None})
# Marca columnas que se van a eliminar (Las columnas de donde se sacaron las variables 'arc' y 'desc_arc')
metadatos['delete1'] = metadatos[list(metadatos.columns)[1:6]].notnull().sum(axis = 1)
# Rellenar valores NaN
metadatos = metadatos.fillna(method='ffill')
# Eliminar valores marcados previaente
metadatos = metadatos[metadatos.delete1 != 0]
# Eliminar columnas sin datos
metadatos = metadatos.dropna(axis = 1, how = 'all')
# Eliminar encabezados de columna repetidos
metadatos = metadatos[metadatos.Descripciรณn != 'Descripciรณn']
# Asignar nuevo indice y eliminar columna 'text_arc'
metadatos = metadatos.set_index('archivo')
# Elimina columna delete1
del metadatos['delete1']
# Renombra la columna de descripciones de codigos
metadatos.columns.values[3] = 'Descripcion codigos'
# Reordena las columnas
neworder = ['Nombre de la \ncolumna', 'Descripciรณn', 'Tipo de dato', 'Rango vรกlido', 'Descripcion codigos',
'Pregunta textual', 'Pรกgina de Cuestionario', 'Definiciรณn', 'desc_arc']
metadatos = metadatos.reindex(columns= neworder)
# Renombra las columnas para que funcionen con queryvar
metadatos = metadatos.rename({'Rango vรกlido':'Cรณdigos vรกlidos en la columna'})
metadatos.head(3)
"""
Explanation: P1001 - Average daily amount of RSU collected
End of explanation
"""
metadatos.loc['secc_i_tr_cngmd15_m6'][metadatos.loc['secc_i_tr_cngmd15_m6']['Nombre de la \ncolumna'] == 'P2_2']
"""
Explanation: ยฟDonde estan los datos?
End of explanation
"""
# Definiciรณn de rutas a archivos de datos
Paramfiles = {}
for file in os.listdir(rutadatos):
Paramfiles[file.replace('.csv', '')] = rutadatos+'\\'+file
for file, path in Paramfiles.items():
print('{}:\n{}\n'.format(file, path))
# Carga de datos
P1001f = tablas[param]+'\\'+ os.listdir(tablas[param])[0]
df = pd.read_csv(P1001f, dtype={'folio':'str'}, encoding = 'mbcs')
df = df.rename(columns = {'folio':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P1001 = df.where((pd.notnull(df)), None)
P1001.head(1)
P1001 = P1001['p2_2'].to_frame()
P1001.head(1)
"""
Explanation: The data are found in the file secc_i_tr_cngmd15_m6, in column P2_2
End of explanation
"""
# Metadatos
meta = meta # Utiliza el archivo de metadatos que habรญas definido anteriormente
meta.at['Dataset base','Descripcion'] = '"P1001.xlsx" disponible en \nhttps://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/CNGMD/2015'
meta.at['Notas','Descripcion'] = 'p2_2: Cantidad de residuos sรณlidos recolectada en kilogramos.'
meta
file = r'D:\PCCS\01_Dmine\Datasets\CNGMD'+'\\'+param+'.xlsx'
writer = pd.ExcelWriter(file)
P1001.to_excel(writer, sheet_name = param)
meta.to_excel(writer, sheet_name ='METADATOS')
writer.save()
"""
Explanation: Export files
End of explanation
"""
# Rutas de archivos
param = 'P1006'
rutadatos = tablas[param]
rutameta = tablas[param].replace('conjunto_de_datos', 'diccionario_de_datos')
rutameta = rutameta + '\\' + os.listdir(rutameta)[0]
print('{}\n{}'.format(rutadatos, rutameta))
"""
Explanation: P1006 - Nรบmero de municipios con aplicaciรณn de programas locales orientados a la GIRSU
End of explanation
"""
# Definiciรณn de rutas a archivos de datos
Paramfiles = {}
for file in os.listdir(rutadatos):
Paramfiles[file.replace('.csv', '')] = rutadatos+'\\'+file
for file, path in Paramfiles.items():
print('{}:\n{}\n'.format(file, path))
os.listdir(tablas[param])[0]
# Carga de datos
P1006f = tablas[param]+'\\'+ os.listdir(tablas[param])[0]
df = pd.read_csv(P1006f, dtype={'folio':'str'}, encoding = 'mbcs')
df = df.rename(columns = {'folio':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P1006 = df.where((pd.notnull(df)), None)
"""
Explanation: ยฟDonde estan los datos?
El archivo secc_v_tr_cngmd15_m6.csv Contiene variables que caracterizan a los municipios de acuerdo a los programas orientados a la gestiรณn integral de los residuos sรณlidos urbanos, durante el aรฑo 2014. En este archivo, la columna P13 Indica si se cuenta con algรบn programa orientado a la gestiรณn integral de residuos sรณlidos urbanos (1 = Cuenta con Programas; 2 = No cuenta con programas).
El archivo secc_v_tr_cngmd15_m6_p13_1.csv Contiene la variable P13_1_1_2, que indica el tipo de programa orientado a la gestiรณn integral de residuos sรณlidos urbanos.
End of explanation
"""
P1006 = P1006['p13'].to_frame()
# Metadatos
meta = meta # Utiliza el archivo de metadatos que habรญas definido anteriormente
meta.at['Dataset base','Descripcion'] = '"P1006.xlsx" disponible en \nhttps://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/CNGMD/2015'
meta.at['Notas','Descripcion'] = 'En la columna p13, ยฟEl municipio cuenta con Programas de Gestion de Residuos? 1: Si, 2: No'
meta
file = r'D:\PCCS\01_Dmine\Datasets\CNGMD'+'\\'+param+'.xlsx'
writer = pd.ExcelWriter(file)
P1006.to_excel(writer, sheet_name = param)
meta.to_excel(writer, sheet_name ='METADATOS')
writer.save()
"""
Explanation: Export files
End of explanation
"""
# Rutas de archivos
param = 'P1009'
rutadatos = tablas[param]
rutameta = tablas[param].replace('conjunto_de_datos', 'diccionario_de_datos')
rutameta = rutameta + '\\' + os.listdir(rutameta)[0]
print('{}\n{}'.format(rutadatos, rutameta))
"""
Explanation: P1009 - Nรบmero de municipios con estudios de generaciรณn de RSU
End of explanation
"""
# Definiciรณn de rutas a archivos de datos
Paramfiles = {}
for file in os.listdir(rutadatos):
Paramfiles[file.replace('.csv', '')] = rutadatos+'\\'+file
for file, path in Paramfiles.items():
print('{}:\n{}\n'.format(file, path))
# Carga de datos
P1009f = tablas[param]+'\\'+ os.listdir(tablas[param])[0]
df = pd.read_csv(P1009f, dtype={'folio':'str'}, encoding = 'mbcs')
df = df.rename(columns = {'folio':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P1009 = df.where((pd.notnull(df)), None)
del(P1009['entidad'])
del(P1009['municipio'])
"""
Explanation: ยฟDonde estรกn los datos?
secc_iv_tr_cngmd15_m6 Contiene variables que caracterizan a los municipios de acuerdo a los estudios sobre la generaciรณn y composiciรณn de los residuos sรณlidos urbanos, durante el aรฑo 2014.
La columna P12 Indica si se cuenta con algรบn estudio sobre la generaciรณn de residuos sรณlidos urbanos (1 = Si; 2 = No).
End of explanation
"""
meta
# Metadatos
meta = meta # Utiliza el archivo de metadatos que habรญas definido anteriormente
meta.at['Dataset base','Descripcion'] = '"P1009.xlsx" disponible en \nhttps://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/CNGMD/2015'
meta.at['Notas','Descripcion'] = 'Para la columna P12, ยฟEl Municipio cuenta con estudios de generacion de residuos? 1: Si 2: No'
meta
file = r'D:\PCCS\01_Dmine\Datasets\CNGMD'+'\\'+param+'.xlsx'
writer = pd.ExcelWriter(file)
P1009.to_excel(writer, sheet_name = param)
meta.to_excel(writer, sheet_name ='METADATOS')
writer.save()
"""
Explanation: Export files
End of explanation
"""
tablasP1003
"""
Explanation: P1003 - Nรบmero de municipios con disponibilidad de servicios relacionados con los RSU
ยฟDonde estan los datos?
La informacion de este parametro se encuentra dividida entre diferentes carpetas.
End of explanation
"""
# Rutas de archivos
param = 'P1003'
rutasdatos = list(tablasP1003.values())
for ruta in rutasdatos:
print(ruta)
# Definiciรณn de rutas a archivos de datos
Paramfiles = {}
for rutadatos in rutasdatos:
for file in os.listdir(rutadatos):
Paramfiles[file.replace('.csv', '')] = rutadatos+'\\'+file
for file, path in Paramfiles.items():
print('{}:\n{}\n'.format(file, path))
# Carga de datos
# Es necesario hacer 3 dataframes, uno por cada archivo, y despuรฉs unir las columnas para cada parรกmetro.
P1003f1 = Paramfiles['secc_i_tr_cngmd15_m6']
df = pd.read_csv(P1003f1, dtype={'folio':'str'}, encoding = 'mbcs')
df = df.rename(columns = {'folio':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P1003f1 = df.where((pd.notnull(df)), None)
P1003f2 = Paramfiles['secc_ii_tr_cngmd15_m6']
df = pd.read_csv(P1003f2, dtype={'folio':'str'}, encoding = 'mbcs')
df = df.rename(columns = {'folio':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P1003f2 = df.where((pd.notnull(df)), None)
# El Parametro en realidad no utiliza el numero de sitios de disposicion de residuos.
# Y no estรก documentado el significado de NS en la columna P11 lo que dificulta la lectura de los datos
'''
P1003f3 = Paramfiles['secc_iii_tr_cngmd15_m6']
df = pd.read_csv(P1003f3, dtype={'folio':'str'}, encoding = 'mbcs')
df = df.rename(columns = {'folio':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P1003f3 = df.where((pd.notnull(df)), None)
'''
# Aislar datos de interรฉs
P1003 = P1003f1['p1'].to_frame()
P1003['p10'] = P1003f2['p10']
# P1003['p11'] = P1003f3['p11'] #p11 se excluye del analisis por los motivos descritos antes
P1003.head(1)
"""
Explanation: Folder 1 contains 2 files:
secc_i_tr_cngmd15_m6.csv - Contains variables that characterize the municipalities according to the collection of urban solid waste during 2014. In this file, variable P1 indicates the availability of the collection service (1: Yes, 2: No)
secc_i_tr_cngmd15_m6_p6_3_2.csv - Contains variables that characterize the municipalities according to the vehicle fleet used for the collection and transport of urban solid waste during 2014. In this file, variable P6_3_2_1_3 contains the number of vehicles used for the collection of urban solid waste. (This variable can be used to build parameter 1005)
Folder 2 contains 1 file:
secc_ii_tr_cngmd15_m6.csv - Contains variables that characterize the municipalities according to the treatment of the waste during 2014. In this file, variable P10 identifies whether at least a fraction of the urban solid waste collected by the municipality or delegation is sent to treatment plants (1: Yes, 2: No)
Folder 3 contains 1 file:
secc_iii_tr_cngmd15_m6.csv - Contains variables that characterize the municipalities according to the final disposal of urban solid waste during 2014. In this file, variable P11 identifies the number of final disposal sites to which the waste collected in the whole municipality or delegation is sent
End of explanation
"""
# Metadatos
meta = meta # Utiliza el archivo de metadatos que habรญas definido anteriormente
meta.at['Dataset base','Descripcion'] = '"P1003.xlsx" disponible en \nhttps://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/CNGMD/2015'
meta.at['Notas','Descripcion'] = 'para p1: ยฟDispone de servicio de recoleccion? (1: Si 2: No)\npara p10: ยฟAl menos una fracciรณn de los RSU es enviada a plantas de tratamiento? (1: Si 2: No)\npara p11: ยฟA cuantos sitios de disposiciรณn final son remitidos los residuos?'
meta
param
file = r'D:\PCCS\01_Dmine\Datasets\CNGMD'+'\\'+param+'.xlsx'
writer = pd.ExcelWriter(file)
P1003.to_excel(writer, sheet_name = param)
meta.to_excel(writer, sheet_name ='METADATOS')
writer.save()
"""
Explanation: Export files
End of explanation
"""
# Carga de datos
P1005f = Paramfiles['secc_i_tr_cngmd15_m6_p6_3_2']
df = pd.read_csv(P1005f, dtype={'FOLIO':'str'}, encoding = 'mbcs')
df = df.rename(columns = {'FOLIO':'CVE_MUN'})
df.set_index('CVE_MUN', inplace = True)
P1005f = df.where((pd.notnull(df)), None)
P1005f.head(1)
"""
Explanation: P1005 - Nรบmero de vehรญculos utilizados para la recolecciรณn de residuos sรณlidos urbanos
ยฟDonde estรกn los datos?
La Carpeta 1 de P1003 (Procesada previamente) contiene 2 archivos:
secc_i_tr_cngmd15_m6.csv, y
secc_i_tr_cngmd15_m6_p6_3_2.csv - Contiene variables que caracterizan a los municipios de acuerdo al parque vehicular utilizado para la recolecciรณn y traslado de residuos sรณlidos urbanos, durante el aรฑo 2014. En este archivo, la variable P6_3_2_1_3 contiene el nรบmero de vehรญculos utilizados para la recolecciรณn de Residuos solidos urbanos. (Esta variable puede utilizarse para la construcciรณn del parรกmetro 1005)
End of explanation
"""
P1005 = P1005f['P6_3_2_1_3'].to_frame()
P1005.head(3)
# Metadatos
meta = meta # Utiliza el archivo de metadatos que habรญas definido anteriormente
meta.at['Dataset base','Descripcion'] = '"P1005.xlsx" disponible en \nhttps://github.com/INECC-PCCS/01_Dmine/tree/master/Datasets/CNGMD/2015'
meta.at['Notas','Descripcion'] = 'P6_3_2_1_3: Numero de vehiculos utilizados para la recolecciรณn de Residuos Solidos Urbanos'
meta
param = 'P1005'
file = r'D:\PCCS\01_Dmine\Datasets\CNGMD'+'\\'+param+'.xlsx'
writer = pd.ExcelWriter(file)
P1005.to_excel(writer, sheet_name = param)
meta.to_excel(writer, sheet_name ='METADATOS')
writer.save()
"""
Explanation: Export files
End of explanation
"""
|
afedynitch/MCEq | examples/Compare_primary_fluxes.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import numpy as np
#import solver related modules
from MCEq.core import MCEqRun
import mceq_config as config
#import primary model choices
import crflux.models as pm
"""
Explanation: Dependence on primary cosmic ray flux
End of explanation
"""
mceq_run = MCEqRun(
#provide the string of the interaction model
interaction_model='SIBYLL2.3c',
#primary cosmic ray flux model
#support a tuple (primary model class (not instance!), arguments)
primary_model=(pm.HillasGaisser2012, "H3a"),
# Zenith angle in degrees. 0=vertical, 90=horizontal
theta_deg=0.0
)
"""
Explanation: Create an instance of an MCEqRun class. Most options are defined in the mceq_config module, and do not require change. Look into mceq_config.py or use the documentation.
If the initialization succeeds it will print out some information according to the debug level.
End of explanation
"""
# Bump up the debug level to see what the calculation is doing
config.debug_level = 2
#Define equidistant grid in cos(theta)
angles = np.arccos(np.linspace(1,0,11))*180./np.pi
#Power of energy to scale the flux
mag = 3
#obtain energy grid (never changes) of the solution for the x-axis of the plots
e_grid = mceq_run.e_grid
#Initialize empty list to collect the flux results for each primary model
p_spectrum_flux = []
for pmcount, pmodel in enumerate([(pm.HillasGaisser2012,'H3a'),
(pm.HillasGaisser2012,'H4a'),
(pm.GaisserStanevTilav,'3-gen'),
(pm.GaisserStanevTilav,'4-gen')]):
mceq_run.set_primary_model(*pmodel)
flux = {}
for frac in ['mu_conv','mu_pr','mu_total',
'numu_conv','numu_pr','numu_total',
'nue_conv','nue_pr','nue_total','nutau_pr']:
flux[frac] = np.zeros_like(e_grid)
#Sum fluxes, calculated for different angles
for theta in angles:
mceq_run.set_theta_deg(theta)
mceq_run.solve()
#_conv means conventional (mostly pions and kaons)
flux['mu_conv'] += (mceq_run.get_solution('conv_mu+', mag)
+ mceq_run.get_solution('conv_mu-', mag))
# _pr means prompt (the mother of the muon had a critical energy
# higher than a D meson. Includes all charm and direct resonance
# contribution)
flux['mu_pr'] += (mceq_run.get_solution('pr_mu+', mag)
+ mceq_run.get_solution('pr_mu-', mag))
# total means conventional + prompt
flux['mu_total'] += (mceq_run.get_solution('total_mu+', mag)
+ mceq_run.get_solution('total_mu-', mag))
# same meaning of prefixes for muon neutrinos as for muons
flux['numu_conv'] += (mceq_run.get_solution('conv_numu', mag)
+ mceq_run.get_solution('conv_antinumu', mag))
flux['numu_pr'] += (mceq_run.get_solution('pr_numu', mag)
+ mceq_run.get_solution('pr_antinumu', mag))
flux['numu_total'] += (mceq_run.get_solution('total_numu', mag)
+ mceq_run.get_solution('total_antinumu', mag))
# same meaning of prefixes for electron neutrinos as for muons
flux['nue_conv'] += (mceq_run.get_solution('conv_nue', mag)
+ mceq_run.get_solution('conv_antinue', mag))
flux['nue_pr'] += (mceq_run.get_solution('pr_nue', mag)
+ mceq_run.get_solution('pr_antinue', mag))
flux['nue_total'] += (mceq_run.get_solution('total_nue', mag)
+ mceq_run.get_solution('total_antinue', mag))
# since there are no conventional tau neutrinos, prompt=total
flux['nutau_pr'] += (mceq_run.get_solution('total_nutau', mag)
+ mceq_run.get_solution('total_antinutau', mag))
#average the results
for frac in ['mu_conv','mu_pr','mu_total',
'numu_conv','numu_pr','numu_total',
'nue_conv','nue_pr','nue_total','nutau_pr']:
flux[frac] = flux[frac]/float(len(angles))
p_spectrum_flux.append((flux,mceq_run.pmodel.sname,mceq_run.pmodel.name))
"""
Explanation: Solve and store results
The code below computes fluxes of muons and neutrinos, averaged over all directions, for different primary cosmic ray flux models.
End of explanation
"""
#get path of the home directory + Desktop (the os module was not imported above, so import it here)
import os
desktop = os.path.join(os.path.expanduser("~"), 'Desktop')
for pref, lab in [('numu_',r'\nu_\mu'),
('mu_',r'\mu'),
('nue_',r'\nu_e')
]:
plt.figure(figsize=(4.5, 3.5))
for (flux, p_sname, p_name), col in zip(p_spectrum_flux,['k','r','g','b','c']):
plt.loglog(e_grid, flux[pref + 'total'], color=col, ls='-', lw=2.5,
label=p_sname, alpha=0.4)
plt.loglog(e_grid, flux[pref + 'conv'], color=col, ls='--', lw=1,
label='_nolabel_')
plt.loglog(e_grid, flux[pref + 'pr'], color=col,ls='-', lw=1,
label='_nolabel_')
plt.xlim(50,1e9)
plt.ylim(1e-5,1)
plt.xlabel(r"$E_{{{0}}}$ [GeV]".format(lab))
plt.ylabel(r"$\Phi_{" + lab + "}$ (E/GeV)$^{" + str(mag) +" }$" +
"(cm$^{2}$ s sr GeV)$^{-1}$")
plt.legend(loc='upper right')
plt.tight_layout()
    # Uncomment if you want to save the plot
# plt.savefig(os.path.join(desktop,pref + 'flux.pdf'))
"""
Explanation: Plot with matplotlib
End of explanation
"""
for (flux, p_sname, p_name) in p_spectrum_flux:
    # np.column_stack materializes the rows (zip() is lazy in Python 3, which would break np.savetxt)
    np.savetxt(os.path.join(desktop, 'numu_flux_' + p_sname + '.txt'),
               np.column_stack((e_grid,
                                flux['mu_conv'], flux['mu_pr'], flux['mu_total'],
                                flux['numu_conv'], flux['numu_pr'], flux['numu_total'],
                                flux['nue_conv'], flux['nue_pr'], flux['nue_total'],
                                flux['nutau_pr'])),
fmt='%6.5E',
header=('lepton flux scaled with E**{0}. Order (E, mu_conv, mu_pr, mu_total, ' +
'numu_conv, numu_pr, numu_total, nue_conv, nue_pr, nue_total, ' +
'nutau_pr').format(mag)
)
"""
Explanation: Save as an ASCII file for other types of processing
End of explanation
"""
|
ds-hwang/deeplearning_udacity | udacity_notebook/1_notmnist.ipynb | mit | # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Configure the matplotlib backend to plot inline in IPython
%matplotlib inline
"""
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
"""
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress.
"""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
"""
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the test set 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
"""
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
"""
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
"""
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
for image_index, image in enumerate(image_files):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index + 1
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
"""
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
"""
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
"""
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
"""
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
"""
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
"""
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
Finally, let's save the data for later reuse:
End of explanation
"""
|
me-surrey/dl-gym | .ipynb_checkpoints/10_introduction_to_artificial_neural_networks-checkpoint.ipynb | apache-2.0 | # To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
"""
Explanation: Chapter 10 โ Introduction to Artificial Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 10.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
"""
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
X = iris.data[:, (2, 3)] # petal length, petal width
y = (iris.target == 0).astype(np.int)
per_clf = Perceptron(random_state=42)
per_clf.fit(X, y)
y_pred = per_clf.predict([[2, 0.5]])
y_pred
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]
axes = [0, 5, 0, 2]
x0, x1 = np.meshgrid(
np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")
plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)
save_fig("perceptron_iris_plot")
plt.show()
"""
Explanation: Perceptrons
End of explanation
"""
def logit(z):
return 1 / (1 + np.exp(-z))
def relu(z):
return np.maximum(0, z)
def derivative(f, z, eps=0.000001):
return (f(z + eps) - f(z - eps))/(2 * eps)
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=2, label="Step")
plt.plot(z, logit(z), "g--", linewidth=2, label="Logit")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=2, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(logit, z), "g--", linewidth=2, label="Logit")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("activation_functions_plot")
plt.show()
def heaviside(z):
return (z >= 0).astype(z.dtype)
def sigmoid(z):
return 1/(1+np.exp(-z))
def mlp_xor(x1, x2, activation=heaviside):
return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)
z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)
plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)
"""
Explanation: Activation functions
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_train = mnist.train.images
X_test = mnist.test.images
y_train = mnist.train.labels.astype("int")
y_test = mnist.test.labels.astype("int")
import tensorflow as tf
config = tf.contrib.learn.RunConfig(tf_random_seed=42) # not shown in the config
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300,100], n_classes=10,
feature_columns=feature_cols, config=config)
dnn_clf = tf.contrib.learn.SKCompat(dnn_clf) # if TensorFlow >= 1.1
dnn_clf.fit(X_train, y_train, batch_size=50, steps=40000)
from sklearn.metrics import accuracy_score
y_pred = dnn_clf.predict(X_test)
accuracy_score(y_test, y_pred['classes'])
from sklearn.metrics import log_loss
y_pred_proba = y_pred['probabilities']
log_loss(y_test, y_pred_proba)
"""
Explanation: FNN for MNIST
using tf.learn
End of explanation
"""
import tensorflow as tf
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
def neuron_layer(X, n_neurons, name, activation=None):
with tf.name_scope(name):
n_inputs = int(X.get_shape()[1])
stddev = 2 / np.sqrt(n_inputs)
init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
W = tf.Variable(init, name="kernel")
b = tf.Variable(tf.zeros([n_neurons]), name="bias")
Z = tf.matmul(X, W) + b
if activation is not None:
return activation(Z)
else:
return Z
with tf.name_scope("dnn"):
hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images,
y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
X_new_scaled = mnist.test.images[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
print("Predicted classes:", y_pred)
print("Actual classes: ", mnist.test.labels[:20])
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
"""
Explanation: Using plain TensorFlow
End of explanation
"""
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
show_graph(tf.get_default_graph())
"""
Explanation: Using dense() instead of neuron_layer()
Note: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function, except for a few minor differences:
* several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.
* the default activation is now None rather than tf.nn.relu.
* a few more differences are presented in chapter 11.
End of explanation
"""
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
loss_summary = tf.summary.scalar('log_loss', loss)
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
"""
Explanation: Exercise solutions
1. to 8.
See appendix A.
9.
Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on).
First let's create the deep net. It's exactly the same as earlier, with just one addition: we add a tf.summary.scalar() to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.
End of explanation
"""
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
logdir = log_dir("mnist_dnn")
"""
Explanation: Now we need to define the directory to write the TensorBoard logs to:
End of explanation
"""
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
"""
Explanation: Now we can create the FileWriter that we will use to write the TensorBoard logs:
End of explanation
"""
X_valid = mnist.validation.images
y_valid = mnist.validation.labels
m, n = X_train.shape
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_deep_mnist_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_deep_mnist_model"
best_loss = np.infty
epochs_without_progress = 0
max_epochs_without_progress = 50
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})
file_writer.add_summary(accuracy_summary_str, epoch)
file_writer.add_summary(loss_summary_str, epoch)
if epoch % 5 == 0:
print("Epoch:", epoch,
"\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),
"\tLoss: {:.5f}".format(loss_val))
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
if loss_val < best_loss:
saver.save(sess, final_model_path)
best_loss = loss_val
else:
epochs_without_progress += 5
if epochs_without_progress > max_epochs_without_progress:
print("Early stopping")
break
os.remove(checkpoint_epoch_path)
with tf.Session() as sess:
saver.restore(sess, final_model_path)
accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})
accuracy_val
"""
Explanation: Hey! Why don't we implement early stopping? For this, we are going to need a validation set. Luckily, the dataset returned by TensorFlow's input_data() function (see above) is already split into a training set (55,000 instances, already shuffled for us), a validation set (5,000 instances) and a test set (10,000 instances). So we can easily define X_valid and y_valid:
End of explanation
"""
|
ellisztamas/faps | docs/tutorials/.ipynb_checkpoints/03_paternity_arrays-checkpoint.ipynb | mit | import faps as fp
import numpy as np
print("Created using FAPS version {}.".format(fp.__version__))
"""
Explanation: Paternity arrays
Tom Ellis, March 2017, updated June 2020
End of explanation
"""
np.random.seed(27) # this ensures you get exactly the same answers as I do.
allele_freqs = np.random.uniform(0.3,0.5, 50)
mypop = fp.make_parents(4, allele_freqs, family_name='my_population')
progeny = fp.make_sibships(mypop, 0, [1,2], 3, 'myprogeny')
"""
Explanation: Paternity arrays are the what sibship clustering is built on in FAPS. They contain information about the probability that each candidate male is the father of each individual offspring - this is what the FAPS paper refers to as matrix G. This information is stored in a paternityArray object, along with other related information. A paternityArray can either be imported directly, or created from genotype data.
This notebook will examine how to:
Create a paternityArray from marker data.
Examine what information it contains.
Read and write a paternityArray to disk, or import a custom paternityArray.
Once you have made your paternityArray, the next step is to cluster the individuals in your array into full sibship groups.
Note that this tutorial only deals with the case where you have a paternityArray object for a single maternal family. If you have multiple families, you can apply what is here to each one, but you'll have to iterate over those families. See the specific tutorial on that.
Creating a paternityArray from genotype data
To create a paternityArray from genotype data we need to specficy genotypeArrays for the offspring, mothers and candidate males. Currently only biallelic SNP data are supported.
We will illustrate this with a small simulated example again with four adults and six offspring typed at 50 loci.
End of explanation
"""
mum_index = progeny.parent_index('mother', mypop.names) # positions of the mothers in the array of adults
mothers = mypop.subset(mum_index) # genotypeArray of the mothers
"""
Explanation: We need to supply a genotypeArray for the mothers. This needs to have an entry for every offspring, i.e. six replicates of the mother.
End of explanation
"""
error_rate = 0.0015
patlik = fp.paternity_array(
offspring = progeny,
mothers = mothers,
males= mypop,
mu=error_rate)
"""
Explanation: To create the paternityArray we also need to supply information on the genotyping error rate (mu). In this toy example we know the error rate to be zero. However, in reality this will almost never be true, and moreover, sibship clustering becomes unstable when errors are zero, so we will use a small number for the error rate.
End of explanation
"""
print(patlik.candidates)
print(patlik.mothers)
print(patlik.offspring)
"""
Explanation: paternityArray structure
Basic attributes
A paternityArray inherits information about individuals found in a genotypeArray. For example, labels of the candidates, mothers and offspring.
End of explanation
"""
patlik.lik_array
"""
Explanation: Representation of matrix G
The FAPS paper began with matrix G that gives probabilities that each individual is sired by each candidate father, or that the true father is absent from the sample. Recall that this matrix had a row for every offspring and a column for every candidate father, plus an additional column for the probability that the father was unsampled, and that these rows sum to one. The relative weight given to these two sections of G is determined by our prior expectation p about what proportion of true fathers were sampled. This section will examine how that matrix is constructed.
The most important part of the paternityArray is the likelihood array, which represents the log likelihood that each candidate male is the true father of each offspring individual. In this case it will be a 6x4 dimensional array with a row for each offspring and a column for each candidate.
End of explanation
"""
patlik.lik_absent
"""
Explanation: You can see that the log likelihoods of paternity for the first individual are much lower than the other candidates. This individual is the mother, so this makes sense. You can also see that the highest log likelihoods are in the columns for the real fathers (the 2nd column in rows one to three, and the third column in rows four to six).
The paternityArray also includes information that the true sire is not in the sample of candidate males. In this case this is not helpful, because we know sampling is complete, but in real examples this is seldom the case. By default this is defined as the likelihood of generating the offspring genotypes given the known mother's genotype and alleles drawn from population allele frequencies. Here, values for the six offspring are higher than the likelihoods for the non-sires, indicating that those candidates are no more likely to be the true sire than a random unrelated individual.
End of explanation
"""
patlik.missing_parents = 0.1
"""
Explanation: The numbers in the two previous cells are (log) likelihoods, either of paternity, or that the father was missing. These are estimated from the marker data and are not normalised to probabilities. To join these bits of information together, we also need to specify a prior belief about the proportion of fathers we think we sampled, based on domain expertise in the system; this should be a float between 0 and 1.
Let's assume that we think we missed 10% of the fathers and set that as an attribute of the paternityArray object:
End of explanation
"""
print(patlik.lik_array.shape)
print(patlik.prob_array().shape)
"""
Explanation: The function prob_array creates the G matrix by multiplying lik_absent by 0.1 and lik_array by 0.9 (i.e. 1-0.1), then normalising the rows to sum to one. This returns a matrix with one more column than lik_array had.
End of explanation
"""
np.exp(patlik.prob_array()).sum(axis=1)
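# For intuition only, a hand-rolled sketch of roughly what prob_array() does
# (not the library's own implementation): weight the two likelihood blocks by
# the prior on the log scale, then renormalise each row.
# from scipy.special import logsumexp
# G = np.column_stack([patlik.lik_array + np.log(1 - 0.1),
#                      patlik.lik_absent + np.log(0.1)])
# G = G - logsumexp(G, axis=1, keepdims=True)  # rows of np.exp(G) now sum to one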
"""
Explanation: Note that FAPS is doing this on the log scale under the hood. To check it's working, we can check that the rows sum to one.
End of explanation
"""
patlik.missing_parents = 0
patlik.prob_array()
"""
Explanation: If we were sure we really had sampled every single father, we could set the proportion of missing fathers to 0. This will throw a warning urging you to be cautious about that, but will run. We can see that the last column has been set to negative infinity, which is log(0).
End of explanation
"""
patlik = fp.paternity_array(
offspring = progeny,
mothers = mothers,
males= mypop,
mu=error_rate,
missing_parents=0.1)
"""
Explanation: You can also set the proportion of missing fathers directly when you create the paternity array.
End of explanation
"""
patlik.selfing_rate=0
patlik.prob_array()
"""
Explanation: Modifying a paternityArray
In the previous example we saw how to set the proportion of missing fathers by changing the attributes of the paternityArray object. There are a few other attributes that can be set that will modify the G matrix before passing this on to cluster offspring into sibships.
Selfing rate
Often the mother is included in the sample of candidate males, either because you are using the same array for multiple families, or self-fertilisation is a biological possibility. In a lot of cases though the mother cannot simultaneously be the sperm/pollen donor, and it is necessary to set the rate of self-fertilisation to zero (the natural logarithm of zero is negative infinity). This can be done simply by setting the attribute selfing_rate to zero:
End of explanation
"""
patlik.selfing_rate=0.95
patlik.prob_array()
"""
Explanation: This has set the prior probability of paternity of the mother (column zero above) to negative infinity (i.e log(zero)). You can set any selfing rate between zero and one if you have a good idea of what the value should be and how much it varies. For example, Arabidopsis thaliana selfs most of the time, so we could set a selfing rate of 95%.
End of explanation
"""
patlik.purge = 'my_population_3'
patlik.prob_array()
"""
Explanation: However, notice that despite the strong prior favouring the mother, she still doesn't have the highest probability of paternity for any offspring. That's because the signal from the genetic markers is so strong that the true fathers still come out on top.
Removing individual candidates
You can also set likelihoods for particular individuals to zero manually. You might want to do this if you wanted to test the effects of incomplete sampling on your results, or if you had a good reason to suspect that some candidates could not possibly be the sire (for example, if the data are multigenerational, and the candidate was born after the offspring). Let's remove candidate 3:
End of explanation
"""
patlik.purge = ['my_population_0', 'my_population_3']
patlik.prob_array()
"""
Explanation: This also works using a list of candidates.
End of explanation
"""
patlik.purge = 0.4
patlik.prob_array()
"""
Explanation: This has removed the first individual (notice that this is identical to the previous example, because in this case the first individual is the mother). Alternatively you can supply a float between zero and one, which will be interpreted as a proportion of the candidates to be removed at random, which can be useful for simulations.
End of explanation
"""
patlik.max_clashes=3
"""
Explanation: Reducing the number of candidates
You might want to remove candidates who have an a priori very low probability of paternity, for example to reduce the memory requirements of the paternityArray. One simple rule is to exclude any candidates with more than some arbitrary number of loci with opposing homozygous genotypes relative to the offspring (you want to allow for a small number, in case there are genotyping errors). This is done with max_clashes.
End of explanation
"""
patlik.clashes
"""
Explanation: The option max_clashes refers back to a matrix that counts the number of such incompatibilities for each offspring-candidate pair. When you create a paternityArray from genotypeArray objects, this matrix is created automatically and can be called with:
End of explanation
"""
fp.incompatibilities(mypop, progeny)
"""
Explanation: If you import a paternityArray object, this isn't automatically generated, but you can recreate this manually with:
End of explanation
"""
patlik = fp.paternity_array(
offspring = progeny,
mothers = mothers,
males= mypop,
mu=error_rate,
missing_parents=0.1,
purge = 'my_population_3',
selfing_rate = 0
)
"""
Explanation: Notice that this array has a row for each offspring, and a column for each candidate father. The first column is for the mother, which is why everything is zero.
Modifying arrays on creation
You can also supply the attributes we just described directly when you create the paternityArray object. For example:
End of explanation
"""
patlik.write('../../data/mypatlik.csv')
"""
Explanation: Importing a paternityArray
Frequently you may wish to save an array and reload it. Otherwise, you may be working with a more exotic
system than FAPS currently supports, such as microsatellite markers or a funky ploidy system. In this case you can create your own matrix of paternity likelihoods and import this directly as a paternityArray. Firstly, we can save the array we made before to disk by supplying a path to save to:
End of explanation
"""
patlik = fp.read_paternity_array(
path = '../../data/mypatlik.csv',
mothers_col=1,
likelihood_col=2)
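# The file read above looks roughly like this (a sketch of the layout described
# below, with hypothetical numbers; the real column labels come from the data):
# offspring, mother, my_population_0, ..., my_population_3, missing_father
# <offspring name>, my_population_0, -110.2, ..., -45.8, -70.1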
"""
Explanation: We can reimport it again using read_paternity_array. This function is similar to the function for importing a genotypeArray, and the data need to have a specific structure:
Offspring names should be given in the first column
Names of the mothers are usually given in the second column.
If known for some reason, names of fathers can be given as well.
Likelihood information should be given to the right of columns indicating individual or parental names, with candidates' names in the column headers.
The final column should specify a likelihood that the true sire of an individual has not been sampled. Usually this is given as the likelihood of drawing the paternal alleles from population allele frequencies.
End of explanation
"""
fp.incompatibilities(mypop, progeny)
"""
Explanation: Of course, you can generate your own paternityArray and import it in the same way. This is especially useful if your study system has some specific marker type or genetic system not supported by FAPS.
One caveat with importing data is that the array of opposing homozygous loci is not imported automatically. You can either import this as a separate text file, or you can recreate this as above:
End of explanation
"""
|
Unidata/unidata-python-workshop | notebooks/Skew_T/SkewT_and_Hodograph.ipynb | mit | # Create a datetime for our request - notice the times are from largest (year) to smallest (hour)
from datetime import datetime
request_time = datetime(1999, 5, 3, 12)
# Store the station name in a variable for flexibility and clarity
station = 'OUN'
# Import the Wyoming simple web service and request the data
# Don't worry about a possible warning from Pandas - it's related to our handling of units
from siphon.simplewebservice.wyoming import WyomingUpperAir
df = WyomingUpperAir.request_data(request_time, station)
# Let's see what we got in return
df.head()
"""
Explanation: <a name="pagetop"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Upper Air and the Skew-T Log-P</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="https://unidata.github.io/MetPy/latest/_images/sphx_glr_Advanced_Sounding_001.png" alt="Example Skew-T" style="height: 500px;"></div>
Overview:
Teaching: 25 minutes
Exercises: 25 minutes
Questions
Where can upper air data be found and what format is it in?
How can I obtain upper air data programatically?
How can MetPy be used to make a Skew-T Log-P diagram and associated fiducial lines?
How are thermodynamic calculations performed on upper-air data?
Table of Contents
<a href="#upperairdata">Obtain upper air data</a>
<a href="#makeskewt">Make a Skew-T</a>
<a href="#thermo">Thermodynamics</a>
<a href="#hodograph">Plotting a Hodograph</a>
<a href="#advanced">Advanced Layout</a>
<hr style="height:2px;">
<a name="upperairdata"></a>
Obtain upper air data
Overview
Upper air observations are generally reported as a plain text file in a tabular format that represents the down-sampled raw data transmitted by the rawinsonde. Data are reported at mandatory levels and at levels of significant change. An example of sounding data may look like this:
```
PRES HGHT TEMP DWPT RELH MIXR DRCT SKNT THTA THTE THTV
hPa m C C % g/kg deg knot K K K
1000.0 270
991.0 345 -0.3 -2.8 83 3.15 0 0 273.6 282.3 274.1
984.0 403 10.2 -7.8 27 2.17 327 4 284.7 291.1 285.0
963.0 581 11.8 -9.2 22 1.99 226 17 288.0 294.1 288.4
959.7 610 11.6 -9.4 22 1.96 210 19 288.1 294.1 288.5
```
Data are available to download from the University of Wyoming archive, the Iowa State archive, and the Integrated Global Radiosonde Archive (IGRA). There is no need to download data manually. We can use the siphon library (also developed at Unidata) to request and download these data. Be sure to check out the documentation on all of siphon's capabilities.
Getting our data
First, we need to create a datetime object that has the time of observation we are looking for. We can then request the data for a specific station. Note that if you provide an invalid time or station where no sounding data are present you will receive an error.
End of explanation
"""
df.units
"""
Explanation: We got a Pandas dataframe back, which is great. Sadly, Pandas does not play well with units, so we need to attach units and make some other kind of data structure. We've provided a helper function for this - it takes the dataframe with our special .units attribute and returns a dictionary where the keys are column (data series) names and the values are united arrays. This means we can still use the dictionary access syntax and mostly forget that it is not a data frame any longer.
First, let's look at the special attribute siphon added:
End of explanation
"""
from metpy.units import pandas_dataframe_to_unit_arrays, units
sounding = pandas_dataframe_to_unit_arrays(df)
sounding
"""
Explanation: Now let's import the helper and the units registry from MetPy and get units attached.
End of explanation
"""
import matplotlib.pyplot as plt
from metpy.plots import SkewT
%matplotlib inline
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(10, 10))
skew = SkewT(fig)
# Plot the data using normal plotting functions, all of the transforms
# happen in the background!
skew.plot(sounding['pressure'], sounding['temperature'], color='tab:red')
skew.ax.set_ylim(1050,100)
skew.ax.set_xlim(-50,20)
# Redisplay the figure
fig
# Plot an isotherm using axvline (axis vertical line)
skew.ax.axvline([0] * units.degC, color='cyan', linestyle='--')
# Redisplay the figure
fig
"""
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="makeskewt"></a>
Make a Skew-T
Now that we have data, we can actually start making our Skew-T Log-P diagram. This consists of:
Import matplotlib
Importing the SkewT object
Creating a figure
Creating a SkewT object based upon that figure
Plotting our data
End of explanation
"""
# Import the Wyoming simple web service upper air object
# YOUR CODE GOES HERE
# Create the datetime and station variables you'll need
# YOUR CODE GOES HERE
# Make the request for the data
# YOUR CODE GOES HERE
# Attach units to the data
# YOUR CODE GOES HERE
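# One possible sketch, kept commented out so you can try the exercise yourself
# first (it mirrors the calls used earlier in this notebook; any station/time
# with archived data will work):
# from siphon.simplewebservice.wyoming import WyomingUpperAir
# my_time = datetime(2011, 4, 14, 18)
# my_station = 'OUN'
# my_df = WyomingUpperAir.request_data(my_time, my_station)
# my_sounding = pandas_dataframe_to_unit_arrays(my_df)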
"""
Explanation: Exercise
Part 1
Download your own data using the Wyoming upper-air archive. Have a look at the documentation to help get started.
Attach units using the unit helper.
End of explanation
"""
# %load solutions/skewt_get_data.py
df
"""
Explanation: Solution
End of explanation
"""
# Make a figure
# Make a SkewT object
# Plot the temperature and dewpoint
"""
Explanation: Part 2
Make a figure and SkewT object.
Plot the temperature and dewpoint in red and green lines.
Set the axis limits to sensible limits with set_xlim and set_ylim.
End of explanation
"""
# %load solutions/skewt_make_figure.py
"""
Explanation: Solution
End of explanation
"""
# Plot wind barbs
# Add dry adiabats
# Add moist adiabats
# Add mixing ratio lines
# Redisplay figure
"""
Explanation: Part 3
Plot wind barbs using the plot_barbs method of the SkewT object.
Add the fiducial lines for dry adiabats, moist adiabats, and mixing ratio lines using the plot_dry_adiabats(), plot_moist_adiabats(), plot_mixing_lines() functions.
End of explanation
"""
# %load solutions/skewt_wind_fiducials.py
"""
Explanation: Solution
End of explanation
"""
# Grab data for our original case and make a basic figure for us to keep working with.
df = WyomingUpperAir.request_data(datetime(1999, 5, 3, 12), 'OUN')
sounding = pandas_dataframe_to_unit_arrays(df)
# Create a new figure and SkewT object
fig = plt.figure(figsize=(10, 10))
skew = SkewT(fig)
skew.plot(sounding['pressure'], sounding['temperature'], color='tab:red')
skew.plot(sounding['pressure'], sounding['dewpoint'], color='tab:blue')
skew.ax.set_xlim(-60, 30)
skew.ax.set_ylim(1000, 100)
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
import metpy.calc as mpcalc
lcl_pressure, lcl_temperature = mpcalc.lcl(sounding['pressure'][0],
sounding['temperature'][0],
sounding['dewpoint'][0])
print(lcl_pressure, lcl_temperature)
"""
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="thermo"></a>
Thermodynamics
Using MetPy's calculation functions we can calculate thermodynamic parameters like LCL, LFC, EL, CAPE, and CIN. Let's start off with the LCL.
End of explanation
"""
skew.ax.plot(lcl_temperature, lcl_pressure, marker="_", color='k', markersize=30, markeredgewidth=3)
fig
"""
Explanation: We can plot this as a point on our sounding using the plot method with a marker.
End of explanation
"""
sounding['profile'] = mpcalc.parcel_profile(sounding['pressure'], sounding['temperature'][0], sounding['dewpoint'][0])
print(sounding['profile'])
# Plot the profile
skew.plot(sounding['pressure'], sounding['profile'], color='black')
# Redisplay the figure
fig
"""
Explanation: We can also calculate the ideal parcel profile and plot it.
End of explanation
"""
# Get data for the sounding
df = WyomingUpperAir.request_data(datetime(1999, 5, 3, 12), 'OUN')
# Calculate the ideal surface parcel path
sounding['profile'] = mpcalc.parcel_profile(sounding['pressure'],
sounding['temperature'][0],
sounding['dewpoint'][0]).to('degC')
# Calculate the LCL
lcl_pressure, lcl_temperature = mpcalc.lcl(sounding['pressure'][0],
sounding['temperature'][0],
sounding['dewpoint'][0])
# Calculate the LFC
# YOUR CODE GOES HERE
# Calculate the EL
# YOUR CODE GOES HERE
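# One possible sketch, kept commented out (the same mpcalc.lfc/mpcalc.el calls
# appear in the "Advanced Layout" section near the end of this notebook):
# lfc_pressure, lfc_temperature = mpcalc.lfc(sounding['pressure'],
#                                            sounding['temperature'],
#                                            sounding['dewpoint'])
# el_pressure, el_temperature = mpcalc.el(sounding['pressure'],
#                                         sounding['temperature'],
#                                         sounding['dewpoint'])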
# Create a new figure and SkewT object
fig = plt.figure(figsize=(10, 10))
skew = SkewT(fig)
# Plot the profile and data
skew.plot(sounding['pressure'], sounding['profile'], color='black')
skew.plot(sounding['pressure'], sounding['temperature'], color='tab:red')
skew.plot(sounding['pressure'], sounding['dewpoint'], color='tab:blue')
# Plot the LCL, LFC, and EL as horizontal lines
# YOUR CODE GOES HERE
# Set axis limits
skew.ax.set_xlim(-60, 30)
skew.ax.set_ylim(1000, 100)
# Add fiducial lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
"""
Explanation: Exercise
Part 1
Calculate the LFC and EL for the sounding.
Plot them as horizontal line markers (see how we did it above for the LCL).
End of explanation
"""
# %load solutions/skewt_thermo.py
"""
Explanation: Solution
End of explanation
"""
# Calculate surface based cape/cin
# YOUR CODE GOES HERE
# Print CAPE and CIN
# YOUR CODE GOES HERE
# Shade CAPE
# YOUR CODE GOES HERE
# Shade CIN
# YOUR CODE GOES HERE
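# One possible sketch, kept commented out (shade_cape/shade_cin are used the
# same way in the "Advanced Layout" section; the CAPE/CIN call is an assumed
# usage of mpcalc.surface_based_cape_cin with surface-based inputs):
# sbcape, sbcin = mpcalc.surface_based_cape_cin(sounding['pressure'],
#                                               sounding['temperature'],
#                                               sounding['dewpoint'])
# print(sbcape, sbcin)
# skew.shade_cape(sounding['pressure'], sounding['temperature'], sounding['profile'])
# skew.shade_cin(sounding['pressure'], sounding['temperature'], sounding['profile'])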
# Redisplay the figure
fig
"""
Explanation: Bonus
Use the function surface_based_cape_cin in the MetPy calculations module to calculate the CAPE and CIN of this sounding. Print out the values
Using the methods shade_cape and shade_cin on the SkewT object, shade the areas representing CAPE and CIN.
End of explanation
"""
# %load solutions/skewt_cape_cin.py
"""
Explanation: Solution
End of explanation
"""
# Import the hodograph class
from metpy.plots import Hodograph
# Make a figure and axis
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
# Create a hodograph
h = Hodograph(ax, component_range=60.)
# Add "range rings" to the plot
h.add_grid(increment=20)
# Plot the wind data
h.plot(sounding['u_wind'], sounding['v_wind'], color='tab:red')
"""
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="hodograph"></a>
Plotting a Hodograph
Hodographs are a great way to look at wind shear - they are created by drawing wind vectors, all starting at the origin of a plot, and then connecting the vector tips. They are often thought of as a polar plot where the range rings (lines of constant radius) represent speed and the angle represents the compass angle of the wind.
In MetPy we can create a hodograph in a similar way to a skew-T - we create a hodograph object and attach it to an axes.
End of explanation
"""
# Add vectors
h.wind_vectors(sounding['u_wind'], sounding['v_wind'])
# Redisplay figure
fig
"""
Explanation: We can even add wind vectors, which is helpful for learning/teaching hodographs.
End of explanation
"""
(_, u_trimmed, v_trimmed,
speed_trimmed, height_trimmed) = mpcalc.get_layer(sounding['pressure'],
sounding['u_wind'],
sounding['v_wind'],
sounding['speed'],
sounding['height'],
heights=sounding['height'],
depth=10 * units.km)
"""
Explanation: This is great, but we generally don't care about wind shear for the entire sounding. Let's say we want to view it in the lowest 10km of the atmosphere. We can do this with the powerful, but complex get_layer function. Let's get a subset of the u-wind, v-wind, and windspeed.
End of explanation
"""
from metpy.plots import colortables
import numpy as np
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
h = Hodograph(ax, component_range=60.)
h.add_grid(increment=20)
norm, cmap = colortables.get_with_range('ir_rgbv', np.nanmin(speed_trimmed),
np.nanmax(speed_trimmed))
h.plot_colormapped(u_trimmed, v_trimmed, speed_trimmed,
cmap=cmap, norm=norm)
h.wind_vectors(u_trimmed[::3], v_trimmed[::3])
"""
Explanation: Let's make the same hodograph again, but we'll also color the line by the value of the windspeed and we'll use the trimmed data we just created.
End of explanation
"""
# Calculate the height above ground level (AGL)
# YOUR CODE GOES HERE
# Make an array of segment boundaries - don't forget units!
# YOUR CODE GOES HERE
# Make a list of colors for the segments
# YOUR CODE GOES HERE
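# One possible sketch, kept commented out (the same values are used in the
# "Advanced Layout" section below):
# agl = sounding['height'] - sounding['height'][0]
# intervals = np.array([0, 1, 3, 5, 8]) * units.km
# colors = ['tab:red', 'tab:green', 'tab:blue', 'tab:olive']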
"""
Explanation: Exercise
In this exercise you'll create a hodograph that is colored by a variable that is not displayed - height above ground level. We generally wouldn't want to color this in a continuous fashion, so we'll make a hodograph that is segmented by height.
Part 1
Make a variable to hold the height above ground level (subtract the surface height from the heights in the sounding).
Make a list of boundary values that we'll use to segment the hodograph from 0-1, 1-3, 3-5, and 5-8 km. (Hint: the array should have one more value than the number of segments desired.)
Make a list of colors for each segment.
End of explanation
"""
# %load solutions/hodograph_preprocessing.py
"""
Explanation: Solution
End of explanation
"""
# Create figure/axis
# YOUR CODE GOES HERE
# Create a hodograph object/fiducial lines
# YOUR CODE GOES HERE
# Plot the data
# YOUR CODE GOES HERE
# BONUS - add a colorbar
# YOUR CODE GOES HERE
"""
Explanation: Part 2
Make a new figure and hodograph object.
Using the bounds and colors keyword arguments to plot_colormapped create the segmented hodograph.
BONUS: Add a colorbar!
End of explanation
"""
# %load solutions/hodograph_segmented.py
"""
Explanation: Solution
End of explanation
"""
# Get the data we want
df = WyomingUpperAir.request_data(datetime(1998, 10, 4, 0), 'OUN')
sounding = pandas_dataframe_to_unit_arrays(df)
# Calculate thermodynamics
lcl_pressure, lcl_temperature = mpcalc.lcl(sounding['pressure'][0],
sounding['temperature'][0],
sounding['dewpoint'][0])
lfc_pressure, lfc_temperature = mpcalc.lfc(sounding['pressure'],
sounding['temperature'],
sounding['dewpoint'])
el_pressure, el_temperature = mpcalc.el(sounding['pressure'],
sounding['temperature'],
sounding['dewpoint'])
parcel_profile = mpcalc.parcel_profile(sounding['pressure'],
sounding['temperature'][0],
sounding['dewpoint'][0])
# Some new imports
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from metpy.plots import add_metpy_logo
# Make the plot
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 630, 80, size='large')
# Grid for plots
gs = gridspec.GridSpec(3, 3)
skew = SkewT(fig, rotation=45, subplot=gs[:, :2])
# Plot the sounding using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(sounding['pressure'], sounding['temperature'], 'tab:red')
skew.plot(sounding['pressure'], sounding['dewpoint'], 'tab:green')
skew.plot(sounding['pressure'], parcel_profile, 'k')
# Mask barbs to be below 100 hPa only
mask = sounding['pressure'] >= 100 * units.hPa
skew.plot_barbs(sounding['pressure'][mask], sounding['u_wind'][mask], sounding['v_wind'][mask])
skew.ax.set_ylim(1000, 100)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Shade areas
skew.shade_cin(sounding['pressure'], sounding['temperature'], parcel_profile)
skew.shade_cape(sounding['pressure'], sounding['temperature'], parcel_profile)
# Good bounds for aspect ratio
skew.ax.set_xlim(-30, 40)
if lcl_pressure:
skew.ax.plot(lcl_temperature, lcl_pressure, marker="_", color='black', markersize=30, markeredgewidth=3)
if lfc_pressure:
skew.ax.plot(lfc_temperature, lfc_pressure, marker="_", color='brown', markersize=30, markeredgewidth=3)
if el_pressure:
skew.ax.plot(el_temperature, el_pressure, marker="_", color='blue', markersize=30, markeredgewidth=3)
# Create a hodograph
agl = sounding['height'] - sounding['height'][0]
mask = agl <= 10 * units.km
intervals = np.array([0, 1, 3, 5, 8]) * units.km
colors = ['tab:red', 'tab:green', 'tab:blue', 'tab:olive']
ax = fig.add_subplot(gs[0, -1])
h = Hodograph(ax, component_range=30.)
h.add_grid(increment=10)
h.plot_colormapped(sounding['u_wind'][mask], sounding['v_wind'][mask], agl[mask], bounds=intervals, colors=colors)
"""
Explanation: <a href="#pagetop">Top</a>
<hr style="height:2px;">
<a name="advanced"></a>
Advanced Layout
This section is meant to show you some fancy matplotlib to make nice Skew-T/Hodograph combinations. It's a good starting place to make your custom plot for your needs.
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tutorials/text/text_classification_rnn.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
!pip install tf-nightly
import tensorflow_datasets as tfds
import tensorflow as tf
"""
Explanation: Text classification with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/text_classification_rnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/text/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/text/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/text/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: These documents are translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this translation is accurate or reflects the latest state of the official English documentation. To help improve the quality of this translation, please send a pull request to the tensorflow/docs GitHub repository. To volunteer to review community translations, please contact the docs-ja@tensorflow.org mailing list.
This text classification tutorial trains a recurrent neural network on the IMDB large movie review dataset for sentiment analysis.
Setup
End of explanation
"""
import matplotlib.pyplot as plt
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])
plt.show()
"""
Explanation: Import matplotlib and create a helper function to plot graphs.
End of explanation
"""
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_examples, test_examples = dataset['train'], dataset['test']
"""
Explanation: Set up the input pipeline
The IMDB large movie review dataset is a binary classification dataset; every review carries either a positive or a negative sentiment.
Download the dataset using TFDS.
End of explanation
"""
encoder = info.features['text'].encoder
print('Vocabulary size: {}'.format(encoder.vocab_size))
"""
Explanation: The dataset info includes the encoder (a tfds.features.text.SubwordTextEncoder).
End of explanation
"""
sample_string = 'Hello TensorFlow.'
encoded_string = encoder.encode(sample_string)
print('Encoded string is {}'.format(encoded_string))
original_string = encoder.decode(encoded_string)
print('The original string: "{}"'.format(original_string))
assert original_string == sample_string
for index in encoded_string:
print('{} ----> {}'.format(index, encoder.decode([index])))
"""
Explanation: This text encoder will reversibly encode any string, falling back to byte-encoding if necessary.
End of explanation
"""
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = (train_examples
.shuffle(BUFFER_SIZE)
.padded_batch(BATCH_SIZE, padded_shapes=([None],[])))
test_dataset = (test_examples
.padded_batch(BATCH_SIZE, padded_shapes=([None],[])))
"""
Explanation: Prepare the data for training
Next, create batches of these encoded strings. Use the padded_batch method to zero-pad the sequences to the length of the longest string in the batch.
End of explanation
"""
train_dataset = (train_examples
.shuffle(BUFFER_SIZE)
.padded_batch(BATCH_SIZE))
test_dataset = (test_examples
.padded_batch(BATCH_SIZE))
"""
Explanation: Note: As of TensorFlow 2.2, the padded_shapes argument is no longer required. By default, all axes are padded to the longest length in the batch.
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1)
])
"""
Explanation: Create the model
Build a tf.keras.Sequential model, starting with an Embedding layer. An embedding layer stores one vector per word. When called, it converts the sequences of word indices into sequences of vectors. These vectors are trainable, and after training (on enough data) words with similar meanings often end up with similar vectors.
This index lookup is much more efficient than the equivalent operation of passing a one-hot encoded vector through a tf.keras.layers.Dense layer.
A recurrent neural network (RNN) processes sequence input by iterating through the elements one at a time. RNNs pass the output from one timestep on to their input at the next timestep.
The tf.keras.layers.Bidirectional wrapper can also be used with an RNN layer. It propagates the input forward and backward through the RNN layer and then concatenates the outputs, which helps the RNN learn long-range dependencies.
End of explanation
"""
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
"""
Explanation: Compile the Keras model to configure the training process.
End of explanation
"""
history = model.fit(train_dataset, epochs=10,
validation_data=test_dataset,
validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
"""
Explanation: Train the model
End of explanation
"""
def pad_to_size(vec, size):
zeros = [0] * (size - len(vec))
vec.extend(zeros)
return vec
def sample_predict(sample_pred_text, pad):
encoded_sample_pred_text = encoder.encode(sample_pred_text)
if pad:
encoded_sample_pred_text = pad_to_size(encoded_sample_pred_text, 64)
encoded_sample_pred_text = tf.cast(encoded_sample_pred_text, tf.float32)
predictions = model.predict(tf.expand_dims(encoded_sample_pred_text, 0))
return (predictions)
# Predict on a sample text without padding
sample_pred_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=False)
print(predictions)
# Predict on a sample text with padding
sample_pred_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=True)
print(predictions)
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
"""
Explanation: The model above does not mask the padding applied to the sequences. This can skew the results if you train on padded sequences and test on un-padded sequences; ideally you would use masking to avoid this, but as you can see above it only has a small effect on the output.
If the prediction is 0.5 or higher, it is positive; otherwise it is negative.
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
history = model.fit(train_dataset, epochs=10,
validation_data=test_dataset,
validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
# Predict on a sample text without padding
sample_pred_text = ('The movie was not good. The animation and the graphics '
'were terrible. I would not recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=False)
print(predictions)
# Predict on a sample text with padding
sample_pred_text = ('The movie was not good. The animation and the graphics '
'were terrible. I would not recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=True)
print(predictions)
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
"""
Explanation: Stack two or more LSTM layers
Keras recurrent layers have two available modes, controlled by the return_sequences constructor argument:
Return the full sequence of successive outputs for each timestep (a 3D tensor of shape (batch_size, timesteps, output_features)).
Return only the last output for each input sequence (a 2D tensor of shape (batch_size, output_features)).
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/thu/cmip6/models/sandbox-2/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
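# For example (hypothetical placeholder values only):
# DOC.set_author("Jane Doe", "jane.doe@example.org")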
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
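# Example (hypothetical value — booleans are passed unquoted):
#   DOC.set_value(True)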
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
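# Example (hypothetical value — integers are passed unquoted, in the units the
# property expects, e.g. seconds):
#   DOC.set_value(1800)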
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general terms how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
csdms/coupling | docs/demos/cem.ipynb | mit | %matplotlib inline
"""
Explanation: <img src="../_static/pymt-logo-header-text.png">
Coastline Evolution Model
Link to this notebook: https://github.com/csdms/pymt/blob/master/docs/demos/cem.ipynb
Install command: $ conda install notebook pymt_cem
Download local copy of notebook:
$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/docs/demos/cem.ipynb
This example explores how to use a BMI implementation using the CEM model as an example.
Links
CEM source code: Look at the files that have deltas in their name.
CEM description on CSDMS: Detailed information on the CEM model.
Interacting with the Coastline Evolution Model BMI using Python
Some magic that allows us to view images within the notebook.
End of explanation
"""
import pymt.models
cem = pymt.models.Cem()
"""
Explanation: Import the Cem class, and instantiate it. In Python, a model with a BMI will have no arguments for its constructor. Note that although the class has been instantiated, it's not yet ready to be run. We'll get to that later!
End of explanation
"""
cem.output_var_names
cem.input_var_names
"""
Explanation: Even though we can't run our waves model yet, we can still get some information about it. Just don't try to run it. For example, we can get the names of the model's output and input variables.
End of explanation
"""
angle_name = 'sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity'
print("Data type: %s" % cem.get_var_type(angle_name))
print("Units: %s" % cem.get_var_units(angle_name))
print("Grid id: %d" % cem.get_var_grid(angle_name))
print("Number of elements in grid: %d" % cem.get_grid_number_of_nodes(0))
print("Type of grid: %s" % cem.get_grid_type(0))
"""
Explanation: We can also get information about specific variables. Here we'll look at some info about wave direction. This is the main input of the Cem model. Notice that BMI components always use CSDMS standard names. The CSDMS Standard Name for wave angle is,
"sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity"
Quite a mouthful, I know. With that name we can get information about that variable and the grid that it is on (though, as we'll see, not every variable actually lives on a grid).
End of explanation
"""
args = cem.setup(number_of_rows=100, number_of_cols=200, grid_spacing=200.)
cem.initialize(*args)
"""
Explanation: OK. We're finally ready to run the model. Well, not quite. First we initialize the model with the BMI initialize method. Normally we would pass it the name of an input file; here we instead call setup to generate a default configuration (a 100 x 200 grid with 200 m spacing) and pass the result to initialize.
End of explanation
"""
import numpy as np
cem.set_value("sea_surface_water_wave__height", 2.)
cem.set_value("sea_surface_water_wave__period", 7.)
cem.set_value("sea_surface_water_wave__azimuth_angle_of_opposite_of_phase_velocity", 0. * np.pi / 180.)
"""
Explanation: Before running the model, let's set a few input parameters. These represent the height, period, and incoming angle of the waves approaching the coastline.
End of explanation
"""
grid_id = cem.get_var_grid('sea_water__depth')
"""
Explanation: The main output variable for this model is water depth. In this case, the CSDMS Standard Name is much shorter:
"sea_water__depth"
First we find out which of Cem's grids contains water depth.
End of explanation
"""
grid_type = cem.get_grid_type(grid_id)
grid_rank = cem.get_grid_ndim(grid_id)
print('Type of grid: %s (%dD)' % (grid_type, grid_rank))
"""
Explanation: With the grid_id, we can now get information about the grid. For instance, the number of dimension and the type of grid (structured, unstructured, etc.). This grid happens to be uniform rectilinear. If you were to look at the "grid" types for wave height and period, you would see that they aren't on grids at all but instead are scalars.
End of explanation
"""
spacing = np.empty((grid_rank, ), dtype=float)
shape = cem.get_grid_shape(grid_id)
cem.get_grid_spacing(grid_id, out=spacing)
print('The grid has %d rows and %d columns' % (shape[0], shape[1]))
print('The spacing between rows is %f and between columns is %f' % (spacing[0], spacing[1]))
"""
Explanation: Because this grid is uniform rectilinear, it is described by a set of BMI methods that are only available for grids of this type. These methods include:
* get_grid_shape
* get_grid_spacing
* get_grid_origin
End of explanation
"""
z = np.empty(shape, dtype=float)
cem.get_value('sea_water__depth', out=z)
"""
Explanation: Allocate memory for the water depth grid and get the current values from cem.
End of explanation
"""
def plot_coast(spacing, z):
import matplotlib.pyplot as plt
xmin, xmax = 0., z.shape[1] * spacing[1] * 1e-3  # spacing is (row, column) spacing, i.e. (dy, dx)
ymin, ymax = 0., z.shape[0] * spacing[0] * 1e-3
plt.imshow(z, extent=[xmin, xmax, ymin, ymax], origin='lower', cmap='ocean')
plt.colorbar().ax.set_ylabel('Water Depth (m)')
plt.xlabel('Along shore (km)')
plt.ylabel('Cross shore (km)')
"""
Explanation: Here I define a convenience function for plotting the water depth and making it look pretty. You don't need to worry too much about its internals for this tutorial. It just saves us some typing later on.
End of explanation
"""
plot_coast(spacing, z)
"""
Explanation: It generates plots that look like this. We begin with a flat delta (green) and a linear coastline (y = 3 km). The bathymetry drops off linearly to the top of the domain.
End of explanation
"""
qs = np.zeros_like(z)
qs[0, 100] = 1250
"""
Explanation: Right now we have waves coming in but no sediment entering the ocean. To add some discharge, we need to figure out where to put it. For now we'll put it on a cell that's next to the ocean.
Allocate memory for the sediment discharge array and set the discharge at the coastal cell to some value.
End of explanation
"""
cem.get_var_units('land_surface_water_sediment~bedload__mass_flow_rate')
cem.time_step, cem.time_units, cem.time
"""
Explanation: The CSDMS Standard Name for this variable is:
"land_surface_water_sediment~bedload__mass_flow_rate"
You can get an idea of the units based on the quantity part of the name. "mass_flow_rate" indicates mass per time. You can double-check this with the BMI method function get_var_units.
End of explanation
"""
for time in range(3000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
cem.time
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
val = np.empty((5, ), dtype=float)
cem.get_value("basin_outlet~coastal_center__x_coordinate", val)
val / 100.
"""
Explanation: Set the bedload flux and run the model.
End of explanation
"""
qs[0, 150] = 1500
for time in range(3750):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
"""
Explanation: Let's add another sediment source with a different flux and update the model.
End of explanation
"""
qs.fill(0.)
for time in range(4000):
cem.set_value('land_surface_water_sediment~bedload__mass_flow_rate', qs)
cem.update_until(time)
cem.get_value('sea_water__depth', out=z)
plot_coast(spacing, z)
"""
Explanation: Here we shut off the sediment supply completely.
End of explanation
"""
|
mathinmse/mathinmse.github.io | Lecture-14-Ordinary-Differential-Equations.ipynb | mit | %matplotlib notebook
import sympy as sp
# can also run quietly using:
#sp.init_session(quiet=True)
# set up some common symbols and report back to the user.
sp.init_session()
"""
Explanation: Lecture 14: Solutions to Ordinary Differential Equations and Viscoelasticity
Background
What are differential equations? Grant Sanderson from 3Blue1Brown gives us this conceptual description on the origin of differential equations, "Differential equations arise whenever it is easier to describe change than absolute amounts." The analysis of mechanics of moving bodies, vibrating strings, and heat flow were some of the areas where differential equations were first used. In this lecture, viscoelasticity will be the topic used to demonstrate the solution to differential equations. The invention of Calculus and use of the deriviative and integral naturally lead to differential equations.
For us, an equation that relates an independent variable (such as time) to some quantities and the derivatives of those quantities with respect to the independent variable is an ordinary differental equation (ODE). Using x as the independent variable, a general ordinary differential equation looks like this (see Wikipedia page on ODEs):
$$
y^{(n)}=\sum_{i=0}^{n-1}a_{i}(x)y^{(i)}+r(x)
$$
where the $y^{(i)}$ denote derivatives with respect to the independent variable $x$, and the coefficients $a_i$ may themselves be functions of $x$.
In physics, you have probably solved this differential equation:
$$
m{\frac {\mathrm {d} ^{2}x(t)}{\mathrm {d} t^{2}}}=F(x(t))
$$
where $t$ is now the independent variable. This is $F = ma$ restated as a differential equation, in which the acceleration is expressed as the second derivative of the position $x(t)$.
What skills will I learn?
You will identify differential equations and practice solving them using Sympy and numerical methods.
Solutions to standard and more general viscoelastic problems will be developed.
What steps should I take?
Review the solutions to ODEs from physics and electronics.
Practice using sympy and python tools to solve these equations in an organized way.
Review viscoelasticity concepts.
Apply the techniques of ODE solutions to viscoelastic problems. Review the Maxwell model and solve the Kelvin model for yourself.
Examine the solutions to two important ODEs that we will use when solving Fick's law.
Reading and Reference
3Blue1Brown's playlist on differential equations: https://www.youtube.com/playlist?list=PLZHQObOWTQDNPOjrT6KVlfJuKtYTftqH6
https://en.wikipedia.org/wiki/Ordinary_differential_equation
https://en.wikipedia.org/wiki/Differential_equation
Part 1: Review of Physics and Electronics ODEs
Example using DC Circuits
A simple DC circuit includes a switch (initially open), an inductor (L=2) and a resistor (R=6) in series. The DC voltage is 100V. What is the current in the circuit after the switch is closed? We know that:
$$ E_L + E_R = 100V $$
and that
$$ E_L = L \frac{dI(t)}{dt} $$
and
$$ E_R = RI(t) $$
(You will see a strong correlation between these problems and viscoelastic problems later in this lecture.) Typically, one would solve this problem by separation of variables. In sympy there is a function called dsolve() that can help us with this. Let us import sympy into the current namespace in a way that helps us keep track of what functions are coming from sympy:
End of explanation
"""
# These will be functions of time, thus the classification.
current, voltage = sp.symbols('current voltage', cls=Function)
"""
Explanation: We create some additional symbols that are a bit easier to remember in this problem
End of explanation
"""
# This creates a relation between two objects.
?Eq
equationToSolve = Eq(2*current(t).diff(t) + 9*current(t), 100)
equationToSolve
"""
Explanation: We assign to the variable equationToSolve the equation we are going to pass to dsolve(). At this point I have to say that Mathematica is really way more advanced than sympy. If you find yourself really doing some heavy computing you may want to default to numerical methods if you are desperate to stay with Python or consider switching to Mathematica to do symbolic calculations. I'm really caught between two worlds on this one.
End of explanation
"""
solutionToEquation = dsolve(equationToSolve, current(t))
solutionToEquation
"""
Explanation: Here we solve the equation.
End of explanation
"""
var('C1 C2')
"""
Explanation: Unfortunately, when we use dsolve() the integration constants are not automatically created for us.
End of explanation
"""
particularSolution = solutionToEquation.subs([(current(t),0),(t,0)])
particularSolution
solutionSet = solveset(particularSolution,C1)
solutionList = [a for a in solutionSet]
solutionSet
"""
Explanation: We substitute the initial condition and then solve for C1. Afterwards we substitute C1 back into the general solution and plot the result.
End of explanation
"""
# substitute the constant we just solved for (the single element of solutionSet)
solutionToEquation.subs(C1, solutionList[0])
plot(solutionToEquation.subs(C1, solutionList[0]).rhs, (t, 0, 2));
"""
Explanation: solveset returns a set object. When the set is a FiniteSet we can convert to a list and de-reference the list as I've done here. Not sure this is good practice, in general. So - I would only do this in an interactive session when I can inspect the return value. I would not write library code like this.
End of explanation
"""
L, R, t, V0 = symbols('L R t V0', positive=True)
A, V = symbols('A V', cls=Function)
rLCircuit = A(t).diff(t) + (R/L)*A(t) - V(t)
rLCircuit
solution = dsolve(rLCircuit, A(t))
solution
particularSolution = dsolve(rLCircuit, A(t)).subs([(A(t),0),(t,0),(V(t),V0)]).doit()
particularSolution
const1 = -L*V0/R
simplify(solution.subs([(C1,const1),(V(t),V0)]).doit())
"""
Explanation: This time we solve the same kind of RL circuit symbolically, keeping $R$, $L$, and $V_0$ as symbols instead of plugging in numerical values first.
End of explanation
"""
#from sympy import *
#init_session()
mass, g, b, t = symbols('mass g b t', real=True)
v = symbols('v', cls=Function)
paraChute = mass*v(t).diff(t)-mass*g+b*(v(t))**2
paraChute
solution = dsolve(paraChute,v(t))
solution
particularSolution = solution.subs([(v(t),0),(t,0)])
particularSolution
C1 = symbols('C1')
const = solve(particularSolution, C1)
const
"""
Explanation: A Second Example First Order ODE: Skydiving
This example is taken from Arfken's book, Example problem 7.2.1. The model describes a falling skydiver under a parachute.
$$ m \dot{v} = m g - b v^2 $$
where v is $v(t)$, a function of time. Later in the problem we will provide values for $m$, $g$, and $b$ to find a particular solution.
End of explanation
"""
1/tanh(t - I*pi/2).expand(complex=True)
1/tanh(t - I*pi/2).expand(complex=True).simplify()
final_solution = (solution.subs(C1,const[1]).rhs).expand().simplify()
final_solution
"""
Explanation: We need to think a bit about our identities.
End of explanation
"""
#%matplotlib notebook
plot(final_solution.subs([(g,9.8),(mass,70000),(b,700000)]),(t,0,2))
"""
Explanation: Now we can substitute and plot the solution.
End of explanation
"""
#%matplotlib notebook
from scipy.integrate import odeint # for integrate.odeint
from pylab import * # for plotting commands
def deriv(velocity,time):
mass = 70000.
drag = 700000.
gravity = 9.8
dvdt = gravity-(drag/mass)*velocity**2
return array(dvdt)
time = linspace(0.0,1.0,1000)
yinit = array([0.0]) # initial values
y = odeint(deriv,yinit,time)
figure()
plot(time, y[:,0])
xlabel('t')
ylabel('y')
show()
"""
Explanation: Numpy and SciPy give us the ability to do this work numerically, too.
End of explanation
"""
# your dsolve code goes here!
# it is good practice to put your solution back into the differential equation to check the results.
# your plotting code goes here!
"""
Explanation: DIY: Nonseparable Exact ODE
Solve the following differential equation and plot the family of solutions for different constants:
$$
\frac{d f(x)}{dx} + \frac{1 + f(x)}{x} = 0
$$
End of explanation
"""
#%matplotlib notebook
#from sympy import *
#init_printing()
L, R, t = symbols('L R t')
A, V = symbols('A V', cls=Function)
rLCircuit = L*A(t).diff(t) + R*A(t) - V(t)
rLCircuit
sp.symbols('V0')
rLCircuit_constantV = rLCircuit.subs(V(t),V0)
rLCircuit_constantV
solution = dsolve(rLCircuit_constantV, A(t))
solution
solution_with_ics = solution.subs([(A(t),0),(t,0)])
solution_with_ics
sp.symbols('C1')
const = sp.solve(solution_with_ics.rhs,C1)
const
particularSolution = (solution.rhs).subs(C1, ((L/R)*sp.log(-V0))).simplify()
particularSolution
sp.plot(particularSolution.subs([(R,20),(V0,10),(L,2)]),(t,0,10));
"""
Explanation: RL Circuit with Forcing
Solve Example 7.2.5 from Arfken. Assume that $I(t=0) = 0$. Find the solution first, then allow $V(t) = V0$ (a constant). While we won't do it here, you could let $V(t)$ be a function like $V0 \sin(\omega t)$.
End of explanation
"""
import sympy as sp
sp.init_session()
sp.init_printing()
f = Function('f')
modelToSolve = Eq(Derivative(f(x), x, x) + 9*f(x),0)
modelToSolve
generalSolution = dsolve(modelToSolve, f(x))
generalSolution
"""
Explanation: Postscript: Using dsolve in an Organized Way
The dsolve documentation can be found here and covers the types of equations that can be solved currently. In a simple case the function dsolve is called/used by supplying the differental equation of interest and the function for which the solution is desired. The results are then used in subsequent calculations to determine the constants in the problem. We will use the examples below to reinforce this simple view of differential equation solving.
My recommendation for developing your solutions is as follows:
express the equation/model you are interested in solving in the most general format
substitute any known functional relationships that can eliminate unknown terms or derivatives
call dsolve to find the general solution for your model
substitute initial conditions into the results to determine the constants
substitute the constants into the general solution to get your particular solution
functionalize the results and visualize the results in a plot (if appropriate)
use interact to permit exploration of free parameters and continue to develop an intuitive understanding of your model
End of explanation
"""
dsolve(sp.diff(f(x),x,2) + 9*f(x), f(x))
"""
Explanation: There are other ways to specify the derivative.
End of explanation
"""
%matplotlib notebook
from ipywidgets import interact, fixed
import sympy as sp
sp.init_session()
sp.var('omega sigma epsilon eta M sigma0 C1 C2')
sp.var('sigma epsilon', cls=Function)
"""
Explanation: Part 2: Review of Viscoelasticity
The spring obeys Hooke's law:
$$\sigma=M\epsilon $$
where $\sigma$ is the stress, $M$ is the modulus of elasticity in the linear regime, and $\epsilon$ is the elastic strain.
The dashpot obeys:
$$\sigma = \eta \frac{d \epsilon}{dt}$$
where $\sigma$, and $\epsilon$ are defined as above and $\eta$ is related to the viscous response.
Some materials exhibit time dependent behavior under loading. In polymers this behavior originates from chain elasticity due to coiling and uncoiling and non-reversible motion of the chains relative to one another. The elastic and viscous behaviors are typically modeled using the spring and dashpot.
In this lecture we will discuss the Maxwell model and the Kelvin model of viscoelastic materials. Combinations of springs and dashpots can provide insight into materials behavior. Combining many such Maxwell and Kelvin models can give insight into the spectrum of dynamic mechanical behavior seen in polymer systems.
Maxwell Model
Kelvin Model
To give you some concept for what we will discuss, this section covers a short derivation for the spring and dashpot in series. Our first assumption is that for a spring and a dashpot in series that the stress is equally carried by each member in the load train and that the strain is a sum of strains in each element. This is stated as:
$$ \epsilon_{total} = \epsilon_s + \epsilon_d $$
Taking the time derivative for each term produces:
$$ \frac{d\epsilon_{\mathrm{total}}}{dt} = \frac{d\epsilon_s}{dt} + \frac{d\epsilon_d}{dt} $$
We assume that the spring response is instantaneous so that we can write:
$$\frac{d\sigma}{dt} = M\frac{d\epsilon}{dt} $$
Using the condition that the stresses are the same reduces the number of variables by one. Using the condition that the strains add up to the total strain results in:
$$ \frac{d \epsilon_{total}(t)}{dt} = \frac{1}{M} \frac{d \sigma(t)}{dt} + \frac{\sigma(t)}{\eta}$$
Don't forget that both stress and strain are functions of time.
Models of Viscoelastic Behavior: Maxwell Model - Constant Stress
The physical experiment is one where we apply a constant stress to the Maxwell model. The model responds by setting the stress in every element as constant and the strain is the sum of the individual strains. The thought experiment is as follows:
A stress is applied to the series (chain) of elements at rest
The spring responds by Hooke's law instantaneously
The dashpot is unresponsive at $t=0$
From $t>0$ the spring's strain is fixed but the dashpot begins to strain
There is no limit on how far the dashpot can extend
A good exercise is to sketch a diagram of these steps
We begin by setting up our environment. We import interact and fixed to help us plot and visualize the results of our computation.
End of explanation
"""
maxwellModel = Eq(epsilon(t).diff(t), sigma(t).diff(t)/M + sigma(t)/eta)
maxwellModel
"""
Explanation: The Maxwell model is defined as follows:
End of explanation
"""
generalSolution = dsolve(maxwellModel,epsilon(t))
generalSolution
constantStressSolution = dsolve(maxwellModel,epsilon(t)).subs(sigma(t),sigma0).doit()
constantStressSolution
"""
Explanation: First we will work out the solution for a constant applied stress, $\sigma_0$. We start with the general solution and make the substitution that $\sigma(t)$ is constant by changing the symbol from a function of t to $\sigma_0$.
End of explanation
"""
constantStressSolution.subs([(epsilon(t),sigma0/M),(t,0)])
solutionToPlot = constantStressSolution.subs(C1,sigma0/M).rhs
solutionToPlot
"""
Explanation: We know that at $t=0$ in this system, Hooke's law defines the strain at $t=0$.
End of explanation
"""
def plotSolution(eta0,M0):
plot(solutionToPlot.subs([(eta,eta0),(M,M0),(sigma0,100)]),
(t,0,5),
#ylim=(0,1000),
xlabel='time',
ylabel=r'$\epsilon(t)$'
)
interact(plotSolution,eta0=(1,100,1),M0=(1,100,1));
"""
Explanation: Now we can plot the solution with appropriate substitutions for the two parameters and one constant. We extract the RHS and use subs and ipywidgets to interactively plot the solution. Alternatively you could lambdify the solution and use numpy to develop the interactive plot (you might be more satisfied with the results).
End of explanation
"""
%matplotlib notebook
from ipywidgets import interact, fixed
import sympy as sp
sp.init_session()
sp.var('t, M, eta, epsilon0, sigma0, C1, C2')
sp.var('epsilon, sigma', cls=Function)
"""
Explanation: The major takeaway here is that the strain rises linearly with time.
Models of Viscoelastic Behavior: Maxwell Model - Constant Strain
Because the strain is constant, its time rate of change is zero and we can make this substitution right from the start.
End of explanation
"""
maxwellModel = sp.Eq(epsilon(t).diff(t), sigma(t).diff(t)/M + sigma(t)/eta)
maxwellModel
constantStrainModel = maxwellModel.subs(diff(epsilon(t),t),0)
constantStrainModel
solutionToConstantStrainModel1 = dsolve(constantStrainModel.rhs,sigma(t))
solutionToConstantStrainModel1
dsolve(maxwellModel,sigma(t)).subs(epsilon(t),0).doit()
solutionToConstantStrainModel2 = dsolve(maxwellModel,sigma(t))
solution = solutionToConstantStrainModel2.subs(epsilon(t),0).doit()
solution
solution.subs([(t,0),(sigma(0),sigma0)])
"""
Explanation: The Maxwell model is defined as:
End of explanation
"""
solution.subs([(C1,sigma0)])
"""
Explanation: Now we make the final substitutions and get our final equation:
End of explanation
"""
solutionToPlot = solution.subs([(C1,sigma0)]).rhs
solutionToPlot
"""
Explanation: What does this say? We apply an initial stress at time zero. The material strains to whatever is predicted by Hooke's law. And then, without changing the strain again through the experiment, the stress drops exponentially.
End of explanation
"""
from ipywidgets import *
def plotSolution(M0, eta0):
plot(solutionToPlot.subs([(M,M0),(eta,eta0),(sigma0,100)]),
(t,0,5),
ylim=(0,100),
xlabel='time',
ylabel=r'$\sigma(t)$'
)
interact(plotSolution,M0=(1,100,1),eta0=(1,100,1));
"""
Explanation: Functionalize and plot the results.
End of explanation
"""
%matplotlib notebook
from ipywidgets import interact, fixed
import sympy as sp
sp.init_session()
sp.var('t, M, eta, epsilon0, sigma0, C1, C2')
sp.var('epsilon, sigma', cls=Function)
kelvinModel = Eq(sigma(t),M*epsilon(t)+eta*epsilon(t).diff(t))
kelvinModel
"""
Explanation: DIY: Kelvin Model - Constant Stress
Following similar logic as above, except this time the strains are all taken to be the same and the stresses are additive.
End of explanation
"""
# Work on your solution here.
"""
Explanation: In this problem we will let stress be a constant for all time:
$$ \sigma(t) = \sigma_0 $$
End of explanation
"""
# Restart the kernel here.
%matplotlib notebook
import sympy as sp
import numpy as np
from ipywidgets import interact, fixed
sp.init_session()
sp.init_printing()
t = sp.symbols('t')
x = sp.symbols('x', cls=Function)
k, alpha = sp.symbols('k alpha', positive=True)
"""
Explanation: Preparation for Solving Fick's Law
In this section I pose two important differential equation forms that result from the decomposition of Fick's Law. As stated above, the steps to follow are:
express the equation/model you are interested in solving in the most general format
substitute any known functional relationships that can eliminate unknown terms or derivatives
call dsolve to find the general solution for your model
substitute initial conditions into the results to determine the constants
substitute the constants into the general solution to get your particular solution
functionalize the results and visualize the results in a plot (if appropriate)
use interact to permit exploration of free parameters and continue to develop an intuititve understanding of your model
I'll help you get started by writing down the first model equation to solve:
End of explanation
"""
firstModelToSolve = x(t).diff(t)/(x(t)*alpha) + k
functionWeAreLookingFor = x(t)
firstModelToSolve, functionWeAreLookingFor
hintList = sp.classify_ode(firstModelToSolve)
hintList
solutionList = [sp.dsolve(firstModelToSolve, functionWeAreLookingFor, hint=hint) for hint in hintList]
solutionList
solutionToFirstModel = sp.dsolve(firstModelToSolve, functionWeAreLookingFor, hint=hintList[2])
solutionToFirstModel
sp.var('C1'), solutionToFirstModel.subs([(C1,1)])
sp.plot(sp.exp(-2*t),(t,0,10));
"""
Explanation: Important ODE One
This is the first ODE of interest:
$$
\frac{d x(t)}{dt } \frac{1}{\alpha x(t)} + k = 0
$$
Solve this ODE with boundary conditions: $x(t) = 1$ at $t = 0$ and $x(t) = 0$ as $t \rightarrow \infty$.
End of explanation
"""
%matplotlib notebook
import sympy as sp
import numpy as np
from ipywidgets import interact, fixed
sp.init_session()
sp.init_printing()
t = sp.symbols('t')
x = sp.symbols('x', cls=Function)
k, alpha = sp.symbols('k alpha', positive=True)
secondModelToSolve = x(t).diff(t,2)/x(t) + k**2
sp.var('C1 C2')
solutionToSecondModel = sp.dsolve(secondModelToSolve).subs(C2,0)
solutionToSecondModel
"""
Explanation: Important ODE Two
This is the other ODE of interest:
$$
\frac{d^2 x(t)}{dt^2} \frac{1}{x(t)} + k^2 = 0
$$
Solve this ODE with boundary conditions:
$$
x(t) = 0
$$
at $t = 0$ and $t = 2\pi$
End of explanation
"""
|
facaiy/book_notes | Mining_of_Massive_Datasets/Advertising_on_the_Web/note.ipynb | cc0-1.0 | # exercises for section 8.1
"""
Explanation: 8 Advertising on the Web
"adwords" model, search
"collaborative filtering", suggestion
8.1 Issues in On-Line Advertising
8.1.1 Advertising Opportunities
Auto trading sites allow advertisters to post their ads directly on the website.
Display ads are placed on many Web sites.
On-line stores show ads in many contexts.
Search ads are placed among the results of a search query.
8.1.2 Direct Placement of Ads
Which ones:
in response to query terms.
ask the advertiser to specify parameters of the ad, and searchers can use the same menus of terms in their queries.
How to rank:
"most-recent first"
Abuse: post small variations of ads at frequent intervals. $\to$ Against: filter out similar ads.
try to measure the attractiveness of an ad.
several factors that must be considered in evaluating ads:
The position of the ad in a list has great influence on whether or not it is clicked.
The ad may have attractiveness that depends on the query terms.
All ads deserve the opportunity to be shown until their click probability can be approximated closely.
8.1.3 Issues for Display Ads
It's possible to use information about the user to determine which ad they should be shown. $\to$ privacy issues.
End of explanation
"""
# exercises for section 8.2
"""
Explanation: 8.2 On-Line Algorithms
8.2.1 On-Line and Off-Line Algorithms
Off-Line: The algorithm can access all the data in any order, and produces its answer at the end.
On-Line: The algorithm must decide about each stream element knowing nothing at all of the future.
Since we don't know the future, an on-line algorithm cannot always do as well as an off-line algorithm.
8.2.2 Greedy Algorithms
Greedy: make their decision in response to each input element by maximizing some function of the input element and the past.
might not be optimal.
8.2.3 The Competitive Ratio
an on-line algorithm need not give as good a result as the best off-line algorithm for the same problem:
particular on-line algorithm >= $C \times$ the optimum off-line algorithm, where $C \in (0,1)$ and is called the competitive ratio for the on-line algorithm.
The competitive ratio for an algorithm may depend on what kind of data is allowed to be input to the algorithm.
End of explanation
"""
plt.figure(figsize=(5,8))
plt.imshow(plt.imread('./res/fig8_1.png'))
"""
Explanation: 8.3 The Matching Problem
bipartite graphs:
graphs with two sets of nodes - left and right - with all edges connecting a node in the left set to a node in the right set.
End of explanation
"""
plt.figure(figsize=(8,8))
plt.imshow(plt.imread('./res/fig8_2.png'))
"""
Explanation: 8.3.1 Matches and Perfect Matches
matching: a matching is a subset of the edges such that no node is an end of two or more edges.
perfect matching: a matching is said to be perfect if every node appears in the matching.
maximal matching: a matching that is as large as any other matching for the graph in question is said to be maximal.
End of explanation
"""
bipartite_graph = [('1', 'a'), ('1', 'c'), ('2', 'b'), ('3', 'b'), ('3', 'd'), ('4', 'a')]
bipartite_graph
logger.setLevel('WARN')
def greedy_maximal_matching(connections):
maximal_matches = np.array([connections[0]])
logger.debug('maximal_matches: \n{}'.format(maximal_matches))
for c in connections[1:]:
logger.debug('c: {}'.format(c))
if (c[0] not in maximal_matches[:,0]) and (c[1] not in maximal_matches[:,1]):
maximal_matches = np.append(maximal_matches, [c], axis=0)
logger.debug('maximal_matches: \n{}'.format(maximal_matches))
return maximal_matches
from random import sample
connections = sample(bipartite_graph, len(bipartite_graph))
print('connections: \n{}'.format(connections))
greedy_maximal_matching(bipartite_graph)
"""
Explanation: 8.3.2 The Greedy Algorithm for Maximal Matching
Off-line algorithm for finding a maximal matching: $O(n^2)$ for an $n$-node graph.
On-line greedy algorithm:
We consider the edges in whatever order they are given.
When we consider $(x,y)$, add this edge to the matching if neither $x$ nor $y$ are ends of any edge selected for the matching so far. Otherwise, skip $(x,y)$.
End of explanation
"""
#(2)
from itertools import permutations
stat = []
for connections in permutations(bipartite_graph, len(bipartite_graph)):
stat.append(greedy_maximal_matching(connections).shape[0])
pd.Series(stat).value_counts()
"""
Explanation: 8.3.3 Competitive Ratio for Greedy Matching
conclusion: The competitive ratio is 1/2 exactly.
The proof is as follows:
<= 1/2
The competitive ratio for the greedy matching cannot be more than 1/2, as shown in Example 8.6.
>= 1/2
The competitive ratio is no less than 1/2.
Proof:
Suppose $M$ is a bipartite graph, $M_o$ is a maximal matching, and $M_g$ is the matching found by the greedy algorithm.
Let $L = \{M_o.l - M_g.l\}$, and $R = \{r \mid (l,r) \in M,\ l \in L\}$
Lemma (0): $R \subset M_g.r$
Suppose some $r \in R$ has $r \notin M_g.r$.
Because $\exists l \in L$ with $(l,r) \in M$, and neither $l$ nor $r$ is matched by $M_g$, the greedy algorithm would have added $(l,r)$ to $M_g$. $\implies$ contradiction.
Lemma (1): $|M_o| \leq |M_g| + |L|$
$|M_o| = |M_o.l| = |M_o.l \cap M_g.l| + |L| \leq |M_g| + |L|$
Lemma (2): $|L| \leq |R|$
each $l \in L$ is matched by $M_o$ to a distinct node $r$, and every such $r$ belongs to $R$ since $(l,r) \in M_o \subseteq M$.
Lemma (3): $|R| \leq |M_g|$
according to Lemma (0).
Combine Lemma (2) and Lemma (3), we get $|L| \leq |M_g|$. And together with Lemma (1), gives us $|M_o| \leq 2|M_g|$, namely,$$|M_g| \geq \frac{1}{2}|M_o|$$.
Exercises for Section 8.3
8.3.1
$j$ and $k$ cannot be the same for any $i$.
The number of node in $a$ linked to $b_j$ is no more than 2.
Proof:
because $i = 0, 1, \dotsc, n-1$,
so $j \in [0, 2, \dotsc, 2n-2] \text{ mod } n$.
hence $j$ is no more than $2n = 2*n$, namely, there are only two node in $a$ can link to any $b_j$.
The number of node in $a$ linked to $b_k$ is no more than 2.
Proof is similar with (2).
In all, there are only two node in $a$ can link to any node in $b$. So assign $b$ to $a$ one by one, the peferct matching always exists.
8.3.2
Because any node in $b$ has only two links, and also any node in $a$ has only two links. And for any $j$, there has one $k$ paired. Namely, two node of $a$ is full linked to two node of $b$.
num: $n$.
8.3.3
(1) depends on the order of the edges.
End of explanation
"""
plt.imshow(plt.imread('./res/fig8_3.png'))
"""
Explanation: 8.4 The Adwords Problem
8.4.1 History of Search Advertising
Google would show only a limited number of ads with each query.
Users of the Adwords system specified a budget.
Google did not simply order ads by the amount of the bid, but by the amount they expected to receive for display of each ad.
8.4.2 Definition of the Adwords Problem
Given:
A set of bids by advertisers for search queries.
A click-through rate for each advertiser-query pair.
A budget for each advertiser.
A limit on the number of ads to be displayed with each search query.
Respond to each search query with a set of advertisers such that:
The size of the set is no larger than the limit on the number of ads per query.
Each advertiser has bid on the search query.
Each advertiser has enough budget left to pay for the ad if it is clicked upon.
The revenue of a selection of ads is the total value of the ads selected, where the value of an ad is the product of the bid and the click-through rate for the ad and query.
8.4.3 The Greedy Approach to the Adwords Problem
Make some simplifications:
There is one ad shown for each query.
All advertisers have the same budget.
All click-through rates are the same.
All bids are either 0 or 1.
The greedy algorithm picks, for each search query, any advertiser who has bid 1 for that query.
The competitive ratio for this algorithm is 1/2; the argument is similar to the one in 8.3.3.
8.4.4 The Balance Algorithm
The Balance algorithm assigns a query to the advertiser who bids on the query and has the largest remaining budget.
8.4.5 A Lower Bound on Competitive Ratio for Balance
With only two advertisers, $3/4$ is exactly the competitive ratio.
Let two advertisers $A_1$ and $A_2$ have the same budget of $B$. We assume:
each query is assigned to an advertiser by the optimum algorithm.
if not, we can delete those queries without affecting the revenue of the optimum algorithm and possibly reducing the revenue of Balance.
both advertisers' budgets are consumed by the optimum algorithm.
If not, we can reduce the budgets, and again argue that the revenue of the optimum algorithm is not reduced while that of Balance can only shrink.
End of explanation
"""
plt.imshow(plt.imread('./res/fig8_4.png'))
"""
Explanation: In fig 8.3, observe that Balance must exhaust the budget of at least one of the advertisers, say $A_2$.
If the revenue of Balance is at least $3/4$th the revenue of the optimum algorithm, we need to show $y \geq x$.
There are two cases that the queries that are assigned to $A_1$ by the optimum algorithm are assigned to $A_1$ or $A_2$ by Balance:
Suppose at least half of these queries are assigned by Balance to $A_1$. Then $y \geq B/2$, so surely $y \geq x$.
Suppose more than half of these queries are assigned by Balance to $A_2$.
Why does Balance assign them to $A_2$, instead of $A_1$ like the optimum algorithm? Because $A_2$ must have had at least as great a budget available as $A_1$.
Since more than half of the $B$ queries that the optimum algorithm assigns to $A_1$ are assigned to $A_2$ by Balance, so the remaining budget of $A_2$ was less than $B/2$.
Thus, the remaining budget of $A_1$ was also less than $B/2$. We know that $x < B/2$.
It follows that $y > x$, since $x + y = B$.
We conclude that $y \geq x$ in either case, so the competitive ratio of the Balance Algorithm is at least $3/4$.
8.4.6 The Balance Algorithm with Many Bidders
The worst case for Balance is as follows:
There are $N$ advertisers, $A_1, A_2, \dotsc, A_N$.
Each advertiser has a budget $B = N!$.
There are $N$ queries $q_1, q_2, \dotsc, q_N$.
Advertiser $A_i$ bids on queries $q_1, q_2, \dotsc, q_i$ and no other queries.
The query sequence consists of $N$ rounds. The $i$th round consists of $B$ occurrences of query $q_i$ and nothing else.
The optimum off-line algorithm assigns the $B$ queries $q_i$ in the $i$th round to $A_i$ for all $i$. Its total revenue is $NB$.
However, for the Balance Algorithm,
End of explanation
"""
class advertiser:
def __init__(self, name, bids):
self.name = name
self.bids = bids
def get_info(self):
return self.name, self.bids
advertisers = [
advertiser('David', ['Google', 'email', 'product']),
advertiser('Jim', ['SNS', 'Facebook', 'product']),
advertiser('Sun', ['product', 'Google', 'email']),
]
bids_hash_table = dict()
for ad in advertisers:
v, k = ad.get_info()
k = [x.lower() for x in k]
k = ' '.join(sorted(k))
if k not in bids_hash_table:
bids_hash_table[k] = [v]
else:
bids_hash_table[k].append(v)
bids_hash_table
queries = [
('EMAIL', 'google', 'Product'),
('google', 'facebook', 'Product')
]
def handle_query(query):
q = [x.lower() for x in query]
q = ' '.join(sorted(q))
print(q)
try:
print('Found: {}'.format(bids_hash_table[q]))
except KeyError:
print('No bids')
for query in queries:
handle_query(query)
print()
"""
Explanation: In round $i$ only advertisers $A_i, \dotsc, A_N$ can bid, so Balance spreads each round's queries equally among the advertisers that can still bid, and the budgets of the higher-numbered advertisers are eventually exhausted. All budgets run out by round $j$, where
$$B(\frac{1}{N} + \frac{1}{N-1} + \dotsb + \frac{1}{N-j+1}) \geq B$$
Solving this equation for $j$, we get $$j = N(1 - \frac{1}{e})$$
Thus, the approximate revenue obtained by the Balance Algorithm is $BN(1 - \frac{1}{e})$. Therefore, the competitive ratio is $1 - \frac{1}{e}$.
8.4.7 The Generalized Balance Algorithm
With arbitrary bids and budgets Balance fails to weight the sizes of the bids properly. In order to make Balance work in more general situations, we need to make two modifications:
bias the choice of ad in favor of higher bids.
use the fraction of the budgets remaining.
We calculate $\Phi_i = x_i (1 - e^{-f_i})$, where $x_i$ is the bid of $A_i$ for the query, and $f_i$ is the fraction of $A_i$'s budget that is still unspent. The algorithm assigns the query to $\text{argmax}_i \Phi_i$.
The competitive ratio is $1 - \frac{1}{e}$.
8.4.8 Final Observations About the Adwords Problem
click-through rate.
multiply the bid by the click-through rate when computing the $\Phi_i$'s.
historical frequency of queries.
If $A_i$ has a budget sufficiently small, then we maintain $\Phi_i$ as long as we can expect that there will be enough queries remaining in the month to give $A_i$ its full budget of ads.
Exercises for Section 8.4
#maybe
8.5 Adwords Implementation
8.5.1 Matching Bids and Search Queries
If a search query occurs having exactly that set of words in some order, then the bid is said to match the query, and it becomes a candidate for selection.
Store the words of each bid in lexicographic (alphabetic) order, and use the resulting sorted list as the hash-key for the bid.
End of explanation
"""
n_common_words = {'the': 0.9, 'and': 0.8, 'twas': 0.3}
def construct_document(doc):
doc = doc.replace(',','').lower().split(' ')
com = set(doc).intersection(set(n_common_words.keys()))
diff = set(doc).difference(set(n_common_words.keys()))
freq = [n_common_words[x] for x in com]
freq_sec = [x for (y,x) in sorted(zip(freq, com))]
rare_sec = sorted(diff)
sec = ' '.join(rare_sec + freq_sec)
print(sec)
doc = 'Twas brilling, and the slithy toves'
construct_document(doc)
"""
Explanation: 8.5.2 More Complex Matching Problems
Hard: Matching adwords bids to emails.
a bid on a set of words $S$ matches an email if all the words in $S$ appear anywhere in the email.
Easy: Matching single words or consecutive sequences of words in a long article
On-line news sites often push certain news or articles to users who subscribed by keywords or phrases.
8.5.3 A Matching Algorithm for Documents and Bids
match many "bids" against many "documents".
A bid is a (typically small) set of words.
A document is a larger set of words, such as email, tweet, or news article.
We assume there may be hundreds of documents per second arriving, and there are many bids, perhaps on the order of a hundred million or a billion.
representing a bid by its words listed in some order
status: It is an integer indicating how many of the first words on the list have been matched by the current document.
ordering words rarest-first.
We might identify the $n$ most common words; these are sorted by frequency and occupy the end of the list, with the most frequent words at the very end.
All words not among the $n$ most frequent can be assumed equally infrequent and ordered lexicographically.
End of explanation
"""
plt.imshow(plt.imread('./res/fig8_5.png'))
"""
Explanation: The bids are stored in a hash-table, whose hash key is the first word of the bid, in the order explained above.
There is another hash table, whose job is to contain copies of those bids that have been partially matched. If the status is $i$, then the hash-key for this hash table is the $(i + 1)$st word.
To process a document:
End of explanation
"""
|
moble/MatchedFiltering | GW150914/HybridizeNR.ipynb | mit | # m_sun = one solar mass expressed in seconds (G*M_sun/c**3); defined here explicitly
# since no definition appears in this notebook (value assumed, not from the original cell)
m_sun = 4.92549095e-6
16.4 / ((36.+29.) * m_sun)
"""
Explanation: We need about 16.4 seconds of data, after we scale the system to (36+29=) $65\, M_{\odot}$. In terms of $M$ as we know it, that's about...
End of explanation
"""
metadata = read_metadata_into_object(data_dir + '/metadata.txt')
m1 = metadata.relaxed_mass1
m2 = metadata.relaxed_mass2
chi1 = np.array(metadata.relaxed_spin1) / m1**2
chi2 = np.array(metadata.relaxed_spin2) / m2**2
# I guess(...) that the units on the metadata quantity are just those of M*Omega, so I'll divide by M to get units of M=1
Omega_orb_i = np.linalg.norm(metadata.relaxed_orbital_frequency) / (m1+m2)
"""
Explanation: We can read in the metadata and establish some quantities. These may not be the same as the optimal parameters, but they need to be consistent between NR and PN.
End of explanation
"""
nr = GWFrames.ReadFromNRAR(data_dir + 'rhOverM_Asymptotic_GeometricUnits_CoM.h5/Extrapolated_N4.dir')
nr.SetT(nr.T()-metadata.relaxed_measurement_time);
approximant = 'TaylorT4' # 'TaylorT1'|'TaylorT4'|'TaylorT5'
delta = (m1 - m2) / (m1 + m2) # Normalized BH mass difference (M1-M2)/(M1+M2)
chi1_i = chi1 # Initial dimensionless spin vector of BH1
chi2_i = chi2 # Initial dimensionless spin vector of BH2
Omega_orb_i = Omega_orb_i # Initial orbital angular frequency
Omega_orb_0 = Omega_orb_i/3.25 # Earliest orbital angular frequency to compute (default: Omega_orb_i)
# R_frame_i: Initial rotation of the binary (default: No rotation)
# MinStepsPerOrbit = # Minimum number of time steps at which to evaluate (default: 32)
# PNWaveformModeOrder: PN order at which to compute waveform modes (default: 3.5)
# PNOrbitalEvolutionOrder: PN order at which to compute orbital evolution (default: 4.0)
pn = GWFrames.PNWaveform(approximant, delta, chi1_i, chi2_i, Omega_orb_i, Omega_orb_0)
plt.close()
plt.semilogy(pn.T(), np.abs(pn.Data(0)))
plt.semilogy(nr.T(), np.abs(nr.Data(0)))
! /Users/boyle/.continuum/anaconda/envs/gwframes/bin/python ~/Research/Code/misc/GWFrames/Code/Scripts/HybridizeOneWaveform.py {data_dir} \
--Waveform=rhOverM_Asymptotic_GeometricUnits_CoM.h5/Extrapolated_N4.dir --t1={metadata.relaxed_measurement_time} --t2=2000.0 \
--InitialOmega_orb={Omega_orb_0} --Approximant=TaylorT4 \
--DirectAlignmentEvaluations 100
h = hybrid.EvaluateAtPoint(0.0, 0.0)[:-1]
hybrid = scri.SpEC.read_from_h5(data_dir + 'rhOverM_Inertial_Hybrid.h5')
hybrid = hybrid[:-1]
h = hybrid.SI_units(current_unit_mass_in_solar_masses=36.+29., distance_from_source_in_megaparsecs=410)
t_merger = 16.429
h.max_norm_time()
h.t = h.t - h.max_norm_time() + t_merger
plt.close()
plt.semilogy(h.t, np.abs(h.data[:, 0]))
sampling_rate = 4096. # Hz
dt = 1 / sampling_rate # sec
t = np.linspace(0, 32, num=int(32*sampling_rate))
h_discrete = h.interpolate(t)
h_discrete.data[np.argmax(t>16.4739):, :] = 1e-40j
from utilities import transition_function
h_trimmed = h_discrete.copy()
h_trimmed.data = (1-transition_function(h_discrete.t, 16.445, 16.4737))[:, np.newaxis] * h_discrete.data
plt.close()
plt.semilogy(h_discrete.t, np.abs(h_discrete.data[:, 0]))
plt.semilogy(h_trimmed.t, np.abs(h_trimmed.data[:, 0]))
import quaternion
import spherical_functions as sf
sYlm = sf.SWSH(quaternion.one, h_discrete.spin_weight, h_discrete.LM)
(sYlm * h_trimmed.data).shape
h_data = np.tensordot(sYlm, h_trimmed.data, axes=([0, 1]))
np.savetxt('../Data/NR_GW150914.txt', np.vstack((h_data.real, h_data.imag)).T)
! head -n 1 ../Data/NR_GW150914.txt
"""
Explanation: Now read the NR waveform and offset so that the "relaxed" measurement time is $0$.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_parcellation.ipynb | bsd-3-clause | # Author: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
from surfer import Brain
import mne
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
mne.datasets.fetch_hcp_mmp_parcellation(subjects_dir=subjects_dir,
verbose=True)
labels = mne.read_labels_from_annot(
'fsaverage', 'HCPMMP1', 'lh', subjects_dir=subjects_dir)
brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
cortex='low_contrast', background='white', size=(800, 600))
brain.add_annotation('HCPMMP1')
aud_label = [label for label in labels if label.name == 'L_A1_ROI-lh'][0]
brain.add_label(aud_label, borders=False)
"""
Explanation: Plot a cortical parcellation
In this example, we download the HCP-MMP1.0 parcellation [1]_ and show it
on fsaverage.
<div class="alert alert-info"><h4>Note</h4><p>The HCP-MMP dataset has license terms restricting its use.
Of particular relevance:
"I will acknowledge the use of WU-Minn HCP data and data
derived from WU-Minn HCP data when publicly presenting any
results or algorithms that benefitted from their use."</p></div>
References
.. [1] Glasser MF et al. (2016) A multi-modal parcellation of human
cerebral cortex. Nature 536:171-178.
End of explanation
"""
brain = Brain('fsaverage', 'lh', 'inflated', subjects_dir=subjects_dir,
cortex='low_contrast', background='white', size=(800, 600))
brain.add_annotation('HCPMMP1_combined')
"""
Explanation: We can also plot a combined set of labels (23 per hemisphere).
End of explanation
"""
|
danecollins/pyawr | awr_nb/basic_awrde_connection.ipynb | mit | # import com library
import win32com.client
"""
Explanation: Working with AWR Design Environment
This notebook shows how to connect to AWRDE and retrieve data from a simulation.
Setup
To communicate with COM enabled Windows applications we must import the com interface library using the raw win32com connection.
End of explanation
"""
# connect to awrde
mwo = win32com.client.Dispatch("MWOApp.MWOffice")
"""
Explanation: Once this is done, we can connect to AWRDE. If it is already running it will connect to that instance, otherwise it will be started up. If you have multiple instances running and want to connect to a specific instance, see this link
End of explanation
"""
import os
example_directory = mwo.Directories(8).ValueAsString
example_filename = os.path.join(example_directory, 'LPF_lumped.emp')
mwo.Open(example_filename)
"""
Explanation: Open Project, Get Data
Let's open the LPF_lumped.emp example
End of explanation
"""
mwo.Project.Simulate()
graph = mwo.Project.Graphs('Passband and Stopband')
m1 = graph.Measurements(1)
m2 = graph.Measurements(2)
frequencies = m1.XValues
S11_dB = m1.YValues(0) # in this case we have to specify a dimension
S21_dB = m2.YValues(0)
"""
Explanation: Next we will simulate and bring some results back.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import pylab
pylab.rcParams['figure.figsize'] = (10,6) # set default plot size
plt.plot(frequencies, S11_dB)
plt.plot(frequencies, S21_dB)
plt.show()
"""
Explanation: Create a Plot
I'll use matplotlib to plot but we could use any other python based plotting package.
End of explanation
"""
import pandas as pd
S11db = pd.Series(S11_dB, index=frequencies)
S21db = pd.Series(S21_dB, index=frequencies)
df = pd.concat([S11db, S21db], axis=1)
df.columns=['S21', 'S11']
df.head()
"""
Explanation: Perform Analysis
Often the reason for wanting to pull data into Python is to do further analysis. For this we would like to get the data into a DataFrame.
End of explanation
"""
df.index=df.index/1e9
df.head()
"""
Explanation: This is a little ugly with the frequencies in Hertz so let's convert them to GHz
End of explanation
"""
|
xianjunzhengbackup/code | IoT/Basic_dweet_cloud.ipynb | mit | import requests
payload={'Temperature':'28.1'}
req=requests.get('https://dweet.io/dweet/for/JunTest1?',params=payload)
print(req.content)
"""
Explanation: dweet.io is a simple cloud service that accepts data via HTTP requests.
End of explanation
"""
import dweepy
data={'Temperature':'29.1'}
dweepy.dweet_for('JunTest1', data)
"""
Explanation: Alternatively, we can use the dweepy library.
End of explanation
"""
import random
import time
for i in range(100):
data['Temperature']=random.randint(0,100)
dweepy.dweet_for('JunTest1',data)
time.sleep(2)
"""
Explanation: Both ways work. The data can be watched at https://dweet.io/follow/JunTest1
End of explanation
"""
req=requests.get('https://dweet.io/get/dweets/for/JunTest1')
print(req.content)
"""
Explanation: The following URL fetches data from dweet.io's long-term storage
End of explanation
"""
|
DJCordhose/ai | notebooks/tf2/fashion-mnist-resnet.ipynb | mit | !pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train.shape
import numpy as np
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
x_train.shape
# reduce memory and compute time
NUMBER_OF_SAMPLES = 50000
x_train_samples = x_train[:NUMBER_OF_SAMPLES]
y_train_samples = y_train[:NUMBER_OF_SAMPLES]
import skimage.data
import skimage.transform
# note: despite the "_224" in the name, the images are resized to 32x32 here to keep compute manageable
x_train_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_train_samples])
x_train_224.shape
"""
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tf2/fashion-mnist-resnet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Fashion MNIST with Keras and Resnet
Adapted from
* https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb
* https://github.com/margaretmz/deep-learning/blob/master/fashion_mnist_keras.ipynb
End of explanation
"""
from tensorflow.keras.applications.resnet50 import ResNet50
# https://keras.io/applications/#mobilenet
# https://arxiv.org/pdf/1704.04861.pdf
from tensorflow.keras.applications.mobilenet import MobileNet
# model = ResNet50(classes=10, weights=None, input_shape=(32, 32, 1))
model = MobileNet(classes=10, weights=None, input_shape=(32, 32, 1))
model.summary()
%%time
BATCH_SIZE=10
EPOCHS = 10
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train_224, y_train_samples, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_split=0.2, verbose=1)
import matplotlib.pyplot as plt
plt.xlabel('epochs')
plt.ylabel('loss')
plt.yscale('log')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['Loss', 'Validation Loss'])
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['Accuracy', 'Validation Accuracy'])
"""
Explanation: Alternative: ResNet
basic ideas
depth does matter
8x deeper than VGG
possible by using shortcuts and skipping final fc layer
prevents vanishing gradient problem
https://keras.io/applications/#resnet50
https://medium.com/towards-data-science/neural-network-architectures-156e5bad51ba
http://arxiv.org/abs/1512.03385
End of explanation
"""
x_test_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_test])
LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']
def plot_predictions(images, predictions):
n = images.shape[0]
nc = int(np.ceil(n / 4))
f, axes = plt.subplots(nc, 4)
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
label = LABEL_NAMES[np.argmax(predictions[i])]
confidence = np.max(predictions[i])
if i > n:
continue
axes[x, y].imshow(images[i])
axes[x, y].text(0.5, 0.5, label + '\n%.3f' % confidence, fontsize=14)
plt.gcf().set_size_inches(8, 8)
plot_predictions(np.squeeze(x_test_224[:16]),
model.predict(x_test_224[:16]))
train_loss, train_accuracy = model.evaluate(x_train_224, y_train_samples, batch_size=BATCH_SIZE)
train_accuracy
test_loss, test_accuracy = model.evaluate(x_test_224, y_test, batch_size=BATCH_SIZE)
test_accuracy
"""
Explanation: Checking our results (inference)
End of explanation
"""
|
google/compass | packages/propensity/09.audience_upload.ipynb | apache-2.0 | # Add custom utils module to Python environment
import os
import sys
sys.path.append(os.path.abspath(os.pardir))
from IPython import display
from utils import helpers
"""
Explanation: 9. Audience Upload to GMP
GMP and Google Ads Connector is used to upload audience data to GMP (e.g. Google Analytics, Campaign Manager) or Google Ads in an automatic and reliable way.
Following sections provide high level guidelines on deploying and configuring GMP and Google Ads Connector. For detailed instructions on how to set up different GMP endpoints, refer to solution's README.md.
Requirements
This notebook requires BigQuery table containing scored audience list. Refer to 7.batch_scoring.ipynb for details on how to get scored audience.
Import required modules
End of explanation
"""
!git clone https://github.com/GoogleCloudPlatform/cloud-for-marketing.git
"""
Explanation: Deploy GMP and Google Ads Connector
First clone the source code by executing the cell below:
End of explanation
"""
display.HTML('<a href="" data-commandlinker-command="terminal:create-new">โถAccess Terminalโ๏ธ</a>')
"""
Explanation: Next, execute the following two steps to deploy GMP and Google Ads Connector on your GCP project.
Copy the following content:
bash
cd cloud-for-marketing/marketing-analytics/activation/gmp-googleads-connector && ./deploy.sh default_install
Execute the following cell to start a new Terminal session and paste the copied content into the Terminal. NOTE: This notebook uses the Google Analytics Measurement Protocol API to demonstrate audience upload, so choose 0 on Step 5: Confirm the integration with external APIs... during the installation process in the Terminal session.
It takes about 3 minutes to set up the audience uploader pipeline.
End of explanation
"""
%%writefile cloud-for-marketing/marketing-analytics/activation/gmp-googleads-connector/config_api.json
{
"MP": {
"default": {
"mpConfig": {
"v": "1",
"t": "event",
"ec": "video",
"ea": "play",
"ni": "1",
"tid": "UA-XXXXXXXXX-Y"
}
}
}
}
"""
Explanation: When the deployment is done, you can verify the three Cloud Functions deployments via the Cloud Console UI. If the deployment succeeded, move to the next section to upload audience data to Google Analytics via a JSONL file.
Configure audience upload endpoint
Different audience upload endpoint APIs have different configurations. The following demonstrates how the Google Analytics endpoint can be configured via Measurement Protocol. Refer to 3.3. Configurations of APIs for detailed configuration options for other endpoints.
Update the following GA values in the cell below according to your needs. Refer to Working with the Measurement Protocol for details on field names and correct values.
json
{
"t": "event",
"ec": "video",
"ea": "play",
"ni": "1",
"tid": "UA-112752759-1"
}
End of explanation
"""
configs = helpers.get_configs('config.yaml')
dest_configs = configs.destination
# GCP project ID
PROJECT_ID = dest_configs.project_id
# Name of BigQuery dataset
DATASET_NAME = dest_configs.dataset_name
# Google Cloud Storage Bucket name to store audience upload JSON files
# NOTE: The name should be same as indicated while deploying
# "GMP and Google Ads Connector" on the Terminal
GCS_BUCKET = 'bucket'
# This Cloud Storage folder is monitored by the "GMP and Google Ads Connector"
# to send over to endpoint (eg: Google Analytics).
GCS_FOLDER = 'outbound'
# File name to export BigQuery Table to Cloud Storage
JSONL_FILENAME = 'myproject_API[MP]_config[default].jsonl'
# BigQuery table containing scored audience data
AUDIENCE_SCORE_TABLE_NAME = 'table'
%%bash -s $PROJECT_ID $DATASET_NAME $AUDIENCE_SCORE_TABLE_NAME $GCS_BUCKET $GCS_FOLDER $JSONL_FILENAME
bq extract \
--destination_format NEWLINE_DELIMITED_JSON \
$1:$2.$3 \
gs://$4/$5/$6
"""
Explanation: Create audience list JSON files
GMP and Google Ads Connector's Google Analytics Measurement Protocol pipeline requires JSONL text format. The following cells export the BigQuery table containing the audience list as a JSONL file to a Google Cloud Storage bucket. NOTE: This solution has a specific file-naming requirement to work properly. Refer to 3.4. Name convention of data files for more details.
As soon as the file is uploaded, GMP and Google Ads Connector processes it and sends it via Measurement Protocol to Google Analytics property configured above ("tid": "UA-XXXXXXXXX-Y").
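For illustration only, here is a small Python sketch of how such a file name can be assembled; the exact rules live in the name-convention guide linked above, and the prefix and variable names here are just placeholders:
api = 'MP'            # Measurement Protocol pipeline key used in config_api.json
config = 'default'    # configuration name defined under "MP" above
prefix = 'myproject'  # arbitrary descriptive prefix (placeholder)
jsonl_filename = '{}_API[{}]_config[{}].jsonl'.format(prefix, api, config)
print(jsonl_filename)  # myproject_API[MP]_config[default].jsonl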
End of explanation
"""
|
nsrchemie/code_guild | wk1/notebooks/wk1.4.ipynb | mit | # How to make a set
a = {1, 2, 3}
type(a)
# Getting a set from a list
b = set([1, 2, 3])
a == b
# How to make a frozen set
a = frozenset({1, 2, 3})
# Getting a set from a list
b = frozenset([1, 2, 3])
# Getting a set from a string
set("obtuse")
# Getting a set from a dictionary
c = set({'a':1, 'b':2})
type(c)
# Getting a set from a tuple
c = set(('a','b'))
type(c)
# Sets do not contain duplicates
a = {1, 2, 2, 3, 3, 3, 3}
a
# Sets do not support indexing (because they don't preserve order)
a[2]
# Sets cannot be used for dictionary keys because they are mutable but frozensets can be used for dictionary keys.
# Adding elements to a set
s = set([12, 26, 54])
s.add(32)
s # If we try to add 32 again, nothing will happen
# Updating a set using an iterable
s.update([26, 12, 9, 14]) # once again, note that adding duplicates has no effect.
s
# making copies of sets
s2 = s.copy()
"""
Explanation: wk1.4
warm-up
Instructions: For each of the following problems, fill out the answer underneath and submit the finished quiz to me via personal slack message. You may consult your notes but please do not use the internet.
assign the number 8 to a variable eight.
set b equal to eight.
print b.
Write a boolean expression that will return true if x is 'a' or 'b' and false otherwise.
Write a boolean expression that returns true if and only if x is greater than ten and x is odd.
write a function that takes a parameter, n, and then returns n (unchanged).
write a function that takes a string, str_, and prints the string three times (once per line).
Write a program to prompt the user for hours and rate per hour to compute gross pay.
Enter Hours: 35
Enter Rate: 2.75
Pay: 96.25
given a str1 = "Hello " and a str2 = "World", how can we concatenate (join together) str1 to str2?
given a str1 = "Hello", how can we index str1 to get the 'o'? Give two different ways.
given a str1 = "Hi", what operation can we do to the string to output "HiHiHiHi"?
make a list, lst, containing the numbers 0 through 10.
append the string 'hi' to the list
remove the 4 from the lst
how can you check if 5 is in the lst (your expression should return True if 5 is in the lst, and False otherwise)
write a loop that prints each element from 0 through 9
write a loop that prints each element from your lst.
write a loop that prints out the element multiplied by two for each element from 0 through 9.
write a loop that will count from 0 to infinity.
write a statement that checks if a variable var is empty.
make a tuple containing a single element 'a'
make a tuple containing two elements, 'a' and 'b'
given a tuple containing 'Dicaprio' and 43, unpack the tuple with the variables name and age.
make an empty dictionary, dct.
add the key value pairs 'one'/1, 'two'/2, 'three'/3, 'four'/4
change the value of three to 'tres'
delete the key value pair 'two'/2.
write the following loops over dct:
a loop that gets the keys
a loop that gets the values
a loop that prints the key value pairs (not tuple)
a loop that prints tuples of the key value pairs.
why might we use a dictionary over a list of tuples?
Give a definition of the following:
mutability/immutability
homogeneous/heterogenous datatypes
overflow
abstraction
modularization
For each of the following datatypes, write M for mutable or I for immutable, HO for homogeneous or HE for heterogenous:
ex. blub: MHO (note blub is not a datatype we will be going over in this class)
string
list
tuple
dictionary
what is the difference between printing output from a function vs. returning output from a function?
what is a variable?
what is the difference between aliasing and copying? What type of datatypes does aliasing apply to? Why do we prefer to copy?
Sets and frozen sets
A big difference: sets are mutable, frozen sets are not
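As a quick illustration of that difference (a minimal sketch): a frozenset can be used as a dictionary key because it is hashable, while a plain set cannot.
d = {frozenset({1, 2}): 'ok'}   # works: frozensets are immutable and hashable
print(d[frozenset({1, 2})])
try:
    d[{1, 2}] = 'fails'         # sets are mutable, hence unhashable
except TypeError as err:
    print(err)                  # unhashable type: 'set'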
End of explanation
"""
32 in s
55 in s
"""
Explanation: Testing membership
End of explanation
"""
s.issubset(set([32, 8, 9, 12, 14, -4, 54, 26, 19]))
s.issuperset(set([9, 12]))
# Note that subset and superset testing works on other iterables
s.issuperset([32, 9])
# We can also use <= and >= respectively for subset and superset testing
set([4, 5, 7]) <= set([4, 5, 7, 9])
set([9, 12, 15]) >= set([9, 12])
"""
Explanation: Subsets and supersets
End of explanation
"""
s = set([1,2,3,4,5,6])
s.pop()
s.remove(3)
s.remove(9) # Removing an item that isn't in the set causes an error
s.discard(9) # discard is the same as remove but doesn't throw an error
s.clear() # removes everything
s
"""
Explanation: Removing items
End of explanation
"""
s = set("blerg")
for char in s:
print(char)
"""
Explanation: Iterating over sets
Big takeaway: you can do it but good luck guessing the order
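If you do need a predictable order, one option (a small sketch) is to sort the elements before iterating:
s = set("blerg")
for char in sorted(s):   # sorted() returns a list, so the iteration order is deterministic
    print(char)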
End of explanation
"""
s1 = set([4, 6, 9])
s2 = set([1, 6, 8])
s1.intersection(s2)
s1 & s2
s1.intersection_update(s2) # updates s1 with the intersection of s1 and s2
s1
"""
Explanation: Set operations
Intersection
End of explanation
"""
s1 = set([4, 6, 9])
s2 = set([1, 6, 8])
s1.union(s2)
s1 | s2
# To update using union, simply use update
"""
Explanation: Union
End of explanation
"""
s1.symmetric_difference(s2)
s1 ^ s2
s1.symmetric_difference_update(s2)
s1
"""
Explanation: Symmetric difference (xor)
End of explanation
"""
s1 = set([4, 6, 9])
s2 = set([1, 6, 8])
s1.difference(s2)
s1 - s2
s1.difference_update(s2)
s1
"""
Explanation: Set Difference
End of explanation
"""
|
Bismarrck/deep-learning | sentiment-rnn/Sentiment_RNN.ipynb | mit | import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
# Create your dictionary that maps vocab words to integers here
from collections import Counter
counter = Counter(words)
vocab = sorted(counter, key=counter.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(vocab, 1)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for review in reviews:
reviews_ints.append([vocab_to_int[word] for word in review.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
label_to_int= {"positive": 1, "negative": 0}
labels = labels.split()
labels = np.array([label_to_int[label.strip().lower()] for label in labels])
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out that review with 0 length
reviews_ints = [review for review in reviews_ints if len(review) > 0]
"""
Explanation: Okay, a couple of issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
seq_len = 200
num_reviews = len(reviews_ints)
features = np.zeros((num_reviews, seq_len), dtype=int)
for i, review in enumerate(reviews_ints):
rlen = min(len(review), seq_len)
istart = seq_len - rlen
features[i, istart:] = review[:rlen]
print(features[0, :100])
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
features[:10,:100]
"""
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
"""
split_frac = 0.8
split_index = int(num_reviews * split_frac)
train_x, val_x = features[:split_index], features[split_index:]
train_y, val_y = labels[:split_index], labels[split_index:]
split_index = int(len(val_x) * 0.5)
val_x, test_x = val_x[:split_index], val_x[split_index:]
val_y, test_y = val_y[:split_index], val_y[split_index:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name="inputs")
labels_ = tf.placeholder(tf.int32, [None, None], name="labels")
keep_prob = tf.placeholder(tf.float32, shape=None, name="keep_prob")
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable()
embed =
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
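One possible way to fill in the cell (a sketch, not the only solution; the uniform initialization range of [-1, 1) is an arbitrary choice) is, inside the with graph.as_default(): block:
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)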
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm =
# Add dropout to the cell
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
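Putting the snippets above together, one possible completion of the exercise cell looks like this (a sketch that simply follows the calls quoted in this explanation):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)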
End of explanation
"""
with graph.as_default():
outputs, final_state =
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
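Concretely, a sketch of the completed cell (inside the with graph.as_default(): block) just passes the embedded vectors and the initial state to the call quoted above:
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)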
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
Raag079/self-driving-car | Term01-Computer-Vision-and-Deep-Learning/Labs/03-CarND-LeNet-Lab/.ipynb_checkpoints/LeNet-Lab-Solution-checkpoint.ipynb | mit | from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
"""
Explanation: LeNet Lab Solution
Source: Yan LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
"""
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
"""
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
"""
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
"""
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
"""
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
"""
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
"""
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
"""
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
"""
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
"""
Explanation: SOLUTION: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the final fully connected layer (the logits).
End of explanation
"""
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
"""
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
"""
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
"""
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
"""
Explanation: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, 'lenet')
print("Model saved")
"""
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/LSTM_IMDB_Sentiment_Example.ipynb | apache-2.0 | # keras.datasets.imdb is broken in TensorFlow 1.13 and 1.14 due to numpy 1.16.3
!pip install numpy==1.16.2
# All the imports!
import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing import sequence
from numpy import array
# Supress deprecation warnings
import logging
logging.getLogger('tensorflow').disabled = True
# Fetch "IMDB Movie Review" data, constraining our reviews to
# the 10000 most commonly used words
vocab_size = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
# Map for readable classnames
class_names = ["Negative", "Positive"]
"""
Explanation: LSTM Recurrent Neural Network
Learning Objectives
Create map for converting IMDB dataset to readable reviews.
Create and build LSTM Recurrent Neural Network.
Visualise the Model and train the LSTM.
Evaluate model with test data and view results.
What is this?
This Jupyter Notebook contains Python code for building a LSTM Recurrent Neural Network that gives 87-88% accuracy on the IMDB Movie Review Sentiment Analysis Dataset.
More information is given on this blogpost.
Introduction
Long Short Term Memory networks, usually just called "LSTMs", are a special kind of RNN, capable of learning long-term dependencies. They were introduced by Hochreiter & Schmidhuber (1997), and were refined and popularized by many people in following work. They work tremendously well on a large variety of problems, and are now widely used.
LSTMs are explicitly designed to avoid the long-term dependency problem. Remembering information for long periods of time is practically their default behavior, not something they struggle to learn!
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
Setting up
When running this for the first time you may get a warning telling you to restart the Runtime. You can ignore this, but feel free to select "Kernel->Restart Kernel" from the overhead menu if you encounter problems.
End of explanation
"""
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
"""
Explanation: Note: Please ignore any incompatibility errors or warnings as they do not impact the notebook's functionality.
This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
End of explanation
"""
# Get the word index from the dataset
word_index = tf.keras.datasets.imdb.get_word_index()
# Ensure that "special" words are mapped into human readable terms
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNKNOWN>"] = 2
word_index["<UNUSED>"] = 3
# Perform reverse word lookup and make it callable
# TODO -- your code goes here
"""
Explanation: Create map for converting IMDB dataset to readable reviews
Reviews in the IMDB dataset have been encoded as a sequence of integers. Luckily the dataset also
contains an index for converting the reviews back into human readable form.
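One way to complete the reverse-lookup TODO is sketched below; the later cells assume a callable named decode_review, but the exact implementation (and the reverse_word_index helper name) is up to you:
reverse_word_index = {value: key for key, value in word_index.items()}
def decode_review(encoded_review):
    # Fall back to '?' for any integer id that is missing from the index
    return ' '.join(reverse_word_index.get(i, '?') for i in encoded_review)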
End of explanation
"""
# Concatenate test and training datasets
allreviews = np.concatenate((x_train, x_test), axis=0)
# Review lengths across test and training whole datasets
print("Maximum review length: {}".format(len(max((allreviews), key=len))))
print("Minimum review length: {}".format(len(min((allreviews), key=len))))
result = [len(x) for x in allreviews]
print("Mean review length: {}".format(np.mean(result)))
# Print a review and its class as stored in the dataset. Replace the number
# to select a different review.
print("")
print("Machine readable Review")
print(" Review Text: " + str(x_train[60]))
print(" Review Sentiment: " + str(y_train[60]))
# Print a review and its class in human readable format. Replace the number
# to select a different review.
print("")
print("Human Readable Review")
print(" Review Text: " + decode_review(x_train[60]))
print(" Review Sentiment: " + class_names[y_train[60]])
"""
Explanation: Data Insight
Here we take a closer look at our data. How many words do our reviews contain?
And what do our reviews look like in machine and human readable form?
End of explanation
"""
# The length of reviews
review_length = 500
# Padding / truncated our reviews
x_train = sequence.pad_sequences(x_train, maxlen = review_length)
x_test = sequence.pad_sequences(x_test, maxlen = review_length)
# Check the size of our datasets. Review data for both test and training should
# contain 25000 reviews of 500 integers. Class data should contain 25000 values,
# one for each review. Class values are 0 or 1, indicating a negative
# or positive review.
print("Shape Training Review Data: " + str(x_train.shape))
print("Shape Training Class Data: " + str(y_train.shape))
print("Shape Test Review Data: " + str(x_test.shape))
print("Shape Test Class Data: " + str(y_test.shape))
# Note padding is added to start of review, not the end
print("")
print("Human Readable Review Text (post padding): " + decode_review(x_train[60]))
"""
Explanation: Pre-processing Data
We need to make sure that our reviews are of a uniform length so they can be fed to the network in fixed-size batches.
Some reviews will need to be truncated, while others need to be padded.
End of explanation
"""
# We begin by defining an empty stack. We'll use this for building our
# network, later by layer.
model = tf.keras.models.Sequential()
# The Embedding Layer provides a spatial mapping (or Word Embedding) of all the
# individual words in our training set. Words close to one another share context
# and/or meaning. This spatial mapping is learned during the training process.
model.add(
tf.keras.layers.Embedding(
input_dim = vocab_size, # The size of our vocabulary
output_dim = 32, # Dimensions to which each words shall be mapped
input_length = review_length # Length of input sequences
)
)
# Dropout layers fight overfitting and force the model to learn multiple
# representations of the same data by randomly disabling neurons in the
# learning phase.
# TODO -- your code goes here
# We are using a fast version of LSTM which is optimised for GPUs. This layer
# looks at the sequence of words in the review, along with their word embeddings
# and uses both of these to determine the sentiment of a given review.
# TODO -- your code goes here
# Add a second dropout layer with the same aim as the first.
# TODO -- your code goes here
# All LSTM units are connected to a single node in the dense layer. A sigmoid
# activation function determines the output from this node - a value
# between 0 and 1. Closer to 0 indicates a negative review. Closer to 1
# indicates a positive review.
model.add(
tf.keras.layers.Dense(
units=1, # Single unit
activation='sigmoid' # Sigmoid activation function (output from 0 to 1)
)
)
# Compile the model
model.compile(
loss=tf.keras.losses.binary_crossentropy, # loss function
optimizer=tf.keras.optimizers.Adam(), # optimiser function
metrics=['accuracy']) # reporting metric
# Display a summary of the models structure
model.summary()
"""
Explanation: Create and build LSTM Recurrent Neural Network
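For the three TODOs in the model-building cell (first dropout, LSTM, second dropout), a minimal sketch could look like the lines below; the dropout rate of 0.25 and the 32 LSTM units are assumptions for illustration, not values taken from the solution notebook:
model.add(tf.keras.layers.Dropout(rate=0.25))   # first dropout layer
model.add(tf.keras.layers.LSTM(units=32))       # LSTM layer; uses the cuDNN kernel on GPU when possible
model.add(tf.keras.layers.Dropout(rate=0.25))   # second dropout layer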
End of explanation
"""
tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=False)
"""
Explanation: Visualise the Model
End of explanation
"""
# Train the LSTM on the training data
history = model.fit(
# Training data : features (review) and classes (positive or negative)
x_train, y_train,
# Number of samples to work through before updating the
# internal model parameters via back propagation. The
# higher the batch, the more memory you need.
batch_size=256,
# An epoch is an iteration over the entire training data.
epochs=3,
    # The model will set apart this fraction of the training
# data, will not train on it, and will evaluate the loss
# and any model metrics on this data at the end of
# each epoch.
validation_split=0.2,
verbose=1
)
"""
Explanation: Train the LSTM
End of explanation
"""
# Get Model Predictions for test data
# TODO -- your code goes here
"""
Explanation: Evaluate model with test data and view results
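A sketch of one way to complete the prediction TODO; the 0.5 threshold on the sigmoid output is the usual choice but still an assumption, and the later cells expect an integer array named predicted_classes:
predicted_probabilities = model.predict(x_test)
predicted_classes = (predicted_probabilities > 0.5).astype(int)
print("Test accuracy: {:.3f}".format(np.mean(predicted_classes.flatten() == y_test)))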
End of explanation
"""
predicted_classes_reshaped = np.reshape(predicted_classes, 25000)
incorrect = np.nonzero(predicted_classes_reshaped!=y_test)[0]
# We select the first 20 incorrectly classified reviews
for j, incorrect in enumerate(incorrect[0:20]):
predicted = class_names[predicted_classes_reshaped[incorrect]]
actual = class_names[y_test[incorrect]]
human_readable_review = decode_review(x_test[incorrect])
print("Incorrectly classified Test Review ["+ str(j+1) +"]")
print("Test Review #" + str(incorrect) + ": Predicted ["+ predicted + "] Actual ["+ actual + "]")
print("Test Review Text: " + human_readable_review.replace("<PAD> ", ""))
print("")
"""
Explanation: View some incorrect predictions
Let's have a look at some of the incorrectly classified reviews. For readability we remove the padding.
End of explanation
"""
# Write your own review
review = "this was a terrible film with too much sex and violence i walked out halfway through"
#review = "this is the best film i have ever seen it is great and fantastic and i loved it"
#review = "this was an awful film that i will never see again"
# Encode review (replace word with integers)
tmp = []
for word in review.split(" "):
tmp.append(word_index[word])
# Ensure review is 500 words long (by padding or truncating)
tmp_padded = sequence.pad_sequences([tmp], maxlen=review_length)
# Run your processed review against the trained model
rawprediction = model.predict(array([tmp_padded][0]))[0][0]
prediction = int(round(rawprediction))
# Test the model and print the result
print("Review: " + review)
print("Raw Prediction: " + str(rawprediction))
print("Predicted Class: " + class_names[prediction])
"""
Explanation: Run your own text against the trained model
This is a fun way to test out the limits of the trained model. To avoid getting errors - type in lower case only and do not use punctuation!
You'll see the raw prediction from the model - basically a value between 0 and 1.
End of explanation
"""
|
yy/dviz-course | m10-logscale/m10-lab.ipynb | mit | import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
import scipy.stats as ss
import vega_datasets
"""
Explanation: Module 10: Logscale
End of explanation
"""
x = np.array([1, 1, 1, 1, 10, 100, 1000])
y = np.array([1000, 100, 10, 1, 1, 1, 1 ])
ratio = x/y
print(ratio)
"""
Explanation: Ratio and logarithm
If you use linear scale to visualize ratios, it can be quite misleading.
Let's first create some ratios.
End of explanation
"""
X = np.arange(len(ratio))
# Implement
"""
Explanation: Q: Plot on the linear scale using the scatter() function. Also draw a horizontal line at ratio=1 for a reference.
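One possible answer sketch (the axis labels are just suggestions):
plt.scatter(X, ratio)
plt.axhline(y=1, color='gray', linestyle='--')   # reference line at ratio = 1
plt.xlabel('Index')
plt.ylabel('Ratio')
plt.show()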
End of explanation
"""
# Implement
"""
Explanation: Q: Is this a good visualization of the ratio data? Why? Why not? Explain.
Q: Can you fix it?
End of explanation
"""
# TODO: Implement the functionality mentioned above
# The following code is just a dummy. You should load the correct dataset from vega_datasets package.
movies = pd.DataFrame({"Worldwide_Gross": np.random.sample(200), "IMDB_Rating": np.random.sample(200)})
"""
Explanation: Log-binning
Let's first see what happens if we do not use the log scale for a dataset with a heavy tail.
Q: Load the movie dataset from vega_datasets and remove the NaN rows based on the following three columns: IMDB_Rating, IMDB_Votes, Rotten_Tomatoes_Rating.
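A sketch of one way to do this with the vega_datasets package imported above:
movies = vega_datasets.data.movies()
movies = movies.dropna(subset=['IMDB_Rating', 'IMDB_Votes', 'Rotten_Tomatoes_Rating'])
print(movies.shape)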
End of explanation
"""
# Implement
"""
Explanation: If you simply call the hist() method on a dataframe object, it identifies all the numeric columns and draws a histogram for each.
Q: draw all possible histograms of the movie dataframe. Adjust the size of the plots if needed.
End of explanation
"""
ax = movies["Worldwide_Gross"].hist(bins=200)
ax.set_xlabel("World wide gross")
ax.set_ylabel("Frequency")
"""
Explanation: As we can see, a majority of the columns are not normally distributed. In particular, if you look at the worldwide gross variable, you only see a couple of meaningful data points in the histogram. Is this a problem of resolution? How about increasing the number of bins?
End of explanation
"""
ax = movies["Worldwide_Gross"].hist(bins=200)
ax.set_yscale('log')
ax.set_xlabel("World wide gross")
ax.set_ylabel("Frequency")
"""
Explanation: Maybe a bit more useful, but it doesn't tell us anything about the data distribution above a certain point. How about changing the vertical scale to a logarithmic scale?
End of explanation
"""
movies["IMDB_Rating"].hist(bins=range(0,11))
"""
Explanation: Now, let's try log-bins. Recall that when plotting histograms we can specify the edges of bins through the bins parameter. For example, we can specify the edges of bins to [1, 2, 3, ... , 10] as follows.
End of explanation
"""
min(movies["Worldwide_Gross"])
"""
Explanation: Here, we can specify the edges of bins in a similar way. Instead of specifying on the linear scale, we do it on the log space. Some useful resources:
Google query: python log-bin
numpy.logspace
numpy.linspace vs numpy.logspace
Hint: since $10^{\text{start}} = \text{min(Worldwide_Gross)}$, $\text{start} = \log_{10}(\text{min(Worldwide_Gross)})$
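A sketch of the log-spaced bin edges (this assumes the +1 shift applied in the next cell so that the minimum is at least 1; the helper name shifted is just for illustration):
shifted = movies["Worldwide_Gross"] + 1.0
bins = np.logspace(np.log10(shifted.min()), np.log10(shifted.max()), num=20)  # 20 edges give 19 bins; use 21 for exactly 20 bins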
End of explanation
"""
movies["Worldwide_Gross"] = movies["Worldwide_Gross"]+1.0
# TODO: Replace the dummy value of bins using np.logspace.
# Create 20 bins that cover the whole range of the dataset.
bins = [1.0, 2.0, 4.0]
bins
"""
Explanation: Because there seems to be movie(s) that made $0, and because log(0) is undefined & log(1) = 0, let's add 1 to the variable.
End of explanation
"""
ax = movies["Worldwide_Gross"].hist(bins=bins)
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlabel("World wide gross")
ax.set_ylabel("Frequency")
"""
Explanation: Now we can plot a histogram with log-bins. Set both axes to log scale.
End of explanation
"""
# Implement
"""
Explanation: What is going on? Is this the right plot?
Q: explain and fix
End of explanation
"""
# TODO: Implement functionality mentioned above
# You must replace the dummy values with the correct code.
worldgross_sorted = np.random.sample(200)
Y = np.random.sample(200)
"""
Explanation: Q: Can you explain the plot? Why are there gaps?
CCDF
CCDF is a nice alternative to examine distributions with heavy tails. The idea is the same as the CDF, but the direction of aggregation is opposite. For a given value x, CCDF(x) is the number (fraction) of data points that are equal to or larger than x. To write code to draw a CCDF, it'll be helpful to draw it by hand using a very small, toy dataset. Draw it by hand and then think about how each point in the CCDF plot can be computed.
Q: Draw a CCDF of worldwide gross data in log-log scale
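A sketch of one way to build the CCDF: sort the values, and for the i-th smallest value the fraction of points at least that large is 1 - i/n.
worldgross_sorted = np.sort(movies["Worldwide_Gross"])
Y = 1.0 - np.arange(len(worldgross_sorted)) / len(worldgross_sorted)
plt.plot(worldgross_sorted, Y)
plt.xscale('log')
plt.yscale('log')
plt.xlabel("World wide gross")
plt.ylabel("CCDF")
plt.show()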
End of explanation
"""
plt.xlabel("World wide gross")
plt.ylabel("CCDF")
plt.plot(worldgross_sorted,Y)
plt.yscale('log')
"""
Explanation: We can also try in semilog scale (only one axis is in a log-scale), where the horizontal axis is linear.
End of explanation
"""
# Implement
"""
Explanation: A straight line in semilog scale means exponential decay (cf. a straight line in log-log scale means power-law decay). So it seems like the amount of money a movie makes across the world roughly follows an exponential distribution, while there are some outliers that make an insane amount of money.
Q: Which is the most successful movie in our dataset?
You can use the following
idxmax(): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html
loc: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html
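A sketch using the two functions linked above (top_index is just an illustrative name):
top_index = movies["Worldwide_Gross"].idxmax()   # row label of the maximum gross
movies.loc[top_index]                            # full record of that movie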
End of explanation
"""
|
peterwittek/qml-rg | Archiv_Session_Spring_2017/Exercises/11_Markov_random_field.ipynb | gpl-3.0 | from skimage import io
from skimage.transform import resize
from functools import reduce # To do multiple-argument multiplications
import numpy as np
from numpy.linalg import norm
import matplotlib.pyplot as plt
"""
Explanation: QML - RG Homework 11: Markov Random Fields
Alejandro Pozas-Kerstjens
End of explanation
"""
einstein = io.imread('einstein.png')
einstein = einstein / einstein.max()
size = 3 # Choosing bigger sizes gives problems when computing the partition function
einstein = resize(einstein, (size, size), mode='constant')
def binarize(pixel):
if pixel > 0.5:
return 1
elif pixel <= 0.5:
return -1
vfunc = np.vectorize(binarize)
einst = vfunc(einstein)
plt.imshow(einst)
plt.show()
training_set = np.array([einst])
"""
Explanation: Pre-processing: Loading, resizing, normalizing and binarizing the image
End of explanation
"""
hpos = np.zeros(training_set[0].shape)
hpos = 1 / len(training_set) * np.sum(training_set, axis=0)
neighbors = 3 # Number of sites we are going to consider as nearest neighbors
Jpos = np.zeros((size, size , neighbors, neighbors))
for i in range(size):
for j in range(size):
for k in range(neighbors):
for l in range(neighbors):
if ((i + k > size - 1) | (j + l > size - 1)) | ((k == 0) & (l == 0)):
                    pass  # Condition to avoid correlations between top and bottom rows, or left and right columns
else:
Jpos[i][j][k][l] = np.sum(training_set[:, i + k, j + l]
* training_set[:, i, j], axis=0) / len(training_set)
"""
Explanation: Compute positive phase terms (constant for every iteration)
End of explanation
"""
# Initialize parameters
htrained = 2 * np.random.rand(size, size) - np.ones(training_set[0].shape)
Jtrained = np.zeros(Jpos.shape)
Jtrained[abs(Jpos) > 0] = 2 * np.random.rand() - 1 # Funny way to initialize only relevant cells in J
def potential(z, h): # Potential energy of a configuration
return np.sum(np.multiply(h, z))
def interactions(z, J): # Interaction energy of a configuration
return np.sum(np.array([J[i][j][k][l] * z[i][j] * z[i + k][j + l] for i in range(size)
for j in range(size) for k in range(size - i) for l in range(size - j)]))
def P(z, h, J, norm): # Thermal distribution of configurations with temperature=1
return np.exp(-np.sum(potential(z, h)) - np.sum(interactions(z, J))) / norm
def mse(a, b): # Mean squared error (used during training to assess convergence)
return ((a - b) ** 2).mean()
# Generate all possible configurations of spins for a given size
m = size ** 2
d = np.array(range(2 ** m))
allconfs = (((d[:,None] & (1 << np.arange(m)))) > 0).astype(int).reshape(2 ** m, size, size)
allconfs = 2 * allconfs - np.ones(allconfs.shape) # Change 0s by -1s
"""
Explanation: NOTE: The structure of J is funny. The first two labels $(i,j)$ denote each spin. The second two, $(k,l)$, denote the neighbors of the corresponding spin, i.e., spin $(i+k,j+l)$. Self-energies $J_{i,j,0,0}$ are not taken into account. Only neighbors to the right and down need to be stored due to the symmetry of the interaction.
Functions needed for computing the negative phase terms
End of explanation
"""
hpre = np.zeros(htrained.shape)
Jpre = np.zeros(Jtrained.shape)
a = 1 # For debugging and iterations counting
ϵ = 10 ** (-8) # Precision of results
while (mse(hpre, htrained) > ϵ) | (mse(Jpre, Jtrained) > ϵ):
hpre = htrained
Jpre = Jtrained
norm = np.sum([np.exp(-np.sum(potential(conf, htrained)) - np.sum(interactions(conf, Jtrained))) for conf in allconfs])
# Compute negative phase terms
hneg = np.zeros(training_set[0].shape)
for i in range(size):
for j in range(size):
hneg[i][j] = (np.sum([P(z, htrained, Jtrained, norm) for z in allconfs[allconfs[:, i, j]==1]])
- np.sum([P(z, htrained, Jtrained, norm) for z in allconfs[allconfs[:, i, j]==-1]]))
Jneg = np.zeros(Jpos.shape)
for i in range(size):
for j in range(size):
for k in range(neighbors):
for l in range(neighbors):
if ((i + k > size - 1) | (j + l > size - 1)) | ((k == 0) & (l == 0)):
                    pass
else:
Jneg[i][j][k][l] = (np.sum([P(z, htrained, Jtrained, norm) for z in allconfs[(allconfs[:, i, j]==1) & (allconfs[:, i + k, j + l]==1)]])
+ np.sum([P(z, htrained, Jtrained, norm) for z in allconfs[(allconfs[:, i, j]==-1) & (allconfs[:, i + k, j + l]==-1)]])
- np.sum([P(z, htrained, Jtrained, norm) for z in allconfs[(allconfs[:, i, j]==-1) & (allconfs[:, i + k, j + l]==1)]])
- np.sum([P(z, htrained, Jtrained, norm) for z in allconfs[(allconfs[:, i, j]==1) & (allconfs[:, i + k, j + l]==-1)]]))
# Update parameters
htrained = htrained + hpos - hneg
Jtrained = Jtrained + Jpos - Jneg
# Idea to keep the parameters in [-1, 1].
# If there is a parameter outside [-1, 1], normalize all by the highest value
if any(abs(x) > 1 for x in np.ndarray.flatten(htrained)) | any(abs(x) > 1 for x in np.ndarray.flatten(Jtrained)):
hm = abs(htrained).max()
Jm = abs(Jtrained).max()
mx = np.array([hm, Jm]).max()
htrained = htrained / mx
Jtrained = Jtrained / mx
# Have a check that everything is running (in a fancy way :P)
print("Iterations done: %i" % a, end='\r')
a += 1
print("\nTraining complete")
"""
Explanation: Training
End of explanation
"""
## Given some partial image, we are going to choose the most probable configuration
keep = [0] # Rows that we want to keep from the original image
norm = np.sum([np.exp(-np.sum(potential(conf, htrained)) - np.sum(interactions(conf, Jtrained))) for conf in allconfs])
possible = np.array([[z, P(z, htrained, Jtrained, norm)]
for z in allconfs[reduce(np.multiply,[np.prod(allconfs[:, i]==einst[i], axis=1) for i in keep]).astype(bool)]])
tru = possible[:, 0][np.argmax(possible[:,1])]
plt.imshow(tru)
plt.show()
"""
Explanation: Sampling
End of explanation
"""
|
nwjs/chromium.src | third_party/tensorflow-text/src/docs/tutorials/text_classification_rnn.ipynb | bsd-3-clause | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
"""
Explanation: Text classification with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/text_classification_rnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This text classification tutorial trains a recurrent neural network on the IMDB large movie review dataset for sentiment analysis.
Setup
End of explanation
"""
import matplotlib.pyplot as plt
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])
"""
Explanation: Import matplotlib and create a helper function to plot graphs:
End of explanation
"""
dataset, info = tfds.load('imdb_reviews', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
train_dataset.element_spec
"""
Explanation: Setup input pipeline
The IMDB large movie review dataset is a binary classification dataset: all the reviews have either a positive or negative sentiment.
Download the dataset using TFDS. See the loading text tutorial for details on how to load this sort of data manually.
End of explanation
"""
for example, label in train_dataset.take(1):
print('text: ', example.numpy())
print('label: ', label.numpy())
"""
Explanation: Initially this returns a dataset of (text, label) pairs:
End of explanation
"""
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
test_dataset = test_dataset.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
for example, label in train_dataset.take(1):
print('texts: ', example.numpy()[:3])
print()
print('labels: ', label.numpy()[:3])
"""
Explanation: Next shuffle the data for training and create batches of these (text, label) pairs:
End of explanation
"""
VOCAB_SIZE = 1000
encoder = tf.keras.layers.TextVectorization(
max_tokens=VOCAB_SIZE)
encoder.adapt(train_dataset.map(lambda text, label: text))
"""
Explanation: Create the text encoder
The raw text loaded by tfds needs to be processed before it can be used in a model. The simplest way to process text for training is using the TextVectorization layer. This layer has many capabilities, but this tutorial sticks to the default behavior.
Create the layer, and pass the dataset's text to the layer's .adapt method:
End of explanation
"""
vocab = np.array(encoder.get_vocabulary())
vocab[:20]
"""
Explanation: The .adapt method sets the layer's vocabulary. Here are the first 20 tokens. After the padding and unknown tokens they're sorted by frequency:
End of explanation
"""
encoded_example = encoder(example)[:3].numpy()
encoded_example
"""
Explanation: Once the vocabulary is set, the layer can encode text into indices. The tensors of indices are 0-padded to the longest sequence in the batch (unless you set a fixed output_sequence_length):
End of explanation
"""
for n in range(3):
print("Original: ", example[n].numpy())
print("Round-trip: ", " ".join(vocab[encoded_example[n]]))
print()
"""
Explanation: With the default settings, the process is not completely reversible. There are two main reasons for that:
The default value for preprocessing.TextVectorization's standardize argument is "lower_and_strip_punctuation".
The limited vocabulary size and lack of character-based fallback results in some unknown tokens.
End of explanation
"""
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1)
])
"""
Explanation: Create the model
Above is a diagram of the model.
This model can be built as a tf.keras.Sequential.
The first layer is the encoder, which converts the text to a sequence of token indices.
After the encoder is an embedding layer. An embedding layer stores one vector per word. When called, it converts the sequences of word indices to sequences of vectors. These vectors are trainable. After training (on enough data), words with similar meanings often have similar vectors.
This index-lookup is much more efficient than the equivalent operation of passing a one-hot encoded vector through a tf.keras.layers.Dense layer.
A recurrent neural network (RNN) processes sequence input by iterating through the elements. RNNs pass the outputs from one timestep to their input on the next timestep.
The tf.keras.layers.Bidirectional wrapper can also be used with an RNN layer. This propagates the input forward and backwards through the RNN layer and then concatenates the final output.
The main advantage of a bidirectional RNN is that the signal from the beginning of the input doesn't need to be processed all the way through every timestep to affect the output.
The main disadvantage of a bidirectional RNN is that you can't efficiently stream predictions as words are being added to the end.
After the RNN has converted the sequence to a single vector, the two layers.Dense layers do some final processing and convert this vector representation to a single logit as the classification output.
The code to implement this is below:
End of explanation
"""
print([layer.supports_masking for layer in model.layers])
"""
Explanation: Please note that a Keras sequential model is used here since all the layers in the model only have a single input and produce a single output. In case you want to use a stateful RNN layer, you might want to build your model with the Keras functional API or model subclassing so that you can retrieve and reuse the RNN layer states. Please check the Keras RNN guide for more details.
The embedding layer uses masking to handle the varying sequence-lengths. All the layers after the Embedding support masking:
End of explanation
"""
# predict on a sample text without padding.
sample_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = model.predict(np.array([sample_text]))
print(predictions[0])
"""
Explanation: To confirm that this works as expected, evaluate a sentence twice. First, alone so there's no padding to mask:
End of explanation
"""
# predict on a sample text with padding
padding = "the " * 2000
predictions = model.predict(np.array([sample_text, padding]))
print(predictions[0])
"""
Explanation: Now, evaluate it again in a batch with a longer sentence. The result should be identical:
End of explanation
"""
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
"""
Explanation: Compile the Keras model to configure the training process:
End of explanation
"""
history = model.fit(train_dataset, epochs=10,
validation_data=test_dataset,
validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss:', test_loss)
print('Test Accuracy:', test_acc)
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plot_graphs(history, 'accuracy')
plt.ylim(None, 1)
plt.subplot(1, 2, 2)
plot_graphs(history, 'loss')
plt.ylim(0, None)
"""
Explanation: Train the model
End of explanation
"""
sample_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = model.predict(np.array([sample_text]))
"""
Explanation: Run a prediction on a new sentence:
If the prediction is >= 0.0, it is positive; otherwise it is negative.
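As a small illustration (not part of the original code cells), the raw logit can be turned into a label, or into a probability with a sigmoid:
```python
# Hedged sketch: interpret the logit produced above.
label = 'positive' if predictions[0][0] >= 0.0 else 'negative'
probability = tf.sigmoid(predictions[0][0])  # maps the logit into [0, 1]
print(label, float(probability))
```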
End of explanation
"""
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(len(encoder.get_vocabulary()), 64, mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
history = model.fit(train_dataset, epochs=10,
validation_data=test_dataset,
validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss:', test_loss)
print('Test Accuracy:', test_acc)
# predict on a sample text without padding.
sample_text = ('The movie was not good. The animation and the graphics '
'were terrible. I would not recommend this movie.')
predictions = model.predict(np.array([sample_text]))
print(predictions)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plot_graphs(history, 'accuracy')
plt.subplot(1, 2, 2)
plot_graphs(history, 'loss')
"""
Explanation: Stack two or more LSTM layers
Keras recurrent layers have two available modes that are controlled by the return_sequences constructor argument:
If False it returns only the last output for each input sequence (a 2D tensor of shape (batch_size, output_features)). This is the default, used in the previous model.
If True the full sequences of successive outputs for each timestep is returned (a 3D tensor of shape (batch_size, timesteps, output_features)).
Here is what the flow of information looks like with return_sequences=True:
The interesting thing about using an RNN with return_sequences=True is that the output still has 3-axes, like the input, so it can be passed to another RNN layer, like this:
End of explanation
"""
|
SeismicPi/SeismicPi | Lessons/Lesson 2/Lesson 2.ipynb | mit | one_to_ten = [1,2,3,4,5,6,7,8,9,10]
print one_to_ten
"""
Explanation: Lesson 2
Analog to Digital
This lesson will cover how to convert analog values to digital values, how to log data and view the data over a time period.
If you remember from the last lesson, a lot of sensors are analog, meaning they can output values from a range. We also mentioned how it was troublesome because a lot of devices (such as the raspberry pi) can only read digital values, so they would not be able to directly get information from the analog sensors. In order to combat this problem, we must convert the analog values into digital values via an ADC (analog to digital converter).
In the last lesson, it was mentioned that digital values can only take on the values of $0$ or $1$. I lied. Kind of. It wasn't a lie that digital values can only be 0 or 1, however, this only applies to a single bit. But when we put more than one bit together, the different combinations of 0 and 1 may produce many unique outputs.
For example: One bit by itself can be: 0 or 1.
When we put two bits together we have four different combinations!
| Second Bit | First Bit |
|------------|-----------|
| 0 | 0 |
| 0 | 1 |
| 1 | 0 |
| 1 | 1 |
Create a table by hand for all the bit combinations of three bits. How many patterns could you find? Is there a pattern emerging for how many values $n$ number of bits can create?
By now you may notice that every time we add a bit, the number of patterns we can create is double the amount of patterns we could have created without the extra added bit, e.g. one bit has two patterns, two bits have four patterns, three bits have eight patterns, etc. In general $n$ bits can take on $2^n$ patterns.
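If you want to check your table, the small sketch below (it is not part of the lesson's board code) lists every pattern of $n$ bits:
```python
# Enumerate every bit pattern of length n.
from itertools import product
n = 3
patterns = ["".join(bits) for bits in product("01", repeat=n)]
print(patterns)  # 8 patterns, since 2**3 = 8
```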
But how can this be used to help represent analog values? Well, even though we can't represent all the values an analog sensor can output, we can represent a lot of them if we have enough bits. For example, let's say an analog signal can produce a value from 0 to 3. If we had one bit, "0" would represent 0 and "1" would represent 3. However, if the analog signal gave us the value of 1, we wouldn't know how to represent it, and if we did, there would be a really large error. But if we had two bits, we would have four values to represent it with. So $00$ could represent $0$, $01$ could represent $1$, $10$ would represent $2$ and $11$ would represent $3$. NOTE THAT $00, 01, 10, 11$ DO NOT CORRESPOND TO ZERO, ONE, TEN, ELEVEN. So now we see we can represent four of the values from 0 to 3. If we have more and more bits, we will be able to represent more and more values, and when given an analog value, we can choose the digital value that is closest to it. So an ADC takes in an analog value and returns the digital value that best represents it.
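Here is a rough sketch of that quantization step (the range 0 to 3 and the value 1.2 are just the examples used in this lesson):
```python
# Map an analog value in [v_min, v_max] onto the nearest of the 2**n_bits levels.
def quantize(value, v_min, v_max, n_bits):
    levels = 2 ** n_bits - 1
    step = (v_max - v_min) / float(levels)
    return int(round((value - v_min) / step))
print(quantize(1.2, 0.0, 3.0, 2))  # 1, i.e. the bit pattern "01"
```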
The graph below shows an analog signal. Given a three-bit resolution, can you give the digital value at certain time points? There are eight horizontal lines (since $2^3 = 8$), each representing a bit pattern, spaced across the range of values the analog signal can take.
<img src="ADC.png" alt="Drawing" style="width: 750px;"/>
An example is worked through below. I want to know what the digital value of the signal is at time $1.2$. I first find the point on the analog signal that corresponds with $1.2$ on the $x$-axis.
<img src="adc_example1.png" alt="Drawing" style="width: 750px;"/>
<img src="adc_example2.png" alt="Drawing" style="width: 750px;"/>
We then find the horizontal line that is closest to that point; in this case it's the uppermost one.
<img src="adc_example3.png" alt="Drawing" style="width: 750px;"/>
Thus, the digital value that is returned is the bit pattern that corresponds with this line. In this case it's $111$.
Find the bit pattern that is returned at times $0.2$, $0.4$, $0.8$, $1.6$ and $1.8$.
So it should be clear by now that as we have more and more bits, we can be more and more accurate with our estimations. So how come we don't always use one hundred, one thousand, or one million bits to estimate our signal? Although more bits have their benefits, they also have their tradeoffs. Since we have to represent all of these bits in memory, if we have a really large bit resolution, each reading will take up more memory.
Since the raspberry pi cannot directly read in analog signals, we first process the analog signals through a $24$ bit resolution ADC, and then the pi reads the digital value from the ADC.
Data Logging
Now we know how to read data from all types of sensors, how do we use it? The first thing that we want to do before anything else is to log the data. Data logging is the collection of data over a period of time, and then analyzing the patterns in the data. We will explore how we can do this in python. Firstly we will introduce a data structure that can be used to remember and store our data. This data structure is called the list. They are essentially what they sound like, a list of items in order. The example below shows the list of integers from 1 to 10. We can print lists like we did with other entities before. In the example below, we are binding the list to a variable called one_to_ten.
End of explanation
"""
one_to_ten = []
one_to_ten.append(1)
print one_to_ten
one_to_ten.append(2)
print one_to_ten
one_to_ten.append(3)
print one_to_ten
one_to_ten.append(4)
print one_to_ten
one_to_ten.append(5)
print one_to_ten
one_to_ten.append(6)
print one_to_ten
one_to_ten.append(7)
print one_to_ten
one_to_ten.append(8)
print one_to_ten
one_to_ten.append(9)
print one_to_ten
one_to_ten.append(10)
print one_to_ten
"""
Explanation: This is great, because now we can remember values that our sensors returned at specific instances of time. There are many operations we can use to modify a list. One important one that we will use here is the append operation. Essentially, what it lets us do is add an element to the end of the list. An example is shown below. one_to_ten starts out empty, and we want to see how we can fill it up. Initially we bind one_to_ten to an empty list, which is denoted by [].
End of explanation
"""
i = 1
while(i <= 10):
print i
i = i+1
"""
Explanation: We can see that every time we called one_to_ten.append(x), we appended x to the list. For more operations we can perform on lists, see the python documentation here.
But looking at the code above, this is quite cumbersome. It involves a lot of typing and a lot of code. Programmers are lazy, so they invented a nice little way to let us not write as much code by using loops. Loops are a construct that repeats a section of code a certain number of times, as defined by the programmer. One example is shown below: the while loop.
End of explanation
"""
i = 1;
while(i <= 10):
print i
i = i + 3
"""
Explanation: Let me explain what the code is doing. If you remember the if/else paradigm from the previous lesson, you'll remember that the if statement takes in a boolean value, and runs the code in the if block. A while loop also does this, except when it finishes running the code, it returns back to the condition statement, and if it is still true, runs the code again, whereas an if statement will just continue. In the code above, initially i is equal to 1, which we print. We then increment i by 1, so now i = 2. Since 2 <= 10, we run the code again, and print i, then increment. We keep doing this until i is incremented to 11, at which point 11 <= 10 is not true, so the while loop terminates. Note, it is important that we increment i every time, or else the condition statement will always be true.
```python
i = 1
while(i <= 10):
print i
```
Let's step through the code above. We initialize i to be 1. We check that 1 <= 10, so we print i; we then return to the conditional statement, but it has not changed and i is still equal to 1, so we print i again. You can see that this loop is never going to exit! This is known as an infinite loop and we generally try to avoid it. One important thing to remember is not to forget to increment our variable!
However, we don't necessarily have to increment by 1. We can also increment by 3. See the example below. Try to see why it produces the output.
End of explanation
"""
#WRITE YOUR CODE HERE
#SOlUTION
i = 2
while(i <= 20):
print i
i = i+2
"""
Explanation: In the block below, write some code, using a while loop, that will print out every even number, starting from 2 and ending with 20. It should print out
2
4
6
8
10
12
14
16
18
20
End of explanation
"""
#WRITE YOUR CODE HERE
one_to_ten = []
#SOLUTION
i = 1
while(i <= 10):
one_to_ten.append(i)
i = i+1
print one_to_ten
"""
Explanation: Now we can easily create the list one_to_ten by using a loop! Try to see if you can make one_to_ten by using append and a while loop below.
End of explanation
"""
#WRITE YOUR CODE BELOW
import time
#import board
temperatures = []
#SOLUTION
i = 0
while(i <= 5):
#temperatures.append(board.getTemperature)
i = i+1
times.
"""
Explanation: Now we can see how we use these tools to log data from our sensors. Let's log the temperature of the room over 5 seconds.
We can get the current temperature by calling board.getTemperature() and appending it to a list. Say we want to take 5 samples over the 5 seconds. This means that we take a sample every second. So we want to wait a second between additions to our list that remembers the previous temperatures. This can be done by calling time.sleep(1), which basically just tells the computer to wait 1 second before continuing. See if you can implement this data logger below.
End of explanation
"""
import matplotlib.pyplot as plot
plot.plot(temperatures)
plot.show()
"""
Explanation: Congratulations, you've made your first data logger! We could easily modify this code to take temperatures over an entire day, or a week, or a month, or even a year! Now we have to discuss a concept called sample rate. Simply put, the sample rate is how many samples we take per second. In the example above we had a sample rate of 1, because we took one sample every second. If we had taken 10 samples over 5 seconds, we would have had a 0.5 second delay between samples and a sample rate of 2, because we would have taken 2 samples every single second. In general we can see that $\text{sample rate} = \frac{\text{total samples}}{\text{total time}}$. A higher sample rate is generally good because it gives us more data points and finer detail between readings. If we have too low of a sample rate we won't be able to see any patterns in how the temperature changes; it'll just tell us what the temperature is at certain times.
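Below is a rough sketch of the same logger written in terms of a chosen sample rate; the board.getTemperature() call is the same hypothetical sensor call as before, so it is left commented out:
```python
import time
#import board
total_time = 5           # seconds to log for
sample_rate = 2          # samples per second
delay = 1.0 / sample_rate
temperatures = []
i = 0
while(i < total_time * sample_rate):
    #temperatures.append(board.getTemperature())
    time.sleep(delay)
    i = i + 1
```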
Once we have a list of our values, we can visualize it through python's plotter. The code below is written to view the temperature.
End of explanation
"""
|
bgroveben/python3_machine_learning_projects | learn_kaggle/machine_learning/data_leakage.ipynb | mit | import pandas as pd
data = pd.read_csv('input/credit_card_data.csv', true_values=['yes'], false_values=['no'])
data.head()
data.shape
"""
Explanation: Data Leakage
What is it?
Data leakage is one of the most important issues for a data scientist to understand.
If you don't know how to prevent it, leakage will come up frequently, and it will ruin your models in the most subtle and dangerous ways.
Specifically, leakage causes a model to look accurate until you start making decisions with the model, and then the model becomes very inaccurate.
This tutorial will show you what leakage is and how to avoid it.
There are two main types of leakage: Leaky Predictors and Leaky Validation Strategies.
Leaky Predictors
This occurs when your predictors include data that will not be available at the time you make your predictions.
For example, imagine that you want to predict who will catch pneumonia.
The first few rows of your raw data might look like this:
People take antibiotic medicines after getting pneumonia in order to recover.
So the raw data shows a strong relationship between those columns.
But took_antibiotic_medicine is frequently changed after the value for got_pneumonia is determined.
This is target leakage.
The model would see that anyone who has a value of False for took_antibiotic_medicine didn't have pneumonia.
Validation data comes from the same source, so the pattern will repeat itself in validation, and the model will have great validation (or cross-validation) scores.
However, the model will be less accurate when subsequently deployed in the real world.
To prevent this type of data leakage, any variable updated (or created) after the target value is realized should be excluded.
Because when we use this model to make new predictions, that data won't be available to the model.
Leaky Validation Strategies
A much different type of leak occurs when you aren't careful distinguishing training data from validation data.
For example, this happens if you run preprocessing (like fitting the Imputer for missing values) before calling train_test_split.
Validation is meant to be a measure of how the model does on data it hasn't considered before.
You can corrupt this process in subtle ways if the validation data affects the preprocessing behavior.
Your model will get very good validation scores, giving you great confidence in it, but perform poorly when you deploy it to make decisions.
Preventing Leaky Predictors
There is no single solution that universally prevents leaky predictors.
That being said, there are a few common strategies you can use.
Leaky predictors frequently have high statistical correlations to the target.
To screen for possible leaks, look for columns that are strongly correlated to your target.
If you then build your model and the results are very accurate, then there is a good chance of a leakage problem.
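For instance, a quick screen of the credit card data loaded above might look like this (a sketch only; what counts as "strongly correlated" is a judgment call):
```python
# List the features most correlated (in absolute value) with the target.
correlations = data.corr()['card'].drop('card').abs().sort_values(ascending=False)
print(correlations.head())
```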
Preventing Leaky Validation Strategies
If your validation is based on a simple train-test split, exclude the validation data from any type of fitting, including the fitting of preprocessing steps.
This is another place where scikit-learn pipelines make themselves useful.
When using cross-validation, it's very helpful to use pipelines and do your preprocessing inside the pipeline.
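Here is a sketch of that pattern, defining X and y the same way a later cell does. The SimpleImputer is purely illustrative (this credit card data has no missing values), and in older scikit-learn versions the equivalent class is sklearn.preprocessing.Imputer:
```python
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Preprocessing lives inside the pipeline, so each cross-validation fold fits
# the imputer only on that fold's training portion and nothing leaks from the
# held-out fold into the preprocessing step.
y = data.card
X = data.drop(['card'], axis=1)
pipeline_with_preprocessing = make_pipeline(SimpleImputer(), RandomForestClassifier())
scores = cross_val_score(pipeline_with_preprocessing, X, y, scoring='accuracy')
print("Cross-validation accuracy:", scores.mean())
```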
Now for the code:
We will use a small dataset about credit card applications, and we will build a model predicting which applications were accepted (stored in a variable called card).
End of explanation
"""
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
y = data.card
X =data.drop(['card'], axis=1)
# Using a pipeline is best practice, so it's included here even though
# the absence of preprocessing makes it unnecessary.
modeling_pipeline = make_pipeline(RandomForestClassifier())
cv_scores = cross_val_score(modeling_pipeline, X, y, scoring='accuracy')
print("Cross-validation accuracy: ")
print(cv_scores.mean())
"""
Explanation: This can be considered a small dataset, so we'll use cross-validation to ensure accurate measures of model quality.
End of explanation
"""
expenditures_cardholders = data.expenditure[data.card]
expenditures_not_cardholders = data.expenditure[~data.card]
((expenditures_cardholders == 0).mean())
((expenditures_not_cardholders == 0).mean())
"""
Explanation: With experience, you'll find that it's very rare to find models that are accurate 98% of the time.
It happens, but it's rare enough that we should inspect the data more closely to see if it is target leakage.
Here is a summary of the data:
* card: Dummy variable, 1 if application for credit card accepted, 0 if not
* reports: Number of major derogatory reports
* age: Age in years plus twelfths of a year
* income: Yearly income (divided by 10,000)
* share: Ratio of monthly credit card expenditure to yearly income
* expenditure: Average monthly credit card expenditure
* owner: 1 if owns their home, 0 if rent
* selfempl: 1 if self employed, 0 if not
* dependents: 1 + number of dependents
* months: Months living at current address
* majorcards: Number of major credit cards held
* active: Number of active credit accounts
A few variables look suspicious. For example, does expenditure mean expenditure on this card or on cards used before applying?
At this point, basic data comparisons can be very helpful:
End of explanation
"""
potential_leaks = ['expenditure', 'share', 'active', 'majorcards']
X2 = X.drop(potential_leaks, axis=1)
cv_scores = cross_val_score(modeling_pipeline, X2, y, scoring='accuracy')
cv_scores.mean()
"""
Explanation: Everyone with card == False had no expenditures, while only 2% of those with card == True had no expenditures.
It's not surprising that our model appeared to have a high accuracy.
But this seems to be a data leak, where expenditure probably means expenditure on the card they applied for.
Since share is partially determined by expenditure, it should be excluded too.
The variables active and majorcards are a little less clear, but from the description, they may be affected.
In most situations, it's better to be safe than sorry if you can't track down the people who created the data to find out more.
Now that that pitfall has presented itself, it's time to build a model that is more data-leakage resistant:
End of explanation
"""
|
trangel/Data-Science | reinforcement_learning/experience_replay.ipynb | gpl-3.0 | %load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import clear_output
import pandas as pd
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import random
class ReplayBuffer(object):
def __init__(self, size):
"""
Create Replay buffer.
Parameters
----------
size: int
Max number of transitions to store in the buffer. When the buffer
overflows the old memories are dropped.
Note: for this assignment you can pick any data structure you want.
If you want to keep it simple, you can store a list of tuples of (s, a, r, s') in self._storage
However you may find out there are faster and/or more memory-efficient ways to do so.
"""
self._storage = []
self._maxsize = size
# OPTIONAL: YOUR CODE
columns = ['state', 'action', 'reward', 'next_state', 'is_done']
self._storage = pd.DataFrame(columns=columns)
def __len__(self):
return len(self._storage)
def add(self, obs_t, action, reward, obs_tp1, done):
'''
        Make sure _storage will not exceed _maxsize.
        Make sure the FIFO rule is being followed: the oldest examples have to be removed first.
'''
#data = (obs_t, action, reward, obs_tp1, done)
data = {
'state' : obs_t,
'action' : action,
'reward' : reward,
'next_state' : obs_tp1,
'is_done' : done
}
# add data to storage
if len(self._storage) == self._maxsize:
self._storage.drop(0, axis=0, inplace=True)
self._storage.reset_index(drop=True, inplace=True)
self._storage = self._storage.append(data, ignore_index=True)
def sample(self, batch_size):
"""Sample a batch of experiences.
Parameters
----------
batch_size: int
How many transitions to sample.
Returns
-------
obs_batch: np.array
batch of observations
act_batch: np.array
batch of actions executed given obs_batch
rew_batch: np.array
rewards received as results of executing act_batch
next_obs_batch: np.array
next set of observations seen after executing act_batch
done_mask: np.array
done_mask[i] = 1 if executing act_batch[i] resulted in
the end of an episode and 0 otherwise.
"""
idxes = np.random.randint(
low=0,
high=len(self._storage),
size=batch_size
) #<randomly generate batch_size integers to be used as indexes of samples>
r = self._storage.loc[idxes]
# collect <s,a,r,s',done> for each index
states = r.state.values
actions = r.action.values
rewards = r.reward.values
next_states = r.next_state.values
is_done = r.is_done.values
return (states, actions, rewards, next_states, is_done)
"""
Explanation: Honor Track: experience replay
This notebook builds upon qlearning.ipynb, or to be exact, generating qlearning.py.
There's a powerful technique that you can use to improve sample efficiency for off-policy algorithms: [spoiler] Experience replay :)
The catch is that you can train Q-learning and EV-SARSA on <s,a,r,s'> tuples even if they aren't sampled under the current agent's policy. So here's what we're gonna do:
<img src=https://github.com/yandexdataschool/Practical_RL/raw/master/yet_another_week/_resource/exp_replay.png width=480>
Training with experience replay
Play game, sample <s,a,r,s'>.
Update q-values based on <s,a,r,s'>.
Store <s,a,r,s'> transition in a buffer.
If buffer is full, delete earliest data.
Sample K such transitions from that buffer and update q-values based on them.
To enable such training, first we must implement a memory structure that would act like such a buffer.
End of explanation
"""
replay = ReplayBuffer(2)
obj1 = tuple(range(5))
obj2 = tuple(range(5, 10))
replay.add(*obj1)
assert replay.sample(1)==obj1, "If there's just one object in buffer, it must be retrieved by buf.sample(1)"
replay.add(*obj2)
assert len(replay._storage)==2, "Please make sure __len__ methods works as intended."
replay.add(*obj2)
assert len(replay._storage)==2, "When buffer is at max capacity, replace objects instead of adding new ones."
assert tuple(np.unique(a) for a in replay.sample(100))==obj2
replay.add(*obj1)
assert max(len(np.unique(a)) for a in replay.sample(100))==2
replay.add(*obj1)
assert tuple(np.unique(a) for a in replay.sample(100))==obj1
print ("Success!")
"""
Explanation: Some tests to make sure your buffer works right
End of explanation
"""
import gym
from qlearning import QLearningAgent
env = gym.make("Taxi-v2")
n_actions = env.action_space.n
def play_and_train_with_replay(env, agent, replay=None,
t_max=10**4, replay_batch_size=32):
"""
This function should
- run a full game, actions given by agent.getAction(s)
- train agent using agent.update(...) whenever possible
- return total reward
:param replay: ReplayBuffer where agent can store and sample (s,a,r,s',done) tuples.
If None, do not use experience replay
"""
total_reward = 0.0
s = env.reset()
for t in range(t_max):
# get agent to pick action given state s
a = agent.get_action(s) #<YOUR CODE>
next_s, r, done, _ = env.step(a)
# update agent on current transition. Use agent.update
#<YOUR CODE>
agent.update(s, a, r, next_s)
if replay is not None:
# store current <s,a,r,s'> transition in buffer
#<YOUR CODE>
replay.add(s, a, r, next_s, done)
# sample replay_batch_size random transitions from replay,
# then update agent on each of them in a loop
#<YOUR CODE>
(states, actions, rewards, next_states, is_done) = replay.sample(replay_batch_size)
for s_, a_, r_, next_s_ in zip(states, actions, rewards, next_states):
agent.update(s_, a_, r_, next_s_)
s = next_s
total_reward +=r
if done:break
return total_reward
# Create two agents: first will use experience replay, second will not.
agent_baseline = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
agent_replay = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,
get_legal_actions = lambda s: range(n_actions))
replay = ReplayBuffer(1000)
from IPython.display import clear_output
from pandas import DataFrame
moving_average = lambda x, span=100: DataFrame({'x':np.asarray(x)}).x.ewm(span=span).mean().values
rewards_replay, rewards_baseline = [], []
for i in range(1000):
rewards_replay.append(play_and_train_with_replay(env, agent_replay, replay))
rewards_baseline.append(play_and_train_with_replay(env, agent_baseline, replay=None))
agent_replay.epsilon *= 0.99
agent_baseline.epsilon *= 0.99
if i %100 ==0:
clear_output(True)
print('Baseline : eps =', agent_replay.epsilon, 'mean reward =', np.mean(rewards_baseline[-10:]))
print('ExpReplay: eps =', agent_baseline.epsilon, 'mean reward =', np.mean(rewards_replay[-10:]))
plt.plot(moving_average(rewards_replay), label='exp. replay')
plt.plot(moving_average(rewards_baseline), label='baseline')
plt.grid()
plt.legend()
plt.show()
"""
Explanation: Now let's use this buffer to improve training:
End of explanation
"""
from submit import submit_experience_replay
submit_experience_replay(rewards_replay, rewards_baseline, 'tonatiuh_rangel@hotmail.com', 'GWnGSUsbgj3Fcn0B')
"""
Explanation: Submit to Coursera
End of explanation
"""
|
shngli/Data-Mining-Python | Mining massive datasets/algorithms.ipynb | gpl-3.0 | from math import e
"""
Explanation: Generalized BALANCE algorithm
End of explanation
"""
psi = lambda x, f: x * (1 - e ** (-f))
xs = [1, 2, 3]
fs = [0.9, 0.5, 0.6]
print "If a query arrives that is bidded on by A and B"
for i in [0, 1]:
print psi(xs[i], fs[i])
print "If a query arrives that is bidded on by A and C"
for i in [0, 2]:
print psi(xs[i], fs[i])
print "If a query arrives that is bidded on by A and B and C"
for i in [0, 1, 2]:
print psi(xs[i], fs[i])
"""
Explanation: Calculate psi
x: A has bid x for this query
f: Fraction f of the budget of A is currently unspent
End of explanation
"""
import math
"""
Explanation: Bloom Filter
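Before the false-positive formulas below, here is a minimal sketch of the filter itself; the bit-array size and the two hash functions are arbitrary choices for illustration:
```python
# Minimal Bloom filter: an n-bit array plus k hash functions.
class BloomFilter(object):
    def __init__(self, n, hash_funcs):
        self.n = n
        self.bits = [0] * n
        self.hash_funcs = hash_funcs
    def add(self, x):
        for h in self.hash_funcs:
            self.bits[h(x) % self.n] = 1
    def might_contain(self, x):
        # True may be a false positive; False is always correct.
        return all(self.bits[h(x) % self.n] == 1 for h in self.hash_funcs)

bf = BloomFilter(11, [lambda x: 3 * x + 7, lambda x: 5 * x + 1])
bf.add(4)
bf.might_contain(4), bf.might_contain(9)  # (True, False) here; in general True can be a false positive
```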
End of explanation
"""
# hash function: h(x) = (3x + 7) mod 11
def hash(x):
return (3 * x + 7) % 11
for i in range(1, 11):
print bin(hash(i))[2:]
# asymptotic
fp = lambda k, m, n: (1.0 - math.e ** (-1.0 * k * m / n)) ** k
print fp(2, 3, 10)
# tiny bloom filter
fp = lambda k, m, n: (1.0 - (1.0 - (1.0 / n)) ** (k * m)) ** k
print fp(2, 3, 10)
"""
Explanation: Flajolet-Martin algorithm
End of explanation
"""
# Decimal number to binary number
def dec2bin(dec):
return bin(dec)[2:]
# Count the number of trailing zeros
def counttrailingzero(b):
cnt = 0
for i in range(len(b))[::-1]:
if b[i] == '0':
cnt += 1
else:
return cnt
return cnt
# Given R = max r(a), estimate number of distinct elements
def distinctelements(r):
return 2 ** r
print counttrailingzero(dec2bin(10))
print counttrailingzero(dec2bin(12))
print counttrailingzero(dec2bin(4))
print counttrailingzero(dec2bin(16))
print counttrailingzero(dec2bin(1))
"""
Explanation: Flajolet-Martin
End of explanation
"""
import numpy as np
from copy import deepcopy
"""
Explanation: Hits algorithm
End of explanation
"""
def getAdjList(nodes, edges):
nodeMap = {nodes[i] : i for i in range(len(nodes))}
adjList = {i : [] for i in range(len(nodes))}
for u, v in edges:
adjList[nodeMap[u]].append(nodeMap[v])
return adjList
"""
Explanation: Generate adjacency list from nodes and edges
```
nodes = ['yahoo', 'amazon', 'microsoft']
edges = [('yahoo', 'yahoo'), ('yahoo', 'microsoft'), ('yahoo', 'amazon'), ('amazon', 'microsoft'), ('amazon', 'yahoo'), ('microsoft', 'amazon')]
getAdjList(nodes, edges)
{0: [0, 2, 1], 1: [2, 0], 2: [1]}
```
End of explanation
"""
def getA(adjList):
N = len(adjList)
A = np.zeros([N, N])
for u in adjList:
vs = adjList[u]
for v in vs:
A[u, v] = 1
return A
"""
Explanation: Genereate A from adjacency list
```
adjList = {0: [0, 2, 1], 1: [2, 0], 2: [1]}
A = getA(adjList)
```
End of explanation
"""
def hits(A, epsilon=10**-6, numiter=1000):
# initialize
AT = A.T
N = len(A)
aold = np.ones(N) * 1.0 / np.sqrt(N)
hold = np.ones(N) * 1.0 / np.sqrt(N)
for i in range(numiter):
hnew = A.dot(aold)
anew = AT.dot(hnew)
hnew *= np.sqrt(1.0 / sum([v * v for v in hnew]))
anew *= np.sqrt(1.0 / sum([v * v for v in anew]))
if np.sum([v * v for v in anew - aold]) < epsilon or \
np.sum([v * v for v in hnew - hold]) < epsilon:
break
hold = hnew
aold = anew
return hnew, anew
def main():
adjList = {0: [0, 2, 1], 1: [2, 0], 2: [1]}
A = getA(adjList)
print hits(A)
if __name__ == '__main__':
main()
import doctest
doctest.testmod()
"""
Explanation: Hits algorithm
```
adjList = {0: [0, 2, 1], 1: [2, 0], 2: [1]}
A = getA(adjList)
hits(A)
(array([ 0.78875329, 0.57713655, 0.21161674]), array([ 0.62790075, 0.45987097, 0.62790075]))
```
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: LSH Plot
End of explanation
"""
andor = lambda x, r, b: 1 - (1 - x ** r) ** b
orand = lambda x, r, b: (1 - (1 - x) ** b) ** r
cascade = lambda x, r, b: orand(andor(x, r, b), r, b)
print andor(0.2, 3, 4)
def plot():
# Variable Initialization
k = 2
r = k ** 2
b = k ** 2
# AND-OR Construction
x1 = np.arange(0, 1, 0.01)
y1 = andor(x1, r, b)
# OR-AND Construction
x2 = np.arange(0, 1, 0.01)
y2 = orand(x2, r, b)
# Cascade Construction
x3 = np.arange(0, 1, 0.01)
y3 = cascade(x3, k, k)
# Show plot
plt.plot(x1, y1, '-r', x2, y2, '-g', x3, y3, '-b')
plt.grid(True)
plt.legend(('and-or', 'or-and', 'cascade'))
#plt.savefig('lsh.pdf')
plot()
"""
Explanation: Helper functions
End of explanation
"""
import numpy as np
class MinHashing:
def __init__(self, mat, hashfunc):
self.matrix = mat
self.m = len(mat)
self.n = len(mat[0])
self.hashfunc = hashfunc
self.k = len(self.hashfunc)
def minhash(self):
self.sig = np.ones((self.k, self.n)) * 2 ** 10
for j in range(self.m):
for c in range(self.n):
if self.matrix[j][c] == 1:
for i in range(self.k):
if self.hashfunc[i](j) < self.sig[i][c]:
self.sig[i][c] = self.hashfunc[i](j)
def show(self):
print self.sig
if __name__ == '__main__':
hashfunc = [lambda x: (3 * x + 2) % 7, lambda x: (x - 1) % 7]
mat = [[0,1],[1,0],[0,1],[0,0],[1,1],[1,1],[1,0]]
mh = MinHashing(mat, hashfunc)
mh.minhash()
mh.show()
# 2009 final
mat = [[0,0,1],[1,1,1],[0,1,1],[1,0,0],[0,1,0]]
hashfunc = [lambda x: x + 1, lambda x: (x - 1) % 5 + 1, lambda x: (x - 2) % 5 + 1, lambda x : (x - 3) % 5 + 1, lambda x: (x - 4) % 5 + 1]
mh = MinHashing(mat, hashfunc)
mh.minhash()
mh.show()
"""
Explanation: Min-hashing algorithm
End of explanation
"""
import numpy as np
from copy import deepcopy
"""
Explanation: PageRank algorithm
End of explanation
"""
def getAdjList(nodes, edges):
nodeMap = {nodes[i] : i for i in range(len(nodes))}
adjList = {i : [] for i in range(len(nodes))}
for u, v in edges:
adjList[nodeMap[u]].append(nodeMap[v])
return adjList
"""
Explanation: Generate adjacency list from nodes and edges
End of explanation
"""
def getM(adjList):
size = len(adjList)
M = np.zeros([size, size])
for u in adjList:
vs = adjList[u]
n = len(vs)
for v in vs:
M[v, u] = 1.0 / n
return M
"""
Explanation: Generate M from adjacency list
End of explanation
"""
def pageRank(M, beta=0.8, epsilon=10**-6, numiter=1000):
N = len(M)
const = np.ones(N) * (1 - beta) / N
rold = np.ones(N) * 1.0 / N
rnew = np.zeros(N)
for i in range(numiter):
rnew = beta * M.dot(rold)
rnew += const
if np.sum(np.abs(rold - rnew)) < epsilon:
break
rold = rnew
return rnew
"""
Explanation: PageRank
nodes = ['y', 'a', 'm']
edges = [('y', 'y'), ('a', 'm'), ('m', 'm'), ('a', 'y'), ('y', 'a')]
adjList = getAdjList(nodes, edges)
M = getM(adjList)
print pageRank(M, 0.8)
End of explanation
"""
def topicSpecific(M, S, beta=0.8, epsilon=10**-6, numiter=1000):
N = len(M)
rold = np.ones(N) * 1.0 / N
const = np.zeros(N)
for i in S:
const[i] = (1 - beta) * S[i]
for i in range(numiter):
rnew = M.dot(rold) * beta
rnew += const
if np.sum(np.abs(rold - rnew)) < epsilon:
break
rold = rnew
return rnew
def main():
nodes = ['y', 'a', 'm']
edges = [('y', 'y'), ('a', 'm'), ('m', 'm'), ('a', 'y'), ('y', 'a')]
adjList = getAdjList(nodes, edges)
M = getM(adjList)
print pageRank(M, 0.8)
def test():
nodes = [1,2,3,4,5,6]
edges = [(1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (1, 1), (2, 1), (2, 4), (2, 6), (2, 3), (2, 5), (3, 2), (3, 6), (3, 5), (4, 1), (4, 6), (5, 2), (5, 3), (6, 1)]
adjList = getAdjList(nodes, edges)
M = getM(adjList)
print M
print pageRank(M, 0.8)
if __name__ == '__main__':
test()
"""
Explanation: Topic-specific pagerank
```
nodes = [1, 2, 3, 4]
edges = [(1, 2), (1, 3), (3, 4), (4, 3), (2, 1)]
adjList = getAdjList(nodes, edges)
M = getM(adjList)
print topicSpecific(M, {0: 1}, 0.8)
print topicSpecific(M, {0: 1}, 0.9)
print topicSpecific(M, {0: 1}, 0.7)
```
End of explanation
"""
from numpy import dot
def rh(x, v):
return 1 if dot(x, v) >= 0 else -1
a = [1, 0, -2, 1, -3, 0, 0]
b = [2, 0, -3, 0, -2, 0, 2]
c = [1, -1, 0, 1, 2, -2, 1]
x = [1, 1, 1, 1, 1, 1, 1]
y = [-1, 1, -1, 1, -1, 1, -1]
z = [1, 1, 1, -1, -1, -1, -1]
print "a"
print rh(a, x)
print rh(a, y)
print rh(a, z)
print "b"
print rh(b, x)
print rh(b, y)
print rh(b, z)
print "c"
print rh(c, x)
print rh(c, y)
print rh(c, z)
"""
Explanation: Random hyperplane
End of explanation
"""
from scipy.spatial.distance import cosine
from numpy import arccos
from numpy import pi
a = [-1, 1, 1]
b = [-1, 1, -1]
c = [1, -1, -1]
print arccos(1 - cosine(a, b)) / pi * 180
print arccos(1 - cosine(b, c)) / pi * 180
print arccos(1 - cosine(c, a)) / pi * 180
"""
Explanation: Estimate angles
End of explanation
"""
from scipy.spatial.distance import jaccard
from scipy.spatial.distance import cosine
def jaccard_sim(u, v):
return 1 - jaccard(u, v)
def cosine_sim(u, v):
return 1 - cosine(u, v)
print jaccard_sim([1,0,0,1,1], [0,1,1,1,0])
print cosine_sim([1,0,1], [0,1,1])
print cosine_sim([5.22, 1.42], [4.06, 6.39])
"""
Explanation: Jaccard similarity
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/9552276573be20bde95d1b4bc52b4768/20_event_arrays.ipynb | bsd-3-clause | import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
"""
Explanation: Working with events
This tutorial describes event representation and how event arrays are used to
subselect data.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the :class:~mne.io.Raw
object to just 60 seconds before loading it into RAM to save memory:
End of explanation
"""
events = mne.find_events(raw, stim_channel='STI 014')
"""
Explanation: The tutorial tut-events-vs-annotations describes in detail the
different ways of obtaining an :term:Events array <events> from a
:class:~mne.io.Raw object (see the section
overview-tut-events-section for details). Since the sample
dataset <sample-dataset> includes experimental events recorded on
:term:stim channel STI 014, we'll start this tutorial by parsing the
events from that channel using :func:mne.find_events:
End of explanation
"""
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw-eve.fif')
events_from_file = mne.read_events(sample_data_events_file)
assert np.array_equal(events, events_from_file[:len(events)])
"""
Explanation: Reading and writing events from/to a file
Event arrays are :class:NumPy array <numpy.ndarray> objects, so they could
be saved to disk as binary :file:.npy files using :func:numpy.save.
However, MNE-Python provides convenience functions :func:mne.read_events
and :func:mne.write_events for reading and writing event arrays as either
text files (common file extensions are :file:.eve, :file:.lst, and
:file:.txt) or binary :file:.fif files. The example dataset includes the
results of mne.find_events(raw) in a :file:.fif file. Since we've
truncated our :class:~mne.io.Raw object, it will have fewer events than the
events file loaded from disk (which contains events for the entire
recording), but the events should match for the first 60 seconds anyway:
End of explanation
"""
mne.find_events(raw, stim_channel='STI 014')
"""
Explanation: When writing event arrays to disk, the format will be inferred from the file
extension you provide. By convention, MNE-Python expects events files to
either have an :file:.eve extension or to have a file basename ending in
-eve or _eve (e.g., :file:{my_experiment}_eve.fif), and will issue
a warning if this convention is not respected.
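For example, the truncated events found above could be written out and read back like this (the file name is just an illustration that follows the naming convention):
```python
mne.write_events('sample_audvis_truncated-eve.fif', events)
events_round_trip = mne.read_events('sample_audvis_truncated-eve.fif')
```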
Subselecting and combining events
The output of :func:~mne.find_events above (repeated here) told us the
number of events that were found, and the unique integer event IDs present:
End of explanation
"""
events_no_button = mne.pick_events(events, exclude=32)
"""
Explanation: .. sidebar:: Including/excluding events
Just like `~mne.pick_events`, `~mne.read_events` also has ``include``
and ``exclude`` parameters.
If some of those events are not of interest, you can easily subselect events
using :func:mne.pick_events, which has parameters include and
exclude. For example, in the sample data Event ID 32 corresponds to a
subject button press, which could be excluded as:
End of explanation
"""
merged_events = mne.merge_events(events, [1, 2, 3], 1)
print(np.unique(merged_events[:, -1]))
"""
Explanation: It is also possible to combine two Event IDs using :func:mne.merge_events;
the following example will combine Event IDs 1, 2 and 3 into a single event
labelled 1:
End of explanation
"""
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
"""
Explanation: Note, however, that merging events is not necessary if you simply want to
pool trial types for analysis; the next section describes how MNE-Python uses
event dictionaries to map integer Event IDs to more descriptive label
strings.
Mapping Event IDs to trial descriptors
So far in this tutorial we've only been dealing with integer Event IDs, which
were assigned based on DC voltage pulse magnitude (which is ultimately
determined by the experimenter's choices about what signals to send to the
STIM channels). Keeping track of which Event ID corresponds to which
experimental condition can be cumbersome, and it is often desirable to pool
experimental conditions during analysis. You may recall that the mapping of
integer Event IDs to meaningful descriptions for the sample dataset
<sample-dataset> is given in this table
<sample-data-event-dict-table> in the introductory tutorial
<tut-overview>. Here we simply reproduce that mapping as an
event dictionary:
End of explanation
"""
fig = mne.viz.plot_events(events, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp, event_id=event_dict)
fig.subplots_adjust(right=0.7) # make room for legend
"""
Explanation: Event dictionaries like this one are used when extracting epochs from
continuous data, and the resulting :class:~mne.Epochs object allows pooling
by requesting partial trial descriptors. For example, if we wanted to pool
all auditory trials, instead of merging Event IDs 1 and 2 using the
:func:~mne.merge_events function, we can make use of the fact that the keys
of event_dict contain multiple trial descriptors separated by /
characters: requesting 'auditory' trials will select all epochs with
Event IDs 1 and 2; requesting 'left' trials will select all epochs with
Event IDs 1 and 3. An example of this is shown later, in the
tut-section-subselect-epochs section of the tutorial
tut-epochs-class.
Plotting events
Another use of event dictionaries is when plotting events, which can serve as
a useful check that your event signals were properly sent to the STIM
channel(s) and that MNE-Python has successfully found them. The function
:func:mne.viz.plot_events will plot each event versus its sample number
(or, if you provide the sampling frequency, it will plot them versus time in
seconds). It can also account for the offset between sample number and sample
index in Neuromag systems, with the first_samp parameter. If an event
dictionary is provided, it will be used to generate a legend:
End of explanation
"""
raw.plot(events=events, start=5, duration=10, color='gray',
event_color={1: 'r', 2: 'g', 3: 'b', 4: 'm', 5: 'y', 32: 'k'})
"""
Explanation: Plotting events and raw data together
Events can also be plotted alongside the :class:~mne.io.Raw object they
were extracted from, by passing the Event array as the events parameter
of :meth:raw.plot <mne.io.Raw.plot>:
End of explanation
"""
new_events = mne.make_fixed_length_events(raw, start=5, stop=50, duration=2.)
"""
Explanation: Making equally-spaced Events arrays
For some experiments (such as those intending to analyze resting-state
activity) there may not be any experimental events included in the raw
recording. In such cases, an Events array of equally-spaced events can be
generated using :func:mne.make_fixed_length_events:
End of explanation
"""
|
jserenson/Python_Bootcamp | Statements Assessment Test.ipynb | gpl-3.0 | st = 'Print only the words that start with s in this sentence'
#Code here
st = 'Print only the words that start with s in this sentence'
for word in st.split():
if word[0] == 's':
print(word )
"""
Explanation: Statements Assessment Test
Let's test your knowledge!
Use for, split(), and if to create a Statement that will print out words that start with 's':
End of explanation
"""
#Code Here
for number in range(0,11):
if number % 2 == 0:
print(number)
"""
Explanation: Use range() to print all the even numbers from 0 to 10.
End of explanation
"""
#Code in this cell
l = [number for number in range(1,51) if number % 3 == 0]
print(l)
"""
Explanation: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
End of explanation
"""
st = 'Print every word in this sentence that has an even number of letters'
#Code in this cell
st = 'Print every word in this sentence that has an even number of letters'
for word in st.split():
if len(word) % 2 == 0:
print(word)
"""
Explanation: Go through the string below and if the length of a word is even print "even!"
End of explanation
"""
#Code in this cell
l = range(1,101)
for val in l:
if val % 3 == 0 and val % 5 == 0:
print ('FizzBuzz num ' + str(val))
elif val % 3 == 0:
print('Fizz num ' + str(val))
elif val % 5 ==0 :
print('Buzz num ' + str(val))
"""
Explanation: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
End of explanation
"""
st = 'Create a list of the first letters of every word in this string'
#Code in this cell
st = 'Create a list of the first letters of every word in this string'
l = []
for word in st.split():
l.append(word[0])
print(l)
"""
Explanation: Use List Comprehension to create a list of the first letters of every word in the string below:
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/calibrate_predicted_probabilities_in_svc.ipynb | mit | # Load libraries
from sklearn.svm import SVC
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
import numpy as np
"""
Explanation: Title: Calibrate Predicted Probabilities In SVC
Slug: calibrate_predicted_probabilities_in_svc
Summary: How to calibrate predicted probabilities in support vector classifier in Scikit-Learn
Date: 2017-09-22 12:00
Category: Machine Learning
Tags: Support Vector Machines
Authors: Chris Albon
An SVC's use of a hyperplane to create decision regions does not naturally output a probability estimate that an observation is a member of a certain class. However, we can in fact output calibrated class probabilities, with a few caveats. In an SVC, Platt scaling can be used, wherein first the SVC is trained, and then a separate cross-validated logistic regression is trained to map the SVC outputs into probabilities:
$$P(y=1 \mid x)={\frac {1}{1+e^{(A*f(x)+B)}}}$$
where $A$ and $B$ are parameters fit to the data and $f(x)$ is the observation's signed distance from the hyperplane. When we have more than two classes, an extension of Platt scaling is used.
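In code, the mapping itself is just the following sketch, where A and B stand for the fitted Platt parameters that scikit-learn estimates internally:
```python
import numpy as np

def platt_probability(f_x, A, B):
    # f_x: signed distance from the hyperplane; A, B: fitted Platt parameters.
    return 1.0 / (1.0 + np.exp(A * f_x + B))
```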
In scikit-learn, the predicted probabilities must be generated when the model is being trained. This can be done by setting SVC's probability to True. After the model is trained, we can output the estimated probabilities for each class using predict_proba.
Preliminaries
End of explanation
"""
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
"""
Explanation: Load Iris Flower Data
End of explanation
"""
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
"""
Explanation: Standardize Features
End of explanation
"""
# Create support vector classifier object
svc = SVC(kernel='linear', probability=True, random_state=0)
# Train classifier
model = svc.fit(X_std, y)
"""
Explanation: Train Support Vector Classifier
End of explanation
"""
# Create new observation
new_observation = [[.4, .4, .4, .4]]
"""
Explanation: Create Previously Unseen Observation
End of explanation
"""
# View predicted probabilities
model.predict_proba(new_observation)
"""
Explanation: View Predicted Probabilities
End of explanation
"""
|
fierval/retina | Notebooks/Unused/CicrularCrop.ipynb | mit | import os
import skimage
from skimage import io, util
from skimage.draw import circle
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import math
"""
Explanation: Experiments with Crop Improvements
This notebook experiments with improvements to image cropping. It performs the following steps:
determine dimensions of the image
determine the center of the image
zeroify the borders of the image to get rid of non-black background and edge distortions
crop to the new size of the image
End of explanation
"""
baseFolder = '/Users/boris/Dropbox/Kaggle/Retina/train/sample'
imgFile = '78_left.jpeg'
filename = os.path.join(baseFolder, imgFile)
img = io.imread(filename)
plt.imshow(img)
"""
Explanation: Non-Cropped image case
Load some image that is not cropped.
End of explanation
"""
threshold = 20000
s = np.sum(img, axis=2)
cols = np.sum(s, axis=0) > threshold
rows = np.sum(s, axis=1) > threshold
"""
Explanation: The simplest way to detect edges for cropping of a circular image with dark background is to sum up along different axes. Let's see how it works. First we sum up all the color channels, then compute horizontal and vertical borders.
End of explanation
"""
height = rows.shape[0]
width = cols.shape[0]
x_min = np.argmax(cols[0:width])
x_max = width/2 + np.argmin(cols[width/2:width-1])
y_min = np.argmax(rows[0:height/2])
y_max = height/2 + np.argmin(cols[height/2:height-1])
"""
Explanation: now compute borders of the image
End of explanation
"""
radius = (x_max - x_min)/2
center_x = x_min + radius
center_y = y_min + radius
radius1 = radius - 100
"""
Explanation: This is the simple case of a non-trimmed image. Let's determine its radius and center. We reduce the radius in order to get rid of the edge distortions.
End of explanation
"""
mask = np.zeros(img.shape)
rr, cc = circle(center_y, center_x, radius1, img.shape)
mask[rr, cc] = 1
img *= mask
"""
Explanation: Now we zeroify everything outside the circle determined above. We need to do this because the black background is actually not truly black.
End of explanation
"""
x_borders = (center_x - radius1, img.shape[1] - center_x - radius1)
y_borders = (center_y - radius1, img.shape[0] - center_y - radius1)
img2 = util.crop(img, (y_borders, x_borders, (0,0)))
maskT = util.crop(mask, (y_borders, x_borders, (0,0)))
border_pixels = np.sum(1 - maskT)
plt.imshow(img2)
"""
Explanation: and now we are ready to do the actual crop of the image. Perform the very same crop operation on the mask for further processing.
End of explanation
"""
baseFolder = '/Users/boris/Dropbox/Kaggle/Retina/train/sample'
imgFile = '263_right.jpeg'
filename = os.path.join(baseFolder, imgFile)
img = skimage.io.imread(filename)
plt.imshow(img)
s = np.sum(img, axis=2)
cols = np.sum(s, axis=0) > threshold
rows = np.sum(s, axis=1) > threshold
"""
Explanation: Cropped Image Case
now let's experiment with a cropped image
End of explanation
"""
threshold = 20000
height = rows.shape[0]
width = cols.shape[0]
x_min = np.argmax(cols[0:width])
x_max = width/2 + np.argmin(cols[width/2:width-1])
y_min = np.argmax(rows[0:height/2])
y_max = np.argmin(cols[height/2:height-1])
y_max = height/2 + y_max if y_max > 0 else height
print x_min, x_max, y_min, y_max, height/2
"""
Explanation: now compute borders of the image
End of explanation
"""
radius = (x_max - x_min)/2
center_x = x_min + radius
center_y = y_min + radius # the default case (if y_min != 0)
if y_min == 0: # the upper side is cropped
if height - y_max > 0: # lower border is not 0
center_y = y_max - radius
else:
upper_line_width = np.sum(s[0,:] > 100) # threshold for single line
center_y = math.sqrt( radius**2 - (upper_line_width/2)**2)
radius1 = radius - 200
mask = np.zeros(img.shape[0:2])
rr, cc = circle(center_y, center_x, radius1, img.shape)
mask[rr, cc] = 1
img[:,:,0] *= mask
img[:,:,1] *= mask
img[:,:,2] *= mask
x_borders = (center_x - radius1, img.shape[1] - center_x - radius1)
y_borders = (max(center_y - radius1,0), max(img.shape[0] - center_y - radius1, 0))
img2 = util.crop(img, (y_borders, x_borders, (0,0)))
maskT = util.crop(mask, (y_borders, x_borders))
border_pixels = np.sum(1 - maskT)
plt.imshow(img2)
"""
Explanation: And the radius and center of the image. If at least the upper or lower side of the disk is not cropped, use it to determine the vertical center. Otherwise use the Pythagorean theorem :-)
End of explanation
"""
def circularcrop(img, border, threshold, threshold1):
s = np.sum(img, axis=2)
cols = np.sum(s, axis=0) > threshold
rows = np.sum(s, axis=1) > threshold
height = rows.shape[0]
width = cols.shape[0]
x_min = np.argmax(cols[0:width])
x_max = width/2 + np.argmin(cols[width/2:width-1])
y_min = np.argmax(rows[0:height/2])
y_max = np.argmin(cols[height/2:height-1])
y_max = height/2 + y_max if y_max > 0 else height
radius = (x_max - x_min)/2
center_x = x_min + radius
center_y = y_min + radius # the default case (if y_min != 0)
if y_min == 0: # the upper side is cropped
if height - y_max > 0: # lower border is not 0
center_y = y_max - radius
else:
upper_line_width = np.sum(s[0,:] > threshold1) # threshold for single line
center_y = math.sqrt( radius**2 - (upper_line_width/2)**2)
radius1 = radius - border
mask = np.zeros(img.shape[0:2])
rr, cc = circle(center_y, center_x, radius1, img.shape)
mask[rr, cc] = 1
img[:,:,0] *= mask
img[:,:,1] *= mask
img[:,:,2] *= mask
x_borders = (center_x - radius1, img.shape[1] - center_x - radius1)
y_borders = (max(center_y - radius1,0), max(img.shape[0] - center_y - radius1, 0))
imgres = util.crop(img, (y_borders, x_borders, (0,0)))
maskT = util.crop(mask, (y_borders, x_borders))
border_pixels = np.sum(1 - maskT)
return imgres, maskT, center_x, center_y, radius
baseFolder = '/Users/boris/Dropbox/Shared/Retina'
imgFile = 'crop/20677_left.jpeg'
filename = os.path.join(baseFolder, imgFile)
img = io.imread(filename)
plt.imshow(img)
(imgA, maskA, x,y,r) = circularcrop(img, 200, 20000, 100)
plt.imshow(imgA)
img.shape[0:2]
"""
Explanation: and now putting everything together
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/tutorials/images/transfer_learning.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
"""
Explanation: Transfer learning with a pretrained ConvNet
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
In this tutorial, you will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network.
A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You can either use the pretrained model as is or use transfer learning to customize it for a given task.
The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, it will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.
In this notebook, you will try two ways to customize a pretrained model:
Feature extraction: use the representations learned by a previous network to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pretrained model so that you can repurpose the feature maps learned previously for the dataset.
You do not need to (re)train the entire model. The base convolutional network already contains features that are generically useful for classifying pictures. However, the final, classification part of the pretrained model is specific to the original classification task, and subsequently specific to the set of classes on which the model was trained.
Fine-tuning: unfreeze a few of the top layers of a frozen base model and jointly train both the newly added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model in order to make them more relevant for the specific task.
You will follow the general machine learning workflow:
Examine and understand the data
Build an input pipeline (in this case using the Keras image utilities)
Compose the model
Load in the pretrained base model (and pretrained weights)
Stack the classification layers on top
Train the model
Evaluate the model
End of explanation
"""
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
validation_dataset = image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
"""
Explanation: Data preprocessing
Data download
In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a tf.data.Dataset for training and validation using the tf.keras.preprocessing.image_dataset_from_directory utility. You can learn more about loading images in the image loading tutorial.
End of explanation
"""
class_names = train_dataset.class_names
plt.figure(figsize=(10, 10))
for images, labels in train_dataset.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
"""
Explanation: Show the first nine images and labels from the training set:
End of explanation
"""
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)
print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset))
print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset))
"""
Explanation: As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, then move 20% of them to a test set.
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
"""
Explanation: Configure the dataset for performance
Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method, see the data performance guide.
End of explanation
"""
data_augmentation = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.RandomFlip('horizontal'),
tf.keras.layers.experimental.preprocessing.RandomRotation(0.2),
])
"""
Explanation: Use data augmentation
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduces overfitting. You can learn more about data augmentation in the data augmentation tutorial.
End of explanation
"""
for image, _ in train_dataset.take(1):
plt.figure(figsize=(10, 10))
first_image = image[0]
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
plt.imshow(augmented_image[0] / 255)
plt.axis('off')
"""
Explanation: Note: These layers are active only during training, when you call model.fit. They are inactive when the model is used in inference mode via model.evaluate or model.predict.
Let's repeatedly apply these layers to the same image and see the result.
End of explanation
"""
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
"""
Explanation: Rescale pixel values
In a moment, you will download tf.keras.applications.MobileNetV2 for use as your base model. This model expects pixel values in [-1, 1], but at this point, the pixel values in your images are in [0, 255]. To rescale them, use the preprocessing method included with the model.
End of explanation
"""
rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset= -1)
"""
Explanation: Note: Alternatively, you could rescale pixel values from [0, 255] to [-1, 1] using a Rescaling layer.
End of explanation
"""
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
"""
Explanation: Note: If using other tf.keras.applications, be sure to check the API doc to determine whether they expect pixels in [-1, 1] or [0, 1], or use the included preprocess_input function.
Create the base model from the pre-trained convnets
You will create the base model from the MobileNet V2 model developed at Google. This model is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like jackfruit and syringe. This base of knowledge will help us classify cats and dogs from our specific dataset.
First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice of depending on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality than the final/top layer.
First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
End of explanation
"""
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
"""
Explanation: This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. Let's see what it does to an example batch of images:
End of explanation
"""
base_model.trainable = False
"""
Explanation: Feature extraction
In this step, you will freeze the convolutional base created in the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.
Freeze the convolutional base
It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False will freeze all of them.
End of explanation
"""
# Let's take a look at the base model architecture.
base_model.summary()
"""
Explanation: Important note about BatchNormalization layers
Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
When you set layer.trainable = False, the BatchNormalization layer runs in inference mode and does not update its mean and variance statistics.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
For details, see the Transfer learning guide.
End of explanation
"""
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
"""
Explanation: Add a classification head
To generate predictions from the block of features, average over the 5x5 spatial locations using a tf.keras.layers.GlobalAveragePooling2D layer to convert the features into a single 1280-element vector per image.
End of explanation
"""
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
"""
Explanation: Apply a tf.keras.layers.Dense layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
End of explanation
"""
inputs = tf.keras.Input(shape=(160, 160, 3))
x = data_augmentation(inputs)
x = preprocess_input(x)
x = base_model(x, training=False)
x = global_average_layer(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs)
"""
Explanation: Build a model by chaining together the data augmentation, rescaling, base_model, and feature extractor layers using the Keras Functional API. As previously mentioned, use training=False since the model contains a BatchNormalization layer.
End of explanation
"""
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
"""
Explanation: Compile the model
Compile the model before training it. Since there are two classes and the model provides a linear output, use the binary cross-entropy loss with from_logits=True.
End of explanation
"""
len(model.trainable_variables)
"""
Explanation: The 2.5M parameters in MobileNet are frozen, but there are 1.2K trainable parameters in the Dense layer. These are divided between two tf.Variable objects: the weights and the biases.
End of explanation
"""
initial_epochs = 10
loss0, accuracy0 = model.evaluate(validation_dataset)
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
history = model.fit(train_dataset,
epochs=initial_epochs,
validation_data=validation_dataset)
"""
Explanation: Train the model
After training for 10 epochs, you should see ~94% accuracy on the validation set.
End of explanation
"""
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
"""
Explanation: Learning curves
Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNet V2 base model as a fixed feature extractor.
End of explanation
"""
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
"""
Explanation: Note: If you are wondering why the validation metrics are clearly better than the training metrics, the main factor is that layers like tf.keras.layers.BatchNormalization and tf.keras.layers.Dropout affect accuracy during training. They are turned off when calculating validation loss.
To a lesser extent, it is also because training metrics report the average for an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer.
Fine-tuning
In the feature extraction experiment, you were only training a few layers on top of the MobileNet V2 base model. The weights of the pre-trained network were not updated during training.
One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset.
Note: This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (because the classifier starts from random weights) and your pre-trained model will forget what it has learned.
Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features become increasingly specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwriting the generic learning.
Un-freeze the top layers of the model
All you need to do is unfreeze the base_model and set the bottom layers to be un-trainable. Then, you should recompile the model (necessary for these changes to take effect) and resume training.
End of explanation
"""
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer = tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate/10),
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
"""
Explanation: Compile the model
As you are training a much larger model and want to readapt the pretrained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly.
End of explanation
"""
fine_tune_epochs = 10
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_dataset,
epochs=total_epochs,
initial_epoch=history.epoch[-1],
validation_data=validation_dataset)
"""
Explanation: Continue training the model
If you trained to convergence earlier, this step will improve your accuracy by a few percentage points.
End of explanation
"""
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
"""
Explanation: Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNet V2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting.
You may also get some overfitting because the new training set is relatively small and similar to the original MobileNet V2 dataset.
After fine-tuning, the model nearly reaches 98% accuracy.
End of explanation
"""
loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
"""
Explanation: Evaluation and prediction
Finally, you can verify the performance of the model on new data using the test set.
End of explanation
"""
#Retrieve a batch of images from the test set
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()
# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
print('Predictions:\n', predictions.numpy())
print('Labels:\n', label_batch)
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].astype("uint8"))
plt.title(class_names[predictions[i]])
plt.axis("off")
"""
Explanation: And now you are all set to use this model to predict whether your pet is a cat or a dog.
End of explanation
"""
|
schoolie/bokeh | examples/howto/charts/bar.ipynb | bsd-3-clause | df['neg_mpg'] = 0 - df['mpg']
"""
Explanation: Calculate some negative values to show handling of them
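This record of the notebook starts mid-way, so the names used below (df, Bar, defaults, show) are assumed to come from earlier cells that are not included here. A minimal sketch of what that setup would look like with the legacy bokeh.charts API and the autompg sample data — these import paths are an assumption, not part of the original notebook:
python
# Assumed setup from earlier cells (bokeh.charts is the legacy, since-removed charts API used below)
from bokeh.charts import Bar, defaults
from bokeh.io import output_notebook, show
from bokeh.sampledata.autompg import autompg as df  # assumed source of `df`
output_notebook()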
End of explanation
"""
defaults.width = 550
defaults.height = 400
"""
Explanation: Override some default values to avoid requiring input on each chart
End of explanation
"""
bar_plot = Bar(df, label='cyl', title="label='cyl'")
show(bar_plot)
bar_plot2 = Bar(df, label='cyl', bar_width=0.4, title="label='cyl' bar_width=0.4")
show(bar_plot2)
bar_plot3 = Bar(df, label='cyl', values='mpg', agg='mean',
title="label='cyl' values='mpg' agg='mean'")
show(bar_plot3)
bar_plot4 = Bar(df, label='cyl', title="label='cyl' color='DimGray'", color='dimgray')
show(bar_plot4)
# multiple columns
bar_plot5 = Bar(df, label=['cyl', 'origin'], values='mpg', agg='mean',
title="label=['cyl', 'origin'] values='mpg' agg='mean'")
show(bar_plot5)
bar_plot6 = Bar(df, label='origin', values='mpg', agg='mean', stack='cyl',
title="label='origin' values='mpg' agg='mean' stack='cyl'", legend='top_right')
show(bar_plot6)
bar_plot7 = Bar(df, label='cyl', values='displ', agg='mean', group='origin',
title="label='cyl' values='displ' agg='mean' group='origin'", legend='top_right')
show(bar_plot7)
bar_plot8 = Bar(df, label='cyl', values='neg_mpg', agg='mean', group='origin',
color='origin', legend='top_right',
title="label='cyl' values='neg_mpg' agg='mean' group='origin'")
show(bar_plot8)
"""
Explanation: Create and show each bar chart
End of explanation
"""
|
tkphd/pycalphad | examples/BinaryExamples.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, binplot
import pycalphad.variables as v
# Load database and choose the phases that will be considered
db_alzn = Database('alzn_mey.tdb')
my_phases_alzn = ['LIQUID', 'FCC_A1', 'HCP_A3']
# Create a matplotlib Figure object and get the active Axes
fig = plt.figure(figsize=(9,6))
axes = fig.gca()
# Compute the phase diagram and plot it on the existing axes using the `plot_kwargs={'ax': axes}` keyword argument
binplot(db_alzn, ['AL', 'ZN', 'VA'] , my_phases_alzn, {v.X('ZN'):(0,1,0.02), v.T: (300, 1000, 10), v.P:101325, v.N: 1}, plot_kwargs={'ax': axes})
plt.show()
"""
Explanation: Plotting Isobaric Binary Phase Diagrams with binplot
These are a few examples of how to use Thermo-Calc TDB files to calculate isobaric binary phase diagrams. As long as the TDB file is present, each cell in these examples is self-contained and can completely reproduce the figure shown.
binplot
The phase diagrams are computed with binplot, which has four required arguments:
1. The Database object
2. A list of active components (vacancies (VA), which are present in many databases, must be included explicitly).
3. A list of phases to consider in the calculation
4. A dictionary conditions to consider, with keys of pycalphad StateVariables and values of scalars, 1D arrays, or (start, stop, step) ranges
Note that, at the time of writing, invariant reactions (three-phase 'regions' on binary diagrams) are not yet automatically detected so they
are not drawn on the diagram.
Also note that the magic variable %matplotlib inline should only be used in Jupyter notebooks.
TDB files
The TDB files should be located in the current working directory of the notebook. If you are running using a Jupyter notebook, the default working directory is the directory that that notebook is saved in.
To check the working directory, run:
python
import os
print(os.path.abspath(os.curdir))
TDB files can be found in the literature. The Thermodynamic DataBase DataBase (TDBDB) has indexed many available databases and links to the original papers and/or TDB files where possible.
Al-Zn (S. Mey, 1993)
The miscibility gap in the fcc phase is included in the Al-Zn diagram, shown below.
The format for specifying a range of a state variable is (start, stop, step).
S. an Mey, Zeitschrift für Metallkunde 84(7) (1993) 451-455.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, binplot
import pycalphad.variables as v
# Load database
dbf = Database('Al-Mg_Zhong.tdb')
# Define the components
comps = ['AL', 'MG', 'VA']
# Get all possible phases programmatically
phases = dbf.phases.keys()
# Plot the phase diagram, if no axes are supplied, a new figure with axes will be created automatically
binplot(dbf, comps, phases, {v.N: 1, v.P:101325, v.T: (300, 1000, 10), v.X('MG'):(0, 1, 0.02)})
plt.show()
"""
Explanation: Al-Mg (Y. Zhong, 2005)
Y. Zhong, M. Yang, Z.-K. Liu, CALPHAD 29 (2005) 303-311 doi:10.1016/j.calphad.2005.08.004
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, binplot
import pycalphad.variables as v
# Load database
dbf = Database('NI_AL_DUPIN_2001.TDB')
# Set the components to consider, including vacancies (VA) explicitly.
comps = ['AL', 'NI', 'VA']
# Get all the phases in the database programmatically
phases = list(dbf.phases.keys())
# Create the dictionary of conditions
conds = {
v.N: 1, v.P: 101325,
v.T: (300, 2000, 10), # (start, stop, step)
v.X('AL'): (1e-5, 1, 0.02), # (start, stop, step)
}
# Create a matplotlib Figure object and get the active Axes
fig = plt.figure(figsize=(9,6))
axes = fig.gca()
# Plot by passing in all the variables
binplot(dbf, comps, phases, conds, plot_kwargs={'ax': axes})
plt.show()
"""
Explanation: Al-Ni (Dupin, 2001)
Components and conditions can also be stored as variables and passed to binplot.
N. Dupin, I. Ansara, B. Sundman, CALPHAD 25(2) (2001) 279-298 doi:10.1016/S0364-5916(01)00049-9
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, binplot
import pycalphad.variables as v
# Load database and choose the phases that will be considered
db_alfe = Database('alfe_sei.TDB')
my_phases_alfe = ['LIQUID', 'B2_BCC', 'FCC_A1', 'HCP_A3', 'AL5FE2', 'AL2FE', 'AL13FE4', 'AL5FE4']
# Create a matplotlib Figure object and get the active Axes
fig = plt.figure(figsize=(9,6))
axes = fig.gca()
# Plot the phase diagram on the existing axes using the `plot_kwargs={'ax': axes}` keyword argument
# Tielines are turned off by including `'tielines': False` in the plotting keword argument
binplot(db_alfe, ['AL', 'FE', 'VA'] , my_phases_alfe, {v.X('AL'):(0,1,0.01), v.T: (300, 2000, 10), v.P:101325}, plot_kwargs={'ax': axes, 'tielines': False})
plt.show()
"""
Explanation: Al-Fe (M. Seiersten, 1991)
Removing tielines
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, binplot, variables as v
# Load database and choose the phases that will be plotted
db_nbre = Database('nbre_liu.tdb')
my_phases_nbre = ['CHI_RENB', 'SIGMARENB', 'FCC_RENB', 'LIQUID_RENB', 'BCC_RENB', 'HCP_RENB']
# Create a matplotlib Figure object and get the active Axes
fig = plt.figure(figsize=(9,6))
axes = fig.gca()
# Plot the phase diagram on the existing axes using the `plot_kwargs={'ax': axes}` keyword argument
binplot(db_nbre, ['NB', 'RE'] , my_phases_nbre, {v.X('RE'): (0,1,0.01), v.T: (1000, 3500, 20), v.P:101325}, plot_kwargs={'ax': axes})
axes.set_xlim(0, 1)
plt.show()
"""
Explanation: Nb-Re (Liu, 2013)
X.L. Liu, C.Z. Hargather, Z.-K. Liu, CALPHAD 41 (2013) 119-127 doi:10.1016/j.calphad.2013.02.006
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from pycalphad import Database, calculate, variables as v
from pycalphad.plot.utils import phase_legend
import numpy as np
# Load database and choose the phases that will be plotted
db_nbre = Database('nbre_liu.tdb')
my_phases_nbre = ['CHI_RENB', 'SIGMARENB', 'FCC_RENB', 'LIQUID_RENB', 'BCC_RENB', 'HCP_RENB']
# Get the colors that map phase names to colors in the legend
legend_handles, color_dict = phase_legend(my_phases_nbre)
fig = plt.figure(figsize=(9,6))
ax = fig.gca()
# Loop over phases, calculate the Gibbs energy, and scatter plot GM vs. X(RE)
for phase_name in my_phases_nbre:
result = calculate(db_nbre, ['NB', 'RE'], phase_name, P=101325, T=2800, output='GM')
ax.scatter(result.X.sel(component='RE'), result.GM, marker='.', s=5, color=color_dict[phase_name])
# Format the plot
ax.set_xlabel('X(RE)')
ax.set_ylabel('GM')
ax.set_xlim((0, 1))
ax.legend(handles=legend_handles, loc='center left', bbox_to_anchor=(1, 0.6))
plt.show()
"""
Explanation: Calculating Energy Surfaces of Binary Systems
It is very common in CALPHAD modeling to directly examine the Gibbs energy surface of all the constituent phases in a system.
Below we show how the Gibbs energy of all phases may be calculated as a function of composition at a given temperature (2800 K).
Note that the chi phase has additional, internal degrees of freedom which allow it to take on multiple states for a given
overall composition. Only the low-energy states are relevant to calculating the equilibrium phase diagram.
End of explanation
"""
|
tensorflow/text | docs/tutorials/nmt_with_attention.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
!pip install "tensorflow-text==2.8.*"
import numpy as np
import typing
from typing import Any, Tuple
import tensorflow as tf
import tensorflow_text as tf_text
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
"""
Explanation: Neural machine translation with attention
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/nmt_with_attention">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/nmt_with_attention.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on Effective Approaches to Attention-based Neural Machine Translation. This is an advanced example that assumes some knowledge of:
Sequence to sequence models
TensorFlow fundamentals below the keras layer:
Working with tensors directly
Writing custom keras.Models and keras.layers
While this architecture is somewhat outdated it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to Transformers).
After training the model in this notebook, you will be able to input a Spanish sentence, such as "ยฟtodavia estan en casa?", and return the English translation: "are you still at home?"
The resulting model is exportable as a tf.saved_model, so it can be used in other TensorFlow environments.
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence has the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
Setup
End of explanation
"""
use_builtins = True
"""
Explanation: This tutorial builds a few layers from scratch, use this variable if you want to switch between the custom and builtin implementations.
End of explanation
"""
#@title Shape checker
class ShapeChecker():
def __init__(self):
# Keep a cache of every axis-name seen
self.shapes = {}
def __call__(self, tensor, names, broadcast=False):
if not tf.executing_eagerly():
return
if isinstance(names, str):
names = (names,)
shape = tf.shape(tensor)
rank = tf.rank(tensor)
if rank != len(names):
raise ValueError(f'Rank mismatch:\n'
f' found {rank}: {shape.numpy()}\n'
f' expected {len(names)}: {names}\n')
for i, name in enumerate(names):
if isinstance(name, int):
old_dim = name
else:
old_dim = self.shapes.get(name, None)
new_dim = shape[i]
if (broadcast and new_dim == 1):
continue
if old_dim is None:
# If the axis name is new, add its length to the cache.
self.shapes[name] = new_dim
continue
if new_dim != old_dim:
raise ValueError(f"Shape mismatch for dimension: '{name}'\n"
f" found: {new_dim}\n"
f" expected: {old_dim}\n")
"""
Explanation: This tutorial uses a lot of low-level APIs where it's easy to get shapes wrong. This class is used to check shapes throughout the tutorial.
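For instance, the checker defined above caches the first length it sees for each named axis and raises on any later mismatch — a minimal illustration:
python
checker = ShapeChecker()
checker(tf.zeros([64, 10]), ('batch', 's'))                # caches batch=64, s=10
checker(tf.zeros([64, 10, 256]), ('batch', 's', 'embed'))  # consistent with the cache, passes
# checker(tf.zeros([32, 10]), ('batch', 's'))              # would raise: 'batch' was cached as 64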
End of explanation
"""
# Download the file
import pathlib
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = pathlib.Path(path_to_zip).parent/'spa-eng/spa.txt'
def load_data(path):
text = path.read_text(encoding='utf-8')
lines = text.splitlines()
pairs = [line.split('\t') for line in lines]
inp = [inp for targ, inp in pairs]
targ = [targ for targ, inp in pairs]
return targ, inp
targ, inp = load_data(path_to_file)
print(inp[-1])
print(targ[-1])
"""
Explanation: The data
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
May I borrow this book? ยฟPuedo tomar prestado este libro?
They have a variety of languages available, but we'll use the English-Spanish dataset.
Download and prepare the dataset
For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
Add a start and end token to each sentence.
Clean the sentences by removing special characters.
Create a word index and reverse word index (dictionaries mapping from word โ id and id โ word).
Pad each sentence to a maximum length.
End of explanation
"""
BUFFER_SIZE = len(inp)
BATCH_SIZE = 64
dataset = tf.data.Dataset.from_tensor_slices((inp, targ)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE)
for example_input_batch, example_target_batch in dataset.take(1):
print(example_input_batch[:5])
print()
print(example_target_batch[:5])
break
"""
Explanation: Create a tf.data dataset
From these arrays of strings you can create a tf.data.Dataset of strings that shuffles and batches them efficiently:
End of explanation
"""
example_text = tf.constant('¿Todavía está en casa?')
print(example_text.numpy())
print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
"""
Explanation: Text preprocessing
One of the goals of this tutorial is to build a model that can be exported as a tf.saved_model. To make that exported model useful it should take tf.string inputs, and return tf.string outputs: All the text processing happens inside the model.
Standardization
The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text.
The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.
The tensorflow_text package contains a unicode normalize operation:
End of explanation
"""
def tf_lower_and_split_punct(text):
  # Split accented characters.
text = tf_text.normalize_utf8(text, 'NFKD')
text = tf.strings.lower(text)
# Keep space, a to z, and select punctuation.
  text = tf.strings.regex_replace(text, '[^ a-z.?!,¿]', '')
# Add spaces around punctuation.
  text = tf.strings.regex_replace(text, '[.?!,¿]', r' \0 ')
# Strip whitespace.
text = tf.strings.strip(text)
text = tf.strings.join(['[START]', text, '[END]'], separator=' ')
return text
print(example_text.numpy().decode())
print(tf_lower_and_split_punct(example_text).numpy().decode())
"""
Explanation: Unicode normalization will be the first step in the text standardization function:
End of explanation
"""
max_vocab_size = 5000
input_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
"""
Explanation: Text Vectorization
This standardization function will be wrapped up in a tf.keras.layers.TextVectorization layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
End of explanation
"""
input_text_processor.adapt(inp)
# Here are the first 10 words from the vocabulary:
input_text_processor.get_vocabulary()[:10]
"""
Explanation: The TextVectorization layer and many other preprocessing layers have an adapt method. This method reads one epoch of the training data, and works a lot like Model.fit. This adapt method initializes the layer based on the data. Here it determines the vocabulary:
End of explanation
"""
output_text_processor = tf.keras.layers.TextVectorization(
standardize=tf_lower_and_split_punct,
max_tokens=max_vocab_size)
output_text_processor.adapt(targ)
output_text_processor.get_vocabulary()[:10]
"""
Explanation: That's the Spanish TextVectorization layer, now build and .adapt() the English one:
End of explanation
"""
example_tokens = input_text_processor(example_input_batch)
example_tokens[:3, :10]
"""
Explanation: Now these layers can convert a batch of strings into a batch of token IDs:
End of explanation
"""
input_vocab = np.array(input_text_processor.get_vocabulary())
tokens = input_vocab[example_tokens[0].numpy()]
' '.join(tokens)
"""
Explanation: The get_vocabulary method can be used to convert token IDs back to text:
End of explanation
"""
plt.subplot(1, 2, 1)
plt.pcolormesh(example_tokens)
plt.title('Token IDs')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
"""
Explanation: The returned token IDs are zero-padded. This can easily be turned into a mask:
End of explanation
"""
embedding_dim = 256
units = 1024
"""
Explanation: The encoder/decoder model
The following diagram shows an overview of the model. At each time-step the decoder's output is combined with a weighted sum over the encoded input, to predict the next word. The diagram and formulas are from Luong's paper.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
Before getting into it define a few constants for the model:
End of explanation
"""
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, embedding_dim, enc_units):
super(Encoder, self).__init__()
self.enc_units = enc_units
self.input_vocab_size = input_vocab_size
# The embedding layer converts tokens to vectors
self.embedding = tf.keras.layers.Embedding(self.input_vocab_size,
embedding_dim)
# The GRU RNN layer processes those vectors sequentially.
self.gru = tf.keras.layers.GRU(self.enc_units,
# Return the sequence and state
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, tokens, state=None):
shape_checker = ShapeChecker()
shape_checker(tokens, ('batch', 's'))
# 2. The embedding layer looks up the embedding for each token.
vectors = self.embedding(tokens)
shape_checker(vectors, ('batch', 's', 'embed_dim'))
# 3. The GRU processes the embedding sequence.
# output shape: (batch, s, enc_units)
# state shape: (batch, enc_units)
output, state = self.gru(vectors, initial_state=state)
shape_checker(output, ('batch', 's', 'enc_units'))
shape_checker(state, ('batch', 'enc_units'))
# 4. Returns the new sequence and its state.
return output, state
"""
Explanation: The encoder
Start by building the encoder, the blue part of the diagram above.
The encoder:
Takes a list of token IDs (from input_text_processor).
Looks up an embedding vector for each token (Using a layers.Embedding).
Processes the embeddings into a new sequence (Using a layers.GRU).
Returns:
The processed sequence. This will be passed to the attention head.
The internal state. This will be used to initialize the decoder
End of explanation
"""
# Convert the input text to tokens.
example_tokens = input_text_processor(example_input_batch)
# Encode the input sequence.
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
example_enc_output, example_enc_state = encoder(example_tokens)
print(f'Input batch, shape (batch): {example_input_batch.shape}')
print(f'Input batch tokens, shape (batch, s): {example_tokens.shape}')
print(f'Encoder output, shape (batch, s, units): {example_enc_output.shape}')
print(f'Encoder state, shape (batch, units): {example_enc_state.shape}')
"""
Explanation: Here is how it fits together so far:
End of explanation
"""
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super().__init__()
# For Eqn. (4), the Bahdanau attention
self.W1 = tf.keras.layers.Dense(units, use_bias=False)
self.W2 = tf.keras.layers.Dense(units, use_bias=False)
self.attention = tf.keras.layers.AdditiveAttention()
def call(self, query, value, mask):
shape_checker = ShapeChecker()
shape_checker(query, ('batch', 't', 'query_units'))
shape_checker(value, ('batch', 's', 'value_units'))
shape_checker(mask, ('batch', 's'))
# From Eqn. (4), `W1@ht`.
w1_query = self.W1(query)
shape_checker(w1_query, ('batch', 't', 'attn_units'))
# From Eqn. (4), `W2@hs`.
w2_key = self.W2(value)
shape_checker(w2_key, ('batch', 's', 'attn_units'))
query_mask = tf.ones(tf.shape(query)[:-1], dtype=bool)
value_mask = mask
context_vector, attention_weights = self.attention(
inputs = [w1_query, value, w2_key],
mask=[query_mask, value_mask],
return_attention_scores = True,
)
shape_checker(context_vector, ('batch', 't', 'value_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
return context_vector, attention_weights
"""
Explanation: The encoder returns its internal state so that its state can be used to initialize the decoder.
It's also common for an RNN to return its state so that it can process a sequence over multiple calls. You'll see more of that building the decoder.
The attention head
The decoder uses attention to selectively focus on parts of the input sequence.
The attention takes a sequence of vectors as input for each example and returns an "attention" vector for each example. This attention layer is similar to a layers.GlobalAveragePooling1D but the attention layer performs a weighted average.
Let's look at how this works:
<img src="images/attention_equation_1.jpg" alt="attention equation 1" width="800">
<img src="images/attention_equation_2.jpg" alt="attention equation 2" width="800">
Where:
$s$ is the encoder index.
$t$ is the decoder index.
$\alpha_{ts}$ is the attention weights.
$h_s$ is the sequence of encoder outputs being attended to (the attention "key" and "value" in transformer terminology).
$h_t$ is the the decoder state attending to the sequence (the attention "query" in transformer terminology).
$c_t$ is the resulting context vector.
$a_t$ is the final output combining the "context" and "query".
The equations:
Calculates the attention weights, $\alpha_{ts}$, as a softmax across the encoder's output sequence.
Calculates the context vector as the weighted sum of the encoder outputs.
Last is the $score$ function. Its job is to calculate a scalar logit-score for each key-query pair. There are two common approaches:
<img src="images/attention_equation_4.jpg" alt="attention equation 4" width="800">
This tutorial uses Bahdanau's additive attention. TensorFlow includes implementations of both as layers.Attention and
layers.AdditiveAttention. The class below handles the weight matrices in a pair of layers.Dense layers, and calls the builtin implementation.
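Written out (a reconstruction of the equations referenced by the images above, using the symbols defined in this section), the attention weights, context vector, and Bahdanau's additive score are:

$$\alpha_{ts} = \frac{\exp\big(\mathrm{score}(h_t, h_s)\big)}{\sum_{s'=1}^{S} \exp\big(\mathrm{score}(h_t, h_{s'})\big)} \qquad\qquad c_t = \sum_{s} \alpha_{ts}\, h_s$$

$$\mathrm{score}(h_t, h_s) = v_a^\top \tanh\big(W_1 h_t + W_2 h_s\big)$$

Here $W_1$ and $W_2$ correspond to the two Dense layers defined in the class below, and the final reduction over the feature axis is handled inside layers.AdditiveAttention.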
End of explanation
"""
attention_layer = BahdanauAttention(units)
"""
Explanation: Test the Attention layer
Create a BahdanauAttention layer:
End of explanation
"""
(example_tokens != 0).shape
"""
Explanation: This layer takes 3 inputs:
The query: This will be generated by the decoder, later.
The value: This will be the output of the encoder.
The mask: To exclude the padding, example_tokens != 0
End of explanation
"""
# Later, the decoder will generate this attention query
example_attention_query = tf.random.normal(shape=[len(example_tokens), 2, 10])
# Attend to the encoded tokens
context_vector, attention_weights = attention_layer(
query=example_attention_query,
value=example_enc_output,
mask=(example_tokens != 0))
print(f'Attention result shape: (batch_size, query_seq_length, units): {context_vector.shape}')
print(f'Attention weights shape: (batch_size, query_seq_length, value_seq_length): {attention_weights.shape}')
"""
Explanation: The vectorized implementation of the attention layer lets you pass a batch of sequences of query vectors and a batch of sequence of value vectors. The result is:
A batch of sequences of result vectors the size of the queries.
A batch attention maps, with size (query_length, value_length).
End of explanation
"""
plt.subplot(1, 2, 1)
plt.pcolormesh(attention_weights[:, 0, :])
plt.title('Attention weights')
plt.subplot(1, 2, 2)
plt.pcolormesh(example_tokens != 0)
plt.title('Mask')
"""
Explanation: The attention weights should sum to 1.0 for each sequence.
Here are the attention weights across the sequences at t=0:
End of explanation
"""
attention_weights.shape
attention_slice = attention_weights[0, 0].numpy()
attention_slice = attention_slice[attention_slice != 0]
#@title
plt.suptitle('Attention weights for one sequence')
plt.figure(figsize=(12, 6))
a1 = plt.subplot(1, 2, 1)
plt.bar(range(len(attention_slice)), attention_slice)
# freeze the xlim
plt.xlim(plt.xlim())
plt.xlabel('Attention weights')
a2 = plt.subplot(1, 2, 2)
plt.bar(range(len(attention_slice)), attention_slice)
plt.xlabel('Attention weights, zoomed')
# zoom in
top = max(a1.get_ylim())
zoom = 0.85*top
a2.set_ylim([0.90*top, top])
a1.plot(a1.get_xlim(), [zoom, zoom], color='k')
"""
Explanation: Because of the small-random initialization the attention weights are all close to 1/(sequence_length). If you zoom in on the weights for a single sequence, you can see that there is some small variation that the model can learn to expand, and exploit.
End of explanation
"""
class Decoder(tf.keras.layers.Layer):
def __init__(self, output_vocab_size, embedding_dim, dec_units):
super(Decoder, self).__init__()
self.dec_units = dec_units
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
    # For Step 1. The embedding layer converts token IDs to vectors
self.embedding = tf.keras.layers.Embedding(self.output_vocab_size,
embedding_dim)
# For Step 2. The RNN keeps track of what's been generated so far.
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
# For step 3. The RNN output will be the query for the attention layer.
self.attention = BahdanauAttention(self.dec_units)
# For step 4. Eqn. (3): converting `ct` to `at`
self.Wc = tf.keras.layers.Dense(dec_units, activation=tf.math.tanh,
use_bias=False)
# For step 5. This fully connected layer produces the logits for each
# output token.
self.fc = tf.keras.layers.Dense(self.output_vocab_size)
"""
Explanation: The decoder
The decoder's job is to generate predictions for the next output token.
The decoder receives the complete encoder output.
It uses an RNN to keep track of what it has generated so far.
It uses its RNN output as the query to the attention over the encoder's output, producing the context vector.
It combines the RNN output and the context vector using Equation 3 (below) to generate the "attention vector".
It generates logit predictions for the next token based on the "attention vector".
<img src="images/attention_equation_3.jpg" alt="attention equation 3" width="800">
Here is the Decoder class and its initializer. The initializer creates all the necessary layers.
End of explanation
"""
class DecoderInput(typing.NamedTuple):
new_tokens: Any
enc_output: Any
mask: Any
class DecoderOutput(typing.NamedTuple):
logits: Any
attention_weights: Any
"""
Explanation: The call method for this layer takes and returns multiple tensors. Organize those into simple container classes:
End of explanation
"""
def call(self,
inputs: DecoderInput,
state=None) -> Tuple[DecoderOutput, tf.Tensor]:
shape_checker = ShapeChecker()
shape_checker(inputs.new_tokens, ('batch', 't'))
shape_checker(inputs.enc_output, ('batch', 's', 'enc_units'))
shape_checker(inputs.mask, ('batch', 's'))
if state is not None:
shape_checker(state, ('batch', 'dec_units'))
# Step 1. Lookup the embeddings
vectors = self.embedding(inputs.new_tokens)
shape_checker(vectors, ('batch', 't', 'embedding_dim'))
# Step 2. Process one step with the RNN
rnn_output, state = self.gru(vectors, initial_state=state)
shape_checker(rnn_output, ('batch', 't', 'dec_units'))
shape_checker(state, ('batch', 'dec_units'))
# Step 3. Use the RNN output as the query for the attention over the
# encoder output.
context_vector, attention_weights = self.attention(
query=rnn_output, value=inputs.enc_output, mask=inputs.mask)
shape_checker(context_vector, ('batch', 't', 'dec_units'))
shape_checker(attention_weights, ('batch', 't', 's'))
# Step 4. Eqn. (3): Join the context_vector and rnn_output
# [ct; ht] shape: (batch t, value_units + query_units)
context_and_rnn_output = tf.concat([context_vector, rnn_output], axis=-1)
# Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])`
attention_vector = self.Wc(context_and_rnn_output)
shape_checker(attention_vector, ('batch', 't', 'dec_units'))
# Step 5. Generate logit predictions:
logits = self.fc(attention_vector)
shape_checker(logits, ('batch', 't', 'output_vocab_size'))
return DecoderOutput(logits, attention_weights), state
Decoder.call = call
"""
Explanation: Here is the implementation of the call method:
End of explanation
"""
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
"""
Explanation: The encoder processes its full input sequence with a single call to its RNN. This implementation of the decoder can do that as well for efficient training. But this tutorial will run the decoder in a loop for a few reasons:
Flexibility: Writing the loop gives you direct control over the training procedure.
Clarity: It's possible to do masking tricks and use layers.RNN, or tfa.seq2seq APIs to pack this all into a single call. But writing it out as a loop may be clearer.
Loop-free training is demonstrated in the Text generation tutorial.
Now try using this decoder.
End of explanation
"""
# Convert the target sequence, and collect the "[START]" tokens
example_output_tokens = output_text_processor(example_target_batch)
start_index = output_text_processor.get_vocabulary().index('[START]')
first_token = tf.constant([[start_index]] * example_output_tokens.shape[0])
# Run the decoder
dec_result, dec_state = decoder(
inputs = DecoderInput(new_tokens=first_token,
enc_output=example_enc_output,
mask=(example_tokens != 0)),
state = example_enc_state
)
print(f'logits shape: (batch_size, t, output_vocab_size) {dec_result.logits.shape}')
print(f'state shape: (batch_size, dec_units) {dec_state.shape}')
"""
Explanation: The decoder takes 4 inputs.
new_tokens - The last token generated. Initialize the decoder with the "[START]" token.
enc_output - Generated by the Encoder.
mask - A boolean tensor indicating where tokens != 0
state - The previous state output from the decoder (the internal state
of the decoder's RNN). Pass None to zero-initialize it. The original
paper initializes it from the encoder's final RNN state.
End of explanation
"""
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
"""
Explanation: Sample a token according to the logits:
End of explanation
"""
vocab = np.array(output_text_processor.get_vocabulary())
first_word = vocab[sampled_token.numpy()]
first_word[:5]
"""
Explanation: Decode the token as the first word of the output:
End of explanation
"""
dec_result, dec_state = decoder(
DecoderInput(sampled_token,
example_enc_output,
mask=(example_tokens != 0)),
state=dec_state)
sampled_token = tf.random.categorical(dec_result.logits[:, 0, :], num_samples=1)
first_word = vocab[sampled_token.numpy()]
first_word[:5]
"""
Explanation: Now use the decoder to generate a second set of logits.
Pass the same enc_output and mask, these haven't changed.
Pass the sampled token as new_tokens.
Pass the decoder_state the decoder returned last time, so the RNN continues with a memory of where it left off last time.
End of explanation
"""
class MaskedLoss(tf.keras.losses.Loss):
def __init__(self):
self.name = 'masked_loss'
self.loss = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def __call__(self, y_true, y_pred):
shape_checker = ShapeChecker()
shape_checker(y_true, ('batch', 't'))
shape_checker(y_pred, ('batch', 't', 'logits'))
# Calculate the loss for each item in the batch.
loss = self.loss(y_true, y_pred)
shape_checker(loss, ('batch', 't'))
# Mask off the losses on padding.
mask = tf.cast(y_true != 0, tf.float32)
shape_checker(mask, ('batch', 't'))
loss *= mask
# Return the total.
return tf.reduce_sum(loss)
"""
Explanation: Training
Now that you have all the model components, it's time to start training the model. You'll need:
A loss function and optimizer to perform the optimization.
A training step function defining how to update the model for each input/target batch.
A training loop to drive the training and save checkpoints.
Define the loss function
End of explanation
"""
class TrainTranslator(tf.keras.Model):
def __init__(self, embedding_dim, units,
input_text_processor,
output_text_processor,
use_tf_function=True):
super().__init__()
# Build the encoder and decoder
encoder = Encoder(input_text_processor.vocabulary_size(),
embedding_dim, units)
decoder = Decoder(output_text_processor.vocabulary_size(),
embedding_dim, units)
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.use_tf_function = use_tf_function
self.shape_checker = ShapeChecker()
def train_step(self, inputs):
self.shape_checker = ShapeChecker()
if self.use_tf_function:
return self._tf_train_step(inputs)
else:
return self._train_step(inputs)
"""
Explanation: Implement the training step
Start with a model class, the training process will be implemented as the train_step method on this model. See Customizing fit for details.
Here the train_step method is a wrapper around the _train_step implementation which will come later. This wrapper includes a switch to turn on and off tf.function compilation, to make debugging easier.
End of explanation
"""
def _preprocess(self, input_text, target_text):
self.shape_checker(input_text, ('batch',))
self.shape_checker(target_text, ('batch',))
# Convert the text to token IDs
input_tokens = self.input_text_processor(input_text)
target_tokens = self.output_text_processor(target_text)
self.shape_checker(input_tokens, ('batch', 's'))
self.shape_checker(target_tokens, ('batch', 't'))
# Convert IDs to masks.
input_mask = input_tokens != 0
self.shape_checker(input_mask, ('batch', 's'))
target_mask = target_tokens != 0
self.shape_checker(target_mask, ('batch', 't'))
return input_tokens, input_mask, target_tokens, target_mask
TrainTranslator._preprocess = _preprocess
"""
Explanation: Overall the implementation for the Model.train_step method is as follows:
Receive a batch of input_text, target_text from the tf.data.Dataset.
Convert those raw text inputs to token-embeddings and masks.
Run the encoder on the input_tokens to get the encoder_output and encoder_state.
Initialize the decoder state and loss.
Loop over the target_tokens:
Run the decoder one step at a time.
Calculate the loss for each step.
Accumulate the average loss.
Calculate the gradient of the loss and use the optimizer to apply updates to the model's trainable_variables.
The _preprocess method, added below, implements steps #1 and #2:
End of explanation
"""
def _train_step(self, inputs):
input_text, target_text = inputs
(input_tokens, input_mask,
target_tokens, target_mask) = self._preprocess(input_text, target_text)
max_target_length = tf.shape(target_tokens)[1]
with tf.GradientTape() as tape:
# Encode the input
enc_output, enc_state = self.encoder(input_tokens)
self.shape_checker(enc_output, ('batch', 's', 'enc_units'))
self.shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder's state to the encoder's final state.
# This only works if the encoder and decoder have the same number of
# units.
dec_state = enc_state
loss = tf.constant(0.0)
for t in tf.range(max_target_length-1):
# Pass in two tokens from the target sequence:
# 1. The current input to the decoder.
# 2. The target for the decoder's next prediction.
new_tokens = target_tokens[:, t:t+2]
step_loss, dec_state = self._loop_step(new_tokens, input_mask,
enc_output, dec_state)
loss = loss + step_loss
# Average the loss over all non padding tokens.
average_loss = loss / tf.reduce_sum(tf.cast(target_mask, tf.float32))
# Apply an optimization step
variables = self.trainable_variables
gradients = tape.gradient(average_loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
# Return a dict mapping metric names to current value
return {'batch_loss': average_loss}
TrainTranslator._train_step = _train_step
"""
Explanation: The _train_step method, added below, handles the remaining steps except for actually running the decoder:
End of explanation
"""
def _loop_step(self, new_tokens, input_mask, enc_output, dec_state):
input_token, target_token = new_tokens[:, 0:1], new_tokens[:, 1:2]
# Run the decoder one step.
decoder_input = DecoderInput(new_tokens=input_token,
enc_output=enc_output,
mask=input_mask)
dec_result, dec_state = self.decoder(decoder_input, state=dec_state)
self.shape_checker(dec_result.logits, ('batch', 't1', 'logits'))
self.shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
self.shape_checker(dec_state, ('batch', 'dec_units'))
# `self.loss` returns the total for non-padded tokens
y = target_token
y_pred = dec_result.logits
step_loss = self.loss(y, y_pred)
return step_loss, dec_state
TrainTranslator._loop_step = _loop_step
"""
Explanation: The _loop_step method, added below, executes the decoder and calculates the incremental loss and new decoder state (dec_state).
End of explanation
"""
translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
use_tf_function=False)
# Configure the loss and optimizer
translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
"""
Explanation: Test the training step
Build a TrainTranslator, and configure it for training using the Model.compile method:
End of explanation
"""
np.log(output_text_processor.vocabulary_size())
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
"""
Explanation: Test out the train_step. For a text model like this the loss should start near:
End of explanation
"""
@tf.function(input_signature=[[tf.TensorSpec(dtype=tf.string, shape=[None]),
tf.TensorSpec(dtype=tf.string, shape=[None])]])
def _tf_train_step(self, inputs):
return self._train_step(inputs)
TrainTranslator._tf_train_step = _tf_train_step
translator.use_tf_function = True
"""
Explanation: While it's easier to debug without a tf.function, wrapping the step in one does give a performance boost. So now that the _train_step method is working, try the tf.function-wrapped _tf_train_step to maximize performance while training:
End of explanation
"""
translator.train_step([example_input_batch, example_target_batch])
"""
Explanation: The first call will be slow, because it traces the function.
End of explanation
"""
%%time
for n in range(10):
print(translator.train_step([example_input_batch, example_target_batch]))
print()
"""
Explanation: But after that it's usually 2-3x faster than the eager train_step method:
End of explanation
"""
losses = []
for n in range(100):
print('.', end='')
logs = translator.train_step([example_input_batch, example_target_batch])
losses.append(logs['batch_loss'].numpy())
print()
plt.plot(losses)
"""
Explanation: A good test of a new model is to see that it can overfit a single batch of input. Try it; the loss should quickly go to zero:
End of explanation
"""
train_translator = TrainTranslator(
embedding_dim, units,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor)
# Configure the loss and optimizer
train_translator.compile(
optimizer=tf.optimizers.Adam(),
loss=MaskedLoss(),
)
"""
Explanation: Now that you're confident that the training step is working, build a fresh copy of the model to train from scratch:
End of explanation
"""
class BatchLogs(tf.keras.callbacks.Callback):
def __init__(self, key):
self.key = key
self.logs = []
def on_train_batch_end(self, n, logs):
self.logs.append(logs[self.key])
batch_loss = BatchLogs('batch_loss')
train_translator.fit(dataset, epochs=3,
callbacks=[batch_loss])
plt.plot(batch_loss.logs)
plt.ylim([0, 3])
plt.xlabel('Batch #')
plt.ylabel('CE/token')
"""
Explanation: Train the model
While there's nothing wrong with writing your own custom training loop, implementing the Model.train_step method, as in the previous section, allows you to run Model.fit and avoid rewriting all that boiler-plate code.
This tutorial only trains for a couple of epochs, so use a callbacks.Callback to collect the history of batch losses, for plotting:
End of explanation
"""
class Translator(tf.Module):
def __init__(self, encoder, decoder, input_text_processor,
output_text_processor):
self.encoder = encoder
self.decoder = decoder
self.input_text_processor = input_text_processor
self.output_text_processor = output_text_processor
self.output_token_string_from_index = (
tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(),
mask_token='',
invert=True))
# The output should never generate padding, unknown, or start.
index_from_string = tf.keras.layers.StringLookup(
vocabulary=output_text_processor.get_vocabulary(), mask_token='')
token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()
    token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
token_mask[np.array(token_mask_ids)] = True
self.token_mask = token_mask
self.start_token = index_from_string(tf.constant('[START]'))
self.end_token = index_from_string(tf.constant('[END]'))
translator = Translator(
encoder=train_translator.encoder,
decoder=train_translator.decoder,
input_text_processor=input_text_processor,
output_text_processor=output_text_processor,
)
"""
Explanation: The visible jumps in the plot are at the epoch boundaries.
Translate
Now that the model is trained, implement a function to execute the full text => text translation.
For this the model needs to invert the text => token IDs mapping provided by the output_text_processor. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow.
Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
End of explanation
"""
def tokens_to_text(self, result_tokens):
shape_checker = ShapeChecker()
shape_checker(result_tokens, ('batch', 't'))
result_text_tokens = self.output_token_string_from_index(result_tokens)
shape_checker(result_text_tokens, ('batch', 't'))
result_text = tf.strings.reduce_join(result_text_tokens,
axis=1, separator=' ')
shape_checker(result_text, ('batch'))
result_text = tf.strings.strip(result_text)
shape_checker(result_text, ('batch',))
return result_text
Translator.tokens_to_text = tokens_to_text
"""
Explanation: Convert token IDs to text
The first method to implement is tokens_to_text which converts from token IDs to human readable text.
End of explanation
"""
example_output_tokens = tf.random.uniform(
shape=[5, 2], minval=0, dtype=tf.int64,
maxval=output_text_processor.vocabulary_size())
translator.tokens_to_text(example_output_tokens).numpy()
"""
Explanation: Input some random token IDs and see what it generates:
End of explanation
"""
def sample(self, logits, temperature):
shape_checker = ShapeChecker()
# 't' is usually 1 here.
shape_checker(logits, ('batch', 't', 'vocab'))
shape_checker(self.token_mask, ('vocab',))
token_mask = self.token_mask[tf.newaxis, tf.newaxis, :]
shape_checker(token_mask, ('batch', 't', 'vocab'), broadcast=True)
# Set the logits for all masked tokens to -inf, so they are never chosen.
logits = tf.where(self.token_mask, -np.inf, logits)
if temperature == 0.0:
new_tokens = tf.argmax(logits, axis=-1)
else:
logits = tf.squeeze(logits, axis=1)
new_tokens = tf.random.categorical(logits/temperature,
num_samples=1)
shape_checker(new_tokens, ('batch', 't'))
return new_tokens
Translator.sample = sample
"""
Explanation: Sample from the decoder's predictions
This function takes the decoder's logit outputs and samples token IDs from that distribution:
End of explanation
"""
example_logits = tf.random.normal([5, 1, output_text_processor.vocabulary_size()])
example_output_tokens = translator.sample(example_logits, temperature=1.0)
example_output_tokens
"""
Explanation: Test run this function on some random inputs:
End of explanation
"""
def translate_unrolled(self,
input_text, *,
max_length=50,
return_attention=True,
temperature=1.0):
batch_size = tf.shape(input_text)[0]
input_tokens = self.input_text_processor(input_text)
enc_output, enc_state = self.encoder(input_tokens)
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
result_tokens = []
attention = []
done = tf.zeros([batch_size, 1], dtype=tf.bool)
for _ in range(max_length):
dec_input = DecoderInput(new_tokens=new_tokens,
enc_output=enc_output,
mask=(input_tokens!=0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
attention.append(dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens.append(new_tokens)
if tf.executing_eagerly() and tf.reduce_all(done):
break
  # Convert the list of generated token ids to a list of strings.
result_tokens = tf.concat(result_tokens, axis=-1)
result_text = self.tokens_to_text(result_tokens)
if return_attention:
attention_stack = tf.concat(attention, axis=1)
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_unrolled
"""
Explanation: Implement the translation loop
Here is a complete implementation of the text to text translation loop.
This implementation collects the results into Python lists before using tf.concat to join them into tensors.
It statically unrolls the graph out to max_length iterations, which is fine with eager execution in Python.
End of explanation
"""
%%time
input_text = tf.constant([
'hace mucho frio aqui.', # "It's really cold here."
    'Esta es mi vida.', # "This is my life."
])
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
"""
Explanation: Run it on a simple input:
End of explanation
"""
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
"""
Explanation: If you want to export this model you'll need to wrap this method in a tf.function. This basic implementation has a few issues if you try to do that:
The resulting graphs are very large and take a few seconds to build, save or load.
You can't break from a statically unrolled loop, so it will always run max_length iterations, even if all the outputs are done. But even then it's marginally faster than eager execution.
End of explanation
"""
%%time
result = translator.tf_translate(
input_text = input_text)
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
#@title [Optional] Use a symbolic loop
def translate_symbolic(self,
input_text,
*,
max_length=50,
return_attention=True,
temperature=1.0):
shape_checker = ShapeChecker()
shape_checker(input_text, ('batch',))
batch_size = tf.shape(input_text)[0]
# Encode the input
input_tokens = self.input_text_processor(input_text)
shape_checker(input_tokens, ('batch', 's'))
enc_output, enc_state = self.encoder(input_tokens)
shape_checker(enc_output, ('batch', 's', 'enc_units'))
shape_checker(enc_state, ('batch', 'enc_units'))
# Initialize the decoder
dec_state = enc_state
new_tokens = tf.fill([batch_size, 1], self.start_token)
shape_checker(new_tokens, ('batch', 't1'))
# Initialize the accumulators
result_tokens = tf.TensorArray(tf.int64, size=1, dynamic_size=True)
attention = tf.TensorArray(tf.float32, size=1, dynamic_size=True)
done = tf.zeros([batch_size, 1], dtype=tf.bool)
shape_checker(done, ('batch', 't1'))
for t in tf.range(max_length):
dec_input = DecoderInput(
new_tokens=new_tokens, enc_output=enc_output, mask=(input_tokens != 0))
dec_result, dec_state = self.decoder(dec_input, state=dec_state)
shape_checker(dec_result.attention_weights, ('batch', 't1', 's'))
attention = attention.write(t, dec_result.attention_weights)
new_tokens = self.sample(dec_result.logits, temperature)
shape_checker(dec_result.logits, ('batch', 't1', 'vocab'))
shape_checker(new_tokens, ('batch', 't1'))
# If a sequence produces an `end_token`, set it `done`
done = done | (new_tokens == self.end_token)
# Once a sequence is done it only produces 0-padding.
new_tokens = tf.where(done, tf.constant(0, dtype=tf.int64), new_tokens)
# Collect the generated tokens
result_tokens = result_tokens.write(t, new_tokens)
if tf.reduce_all(done):
break
# Convert the list of generated token ids to a list of strings.
result_tokens = result_tokens.stack()
shape_checker(result_tokens, ('t', 'batch', 't0'))
result_tokens = tf.squeeze(result_tokens, -1)
result_tokens = tf.transpose(result_tokens, [1, 0])
shape_checker(result_tokens, ('batch', 't'))
result_text = self.tokens_to_text(result_tokens)
shape_checker(result_text, ('batch',))
if return_attention:
attention_stack = attention.stack()
shape_checker(attention_stack, ('t', 'batch', 't1', 's'))
attention_stack = tf.squeeze(attention_stack, 2)
shape_checker(attention_stack, ('t', 'batch', 's'))
attention_stack = tf.transpose(attention_stack, [1, 0, 2])
shape_checker(attention_stack, ('batch', 't', 's'))
return {'text': result_text, 'attention': attention_stack}
else:
return {'text': result_text}
Translator.translate = translate_symbolic
"""
Explanation: Run the tf.function once to compile it:
End of explanation
"""
%%time
result = translator.translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
"""
Explanation: The initial implementation used python lists to collect the outputs. This uses tf.range as the loop iterator, allowing tf.autograph to convert the loop. The biggest change in this implementation is the use of tf.TensorArray instead of python list to accumulate tensors. tf.TensorArray is required to collect a variable number of tensors in graph mode.
With eager execution this implementation performs on par with the original:
End of explanation
"""
@tf.function(input_signature=[tf.TensorSpec(dtype=tf.string, shape=[None])])
def tf_translate(self, input_text):
return self.translate(input_text)
Translator.tf_translate = tf_translate
"""
Explanation: But when you wrap it in a tf.function you'll notice two differences.
End of explanation
"""
%%time
result = translator.tf_translate(
input_text = input_text)
"""
Explanation: First: Graph creation is much faster (~10x), since it doesn't create max_length copies of the model.
End of explanation
"""
%%time
result = translator.tf_translate(
input_text = input_text)
print(result['text'][0].numpy().decode())
print(result['text'][1].numpy().decode())
print()
"""
Explanation: Second: The compiled function is much faster on small inputs (5x on this example), because it can break out of the loop.
End of explanation
"""
a = result['attention'][0]
print(np.sum(a, axis=-1))
"""
Explanation: Visualize the process
The attention weights returned by the translate method show where the model was "looking" when it generated each output token.
So the sum of the attention over the input should return all ones:
End of explanation
"""
_ = plt.bar(range(len(a[0, :])), a[0, :])
"""
Explanation: Here is the attention distribution for the first output step of the first example. Note how the attention is now much more focused than it was for the untrained model:
End of explanation
"""
plt.imshow(np.array(a), vmin=0.0)
"""
Explanation: Since there is some rough alignment between the input and output words, you expect the attention to be focused near the diagonal:
End of explanation
"""
#@title Labeled attention plots
def plot_attention(attention, sentence, predicted_sentence):
sentence = tf_lower_and_split_punct(sentence).numpy().decode().split()
predicted_sentence = predicted_sentence.numpy().decode().split() + ['[END]']
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1)
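  # Trim the attention matrix to the number of predicted tokens (rows) and input tokens (columns).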
attention = attention[:len(predicted_sentence), :len(sentence)]
ax.matshow(attention, cmap='viridis', vmin=0.0)
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax.set_xlabel('Input text')
ax.set_ylabel('Output text')
plt.suptitle('Attention weights')
i=0
plot_attention(result['attention'][i], input_text[i], result['text'][i])
"""
Explanation: Here is some code to make a better attention plot:
End of explanation
"""
%%time
three_input_text = tf.constant([
# This is my life.
'Esta es mi vida.',
# Are they still home?
    '¿Todavía están en casa?',
    # Try to find out.
'Tratar de descubrir.',
])
result = translator.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
result['text']
i = 0
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 1
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
i = 2
plot_attention(result['attention'][i], three_input_text[i], result['text'][i])
"""
Explanation: Translate a few more sentences and plot them:
End of explanation
"""
long_input_text = tf.constant([inp[-1]])
import textwrap
print('Expected output:\n', '\n'.join(textwrap.wrap(targ[-1])))
result = translator.tf_translate(long_input_text)
i = 0
plot_attention(result['attention'][i], long_input_text[i], result['text'][i])
_ = plt.suptitle('This never works')
"""
Explanation: The short sentences often work well, but if the input is too long the model literally loses focus and stops providing reasonable predictions. There are two main reasons for this:
The model was trained with teacher-forcing feeding the correct token at each step, regardless of the model's predictions. The model could be made more robust if it were sometimes fed its own predictions.
The model only has access to its previous output through the RNN state. If the RNN state gets corrupted, there's no way for the model to recover. Transformers solve this by using self-attention in the encoder and decoder.
End of explanation
"""
tf.saved_model.save(translator, 'translator',
signatures={'serving_default': translator.tf_translate})
reloaded = tf.saved_model.load('translator')
result = reloaded.tf_translate(three_input_text)
%%time
result = reloaded.tf_translate(three_input_text)
for tr in result['text']:
print(tr.numpy().decode())
print()
"""
Explanation: Export
Once you have a model you're satisfied with, you might want to export it as a tf.saved_model for use outside of the Python program that created it.
Since the model is a subclass of tf.Module and all the functionality for export is compiled in a tf.function, the model should export cleanly with tf.saved_model.save:
Now that the function has been traced, it can be exported using saved_model.save:
End of explanation
"""
|
rhancockn/MRS | ipynb/003-aligning-with-anatomy.ipynb | mit | import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import os.path as op
import nibabel as nib
import MRS.data as mrd
import IPython.html.widgets as wdg
import IPython.display as display
mrs_nifti = nib.load(op.join(mrd.data_folder, '12_1_PROBE_MEGA_L_Occ.nii.gz'))
t1_nifti = nib.load(op.join(mrd.data_folder, '5062_2_1.nii.gz'))
"""
Explanation: Aligning MRS voxels with the anatomy
Several steps in the analysis and interpretation of the MRS data require knowledge of the anatomical location of the volume from which the MRS data were acquired. In particular, we would like to know how much of the volume contains gray matter relative to other tissue components, such as white matter, CSF, etc. In order to infer this, we need to acquire a T1-weighted MRI scan in the same session and (assuming the subject hasn't moved too much) use the segmentation of the T1w image into different tissue types (e.g. using Freesurfer).
However, in order to do that, we first need to align the MRS voxel with the T1w data, so that we can extract these quantities.
End of explanation
"""
mrs_aff = mrs_nifti.get_affine()
t1_aff = t1_nifti.get_affine()
print("The affine transform for the MRS data is:")
print(mrs_aff)
print("The affine transform for the T1 data is:")
print(t1_aff)
"""
Explanation: In order to be able to align the files with respect to each other, they both need to encode an affine transformation relative to the scanner space. For a very thorough introduction to these transformations and their utility, see this tutorial.
End of explanation
"""
composed_affine = np.dot(np.linalg.pinv(t1_aff), mrs_aff)
"""
Explanation: If you read the aforementioned tutorial, this will make sense. The diagonal of the top left 3 x 3 matrix encodes the resolution of the voxels used in each of the acquisitions (in mm). The MRS data has a single 2.5 x 2.5 x 2.5 cm isotropic voxel, and the T1 has (approximately) 0.9 x 0.9 x 0.9 mm isotropic voxels. They were both acquired without any rotation relative to the scanner coordinate system, which is why the off-diagonal terms of the top left 3 x 3 matrix are all zeros. The 4th column of each of these matrices encodes the xyz shift (again, in mm) relative to the scanner isocenter.
Composing these two transformations together tells us how to align the two volumes relative to each other. In particular, we might ask where in the T1 coordinate system the center of the MRS voxel is. Since we multiply the MRS affine by the (pseudo)inverse of the T1 affine, the composed transform maps MRS voxel coordinates into T1 voxel coordinates.
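Written out with homogeneous coordinates $v = [x, y, z, 1]^T$, the mapping is $v_{T1} = A_{T1}^{-1}\, A_{MRS}\, v_{MRS}$, which is exactly what composing the two affines with np.dot(np.linalg.pinv(t1_aff), mrs_aff) computes in the next cell.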
End of explanation
"""
mrs_center = [0,0,0,1]
t1_center = np.round(np.dot(composed_affine, mrs_center)).astype(int)
mrs_corners = [[-0.5, -0.5, -0.5, 1],
[-0.5, -0.5, 0.5, 1],
[-0.5, 0.5, -0.5, 1],
[-0.5, 0.5, 0.5, 1],
[ 0.5, -0.5, -0.5, 1],
[ 0.5, -0.5, 0.5, 1],
[ 0.5, 0.5, -0.5, 1],
[ 0.5, 0.5, 0.5, 1]]
t1_corners = [np.round(np.dot(composed_affine, c)).astype(int) for c in mrs_corners]
t1_corners
"""
Explanation: This allows us to compute the location of the center of the MRS voxel in the T1 volume coordinates, and the locations of the corners of the voxel:
End of explanation
"""
t1_data = t1_nifti.get_data().squeeze()
mrs_roi = np.ones_like(t1_data) * np.nan
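# These index ranges were filled in by hand from the voxel corner coordinates computed above (see t1_corners).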
mrs_roi[144:172, 176:204, 78:106] = t1_data[144:172, 176:204, 78:106]
"""
Explanation: Using this information, we can manually create a volume that only contains the T1-weighted data in the MRS ROI:
End of explanation
"""
def show_voxel(x=t1_center[0], y=t1_center[1], z=t1_center[2]):
fig = plt.figure()
ax = fig.add_subplot(221)
ax.axis('off')
ax.imshow(np.rot90(t1_data[:, :, z]), matplotlib.cm.bone)
ax.imshow(np.rot90(mrs_roi[:, :, z]), matplotlib.cm.jet)
ax.plot([x, x], [0, t1_data.shape[0]], color='w')
ax.plot([0, t1_data.shape[1]], [y, y], color='w')
ax.set_ylim([0, t1_data.shape[0]])
ax.set_xlim([0, t1_data.shape[1]])
ax = fig.add_subplot(222)
ax.axis('off')
ax.imshow(np.rot90(t1_data[:, -y, :]), matplotlib.cm.bone)
ax.imshow(np.rot90(mrs_roi[:, -y, :]), matplotlib.cm.jet)
ax.plot([x, x], [0, t1_data.shape[1]], color='w')
ax.plot([0, t1_data.shape[1]], [z, z], color='w')
ax.set_xlim([0, t1_data.shape[0]])
ax.set_ylim([t1_data.shape[2], 0])
ax = fig.add_subplot(223)
ax.axis('off')
ax.imshow(np.rot90(t1_data[x, :, :]), matplotlib.cm.bone)
ax.imshow(np.rot90(mrs_roi[x, :, :]), matplotlib.cm.jet)
ax.plot([t1_data.shape[1]-y, t1_data.shape[1]-y], [0, t1_data.shape[1]], color='w')
ax.plot([0, t1_data.shape[1]], [z, z], color='w')
ax.set_xlim([0, t1_data.shape[1]])
ax.set_ylim([t1_data.shape[2], 0])
fig.set_size_inches(10, 10)
return fig
def voxel_viewer(t1_data, mrs_roi):
pb_widget = wdg.interactive(show_voxel,
t1_data = wdg.fixed(t1_data),
mrs_roi = wdg.fixed(mrs_roi),
x=wdg.IntSliderWidget(min=0, max=t1_data.shape[0]-1, value=155),
y=wdg.IntSliderWidget(min=0, max=t1_data.shape[1]-1, value=65),
z=wdg.IntSliderWidget(min=0, max=t1_data.shape[2]-1, value=92)
)
display.display(pb_widget)
voxel_viewer(t1_data, mrs_roi)
"""
Explanation: To view this, we will create a rather rough orthographic viewer of the T1 data, using IPython's interactive widget system. We add the data in the MRS ROI using a different color map, so that we can see where it is in the context of the anatomy.
End of explanation
"""
|
SeismicPi/SeismicPi | Lessons/Lesson 3/Lesson 3.ipynb | mit | def double(x):
return(2*x);
"""
Explanation: Lesson 3
This lesson will review linear equations, briefly discuss kinematics, and show how we can write functions in Python to reuse code.
Linear Equations
Recall that the equation of a line is $y(x) = mx + c$, where $m$ is the slope of the line and $c$ is the y-intercept. See if you can find the equation of the line below.
<img src="ygraph.png" alt="Drawing" style="width: 300px;"/>
The slope of the line is $2$ and the y-intercept is $4$, so the equation is $y(x) = 2x+4$. If we are told that a line goes through two points $(x_1, y_1)$ and $(x_2,y_2)$, we can also find the equation of the line. Since the slope is defined as rise over run, $m = \frac{y_2-y_1}{x_2-x_1}$. Now that we have found the slope of the line, how can we find the y-intercept? The line goes through $(x_1, y_1)$, so $y_1 = mx_1 + c$ and hence $c = y_1-mx_1$. Alternatively, we also know that the line goes through $(x_2, y_2)$, so $y_2 = mx_2 + c$ and $c = y_2 - mx_2$. Now we have found both $m$ and $c$, so we know the equation of the line!
Example Problems
1. What is the slope of the line that goes through the points $(1,2)$ and $(3,4)$?
2. What is the y-intercept of the line that goes through $(1,2)$ and $(3,4)$?
3. What is the equation of the line that goes through the points $(1,2)$ and $(3,4)$?
Basic Kinematics
Kinematics is defined as the study of motion and one branch of it looks at the relationships between position, speed, and time.
The position of an object is where it is relative to an origin at a specific moment in time. If your school is two miles away from home, then at this moment your position, relative to your home, is two miles. When you leave school and decide to go home, your position will slowly decrease as you walk home until it reaches zero miles.
The speed of an object is how fast it is moving at a specific moment in time. If you walk from your house to school in one hour, then your speed would have been two miles per hour. If, instead, you decided to bike from home to school and it only took you half an hour, your speed would have been four miles per hour.
If you decide to skip school (please don't) to go the new arcade four miles away from your house, it will take you twice as long because you have to travel twice the distance. If you decide to walk there, it will take you two hours and if you bike there, it will take you one hour.
In general we have the following relationship: $position = speed \times time$, or $p(t) = st$. But wait, doesn't this look really similar to our linear equation above? Compare $p(t) = st + 0$ with $y(x) = mx + c$. It's just a line with a y-intercept of zero! The same properties hold as well. Since $s$ is the "slope" of this graph, we can calculate $s$ by taking the rise over run of the graph, or the change in position divided by the change in time. The time it takes to travel from one position to another is the change in position divided by the speed.
The equations are summarized below. $\Delta$ is the shorthand notation for "change in".
1. $p(t) = st$
2. $s = \frac{\Delta p}{\Delta t}$
3. $t = \frac{\Delta p}{s}$
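As a quick check using the walk home from school: your speed is $s = \frac{\Delta p}{\Delta t} = \frac{2 \text{ miles}}{1 \text{ hour}} = 2$ miles per hour, and a four-mile walk at that speed takes $t = \frac{\Delta p}{s} = \frac{4}{2} = 2$ hours.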
Suppose your friend lives four miles away from school, but in the opposite direction. If he walks just as quickly as you, how long will it take him to get home?
<img src="school.png" alt="Drawing" style="width: 750px;"/>
Because he lives four miles away and walks at two miles per hour, it will take him $\frac{4}{2} = 2$ hours. This is one hour after you arrive at your home (it takes you one hour to get home on foot). We define this as the time difference of arrival. However, if you lived four miles away and your friend lived two miles away, he would have gotten home one hour before you did, and the time difference of arrival would be negative (-1 hour). Notice that the time difference of arrival is also the extra distance your friend has to travel divided by his speed.
Problem
What would be the time difference of arrival if you lived 3 miles away from school and your friend also lived 3 miles away from school?
Functions in Python
In programming, a function refers to a procedure that does something for you, much like how in mathematics a function takes an "input" and returns an "output" after performing some operations on the input.
Below is a function that returns the double of an input;
End of explanation
"""
print double(1);
print double(2);
print double(3);
"""
Explanation: Look at some sample outputs below
End of explanation
"""
def print_double(x):
print(2*x);
"""
Explanation: We can define a function in Python with the def keyword, followed by the name of the function, a set of parentheses containing its inputs or parameters, and a :. The code that operates on the inputs goes in the lines that follow. It has the general format of
python
def name(parameters):
code;
Look at our definition of double again and notice that we return the value $2x$. We then print out double(1), which prints out the return value $2 \times 1$. However, it is not required for a function to return anything! A function is still valid even if it doesn't explicitly return anything. An example is shown below.
End of explanation
"""
print_double(1);
print_double(2);
print_double(3);
"""
Explanation: Whenever we call print_double, we ask the function to print $2x$ so we don't have to.
End of explanation
"""
def leet():
return 1337;
"""
Explanation: A function can also have multiple parameters, or no parameters at all. Remember, the parameters are the things that go in between the parentheses. If we want a function that takes in zero parameters and always returns $1337$, we just leave the parameter section blank.
End of explanation
"""
#Try what "print leet()" does.
#Enter your code here:
"""
Explanation: What do you think will happen if we run
python
print leet();
Try it below!
End of explanation
"""
#Write your function here
#Solution
def find_Slope(x1,y1,x2,y2):
return (y2-y1)/(x2-x1);
"""
Explanation: Recall that the slope of a line that goes through $(x_1, y_1)$ and $(x_2, y_2)$ is $\frac{y_2-y_1}{x_2-x_1}$. Write a function that takes in four parameters x1,y1,x2,y2 and finds the slope of the line that goes through the two points. Call it find_Slope.
End of explanation
"""
#Write your function here
#Solution
def TDOA(a, x, s):
return (a-2.0*x)/s;
"""
Explanation: Recall the example of you and your friend walking home from school. Now we will write a function that calculates the time difference of arrival in a more general setting. If your friend lives a distance of $a$ miles away from you and you both decide to hang out at a location $x$ during the weekend, what will be the time difference of arrival between you and your friend getting home? Both of you walk at the same speed $s$. Answer the questions below to help find the TDOA.
How far do you have to walk home?
How long does it take for you to walk home?
How far does your friend have to walk home?
How long will it take him to walk home?
What is the difference in these two times?
Write a python function below that takes in $a$ (friends house), $x$ (hangout location) and $s$ (the speed you two walk) and returns the TDOA. The function declaration should be
python
def TDOA(a, x, s):
End of explanation
"""
#Write your function here
#Solution
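#Solving t = (a - 2x)/s for x gives x = (a - s*t)/2, which is what the function below returns.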
def find_position(a, t, s):
return (a-s*t)/2.0;
"""
Explanation: Now we have a function that given a position finds the time difference of arrival. What if we want to find the position of your hangout spot given the TDOA? Write a function called find_position that takes in the position of your friend's house $a$, a time difference of arrival ($t$), and speed ($s$) and returns the position of your hangout location.
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 Google LLC
End of explanation
"""
!pip install --quiet neural-structured-learning
!pip install --quiet tensorflow-hub
"""
Explanation: Graph regularization for sentiment classification using synthesized graphs
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
</table>
Overview
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary classification, an important and widely applicable kind of machine learning problem.
We will demonstrate the use of graph regularization in this notebook by building a graph from the given input. The general recipe for building a graph-regularized model using the Neural Structured Learning (NSL) framework when the input does not contain an explicit graph is as follows:
Create embeddings for each text sample in the input. This can be done using pre-trained models such as word2vec, Swivel, BERT, etc.
Build a graph based on these embeddings by using a similarity metric such as the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples.
Generate training data from the above synthesized graph and sample features. The resulting training data will contain neighbor features in addition to the original node features.
Create a neural network as a base model using the Keras sequential, functional, or subclass API.
Wrap the base model with the GraphRegularization wrapper class, which is provided by the NSL framework, to create a new graph Keras model. This new model will include a graph regularization loss as the regularization term in its training objective.
Train and evaluate the graph Keras model.
Note: We expect that it will take readers about 1 hour to go through this tutorial.
Requirements
Install the Neural Structured Learning package.
Install tensorflow-hub.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import neural_structured_learning as nsl
import tensorflow as tf
import tensorflow_hub as hub
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print(
"GPU is",
"available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
"""
Explanation: Dependencies and imports
End of explanation
"""
imdb = tf.keras.datasets.imdb
(pp_train_data, pp_train_labels), (pp_test_data, pp_test_labels) = (
imdb.load_data(num_words=10000))
"""
Explanation: IMDB dataset
The IMDB dataset contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
In this tutorial, we will use a preprocessed version of the IMDB dataset.
Download the preprocessed IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset (or uses a cached copy if it has already been downloaded):
End of explanation
"""
print('Training entries: {}, labels: {}'.format(
len(pp_train_data), len(pp_train_labels)))
training_samples_count = len(pp_train_data)
"""
Explanation: The argument num_words=10000 keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the vocabulary manageable.
Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 indicates a negative review and 1 indicates a positive review.
End of explanation
"""
print(pp_train_data[0])
"""
Explanation: The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
End of explanation
"""
len(pp_train_data[0]), len(pp_train_data[1])
"""
Explanation: Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must have the same length, we will need to resolve this later.
End of explanation
"""
def build_reverse_word_index():
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first indices are reserved
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index['<PAD>'] = 0
word_index['<START>'] = 1
word_index['<UNK>'] = 2 # unknown
word_index['<UNUSED>'] = 3
return dict((value, key) for (key, value) in word_index.items())
reverse_word_index = build_reverse_word_index()
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
"""
Explanation: Convert the integers back to words
It may be useful to know how to convert integers back to the corresponding text. Here, we will create a helper function to query a dictionary object that contains the integer-to-string mapping:
End of explanation
"""
decode_review(pp_train_data[0])
"""
Explanation: Now we can use the decode_review function to display the text of the first review:
End of explanation
"""
!mkdir -p /tmp/imdb
"""
Explanation: Graph construction
Graph construction involves creating embeddings for text samples and then using a similarity function to compare the embeddings.
Before proceeding further, we first create a directory to store artifacts created by this tutorial.
End of explanation
"""
pretrained_embedding = 'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1'
hub_layer = hub.KerasLayer(
pretrained_embedding, input_shape=[], dtype=tf.string, trainable=True)
def _int64_feature(value):
"""Returns int64 tf.train.Feature."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=value.tolist()))
def _bytes_feature(value):
"""Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
def _float_feature(value):
"""Returns float tf.train.Feature."""
return tf.train.Feature(float_list=tf.train.FloatList(value=value.tolist()))
def create_embedding_example(word_vector, record_id):
"""Create tf.Example containing the sample's embedding and its ID."""
text = decode_review(word_vector)
# Shape = [batch_size,].
sentence_embedding = hub_layer(tf.reshape(text, shape=[-1,]))
# Flatten the sentence embedding back to 1-D.
sentence_embedding = tf.reshape(sentence_embedding, shape=[-1])
features = {
'id': _bytes_feature(str(record_id)),
'embedding': _float_feature(sentence_embedding.numpy())
}
return tf.train.Example(features=tf.train.Features(feature=features))
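# Writes one tf.train.Example (sample ID + Swivel embedding) per review to a TFRecord file.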
def create_embeddings(word_vectors, output_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(output_path) as writer:
for word_vector in word_vectors:
example = create_embedding_example(word_vector, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features containing embeddings for training data in
# TFRecord format.
create_embeddings(pp_train_data, '/tmp/imdb/embeddings.tfr', 0)
"""
Explanation: Create sample embeddings
We will use pretrained Swivel embeddings to create embeddings in the tf.train.Example format for each sample in the input. We will store the resulting embeddings in the TFRecord format along with an additional feature that represents the ID of each sample. This is important and will allow us to match sample embeddings with corresponding nodes in the graph later.
End of explanation
"""
graph_builder_config = nsl.configs.GraphBuilderConfig(
similarity_threshold=0.99, lsh_splits=32, lsh_rounds=15, random_seed=12345)
nsl.tools.build_graph_from_config(['/tmp/imdb/embeddings.tfr'],
'/tmp/imdb/graph_99.tsv',
graph_builder_config)
"""
Explanation: Build a graph
Now that we have the sample embeddings, we will use them to build a similarity graph: nodes in this graph will correspond to samples, and edges in this graph will correspond to similarity between pairs of samples.
Neural Structured Learning provides a graph building library that builds a graph based on sample embeddings. It uses cosine similarity as the similarity measure to compare embeddings and build edges between them. It also allows us to specify a similarity threshold, which can be used to discard dissimilar edges from the final graph. In this example, using 0.99 as the similarity threshold and 12345 as the random seed, we end up with a graph that has 429,415 bi-directional edges. Here we are using the graph builder's support for locality-sensitive hashing (LSH) to speed up graph building. For details on using the graph builder's LSH support, see the build_graph_from_config API documentation.
End of explanation
"""
!wc -l /tmp/imdb/graph_99.tsv
"""
Explanation: Each bi-directional edge is represented by two directed edges in the output TSV file, so the file contains 429,415 * 2 = 858,830 total lines:
End of explanation
"""
def create_example(word_vector, label, record_id):
"""Create tf.Example containing the sample's word vector, label, and ID."""
features = {
'id': _bytes_feature(str(record_id)),
'words': _int64_feature(np.asarray(word_vector)),
'label': _int64_feature(np.asarray([label])),
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_records(word_vectors, labels, record_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(record_path) as writer:
for word_vector, label in zip(word_vectors, labels):
example = create_example(word_vector, label, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features (word vectors and labels) for training and test
# data in TFRecord format.
next_record_id = create_records(pp_train_data, pp_train_labels,
'/tmp/imdb/train_data.tfr', 0)
create_records(pp_test_data, pp_test_labels, '/tmp/imdb/test_data.tfr',
next_record_id)
"""
Explanation: Note: Graph quality, and by extension embedding quality, is very important for graph regularization. While we have used Swivel embeddings in this notebook, using BERT embeddings, for instance, will likely capture review semantics more accurately. We encourage users to use embeddings of their choice as appropriate to their needs.
Sample features
We create sample features for our problem using the tf.train.Example format and persist them in the TFRecord format. Each sample will include the following three features:
id: The node ID of the sample.
words: An int64 list containing word IDs.
label: A singleton int64 identifying the target class of the review.
End of explanation
"""
nsl.tools.pack_nbrs(
'/tmp/imdb/train_data.tfr',
'',
'/tmp/imdb/graph_99.tsv',
'/tmp/imdb/nsl_train_data.tfr',
add_undirected_edges=True,
max_nbrs=3)
"""
Explanation: Augment training data with graph neighbors
Since we have the sample features and the synthesized graph, we can generate the augmented training data for Neural Structured Learning. The NSL framework provides a library for combining the graph and the sample features to produce the final training data for graph regularization. The resulting training data will include the original sample features as well as the features of their corresponding neighbors.
In this tutorial, we consider undirected edges and use a maximum of 3 neighbors per sample to augment the training data with graph neighbors.
End of explanation
"""
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
"""
Explanation: Base model
We are now ready to build a base model without graph regularization. To build this model, we can either use the embeddings that were used to build the graph, or we can learn new embeddings jointly with the classification task. For the purpose of this notebook, we will do the latter.
Global variables
End of explanation
"""
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 2
self.max_seq_length = 256
self.vocab_size = 10000
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 2
### model architecture
self.num_embedding_dims = 16
self.num_lstm_dims = 64
self.num_fc_units = 64
### training parameters
self.train_epochs = 10
self.batch_size = 128
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
"""
Explanation: Hyperparameters
We will use an instance of HParams to hold the various hyperparameters and constants used for training and evaluation. We briefly describe each of them below:
num_classes: There are 2 classes, positive and negative.
max_seq_length: This is the maximum number of words considered from each movie review in this example.
vocab_size: This is the size of the vocabulary considered for this example.
distance_type: This is the distance metric used to regularize each sample with its neighbors.
graph_regularization_multiplier: This controls the relative weight of the graph regularization term in the overall loss function.
num_neighbors: The number of neighbors used for graph regularization. This value has to be less than or equal to the max_nbrs argument used above when invoking nsl.tools.pack_nbrs.
num_fc_units: The number of units in the fully connected layer of the neural network.
train_epochs: The number of training epochs.
batch_size: Batch size used for training and evaluation.
eval_steps: The number of batches to process before deeming evaluation complete. If set to None, all instances in the test set are evaluated.
End of explanation
"""
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
def pad_sequence(sequence, max_seq_length):
"""Pads the input sequence (a `tf.SparseTensor`) to `max_seq_length`."""
pad_size = tf.maximum([0], max_seq_length - tf.shape(sequence)[0])
padded = tf.concat(
[sequence.values,
tf.fill((pad_size), tf.cast(0, sequence.dtype))],
axis=0)
# The input sequence may be larger than max_seq_length. Truncate down if
# necessary.
return tf.slice(padded, [0], [max_seq_length])
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a variable length word ID vector.
feature_spec = {
'words': tf.io.VarLenFeature(tf.int64),
'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above during training.
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.VarLenFeature(tf.int64)
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
# Since the 'words' feature is a variable length word vector, we pad it to a
# constant maximum length based on HPARAMS.max_seq_length
features['words'] = pad_sequence(features['words'], HPARAMS.max_seq_length)
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
features[nbr_feature_key] = pad_sequence(features[nbr_feature_key],
HPARAMS.max_seq_length)
labels = features.pop('label')
return features, labels
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset('/tmp/imdb/nsl_train_data.tfr', True)
test_dataset = make_dataset('/tmp/imdb/test_data.tfr')
"""
Explanation: Prepare the data
The reviews, which are arrays of integers, must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:
Convert the arrays into vectors of 0s and 1s indicating word occurrence, similar to a one-hot encoding. For example, the sequence [3, 5] would become a 10000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then, make this the first layer in our network, a Dense layer, which can handle floating point vector data. This approach is memory intensive, though, requiring a matrix of size num_words * num_reviews.
Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape max_length * num_reviews. We can use an embedding layer capable of handling this shape as the first layer in our network.
In this tutorial, we will use the second approach.
Since the movie reviews must be the same length, we will use the pad_sequence function defined below to standardize the lengths.
End of explanation
"""
# This function exists as an alternative to the bi-LSTM model used in this
# notebook.
def make_feed_forward_model():
"""Builds a simple 2 layer feed forward neural network."""
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs)
pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(embedding_layer)
dense_layer = tf.keras.layers.Dense(16, activation='relu')(pooling_layer)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
def make_bilstm_model():
"""Builds a bi-directional LSTM model."""
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size,
HPARAMS.num_embedding_dims)(
inputs)
lstm_layer = tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(HPARAMS.num_lstm_dims))(
embedding_layer)
dense_layer = tf.keras.layers.Dense(
HPARAMS.num_fc_units, activation='relu')(
lstm_layer)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
# Feel free to use an architecture of your choice.
model = make_bilstm_model()
model.summary()
"""
Explanation: Build the model
A neural network is created by stacking layers. This requires two main architectural decisions:
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of arrays of word indices. The labels to predict are either 0 or 1.
We will use a bi-directional LSTM as our base model in this tutorial.
End of explanation
"""
model.compile(
optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
"""
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is an Input layer which takes the integer-encoded vocabulary.
The next layer is an Embedding layer, which takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding).
Next, a bidirectional LSTM layer returns a fixed-length output vector for each example.
This fixed-length output vector is piped through a fully connected (Dense) layer with 64 hidden units.
The last layer is densely connected with a single output node. Using the sigmoid activation function, this value is a float between 0 and 1, representing a probability, or confidence level.
Hidden units
The above model has two intermediate or "hidden" layers between the input and output (excluding the Embedding layer). The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, it is the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns, i.e., patterns that improve performance on the training data but not on the test data. This is called overfitting.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we will use the binary_crossentropy loss function.
"""
validation_fraction = 0.9
validation_size = int(validation_fraction *
int(training_samples_count / HPARAMS.batch_size))
print(validation_size)
validation_dataset = train_dataset.take(validation_size)
train_dataset = train_dataset.skip(validation_size)
"""
Explanation: Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. To do this, set apart a fraction of the original training data to create a validation set. (Why not use the test set now? Because our goal is to develop and tune the model using only the training data, and then use the test data just once to evaluate accuracy.)
In this tutorial, we take roughly 10% of the initial training samples (10% of 25,000) as labeled data for training, and the remainder as validation data. Since the initial train/test split was 50/50 (25,000 samples each), the effective train/validation/test split we now have is 5/45/50.
Note that 'train_dataset' has already been batched and shuffled.
End of explanation
"""
history = model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
"""
Explanation: Train the model
Train the model in mini-batches. While training, monitor the model's loss and accuracy on the validation set:
End of explanation
"""
results = model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(results)
"""
Explanation: Evaluate the model
Let's see how the model performs. Two values are returned: the loss (a number representing our error, where lower is better) and the accuracy.
End of explanation
"""
history_dict = history.history
history_dict.keys()
"""
Explanation: Create a graph of accuracy/loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training:
End of explanation
"""
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
"""
Explanation: There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation
"""
# Build a new base LSTM model.
base_reg_model = make_bilstm_model()
# Wrap the base model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
"""
Explanation: Notice that the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
Graph regularization
We are now ready to try graph regularization using the base model that we built above. We will use the GraphRegularization wrapper class provided by the Neural Structured Learning framework to wrap the base (bi-LSTM) model to include graph regularization. The rest of the steps for training and evaluating the graph-regularized model are similar to those of the base model.
Create a graph-regularized model
To assess the incremental benefit of graph regularization, we create a new base model instance. This is because model has already been trained for a few iterations, and reusing that trained model to create the graph-regularized model would not be a fair comparison with model.
End of explanation
"""
graph_reg_history = graph_reg_model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
"""
Explanation: Train the model.
End of explanation
"""
graph_reg_results = graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(graph_reg_results)
"""
Explanation: Evaluate the model
End of explanation
"""
graph_reg_history_dict = graph_reg_history.history
graph_reg_history_dict.keys()
"""
Explanation: Create a graph of accuracy/loss over time
End of explanation
"""
acc = graph_reg_history_dict['accuracy']
val_acc = graph_reg_history_dict['val_accuracy']
loss = graph_reg_history_dict['loss']
graph_loss = graph_reg_history_dict['scaled_graph_loss']
val_loss = graph_reg_history_dict['val_loss']
epochs = range(1, len(acc) + 1)
plt.clf() # clear figure
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-gD" is for solid green line with diamond markers.
plt.plot(epochs, graph_loss, '-gD', label='Training graph loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
"""
Explanation: There are a total of five entries in the dictionary: training loss, training accuracy, training graph loss, validation loss, and validation accuracy. We can plot them all together for comparison. Note that the graph loss is only computed during training.
End of explanation
"""
# Accuracy values for both the Bi-LSTM model and the feed forward NN model have
# been precomputed for the following supervision ratios.
supervision_ratios = [0.3, 0.15, 0.05, 0.03, 0.02, 0.01, 0.005]
model_tags = ['Bi-LSTM model', 'Feed Forward NN model']
base_model_accs = [[84, 84, 83, 80, 65, 52, 50], [87, 86, 76, 74, 67, 52, 51]]
graph_reg_model_accs = [[84, 84, 83, 83, 65, 63, 50],
[87, 86, 80, 75, 67, 52, 50]]
plt.clf() # clear figure
fig, axes = plt.subplots(1, 2)
fig.set_size_inches((12, 5))
for ax, model_tag, base_model_acc, graph_reg_model_acc in zip(
axes, model_tags, base_model_accs, graph_reg_model_accs):
# "-r^" is for solid red line with triangle markers.
ax.plot(base_model_acc, '-r^', label='Base model')
# "-gD" is for solid green line with diamond markers.
ax.plot(graph_reg_model_acc, '-gD', label='Graph-regularized model')
ax.set_title(model_tag)
ax.set_xlabel('Supervision ratio')
ax.set_ylabel('Accuracy(%)')
ax.set_ylim((25, 100))
ax.set_xticks(range(len(supervision_ratios)))
ax.set_xticklabels(supervision_ratios)
ax.legend(loc='best')
plt.show()
"""
Explanation: The power of semi-supervised learning
Semi-supervised learning, and more specifically graph regularization in the context of this tutorial, can be really powerful when the amount of training data is small. The lack of training data is compensated for by leveraging similarity among the training samples, which is not possible in traditional supervised learning.
We define the supervision ratio as the ratio of training samples to the total number of samples, which includes training, validation, and test samples. In this notebook, we have used a supervision ratio of 0.05, i.e., 5% of the labeled data, to train both the base model and the graph-regularized model. We illustrate the impact of the supervision ratio on model accuracy in the cell below.
End of explanation
"""
|
setiQuest/ML4SETI | tutorials/General_move_data_to_from_Nimbix_Cloud.ipynb | apache-2.0 | #!pip install --user pysftp
#restart your kernel
import pysftp
"""
Explanation: How to move data to/from your Nimbix Cloud machine.
This tutorial shows you how to use the pysftp client to move data to/from your Nimbix cloud machine.
This will be especially useful for moving data between your IBM Apache Spark service and your IBM PowerAI system available during the Hackathon.
https://pysftp.readthedocs.io/en/release_0.2.9/#
Important for hackathon participants using the PowerAI systems. When those machines are shut down, all data in your local user space will be lost. So, be sure to save your work!
BUG: It was recently found that installing pysftp breaks the python-swiftclient, which is used to transfer data to Object Storage. If you install pysftp and then wish to resume using python-swiftclient you'll need to:
!pip uninstall -y pysftp
!pip uninstall -y paramiko
!pip uninstall -y pyasn1
!pip uninstall -y cryptography
End of explanation
"""
import os
mydatafolder = os.environ['PWD'] + '/' + 'my_team_name_data_folder'
# THIS DISABLES HOST KEY CHECKING! Should be okay for our temporary running machines though.
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
#Get this from your Nimbix machine (or other cloud service provider!)
hostname='NAE-xxxx.jarvice.com'
username='nimbix'
password='xx'
"""
Explanation: Create Local File Space
End of explanation
"""
with pysftp.Connection(hostname, username=username, password=password, cnopts=cnopts) as sftp:
sftp.put(mydatafolder + '/zipfiles/classification_6_noise.zip') # upload file to remote
"""
Explanation: PUT a file
If you follow the Step 3 tutorial, you will have created some zip files containing the PNGs. These will be located in your my_team_name_data_folder/zipfiles/ directory.
End of explanation
"""
fromnimbixfolder = mydatafolder + '/fromnimbix'
if os.path.exists(fromnimbixfolder) is False:
os.makedirs(fromnimbixfolder)
with pysftp.Connection(hostname, username=username, password=password, cnopts=cnopts) as sftp:
with pysftp.cd(fromnimbixfolder):
sftp.get('test.csv') #data in local HOME space
sftp.get('/data/my_team_name_data_folder/our_results.csv') #data in persistent Nimbix Cloud storage
"""
Explanation: GET a file
First, I define a separate location to hold files I get from remote.
End of explanation
"""
|
EnSpec/SpecDAL | specdal/examples/process_collection.ipynb | mit | import os
import matplotlib.pyplot as plt
from specdal import Collection, Spectrum  # imports assumed for the classes and plotting used below
datadir = "/home/young/data/specdal/aidan_data2/ASD/"
c = Collection(name='myFirst')
for f in sorted(os.listdir(datadir))[1:11]:
spectrum = Spectrum(filepath=os.path.join(datadir, f))
c.append(spectrum)
"""
Explanation: Processing a Collection of spectra
SpecDAL provides Collection class for processing multiple spectrum files in conjunction and for grouping operations.
Manual way of loading files into Collection object:
End of explanation
"""
print(type(c["ACPA_F_A_SU_20160617_00000"]))
print(c["ACPA_F_A_SU_20160617_00000"])
"""
Explanation: We can access spectra by name:
End of explanation
"""
print(type(c.spectra))
for s in c.spectra[0:2]:
print(s)
"""
Explanation: As a list:
End of explanation
"""
print(type(c.data))
c.data.head()
"""
Explanation: As a DataFrame:
End of explanation
"""
c.plot(legend=False, ylim=(0, 0.5))
plt.show()
"""
Explanation: Like the Spectrum class, Collection also provides wrappers around pandas.DataFrame methods. We can easily plot a collection as follows:
End of explanation
"""
c.plot(legend=False, xlim=(900, 1100), ylim=(0.4, 0.5))
plt.show()
"""
Explanation: If you look closely, there are jumps at wavelengths 1000, and 1800
End of explanation
"""
c.jump_correct(splices=[1000, 1800], reference=0)
c.plot(legend=False, ylim=(0, 0.5))
c.plot(legend=False, xlim=(900, 1100), ylim=(0.4, 0.5))
plt.show()
"""
Explanation: Spectrum objects provide jump_correct() method to deal with this. Collection class provides the same method which iterates through the spectrum objects and applies the jump correction.
We could similarly apply other spectral transformations such as resampling and overlap stitching on the entire collection.
End of explanation
"""
mean = c.mean()
print(type(mean))
mean.plot()
plt.show()
c.std(append=True) # append the spectrum to the original collection
c.plot(legend=False)
plt.show()
"""
Explanation: We can easily calculate aggregate functions (mean, median, min/max, std, etc.), which will return a Spectrum object:
End of explanation
"""
|
sisnkemp/deep-learning | embeddings/Skip-Gram_word2vec.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
"""
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
"""
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
"""
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
"""
Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
"""
## Your code here
import collections
def subsample(wordlist, t):
rv = []
counts = collections.Counter(wordlist)
totcount = len(wordlist)
for w in wordlist:
f = counts[w] / totcount
p = 1 - np.sqrt(t / f)
r = np.random.random()
if p < r:
rv.append(w)
return rv
train_words = subsample(int_words, 1e-5)
print(len(int_words))
print(len(train_words))
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
import random
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
    # pick a random window radius R in [1, window_size], then collect the
    # R words before and the R words after the word at idx
    n = random.randint(1, window_size)
    start = max(idx - n, 0)
    end = idx + n
    selected = set(words[start:idx] + words[idx + 1:end + 1])
return list(selected)
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
"""
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
"""
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name = 'inputs')
labels = tf.placeholder(tf.int32, [None, None], name = 'labels')
"""
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
"""
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), minval=-1, maxval=1))
embed = tf.nn.embedding_lookup(embedding, inputs)
"""
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
"""
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1), name = "softmax_w")
softmax_b = tf.Variable(tf.zeros((n_vocab,)), name = "softmax_b")
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
"""
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
"""
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
"""
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
"""
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
"""
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
"""
Explanation: Restore the trained network if you need to:
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
gabrielcs/nyc-subway-canvass | stations-location-cleaning.ipynb | mit | import pandas as pd
stations = pd.read_csv('data/DOITT_SUBWAY_STATION_01_13SEPT2010.csv')
stations.head(4)
"""
Explanation: MTA Subway Stations dataset cleaning
In this notebook we will clean the Subway Stations dataset made available by MTA.
Let's start by opening and examining it.
End of explanation
"""
import coordinates as coord
coord.add_coord_columns(stations, 'the_geom', sep=' ', _reversed=True)
stations.loc[:, ('latitude', 'longitude')].head()
"""
Explanation: Let's extract the latitude and longitude from the dataset. For that we will use add_coord_columns() which is defined in coordinates.py. Notice that the coordinates are reversed as in (longitude, latitude).
End of explanation
"""
stations.rename(columns={'NAME': 'station', 'LINE': 'lines', 'NOTES': 'notes'}, inplace=True)
relevant_cols = ['station', 'latitude', 'longitude', 'lines', 'notes']
stations_cleaned = stations.loc[:, relevant_cols]
stations_cleaned.sort_values(by='station', inplace=True)
stations_cleaned.head()
"""
Explanation: Now let's clean the DataFrame.
End of explanation
"""
!pip install folium
import folium
stations_map = folium.Map([40.729, -73.9], zoom_start=11, tiles='CartoDB positron', width='60%')
for i, station in stations_cleaned.iterrows():
marker = folium.CircleMarker([station['latitude'], station['longitude']],
popup=station['station'], color='FireBrick',
fill_color='FireBrick', radius=2)
marker.add_to(stations_map)
stations_map.save('maps/all_entrances.html')
stations_map
"""
Explanation: Let's quickly plot the stations coordinates to have a feel for their geographical location:
End of explanation
"""
stations_cleaned.to_pickle('pickle/stations_locations.p')
"""
Explanation: The interactive map is available here.
Now let's just save it as a pickle binary file for later use in the recommender notebook.
End of explanation
"""
|
pdonorio/nbpydata-n-slides | slides/myslides.ipynb | mit | a = "Hello"
b = "World"
print a,b + "!"
"""
Explanation: Hello world
(press space)
This is how you do slides with ipython notebooks!
Formatting is simple, with markdown
...your python love will help you...
End of explanation
"""
# Please consider also that you can re-use
# variables defined in older slides ;)
print type(a + b)
"""
Explanation: How cool to have live code
inside your live slideshow!?!?!
Just another one
End of explanation
"""
|
jonathf/chaospy | docs/user_guide/main_usage/point_collocation.ipynb | mit | from pseudo_spectral_projection import gauss_quads
gauss_nodes = [nodes for nodes, _ in gauss_quads]
"""
Explanation: Point collocation
Point collection method is a broad term, as it covers multiple variation, but
in a nutshell all consist of the following steps:
Generate samples $Q_1=(\alpha_1, \beta_1), \dots, Q_N=(\alpha_N, \beta_N)$ that corresponds to your uncertain
parameters.
Evaluate model solver $U_1=u(t, \alpha_1, \beta_1), \dots, U_N=u(t, \alpha_N, \beta_N)$ for each sample.
Select a polynomial expansion $\Phi_1, \dots, \Phi_M$.
Solve linear regression problem: $U_n = \sum_m c_m(t)\ \Phi_m(\alpha_n,
\beta_n)$
with respect to $c_1, \dots, c_M$.
Construct model approximation $u(t, \alpha, \beta) = \sum_m c_m(t)\ \Phi_n(\alpha, \beta)$
Perform model analysis on approximation $u(t, \alpha, \beta)$ as a proxy for the real
model.
Let us go through the steps in more detail.
Generating samples
Unlike both Monte Carlo integration and
pseudo-spectral projection, point
collocation method does not assume that the samples follows any particular
form. Though traditionally they are selected to be random, quasi-random,
nodes from quadrature integration, or a subset of the three.
For this case, we select the sample to follow the Sobol samples from Monte
Carlo integration, and optimal quadrature
nodes from pseudo-spectral projection:
End of explanation
"""
from monte_carlo_integration import sobol_samples
sobol_nodes = [sobol_samples[:, :nodes.shape[1]] for nodes in gauss_nodes]
from matplotlib import pyplot
pyplot.rc("figure", figsize=[12, 4])
pyplot.subplot(121)
pyplot.scatter(*gauss_nodes[4])
pyplot.title("Gauss quadrature nodes")
pyplot.subplot(122)
pyplot.scatter(*sobol_nodes[4])
pyplot.title("Sobol nodes")
pyplot.show()
"""
Explanation: The number of Sobol samples to use at each order is arbitrary, but for
comparison, we select them to be the same as the Gauss nodes:
End of explanation
"""
import numpy
from problem_formulation import model_solver
gauss_evals = [
numpy.array([model_solver(node) for node in nodes.T])
for nodes in gauss_nodes
]
sobol_evals = [
numpy.array([model_solver(node) for node in nodes.T])
for nodes in sobol_nodes
]
from problem_formulation import coordinates
pyplot.subplot(121)
pyplot.plot(coordinates, gauss_evals[4].T, alpha=0.3)
pyplot.title("Gauss evaluations")
pyplot.subplot(122)
pyplot.plot(coordinates, sobol_evals[4].T, alpha=0.3)
pyplot.title("Sobol evaluations")
pyplot.show()
"""
Explanation: Evaluating model solver
Like in the case of problem formulation again,
evaluation is straight forward:
End of explanation
"""
import chaospy
from problem_formulation import joint
expansions = [chaospy.generate_expansion(order, joint)
for order in range(1, 10)]
expansions[0].round(10)
"""
Explanation: Select polynomial expansion
Unlike pseudo spectral
projection, the polynomial in
point collocations are not required to be orthogonal. But stability
wise, orthogonal polynomials have still been shown to work well.
This can be achieved by using the
chaospy.generate_expansion()
function:
End of explanation
"""
gauss_model_approx = [
chaospy.fit_regression(expansion, samples, evals)
for expansion, samples, evals in zip(expansions, gauss_nodes, gauss_evals)
]
sobol_model_approx = [
chaospy.fit_regression(expansion, samples, evals)
for expansion, samples, evals in zip(expansions, sobol_nodes, sobol_evals)
]
pyplot.subplot(121)
model_approx = gauss_model_approx[4]
evals = model_approx(*gauss_nodes[1])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Gaussian approximation")
pyplot.subplot(122)
model_approx = sobol_model_approx[1]
evals = model_approx(*sobol_nodes[1])
pyplot.plot(coordinates, evals, alpha=0.3)
pyplot.title("Sobol approximation")
pyplot.show()
"""
Explanation: Solve the linear regression problem
With all samples $Q_1, ..., Q_N$, model evaluations $U_1, ..., U_N$ and
polynomial expansion $\Phi_1, ..., \Phi_M$, we can put everything together to
solve the equations:
$$
U_n = \sum_{m=1}^M c_m(t)\ \Phi_m(Q_n) \qquad n = 1, ..., N
$$
with respect to the coefficients $c_1, ..., c_M$.
This can be done using the helper function
chaospy.fit_regression():
End of explanation
"""
expected = chaospy.E(gauss_model_approx[-2], joint)
std = chaospy.Std(gauss_model_approx[-2], joint)
expected[:4].round(4), std[:4].round(4)
pyplot.rc("figure", figsize=[6, 4])
pyplot.xlabel("coordinates")
pyplot.ylabel("model approximation")
pyplot.fill_between(
coordinates, expected-2*std, expected+2*std, alpha=0.3)
pyplot.plot(coordinates, expected)
pyplot.show()
"""
Explanation: Descriptive statistics
The expected value and variance is calculated as follows:
End of explanation
"""
from problem_formulation import error_in_mean, error_in_variance
error_in_mean(expected), error_in_variance(std**2)
"""
Explanation: Error analysis
It is hard to assess how well these models are doing from the final
estimation alone. They look about the same. So to compare results, we do
error analysis. To do so, we use the reference analytical solution and error
function as defined in problem formulation.
End of explanation
"""
sizes = [nodes.shape[1] for nodes in gauss_nodes]
eps_gauss_mean = [
error_in_mean(chaospy.E(model, joint))
for model in gauss_model_approx
]
eps_gauss_var = [
error_in_variance(chaospy.Var(model, joint))
for model in gauss_model_approx
]
eps_sobol_mean = [
error_in_mean(chaospy.E(model, joint))
for model in sobol_model_approx
]
eps_sobol_var = [
error_in_variance(chaospy.Var(model, joint))
for model in sobol_model_approx
]
pyplot.rc("figure", figsize=[12, 4])
pyplot.subplot(121)
pyplot.title("Error in mean")
pyplot.loglog(sizes, eps_gauss_mean, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_mean, "--", label="Sobol")
pyplot.legend()
pyplot.subplot(122)
pyplot.title("Error in variance")
pyplot.loglog(sizes, eps_gauss_var, "-", label="Gaussian")
pyplot.loglog(sizes, eps_sobol_var, "--", label="Sobol")
pyplot.show()
"""
Explanation: The analysis can be performed as follows:
End of explanation
"""
|
gboeing/urban-data-science | modules/13-unsupervised-learning/lecture.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.cluster import hierarchy
from scipy.spatial.distance import pdist
from sklearn.cluster import DBSCAN, KMeans
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import accuracy_score, r2_score, silhouette_score
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler
np.random.seed(0)
# load CA tract-level census variables
df = pd.read_csv('../../data/census_tracts_data_ca.csv', dtype={'GEOID10':str}).set_index('GEOID10')
df.shape
df.head()
"""
Explanation: Unsupervised learning
Overview of today's topics:
- linear discriminant analysis
- principal component analysis
- k-means clustering
- DBSCAN clustering
- hierarchical clustering
- t-sne projection
In unsupervised learning, we use an algorithm to discover structure in and extract information from data. It generally comprises two broad categories:
- dimensionality reduction: transform features to a lower-dimension space
- clustering: assign observations to groups based on their features
While supervised learning trains a model to make predictions based on a training data set that we feed it, unsupervised learning discovers relationships and groups automatically for us.
End of explanation
"""
# choose response and predictors
response = 'county_name'
features = ['median_age', 'pct_hispanic', 'pct_white', 'pct_black', 'pct_asian', 'pct_male',
'pct_single_family_home', 'med_home_value', 'med_rooms_per_home', 'pct_built_before_1940',
'pct_renting', 'rental_vacancy_rate', 'avg_renter_household_size', 'med_household_income',
'mean_commute_time', 'pct_commute_drive_alone', 'pct_below_poverty', 'pct_college_grad_student',
'pct_same_residence_year_ago', 'pct_bachelors_degree', 'pct_english_only', 'pct_foreign_born']
counties = ['Los Angeles', 'Orange', 'Riverside']
mask = df['county_name'].isin(counties)
subset = features + [response]
data = df.loc[mask].dropna(subset=subset)
y = data[response]
X = data[features]
y.shape, X.shape
# feature scaling
X = StandardScaler().fit_transform(X)
# reduce data from n dimensions to 2
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)
X_reduced.shape
fig, ax = plt.subplots(figsize=(6, 6))
for county_name in data['county_name'].unique():
mask = y == county_name
ax.scatter(x=X_reduced[mask, 0],
y=X_reduced[mask, 1],
alpha=0.5,
s=3,
label=county_name)
ax.set_aspect('equal')
ax.legend(loc='best', scatterpoints=4)
_ = ax.set_title('')
"""
Explanation: 1. Linear discriminant analysis
Dimensionality reduction lets us reduce the number of features (variables) in our data set with minimal loss of information. This data compression is called feature extraction. Feature extraction is similar to feature selection in that they both reduce the total number of variables in your analysis. In feature selection, we use domain theory or an algorithm to select important variables for our model. Feature extraction instead projects your features onto a lower-dimension space, creating new features rather than just selecting a subset of existing ones.
LDA is supervised dimensionality reduction, providing a link between supervised learning and dimensionality reduction. It uses a categorical response and continuous features to identify features that account for the most variance between classes (ie, maximum separability). It can be used as a classifier, similar to what we saw last week, or it can be used for dimensionality reduction by projecting the features in the most discriminative directions.
We will predict which county a tract is in using 1) a full set of features, and 2) a set of just two projected features. Let's see how it performs.
End of explanation
"""
# how accurate are my predictions using all 22 features?
y_pred = LogisticRegression(max_iter=200).fit(X, y).predict(X)
print(round(accuracy_score(y, y_pred), 3))
# how accurate are my predictions using just 2 projected features?
y_pred = LogisticRegression().fit(X_reduced, y).predict(X_reduced)
print(round(accuracy_score(y, y_pred), 3))
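# a side note (not in the original lecture): LDA can also be used directly as
# a classifier, rather than only for dimensionality reduction
lda_pred = LinearDiscriminantAnalysis().fit(X, y).predict(X)
print(round(accuracy_score(y, lda_pred), 3))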
"""
Explanation: How good are my predictions with just two dimensions? This is a quick and dirty measure of predictive quality between the original, full feature space and the reduced feature space (for a formal analysis, I'd do a test-train split like we saw last week):
End of explanation
"""
# now it's your turn
# try changing the number of counties we retain and the number of dimensions
# how does this influence our classification predictions?
"""
Explanation: We have summarized the most relevant information of our feature space and reduced it from 22 features (ie, dimensions) to just 2. It's not perfect: there has been some information loss, but it's pretty good!
End of explanation
"""
# this is unsupervised, so we don't need a response variable, but we will
# define one just so we can build a simple regression model when we're done
response = 'med_gross_rent'
features = ['median_age', 'pct_hispanic', 'pct_white', 'pct_black', 'pct_asian', 'pct_male',
'pct_single_family_home', 'med_home_value', 'med_rooms_per_home', 'pct_built_before_1940',
'pct_renting', 'rental_vacancy_rate', 'avg_renter_household_size', 'med_household_income',
'mean_commute_time', 'pct_commute_drive_alone', 'pct_below_poverty', 'pct_college_grad_student',
'pct_same_residence_year_ago', 'pct_bachelors_degree', 'pct_english_only', 'pct_foreign_born']
subset = features + [response]
data = df.dropna(subset=subset)
y = data[response]
X = data[features]
y.shape, X.shape
# feature scaling
X = StandardScaler().fit_transform(X)
# project the features onto all principal components
pca = PCA(n_components=None)
X_reduced = pca.fit_transform(X)
# our features are correlated with each other, but our principal components are not
pd.DataFrame(X_reduced).corr().round(2)
# eigenvalues represent the variance explained by each component
# calculate each component's proportion of variance explained
eigenvalues = pca.explained_variance_
pve = eigenvalues / eigenvalues.sum()
pve
# create a variance-explained plot
xpos = range(1, len(features) + 1)
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(xpos, pve, marker='o', markersize=5, label='Individual')
ax.plot(xpos, np.cumsum(pve), marker='o', markersize=5, label='Cumulative')
ax.set_ylabel('Proportion of variance explained')
ax.set_xlabel('Principal component')
ax.set_xlim(0, len(features) + 1)
ax.set_ylim(0, 1)
ax.grid(True, ls='--')
_ = ax.legend()
"""
Explanation: 2. Principal component analysis
Remember simple pair-plots? They let you inspect pairwise relationships between your variables. But what if you have lots of features? PCA offers a more rigorous tool. PCA is very similar to exploratory factor analysis, and is often referred to as a type of factor analysis. The former is used to discover relationships in the data, whereas the latter usually implies that you are probing a theoretical (latent) relationship among your variables. We'll focus on PCA today.
PCA is used 1) to fix multicollinearity problems and 2) for dimensionality reduction. In the former, it converts a set of original, correlated features into a new set of orthogonal features, which is useful in regression and cluster analysis. In the latter, it summarizes a set of original, correlated features with a smaller number of features that still explain most of the variance in your data (data compression).
PCA identifies the combinations of features (directions in feature space) that account for the most variance in the dataset. These orthogonal axes of maximum variance are called principal components. A principal component is an eigenvector (direction of maximum variance) of the features' covariance matrix, and the corresponding eigenvalue is its magnitude (factor by which it is "stretched"). An eigenvector is the cosine of the angle between a feature and a component. Its corresponding eigenvalue represents the share of variance it accounts for. PCA takes your (standardized) features' covariance matrix, decomposes it into its eigenvectors/eigenvalues, sorts them by eigenvalue magnitude, constructs a projection matrix $W_k$ from the corresponding top $k$ eigenvectors, then transforms the features using the projection matrix to get the new $k$-dimensional feature subspace. Always standardize your data before PCA because it is sensitive to features' scale.
We will reduce our feature set to fewer dimensions.
End of explanation
"""
# project the features onto a 2-dimensional subspace
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
# see our projected features
X_reduced
"""
Explanation: So, how many components should we use? Remember, the goal here is to reduce the dimensionality of the feature set: we want to balance parsimony with explanatory power. There is no single answer, but in general you want the fewest components that explain sufficient variation. So what's the right balance?
variance-explained criteria: for example, take fewest components necessary to explain, say, 80% of your variance
visualization criteria: consider that it is impossible to visualize more than 3 dimensions
elbow criteria: use a scree plot (aka, variance-explained plot) and look for an "elbow" in the curve
kaiser criteria: use components with an eigenvalue >1 (an obsolete method today)
For visualization purposes, let's use two components:
End of explanation
"""
# project our features manually onto the two dimensions
eigenvectors = pca.components_.T
np.dot(X, eigenvectors)
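# an illustrative sketch (not in the original lecture) of where those
# eigenvectors come from: eigendecompose the covariance matrix of the
# standardized features, sort by eigenvalue, and keep the top k columns as
# the projection matrix W_k (columns may differ from sklearn's by sign)
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
W_k = eigvecs[:, order[:2]]
T_k = X @ W_k  # same scores as np.dot(X, eigenvectors) above, up to sign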
"""
Explanation: We often refer to these projected data as "principal component scores" or a "score matrix", $T_k$, where $T_k = XW_k$ and $X$ is your original feature matrix and $W_k$ is the projection matrix, that is, a matrix containing the first $k$ principal components (ie, the $k$ eigenvectors with the largest corresponding eigenvalues). In our case, $k=2$. We can calculate this manually:
End of explanation
"""
eigenvalues = pca.explained_variance_
loadings = eigenvectors * np.sqrt(eigenvalues)
# turn into a DataFrame with column names and row labels
cols = [f'PC{i}' for i in range(1, pca.n_components_ + 1)]
pd.DataFrame(loadings, index=features, columns=cols).sort_values('PC1')
# how accurate are my predictions using all 22 features?
y_pred = LinearRegression().fit(X, y).predict(X)
print(round(r2_score(y, y_pred), 3))
# how accurate are my predictions using just the first 2 principal components?
y_pred = LinearRegression().fit(X_reduced, y).predict(X_reduced)
print(round(r2_score(y, y_pred), 3))
# plot the points on their first 2 PCs, and color by the response variable
fig, ax = plt.subplots(figsize=(6, 6))
ax = sns.scatterplot(ax=ax, x=X_reduced[:, 0], y=X_reduced[:, 1],
hue=y, palette='plasma_r', s=5, edgecolor='none')
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
_ = ax.set_aspect('equal')
# now it's your turn
# conduct a PCA with the first 4 components
# how does our predictive accuracy change? how does the scatterplot change?
"""
Explanation: Loadings represent the correlations between the features and the components. Loadings are the eigenvectors scaled by the square roots of their eigenvalues (aka, "singular values").
End of explanation
"""
features = ['median_age', 'pct_hispanic', 'pct_white', 'pct_black', 'pct_asian', 'pct_male', 'med_gross_rent',
'pct_single_family_home', 'med_home_value', 'med_rooms_per_home', 'pct_built_before_1940',
'pct_renting', 'rental_vacancy_rate', 'avg_renter_household_size', 'med_household_income',
'mean_commute_time', 'pct_commute_drive_alone', 'pct_below_poverty', 'pct_college_grad_student',
'pct_same_residence_year_ago', 'pct_bachelors_degree', 'pct_english_only', 'pct_foreign_born']
# calculate then standardize median values across counties
counties = df.groupby('county_name').median()
X = counties[features].dropna()
X = StandardScaler().fit_transform(X)
X.shape
# project onto first two principal components for 2-D clustering
X_reduced = PCA(n_components=2).fit_transform(X)
X_reduced.shape
# cluster the data
km = KMeans(n_clusters=5).fit(X_reduced)
# get the cluster labels, the unique labels, and the number of clusters obtained
cluster_labels = km.labels_
unique_labels = set(cluster_labels)
num_clusters = len(unique_labels)
print(f'Number of clusters: {num_clusters}')
pd.Series(cluster_labels).value_counts().sort_index()
# scatterplot points on first two PCs and color by cluster
fig, ax = plt.subplots(figsize=(4, 4))
ax = sns.scatterplot(ax=ax, x=X_reduced[:, 0], y=X_reduced[:, 1],
hue=cluster_labels, palette='Set1', s=20, edgecolor='none')
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
_ = ax.set_aspect('equal')
# silhouette score is the average silhouette coefficient
silhouette_score(X_reduced, cluster_labels)
"""
Explanation: 3. k-means clustering
Dimensionality reduction projects our data onto a lower-dimension space, usually through unsupervised learning. A second branch of unsupervised learning, cluster analysis, lets us discover natural groups that exist in our data. Last week we predicted groups in labeled data by training a supervised learning algorithm. In cluster analysis, we discover unknown groups in unlabeled data through an unsupervised learning algorithm. As with dimensionality reduction, remember to standardize your data before clustering. Many clustering algorithms work well in high-dimensional feature spaces, but some work better after PCA dimensionality reduction (due to the curse of dimensionality).
k-means is probably the most common clustering algorithm. It clusters data into $k$ groups based on their similarity. It is a form of prototype-based clustering where each cluster is represented by a prototype, or centroid. You have to specify the number of groups in advance. This works well when you want to partition your data into a predetermined number of groups. Otherwise, you have to determine an optimal value for $k$.
Here, we will identify counties that are similar to one another across a wide variety of characteristics.
End of explanation
"""
# create an elbow plot
fig, ax = plt.subplots()
ax.set_xlabel('Number of clusters')
ax.set_ylabel('Distortion')
kvals = range(1, 15)
distortions = []
for k in kvals:
km = KMeans(n_clusters=k).fit(X_reduced)
distortions.append(km.inertia_)
ax.plot(kvals, distortions, marker='o')
_ = ax.grid(True)
# now it's your turn
# use the elbow plot above to choose a new k value
# how does it affect the scatterplot and silhouette score?
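# a complementary check (not in the original lecture): alongside the elbow
# plot, compare the silhouette score across candidate k values (k >= 2)
for k in range(2, 8):
    labels = KMeans(n_clusters=k).fit(X_reduced).labels_
    print(k, round(silhouette_score(X_reduced, labels), 3))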
"""
Explanation: The silhouette score measures cohesion vs separation: how similar the points are to their own clusters vs to the other clusters, on average. This measures how tightly grouped our clusters are. The silhouette can range from -1 to +1. Negative values suggest clustering problems, including too many/few clusters.
So how do you pick a good $k$?
- theoretically, how many clusters should there be in your data (if knowable beforehand)?
- which $k$ value gives you the best silhouette score?
- elbow criteria (similar to what we saw for PCA): find an elbow in the line plot of distortion vs cluster count. Distortion is also called inertia, and represents the sum of squared errors.
End of explanation
"""
# cluster the data (in two dimensions again)
X_reduced = PCA(n_components=2).fit_transform(X)
db = DBSCAN(eps=1, min_samples=3, metric='euclidean').fit(X_reduced)
# get the cluster labels, the unique labels, and the number of clusters obtained
cluster_labels = db.labels_
unique_labels = set(cluster_labels)
num_clusters = len(unique_labels)
print(f'Number of clusters: {num_clusters}')
# scatterplot points on first two PCs and color by cluster
# cluster label -1 means noise
fig, ax = plt.subplots(figsize=(4, 4))
ax = sns.scatterplot(ax=ax, x=X_reduced[:, 0], y=X_reduced[:, 1],
hue=cluster_labels, palette='Set1', s=20, edgecolor='none')
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
_ = ax.set_aspect('equal')
silhouette_score(X_reduced, cluster_labels)
# now it's your turn
# try changing the epsilon and min_samples then re-clustering
# how does it change the silhouette score and the cluster plot?
"""
Explanation: 4. DBSCAN clustering
DBSCAN (density-based spatial clustering of applications with noise) represents another form of clustering known as density-based clustering. Density-based clustering works better in low-dimension feature spaces, so PCA in advance is a good idea.
DBSCAN assigns cluster labels based on dense regions of points, by identifying core points, border points, and noise points. Unlike k-means, we do not need to know the number of clusters beforehand. We parameterize it with a minimum number of points that must fall within some radius $\epsilon$ of a point to consider that point a core point. The $\epsilon$ parameter represents the maximum distance in the feature space that points can be from each other to be considered a cluster. The min_samples parameter is the minimum cluster size allowed: everything else gets classified as noise.
DBSCAN can be useful for geospatial clustering of either projected coordinates, or lat-long coordinates if you use a haversine distance metric. But here, we will just cluster our same features as before.
End of explanation
"""
# project onto first 4 principal components
X_reduced = PCA(n_components=4).fit_transform(X)
X_reduced.shape
# calculate distance matrix then linkage matrix, choosing a method (algorithm)
distances = pdist(X_reduced)
Z = hierarchy.linkage(distances, method='complete', optimal_ordering=True)
# cophenetic correlation measures how well clustering preserved pairwise distances
c, _ = hierarchy.cophenet(Z, distances)
c
# pick a distance to cut dendrogram tree
cut_point = 6
# plot the dendrogram, colored by clusters below the cut point
fig, ax = plt.subplots(figsize=(5, 11))
ax.set_xlabel('Euclidean distance')
with plt.rc_context({'lines.linewidth': 1}):
R = hierarchy.dendrogram(Z=Z,
orientation='right',
labels=counties.index,
color_threshold=cut_point,
distance_sort='descending',
show_leaf_counts=False,
ax=ax)
plt.axvline(cut_point, c='k')
fig.savefig('dendrogram.png', dpi=600, facecolor='w', bbox_inches='tight')
# assign k cluster labels to the observations, based on where you cut tree
# k = number of clusters = how many horizontal lines you intersected above
k = 8
cluster_labels = hierarchy.fcluster(Z, t=k, criterion='maxclust')
pd.Series(cluster_labels).value_counts().sort_index()
# now it's your turn
# pick different points to cut the tree, how many clusters do they imply?
# which cut point is the right one to use?
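# a quick comparison (not in the original lecture) of the linkage algorithms
# described below, scored by their cophenetic correlation
for method in ['single', 'complete', 'average', 'ward']:
    Z_method = hierarchy.linkage(distances, method=method)
    c_method, _ = hierarchy.cophenet(Z_method, distances)
    print(method, round(c_method, 3))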
"""
Explanation: 5. Hierarchical clustering
Another form of clustering is hierarchical clustering, which can be agglomerative or divisive. Agglomerative clustering initially treats each observation as its own cluster, then iteratively merges the closest two clusters until only one supercluster remains. There are four common algorithms:
- single linkage: calculate distance between the most similar members in each pair of clusters, then merge the two clusters with smallest such distance
- complete linkage: like single linkage, but instead compare the most dissimilar members
- average linkage: calculate average distance between all members in each pair of clusters, then merge the two clusters with smallest average distance
- Ward's linkage: merge the two clusters that cause the least increase in total within-cluster sum of squared errors
We could use scikit-learn, but I prefer agglomerative clustering in scipy so we can easily visualize the dendrogram. A dendrogram shows us how the clusters link up and lets us explore which observations are more/less similar. The dendrogram's structure suggests high-level superclusters and we can cut its tree at an arbitrary level.
In this example, we'll cluster in four dimensions, which was suggested by our PCA variance-explained plot earlier.
End of explanation
"""
# t-SNE with two dimensions, then project features onto this space
tsne = TSNE(n_components=2, n_iter=10000, random_state=0)
X_reduced = pd.DataFrame(data=tsne.fit_transform(X),
index=counties.index,
columns=['TC1', 'TC2'])
# plot the colored clusters projected onto the two t-SNE dimensions
fig, ax = plt.subplots(figsize=(4, 4))
ax.set_xlabel('t-SNE 1')
ax.set_ylabel('t-SNE 2')
X_reduced['color'] = pd.Series(dict(zip(R['ivl'], R['leaves_color_list'])))
ax.scatter(x=X_reduced['TC1'], y=X_reduced['TC2'], c=X_reduced['color'], s=10)
# identify a county of interest in the plot
county = 'San Francisco'
_ = ax.scatter(x=X_reduced.loc[county, 'TC1'],
y=X_reduced.loc[county, 'TC2'],
alpha=1, marker='o', s=300, linewidth=2, color='none', ec='k')
# now it's your turn
# pick different points to cut the tree, how does it change our t-SNE plot?
"""
Explanation: 6. t-SNE
What if I want to discover structure in >3 dimensions (like we did above), but still be able to visualize it?
Manifold learning is a nonlinear dimensionality reduction approach that usually uses unsupervised learning. t-SNE (t-distributed stochastic neighbor embedding) is a manifold learning technique used for projecting high-dimension data sets into a plane for easy visualization. Here, we project our counties' higher-dimension feature space to 2 dimensions for visualization. t-SNE projection is useful because it preserves group structure relatively well despite information loss. However, given the global density-equalizing nature of t-SNE, relative distances within and between clusters are not preserved and should not be interpreted otherwise.
For an example of using clustering + t-SNE to discover and visualize similar places, see this ANS article. Here, we will use t-SNE to project our data to two dimensions to scatterplot the hierarchical clusters from above.
End of explanation
"""
|
zingale/hydro_examples | compressible/euler-generaleos.ipynb | bsd-3-clause | from sympy import init_session
init_session()
from sympy.abc import rho, tau, alpha
rho, tau, c, h, p = symbols("rho tau c h p", real=True, positive=True)
re = symbols(r"(\rho{}e)", real=True, positive=True)
ge = symbols(r"\gamma_e", real=True, positive=True)
alpha, u = symbols("alpha u", real=True)
"""
Explanation: Hydrodynamics Systems with a General EOS
This notebook explores the eigensystem of the Euler equations augmented with an additional thermodynamic variable to describe a general equation of state in the reconstruction of interface states
End of explanation
"""
plus = u + c
zero = u
minus = u - c
class Eigenvector(object):
def __init__(self, name, ev, r, l=None):
self.name = name
if name == "minus":
self.d = 0
elif name == "zero":
self.d = 1
elif name == "plus":
self.d = 2
else:
self.d = None
self.ev = ev
self.l = l
self.r = r
def __lt__(self, other):
return self.d < other.d
def __str__(self):
return "{} wave, r = {}, l = {}".format(self.eigenvalue, self.r, self.l)
def eigensystem(A, suba=None, subb=None):
# get the left and right eigenvectors that diagonalize the system.
# it is best to use sympy diagonalize() for this purpose than getting
# the left and right eigenvectors independently.
e = []
R, D = A.diagonalize()
# the columns of R are the right eigenvectors and the diagonal
# element of D is the corresponding eigenvalues
for n in range(A.shape[0]):
r = R.col(n)
ev = D[n,n]
#print("here", r, ev)
if suba is not None and subb is not None:
ev = ev.subs(suba, subb)
# which eigenvalue are we?
if simplify(ev - minus) == 0:
name = "minus"
elif simplify(ev - plus) == 0:
name = "plus"
elif simplify(ev - zero) == 0:
name = "zero"
else:
return None
# normalize the right eigenvector
v = r[0]
if v != 0:
r = r/v
if suba is not None and subb is not None:
r = simplify(r.subs(suba, subb))
e.append(Eigenvector(name=name, ev=ev, r=r))
# now sort the system from smallest (u-c) to largest (u+c)
e.sort()
# now let's construct the R with this sorting
for n in range(A.shape[0]):
R[:,n] = e[n].r
# the left eigenvector matrix, L, is just the inverse
L = R**-1
for n in range(A.shape[0]):
e[n].l = L.row(n)
return e
"""
Explanation: The two routines below simplify the analysis of the eigensystem by simultaneously finding the set of orthonormal left and right eigenvectors of a matrix from the primitive form of the hydro equations
End of explanation
"""
q = Matrix([rho, u, p, re]).transpose()
A = Matrix([[u, rho, 0, 0], [0, u, rho**-1, 0], [0, c**2 * rho, u, 0], [0, rho*h, 0, u]])
A
"""
Explanation: Euler Equations with $(\rho e)$
The Euler equations in primitive variable form, $q = (\rho, u, p, (\rho e))^\intercal$ appear as:
\begin{align}
\frac{\partial \rho}{\partial t} &= -u \frac{\partial \rho}{\partial x} - \rho \frac{\partial u}{\partial x} \
\frac{\partial u}{\partial t} &= -u \frac{\partial u}{\partial x} - \frac{1}{\rho} \frac{\partial p}{\partial x} \
\frac{\partial p}{\partial t} &= -u \frac{\partial p}{\partial x} - \rho c^2 \frac{\partial u}{\partial x} \
\frac{\partial (\rho e)}{\partial t} &= -u \frac{\partial (\rho e)}{\partial x} - \rho h \frac{\partial u}{\partial x}
\end{align}
In vector form, we have:
$$q_t + A(q) q_x = 0$$
with the matrix $A(q)$:
$$A(q) = \left ( \begin{array}{cccc} u & \rho & 0 & 0\
0 & u & 1/\rho & 0\
0 & \rho c^2 & u & 0\
0 & \rho h & 0 & u\end{array} \right )
$$
The sound speed is related to the adiabatic index, $\Gamma_1$, as $c^2 = \Gamma_1 p /\rho$.
We can represent this matrix symbolically in SymPy and explore its eigensystem.
End of explanation
"""
A.eigenvals()
"""
Explanation: The eigenvalues are the speeds at which information propagates with. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
End of explanation
"""
# we use the helper routines above to find the orthogonal left and right eigenvectors
eigen = eigensystem(A)
# printing them out for inspection
from sympy.printing.mathml import mathml
for e in eigen:
print(e.name)
display(e.r, e.l)
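# a quick sanity check (not in the original notebook): since L = R**-1, the
# left and right eigenvectors should satisfy l^i . r^j = delta_ij
Matrix([[simplify(e1.l.dot(e2.r)) for e2 in eigen] for e1 in eigen])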
"""
Explanation: We see that there are 2 eigenvalues $u$ -- the addition of $(\rho e)$ to the system adds this degeneracy.
Eigenvectors
The right eigenvectors are defined for a given eigenvalue, $\lambda$, as:
$$A r = \lambda r$$
and the left eigenvectors satisfy:
$$l A = \lambda l$$
Note that the left and right eigenvectors are orthogonal to those corresponding to a different eigenvalue, and usually normalized so:
$$l^i \cdot r^j = \delta_{ij}$$
End of explanation
"""
from sympy.abc import delta
dr = symbols(r"\Delta\rho")
du = symbols(r"\Delta{}u")
dp = symbols(r"\Delta{}p")
dre = symbols(r"\Delta(\rho{}e)")
rhoi = symbols(r"\rho_\mathrm{int}")
ui = symbols(r"u_\mathrm{int}")
pri = symbols(r"p_\mathrm{int}")
rei = symbols(r"(\rho{}e)_\mathrm{int}")
# this is the jump
dq = Matrix([[dr, du, dp, dre]]).transpose()
# this is the interface state vector
qint = Matrix([[rhoi, ui, pri, rei]]).transpose()
"""
Explanation: $\beta$'s and final update
The final interface state is written by projecting the jump in primitive variables, $\Delta q$, into characteristic variables (as $l \cdot \Delta q$), and then adding up all the jumps that reach the interface.
The convention is to write $\beta^\nu = l^\nu \cdot \Delta q$, where the superscript identifies which eigenvalue (and corresponding eigenvectors) we are considering. Note, that often a reference state is used, and the jump, $\Delta q$, will be the difference with respect to this reference state. For PPM, the $\Delta q$ will take the form of the integral under the parabola over the range that each wave can reach.
The final interface state is then:
$$
q_\mathrm{int} = q - \sum_\nu \beta^\nu r^\nu
$$
The tracing projects the primitive variables into characteristic variables by defining
$$
\Delta q = \left ( \begin{array}{c} \Delta \rho \\ \Delta u \\ \Delta p \\ \Delta (\rho e) \end{array} \right )
$$
and then
$\beta^\nu = l^\nu \cdot \Delta q$
End of explanation
"""
betas = [symbols(r"\beta^-")]
for n in range(len([e for e in eigen if e.d == 1])):
betas += [symbols(r"\beta^0_{}".format(n))]
betas += [symbols(r"\beta^+")]
for n, e in enumerate(eigen):
print(e.name)
beta = e.l.dot(dq)
display(Eq(betas[n], simplify(beta)))
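# Added sanity check (not in the original notebook): since L = R**-1, summing
# (l^nu . Delta q) r^nu over all waves should reconstruct Delta q exactly
recon = eigen[0].l.dot(dq) * eigen[0].r
for e in eigen[1:]:
    recon = recon + e.l.dot(dq) * e.r
display(simplify(recon - dq))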
"""
Explanation: Now compute the $\beta$s
End of explanation
"""
for n in range(len(eigen)):
rhs = q[n]
for m in range(len(eigen)):
rhs -= betas[m]*eigen[m].r[n]
display(Eq(qint[n],rhs))
"""
Explanation: and now the final interface states
End of explanation
"""
q = Matrix([[tau, u, p, ge]]).transpose()
A = Matrix([[u, -tau, 0, 0], [0, u, tau, 0], [0, c**2/tau, u, 0], [0, -alpha, 0, u]])
A
"""
Explanation: Euler Equations with $(\gamma_e)$
We can define $\gamma_e = p/(\rho e) + 1$ and differentiate it to get:
$$
\frac{\partial \gamma_e}{\partial t} = -u \frac{\partial \gamma_e}{\partial x} + (\gamma_e - 1)(\gamma_e - \Gamma_1) \frac{\partial u}{\partial x}
$$
The original CG paper used $\tau = 1/\rho$ in place of density. With this, the continuity equation becomes:
$$
\frac{\partial \tau}{\partial t} = -u\frac{\partial \tau}{\partial x} + \tau \frac{\partial u}{\partial x}
$$
The Euler equations with this set of primitive variables, $q = (\tau, u, p, \gamma_e)^\intercal$ appear as:
\begin{align}
\frac{\partial \tau}{\partial t} &= -u\frac{\partial \tau}{\partial x} + \tau \frac{\partial u}{\partial x} \\
\frac{\partial u}{\partial t} &= -u \frac{\partial u}{\partial x} - \frac{1}{\rho} \frac{\partial p}{\partial x} \\
\frac{\partial p}{\partial t} &= -u \frac{\partial p}{\partial x} - \rho c^2 \frac{\partial u}{\partial x} \\
\frac{\partial \gamma_e}{\partial t} &= -u \frac{\partial \gamma_e}{\partial x} + (\gamma_e - 1)(\gamma_e - \Gamma_1) \frac{\partial u}{\partial x}
\end{align}
For convenience, we define
$$
\alpha = (\gamma_e - 1)(\gamma_e - \Gamma_1)
$$
and then in vector form, we have:
$$q_t + A(q) q_x = 0$$
with the matrix $A(q)$:
$$A(q) = \left ( \begin{array}{cccc} u & -\tau & 0 & 0\\
0 & u & \tau & 0\\
0 & c^2/\tau & u & 0\\
0 & -\alpha & 0 & u\end{array} \right )
$$
We can represent this matrix symbolically in SymPy and explore its eigensystem.
End of explanation
"""
A.eigenvals()
"""
Explanation: The eigenvalues are the speeds at which information propagates. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
End of explanation
"""
# we use the helper routines above to find the orthogonal left and right eigenvectors
eigen = eigensystem(A)
# printing them out for inspection
from sympy.printing.mathml import mathml
for e in eigen:
print(e.name)
display(e.r, e.l)
"""
Explanation: We see that there are 2 eigenvalues $u$ -- the addition of $\gamma_e$ to the system adds this degeneracy.
Eigenvectors
End of explanation
"""
dtau = symbols(r"\Delta\tau")
dge = symbols(r"\Delta\gamma_e")
dq = Matrix([[dtau, du, dp, dge]]).transpose()
taui = symbols(r"\tau_\mathrm{int}")
gei = symbols(r"(\gamma_e)_\mathrm{int}")
qint = Matrix([[taui, ui, pri, gei]]).transpose()
betas = [symbols(r"\beta^-")]
for n in range(len([e for e in eigen if e.d == 1])):
betas += [symbols(r"\beta^0_{}".format(n))]
betas += [symbols(r"\beta^+")]
for n, e in enumerate(eigen):
print(e.name)
beta = e.l.dot(dq)
display(Eq(betas[n], simplify(beta)))
"""
Explanation: $\beta$'s and final update
The tracing projects the primitive variables into characteristic variables by defining
$$
\Delta q = \left ( \begin{array}{c} \Delta \tau \\ \Delta u \\ \Delta p \\ \Delta \gamma_e \end{array} \right )
$$
and then
$\beta^\nu = l^\nu \cdot \Delta q$
End of explanation
"""
for n in range(len(eigen)):
rhs = q[n]
for m in range(len(eigen)):
rhs -= betas[m]*eigen[m].r[n]
display(Eq(qint[n],rhs))
"""
Explanation: and now the final interface states
End of explanation
"""
cg = symbols(r"c_g", real=True, positive=True)
hg = symbols(r"h_g", real=True, positive=True)
Er = symbols(r"E_r", real=True, positive=True)
lf = symbols(r"\lambda_f", real=True)
f = symbols("f", real=True, positive=True)
q = Matrix([[rho, u, p, re, Er]]).transpose()
A = Matrix([[u, rho, 0, 0, 0],
[0, u, rho**-1, 0, lf/rho],
[0, rho*cg**2, u, 0, 0],
[0, rho*hg, 0, u, 0],
[0, (lf+1)*Er, 0, 0, u]])
A
"""
Explanation: Gray FLD Radiation Euler Equations with $(\rho e)$
Following Zhang et al. (2011), the equations of gray radiation hydrodynamics with primitive
variables $q = (\rho, u, p, (\rho e)_g, E_r)^\intercal$ are:
\begin{align}
\frac{\partial \rho}{\partial t} &= -u\frac{\partial \rho}{\partial x} - \rho \frac{\partial u}{\partial x} \\
\frac{\partial u}{\partial t} &= -u \frac{\partial u}{\partial x} - \frac{1}{\rho} \frac{\partial p}{\partial x}
- \frac{\lambda_f}{\rho} \frac{\partial E_r}{\partial x}\\
\frac{\partial p}{\partial t} &= -u \frac{\partial p}{\partial x} - \rho c_g^2 \frac{\partial u}{\partial x} \\
\frac{\partial (\rho e)_g}{\partial t} &= -u \frac{\partial (\rho e)_g}{\partial x} - \rho h_g \frac{\partial u}{\partial x} \\
\frac{\partial E_r}{\partial t} &= -\frac{3-f}{2} E_r \frac{\partial u}{\partial x} - \left ( \frac{3-f}{2} - \lambda_f\right ) u \frac{\partial E_r}{\partial x}
\end{align}
where $(\rho e)_g$ is the gas internal energy density, $h_g$ is the gas specific enthalpy, $c_g$ is the gas sound speed (obeying $c_g^2 = \Gamma_1 p /\rho$), $E_r$ is the radiation energy density, $f$ is the Eddington factor, and $\lambda_f$ is the flux limiter.
Following Zhang et al., we make the approximation that
$$
\frac{3-f}{2} = \lambda_f + 1
$$
and then in vector form, we have:
$$q_t + A(q) q_x = 0$$
with the matrix $A(q)$:
$$A(q) = \left ( \begin{array}{ccccc} u & \rho & 0 & 0 & 0 \\
0 & u & 1/\rho & 0 & \lambda_f/\rho\\
0 & \rho c_g^2 & u & 0 & 0 \\
0 & \rho h_g & 0 & u & 0 \\
0 & (3-f)E_r/2 & 0 & 0 & u\end{array} \right )
$$
We can represent this matrix symbolically in SymPy and explore its eigensystem.
End of explanation
"""
A.eigenvals()
"""
Explanation: The eigenvalues are the speeds at which information propagates. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
End of explanation
"""
cc = c**2 - (lf +1)*lf*Er/rho
evs = A.eigenvals()
for e in evs.keys():
display(powsimp(simplify(e.subs(cg**2, cc))))
"""
Explanation: We see that there are 3 eigenvalues $u$. We identify the total sound speed (radiation + gas) as:
$$c^2 = c_g^2 + (\lambda_f + 1)\frac{\lambda_f E_r}{\rho}$$
We can simplify these by substituting in that relationship
End of explanation
"""
# we use the helper routines above to find the orthogonal left and right eigenvectors
eigen = eigensystem(A, suba=cg, subb=sqrt(cc))
# printing them out for inspection
for e in eigen:
print(e.name)
display(e.r)
display(simplify(e.l))
"""
Explanation: We see that these have the same form as the eigenvalues of the pure hydrodynamics system
Eigenvectors
End of explanation
"""
dEr = symbols(r"\Delta{}E_r")
Eri = symbols(r"{E_r}_\mathrm{int}")
dq = Matrix([[dr, du, dp, dre, dEr]]).transpose()
qint = Matrix([[rhoi, ui, pri, rei, Eri]])
betas = [symbols(r"\beta^-")]
for n in range(len([e for e in eigen if e.d == 1])):
betas += [symbols(r"\beta^0_{}".format(n))]
betas += [symbols(r"\beta^+")]
for n, e in enumerate(eigen):
print(e.name)
beta = e.l.dot(dq)
display(Eq(betas[n], simplify(beta)))
"""
Explanation: $\beta$'s and final update
The tracing projects the primitive variables into characteristic variables by defining
$\beta^\nu = l^\nu \cdot \Delta q$
End of explanation
"""
for n in range(len(eigen)):
rhs = q[n]
for m in range(len(eigen)):
rhs -= betas[m]*eigen[m].r[n]
display(Eq(qint[n],rhs))
"""
Explanation: and now the final interface states
End of explanation
"""
geg = symbols(r"{\gamma_e}_g", real=True, positive=True)
Er = symbols(r"E_r", real=True, positive=True)
lf = symbols(r"\lambda_f", real=True, positive=True)
cg = symbols(r"c_g", real=True, positive=True)
q = Matrix([[tau, u, p, geg, Er]]).transpose()
A = Matrix([[u, -tau, 0, 0, 0],
[0, u, tau, 0, tau*lf],
[0, cg**2/tau, u, 0, 0],
[0, -alpha, 0, u, 0],
[0, (lf+1)*Er, 0, 0, u]])
A
"""
Explanation: Gray FLD Radiation Euler Equations with $(\gamma_e)$
We now look at the same system with a different auxiliary thermodynamic variable (as we did with pure hydro), using $q = (\tau, u, p, {\gamma_e}_g, E_r)^\intercal$:
\begin{align}
\frac{\partial \tau}{\partial t} &= -u\frac{\partial \tau}{\partial x} + \tau \frac{\partial u}{\partial x} \\
\frac{\partial u}{\partial t} &= -u \frac{\partial u}{\partial x} - \tau \frac{\partial p}{\partial x}
- \tau \lambda_f \frac{\partial E_r}{\partial x}\\
\frac{\partial p}{\partial t} &= -u \frac{\partial p}{\partial x} - \frac{c_g^2}{\tau} \frac{\partial u}{\partial x} \\
\frac{\partial {\gamma_e}_g}{\partial t} &= -u \frac{\partial {\gamma_e}_g}{\partial x} + \alpha \frac{\partial u}{\partial x} \\
\frac{\partial E_r}{\partial t} &= -\frac{3-f}{2} E_r \frac{\partial u}{\partial x} - \left ( \frac{3-f}{2} - \lambda_f\right ) u \frac{\partial E_r}{\partial x}
\end{align}
Here, ${\gamma_e}_g$ is defined solely in terms of the gas pressure and energy; the remaining variables have the same meaning as above. We again make the approximation that
$$
\frac{3-f}{2} = \lambda_f + 1
$$
and then in vector form, we have:
$$q_t + A(q) q_x = 0$$
with the matrix $A(q)$:
$$A(q) = \left ( \begin{array}{ccccc} u & -\tau & 0 & 0 & 0 \\
0 & u & \tau & 0 & \tau \lambda_f\\
0 & c_g^2/\tau & u & 0 & 0 \\
0 & -\alpha & 0 & u & 0 \\
0 & (3-f)E_r/2 & 0 & 0 & u\end{array} \right )
$$
We can represent this matrix symbolically in SymPy and explore its eigensystem.
End of explanation
"""
A.eigenvals()
"""
Explanation: The eigenvalues are the speeds at which information propagates. SymPy returns them as a
dictionary, giving the multiplicity for each eigenvalue.
End of explanation
"""
cc = c**2 - (lf +1)*lf*Er*tau
evs = A.eigenvals()
for e in evs.keys():
display(powsimp(simplify(e.subs(cg**2, cc))))
"""
Explanation: We see that there are 3 eigenvalues $u$. We identify the total sound speed (radiation + gas) as:
$$c^2 = c_g^2 + (\lambda_f + 1)\lambda_f E_r\tau$$
We can simplify these by substituting in that relationship
End of explanation
"""
# we use the helper routines above to find the orthogonal left and right eigenvectors
eigen = eigensystem(A, suba=cg, subb=sqrt(cc))
# printing them out for inspection
for e in eigen:
print(e.name)
display(e.r)
display(simplify(e.l))
"""
Explanation: We see that these have the same form as the eigenvalues of the pure hydrodynamics system
Eigenvectors
End of explanation
"""
dgeg = symbols(r"\Delta{\gamma_e}_g")
dEr = symbols(r"\Delta{}E_r")
dq = Matrix([[dtau, du, dp, dgeg, dEr]]).transpose()
Eri = symbols(r"{E_r}_\mathrm{int}")
qint = Matrix([[taui, ui, pri, gei, Eri]])
betas = [symbols(r"\beta^-")]
for n in range(len([e for e in eigen if e.d == 1])):
betas += [symbols(r"\beta^0_{}".format(n))]
betas += [symbols(r"\beta^+")]
for n, e in enumerate(eigen):
print(e.name)
beta = e.l.dot(dq)
display(Eq(betas[n], simplify(beta)))
"""
Explanation: $\beta$'s and final update
The tracing projects the primitive variables into characteristic variables by defining
$\beta^\nu = l^\nu \cdot \Delta q$
End of explanation
"""
for n in range(len(eigen)):
rhs = q[n]
for m in range(len(eigen)):
rhs -= betas[m]*eigen[m].r[n]
display(Eq(qint[n],rhs))
"""
Explanation: and now the final interface states
End of explanation
"""
|
epifanio/CesiumWidget | Examples/CesiumWidget Example KML.ipynb | apache-2.0 | from CesiumWidget import CesiumWidget
from IPython import display
import numpy as np
"""
Explanation: Cesium Widget Example KML
If the installation of Cesiumjs is ok, it should be reachable here:
http://localhost:8888/nbextensions/CesiumWidget/cesium/index.html
End of explanation
"""
cesium = CesiumWidget()
"""
Explanation: Create widget object
End of explanation
"""
cesium
"""
Explanation: Display the widget:
End of explanation
"""
cesium.kml_url = '/nbextensions/CesiumWidget/cesium/Apps/SampleData/kml/gdpPerCapita2008.kmz'
# if running in binder use the following instead
!cp /home/main/.local/share/jupyter//nbextensions/CesiumWidget/cesium/Apps/SampleData/kml/gdpPerCapita2008.kmz .
cesium.kml_url = 'gdpPerCapita2008.kmz'
"""
Explanation: Cesium is packed with example data. Let's look at some GDP per capita data from 2008.
End of explanation
"""
for lon in np.arange(0, 360, 0.5):
cesium.zoom_to(lon, 0, 36000000, 0 ,-90, 0)
cesium._zoomto
"""
Explanation: Example zoomto
End of explanation
"""
cesium.fly_to(14, 90, 20000001)
cesium._flyto
"""
Explanation: Example flyto
End of explanation
"""
|
google/picatrix | notebooks/adding_magic.ipynb | apache-2.0 | #@title Only execute if you are connecting to a hosted kernel
!pip install picatrix
from picatrix.lib import framework
from picatrix.lib import utils
# This should not be included in the magic definition file, only used
# in this notebook since we are comparing all magic registration.
from picatrix import notebook_init
notebook_init.init()
"""
Explanation: <a href="https://colab.research.google.com/github/google/picatrix/blob/main/notebooks/adding_magic.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Adding A Magic
This notebook describes how to add a magic or register a function into the picatrix set of magics.
Import
The first thing to do is install the picatrix framework and then import the libraries
(only need to install if you are running a colab hosted kernel)
End of explanation
"""
from typing import Optional
from typing import Text
@framework.picatrix_magic
def my_silly_magic(data: Text, magnitude: Optional[int] = 100) -> Text:
"""Return a silly string with no meaningful value.
Args:
data (str): This is a string that will be printed back.
magnitude (int): A number that will be displayed in the string.
Returns:
A string that basically combines the two options.
"""
return f'This magical magic produced {magnitude} magics of {data.strip()}'
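# Added illustration (not from the original notebook): a magic that needs no
# real input still has to accept `data` as its first argument, so it is given
# an empty-string default, as described in the requirements below.
@framework.picatrix_magic
def my_parameterless_magic(data: Text = '') -> Text:
  """Return a fixed string to demonstrate a magic that takes no arguments.
  Args:
    data (str): unused placeholder, required as the first argument by the framework.
  Returns:
    A constant string.
  """
  return 'This magic needs no arguments'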
"""
Explanation: Then we need to create a function:
End of explanation
"""
%picatrixmagics
"""
Explanation: In order to register a magic it has to have a few properties:
Be a regular Python function that accepts parameters (optional if it returns a value)
The first argument it must accept is data (this is due to how magics work). If you don't need an argument, set the default value of data to an empty string.
Use typing to denote the type of the argument values.
The function must include a docstring, where the first line describes the function.
The docstring also must have an argument section, where each argument is further described (this is used to generate the helpstring for the magic/function).
If the function returns a value it must define a Returns section.
Once these requirements are fulfilled, a simple decorator is all that is required to register the magic and make sure it is available.
Test the Magic
Now once the magic has been registered we can first test to see if it is registered:
End of explanation
"""
magics = %picatrixmagics
magics[magics.name.str.contains('silly_magic')]
"""
Explanation: This does produce quite a lot of values, let's filter it out:
End of explanation
"""
%my_silly_magic foobar
"""
Explanation: OK, we can see that it is registered. Now let's try to call it:
End of explanation
"""
%my_silly_magic --help
"""
Explanation: And check out its help message:
End of explanation
"""
%%my_silly_magic
this is some text
and some more text
and yet even more
"""
Explanation: Here you can see the results from the docstring being used to generate the help for the magic.
Now use the call magic:
End of explanation
"""
%%my_silly_magic --magnitude 234 store_here
and here is the text
store_here
"""
Explanation: And set the arguments:
End of explanation
"""
my_silly_magic_func?
my_silly_magic_func('some random string', magnitude=234)
"""
Explanation: And finally we can use the exposed function:
End of explanation
"""
|
feststelltaste/software-analytics | prototypes/_archive/Production Coverage Demo Notebook PowerPoint.ipynb | gpl-3.0 | import pandas as pd
coverage = pd.read_csv("../input/spring-petclinic/jacoco.csv")
coverage = coverage[['PACKAGE', 'CLASS', 'LINE_COVERED' ,'LINE_MISSED']]
coverage['LINES'] = coverage.LINE_COVERED + coverage.LINE_MISSED
coverage.head(1)
"""
Explanation: Context
John Doe remarked in #AP1432 that there may be too much code in our application that isn't used at all. Before migrating the application to the new platform, we have to analyze which parts of the system are still in use and which are not.
Idea
To understand how much code isn't used, we recorded the executed code in production with the coverage tool JaCoCo. The measurement took place between 21st Oct 2017 and 27th Oct 2017. The results were exported into a CSV file using the JaCoCo command line tool with the following command:
bash
java -jar jacococli.jar report "C:\Temp\jacoco.exec" --classfiles \
C:\dev\repos\buschmais-spring-petclinic\target\classes --csv jacoco.csv
The CSV file contains all lines of code that were passed through during the measurement's time span. We just take the relevant data and add an additional LINES column to be able to calculate the ratio between covered and missed lines later on.
End of explanation
"""
grouped_by_packages = coverage.groupby("PACKAGE").sum()
grouped_by_packages['RATIO'] = grouped_by_packages.LINE_COVERED / grouped_by_packages.LINES
grouped_by_packages = grouped_by_packages.sort_values(by='RATIO')
grouped_by_packages
"""
Explanation: Analysis
It was stated that whole packages wouldn't be needed anymore and that they could be safely removed. Therefore, we sum up the coverage data per class for each package and calculate the coverage ratio for each package.
End of explanation
"""
%matplotlib inline
grouped_by_packages[['RATIO']].plot(kind="barh", figsize=(8,2))
# Add PowerPoint Slide Generation here
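# Added sketch (assumptions: the python-pptx package is available and we re-use
# the matplotlib figure produced by the pandas plot above). This is one possible
# way to generate the PowerPoint slide, not the author's original implementation.
import matplotlib.pyplot as plt
from pptx import Presentation
from pptx.util import Inches
plt.gcf().savefig("coverage_ratio.png", bbox_inches="tight")
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])  # "Title Only" layout in the default template
slide.shapes.add_picture("coverage_ratio.png", Inches(1), Inches(1))
prs.save("production_coverage.pptx")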
"""
Explanation: We plot the data for the coverage ratio to get a brief overview of the result.
End of explanation
"""
|
weichetaru/weichetaru.github.com | notebook/machine-learning/deep_learning-logistic-regression-gradient-decent.ipynb | mit | import numpy as np # Matrix and vector computation package
np.seterr(all='ignore') # ignore numpy warning like multiplication of inf
import matplotlib.pyplot as plt # Plotting library
from matplotlib.colors import colorConverter, ListedColormap # some plotting functions
from matplotlib import cm # Colormaps
# Allow matplotlib to plot inside this notebook
%matplotlib inline
# Set the seed of the numpy random number generator so that the tutorial is reproducable
np.random.seed(seed=1)
# Define and generate the samples
nb_of_samples_per_class = 20 # The number of sample in each class
red_mean = [-1,0] # The mean of the red class
blue_mean = [1,0] # The mean of the blue class
std_dev = 1.2 # standard deviation of both classes
# Generate samples from both classes
x_red = np.random.randn(nb_of_samples_per_class, 2) * std_dev + red_mean
x_blue = np.random.randn(nb_of_samples_per_class, 2) * std_dev + blue_mean
# Merge samples in set of input variables x, and corresponding set of output variables t
X = np.vstack((x_red, x_blue)) # 40x2
t = np.vstack((np.zeros((nb_of_samples_per_class,1)), np.ones((nb_of_samples_per_class,1)))) # 40x1
# Plot both classes on the x1, x2 plane
plt.plot(x_red[:,0], x_red[:,1], 'ro', label='class red')
plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='class blue')
plt.grid()
plt.legend(loc=2)
plt.xlabel('$x_1$', fontsize=15)
plt.ylabel('$x_2$', fontsize=15)
plt.axis([-4, 4, -4, 4])
plt.title('red vs. blue classes in the input space')
plt.show()
"""
Explanation: Logistic Regression
In this note, I am going to train a logistic regression model with gradient descent estimation.
A logistic regression model can be thought of as a neural network without a hidden layer, and is hence a good entry point for learning deep learning models.
Overview
This note will cover:
* Prepare the data
* Loss function, chain rule and its derivative
* Code implementation
Prepare the data
Here we generate 20 data points from each of 2 class distributions: blue $(t=1)$ and red $(t=0)$.
End of explanation
"""
# Define the logistic function
def logistic(z):
return 1 / (1 + np.exp(-z))
# Define the neural network function y = 1 / (1 + numpy.exp(-x*w))
# x: 40x2 and w: 1x2 so use w.T here
def nn(x, w):
    return logistic(x.dot(w.T)) # 40x1 -> this is y
# Define the neural network prediction function that only returns
# 1 or 0 depending on the predicted class
def nn_predict(x,w):
return np.around(nn(x,w))
# Define the cost function
def cost(y, t):
    return - np.sum(np.multiply(t, np.log(y)) + np.multiply((1-t), np.log(1-y))) # y and t are both 40x1
"""
Explanation: Loss function, chain rule and its derivative
Model can be described as:
$$ y = \sigma(\mathbf{x} * \mathbf{w}^T) $$
$$\sigma(z) = \frac{1}{1+e^{-z}}$$
The parameter set $w$ can be optimized by maximizing the likelihood:
$$\underset{\theta}{\text{argmax}}\; \mathcal{L}(\theta|t,z) = \underset{\theta}{\text{argmax}} \prod_{i=1}^{n} \mathcal{L}(\theta|t_i,z_i)$$
The likelihood can be described as the joint distribution of $t$ and $z$ given $\theta$:
$$P(t,z|\theta) = P(t|z,\theta)P(z|\theta)$$
We don't care about the probability of $z$, so
$$\mathcal{L}(\theta|t,z) = P(t|z,\theta) = \prod_{i=1}^{n} P(t_i|z_i,\theta)$$
and $t_i$ is a Bernoulli variable, so
$$\begin{split}
P(t|z) & = \prod_{i=1}^{n} P(t_i=1|z_i)^{t_i} * (1 - P(t_i=1|z_i))^{1-t_i} \\
& = \prod_{i=1}^{n} y_i^{t_i} * (1 - y_i)^{1-t_i}
\end{split}$$
The cross entropy cost function can be defined as (by taking the negative $log$):
$$\begin{split}
\xi(t,y) & = - log \mathcal{L}(\theta|t,z) \\
& = - \sum_{i=1}^{n} \left[ t_i log(y_i) + (1-t_i)log(1-y_i) \right] \\
& = - \sum_{i=1}^{n} \left[ t_i log(\sigma(z_i)) + (1-t_i)log(1-\sigma(z_i)) \right]
\end{split}$$
Since $t$ can only be 0 or 1, the above can be expressed as:
$$\xi(t,y) = -t * log(y) - (1-t) * log(1-y)$$
The gradient descent update can be defined as:
$$w(k+1) = w(k) - \Delta w(k)$$
$$\Delta w(k) = \mu\frac{\partial \xi}{\partial w} \;\;\; where \;\mu\; is\; the\; learning\; rate$$
Simply apply the chain rule here:
$$\frac{\partial \xi_i}{\partial \mathbf{w}} = \frac{\partial z_i}{\partial \mathbf{w}} \frac{\partial y_i}{\partial z_i} \frac{\partial \xi_i}{\partial y_i}$$
(1)
$$\begin{split}
\frac{\partial \xi}{\partial y} & = \frac{\partial (-t * log(y) - (1-t) log(1-y))}{\partial y} = \frac{\partial (-t * log(y))}{\partial y} + \frac{\partial (- (1-t)log(1-y))}{\partial y} \\
& = -\frac{t}{y} + \frac{1-t}{1-y} = \frac{y-t}{y(1-y)}
\end{split}$$
(2)
$$\frac{\partial y}{\partial z} = \frac{\partial \sigma(z)}{\partial z} = \frac{\partial \frac{1}{1+e^{-z}}}{\partial z} = \frac{-1}{(1+e^{-z})^2} \cdot (-e^{-z}) = \frac{1}{1+e^{-z}} \frac{e^{-z}}{1+e^{-z}} = \sigma(z) * (1- \sigma(z)) = y (1-y)$$
(3)
$$\frac{\partial z}{\partial \mathbf{w}} = \frac{\partial (\mathbf{x} * \mathbf{w})}{\partial \mathbf{w}} = \mathbf{x}$$
So combine (1) - (3):
$$\frac{\partial \xi_i}{\partial \mathbf{w}} = \frac{\partial z_i}{\partial \mathbf{w}} \frac{\partial y_i}{\partial z_i} \frac{\partial \xi_i}{\partial y_i} = \mathbf{x} * y_i (1 - y_i) * \frac{y_i - t_i}{y_i (1-y_i)} = \mathbf{x} * (y_i-t_i)$$
Finally, we get:
$$\Delta w_j = \mu * \sum_{i=1}^{N} x_{ij} (y_i - t_i)$$
Code implementation
First of all, define the logistic function logistic and the model nn. The cost function is the sum of the cross entropy of all training samples.
End of explanation
"""
# Plot the cost in function of the weights
# Define a vector of weights for which we want to plot the cost
nb_of_ws = 100 # compute the cost nb_of_ws times in each dimension
ws1 = np.linspace(-5, 5, num=nb_of_ws) # weight 1
ws2 = np.linspace(-5, 5, num=nb_of_ws) # weight 2
ws_x, ws_y = np.meshgrid(ws1, ws2) # generate grid
cost_ws = np.zeros((nb_of_ws, nb_of_ws)) # initialize cost matrix
# Fill the cost matrix for each combination of weights
for i in range(nb_of_ws):
for j in range(nb_of_ws):
cost_ws[i,j] = cost(nn(X, np.asmatrix([ws_x[i,j], ws_y[i,j]])) , t)
# Plot the cost function surface
plt.contourf(ws_x, ws_y, cost_ws, 20, cmap=cm.pink)
cbar = plt.colorbar()
cbar.ax.set_ylabel('$\\xi$', fontsize=15)
plt.xlabel('$w_1$', fontsize=15)
plt.ylabel('$w_2$', fontsize=15)
plt.title('Cost function surface')
plt.grid()
plt.show()
"""
Explanation: Plot the cost function; as you can see, it is convex and has a global minimum.
End of explanation
"""
# define the gradient function.
def gradient(w, x, t):
return (nn(x, w) - t).T * x
# define the update function delta w which returns the
# delta w for each weight in a vector
def delta_w(w_k, x, t, learning_rate):
return learning_rate * gradient(w_k, x, t)
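# Added numerical check (not part of the original tutorial): compare the
# analytic gradient x^T (y - t) against a central finite difference of the cost
w_test = np.asmatrix([1.0, -0.5])
eps = 1e-5
numeric = []
for i in range(2):
    step = np.zeros((1, 2))
    step[0, i] = eps
    numeric.append(float(cost(nn(X, w_test + step), t) - cost(nn(X, w_test - step), t)) / (2 * eps))
print('analytic gradient :', gradient(w_test, X, t))
print('numerical gradient:', numeric)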
"""
Explanation: The gradient and delta_w functions are just the simple equations we derived above.
End of explanation
"""
# Set the initial weight parameter
w = np.asmatrix([-4, -2])
# Set the learning rate
learning_rate = 0.05
# Start the gradient descent updates and plot the iterations
nb_of_iterations = 10 # Number of gradient descent updates
w_iter = [w] # List to store the weight values over the iterations
for i in range(nb_of_iterations):
dw = delta_w(w, X, t, learning_rate) # Get the delta w update
w = w-dw # Update the weights
w_iter.append(w) # Store the weights for plotting
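# Added evaluation (not in the original tutorial): use the nn_predict helper
# defined earlier to report the training accuracy of the final weights
predictions = nn_predict(X, w)
print('training accuracy: {:.2f}'.format(np.mean(predictions == t)))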
"""
Explanation: Start training, iterating for just 10 steps. The line w = w-dw is the key step where we update w at each iteration.
End of explanation
"""
# Plot the first weight updates on the error surface
# Plot the error surface
plt.contourf(ws_x, ws_y, cost_ws, 20, alpha=0.9, cmap=cm.pink)
cbar = plt.colorbar()
cbar.ax.set_ylabel('cost')
# Plot the updates
for i in range(1, 4):
w1 = w_iter[i-1]
w2 = w_iter[i]
# Plot the weight-cost value and the line that represents the update
plt.plot(w1[0,0], w1[0,1], 'bo') # Plot the weight cost value
plt.plot([w1[0,0], w2[0,0]], [w1[0,1], w2[0,1]], 'b-')
plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(i), color='b')
w1 = w_iter[3]
# Plot the last weight
plt.plot(w1[0,0], w1[0,1], 'bo')
plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(4), color='b')
# Show figure
plt.xlabel('$w_1$', fontsize=15)
plt.ylabel('$w_2$', fontsize=15)
plt.title('Gradient descent updates on cost surface')
plt.grid()
plt.show()
"""
Explanation: Plot just 4 iterations and you can see the weights move toward the global minimum.
End of explanation
"""
|
gpagliuca/pyfas | docs/notebooks/Tab_files.ipynb | gpl-3.0 | tab_path = '../../pyfas/test/test_files/'
fname = '3P_single-fluid_key.tab'
tab = fa.Tab(tab_path+fname)
"""
Explanation: Tab files
A tab file contains thermodynamic properties pre-calculated by a thermodynamic simulator like PVTsim. It is good practice to analyze these text files before using them. Unfortunately there are several file layouts (key, fixed, with just a fluid, etc.). The Tab class handles some (most?) of the possible cases but not necessarily all the combinations.
The only public method is export_all (used below), which extracts the thermodynamic properties into a pandas dataframe (available as tab.data).
At this moment in time the dataframe obtained is not unique; it depends on the tab format and on the number of fluids in the original tab file. Room to improve here.
Tab file loading
End of explanation
"""
tab.export_all()
tab.data
"""
Explanation: Extraction
End of explanation
"""
tab.metadata
"""
Explanation: Some key info about the tab file is provided as tab.metadata
End of explanation
"""
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import itertools as it
def plot_property_keyword(pressure, temperature, thermo_property):
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111, projection='3d')
X = []
Y = []
for x, y in it.product(pressure, temperature):
X.append(x/1e5)
Y.append(y)
ax.scatter(X, Y, thermo_property)
ax.set_ylabel('Temperature [C]')
ax.set_xlabel('Pressure [bar]')
ax.set_xlim(0, )
ax.set_title('ROHL')
return fig
plot_property_keyword(tab.metadata['p_array'],
tab.metadata['t_array'],
tab.data.T['ROHL'].values[0])
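# Added example (assumption: the tab file also contains the gas density
# keyword 'ROG', as is common in OLGA-style keyword tables); the same helper
# can plot any other extracted property
plot_property_keyword(tab.metadata['p_array'],
                      tab.metadata['t_array'],
                      tab.data.T['ROG'].values[0])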
"""
Explanation: Plotting
Here under an example of a 3D plot of the liquid hydropcarbon viscosity
End of explanation
"""
|
hchauvet/beampy | doc-src/auto_tutorials/positioning_system.ipynb | gpl-3.0 | from beampy import *
from beampy.utils import bounding_box, draw_axes
doc = document(quiet=True)
with slide():
draw_axes(show_ticks=True)
t1 = text('This is the default theme behaviour')
t2 = text('x are centered and y equally spaced')
for t in [t1, t2]:
t.add_border()
display_matplotlib(gcs())
"""
Explanation: Beampy Positioning system
Beampy has a positioning system that allows to make automatic, fixed or
relative positioning. The default behavior is set by the theme used in the
presentation.
The default theme sets the coordinates to:
x='center' which means that element is centered in the horizontal direction
x element anchor is set to left, which means that the horizontal distance is
computed between to left side of the slide and the left border of the element
bounding-box.
y='auto' which means that elements are equally spaced on the vertical
direction.
y element anchor is set to top, which means that the vertical distance is
computed between the top of the slide and the top border of the element
bounding-box.
The reference for computing coordinates as percent is the page or group width
for both x and y.
Slide coordinate system
The origin of the coordinate coordinate system is the upper-left corner of the
slide or the current group. And is positive when moving toward the bottom-right
corner.
End of explanation
"""
with slide():
draw_axes()
rectangle(x='center', y='center', width=400, height=200,
color='lightgreen', edgecolor=None)
text('x and y are centered for the text and the rectangle modules',
x='center', y='center', width=350)
display_matplotlib(gcs())
"""
Explanation: Automatic positioning
Beampy has some simple automatic positioning options: 'centering' the Beampy
module with center, and equally spaced distribution of Beampy modules that
have auto as coordinates.
Centering
+++++++++
End of explanation
"""
with slide():
draw_axes()
for c in ['gold', 'crimson', 'orangered']:
rectangle(x='center', y='auto', width=100, height=100,
color=c, edgecolor=None)
display_matplotlib(gcs())
"""
Explanation: Auto
++++
Equally spaced vertically
~~~~~~~~~~~~~~~~~~~~~~~~~
End of explanation
"""
with slide():
draw_axes()
for c in ['gold', 'crimson', 'orangered']:
rectangle(x='auto', y='center', width=100, height=100,
color=c, edgecolor=None)
display_matplotlib(gcs())
"""
Explanation: Equally spaced horizontally
~~~~~~~~~~~~~~~~~~~~~~~~~~~
End of explanation
"""
with slide():
draw_axes()
for c in ['gold', 'crimson', 'orangered']:
rectangle(x='auto', y='auto', width=100, height=100,
color=c, edgecolor=None)
display_matplotlib(gcs())
"""
Explanation: Equally spaced in xy directions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
End of explanation
"""
with slide():
draw_axes()
text('x and y relative to width', x=0.5, y=0.5)
text('x and y relative to width, with aspect ratio for y', x=0.5,
y=0.5*(3/4.), width=300)
text('x and y given in pixels', x=100, y=100)
text('x and y given in centimetres', x='2cm', y='5cm')
display_matplotlib(gcs())
"""
Explanation: Absolute positioning
units
+++++
Absolute coordinates could be given as follow:
(int or float) <= 1.0, the position is a percent of the slide or group width
for x and y (by default, but could be changed).
(int or float) > 1.0, the position is in pixels.
Given as a string, the position is in pixels or in the unit given just after,
like '2cm'.
<div class="alert alert-info"><h4>Note</h4><p>For `y` < 1.0, the default will be changed in future version to be percent
of the height. To already change this in your slide you could add just
after importing Beampy:
>>> DEFAULT_Y['unit'] = 'height'</p></div>
End of explanation
"""
with slide():
draw_axes()
t1 = text('Top-left absolute positioning $$x=x^2$$', x=400, y=100)
t2 = text('Top-right absolute positioning $$x=x^2$$', x=right(400), y=200)
t3 = text('Middle-middle absolute positioning $$x=x^2$$', x=center(400), y=center(300))
t4 = text('Bottom-right absolute positioning $$x=x^2$$', x=right(0.5), y=bottom(0.6))
for t in [t1, t2, t3, t4]:
bounding_box(t)
display_matplotlib(gcs())
"""
Explanation: Anchors
+++++++
We could also change the anchor of the Beampy module using the center,
right, bottom function in the coordinate.
End of explanation
"""
with slide():
draw_axes()
texts_width = 200
r = rectangle(x='center', y='center', width=100, height=100,
color='crimson', edgecolor=None)
t1 = text('Centered 10 px below the rectangle', x=r.center+center(0),
y=r.bottom+10, width=texts_width, align='center')
t2 = text('Centered 10 px above the rectangle', x=r.center+center(0),
y=r.top-bottom(10), width=texts_width, align='center')
t3 = text('10 px left of the rectangle', x=r.left-right(10),
y=r.center+center(10), width=texts_width, align='center')
t4 = text('10 px right of the rectangle', x=r.right+10,
y=r.center+center(10), width=texts_width, align='center')
for t in [t1, t2, t3, t4]:
bounding_box(t)
display_matplotlib(gcs())
"""
Explanation: Relative positioning
When a Beampy module has been placed on a slide, we can position another
element relative to this first one. To do so, Beampy modules have methods to
refer to their anchors (module.left, module.right, module.top, module.bottom,
module.center).
End of explanation
"""
with slide():
draw_axes()
text('text x=20, y=0.5cm', x='20', y='0.5cm')
for i in range(2):
text('text x=-0, y=+0.5cm', x='-0', y='+0.5cm')
text('text x=25, y=0.3', x='25', y=0.3)
for i in range(2):
text('text x=+0, y=+0.5cm', x='+0', y='+0.5cm')
text('text x=25, y=0.5', x='25', y=0.5)
text('text x=+10, y=+0', x='+10', y='+0')
text('text x=+10, y=-0', x='+10', y='-0')
display_matplotlib(gcs())
"""
Explanation: Another way to do relative positioning is to use a string as the coordinate with
'+' or '-' before the shift and the unit. This will place the new Beampy
module relative to the previous one.
End of explanation
"""
with slide():
draw_axes()
t = text('centered text',
x={'anchor':'middle', 'shift':0.5},
y={'anchor':'middle', 'shift':0.5, 'unit':'height'})
bounding_box(t)
t = text('bottom right shift',
x={'anchor':'right', 'shift':30, 'align':'right'},
y={'anchor':'bottom', 'shift':30, 'align':'bottom'})
bounding_box(t)
display_matplotlib(gcs())
"""
Explanation: Coordinate as dictionary
Coordinates can also be given as a dictionary. The dictionary keys are the
following:
unit: ('px', 'pt', 'cm', 'width', 'height'), the unit of the shift value.
shift: float value, the amount of shifting.
reference: ('slide' or 'relative') 'relative' is used to make relative
positioning.
anchor: (top, bottom, left, right, middle) define the anchor position on the
module bounding-box.
align: (left, right or center for x) and (top, bottom or center for y) is used
to set the origin of slide axes.
End of explanation
"""
|
winpython/winpython_afterdoc | docs/installing_R.ipynb | mit | import os
import sys
import io
# downloading R may takes a few minutes (80Mo)
try:
import urllib.request as urllib2 # Python 3
except:
import urllib2 # Python 2
# specify R binary and (md5, sha1) hash
# R-3.6.1:
r_url = "https://cran.r-project.org/bin/windows/base/old/3.6.1/R-3.6.1-win.exe"
hashes=("f6ca2ecfc66a10a196991b6b6c4e91f6","df4ad3c36e193423ebf2d698186feded15777da1")
# specify target location
# tweak change in recent winpython
tool_base_directory=os.environ["WINPYDIR"]+"\\..\\t\\"
if not os.path.isdir(tool_base_directory):
tool_base_directory=os.environ["WINPYDIR"]+"\\..\\tools\\"
r_installer = tool_base_directory+os.path.basename(r_url)
os.environ["r_installer"] = r_installer
# Download
g = urllib2.urlopen(r_url)
with io.open(r_installer, 'wb') as f:
f.write(g.read())
g.close
g = None
#checking it's there
!dir %r_installer%
"""
Explanation: Installing R on WinPython (version of 2019-08-25)
Warning: as of 2019-08-25, the R installation is not supposed to support a move of the Winpython library;
see https://richpauloo.github.io/2018-05-16-Installing-the-R-kernel-in-Jupyter-Lab/
This procedure applies to Winpython (versions of December 2015 and after)
1 - Downloading R binary
End of explanation
"""
# checking it's the official R
import hashlib
def give_hash(of_file, with_this):
with io.open(r_installer, 'rb') as f:
return with_this(f.read()).hexdigest()
print (" "*12+"MD5"+" "*(32-12-3)+" "+" "*15+"SHA-1"+" "*(40-15-5)+"\n"+"-"*32+" "+"-"*40)
print ("%s %s %s" % (give_hash(r_installer, hashlib.md5) , give_hash(r_installer, hashlib.sha1),r_installer))
if give_hash(r_installer, hashlib.md5) == hashes[0] and give_hash(r_installer, hashlib.sha1) == hashes[1]:
print("looks good!")
else:
print("problem ! please check")
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# preparing Dos variables
os.environ["R_HOME"] = tool_base_directory+ "R\\"
os.environ["R_HOMEbin"]=os.environ["R_HOME"] + "bin"
# for installation we need this
os.environ["tmp_Rbase"]=os.path.join(os.path.split(os.environ["WINPYDIR"])[0] , 't','R' )
if 'amd64' in sys.version.lower():
r_comp ='/COMPONENTS="main,x64,translations'
else:
r_comp ='/COMPONENTS="main,i386,translations'
os.environ["tmp_R_comp"]=r_comp
# let's install it, if hashes do match
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# If you are "USB life style", or multi-winpython
# ==> CLICK the OPTION "Don't create a StartMenuFolder' <== (when it will show up)
!start cmd /C %r_installer% /DIR=%tmp_Rbase% %tmp_R_comp%
"""
Explanation: 2 - checking and Installing R binary in the right place
End of explanation
"""
import os
import sys
import io
# let's create a R launcher
r_launcher = r"""
@echo off
call %~dp0env.bat
rscript %*
"""
r_launcher_bat = os.environ["WINPYDIR"]+"\\..\\scripts\\R_launcher.bat"
# let's create a R init script
# in manual command line, you can use repos = c('http://irkernel.github.io/', getOption('repos'))
r_initialization = r"""
install.packages(c('repr', 'IRdisplay', 'evaluate', 'crayon', 'pbdZMQ', 'devtools', 'uuid', 'digest', 'stringr'), repos = c('http://cran.rstudio.com/', 'http://cran.rstudio.com/'))
devtools::install_github('IRkernel/IRkernel')
library('pbdZMQ')
library('repr')
library('IRkernel')
library('IRdisplay')
library('crayon')
library('stringr')
IRkernel::installspec()
"""
# IRkernel::installspec() # install for the current user:
# IRkernel::installspec(user = FALSE) # install system-wide
r_initialization_r = os.path.normpath(os.environ["WINPYDIR"]+"\\..\\scripts\\R_initialization.r")
for i in [(r_launcher,r_launcher_bat), (r_initialization, r_initialization_r)]:
with io.open(i[1], 'w', encoding = sys.getdefaultencoding() ) as f:
for line in i[0].splitlines():
f.write('%s\n' % line )
#check what we are going to do
print ("!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save " + r_initialization_r)
# Launch Rkernel setup
os.environ["r_initialization_r"] = r_initialization_r
!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save %r_initialization_r%
"""
Explanation: During Installation (if you wan't to move the R installation after)
Choose non default option "Yes (customized startup"
then after 3 screens, Select "Don't create a Start Menu Folder"
Un-select "Create a desktop icon"
Un-select "Save version number in registery"
<img src="https://raw.githubusercontent.com/stonebig/winpython_afterdoc/master/examples/images/r_setup_unclick_shortcut.GIF">
3 - create a R_launcher and install irkernel
End of explanation
"""
%load_ext rpy2.ipython
#vitals: 'dplyr', 'R.utils', 'nycflights13'
# installation takes 2 minutes
%R install.packages(c('dplyr','R.utils', 'nycflights13'), repos='http://cran.rstudio.com/')
"""
Explanation: 4- Install an R package via an IPython kernel
End of explanation
"""
!echo %R_HOME%
%load_ext rpy2.ipython
# avoid some pandas deprecation warning
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
%%R
library('dplyr')
library('nycflights13')
write.csv(flights, "flights.csv")
%R head(flights)
%R airports %>% mutate(dest = faa) %>% semi_join(flights) %>% head
"""
Explanation: 5- Small demo via R magic
End of explanation
"""
# essentials: 'tidyr', 'shiny', 'ggplot2', 'caret' , 'nnet'
# remaining of Hadley Wickahm "stack" (https://github.com/rstudio)
%R install.packages(c('tidyr', 'ggplot2', 'shiny','caret' , 'nnet'), repos='https://cran.rstudio.com/')
%R install.packages(c('knitr', 'purrr', 'readr', 'readxl'), repos='https://cran.rstudio.com/')
%R install.packages(c('rvest', 'lubridate', 'ggvis', 'readr','base64enc'), repos='https://cran.rstudio.com/')
# TRAINING = online training book http://r4ds.had.co.nz/ (or https://github.com/hadley/r4ds)
"""
Explanation: 6 - Installing the very best of R packages (optional, you will start to get a really big directory)
End of explanation
"""
%R install.packages(c('bindrcpp'), repos='http://cran.rstudio.com/')
"""
Explanation: 7 - Relaunch Jupyter Notebook to get a R kernel option
launch a new notebook of "R" type, and type in it:
library('dplyr')
library('nycflights13')
head(flights)
9 - To Un-install / Re-install R (or other trouble-shooting)
launch winpython\t\R\unins000.exe (was formerly winpython\tools\R\unins000.exe)
delete the directory winpython\t\R (was formerly winpython\tools\R)
re-install
End of explanation
"""
|
ajhenrikson/phys202-2015-work | assignments/assignment06/ProjectEuler17.ipynb | mit | def number_to_words(n):#pair programed with noah miller on this problem
"""Given a number n between 1-1000 inclusive return a list of words for the number."""
s=[]
o={1:'one',2:'two',3:'three',4:'four',5:'five',6:'six',7:'seven',8:'eight',9:'nine'}
    t={0:'ten',1:'eleven',2:'twelve',3:'thirteen',4:'fourteen',5:'fifteen',6:'sixteen',7:'seventeen',8:'eighteen',9:'nineteen'}
h={2:'twenty',3:'thirty',4:'forty',5:'fifty',6:'sixty',7:'seventy',8:'eighty',9:'ninety'}
i=list(int(x)for x in str(n))[::-1]
while len(i)<3:
        i.append(0) #turns all numbers into three digit numbers so as to run the program
if i[2]!=0:
s.append(o[i[2]]+' hundred')
if i[2]!=0 and i[1]!=0 or i[2]!=0 and i[0]!=0:
s.append('and')
if i[1]==1:
s.append(t[i[0]])
if i[1]>1 and i[0]!=0:
s.append(h[i[1]]+' '+ o[i[0]])
if i[1]>1 and i[0]==0:
s.append(h[i[1]])
if i[1]==0 and i[0]!=0:
s.append(o[i[0]])
if len(i)>3:
s.append('one thousand')
return ' '.join(s)
print (number_to_words(394))
"""
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
"""
assert number_to_words(1)=='one'
assert number_to_words(15)=='fifteen'
assert number_to_words(45)=='forty five'
assert number_to_words(394)=='three hundred and ninety four'
assert number_to_words(999)=='nine hundred and ninety nine'
assert True # use this for grading the number_to_words tests.
"""
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
"""
def count_letters(n):
"""Count the number of letters used to write out the words for 1-n inclusive."""
w=[]
for entry in number_to_words(n):
count=0
for char in entry:
if char !=' ':
count=count+1
w.append(count)
return sum(w)
print(count_letters(3))
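# Added illustration (not part of the graded solution): the Project Euler
# question asks for the cumulative total over 1-1000, which can be obtained by
# summing the per-number counts
print(sum(count_letters(i) for i in range(1, 1001)))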
"""
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
"""
assert count_letters(1)==3
assert count_letters(100)==10
assert count_letters(1000)==len('onethousand')
assert True # use this for grading the count_letters tests.
"""
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
"""
print(count_letters(1000))
assert True # use this for gradig the answer to the original question.
"""
Explanation: Finally, use your count_letters function to solve the original question.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/01fb0f5b44af7b68840573c40d1eec05/plot_read_and_write_raw_data.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(fname)
# Set up pick list: MEG + STI 014 - bad channels
want_meg = True
want_eeg = False
want_stim = False
include = ['STI 014']
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more
picks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim,
include=include, exclude='bads')
some_picks = picks[:5] # take 5 first
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
data, times = raw[some_picks, start:(stop + 1)]
# save 150s of MEG data in FIF file
raw.save('sample_audvis_meg_trunc_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
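# Added check (not in the original example): read the truncated file back in
# to confirm the save round-trips; the file name matches the save call above
raw_trunc = mne.io.read_raw_fif('sample_audvis_meg_trunc_raw.fif')
print(raw_trunc)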
"""
Explanation: Reading and writing raw files
In this example, we read a raw file. Plot a segment of MEG data
restricted to MEG channels. And save these data in a new
raw file.
End of explanation
"""
raw.plot()
"""
Explanation: Show MEG data
End of explanation
"""
|
ricklupton/sankeyview | docs/tutorials/system-boundary.ipynb | mit | import pandas as pd
flows = pd.read_csv('simple_fruit_sales.csv')
from floweaver import *
# Set the default size to fit the documentation better.
size = dict(width=570, height=300)
# Same partitions as the Quickstart tutorial
farms_with_other = Partition.Simple('process', [
'farm1',
'farm2',
'farm3',
('other', ['farm4', 'farm5', 'farm6']),
])
customers_by_name = Partition.Simple('process', [
'James', 'Mary', 'Fred', 'Susan'
])
# Define the nodes, this time setting the partition from the start
nodes = {
'farms': ProcessGroup(['farm1', 'farm2', 'farm3',
'farm4', 'farm5', 'farm6'],
partition=farms_with_other),
'customers': ProcessGroup(['James', 'Mary', 'Fred', 'Susan'],
partition=customers_by_name),
}
# Ordering and bundles as before
ordering = [
['farms'], # put "farms" on the left...
['customers'], # ... and "customers" on the right.
]
bundles = [
Bundle('farms', 'customers'),
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
"""
Explanation: System boundaries
Often we don't want to show all of the data in one Sankey diagram: you focus on one part of the system. But we still want conservation of mass (or whatever is being shown in the diagram) to work, so we end up with flows to & from "elsewhere". These can also be thought of as imports and exports.
Let's start by recreating the Quickstart example:
End of explanation
"""
nodes['farms'].selection = [
'farm1', 'farm3', 'farm4', 'farm5', 'farm6'
]
weave(sdd, flows).to_widget(**size)
"""
Explanation: What happens if we remove farm2 from the ProcessGroup?
End of explanation
"""
nodes['customers'].selection = ['James', 'Mary']
weave(sdd, flows).to_widget(**size)
"""
Explanation: The flow is still there! But it is labelled with a little arrow to show that it is coming "from elsewhere". This is important because we are still showing Susan and Fred in the diagram, and they get fruit from farm2. If we didn't show those flows, Susan's and Fred's inputs and outputs would not balance.
Try now removing Susan and Fred from the diagram:
End of explanation
"""
# Define a new Waypoint
nodes['exports'] = Waypoint(title='exports here')
# Update the ordering to include the waypoint
ordering = [
['farms'], # put "farms" on the left...
['customers', 'exports'], # ... and "exports" below "customers"
] # on the right.
# Add a new bundle from "farms" to Elsewhere, via the waypoint
bundles = [
Bundle('farms', 'customers'),
Bundle('farms', Elsewhere, waypoints=['exports']),
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
"""
Explanation: Now they're gone, we no longer see the incoming flows from farm2. But we see some outgoing flows "to elsewhere" from farm3 and the other group. This is because farm3 is within the system boundary -- it is shown in the diagram -- so its output flow has to go somewhere.
Controlling Elsewhere flows
These flows are added automatically to make sure that mass is conserved, but because they are automatic, we have little control over them. By explicitly adding a flow to or from Elsewhere to the diagram, we can control where they appear and what they look like.
To do this, add a Waypoint for the outgoing flows to 'pass through' on their way across the system boundary:
End of explanation
"""
ordering = [
['farms'],
['exports', 'customers'],
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
"""
Explanation: This is pretty similar to what we had already, but now that the waypoint is explicitly listed as part of the SankeyDefinition, we have more control over it.
For example, we can put the exports above James and Mary by changing the ordering:
End of explanation
"""
fruits_by_type = Partition.Simple('type', ['apples', 'bananas'])
nodes['exports'].partition = fruits_by_type
weave(sdd, flows).to_widget(**size)
"""
Explanation: Or we can partition the exports Waypoint to show how much of it is apples and bananas:
End of explanation
"""
ordering = [
[[], ['farms' ]],
[['exports'], ['customers']],
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
"""
Explanation: Horizontal bands
Often, import/exports and loss flows are shown in a separate horizontal "band" either above or below the main flows. We can do this by modifying the ordering a little bit.
The ordering style we have used so far looks like this:
python
ordering = [
[list of nodes in layer 1], # left-hand side
[list of nodes in layer 2],
...
[list of nodes in layer N], # right-hand side
]
But we can add another layer of nesting to make it look like this:
python
ordering = [
# |top band| |bottom band|
[ [........], [...........] ], # left-hand side
[ [........], [...........] ],
...
[ [........], [...........] ], # right-hand side
]
Here's an example:
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_algo/td1a_correction_session7_edition.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
def dist_hamming(m1,m2):
d = 0
for a,b in zip(m1,m2):
if a != b :
d += 1
return d
dist_hamming("close", "cloue")
"""
Explanation: 1A.algo - The edit distance (correction)
Correction.
End of explanation
"""
def dist_hamming(m1,m2):
d = abs(len(m1)-len(m2))
for a,b in zip(m1,m2):
if a != b :
d += 1
return d
dist_hamming("close", "cloue"), dist_hamming("close", "clouet")
"""
Explanation: Exercise 1: how can we handle words of different lengths?
We can pad the shorter word with spaces, which amounts to adding the difference in length to the result.
End of explanation
"""
def distance_edition_rec(m1,m2):
if max(len(m1), len(m2)) <= 2 or min(len(m1), len(m2)) <= 1:
return dist_hamming(m1,m2)
else:
collecte = []
for i in range(1,len(m1)):
for j in range(1,len(m2)):
d1 = distance_edition_rec(m1[:i],m2[:j])
d2 = distance_edition_rec(m1[i:],m2[j:])
collecte.append(d1+d2)
return min(collecte)
distance_edition_rec("longmot", "liongmot")
distance_edition_rec("longmot", "longmoit")
"""
Explanation: Exercice 2 : implรฉmenter une distance ร partir de cette รฉgalitรฉ
premiรจre option rรฉcursive
Comme l'รฉcriture est rรฉcursive, on peut essayer mรชme si cela n'est pas optimal (pas optimal du tout).
End of explanation
"""
def distance_edition_rec_cache(m1,m2,cache=None):
if cache is None:
cache = {}
if (m1,m2) in cache:
return cache[m1,m2]
if max(len(m1), len(m2)) <= 2 or min(len(m1), len(m2)) <= 1:
cache[m1,m2] = dist_hamming(m1,m2)
return cache[m1,m2]
else:
collecte = []
for i in range(1,len(m1)):
for j in range(1,len(m2)):
d1 = distance_edition_rec_cache(m1[:i],m2[:j], cache)
d2 = distance_edition_rec_cache(m1[i:],m2[j:], cache)
collecte.append(d1+d2)
cache[m1,m2] = min(collecte)
return cache[m1,m2]
distance_edition_rec_cache("longmot", "liongmot"), distance_edition_rec_cache("longmot", "longmoit")
%timeit distance_edition_rec("longmot", "longmoit")
%timeit distance_edition_rec_cache("longmot", "longmoit")
"""
Explanation: What happens when we remove the condition or min(len(m1), len(m2)) <= 1?
Non-recursive version that memorizes the results
End of explanation
"""
def distance_edition_rec_cache_insecable(m1,m2,cache=None):
if cache is None:
cache = {}
if (m1,m2) in cache:
return cache[m1,m2]
if max(len(m1), len(m2)) <= 2 or min(len(m1), len(m2)) <= 1:
cache[m1,m2] = dist_hamming(m1,m2)
return cache[m1,m2]
else:
i = len(m1)
j = len(m2)
d1 = distance_edition_rec_cache_insecable(m1[:i-2],m2[:j-1], cache) + dist_hamming(m1[i-2:], m2[j-1:])
d2 = distance_edition_rec_cache_insecable(m1[:i-1],m2[:j-2], cache) + dist_hamming(m1[i-1:], m2[j-2:])
d3 = distance_edition_rec_cache_insecable(m1[:i-1],m2[:j-1], cache) + dist_hamming(m1[i-1:], m2[j-1:])
cache[m1,m2] = min(d1,d2,d3)
return cache[m1,m2]
distance_edition_rec_cache_insecable("longmot", "liongmot"), distance_edition_rec_cache_insecable("longmot", "longmoit")
%timeit distance_edition_rec_cache_insecable("longmot", "longmoit")
"""
Explanation: Il apparaรฎt qu'on perd un temps fou dans la premiรจre version ร recalculer un grand nombre de fois les mรชmes distances. Conserver ces rรฉsultats permet d'aller beaucoup plus vite.
Exercice 3 : implรฉmenter la distance d'รฉdition
version rรฉcursive avec cache
On reprend la derniรจre version en la modificant pour ne tenir compte des mots insรฉcables.
End of explanation
"""
def distance_edition_insecable(m1,m2,cache=None):
dist = {}
dist[-2,-1] = 0
dist[-1,-2] = 0
dist[-1,-1] = 0
for i in range(0,len(m1)):
dist[i,-1] = i
dist[i,-2] = i
for j in range(0,len(m2)):
dist[-1,j] = j
dist[-2,j] = j
for i in range(0,len(m1)):
for j in range(0,len(m2)):
d1 = dist[i-2,j-1] + dist_hamming(m1[i-2:i], m2[j-1:j])
d2 = dist[i-1,j-2] + dist_hamming(m1[i-1:i], m2[j-2:j])
d3 = dist[i-1,j-1] + dist_hamming(m1[i-1:i], m2[j-1:j])
dist[i,j] = min(d1,d2,d3)
return dist[len(m1)-1, len(m2)-1]
distance_edition_insecable("longmot", "liongmot"), distance_edition_insecable("longmot", "longmoit")
%timeit distance_edition_insecable("longmot", "longmoit")
"""
Explanation: It is even faster.
Non-recursive version
The non-recursive version is easier to devise in this case.
End of explanation
"""
def distance_edition(m1,m2,cache=None):
dist = {}
dist[-1,-1] = 0
for i in range(0,len(m1)):
dist[i,-1] = i
for j in range(0,len(m2)):
dist[-1,j] = j
for i, c in enumerate(m1):
for j, d in enumerate(m2):
d1 = dist[i-1,j] + 1 # insertion
d2 = dist[i,j-1] + 1 # suppression
x = 0 if c == d else 1
d3 = dist[i-1,j-1] + x
dist[i,j] = min(d1,d2,d3)
return dist[len(m1)-1, len(m2)-1]
distance_edition("longmot", "liongmot"), distance_edition("longmot", "longmoit")
%timeit distance_edition("longmot", "longmoit")
"""
Explanation: Difference with the Wikipedia algorithm
The Hamming distance does not appear in the algorithm described on the Wikipedia page. That is because the Hamming distance between a one-character word and a two-character word is decomposed into one comparison plus one insertion (or one deletion).
End of explanation
"""
|
cniedotus/Python_scrape | Python3_tutorial.ipynb | mit | width = 20
height = 5*9
width * height
"""
Explanation: <center> Python and MySQL tutorial </center>
<center> Author: Cheng Nie </center>
<center> Check chengnie.com for the most recent version </center>
<center> Current Version: Feb 18, 2016</center>
Python Setup
Since most students in this class use Windows 7, I will use Windows 7 for illustration of the setup. Setting up the environment in Mac OS and Linux should be similar. Please note that the code should produce the same results whichever operating system (even on your smart phone) you are using because Python is platform independent.
Download the Python 3.5 version of Anaconda that matches your operating system from this link. You can accept the default options during installation. To see if your Windows is 32 bit or 64 bit, check here
You can save and run this document using the Jupyter notebook (previously known as IPython notebook). Another tool that I recommend would be PyCharm, which has a free community edition.
This is a tutorial based on the official Python Tutorial for Python 3.5.1. If you need a little more motivation to learn this programming language, consider reading this article.
Numbers
End of explanation
"""
tax = 8.25 / 100
price = 100.50
price * tax
price + _
round(_, 2)
"""
Explanation: Calculator
End of explanation
"""
print('spam email')
"""
Explanation: Strings
End of explanation
"""
# This would cause error
print('doesn't')
# One way of doing it correctly
print('doesn\'t')
# Another way of doing it correctly
print("doesn't")
"""
Explanation: show ' and " in a string
End of explanation
"""
print('''
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
''')
print('''Cheng highly recommends Python programming language''')
"""
Explanation: span multiple lines
End of explanation
"""
word = 'HELP' + 'A'
word
"""
Explanation: slice and index
End of explanation
"""
word[0]
word[4]
# endding index not included
word[0:2]
word[2:4]
# length of a string
len(word)
"""
Explanation: Index in the Python way
End of explanation
"""
a = ['spam', 'eggs', 100, 1234]
a
a[0]
a[3]
a[2:4]
sum(a[2:4])
"""
Explanation: List
End of explanation
"""
a
a[2] = a[2] + 23
a
"""
Explanation: Built-in functions like sum and len are explained in the document too. Here is a link to it.
Mutable
End of explanation
"""
q = [2, 3]
p = [1, q, 4]
p
len(p)
p[1]
p[1][0]
"""
Explanation: Nest lists
End of explanation
"""
x=(1,2,3,4)
x[0]
x[0]=7 # it will raise error since tuple is immutable
"""
Explanation: tuple
similar to list, but immutable (element cannot be changed)
End of explanation
"""
tel = {'jack': 4098, 'sam': 4139}
tel['dan'] = 4127
tel
tel['jack']
del tel['sam']
tel
tel['mike'] = 4127
tel
# Is dan in the dict?
'dan' in tel
for key in tel:
print('key:', key, '; value:', tel[key])
import collections
od = collections.OrderedDict(sorted(tel.items()))
od
"""
Explanation: dict
End of explanation
"""
x = int(input("Please enter an integer for x: "))
if x < 0:
x = 0
print('Negative; changed to zero')
elif x == 0:
print('Zero')
elif x == 1:
print('Single')
else:
print('More')
"""
Explanation: Quiz: how to print the tel dict sorted by the key? (One possible answer is sketched in the code below.)
Control of flow
if
Ask the user to input a number: if it is negative, set x to 0 and report it; if it is 0 print 'Zero'; if it is 1 print 'Single'; otherwise print 'More'.
End of explanation
"""
# multiple assignment to assign two variables at the same time
a, b = 0, 1
while a < 10:
print(a)
a, b = b, a+b
"""
Explanation: while
Fibonacci series: the sum of two consecutive elements defines the next one, with the first two elements being 0 and 1.
End of explanation
"""
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for i in words:
print(i, len(i))
"""
Explanation: for
End of explanation
"""
# crawl_UTD_reviews
# Author: Cheng Nie
# Email: me@chengnie.com
# Date: Feb 8, 2016
# Updated: Feb 12, 2016
from urllib.request import urlopen
num_pages = 2
reviews_per_page = 20
# the file we will save the rating and date
out_file = open('UTD_reviews.csv', 'w')
# the url that we need to locate the page for UTD reviews
url = 'http://www.yelp.com/biz/university-of-texas-at-dallas-\
richardson?start={start_number}'
# the three string patterns we just explained
review_start_pattern = '<div class="review-wrapper">'
rating_pattern = '<i class="star-img stars_'
date_pattern = '"datePublished" content="'
reviews_count = 0
for page in range(num_pages):
print('processing page', page)
# open the url and save the source code string to page_content
html = urlopen(url.format(start_number = page * reviews_per_page))
page_content = html.read().decode('utf-8')
# locate the beginning of an individual review
review_start = page_content.find(review_start_pattern)
while review_start != -1:
# it means there at least one more review to be crawled
reviews_count += 1
# get the rating
cut_front = page_content.find(rating_pattern, review_start) \
+ len(rating_pattern)
cut_end = page_content.find('" title="', cut_front)
rating = page_content[cut_front:cut_end]
# get the date
cut_front = page_content.find(date_pattern, cut_end) \
+ len(date_pattern)
cut_end = page_content.find('">', cut_front)
date = page_content[cut_front:cut_end]
# save the data into out_file
out_file.write(','.join([rating, date]) + '\n')
review_start = page_content.find(review_start_pattern, cut_end)
print('crawled', reviews_count, 'reviews so far')
out_file.close()
"""
Explanation: Crawl the reviews for UT Dallas at Yelp.com
The University of Texas at Dallas is reviewed on Yelp.com. The page shows that it has attracted 38 reviews so far from various reviewers. You learn from the webpage that Yelp displays at most 20 recommended reviews per page, so we need to go to page 2 to see reviews 21 to 38. You notice that the URL in the address box of your browser changes when you click on the next page. Previously, on page 1, the URL was:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson
On page 2, the URL is:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson?start=20
You learn that Yelp probably uses this ?start=20 to skip (or offset, in MySQL language) the first 20 records and show you the next 18 reviews. You can use this pattern of going to the next page to enumerate all the pages of a business on Yelp.com.
In this example, we are going to get the rating (number of stars) and the date of each of these 38 reviews.
The general procedure to crawl any web page is the following:
Look for the string patterns preceding and succeeding the information you are looking for in the source code of the page (the html file).
Write a program to enumerate (with a for or while loop) all the pages.
For this example, I made a screenshot with my annotations to illustrate the critical patterns in the Yelp page for UTD reviews.
review_start_pattern is a variable that stores the string '<div class="review-wrapper">' to locate the beginning of an individual review.
rating_pattern is a variable that stores the string '<i class="star-img stars_' to locate the rating.
date_pattern is a variable that stores the string '"datePublished" content="' to locate the date of the rating.
It takes some trial and error to figure out which string patterns can reliably locate the information you need in an html page. For example, I found that '<div class="review-wrapper">' appears exactly 20 times in the webpage, which is a good indication that it corresponds to the 20 individual reviews on the page (the review-wrapper tag seems to imply that too). A stripped-down illustration of this find-and-slice pattern follows right below.
End of explanation
"""
def fib(n): # write Fibonacci series up to n
"""Print a Fibonacci series up to n."""
a, b = 0, 1
while a < n:
print(a)
a, b = b, a+b
fib(200)
fib(2000000000000000) # do not need to worry about the type of a,b
"""
Explanation: Define function
End of explanation
"""
# output for eyeballing the data
import string
import random
# fix the pseudo-random sequences for easy replication
# It will generate the same random sequences
# of nubmers/letters with the same seed.
random.seed(123)
for i in range(50):
# Data values separated by comma(csv file)
print(i+1,random.choice(string.ascii_uppercase),
random.choice(range(6)), sep=',')
# write the data to a file called data.csv
random.seed(123)
out_file=open('data.csv','w')
columns=['id','name','age']
out_file.write(','.join(columns)+'\n')
for i in range(50):
row=[str(i+1),random.choice(string.ascii_uppercase),
str(random.choice(range(6)))]
out_file.write(','.join(row)+'\n')
else:
out_file.close()
# load data back into Python
for line in open('data.csv', 'r'):
print(line)
# To disable to the new line added for each print
# use the end parameter in print function
for line in open('data.csv', 'r'):
print(line, end = '')
"""
Explanation: Data I/O
Create some data in Python and populate the database with the created data. We want to create a table with 3 columns: id, name, and age to store information about 50 kids in a day care.
The various modules that extend the basic Python functions are indexed here.
End of explanation
"""
These commands are executed in MySQL query tab, not in Python.
In mysql, you need to end all commands with ;
#
# ----------------------- In MySQL ------------------
# display the database
show databases;
# create a database named test
create database test;
# choose a database for future commands
use test;
# display the tables in test database
show tables;
# create a new table named example
create table example(
id int not null,
name varchar(30),
age tinyint,
primary key(id));
# now we should have the example table
show tables;
# how was the table example defined again?
desc example;
# is there anything in the example table?
select * from example;
# import csv file into MySQL database
load data local infile "C:\\Users\\cxn123430\\Downloads\\data.csv" into table test.example FIELDS TERMINATED BY ',' lines terminated by '\r\n' ignore 1 lines;
# is there anything now?
select * from example;
# drop the table
drop table example;
# does the example table still exist?
show tables;
"""
Explanation: MySQL
Install MySQL 5.7 Workbench first following this link. You might also need to install the prerequisites listed here before you can install the Workbench. The Workbench is an interface for interacting with the MySQL database. The actual MySQL database server requires a second step: run the MySQL Installer, then add and install the MySQL servers using the Installer. You can accept the default options during installation. Later, you will connect to MySQL using the password you set during the installation and configuration. I set the password to be pythonClass.
The documentation for MySQL is here.
To get comfortable with it, you might find this tutorial of Structured Query Language(SQL) to be helpful.
End of explanation
"""
#
# ----------------------- In Windows command line(cmd) ------------------
conda install mysql-connector-python
"""
Explanation: Quiz: import the crawled Yelp review file UTD_reviews.csv into a table in your database. (A sketch of one possible approach is given below.)
Use Python to access the MySQL database
Since the official MySQL 5.7 only provides support for Python up to version 3.4 as of the writing of this tutorial, we need to install a package named mysql-connector-python to provide support for the cutting-edge Python 3.5. Execute the following line in the Windows command line to install it.
This is relatively easy since you have Anaconda installed. We can use the conda command to install that package in the Windows command line.
End of explanation
"""
#
# ----------------------- In Python ------------------
# access table from Python
# connect to MySQL in Python
import mysql.connector
cnx = mysql.connector.connect(user='root',
password='pythonClass',
database='test')
# All DDL (Data Definition Language) statements are
# executed using a handle structure known as a cursor
cursor = cnx.cursor()
# create a table named example
cursor.execute('''create table example(
id int not null,
name varchar(30),
age tinyint,
primary key(id));''')
cnx.commit()
# write the same data to the example table without saving a csv file
query0_template = '''insert into example (id, name, age) \
values ({id_num},"{c_name}",{c_age});'''
random.seed(123)
for i in range(50):
query0 = query0_template.format(id_num = i+1,
c_name = random.choice(string.ascii_uppercase),
c_age = random.choice(range(6)))
print(query0)
cursor.execute(query0)
cnx.commit()
"""
Explanation: Remember that we used Python to save the 50 kids' information into a csv file named data.csv first and then used the load command in MySQL to import the data? We don't actually need to save the data.csv file to the hard disk. We can "load" the same data into the database without leaving Python.
End of explanation
"""
#
# ----------------------- In MySQL ------------------
# To get the totoal number of records
select count(*) from example;
# To get age histgram
select distinct age, count(*) from example group by age;
# create a copy of the example table for modifying.
create table e_copy select * from example;
select * from e_copy;
# note that the primary key is not copied to the e_copy table
desc e_copy;
# add the primary key to e_copy table using the alter command
alter table e_copy add primary key(id);
# is it done correctly?
desc e_copy;
# does MySQL take the primary key seriously?
insert into e_copy (id, name, age) values (null,'P',6);
insert into e_copy (id, name, age) values (3,'P',6);
# alright, let's insert something else
insert into e_copy (id, name, age) values (51,'P',6);
insert into e_copy (id, name, age) values (52,'Q',null);
insert into e_copy (id, name, age) values (54,'S',null),(55,'T',null);
insert into e_copy (id, name) values (53,'R');
# who is the child with id of 53?
select * from e_copy where id = 53;
# update the age for this child.
update e_copy set age=3 where id=53;
select * from e_copy where id = 53;
# what's inside the table now?
select * from e_copy;
"""
Explanation: To get a better understanding of the table we just created, we will use the MySQL command line again.
End of explanation
"""
#
# ----------------------- In Python ------------------
# query all the content in the e_copy table
cursor.execute('select * from e_copy;')
for i in cursor:
print(i)
"""
Explanation: Again, you can actually do everything in Python without going to the MySQL workbench.
End of explanation
"""
#
# ----------------------- In Python ------------------
# # example for adding new info for existing record
cursor.execute('alter table e_copy add mother_name varchar(1) default null')
cnx.commit()
query1_template='update e_copy set mother_name="{m_name}" where id={id_num};'
random.seed(333)
for i in range(55):
query1=query1_template.format(m_name = random.choice(string.ascii_uppercase),id_num = i+1)
print(query1)
cursor.execute(query1)
cnx.commit()
#
# ----------------------- In Python ------------------
# example for insert new records
query2_template='insert into e_copy (id, name,age,mother_name) \
values ({id_num},"{c_name}",{c_age},"{m_name}")'
for i in range(10):
query2=query2_template.format(id_num = i+60,
c_name = random.choice(string.ascii_uppercase),
c_age = random.randint(0,6),
m_name = random.choice(string.ascii_uppercase))
print(query2)
cursor.execute(query2)
cnx.commit()
"""
Explanation: Now we want to add a new column, mother_name, to record the mother's name for each child in the day care.
End of explanation
"""
#
# ----------------------- In Python ------------------
# query all the content in the e_copy table
cursor.execute('select * from e_copy;')
for i in cursor:
print(i)
#
# ----------------------- In MySQL ------------------
Use the GUI to export the database into a self-contained file (the extension name would be sql)
"""
Explanation: Check if you've updated the data successfully in the MySQL database from Python
End of explanation
"""
import re
infile=open('digits.txt','r')
content=infile.read()
print(content)
"""
Explanation: Regular expression in Python
Before you run this part, you need to download the digits.txt and spaces.txt files to the same folder as this notebook
What's in the digits.txt file?
End of explanation
"""
# Find all the numbers in the file
numbers=re.findall('\d+',content)
for n in numbers:
print(n)
"""
Explanation: How can I find all the numbers in a file like digits.txt?
End of explanation
"""
# find equations
equations=re.findall('(\d+)=\d+',content)
for e in equations:
print(e)
"""
Explanation: How can I find all the equations?
End of explanation
"""
# substitute equations to correct them
# use the left hand side number (raw strings keep the \1 backreferences intact)
print(re.sub(r'(\d+)=\d+', r'\1=\1', content))
# another way to substitute equations to correct them
# use the right hand side number
print(re.sub('\d+=(\d+)','\\1=\\1',content))
# Save to file
print(re.sub('(\d+)=\d+','\\1=\\1',content), file = open('digits_corrected.txt', 'w'))
"""
Explanation: The equations seem to be incorrect, how can I correct them without affecting other text information?
End of explanation
"""
infile=open('spaces.txt','r')
content=infile.read()
print(content)
print(re.sub('[\t ]+','\t',content))
print(re.sub('[\t ]+','\t',content), file = open('spaces_corrected.txt', 'w'))
"""
Explanation: Preprocessing a text file with various types of spaces.
End of explanation
"""
word = 'HELP' + 'A'
word
# first index default to 0 and second index default to the size
word[:2]
# It's equivalent to
word[0:2]
# Everything except the first two characters
word[2:]
# It's equivalent to
word[2:len(word)]
"""
Explanation: More about index
End of explanation
"""
# start: end: step
word[0::2]
# It's equivalent to
word[0:len(word):2]
"""
Explanation: How about selecting every other character?
End of explanation
"""
word[-1] # The last character
word[-2] # The last-but-one character
word[-2:] # The last two characters
word[:-2] # Everything except the last two characters
"""
Explanation: Negative index
End of explanation
"""
a = ['spam', 'eggs', 100, 1234]
a
a[-2]
a[1:-1]
a[:2] + ['bacon', 2*2]
3*a[:3] + ['Boo!']
"""
Explanation: More about list
End of explanation
"""
# Replace some items:
a[0:2] = [1, 12]
a
# Remove some:
del a[0:2] # or a[0:2] = []
a
# create some copies for change
b = a.copy()
c = a.copy()
# Insert some:
b[1:1] = ['insert', 'some']
b
# inserting at one position is not the same as changing one element
c[1] = ['insert', 'some']
c
"""
Explanation: Versatile features of a list
End of explanation
"""
# loop way
cubes = []
for x in range(11):
cubes.append(x**3)
cubes
# map way
def cube(x):
return x*x*x
list(map(cube, range(11)))
# list comprehension way
[x**3 for x in range(11)]
"""
Explanation: How to get the third power of integers between 0 and 10.
End of explanation
"""
result = []
for i in range(11):
if i%2 == 0:
result.append(i)
else:
print(result)
# Use if in list comprehension
[i for i in range(11) if i%2==0]
l=[1,3,5,6,8,10]
[i for i in l if i%2==0]
"""
Explanation: Target: find the even numbers between 0 and 10
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/9794ea6d3b7fc21947e9529fb55249c9/plot_read_proj.ipynb | bsd-3-clause | # Author: Joan Massich <mailsik@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import read_proj
from mne.io import read_raw_fif
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ecg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'
"""
Explanation: ==============================================
Read and visualize projections (SSP and other)
==============================================
This example shows how to read and visualize Signal Subspace Projectors (SSP)
vector. Such projections are sometimes referred to as PCA projections.
End of explanation
"""
raw = read_raw_fif(fname)
empty_room_proj = raw.info['projs']
# Display the projections stored in `info['projs']` from the raw object
raw.plot_projs_topomap()
"""
Explanation: Load the FIF file and display the projections present in the file. Here the
projections are added to the file during the acquisition and are obtained
from empty room recordings.
End of explanation
"""
fig, axes = plt.subplots(1, len(empty_room_proj))
for proj, ax in zip(empty_room_proj, axes):
proj.plot_topomap(axes=ax)
"""
Explanation: Display the projections one by one
End of explanation
"""
assert isinstance(empty_room_proj, list)
mne.viz.plot_projs_topomap(empty_room_proj)
"""
Explanation: Use the function in mne.viz to display a list of projections
End of explanation
"""
# read the projections
ecg_projs = read_proj(ecg_fname)
# add them to raw and plot everything
raw.add_proj(ecg_projs)
raw.plot_projs_topomap()
"""
Explanation: As shown in the tutorial on visualizing raw data,
sphx_glr_auto_tutorials_plot_visualize_raw.py,
the ECG projections can be loaded from a file and added to the raw object
End of explanation
"""
fig, axes = plt.subplots(1, len(ecg_projs))
for proj, ax in zip(ecg_projs, axes):
if proj['desc'].startswith('ECG-eeg'):
proj.plot_topomap(axes=ax, info=raw.info)
else:
proj.plot_topomap(axes=ax)
"""
Explanation: Displaying the projections from a raw object requires no extra information
since all the layout information is present in raw.info.
MNE is able to automatically determine the layout for some magnetometer and
gradiometer configurations but not the layout of EEG electrodes.
Here we display the ecg_projs individually and we provide extra parameters
for EEG. (Notice that planar projection refers to the gradiometers and axial
refers to magnetometers.)
Notice that the conditional is just for illustration purposes. We could use
raw.info in all cases to avoid the guesswork in plot_topomap and ensure
that the right layout is always found.
End of explanation
"""
possible_layouts = [mne.find_layout(raw.info, ch_type=ch_type)
for ch_type in ('grad', 'mag', 'eeg')]
mne.viz.plot_projs_topomap(ecg_projs, layout=possible_layouts)
"""
Explanation: The correct layout or a list of layouts from where to choose can also be
provided. Just for illustration purposes, here we generate the
possible_layouts from the raw object itself, but it can come from somewhere
else.
End of explanation
"""
|
tdeoskar/NLP1-2017 | lab1/lab1.ipynb | gpl-3.0 | ## YOUR CODE HERE ##
"""
Explanation: Lab 1: Text Corpora and Language Modelling
This lab is meant to help you get familiar with some language data, and use this data to estimate N-gram language models
First you will use the Penn Treebank, which is a collection of articles from the newspaper The Wall Street Journal. The idea is to examine the data and notice interesting properties. This will not take more than a few lines of code.
Then you will use a corpus consisting of TedX talks. This you will use to estimate an N-gram language model for different orders of N, and to use this model for some tasks.
The datasets are on blackboard under course materials. Download the zip and make sure to put the files in the same directory as the notebook.
Rules
The lab exercises should be made in groups of two people.
The deadline is Tuesday 7 nov 16:59.
The assignment should submitted to Blackboard as .ipynb. Only one submission per group.
The filename should be lab1_lastname1_lastname2.ipynb, so for example lab1_Jurafsky_Martin.ipynb.
The notebook is graded on a scale of 0-10. The number of points for each question is indicated in parantheses.
The questions marked optional are not graded; they are an additional challenge for those interested in going the extra mile.
Notes on implementation:
You should write your code and answers in this iPython Notebook (see http://ipython.org/notebook.html for reference material). If you have problems, please contact your teaching assistant.
Use only one cell for code and one cell for markdown answers!
Put all code in the cell with the # YOUR CODE HERE comment.
For theoretical question, put your solution in the YOUR ANSWER HERE cell.
Test your code and make sure we can run your notebook
1. Penn treebank
Exercise 1.1 (40 points, 5 points per subquestion )
You are provided with a corpus containing words with their Part-of-Speech tags (POS-tags for short). The format is
word|POS (one sentence per line) and the file name is sec02-22.gold.tagged. This data is extracted from Sections 02-22 from the Penn Treebank: these sections are most commonly used for training statistical models like POS-taggers and parsers.
[Hint] Figure 10.1 in chapter 10 of Jurafsky and Martin (see here) holds a summary of the 45 POS-tags used in the Penn Treebank tagset together with their meaning and some examples. (If you are keen on learning more about the word classes represented by POS-tags and their definitions you can do a little reading ahead for next week and already have a look at section 10.1 of the same chapter).
[Hint] the Python library collections has an object called Counter which will come in handy for this exercise.
(a) How large is the corpus? (i.e. how many tokens). And what is the size of the vocabulary used in this corpus?
Estimate the vocabulary size both by lowercasing all the words as well as by leaving the words in their original orthography. What is an advantage of lowercasing all the words in your corpus? What is a notable downside? Give examples.
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: YOUR ANSWER HERE
For the rest of this exercise you should use the original orthography of the data when answering the questions.
(b) Plot a graph of word frequency versus rank of a word, in this corpus. Does this corpus obey Zipfโs law?
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: (c) What are the 20 most common words in the corpus and how often do they occur? What is the 50th most common word, the 100th and the 1000th and how often do they occur?
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: (d) How many different Part-of-speech tags are present in the corpus?
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: (e) Print a list of the 10 most commonly occurring POS tags in the data. For each of these POS tags, what are the 3 most common words that belong to that class?
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: (f) A single word may have several POS-tags. For example, record can be a both a noun (buy a record) or a verb (record a lecture). This make POS-tags extremely useful for disambiguation.
What percentage of the words in the vocabulary is ambiguous? (i.e. have more than one POS tag?) What are the 10 most frequent combinations of POS tags in the case of ambitguity? Which words are most ambiguous? Give some of them.
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: (g) Print some of these words with their multiple POS-tags. Do you understand the ambiguity? Use figure 10.1 mentioned above to interpret the POS-tags.
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: (h) Ambiguous words do not account for a great percentage of the vocabulary. Yet they are among the most commonly occurring words of the English language. What percentage of the dataset is ambiguous?
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: Exercise 1.2 (10 points, 5 per subquestion)
You are also provided with another file called sec00.gold.tagged.
Section 00 of the Penn Treebank is typically used as development data.
(a) How many unseen words are present in the development data (i.e., words that have not occurred in the training data)?
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: (b) What are the three POS tag categories that the most unseen words belong to?
End of explanation
"""
from collections import defaultdict
d = defaultdict(float)
d["new key"]
"""
Explanation: 2. Language Models
This part of the lab will be covered in the Wednesday lecture. If you have prior exposure to NLP, go ahead and finish this part! If you don't, start anyway, and this part will be clear after the lecture.
Reference chapter 4 of J&M Language Modeling with N-Grams.
Models that assign probabilities to sequences of words are called language
models or LMs. The simplest model that assigns probabilities to sentences and sequences of words is the N-gram model.
Recall that an N-gram language model uses conditional probabilities of the form
$$P(w_k \mid w_{k-N+1} \dots w_{k-1})$$
to approximate the full joint probability
$$P(w_1 \dots w_n)$$
of a sequence of words $w_1 \dots w_n$.
The easiest way of obtaining estimates for the probabilities $P(w_k \mid w_{k-N+1} \dots w_{k-1})$ is to use the maximum likelihood estimate or MLE, a widely used statistical estimation method (read more). You count and normalize:
$$P_{MLE}(w_k \mid w_{k-N+1} \dots w_{k-1}) = \frac{C(w_{k-N+1} \dots w_{k-1} w_k)}{C(w_{k-N+1} \dots w_{k-1})}.$$
Exercise 2.1 (25 points)
(a) Complete the function train_ngram so that you can train a count-based $N$-gram language model on the data found in data/ted-train.txt and train this for $N=2,3,4$. 15 points
(b) Extend the function above so that it accepts a parameter k for optional add-$k$ smoothing. 10 points
[Datastructure hint] If you store the smoothed language model in a naive manner (that is, if you store all the numbers separately) your datastructure will get huge! If $V$ is the vocabulary then the smoothed bigram model assigns probabilities to $|V|^2$ entries. If $|V|$ is around 80k, the naive way requires you to store more than 6 billion floats. Yet almost all of these are actually just $P(w_n|w_{n-1}) = \frac{k}{N + k|V|}$, with $k$ the value with which you smooth and $N=C(w_{n-1})$. Think about how you can use this fact to make your model work in practice.
[Python hint] The collections library has another useful datastructure: the defaultdict. Some example uses:
End of explanation
"""
d = dict()
d["new key"]
"""
Explanation: Compare that to an ordinary dictionary:
End of explanation
"""
d = defaultdict(int)
d["new key"]
d = defaultdict(list)
d["new key"]
"""
Explanation: Other datatypes as default_factory:
End of explanation
"""
d1 = {k: "value" for k in range(1, 11)}
d = defaultdict(float, d1) # convert it to a defaultdict
print(d[5])
print(d[100])
"""
Explanation: Converting an already existing dict:
End of explanation
"""
d = defaultdict(10)
"""
Explanation: This doesn't work:
End of explanation
"""
d = defaultdict(lambda: 10)
d["new key"]
d = defaultdict(lambda: defaultdict(float))
d["new key"]
"""
Explanation: Use a lambda to make the number 10 "callable":
End of explanation
"""
import numpy as np
from collections import defaultdict, Counter

train_file = "ted-train.txt"
def read(fname, max_lines=np.inf):
"""
Reads in the data in fname and returns it as
one long list of words. Also returns a vocabulary in
the form of a word2index and index2word dictionary.
"""
data = []
    # w2i will automatically keep a counter to assign to new words
w2i = defaultdict(lambda: len(w2i))
i2w = dict()
start = "<s>"
end = "</s>"
with open(fname, "r") as fh:
for k, line in enumerate(fh):
if k > max_lines:
break
words = line.strip().split()
# assign an index to each word
for w in words:
i2w[w2i[w]] = w # trick
sent = [start] + words + [end]
data.append(sent)
return data, w2i, i2w
def train_ngram(data, N, k=0):
"""
Trains an n-gram language model with optional add-k smoothing
and additionaly returns the unigram model
:param data: text-data as returned by read
:param N: (N>1) the order of the ngram e.g. N=2 gives a bigram
:param k: optional add-k smoothing
:returns: ngram and unigram
"""
ngram = defaultdict(Counter) # ngram[history][word] = #(history,word)
unpacked_data = [word for sent in data for word in sent]
unigram = defaultdict(float, Counter(unpacked_data)) # default prob is 0.0
## YOUR CODE HERE ##
return ngram, unigram
data, w2i, i2w = read(train_file)
# bigram, unigram = train_ngram(data, N=2, k=0)
# bigram_smoothed, unigram_smoothed = train_ngram(data, N=2, k=1)
data[2]
"""
Explanation: Clever use of a defaultdict can be the solution to the problem of storing the smoothed $N$-gram pointed out above:
ngram = defaultdict(lambda: k/(N + k*V), ngram)
The following function is given:
End of explanation
"""
from random import random
P = [0.2,0.5,0.2,0.1]
def sample(P):
u = random() # uniformly random number between 0 and 1
p = 0
    for i, p_i in enumerate(P):
        p += p_i
        if p > u:
            return i # the first i s.t. p1 + ... + pi > u
print(sample(P))
print(Counter([sample(P) for i in range(1000)])) # check to see if the law of large numbers is still true
"""
Explanation: Exercise 2.2 (5 points)
You can use an N-gram language model to generate text. The higher the order N, the better your model will be able to capture the long-range dependencies that occur in actual sentences and the better your chances are at generating sensible text. But beware: sparsity of language data will quickly cause your model to reproduce entire lines from your training data; in such cases only one $w_k$ was observed for the histories $w_{k-N+1}\dots w_{k-1}$ in the entire training-set.
Complete the function generate_sent. It takes a language model lm and an order N and should generate a sentence by sampling from the language model.
[Hint] You can use the method of inverse transform sampling to generate a sample from a categorical distribution, $p_1\dots p_k$ such that $p_i \geq 0$ and $\sum_{i=1}^k p_i = 1$, as follows:
End of explanation
"""
def generate_sent(lm, N):
## YOUR CODE HERE ##
raise NotImplementedError
"""
Explanation: Inverse transform sampling in the words of Jurafsky and Martin:
Imagine all the words of the English language covering the probability space
between 0 and 1, each word covering an interval proportional to its frequency. We
choose a random value between 0 and 1 and print the word whose interval includes
this chosen value.
(J&M, section 4.3)
End of explanation
"""
### ANSWER ###
"""
Explanation: [Optional]
For how many of the histories $w_{k-N+1}\dots w_{k-1}$ is the number of continuations $w_n$ equal to one? Calculate the percentage of such cases for the different orders N.
And which history has the most possible continuations?
End of explanation
"""
import pandas as pd
import seaborn as sns
def plot_bigram_dist(word, bigram, smoothbigram, k=30):
d = bigram[word]
ds = smoothbigram[word]
# sort the probabilities
d_sort = sorted(d.items(), reverse=True, key=lambda t: t[1])[0:k]
ds_sort = sorted(ds.items(), reverse=True, key=lambda t: t[1])[0:k]
_, probs = zip(*d_sort)
smooth_ws, smooth_probs = zip(*ds_sort)
# make up for the fact that in the unsmoothed case probs is generally less than k long
probs = probs + (0,) * (k-len(probs))
w_data = pd.DataFrame({"w": smooth_ws * 2,
"P({}|w)".format(word): probs + smooth_probs,
"smoothing": ["unsmoothed"]*k + ["smoothed"]*k})
fig, ax = plt.subplots(figsize=(10,10))
plt.xticks(rotation=90)
g = sns.barplot(ax=ax, x="w", y="P({}|w)".format(word), hue="smoothing",
data=w_data, palette="Blues_d")
## YOUR CODE HERE ##
"""
Explanation: Exercise 2.3 (5 points)
Let $V$ denote our vocabulary. Recall that for any $w$ in $V$ bigram[w] defines a conditional probability $p(v|w)$ over $v$ in $V$. In the case of an unsmoothed bigram, $p(v|w) = 0$ for most $v\in V$, whereas in the smoothed bigram smoothing took care that $p(v|w) > 0$ for all $v$.
The function plot_bigram_dist(word, bigram, smoothbigram, k=30) shows $p(v|word)$ for the k most probable words $v$. One bar shows the probabilities in bigram and one in smoothbigram.
(a) Use this function to plot the distribution for at least two words w and answer the questions
* What is the effect of smoothing on the bigram distribution of frequent words?
* What is the effect in the case of infrequent words?
* Explain the difference between the two based on the raw counts of w
(b) Now experiment with $k$ much smaller than 1 (but greater than 0!)
* What are the effects?
[Hint] Remember that add-1 smoothing turns
$$P(w_n\mid w_{n-1}) = \frac{C(w_{n-1}w_{n})}{C(w_{n-1})}$$
into
$$P_{add-1}(w_n\mid w_{n-1}) = \frac{C(w_{n-1}w_{n}) + 1}{C(w_{n-1}) + |V|}.$$
What happens when $C(w_{n-1})$ is relatively big (similar in size to $|V|$)? And what if $C(w_{n-1})$ is small?
End of explanation
"""
## YOUR CODE HERE ##
"""
Explanation: YOUR ANSWERS HERE
Recall that if we have a sentence $w_1,\dots,w_n$ we can write
$$P(w_1\dots w_n) = P(w_1)P(w_2|w_1) \cdots P(w_n|w_1 \dots w_{n-1}) \approx P(w_1)P(w_2|w_1)\cdots P(w_{N-1}|w_1\dots w_{N-2})\prod_{i=N}^{n} P(w_i|w_{i-(N-1)}\dots w_{i-1})$$
where in the last step we make an $N$-gram approximation of the full conditionals.
For example, in the case of a bigram (N=2), the above expression reduces to
$$P(w_1 \dots w_n)\approx P(w_1)\prod_{i=2}^{n} P(w_i| w_{i-1}).$$
Exercise 2.4 (5 points)
The following sentences are taken from the training data. Use your unsmoothed unigram, bigram, and trigram language model to estimate their probabilities:
1. Every day was about creating something new .
2. In this machine , a beam of protons and anti-protons are accelerated to near the speed of light and brought
together in a collision , producing a burst of pure energy .
Repeat this with the smoothed (add-1) versions of the N-grams. What is the effect of smoothing on the probabilities?
End of explanation
"""
### YOUR CODE HERE ###
"""
Explanation: YOUR ANSWERS HERE
Exercise 2.5 (5 points)
The above sentences were taken from the training set, hence they will all have probability greater than 0. The big challenge for our language model are of course with sentence that contain unseen N-grams: if such an N-gram occurs our model immediately assigns the sentence probability zero.
The following three senteces are taken from the test set availlable in the file ted-test.txt. What probabilities do your smoothed and unsmoothed language models asign in this case?
1. Because these robots are really safe .
2. We have sheer nothingness on one side , and we have this vision of a reality that encompasses every
conceivable world at the other extreme : the fullest possible reality , nothingness , the simplest possible
reality .
End of explanation
"""
### ANSWER HERE ###
"""
Explanation: YOUR ANSWERS HERE
[Optional]
Optional What percentage of the sentences in the test set get assigned probability 0 under your smoothed and unsmoothed language models?
End of explanation
"""
### YOUR CODE HERE ###
"""
Explanation: Exercise 2.6 (5 points)
Perplexity is a very frequently used metric for evaluating probabilistic models such as language models. The perplexity (sometimes called PP for short) of a language model on a sentence is the inverse probability of the sentence, normalized by the number of words:
$$PP(w_1 \dots w_n) = P(w_1\dots w_n)^{-\frac{1}{n}}.$$
Here we can again approximate $P(w_1 \dots w_n)$ with N-gram probabilities, as above.
Note: $(x_1\cdots x_n)^{-\frac{1}{n}}$ is the reciprocal of the geometric mean of the numbers $x_1,\dots,x_n$ (equivalently, the geometric mean of the reciprocals $1/x_1,\dots,1/x_n$). The geometric mean is like the (regular) arithmetic mean, but with products instead of sums, and it is the more natural choice in the case of PP because behind $P(w_1\dots w_n)$ is a series of $n$ products (more here).
Compute the perplexity of the training sentences from exercise 2.4. What big difference between the probabilities of the sentences and the perplexities of the sentences do you notice? (A tiny numeric illustration of the formula follows below.)
End of explanation
"""
|
mespe/SolRad | collection/compare_cimis_cfsr/compare_before_after_clouds.ipynb | mit | from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
import pandas as pd
import matplotlib.pyplot as plt
from netCDF4 import Dataset
import netCDF4
plt.style.use('ggplot')
%matplotlib inline
plt.rcParams['figure.figsize'] = 16, 10
my_example_nc_file = 'pgbh01.gdas.20052010.nc'
fh = Dataset(my_example_nc_file, mode='r')
times = fh.variables['time']
time_np = netCDF4.num2date(times[:],times.units) - pd.offsets.Hour(8)
#print (fh.variables['ULWRF_L1_Avg_1'])
print (fh.variables['USWRF_L1_Avg_1'])
variables = {"SHTFL_L1_Avg_1" : "Sensible heat flux",
"DSWRF_L1_Avg_1" : "Downward shortwave radiation flux",
"CSDSF_L1_Avg_1" : "Clear sky downward solar flux",
"DSWRF_L1_Avg_1" : "Downward shortwave radiation flux",
"DLWRF_L1_Avg_1" : "Downward longwave radiation flux",
"CSULF_L1_Avg_1" : "Clear sky upward longwave flux",
"GFLUX_L1_Avg_1" : "Ground heat flux"}
"""
Explanation: <center> Earth's Energy Budget </center>
<img src = 'https://science-edu.larc.nasa.gov/EDDOCS/images/Erb/components2.gif'>
End of explanation
"""
downward_solar_flux_np = fh.variables["CSDSF_L1_Avg_1"][:, 0, 0]
cfsr = pd.DataFrame({'datetime': time_np, 'solar rad': downward_solar_flux_np})
cimis = pd.read_pickle('cimis_2005_2010.pkl')
def compare(title):
plt.plot(cfsr['datetime'][1:], cfsr['solar rad'][1:], label = "cfsr")
plt.plot(cimis['datetime'][4:][::6], cimis['solar rad'][4:][::6], label = "cimis")
plt.title(title)
plt.legend()
plt.rcParams['figure.figsize'] = 16, 10
"""
Explanation: Clear sky downward solar flux is considered to be equivalent to solar radiation after clouds
End of explanation
"""
compare('cfsr: downward longwave vs cimis: after clouds')
"""
Explanation: compare CIMIS (measured on earth) with Clear Sky (CFSR)
End of explanation
"""
cfsr['month'] = cfsr.datetime.dt.month
grouped = cfsr.groupby('month').mean()
grouped.reset_index(inplace=True)
cimis['month'] = cimis.datetime.dt.month
grouped2 = cimis.groupby('month').mean()
grouped2.reset_index(inplace=True)
x = grouped['month']
y = grouped['solar rad']
z = grouped2['solar rad']
ax = plt.subplot(111)
ax.bar(x+0.2, y,width=0.2,color='b',align='center')
ax.bar(x, z,width=0.2,color='g',align='center')
ax.legend(['cfsr','cimis'])
plt.title('average solar radiation accross different months for cfsr and cimis')
downward_shortwave = fh.variables['DSWRF_L1_Avg_1'][:, 0, 0]
downward_longwave = fh.variables['DLWRF_L1_Avg_1'][:, 0, 0]
upward_longwave = fh.variables['ULWRF_L1_Avg_1'][:, 0, 0]
upward_shortwave = fh.variables['USWRF_L1_Avg_1'][:, 0, 0]
"""
Explanation: Clear sky overestimates the CIMIS data
Why? Two possible reasons:
CFSR is a simulation (reanalysis) product, not a ground measurement.
The locations are different: the distance between the two points is 20 - 25 miles.
End of explanation
"""
plt.plot(cfsr['datetime'], fh.variables['CSDSF_L1_Avg_1'][:, 0, 0] + fh.variables['CSDLF_L1_Avg_1'][:, 0, 0] , label = "clear sky")
plt.plot(cfsr['datetime'], fh.variables['DSWRF_L1_Avg_1'][:, 0, 0] + fh.variables['DLWRF_L1_Avg_1'][:, 0, 0] , label = "down")
plt.title('clear sky and downward wave comparison')
plt.legend()
plt.rcParams['figure.figsize'] = 16, 10
"""
Explanation: Maybe clear sky is after clouds and downward waves are before clouds?
End of explanation
"""
|
hanhanwu/Hanhan_Data_Science_Practice | sequencial_analysis/try_poem_generator.ipynb | mit | import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.layers import RNN
from keras.utils import np_utils
sample_poem = open('sample_sonnets.txt').read().lower()
sample_poem[77:99]
"""
Explanation: LSTM Poem Generator
I'm trying to generate poems in two ways:
Method 1 - Character based sequence generation with LSTM
Method 2 - Word based sequence generation with LSTM
Download the sonnets text from : https://github.com/pranjal52/text_generators/blob/master/sonnets.txt
End of explanation
"""
characters = sorted(list(set(sample_poem)))
n_to_char = {n:char for n, char in enumerate(characters)} # store characters and their index
char_to_n = {char:n for n, char in enumerate(characters)}
print(n_to_char[7])
print(n_to_char[9])
X = []
y = []
total_len = len(sample_poem)
seq_len = 100 # each time we choose 100 character as a sequence and predict the next character after the sequence
for i in range(total_len - seq_len):
seq = sample_poem[i:i+seq_len]
label = sample_poem[i+seq_len]
X.append([char_to_n[char] for char in seq])
y.append(char_to_n[label])
# LSTM acceptable format, (number of sequneces(batch size), sequnece length (timesteps), number of features)
X_modified = np.reshape(X, (len(X), seq_len, 1))
X_modified = X_modified / float(len(characters)) # normalize the value
y_modified = np_utils.to_categorical(y) # convert to one-hot format, there are 36 distinct characters in total
print(X_modified.shape)
print(y_modified[4:10])
model = Sequential()
model.add(LSTM(700, input_shape=(X_modified.shape[1], X_modified.shape[2]), return_sequences=True))
model.add(Dropout(0.2)) # dropout is used for regularization
model.add(LSTM(700, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(700))
model.add(Dropout(0.2))
model.add(Dense(y_modified.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X_modified, y_modified, epochs=10, batch_size=100)
model.save_weights('poem_generator_gigantic.h5') # save weights, so that later we can use without re-running the model
model.load_weights('poem_generator_gigantic.h5')
new_poem_lst = []
for j in range(77, 99): # randomly choose some records and predict the sequence (generate the poem)
string_mapped = X[j]
full_string = [n_to_char[value] for value in string_mapped]
for i in range(10): # predict the next 10 character
x = np.reshape(string_mapped,(1,len(string_mapped), 1))
x = x / float(len(characters))
# predict the next character
pred_index = np.argmax(model.predict(x, verbose=0))
seq = [n_to_char[value] for value in string_mapped]
full_string.append(n_to_char[pred_index])
# predicted character will be added to support the next prediction
string_mapped.append(pred_index)
string_mapped = string_mapped[1:len(string_mapped)]
new_poem_lst.extend(full_string)
generated_poem = ''.join(new_poem_lst)
print(generated_poem)
"""
Explanation: Method 1 - Character Based Poem Generation
End of explanation
"""
words = sorted(list(set(sample_poem.split())))
n_to_word = {n:word for n, word in enumerate(words)} # store characters and their index
word_to_n = {word:n for n, word in enumerate(words)}
print(n_to_word[7])
print(n_to_word[9])
X = []
y = []
all_words = sample_poem.split()
total_len = len(all_words)
seq_len = 100 # each time we choose 100 character as a sequence and predict the next character after the sequence
for i in range(total_len - seq_len):
seq = all_words[i:i+seq_len]
label = all_words[i+seq_len]
X.append([word_to_n[word] for word in seq])
y.append(word_to_n[label])
# LSTM acceptable format, (number of sequneces(batch size), sequnece length (timesteps), number of features)
X_modified = np.reshape(X, (len(X), seq_len, 1))
X_modified = X_modified / float(len(words)) # normalize the value
y_modified = np_utils.to_categorical(y) # convert to one-hot format, there are 36 distinct characters in total
print(X_modified.shape)
print(y_modified[4:10])
model = Sequential()
model.add(LSTM(700, input_shape=(X_modified.shape[1], X_modified.shape[2]), return_sequences=True))
model.add(Dropout(0.2)) # dropout is used for regularization
model.add(LSTM(700, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(700))
model.add(Dropout(0.2))
model.add(Dense(y_modified.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X_modified, y_modified, epochs=10, batch_size=100)
model.save_weights('poem_generator_gigantic_word.h5')
model.load_weights('poem_generator_gigantic_word.h5')
new_poem_lst = []
for j in range(77, 99): # randomly choose some records and predict the sequence (generate the poem)
string_mapped = X[j]
full_string = [] # different from character based, here not recording the original sequence
for i in range(10): # predict the next 10 character
x = np.reshape(string_mapped,(1,len(string_mapped), 1))
x = x / float(len(words))
# predict the next character
pred_index = np.argmax(model.predict(x, verbose=0))
seq = [n_to_word[value] for value in string_mapped]
full_string.append(n_to_word[pred_index])
# predicted character will be added to support the next prediction
string_mapped.append(pred_index)
string_mapped = string_mapped[1:len(string_mapped)]
new_poem_lst.extend(full_string)
generated_poem = ' '.join(new_poem_lst)
print(generated_poem)
"""
Explanation: Observation...
I guess those readable words came from the original poem; they served as the testing data.
Method 2 - Word Based Poem Generation
Simply map words to index, without tokenizing
End of explanation
"""
|
dhercher/state-farm | exploratory-analysis/dylan-explore-data.ipynb | mit | # Sample Data Raw
sample_df = pd.read_csv('../raw_data/sample_submission.csv')
print len(sample_df)
sample_df.head(1)
col_map = {
'c0' : 'safe driving',
'c1' : 'texting - right',
'c2' : 'talking on the phone - right',
'c3': 'texting - left',
'c4': 'talking on the phone - left',
'c5': 'operating the radio',
'c6': 'drinking',
'c7': 'reaching behind',
'c8': 'hair and makeup',
'c9': 'talking to passenger'
}
sample_df.rename(columns=col_map).head(1)
"""
Explanation: Data Summary
1) Sample Submission
Sum(c0:c9) == 1
Represent the predicted likelihood of each class
Expected Samples: 79,726
End of explanation
"""
img_list_df = pd.read_csv('../raw_data/driver_imgs_list.csv')
print len(img_list_df)
img_list_df.head(1)
"""
Explanation: 2) Driver Images List
Training Dataset with proper classifications of some images
Train Size: 22,424
End of explanation
"""
!ls ../raw_data/imgs/
"""
Explanation: 3) Image Directory
Contains a train/ and test/ directory
End of explanation
"""
train_path = '../raw_data/imgs/train/'
train_files = listdir(train_path)
print len(train_files)
train_files
train_file_df = pd.DataFrame(columns=['file_path', 'class'])
for clas in train_files:
class_files = listdir(train_path+clas+'/')
if '.DS_Store' in class_files:
class_files.remove('.DS_Store')
# Create Dataframe with all files needed
train_file_df = train_file_df.append(
pd.DataFrame(zip([train_path+clas+'/'+f for f in class_files], [clas for _ in xrange(len(class_files))])
, columns=['file_path', 'class'])
)
train_file_df.head(2)
"""
Explanation: Train Directory:
- Contains a directory for c0 through c9 classes (10 directories)
Class Directory Summary
- c0 :: safe driving :: Length: 2490
- c1 :: texting - right :: Length: 2267
- c2 :: talking on the phone - right :: Length: 2317
- c3 :: texting - left :: Length: 2346
- c4 :: talking on the phone - left :: Length: 2326
- c5 :: operating the radio :: Length: 2312
- c6 :: drinking :: Length: 2325
- c7 :: reaching behind :: Length: 2002
- c8 :: hair and makeup :: Length: 1911
- c9 :: talking to passenger :: Length: 2129
End of explanation
"""
|
Cristianobam/UFABC | Unidade6-Atividades.ipynb | mit | import numpy as np
from math import pi
import matplotlib.pyplot as plot
%matplotlib notebook
x = np.arange(-5, 5.001, 0.0001)
y = (x**4)-(16*(x**2)) + 16
plot.plot(x,y,'c')
plot.grid(True)
"""
Explanation: Question 1: Plot the function $f(x) = x^4-16x^2+16$ for x from -5 to 5.
Add the grid.
Looking at the graph, for which values of x do we have f(x) = 0?
End of explanation
"""
print('Para a f(x) = ax^2 + bx+ c, diga os valores de a, b e c:\n')
a = float(input('Valor de a: '))
b = float(input('Valor de b: '))
c = float(input('Valor de c: '))
delta = b**2 - 4*a*c
xmax = (-b)/(2*a)
x = np.arange(xmax-4, xmax+4.001, 0.001)
y = a*(x**2) + b*x + c
plot.plot(x, y, 'c')
plot.grid(True)
"""
Explanation: Question 2: Write a program that asks the user to:
1. enter a number a,
2. enter a number b,
3. enter a number c.
Then, your program should show the user the graph of the function $f(x) = ax^2 + bx+ c$.
The difficulty in this exercise is choosing a domain over which to plot $f$. Make that choice so that the parabola ends up centered.
End of explanation
"""
t = np.arange(0, 2*pi + 0.001, 0.001)
x = 0 + 2*np.sin(t)
y = 0 + 2*np.cos(t)
plot.plot(x, y, 'c')
plot.axis('equal')
t = np.arange(0, 2*pi+0.001, 0.001)
x = 2+2*np.sin(t)
y = 2+2*np.cos(t)
plot.plot(x, y, 'c')
plot.axis('equal')
plot.grid(True)
"""
Explanation: Question 3: Make a plot with two circles:
one with radius 2 and center (0,0)
and another with radius 2 and center (2,2).
At which points do they intersect? (Add the grid to help.)
End of explanation
"""
t = np.arange(0, 2*pi+0.001, 0.001)
for r in np.arange(1, 13.5, 0.5):
x = r*np.sin(t)
y = r*np.cos(t)
plot.plot(x, y, 'c')
plot.axis('equal')
plot.grid(True)
"""
Explanation: Question 4: Write a program that generates the plot below (25 concentric circles with radii $1, 1.5, \dots, 13$).
<img src="http://tidia4.ufabc.edu.br/access/content/group/8398384e-4091-4504-9c21-eae9a6a24a61/Unidade%206/circles.png">
Hint: use the for command.
End of explanation
"""
|
muxiaobai/CourseExercises | python/kaggle/data-visual/plot&seaborn.ipynb | gpl-2.0 | sns.countplot(reviews['points'])
#reviews['points'].value_counts().sort_index().plot.bar()
plt.show()
sns.kdeplot(reviews.query('price < 200').price)
#reviews[reviews['price'] < 200]['price'].value_counts().sort_index().plot.line()
plt.show()
# ๅบ็ฐ้ฏ้ฝฟ็ถ
reviews[reviews['price'] < 200]['price'].value_counts().sort_index().plot.line()
plt.show()
#ไธคไธช็ฑปๅซ็ๅ
ณ็ณป
sns.kdeplot(reviews[reviews['price'] < 200].loc[:, ['price', 'points']].dropna().sample(5000))
plt.show()
sns.distplot(reviews['points'], bins=10, kde=False)
#reviews[reviews['price'] < 200]['price'].plot.hist()  # the corresponding pandas histogram
plt.show()
"""
Explanation: sns.countplot(), sns.kdeplot() (kernel density estimation), sns.jointplot(), sns.boxplot(), sns.violinplot()
End of explanation
"""
sns.jointplot(x='price', y='points', data=reviews[reviews['price'] < 100])
plt.show()
sns.jointplot(x='price', y='points', data=reviews[reviews['price'] < 100], kind='hex',
gridsize=20)
plt.show()
sns.jointplot(x='price', y='points', data=reviews[reviews['price'] < 100], kind='reg')
plt.show()
sns.jointplot(x='price', y='points', data=reviews[reviews['price'] < 100], kind='kde',
gridsize=20)
plt.show()
df = reviews[reviews.variety.isin(reviews.variety.value_counts().head(5).index)]
sns.boxplot(x='variety', y='points', data=df)
plt.show()
"""
Explanation: jointplot with its corresponding kind options: kind=scatter/reg/hex/kde
End of explanation
"""
sns.violinplot( x='variety',y='points',data=reviews[reviews.variety.isin(reviews.variety.value_counts()[:5].index)])
plt.show()
"""
Explanation: Red Blend ๆฏChardonnay varietyๅพๅๆด้ซไธ็น
End of explanation
"""
|
andymccurdy/redis-py | docs/examples/set_and_get_examples.ipynb | mit | import redis
r = redis.Redis(decode_responses=True)
r.ping()
"""
Explanation: Basic set and get operations
Start off by connecting to the redis server
To understand what decode_responses=True does, refer back to this document
End of explanation
"""
r.set("full_name", "john doe")
r.exists("full_name")
r.get("full_name")
"""
Explanation: The most basic usage of set and get
End of explanation
"""
r.set("full_name", "overridee!")
r.get("full_name")
"""
Explanation: We can override the existing value by calling the set method for the same key
End of explanation
"""
r.setex("important_key", 100, "important_value")
r.ttl("important_key")
"""
Explanation: It is also possible to set an expiration time (in seconds) on the key by using the setex method
End of explanation
"""
dict_data = {
"employee_name": "Adam Adams",
"employee_age": 30,
"position": "Software Engineer",
}
r.mset(dict_data)
"""
Explanation: A dictionary can be inserted like this
End of explanation
"""
r.mget("employee_name", "employee_age", "position", "non_existing")
"""
Explanation: To get multiple keys' values, we can use mget. If a non-existing key is also passed, Redis returns None for that key
End of explanation
"""
|
jamesfolberth/jupyterhub_AWS_deployment | notebooks/20Q/setup_sportsDataset.ipynb | bsd-3-clause | import csv
sports = [] # This is a python "list" data structure (it is "mutable")
# The file has a list of sports, one per line.
# There are spaces in some names, but no commas or weird punctuation
with open('data/SportsDataset_ListOfSports.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for index, row in enumerate( myreader ):
sports.append(' '.join(row) ) # the join() call merges all fields
# Make a look-up table: if you input the name of the sport, it tells you the index
# Also, print out a list of all the sports, to make sure it looks OK
Sport2Index = {}
for ind, sprt in enumerate( sports ):
Sport2Index[sprt] = ind
print('Sport #', ind,'is',sprt)
# And example usage of the index lookup:
print('The sport "', sports[7],'" has 0-based index', Sport2Index[sports[7]])
"""
Explanation: Loads the sports data
Run this script to load the data. Your job after loading the data is to make a 20 questions style game (see www.20q.net )
This dataset is a list of 25 sports, each rated (by Stephen) with a yes/no answer to each of 13 questions
Knowing the answers to all 13 questions uniquely identifies each sport. Can you do it in less than 13 questions? In fewer questions than the trained decision tree?
Read in the list of sports
There should be 25 sports. We can print them out, so you know what the choices are
End of explanation
"""
# this csv file has only a single row
questions = []
with open('data/SportsDataset_ListOfAttributes.csv','r') as csvfile:
myreader = csv.reader( csvfile )
for row in myreader:
questions = row
Question2Index = {}
for ind, quest in enumerate( questions ):
Question2Index[quest] = ind
print('Question #', ind,': ',quest)
# And example usage of the index lookup:
print('The question "', questions[10],'" has 0-based index', Question2Index[questions[10]])
"""
Explanation: Read in the list of questions/attributes
There were 13 questions
End of explanation
"""
YesNoDict = { "Yes": 1, "No": -1, "Unsure": 0, "": 0 }
# Load from the csv file.
# Note: the file only has "1"s, because blanks mean "No"
X = []
with open('data/SportsDataset_DataAttributes.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
data = [];
for col in row:
data.append( col or "-1")
X.append( list(map(int,data)) ) # integers, not strings
# This data file is listed in the same order as the sports
# The variable "y" contains the index of the sport
y = range(len(sports)) # this doesn't work
y = list( map(int,y) ) # Instead, we need to ask python to really enumerate it!
"""
Explanation: Read in the training data
The columns of X correspond to questions, and the rows correspond to the individual sports. The entries of y are the sport indices. The values of X are 1, -1 or 0 (see YesNoDict for the encoding)
End of explanation
"""
from sklearn import tree
# the rest is up to you
"""
Explanation: Your turn: train a decision tree classifier
End of explanation
"""
# up to you
"""
Explanation: Use the trained classifier to play a 20 questions game
You may want to use from sklearn.tree import _tree and 'tree.DecisionTreeClassifier' with commands like tree_.children_left[node], tree_.value[node], tree_.feature[node], and `tree_.threshold[node]'.
End of explanation
"""
|