markdown | code | path | repo_name | license | hash
---|---|---|---|---|---|
Alias Names
A dictionary of {zone: "alias"} pairs can be passed to replace the typical "ZONE_X" fieldnames of the ZoneBudget structured array with more descriptive names. | aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'}
zb = flopy.utils.ZoneBudget(cbc_f, zon, totim=[1097.], aliases=aliases)
zb.get_budget() | examples/Notebooks/flopy3_ZoneBudget_example.ipynb | bdestombe/flopy-1 | bsd-3-clause | 49748467090e8ecdad07a764d5effde8 |
Return the Budgets as a Pandas DataFrame
Set kstpkper and totim keyword args to None (or omit) to return all times.
The get_dataframes() method will return a DataFrame multi-indexed on totim and name. | zon = np.ones((nlay, nrow, ncol), int)
zon[1, :, :] = 2
zon[2, :, :] = 3
aliases = {1: 'SURF', 2:'CONF', 3: 'UFA'}
zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=None, totim=None, aliases=aliases)
df = zb.get_dataframes()
print(df.head())
print(df.tail()) | examples/Notebooks/flopy3_ZoneBudget_example.ipynb | bdestombe/flopy-1 | bsd-3-clause | 2df8475e0561680ec19170fee6b6e9b1 |
Slice the multi-index dataframe to retrieve a subset of the budget | dateidx1 = 1092.
dateidx2 = 1097.
names = ['RECHARGE_IN', 'WELLS_OUT']
zones = ['SURF', 'CONF']
df.loc[(slice(dateidx1, dateidx2), names), :][zones] | examples/Notebooks/flopy3_ZoneBudget_example.ipynb | bdestombe/flopy-1 | bsd-3-clause | 934ab8fef2068c42905b6b45e9d53cc5 |
Look at pumpage (WELLS_OUT) as a percentage of recharge (RECHARGE_IN) | dateidx1 = 1092.
dateidx2 = 1097.
zones = ['SURF']
# Pull out the individual records of interest
rech = df.loc[(slice(dateidx1, dateidx2), ['RECHARGE_IN']), :][zones]
pump = df.loc[(slice(dateidx1, dateidx2), ['WELLS_OUT']), :][zones]
# Remove the "record" field from the index so we can
# take the difference of the two DataFrames
rech = rech.reset_index()
rech = rech.set_index(['totim'])
rech = rech[zones]
pump = pump.reset_index()
pump = pump.set_index(['totim'])
pump = pump[zones] * -1
# Compute pumping as a percentage of recharge
pump_as_pct = (pump / rech) * 100.
pump_as_pct
# Use "slice(None)" to return all records
df.loc[(slice(dateidx1, dateidx2), slice(None)), :][zones]
# Or all times
df.loc[(slice(None), names), :][zones] | examples/Notebooks/flopy3_ZoneBudget_example.ipynb | bdestombe/flopy-1 | bsd-3-clause | 08b5c0bc5da18c8f87fd94cde095316f |
Pass start_datetime and timeunit keyword arguments to return a dataframe with a datetime multi-index | df = zb.get_dataframes(start_datetime='1970-01-01', timeunit='D')
dateidx1 = pd.Timestamp('1972-12-01')
dateidx2 = pd.Timestamp('1972-12-06')
names = ['RECHARGE_IN', 'WELLS_OUT']
zones = ['SURF', 'CONF']
df.loc[(slice(dateidx1, dateidx2), names), :][zones] | examples/Notebooks/flopy3_ZoneBudget_example.ipynb | bdestombe/flopy-1 | bsd-3-clause | 094a2bebbf5e4b1e65acba7ca7a20c4a |
Pass index_key to indicate which fields to use in the multi-index (default is "totim"; valid keys are "totim" and "kstpkper") | df = zb.get_dataframes(index_key='kstpkper')
df.head() | examples/Notebooks/flopy3_ZoneBudget_example.ipynb | bdestombe/flopy-1 | bsd-3-clause | 620b5c5cc8cbdd381336abb63415fae0 |
Write Budget Output to CSV
We can write the resulting recarray to a csv file with the .to_csv() method of the ZoneBudget object. | zb = flopy.utils.ZoneBudget(cbc_f, zon, kstpkper=[(0, 0), (0, 1096)])
zb.to_csv(os.path.join(loadpth, 'zonbud.csv'))
# Read the file in to see the contents
fname = os.path.join(loadpth, 'zonbud.csv')
try:
import pandas as pd
print(pd.read_csv(fname).to_string(index=False))
except ImportError:
with open(fname, 'r') as f:
for line in f.readlines():
print('\t'.join(line.split(','))) | examples/Notebooks/flopy3_ZoneBudget_example.ipynb | bdestombe/flopy-1 | bsd-3-clause | 59d0f210ede4b35022cdc11a21c2713b |
Bayesian Gaussian Mixture Model and Hamiltonian MCMC
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Bayesian_Gaussian_Mixture_Model"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
In this colab we explore sampling from the posterior of a Bayesian Gaussian Mixture Model (BGMM) using only TensorFlow Probability primitives.
Model
For $k\in\{1,\ldots,K\}$ mixture components, each of dimension $D$, we would like to model $i\in\{1,\ldots,N\}$ iid samples using the following Bayesian Gaussian Mixture Model:
$$\begin{align} \theta &\sim \text{Dirichlet}(\text{concentration}=\alpha_0)\\ \mu_k &\sim \text{Normal}(\text{loc}=\mu_{0k}, \text{scale}=I_D)\\ T_k &\sim \text{Wishart}(\text{df}=5, \text{scale}=I_D)\\ Z_i &\sim \text{Categorical}(\text{probs}=\theta)\\ Y_i &\sim \text{Normal}(\text{loc}=\mu_{z_i}, \text{scale}=T_{z_i}^{-1/2}) \end{align}$$
The scale arguments all have Cholesky semantics. We use this convention because it is the convention of TF Distributions (which itself uses it in part because it is computationally advantageous).
Our goal is to generate samples from the posterior
$$p\left(\theta, \{\mu_k, T_k\}_{k=1}^K \Big| \{y_i\}_{i=1}^N, \alpha_0, \{\mu_{0k}\}_{k=1}^K\right)$$
Notice that $\{Z_i\}_{i=1}^N$ is not present: we are only interested in random variables whose number does not scale with $N$ (and, conveniently, there is a TF distribution that marginalizes out the $Z_i$).
It is not possible to sample directly from this distribution owing to a computationally intractable normalization term.
Metropolis-Hastings algorithms are techniques for sampling from distributions that are intractable to normalize.
TensorFlow Probability offers a number of MCMC options, including several based on Metropolis-Hastings. In this notebook we use Hamiltonian Monte Carlo (tfp.mcmc.HamiltonianMonteCarlo). HMC is often a good choice because it converges rapidly, samples the state space jointly (as opposed to coordinate-wise), and leverages one of TF's strengths: automatic differentiation. (That said, sampling from the BGMM posterior might actually be better done with another approach, such as Gibbs sampling.) | %matplotlib inline
import functools
import matplotlib.pyplot as plt; plt.style.use('ggplot')
import numpy as np
import seaborn as sns; sns.set_context('notebook')
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(physical_devices) > 0:
tf.config.experimental.set_memory_growth(physical_devices[0], True) | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | 43d5b274066cfed554d2a9875468e212 |
Before actually building the model, we need to define a new type of distribution. From the model specification above, it is clear that we are parameterizing the MVN with an inverse covariance matrix, i.e., a precision matrix. To accomplish this in TF, we need to roll our own Bijector. This Bijector uses the forward transformation:
Y = tf.linalg.triangular_solve(tf.linalg.matrix_transpose(chol_precision_tril), X, adjoint=True) + loc.
The log_prob calculation is just its inverse, i.e.:
X = tf.linalg.matmul(chol_precision_tril, X - loc, adjoint_a=True).
Since all we need for HMC is log_prob, we avoid ever calling tf.linalg.triangular_solve (as would be the case for tfd.MultivariateNormalTriL). This is advantageous since tf.linalg.matmul is usually faster owing to better cache locality. | class MVNCholPrecisionTriL(tfd.TransformedDistribution):
"""MVN from loc and (Cholesky) precision matrix."""
def __init__(self, loc, chol_precision_tril, name=None):
super(MVNCholPrecisionTriL, self).__init__(
distribution=tfd.Independent(tfd.Normal(tf.zeros_like(loc),
scale=tf.ones_like(loc)),
reinterpreted_batch_ndims=1),
bijector=tfb.Chain([
tfb.Affine(shift=loc),
tfb.Invert(tfb.Affine(scale_tril=chol_precision_tril,
adjoint=True)),
]),
name=name) | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | d910692e952bc8c6f109059905d1c261 |
The tfd.Independent distribution turns independent draws of one distribution into a multivariate distribution with statistically independent coordinates. In terms of computing log_prob, this "meta-distribution" manifests as a simple sum over the event dimension(s).
Also notice that we took the adjoint ("transpose") of the scale matrix. This is because if the precision is the inverse covariance, i.e., $P=C^{-1}$ and $C=AA^\top$, then $P=BB^{\top}$ where $B=A^{-\top}$.
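The identity can be checked in one line (a short verification added here for clarity, not part of the original notebook):
$$P = C^{-1} = (AA^\top)^{-1} = A^{-\top}A^{-1} = \left(A^{-\top}\right)\left(A^{-\top}\right)^{\top} = BB^{\top}, \qquad B = A^{-\top}.$$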
Since this distribution is somewhat tricky, let's quickly verify that MVNCholPrecisionTriL behaves as we expect. | def compute_sample_stats(d, seed=42, n=int(1e6)):
x = d.sample(n, seed=seed)
sample_mean = tf.reduce_mean(x, axis=0, keepdims=True)
s = x - sample_mean
sample_cov = tf.linalg.matmul(s, s, adjoint_a=True) / tf.cast(n, s.dtype)
sample_scale = tf.linalg.cholesky(sample_cov)
sample_mean = sample_mean[0]
return [
sample_mean,
sample_cov,
sample_scale,
]
dtype = np.float32
true_loc = np.array([1., -1.], dtype=dtype)
true_chol_precision = np.array([[1., 0.],
[2., 8.]],
dtype=dtype)
true_precision = np.matmul(true_chol_precision, true_chol_precision.T)
true_cov = np.linalg.inv(true_precision)
d = MVNCholPrecisionTriL(
loc=true_loc,
chol_precision_tril=true_chol_precision)
[sample_mean, sample_cov, sample_scale] = [
t.numpy() for t in compute_sample_stats(d)]
print('true mean:', true_loc)
print('sample mean:', sample_mean)
print('true cov:\n', true_cov)
print('sample cov:\n', sample_cov) | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | f7dbe7096bcaf1816035c89babf584b9 |
The sample mean and covariance are close to the true mean and covariance, so the distribution appears to be implemented correctly. Now we will use MVNCholPrecisionTriL and tfp.distributions.JointDistributionNamed to specify the BGMM model. For the observational model, we use tfd.MixtureSameFamily to automatically integrate out the $\{Z_i\}_{i=1}^N$ draws. | dtype = np.float64
dims = 2
components = 3
num_samples = 1000
bgmm = tfd.JointDistributionNamed(dict(
mix_probs=tfd.Dirichlet(
concentration=np.ones(components, dtype) / 10.),
loc=tfd.Independent(
tfd.Normal(
loc=np.stack([
-np.ones(dims, dtype),
np.zeros(dims, dtype),
np.ones(dims, dtype),
]),
scale=tf.ones([components, dims], dtype)),
reinterpreted_batch_ndims=2),
precision=tfd.Independent(
tfd.WishartTriL(
df=5,
scale_tril=np.stack([np.eye(dims, dtype=dtype)]*components),
input_output_cholesky=True),
reinterpreted_batch_ndims=1),
s=lambda mix_probs, loc, precision: tfd.Sample(tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs),
components_distribution=MVNCholPrecisionTriL(
loc=loc,
chol_precision_tril=precision)),
sample_shape=num_samples)
))
def joint_log_prob(observations, mix_probs, loc, chol_precision):
"""BGMM with priors: loc=Normal, precision=Inverse-Wishart, mix=Dirichlet.
Args:
observations: `[n, d]`-shaped `Tensor` representing Bayesian Gaussian
Mixture model draws. Each sample is a length-`d` vector.
mix_probs: `[K]`-shaped `Tensor` representing random draw from
`Dirichlet` prior.
loc: `[K, d]`-shaped `Tensor` representing the location parameter of the
`K` components.
chol_precision: `[K, d, d]`-shaped `Tensor` representing `K` lower
triangular `cholesky(Precision)` matrices, each being sampled from
a Wishart distribution.
Returns:
log_prob: `Tensor` representing joint log-density over all inputs.
"""
return bgmm.log_prob(
mix_probs=mix_probs, loc=loc, precision=chol_precision, s=observations) | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | 23b099650f97a202951667343989a397 |
Generate 'training' data
For this demo, we sample some random data. | true_loc = np.array([[-2., -2],
[0, 0],
[2, 2]], dtype)
random = np.random.RandomState(seed=43)
true_hidden_component = random.randint(0, components, num_samples)
observations = (true_loc[true_hidden_component] +
random.randn(num_samples, dims).astype(dtype)) | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | 945fa2f21909f9bfdb5a8ba3f2e01900 |
Bayesian inference using HMC
Now that we have used TFD to specify our model and obtained some observed data, we have all the pieces needed to run HMC.
To run HMC, we use a partial application to "pin down" the things we do not want to sample. In this case that means we need only pin down observations. (The hyperparameters are already baked in to the prior distributions and are not part of the joint_log_prob function signature.) | unnormalized_posterior_log_prob = functools.partial(joint_log_prob, observations)
initial_state = [
tf.fill([components],
value=np.array(1. / components, dtype),
name='mix_probs'),
tf.constant(np.array([[-2., -2],
[0, 0],
[2, 2]], dtype),
name='loc'),
tf.linalg.eye(dims, batch_shape=[components], dtype=dtype, name='chol_precision'),
] | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | c061e268db85d1f3f974a7add73d275d |
Unconstrained representation
Hamiltonian Monte Carlo (HMC) requires the target log-probability function to be differentiable with respect to its arguments. Furthermore, HMC can exhibit dramatically higher statistical efficiency when the state space is unconstrained.
This means we have to work out two main issues when sampling from the BGMM posterior:
$\theta$ represents a discrete probability vector, i.e., it must satisfy $\sum_{k=1}^K \theta_k = 1$ and $\theta_k>0$.
$T_k$ represents an inverse covariance matrix, i.e., it must satisfy $T_k \succ 0$, that is, it must be positive definite.
To address these requirements we need to:
transform the constrained variables to an unconstrained space,
run the MCMC in the unconstrained space,
transform the unconstrained variables back to the constrained space.
As with MVNCholPrecisionTriL, we use Bijectors to transform the random variables to unconstrained space.
The Dirichlet is transformed to unconstrained space via the softmax function.
Our precision random variable is a distribution over positive semidefinite matrices. To remove the constraint on these, we use the FillTriangular and TransformDiagonal bijectors. They convert a vector into a lower-triangular matrix and ensure that the diagonal is positive. The former is useful because it lets us sample only $d(d+1)/2$ floats rather than $d^2$. | unconstraining_bijectors = [
tfb.SoftmaxCentered(),
tfb.Identity(),
tfb.Chain([
tfb.TransformDiagonal(tfb.Softplus()),
tfb.FillTriangular(),
])]
@tf.function(autograph=False)
def sample():
return tfp.mcmc.sample_chain(
num_results=2000,
num_burnin_steps=500,
current_state=initial_state,
kernel=tfp.mcmc.SimpleStepSizeAdaptation(
tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
step_size=0.065,
num_leapfrog_steps=5),
bijector=unconstraining_bijectors),
num_adaptation_steps=400),
trace_fn=lambda _, pkr: pkr.inner_results.inner_results.is_accepted)
[mix_probs, loc, chol_precision], is_accepted = sample() | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | c65f4e01f84b09f50f1da949e1fa0f4d |
Now we run the chain and print the posterior means. | acceptance_rate = tf.reduce_mean(tf.cast(is_accepted, dtype=tf.float32)).numpy()
mean_mix_probs = tf.reduce_mean(mix_probs, axis=0).numpy()
mean_loc = tf.reduce_mean(loc, axis=0).numpy()
mean_chol_precision = tf.reduce_mean(chol_precision, axis=0).numpy()
precision = tf.linalg.matmul(chol_precision, chol_precision, transpose_b=True)
print('acceptance_rate:', acceptance_rate)
print('avg mix probs:', mean_mix_probs)
print('avg loc:\n', mean_loc)
print('avg chol(precision):\n', mean_chol_precision)
loc_ = loc.numpy()
ax = sns.kdeplot(loc_[:,0,0], loc_[:,0,1], shade=True, shade_lowest=False)
ax = sns.kdeplot(loc_[:,1,0], loc_[:,1,1], shade=True, shade_lowest=False)
ax = sns.kdeplot(loc_[:,2,0], loc_[:,2,1], shade=True, shade_lowest=False)
plt.title('KDE of loc draws'); | site/ko/probability/examples/Bayesian_Gaussian_Mixture_Model.ipynb | tensorflow/docs-l10n | apache-2.0 | e9d2f515aaeb742095f4ea9a83bc6ae1 |
<h2>Linear Regression</h2> | c = ['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'model year', 'origin']
limit = int(3*cars.shape[0]/4)
X_train = cars.iloc[:limit,1:]
y_train = cars.iloc[:limit,0]
X_test = cars.iloc[limit:,1:]
y_test = cars.iloc[limit:,0]
lr = LinearRegression()
#Remember the [[]], is the data in 1 column
lr.fit(X_train, y_train)
predictions = lr.predict(X_test)
#mean_squared_error
mse = mean_squared_error(y_test, predictions)
# The coefficients
print('Coefficients: ', lr.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mse)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % lr.score(X_test, y_test))
plt.scatter(X_test["weight"], y_test, c='b')
plt.scatter(X_test["weight"], prediction, c='r')
plt.show() | AutoMPG/AutoMPG.ipynb | JavierVLAB/DataAnalysisScience | gpl-3.0 | 53b0c19d8bb23406b251c28ae45c5e59 |
<p>The variance score is very low because we used all the features.</p>
<p>For a better solution, we apply feature selection.</p>
<p>From the pair plot, we can see that the features 'cylinders', 'model year' and 'origin' don't show a clear correlation with the 'mpg' variable, so they are dropped.</p> | c = ['mpg','horsepower', "weight",'acceleration']
limit = int(3*cars.shape[0]/4)
X_train = cars[c].iloc[:limit,1:]
y_train = cars.iloc[:limit,0]
X_test = cars[c].iloc[limit:,1:]
y_test = cars.iloc[limit:,0]
lr2 = LinearRegression()
#Remember the [[]], is the data in 1 column
lr2.fit(X_train, y_train)
predictions = lr2.predict(X_test)
#mean_squared_error
mse = mean_squared_error(y_test, predictions)
# The coefficients
print('Coefficients: ', lr2.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mse)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % lr2.score(X_test, y_test))
feature = "weight"
plt.scatter(X_test[feature], y_test, c='b')
plt.scatter(X_test[feature], predictions, c='r')
plt.ylim([10,45])
plt.xlim([1500,4000])
plt.ylabel('MPG')
plt.xlabel(feature)
plt.show() | AutoMPG/AutoMPG.ipynb | JavierVLAB/DataAnalysisScience | gpl-3.0 | 7682d916f902e5a65f415f84ba21ec7d |
Date/Time data handling
Date and time data are inherently problematic. There are an unequal number of days in every month, an unequal number of days in a year (due to leap years), and time zones that vary over space. Yet information about time is essential in many analyses, particularly in the case of time series analysis.
The datetime built-in library handles temporal information down to the microsecond. | from datetime import datetime
now = datetime.now()
now
now.day
now.weekday() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | cb8bffd3b50768ed6108819e111a5dc9 |
In addition to datetime there are simpler objects for date and time information only, respectively. | from datetime import date, time
time(3, 24)
date(1970, 9, 3) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 899da0483ae6749c071437d07ddc9322 |
Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times: | my_age = now - datetime(1970, 1, 1)
my_age
print(type(my_age))
my_age.days/365 | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 8694a1a1052863ab32e1b927234eeb39 |
In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.
The International Maritime Organization's (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.
For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc. | segments = pd.read_csv("Data/AIS/transit_segments.csv")
segments.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 49667933f0a07b91a1dbfae091bb96c6 |
For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram: | segments.seg_length.hist(bins=500) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | fa8c938fbc933d3bd09301ea4faf2d0f |
Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful: | segments.seg_length.apply(np.log).hist(bins=500) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 010a165ce25a88b7a73518d99e832f22 |
We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime. | segments.st_time.dtype | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | c0df7a73db0861702ba41c8cb0b55814 |
Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information. | datetime.strptime(segments.st_time.iloc[0], '%m/%d/%y %H:%M')
The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically. | from dateutil.parser import parse
parse(segments.st_time.iloc[0])
We can convert all the dates in a particular column by using the apply method. | segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M')) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | db7b6d164eb186593344189936f15036 |
As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects. | pd.to_datetime(segments.st_time[:10]) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | d4d1b2d32491514ef7b41e49fdb993cc |
Pandas also has a custom NA value for missing datetime objects, NaT. | pd.to_datetime([None]) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 3ff6c4a3627e6b234ece23dc677e7a6b |
Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument.
The read_* functions now have an optional parse_dates argument that tries to convert any columns passed to it into datetime format upon import: | segments = pd.read_csv("Data/AIS/transit_segments.csv", parse_dates=['st_time', 'end_time'])
segments.dtypes | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 7a917cdf7262c0d5ea7a27fc2de8f5a5 |
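If to_datetime cannot infer a particular layout, the format specification used with strptime above can be passed explicitly. A minimal sketch (the date string here is just an illustrative value in the same layout as st_time):
pd.to_datetime('2/10/09 20:10', format='%m/%d/%y %H:%M')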
Columns of the datetime type have an accessor to easily extract properties of the data type. This will return a Series, with the same row index as the DataFrame. For example: | segments.st_time.dt.month.head()
segments.st_time.dt.hour.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 9ccf3198626476fabcbf79735b7d0eac |
This can be used to easily filter rows by particular temporal attributes: | segments[segments.st_time.dt.month==2].head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 247c3350964523e65a60b926c7dad480 |
In addition, time zone information can be applied: | segments.st_time.dt.tz_localize('UTC').head()
segments.st_time.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 1b6c2bf682ed42e0e5e7f8b5706212fb |
Merging and joining DataFrame objects
Now that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. In the data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table. | vessels = pd.read_csv("Data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()
[v for v in vessels.type.unique() if v.find('/')==-1]
vessels.type.value_counts() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 0da1b4ae07b6f00b84dc289e88056b4d |
The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.
In Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example: | df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=list(range(3))+list(range(3)),
score=np.random.random(size=6)))
df1
df2
pd.merge(df1, df2) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 1e420c2e6a187caa5d832c266b2db40e |
Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.
Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables. | pd.merge(df1, df2, how='outer') | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | e583108f3ff50e34eafb3df101db73b3 |
The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform right and left joins to include all rows of the right or left table (i.e. first or second argument to merge), but not necessarily the other.
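A right or left join can be sketched on the same toy tables; for example, a left join keeps every row of df1 and fills NaN where df2 has no match:
pd.merge(df1, df2, how='left')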
Looking at the two datasets that we wish to merge: | segments.head(1)
vessels.head(1) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 1191a1e0520f3aada6e6c76ab140d403 |
we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other. | segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')
segments_merged.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | db4a5decf299d899467ea7b20f1a8f8e |
In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.
Notice that mmsi field that was an index on the vessels table is no longer an index on the merged table.
Here, we used the merge function to perform the merge; we could also have used the merge method for either of the tables: | vessels.merge(segments, left_index=True, right_on='mmsi').head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 094437cd64ba43f74caad74dced7a450 |
Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them. | segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 2e6df2fbfdea8a86fae410848ff7e646 |
This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively.
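A quick sketch of that override, reusing the merge from the previous cell (the suffix labels here are purely illustrative):
pd.merge(vessels, segments, left_index=True, right_on='mmsi', suffixes=('_vessel', '_segment')).head()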
Concatenation
A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the existing rows or columns, respectively. In NumPy, this is done either with concatenate or the convenience "functions" c_ and r_: | np.concatenate([np.random.random(5), np.random.random(5)])
np.r_[np.random.random(5), np.random.random(5)]
np.c_[np.random.random(5), np.random.random(5)] | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 2521df56aa9d2e7ee7760ab57197d59a |
Notice that c_ and r_ are not really functions at all, since they perform an indexing operation rather than being called. They are actually class instances, but they behave mostly like functions. Don't think about this too hard; just know that they are there.
This operation is also called binding or stacking.
With Pandas' indexed data structures, there are additional considerations, as the overlap in index values between two data structures affects how they are concatenated.
Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index. | mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape
mb1.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | e706ca7f76bba84305d5def74c3bea31 |
Let's give the index and columns meaningful labels: | mb1.columns = mb2.columns = ['Count']
mb1.index.name = mb2.index.name = 'Taxon'
mb1.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 0a99318edf1b897e5744fae985d5d887 |
The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level. | mb1.index[:3]
mb1.index.is_unique | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 2c5a12745daabb708d8603719a80ea5e |
If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated: | pd.concat([mb1, mb2], axis=0).shape
However, the index is no longer unique, due to overlap between the two DataFrames. | pd.concat([mb1, mb2], axis=0).index.is_unique | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 1dc9e7c3b39d7a05128a60e4ef30fda4 |
Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames. | pd.concat([mb1, mb2], axis=1).shape
pd.concat([mb1, mb2], axis=1).head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 5c37d5acb75710dda508a12c95abd7fb |
If we are only interested in taxa that are included in both DataFrames, we can specify a join=inner argument. | pd.concat([mb1, mb2], axis=1, join='inner').head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 3de78675e0ccbc086ebb3b887e8c8a29 |
If we wanted to use the second table to fill values absent from the first table, we could use combine_first. | mb1.combine_first(mb2).head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 688bdf138eb765935780e8af0da188d6 |
We can also create a hierarchical index based on keys identifying the original tables. | pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 20d937c158808275fbf73faddb05c0af |
Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict, resulting in a "wide" format table. | pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 1d6504aca0cfb739fe3f1ee1864e4817 |
If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
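For example, a small sketch on the two microbiome tables from above (ignore_index=True discards the taxon index and produces a fresh integer range index):
pd.concat([mb1, mb2], axis=0, ignore_index=True).head()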
Exercise 1
In the data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame. | # Write solution here | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 64727bc2a00baf42b460b4803fe4ad8b |
Reshaping DataFrame objects
In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began | cdystonia = pd.read_csv("Data/cdystonia.csv", index_col=None)
cdystonia.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 726de42414175ea86d96ef5a06660066 |
This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in its own row, or in multiple columns representing multiple measurements.
The stack method rotates the data frame so that columns are represented in rows: | stacked = cdystonia.stack()
stacked | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 835ef8b917a120a560f6b4bc650d93a6 |
To complement this, unstack pivots from rows back to columns. | stacked.unstack().head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 4c7b0945da28dd2cea3f46ec27273ef0 |
For this dataset, it makes sense to create a hierarchical index based on the patient and observation: | cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()
cdystonia2.index.is_unique | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | f5522049abb61e6ac0e4073d8ab6ff54 |
If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs. | twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]
.drop_duplicates()
.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
.head())
cdystonia_wide | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 777535e81f68e2949cd1a91085445d5e |
A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking: | (cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']
.unstack('week').head()) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 237aa64c0acc34d4c7c9b230e8e68b9e |
To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one
or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to
the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments. | pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'],
var_name='obs', value_name='twsters').head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | dd7c2d385c53af10f72d4c56fd59f475 |
This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format, because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.
The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.
Pivoting
The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.
For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above: | cdystonia.pivot(index='patient', columns='obs', values='twstrs').head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 267b3e7b67e095660f5da21274dd5d20 |
If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table: | cdystonia.pivot('patient', 'obs') | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 8c969376da0663f603d9b0d89141ad5b |
A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function. | cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs',
aggfunc=max).head(20) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 79d0f4d0a86d4f5ab412922b564330a5 |
For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired. | pd.crosstab(cdystonia.sex, cdystonia.site) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | d5f94fb54220dd35e54866413655df32 |
Data transformation
There are a slew of additional operations for DataFrames that we would collectively refer to as "transformations" which include tasks such as removing duplicate values, replacing values, and grouping values.
Dealing with duplicates
We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name: | vessels.duplicated(subset='names')
vessels.drop_duplicates(['names']) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 07f77cb844759d491b678d4f9aef7655 |
Value replacement
Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset: | cdystonia.treat.value_counts() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 5f4fd26ff66c0cc7a446f57842eac864 |
A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes. | treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | d566411481536c961c5754fac01ed6f0 |
Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
An example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values: | vals = pd.Series([float(i)**10 for i in range(10)])
vals
np.log(vals) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 058f8abcfa474cd0326b4215632977a1 |
In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace. | vals = vals.replace(0, 1e-6)
np.log(vals) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 9815c4d95b9ca828b5efadf9be73f4b1 |
We can also perform the same replacement that we used map for with replace: | cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2}) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | cd6bd896b598450949c0adf00afe77ae |
Indicator variables
For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.
Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's restrict the data to the 5 most common types of ships: | top5 = vessels.type.isin(vessels.type.value_counts().index[:5])
top5.head(10)
vessels5 = vessels[top5]
pd.get_dummies(vessels5.type).head(10) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | aeccca798f726583520fefc5ced5e71c |
Categorical Data
Pandas provides a convenient dtype for representing categorical (factor) data, called category.
For example, the treat column in the cervical dystonia dataset represents three treatment levels in a clinical trial, and is imported by default as an object type, since it is a mixture of string characters. | cdystonia.treat.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 32b902ebc3f6171539c1c0ac176a93a3 |
We can convert this to a category type either by the Categorical constructor, or casting the column using astype: | pd.Categorical(cdystonia.treat)
cdystonia['treat'] = cdystonia.treat.astype('category')
cdystonia.treat.describe() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | d85c7bbd1fdfb48b4158450f9c5e94e5 |
By default the Categorical type represents an unordered categorical. | cdystonia.treat.cat.categories | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 60eb0a8a62504b119204452ad84087d8 |
However, an ordering can be imposed. The order is lexical by default, but will assume the order of the listed categories to be the desired order. | cdystonia.treat.cat.categories = ['Placebo', '5000U', '10000U']
cdystonia.treat.cat.as_ordered().head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | e4a8fdef6791ec3657dc7d43ca8e592c |
The important difference between the category type and the object type is that category is represented by an underlying array of integers, which is then mapped to character labels. | cdystonia.treat.cat.codes | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | bedd8214d1672b8bb731a7c4c0664112 |
Notice that these are 8-bit integers, which are essentially single bytes of data, making memory usage lower.
There is also a performance benefit. Consider an operation such as calculating the total segment lengths for each ship in the segments table (this is also a preview of pandas' groupby operation!): | %time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()
segments['name'] = segments.name.astype('category')
%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 545f2c491509976dba7390f031b5e271 |
Hence, we get a considerable speedup simply by using the appropriate dtype for our data.
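The memory claim made above can be checked directly as well; a small sketch (deep=True accounts for the Python string objects backing the object column):
print(segments.name.astype('object').memory_usage(deep=True))
print(segments.name.memory_usage(deep=True))  # the category-backed column is much smaller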
Discretization
Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!
Lets say we want to bin the ages of the cervical dystonia patients into a smaller number of groups: | cdystonia.age.describe() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | dca52b72649593cdb281db0a254b619d |
Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 80's: | pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]
The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False: | pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30] | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 04ce32536d6de93851304a6670521cf7 |
Since the data are now ordinal, rather than numeric, we can give them labels: | pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30] | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 94f2f82e28f118cb67d3affbfdc20be4 |
A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default: | pd.qcut(cdystonia.age, 4)[:30]
Alternatively, one can specify custom quantiles to act as cut points: | quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30] | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | d303c4066eeb6f0d13a98429465da4fb |
Note that you can easily combine discretization with the generation of indicator variables shown above: | pd.get_dummies(quantiles).head(10)
Permutation and sampling
For some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's permutation function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence. | new_order = np.random.permutation(len(segments))
new_order[:30] | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 12244f32212ab9a9299c01c3d91c2f09 |
Using this sequence as an argument to the take method results in a reordered DataFrame: | segments.take(new_order).head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 22721b0de1efec516999ee431dd8790c |
Compare this ordering with the original: | segments.head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | f9e0d1b45bdd33f29c332c96862cab47 |
For random sampling, DataFrame and Series objects have a sample method that can be used to draw samples, with or without replacement: | vessels.sample(n=10)
vessels.sample(n=10, replace=True) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 4b9db5e58388811cedad1f566365abbe |
Data aggregation and GroupBy operations
One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:
aggregation, such as computing the sum or mean of each group, which involves applying a function to each group and returning the aggregated results
slicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting)
group-wise transformation, such as standardization/normalization | cdystonia_grouped = cdystonia.groupby(cdystonia.patient) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | e3e2a5a665c1dd6cfa7d1f8bf31a5dbc |
This grouped dataset is hard to visualize | cdystonia_grouped | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 17b10ef6b818d4d575f68622893959af |
However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups: | for patient, group in cdystonia_grouped:
print('patient', patient)
print('group', group) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 3e6451f596304cf9007ab31bf573c0cf |
A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
<div align="right">*(figure taken from "Python for Data Analysis", p.251)*</div>
We can aggregate in Pandas using the aggregate (or agg, for short) method: | cdystonia_grouped.agg(np.mean).head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 12e3d25cdd56eae0fbf7d6d7f899e057 |
Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate string variables, these columns are simply ignored by the method.
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean: | cdystonia_grouped.mean().head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 86d988b128528df08d8a353c6cc2883d |
The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation: | cdystonia_grouped.mean().add_suffix('_mean').head()
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5) | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | b869a25062d160ef4916da330bfd59ca |
If we wish, we can easily aggregate according to multiple keys: | cdystonia.groupby(['week','site']).mean().head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 2498168a47fa3aec2f8110e9e93ab33f |
Alternately, we can transform the data, using a function of our choice with the transform method: | normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 665109e69145d0aace6bffd3f45ef2ab |
It is easy to do column selection within groupby operations, if we are only interested split-apply-combine operations on a subset of columns: | cdystonia_grouped['twstrs'].mean().head()
# This gives the same result as a DataFrame
cdystonia_grouped[['twstrs']].mean().head() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | f2c009e4f57f82ef605a5bb5a7ff43ea |
If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed: | chunks = dict(list(cdystonia_grouped))
chunks[4] | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 55a5b039a26595b1ba3dfed6e36bc507 |
By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way: | grouped_by_type = cdystonia.groupby(cdystonia.dtypes, axis=1)
{g:grouped_by_type.get_group(g) for g in grouped_by_type.groups} | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 4f0f01145c8703e1950284b00da7d8a2 |
It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index: | cdystonia2.head(10)
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean() | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | 00c4204fbbb2f86cc57f2aca7e863b5d |
Apply
We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine the results into a DataFrame.
The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call. | def top(df, column, n=5):
return df.sort_values(by=column, ascending=False)[:n] | Homework/01 - Pandas and Data Wrangling/temp/Data Wrangling with Pandas.ipynb | Merinorus/adaisawesome | gpl-3.0 | ae1a8447223eaeafee7916d33c2235e3 |
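To see it in action, a small sketch applying top to the grouped transit segments (using the name and seg_length columns that appear earlier in this section; this usage line is not part of the original cell):
segments.groupby('name').apply(top, column='seg_length', n=3).head(10)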